Good morning. A significant hurdle for the successful (or otherwise) adoption of generative AI involves cracking the enterprise.

In its latest effort to do just that, Anthropic yesterday announced Claude for Enterprise, a plan that’s designed specifically to serve massive corporate clients. We’ll see if it sticks.

— Ian Krietzberg, Editor-in-Chief, The Deep View
AI for Good: NASA’s self-driving aircraft
Source: NASA
Somewhere within NASA is its Convergent Aeronautics Solutions project, which is focused on researching and implementing innovations in the world of aviation. One component of that involves attempts at autonomous aircraft.

What happened: Similar to self-driving cars, self-flying aircraft need to be trained on tons of data in order to gain an understanding of their environment.

To this end, NASA developed a sensor pod it calls the Airborne Instrumentation for Real-world Video of Urban Environments (AIRVUE). The pod will be used to collect “large, diverse, and accessible visual datasets of weather and other obstacles.” NASA researchers recently tested the pod in a flight that took off from its Kennedy Space Center in Florida.
Once the data is gathered and refined, NASA plans to make it available to anyone who needs it, be they commercial companies or research projects. NASA researchers said that accessible datasets drive innovation, but “we haven’t seen open datasets like this in aviation.”
Meet your new AI assistant for work

Sana AI is a knowledge assistant that helps you work faster and smarter. You can use it for everything from analyzing documents and drafting reports to finding information and automating repetitive tasks.

Integrated with your apps, capable of understanding meetings and completing actions in other tools, Sana AI is the most powerful assistant on the market.

Try it for free
UK says Microsoft’s quasi-merger is okay
Source: Unsplash
The U.K.’s Competition and Markets Authority (CMA) this week cleared Microsoft’s quasi-acquisition of AI startup Inflection, saying it doesn’t raise competition concerns.

The context: Over the past few months, we’ve seen a lot of semi-acquisitional behavior from Big Tech. So much so, in fact, that it gained a term: acquihire.

The important point: Although the CMA will not be pursuing a further investigation of Microsoft’s partnership with Inflection — noting that Inflection isn’t a strong competitor to Microsoft — the competition watchdog did classify the deal as a merger.

The CMA’s executive director, Joel Bamford, said that the “transfer of employees, coupled with other tactical arrangements, mean that two enterprises are no longer distinct.” Thus, the CMA has the authority to review the arrangement, which means that similar deals made by Big Tech will likely fall under regular CMA scrutiny.

The CMA has ongoing investigations into the multi-billion-dollar partnerships between Anthropic and Amazon, and between Anthropic and Google.
Lately is the first Deep Social Platform, powered by Neuroscience-Driven AI™, built to transform your social marketing.

Lately keys into the ideal messaging for any target audience, turns your existing content into high-performing social posts, AND populates your social media calendar, optimizing for peak performance times.

Generate the most attention from the right people today - at no cost.
Tech’s sea of red (The Information).
Academic publisher Wiley set to earn $44m from AI rights deals, confirms 'no opt-out' for authors (The Bookseller).
Nvidia market cap decline extends after report of antitrust probe by the Justice Department (BI).
Amazon-backed Anthropic rolls out Claude AI for big business (CNBC).
US antitrust trial targets Google's digital ad business (Reuters).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Safe Superintelligence raises $1 billion
Source: Created with AI by The Deep View
The three-month-old startup Safe Superintelligence (SSI) — founded by OpenAI co-founder Ilya Sutskever — has raised $1 billion in funding. Investors include Andreessen Horowitz and Sequoia.

The details: According to Reuters, the startup — which currently has 10 employees — plans to use the money to buy compute and expand its team.

Unnamed sources told Reuters that the company is now valued at $5 billion (Elon Musk’s xAI is valued at $24 billion; Anthropic at $18.4 billion and OpenAI at $86 billion, for now). SSI’s goal is to build a singular product: a superintelligence; it has said that this singular focus will enable it to avoid distracting and dangerous product cycles.
The context: Sutskever ran OpenAI’s Superalignment team, which was dismantled after he departed the company this year.

A believer in scaling laws, he told Reuters that he would approach scaling differently than OpenAI, but didn’t get into specifics.

"Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?" he said. "Some people can work really long hours and they'll just go down the same path faster. It's not so much our style. But if you do something different, then it becomes possible for you to do something special."
There has been increasing research, meanwhile, into both the negative implications of scaling datasets and the diminishing returns that scaling LLMs is generating. The superintelligent technology Sutskever is trying to develop remains a scientific hypothetical.
Chatbots can implant false memories in people
Source: Created with AI by The Deep View
Human memory may well be a powerful force, but it is also fallible and highly malleable. Research into memory has shown that our memories are not complete files sitting in our brains that we can choose to play back at will. Instead, recalling a past event is an active process that requires the reconstruction of that event.

In creating and recalling memories, the brain first encodes information, then stores it, and then, when needed, retrieves that encoded information.

Memory expert Dr. Elizabeth Loftus has said that “new information, new ideas, new thoughts, suggestive information, misinformation can enter people's conscious awareness and cause a contamination, a distortion, an alteration in memory.” This reality of human memory has had many implications for our justice system; as of 2020, 69% of DNA exonerations involved wrongful convictions that resulted from eyewitness misidentification.
MIT researchers recently decided to specifically study the intersection of judicially related false memory formation and generative AI. They found that GenAI chatbots “significantly increased” false memory formation.

The details: The study involved 200 participants who viewed a brief CCTV video of a robbery. Following the video, they were split randomly into four groups: control, survey-based, pre-scripted chatbot and generative chatbot.

The control condition tested immediate recall; the survey included 25 yes/no questions, five of which were misleading; the pre-scripted chatbot functioned similarly to the survey, though in chatbot form; and the generative chatbot affirmed incorrect answers. An example of a misleading question: “What kind of gun was used at the crime scene?” The weapon was, in fact, a knife.
The results: Compared to the other interventions, and to the control, interactions with the generative chatbot induced significantly more false memories. 36.4% of users’ responses to the generative chatbot were misled, and the average number of false memories in this category was three times higher than in the control.

After a week, participants in the generative category remained equally confident in their false memories, while control and survey participants’ confidence dropped. The researchers noted that the propensity of chatbots to accidentally produce false or otherwise hallucinatory information significantly amplifies concerns about false memory induction.
Why it matters: The researchers said that the study has significant implications for the deployment of GenAI models in environments — including legal, clinical and educational — where memory accuracy is vital.

They did note, though, that so long as ethical considerations remain top-of-mind, this can have a positive impact: “For instance, chatbots and language models could be leveraged as tools to induce positive false memories or help reduce the impact of negative ones, such as in people suffering from post-traumatic stress disorder (PTSD).”

Source: MIT
The researchers note that the findings highlight the “need for caution.”

I am reminded of the increasing adoption by police departments of GenAI-based automated report technology; this research adds another dimension of risk to that environment: a hallucinatory chatbot could damage an officer’s recollection of events, which would have enormous implications for legal justice and trials.

It also casts a layer of doubt on the role of generative AI in health and in education; without confident, reliable methods of ensuring that chatbots either will not hallucinate, or will flag it when they do, this research indicates that the hasty adoption of these models presents fundamental risk.

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on Clearview’s facial recognition tech:

A quarter of you said it should be shut down, period. A quarter of you said you’d prefer it not to be used by anyone, governments or companies. Just shy of 40% of you said it could be good, so long as there is proper oversight.

Would you take a ride in a self-flying aircraft?
|
|
|