Good morning. OpenAI, according to The Information, is considering charging customers between $2,000 and $20,000 per month for access to advanced AI “agents.”

Yeah, that definitely seems like the appropriate response to DeepSeek, for sure, for sure.

To quote one Twitter user, “LOL, LMAO even.”

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

⚕️ AI for Good: A better kind of drug delivery
🏛️ Musk’s OpenAI lawsuit is going to trial
👁️🗨️ UK says Microsoft and OpenAI’s partnership is okay
🚁 Scale AI secures military contract in major step toward the era of ‘agentic warfare’
|
AI for Good: A better kind of drug delivery

Source: MIT
The method by which chemotherapy doses are calculated dates back more than a century, to a formula based on body surface area (BSA). The formula takes into account only a patient’s height and weight, and BSA itself can only be estimated; it can’t be directly measured.

But it has remained in use, according to this paper, for lack of “any better alternative.” Researchers at MIT are working on one.

The details: The researchers have developed a device designed to continuously monitor a number of patient-specific details, including drug infusion levels, the specifics of the patient’s body composition, organ toxicity levels and enzyme fluctuations related to a given drug.

The device, nicknamed CLAUDIA, functions as a closed-loop system, in which “drug concentrations can be continually monitored, and that information is used to automatically adjust the infusion rate of the chemotherapy drug and keep the dose within the target range.” This, according to MIT, enables the “personalization of the drug dose in a manner that considers circadian rhythm changes in the levels of drug-metabolizing enzymes, as well as … chemotherapy-induced toxicity of the organs that metabolize the drugs.” It does this by taking fresh blood samples every five minutes; the concentration of the drug in those samples is then fed to an algorithm that adjusts the drug infusion rate as needed.
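To make the contrast concrete: BSA-based dosing is a one-time calculation (commonly the Mosteller estimate, the square root of height in centimeters times weight in kilograms, divided by 3,600), while CLAUDIA re-evaluates the dose continuously. Here’s a minimal sketch of that feedback loop in Python. The five-minute sampling cadence and the target-range goal come from MIT’s description; the window values, gain and proportional-adjustment rule are illustrative assumptions, not the study’s published algorithm.

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """The static dosing input: estimated body surface area in m^2 (Mosteller formula)."""
    return math.sqrt(height_cm * weight_kg / 3600)

# --- Hypothetical closed-loop adjustment in the spirit of CLAUDIA ---
# Assumed values: the real system's target window, units and control rule
# are not specified in the newsletter item.
TARGET_LOW, TARGET_HIGH = 4.0, 6.0   # assumed target concentration window (e.g., ug/mL)
GAIN = 0.1                           # assumed proportional gain

def adjust_infusion_rate(rate_ml_hr: float, measured_conc: float) -> float:
    """Called after each blood sample (every five minutes, per MIT):
    nudge the infusion rate toward the target concentration window."""
    if TARGET_LOW <= measured_conc <= TARGET_HIGH:
        return rate_ml_hr                                  # in range: leave the dose alone
    target_mid = (TARGET_LOW + TARGET_HIGH) / 2
    error = (target_mid - measured_conc) / target_mid      # positive -> under-dosed
    return max(0.0, rate_ml_hr * (1 + GAIN * error))
```

The point of the loop is exactly what distinguishes it from BSA dosing: the rate is a function of what’s actually in the patient’s blood right now, not of two numbers measured once.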
Why it matters: Though it is early days, the system was validated in a study with animal subjects, and the results were strong enough to suggest that the system can be tuned for any drug in a clinical setting. The device was built using off-the-shelf materials.

“Recognizing the advances in our understanding of how drugs are metabolized, and applying engineering tools to facilitate personalized dosing, we believe, can help transform the safety and efficacy of many drugs,” MIT Professor Giovanni Traverso, the study’s senior author, said.
Supercharge Your Revenue Teams with 200+ Expert AI Prompts

Momentum is an enterprise listening platform for revenue teams that uses advanced AI agents to transform your customer conversations—emails, calls, support tickets—into actionable insights for revenue success.

Now, exclusively through Momentum Partners like Deepview, we’re excited to launch our Prompt Library—a curated collection of 200+ high-impact prompts engineered by our lead prompt expert to supercharge Sales, Customer Success, Marketing, RevOps, and Leadership teams.

Get started now to unlock these powerful tools and take your go-to-market strategy to the next level for free.
Musk gets denied; OpenAI lawsuit might be headed to trial

Source: C-Span
The news: In the latest from Elon Musk’s litigation front, a federal judge has denied Musk’s motion for an injunction that would have prevented OpenAI from converting into a for-profit organization.

The judge, noting that the “relief requested is extraordinary and rarely granted,” said Musk failed to meet the burden of proof.

The details: The suit, now filed twice by Musk, accuses Sam Altman and OpenAI of, among other things, taking advantage of Musk’s “altruism” by suckering him onto OpenAI’s founding board for his money before changing the startup’s mission.

The judge wrote that Musk provided no evidence to support his allegations of an antitrust arrangement between Microsoft and OpenAI. And when it comes to Musk’s push to stop OpenAI’s conversion into a for-profit company, his evidence revolved solely around email exchanges from the early days of OpenAI’s founding, exchanges that, though “highly suggestive,” do not meet the burden of proof for such an injunction.
However, “given the public interest at stake and potential for harm if a conversion contrary to law occurred,” the judge is prepared to expedite the trial to the fall of 2025, though Musk would have to drop all his allegations against OpenAI except the one related to its pending conversion.

The ruling comes a few weeks after Musk submitted an unsolicited offer to purchase OpenAI’s nonprofit arm for $97 billion, a move seemingly designed to further complicate OpenAI’s conversion by inflating the valuation of the nonprofit arm of the business.

While Musk tries to slow OpenAI down, the startup doesn’t have much choice in the matter; the terms of its $6.6 billion funding round last year stipulated that OpenAI convert to a for-profit within two years. If it has not done so within that time frame, investors can ask for their money back.
Summarization mishaps: Just a day after the LA Times debuted a generative AI tool called Insights, intended to provide summaries and alternative viewpoints for LA Times opinion columns, the chatbot offered up a defense of the Ku Klux Klan; it has since been removed from the article in question. Nice.

Trade War: The White House has announced a month-long pause on auto levies that, until yesterday, were subject to President Trump’s latest round of tariffs against Canada and Mexico. He is reportedly open to additional exemptions, though the White House has said that there will be no exemptions from the reciprocal tariffs Trump will impose on April 2.
The questions ChatGPT shouldn’t answer (The Verge).
How Meta and Google benefit from AI-created ads — made by other firms (The Information).
Researchers surprised to find less-educated areas adopting AI writing tools faster (Ars Technica).
Some DOGE staffers are drawing six-figure government salaries (Wired).
Supreme Court rejects Trump administration's bid to avoid paying USAID contractors (NBC News).
|
UK says Microsoft and OpenAI’s partnership is okay

Source: Microsoft
The news: Somewhat related to the above story, the U.K.’s Competition and Markets Authority (CMA) on Wednesday concluded a months-long investigation into the partnership between OpenAI and Microsoft, ruling that the partnership does not merit further scrutiny.

The details: The CMA began its investigation on the belief that, when it became OpenAI’s lead investor — and, let’s be honest, lifeline — in 2019, Microsoft obtained the ability to “materially influence OpenAI’s policy,” something Microsoft acknowledged during the course of the CMA’s investigation.

The CMA’s focus was to determine whether that material influence had shifted into “de facto control” over the startup; the investigation opened in the direct aftermath of Sam Altman’s firing and rapid re-hiring in November of 2023. The CMA said the investigation was complicated by how quickly the partnership between the two companies has evolved.
“Overall, taking into account all of the available evidence, particularly in light of recent developments in the Partnership which reduce OpenAI’s reliance on Microsoft for compute, the CMA does not believe that Microsoft currently controls OpenAI’s commercial policy, and instead exerts a high level of material influence over that policy,” the CMA wrote. Because of this, there are no competition concerns, and the investigation is over.

The landscape: The CMA has investigated and cleared several prominent partnerships in the AI sector, including those between Anthropic and Google, Anthropic and Amazon, and Microsoft and Inflection.
Scale AI secures military contract in major step toward the era of ‘agentic warfare’

Source: Unsplash
In generative artificial intelligence, companies have discovered a technological advancement that could have significant implications for warfare. And those same companies, eager to reap some financial rewards from their massive AI-related expenditures, have been increasingly turning to AI integrations for the military, and to the massive contracts that follow.

What happened: Scale AI on Wednesday said that it had secured a deal with the U.S. Department of Defense’s Defense Innovation Unit (DIU) for something called “Thunderforge,” an initiative designed to integrate AI into military operations.

The deal represents a multi-million-dollar contract, according to CNBC, though neither the DIU nor Scale AI disclosed the actual financial terms of the partnership.

The Thunderforge initiative, according to Scale, marks the “DoD’s first foray into integrating AI agents in and across military workflows to provide advanced decision-making support systems for military leaders.” The program will include a team of “global technology partners,” including Microsoft and Anduril, and will be used to automate workflows, conduct warfare simulations and (vaguely) support military decision-making. Microsoft will provide state-of-the-art large language models (LLMs) for the program.
The initiative “marks a decisive shift toward AI-powered, data-driven warfare,” according to a statement from the DIU. Statements from both the DIU and Scale focused heavily on opportunities to increase the speed at which the military can respond to threats.

Neither statement makes any mention of ethics or responsible deployment. Scale AI’s statement makes one mention of “human oversight,” something that is not mentioned at all in the DIU’s statement.

Neil Sahota, an AI advisor to the United Nations, told me that, while AI might offer “significant advantages” in streamlining operational efficiency, “its use in warfare (requires) meticulous consideration of ethical, legal and societal implications … Given the potential impact to people and society, we need robust governance frameworks to oversee AI applications used by the military, aiming to prevent unintended consequences and maintain human oversight over critical decisions.”

DIU director Doug Beck said that Thunderforge will enhance “the Joint Force’s ability to plan, adapt and respond to emerging challenges at machine speed — helping the warfighter to deter major conflict, or win if forced to fight.”

The system will initially be deployed in the U.S. Indo-Pacific Command and the U.S. European Command, and will scale from there. On a separate page, Scale AI says that “the era of agentic warfare is here.”

The trend: This is the latest evidence of an ongoing trend in which tech companies are quietly removing pledges that would prevent their tech from being used to augment weaponry, and in which companies including Google, Microsoft, OpenAI, Palantir and Anduril are increasingly working to offer defense-tech solutions to militaries around the world.

The ethics of it all: The conundrum of the ethics of warfare is one I have explored far more often in recent months than I ever anticipated. The ethical problems here are much the same as those associated with GenAI more generally: privacy, data security, hallucination, trust and a lack of explainability and traceability. Plus, there’s the far more significant problem of accountability for automated actions. In warfare, these issues are all massively heightened.

Dr. Elke Schwarz, a professor of political theory at Queen Mary University of London, told me that she is unsurprised by both the contract and the partnership of tech firms that received it. But she said that the current tech remains unreliable, prompting questions over genuine efficacy. “Most AI engineers tell me that at this stage of AI progress, they would not trust an AI agent to book their vacation or make an appointment,” she said, adding that it is unclear “why military leaders, who rely on the highest degree of accuracy for their decision making, would come to rely on a technology which is, at this stage of the state of AI, likely to be quite flawed and unreliable.”
This perspective echoes a 2022 West Point analysis by U.S. Army Cyber officers that identified a number of threats — data poisoning, evasion and reverse-engineering — to the deployment of AI: “First, a human must remain in the loop in any use of artificial intelligence,” they wrote. “And second, AI may not provide a strategic advantage for the United States in the era of great power competition, but we must continue to invest and encourage the ethical use of AI.”

Schwarz — who has previously discussed the phenomenon in which kill decisions become easier the farther a soldier is from their target — reiterated that the ethics of this integration are challenging.

“Imagine an AI agent makes the suggestion that there is hostile activity underway. What is the mandate for the military decision-maker? To what degree are they able to overrule any recommendation to resort to force? And if the system is flawed and makes a recommendation on erroneous data and civilians come to harm, who will carry the blame? Who can … be held accountable in a legal sense?”

— Dr. Elke Schwarz
|
Schwarz added that the timing of this investment in an “unproven and somewhat speculative” technology — during the seemingly daily slashing of government programs and employees by Elon Musk’s DOGE — “is interesting.”

“Given the vested financial interests, however (Peter Thiel's Founders Fund is a VC investor in Scale AI and, of course, Anduril), it is unsurprising,” she said. “It's an interesting illustration of the consolidation of all AI companies' interest in making defense their own playground.”

This is, of course, not the first deployment of AI in the military; Irina Raicu, the director of the Internet Ethics Program at the Markkula Center for Applied Ethics, noted that this move is in line with the DoD’s 2023 AI Adoption Strategy.

“Given the pervasiveness of AI, different kinds of AI, it would be naive to expect that AI tools would not be adopted in warfare, too,” she said. “I suspect that the level of AI literacy … in the military is low, just like it is among most people. And it's very important for those in the military to understand the limitations of those tools, which are often downplayed in pitches from vendors.”

This is an area in which strict, international regulation is needed.

My greatest concern — stemming from a lack of full transparency from those involved — centers on accountability, not just for mistaken violence, but for illegal (but intentional) acts of warfare.

As IBM said in 1979, “a computer can never be held accountable. Therefore, a computer must never make a management decision.”
💭 A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on Uber robotaxis:

23% of you would love to ride in one; 23% would never. 20% think it could be cool, and 19% wouldn’t turn one away.

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
|
|
|