Good morning. The New York Times’ copyright infringement lawsuit against OpenAI reached a critical turning point yesterday during a hearing on OpenAI’s motion to dismiss the case.

The two sides presented their arguments, according to Wired reporter Kate Knibbs, and by the end of it, the judge said that he had a “lot to think about and you'll get an opinion in due course.”

Whether the judge shoots the case down or allows it to proceed, his decision will have a massive bearing on the copyright future of AI.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🩺 AI for Good: Advanced diabetes diagnosis
🏛️ Biden signs executive order on AI infrastructure
🌎 Britain wants to become the leader in AI, it just needs you to adopt it first
AI for Good: Advanced diabetes diagnosis

Source: Unsplash
Today, around 38 million Americans (11.6% of the U.S. population) have diabetes. A further 97.6 million people (38% of the population) are prediabetic.

Traditionally, diabetes has been categorized into two groups: Type 1, which appears in childhood, and Type 2, which can develop later in life. But people who fall within the Type 2 category vary wildly, a fact that inspired Stanford researchers to develop an AI-based algorithm that processes data from continuous blood-glucose monitors to determine hyper-specific Type 2 sub-categories.

Such subclassification, according to the researchers, is vital in determining an individual’s risk of developing related conditions (liver, kidney and eye complications, for instance), as well as the efficacy of different drugs and treatment plans. In a study of 54 participants, the algorithm detected and identified metabolic subtypes — including insulin resistance and beta-cell deficiency — 90% of the time, with “greater accuracy than traditional metabolic tests.”
Why it matters: “It’s a tool that people can use to take preventative measures,” Dr. Michael Snyder, a professor of genetics who co-led the study, said in a statement. “If the levels trigger a prediabetes warning, for instance, dietary or exercise habits could be adjusted.”

Such information matters, according to Dr. Tracey McLaughlin, a professor of endocrinology, even if a person doesn’t develop diabetes, as “insulin resistance is a risk factor for a variety of other health conditions, like heart disease and fatty liver disease.”
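To give a flavor of the idea — not the Stanford team’s actual method, which isn’t detailed here — the sketch below shows how crude features might be pulled from a continuous glucose monitor (CGM) trace and mapped to a subtype flag. The feature definitions, thresholds and labels are all hypothetical, purely for illustration.

```python
# Illustrative sketch only: the study's real algorithm is not public in this
# newsletter. We mimic the concept of deriving metabolic-subtype signals from
# a CGM trace using simple, made-up features and thresholds.

def cgm_features(readings_mg_dl):
    """Summarize a CGM trace (mg/dL, one reading per interval) into crude features."""
    baseline = readings_mg_dl[0]
    peak = max(readings_mg_dl)
    peak_idx = readings_mg_dl.index(peak)
    # Intervals until glucose returns to within 15 mg/dL of baseline after the
    # peak -- a rough proxy for how quickly glucose is cleared.
    recovery = next(
        (i - peak_idx
         for i, v in enumerate(readings_mg_dl[peak_idx:], start=peak_idx)
         if v <= baseline + 15),
        len(readings_mg_dl) - peak_idx,
    )
    return {"peak": peak, "recovery_intervals": recovery}

def crude_subtype(features):
    """Hypothetical thresholds; a real classifier would be fit to clinical data."""
    if features["peak"] > 200 and features["recovery_intervals"] > 12:
        return "possible insulin resistance"
    if features["peak"] > 200:
        return "possible beta-cell deficiency"
    return "no flag"

# A toy post-meal trace: rise, peak, then gradual return toward baseline.
trace = [95, 110, 150, 205, 230, 220, 200, 180, 160, 140, 120, 105, 98, 96]
print(crude_subtype(cgm_features(trace)))
```

The real system reportedly works from far richer glucose-dynamics features than this, but the overall shape — time-series summary, then subtype assignment — is the same.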
This is the Biggest Breakthrough in Next-Gen Tech Since iPhone — last chance for a 20% bonus ends tonight!

Fun fact: Did you know Disney's princess IP alone has generated $46.4B in revenue!? Wild.

Here’s another fun fact: a next-gen tech entertainment company called Elf Labs has won 100+ trademark battles for characters like Cinderella, the Little Mermaid and Snow White — yes, really — and is using advanced proprietary tech to bring them to life right in your living room through next-gen VR (without clunky headsets).

Using unprecedented compression algorithms and patented tech, they’re creating hyper-realistic, AI-powered 3D worlds with beloved, billion-dollar characters.

Now get this — Elf Labs has opened up an investment opportunity to the public. The team has already done $6B+ in licensing deals in their careers, and they're launching three new franchises in 2025.

This is a limited opportunity: with major new announcements on the horizon, this is the last chance to become a part of this game-changing movement at this share price.

There’s less than 1 day left to invest in Elf Labs with a 20% bonus — lock it in now!
Employment in U.S. data centers increased by 60% between 2016 and 2023, according to the latest census data. That employment is uneven, however, with just five states — California, Texas, Florida, New York and Georgia — accounting for 40% of data center jobs.

The U.S. FTC and Department of Justice have backed Elon Musk’s legal fight against OpenAI, filing a statement of interest arguing that the court should decide on Musk’s motion for a preliminary injunction.
Displaced Los Angeles-area residents face spiking rents as authorities warn of price gouging (NBC News).
UK competition watchdog launches probe into Google search (Semafor).
Amazon races to transplant Alexa’s ‘brain’ with generative AI (FT).
Barings Law signs up 15,000 claimants against Google and Microsoft (ComputerWeekly).
OpenAI announces ChatGPT Tasks for automating future actions (Tom’s Guide).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Biden signs executive order on AI infrastructure

Source: Created with AI by The Deep View
President Joe Biden on Tuesday signed a new executive order designed to ramp up the scale of AI-enabling data center infrastructure in the U.S.

The details: The order directs the departments of Defense and Energy to select (and then lease) sites where the private sector can build and operate “gigawatt scale” AI data centers, as well as clean power facilities, all intended to minimize negative impacts on local communities and the environment.

The private companies that gain access to that land will have to foot the bill to build, own and operate the infrastructure. The order calls for “expeditious” permitting and environmental analysis of these sites so that construction can get moving quickly. It also specifically states that developers selected to build on these sites “will be required to bring online sufficient clean energy generation resources to match the full electricity needs of their data centers.”
The developers will be subject to certain obligations, including purchasing a sizable share of domestically produced chips, paying workers well, adhering to high lab security standards and ensuring that their data centers will not increase the cost of local electricity.

Biden, who is in his final days in office, said that America cannot take its AI lead for granted: “We will not let America be out-built when it comes to the technology that will define the future, nor should we sacrifice critical environmental standards and our shared efforts to protect clean air and clean water.”

This comes as the data center expansion continues at a pace so intense that several reports have warned the U.S. electric grid will soon be unable to meet the spike in demand. The carbon emissions associated with data centers, meanwhile, have worsened a public health crisis caused by poor air quality. A handful of the biggest tech companies in the world spent more than $200 billion on AI infrastructure in 2024, and plan to spend even more in 2025.
Britain wants to become the leader in AI, it just needs you to adopt it first

Source: Unsplash
The U.K., no longer bound by the rules and regulations of the European Union, is very publicly planning to take a markedly different approach to AI than the rest of its western peers. That approach centers on mass adoption, the first stage of a sweeping effort to become an (or the) global leader in artificial intelligence.

At the core of it all is the 50-point AI Action Plan announced Monday by Prime Minister Keir Starmer, a plan that “looks to how we can remove the barriers so we are getting those companies, so we are, not only attracting those companies to come invest in the U.K., but also to ensure that we have the capacity to develop the frontier models of the future,” according to Feryal Clark, Britain’s minister for AI and digital government.

“We want to be a country where AI flourishes, where AI develops, AI grows,” Clark told CNBC in an interview Tuesday. “It’s really important that we as the U.K. do our own thing when it comes to regulation … that we bake in that safety right at the beginning when models are being developed.”
But AI regulation — safety-related or otherwise — remains absent even at the proposal stage in the U.K., with Clark saying that the government plans to consult with businesses before drafting a regulatory proposal. The EU, meanwhile, remains far ahead on the regulatory front, with a risk-based framework already entering into force, even as the U.S. heads into the first days of Donald Trump’s second presidency without any federal AI legislation.

Britain’s 50-point plan: At the core of the plan — written by Matt Clifford, chair of the U.K.’s Advanced Research and Invention Agency — is a desire for AI to drive economic growth, improve healthcare, education and government interaction, and create new types of jobs. The government, the plan says, has already “taken decisive action to support the AI sector and take down the barriers to growth.” Achieving its ambitions will require sweeping partnerships with corporations, in addition to some “tough choices.”

At the top, the plan makes very clear that for Britain to remain a major leader in AI, it needs to lead not only in building every layer of the tech but also in achieving its “widespread use.” The U.K. wants to “be on the side of innovators” and “push hard” on AI adoption. The plan calls for a massive increase in British compute capacity, complete with new data centers and infrastructure that would enable the country to train multiple leading models each year by the end of the decade. It also calls for public, private and government data sets to be unlocked for AI training — and indeed, the country is currently considering a proposal that would upend established copyright law, legally allowing developers to train models on copyrighted content.
The plan calls for an expansion of AI education, in addition to an expansion of immigration programs to attract AI talent. It does mention the importance of regulation, though with the caveat that regulation “could hold back adoption in crucial sectors like the medical sector”; the focus, instead, should be on adoption, both inside the government and outside it.

“AI should become core to how we think about delivering services, transforming citizens’ experiences, and improving productivity,” the report reads. “As well as strengthening the foundations — data, skills, talent, IP and assurance measures set out above — government should also focus on its role as a major user and customer of AI and how it uses its powers to catalyze private sector adoption.”

The Prime Minister has agreed to take forward all 50 recommendations laid out in the plan, a plan that promises to “mainline AI into the veins of this enterprising nation.”

Ed Newton-Rex, the CEO of Fairly Trained, said that the U.K.’s Labour Party has “truly become the party of Big Tech.”

Not once, across those 50 points, were issues of hallucination, surveillance or algorithmic bias mentioned. Not once did the report actually delve into the risks posed by the technology; issues of sustainability were mentioned once, in the context of enabling clean energy investments for data centers.

What I see here is an uncritical framework to advance adoption at all costs, a framework focused on money rather than efficacy, legitimacy or safety.

GenAI can be a great boon for healthcare, but when adoption is pushed uncritically, hallucinations and overreliance can cause tremendous harm. And when regulation doesn’t foresee certain scenarios — such as insurance companies algorithmically adjusting coverage based on AI-enabled genetic predictions — people will get hurt.
Algorithmic bias and discrimination have already caused harm, both medically and elsewhere, and uncritical adoption seems like a poor way to reduce the odds of ongoing harm.

This is all not to mention the cost in energy — and the subsequent, known health impacts — associated with steadily expanding AI-enabled data centers.

AI can be a good tool when used by specific experts in specific scenarios, when the cost-benefit analysis has been transparently and clearly conducted. But it poses a lot of distinct harms and threats; these ought to be addressed with clarity and caution (and no marketing hype) before countries push for adoption.

But that’s not the world we’re living in.

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on OpenAI’s proposals:

38% of you said there’s good and bad in the proposals; 28% said they’re awful and 22% said they sound great.

Good and bad:

“The idea of regulations require a deeper understanding of the technology. Governments have typically looked at technology from the rear view mirror - meaning any regulations would miss the mark on this fast changing tech. However, the idea of principles based regulations are Good - but I would argue more thought is required.”
Awful:

Do you like the UK's approach?
|
|
|