Good morning. Some have claimed that GenAI will soon make the programming field extinct, and so people should stop learning to code. But computer science professor Andrew Ng said recently that “this advice will be seen as some of the worst career advice ever given.”

As GenAI makes coding easier, he said, more people should code, not fewer.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🛰️ AI for Good: Landmine removal
⚡️ Europe is building a ton of new AI infrastructure
🚨 How Big Tech is using the AI ‘race’ and the specter of AGI to cement its power
AI for Good: Landmine removal
Source: Unsplash
War leaves many things behind. One of them is unexploded ordnance.

The HALO Trust recently began a pilot program that combines its expertise in landmine removal with drones and specially trained machine learning algorithms to accelerate its review of mine zones and aid its effort to get rid of the explosives.

The details: The humanitarian organization partnered with AWS last year to enable HALO to train algorithms that leverage both satellite imagery and drone footage to detect possible minefields.

The organization has flown hundreds of drone flights over minefields in Ukraine, gathering terabytes of data that can now be algorithmically analyzed. But the training process is time-consuming and tricky; it first requires the manual labeling of thousands of images, followed by regular validation and vetting by trained professionals to ensure ongoing accuracy.
Why it matters: Whether humans or robots end up clearing the minefields, detailed analysis of these areas enables HALO to prioritize its efforts, resulting in faster clean-ups that bring people back to the land sooner and more safely.
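The article doesn’t detail HALO’s actual models or tooling, but for a sense of what the “label thousands of images, train, then keep validating” loop can look like in practice, here is a minimal, hypothetical sketch in PyTorch. The folder layout, class names and hyperparameters are illustrative assumptions, not HALO’s or AWS’s pipeline.

```python
# Hypothetical sketch: fine-tune an off-the-shelf image classifier to flag drone-imagery
# tiles that may contain mine-indicating features. Paths, classes and settings are
# illustrative assumptions; this is not HALO's or AWS's actual pipeline.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Manually labeled tiles sorted into per-class folders, e.g. tiles/train/{clear,suspect}/...
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("tiles/train", transform=transform)
val_set = datasets.ImageFolder("tiles/val", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# Start from ImageNet weights and swap in a new classification head, so far fewer
# labeled tiles are needed than training from scratch would require.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Held-out validation: in the workflow the article describes, trained professionals
    # would also review what the model flags, not just watch this accuracy number.
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"epoch {epoch}: validation accuracy {correct / total:.2%}")
```

The appeal of starting from a pretrained backbone is exactly the bottleneck the piece describes: manual labeling is the slow, expensive step, so approaches that need fewer labeled images to reach useful accuracy matter as much as the model itself.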
The Future of AI, LLMs, and Observability with Google Cloud

Discover 7 key insights for technical leaders from Google’s Director of AI, Dr. Ali Arsanjani, and Datadog’s VP of Engineering, Sajid Mehmood.

Get the eBook to read the most important takeaways as they break down their predictions of the future, such as:

Smarter AI and LLM strategies for your org
Building customer trust in AI outputs
Scaling your tooling as LLM expertise grows
Don’t fall behind — download now.
Europe is building a bunch of new AI infrastructure
Source: Unsplash
Last month, as part of its effort to become an actual contender in the international “AI race” being run with increasingly desperate intensity by actors around the world, the European Union announced a $206 billion investment in AI infrastructure (in other words, chips and data centers).

As part of this buildout, the European Commission last week selected six new sites for its future lineup of AI factories.

The details: The factories will be built in Austria, Bulgaria, France, Germany, Poland and Slovenia. Combined with the Commission’s seven previously selected sites, the announcement brings Europe’s planned AI factory ecosystem to a total of 13 locations.

Each factory is designed with a slightly different focus. Five of the six new sites, for example, will additionally feature supercomputers specifically intended to accelerate AI workloads. Some, like the sites in Germany and France, will serve all sectors and industries, while others, like the site in Poland, will focus on academia as well as the public and private sectors. The factory in Austria will “support ethical, practical and sustainable AI development,” though details around water consumption and energy procurement for this — and all the other sites — remain unclear.
“By 2026, these AI Factories will be the backbone of Europe’s AI strategy, combining computing power, data and talent to drive innovation and secure Europe’s leadership in AI,” the Commission wrote in a statement.

The landscape: The move echoes an investment and build-out happening all over the world. The most prominent example is probably the U.S.-based, OpenAI-led Project Stargate, a supposedly $500 billion data center effort that still doesn’t represent all of the AI data center investment happening in this country. France, meanwhile, has announced $100 billion in data center investments, and the U.K. is similarly pushing hard for cross-industry adoption.

All of this is happening regardless of the known environmental and public health impacts of these data centers.
Musk v. Altman: Elon Musk and OpenAI have agreed to fast-track a trial regarding OpenAI’s pending for-profit shift. Both parties jointly proposed a December trial date.

Open source? Google recently released Gemma 3, a purportedly open model whose licensing terms make commercial use … difficult, to say the least.
AI coding assistant refuses to write code, tells user to learn programming instead (Ars Technica).
U.S. consumers are starting to crack as tariffs add to inflation, recession concerns (CNBC).
House GOP subpoenas Big Tech for evidence that Biden made AI woke (The Verge).
DeepSeek is now being closely guarded (The Information).
This Russian tech bro helped steal $93 million and landed in a US prison. Then Putin called (Wired).
|
How Big Tech is using the AI ‘race’ and the specter of AGI to cement its power
Source: Created with AI by The Deep View
More than two years ago, OpenAI CEO Sam Altman said that his worst-case scenario for AI is “lights out for all of us.”

Since then, Altman and OpenAI have flip-flopped on whether the hypothetical, scientifically dubious artificial general intelligence (AGI) they’re pursuing will bring about a utopia or just destroy human civilization; in an essay Altman published last month, the angle was that we are on the threshold of a world of unimaginable economic growth and cures for all diseases.

Nice.

Then, in a blog published last week, OpenAI said that the technology could lead “to painful setbacks for humanity” and “irrecoverable loss of human thriving,” whatever that means.

Not nice.

But the policy proposal that the firm just submitted to the White House — yet again — takes a different tack, suggesting that AGI is nigh, and therefore, “we are at the doorstep of the next leap in prosperity.” OpenAI, it is worth noting, is currently in talks to raise $40 billion at a $300 billion valuation.

Here’s what OpenAI wants: Its 15-page proposal, submitted in response to the Trump Administration's request for information regarding an AI Action Plan, is centered around the dynamics of an increasingly tense competition between the U.S. and China over AI. (The document mentions “China” three times, the “PRC” 19 times, the “CCP” 12 times and DeepSeek eight times).

The gist of all this is that, in OpenAI’s view, DeepSeek has demonstrated that America’s AI lead is “not wide and is narrowing” — the Action Plan, therefore, “should ensure that American-led AI prevails … securing … American leadership on AI.” The proposal that follows, so considerately rooted in an effort to ensure “our national security,” calls for the “freedom to innovate,” again, whatever that means.
Specifically, it calls for “purely voluntary” and “optional” work between AI companies and the federal government to test models, something that will additionally enable AI companies to keep the government up to speed on “cutting-edge capabilities that support U.S. national interests.”

It also calls for tougher tech export controls, a codification that training AI models on copyrighted content is protected under the U.S. fair use doctrine (which the U.S. Copyright Office has yet to address), a massive increase in data centers and the energy to power them — which would be rooted in “categorical exclusions” from the environmental reviews required by the National Environmental Policy Act — and a massive, sweeping adoption of GenAI by the federal government.

In the context of all of this, as well as OpenAI’s frustration with “having to comply with overly burdensome state laws,” the document specifically highlights some of China’s big advantages over the U.S. — namely, that it is an authoritarian state: “As an authoritarian state, its ability to quickly marshal resources — data, energy, technical talent and the enormous sums needed to build out its own domestic chip development capacity.”

Oh boy.

Dr. Seena Rejal, CCO of the AI startup NetMind, told me that the “parallel between some elements of this proposal and strategies employed by the very nations it is trying to challenge” ought to invite a “broader conversation about international best practices.”

He added that the risks of a growing “AI arms race,” increasing disparities between AI-enabled nations and diminished oversight of AI’s societal impacts are “significant concerns.”

OpenAI further suggested that the U.S. government monitor “domestic policy debates and ongoing litigation … weighing in where fundamental, pro-innovation principles are at risk.”

Now, at the same time, Google submitted its own policy proposal, a 12-page document that, like OpenAI’s, calls for government-wide adoption of GenAI, a massive increase in investment in energy and data centers, and “balanced copyright rules, such as fair use and text-and-data mining exceptions.”

Note: both Google and OpenAI remain embroiled in copyright-related lawsuits, as the central question of whether it is legal to train highly commercialized AI models on copyrighted content has yet to be resolved. Both companies’ push to enshrine this protection comes as those lawsuits intensify, and despite the content licensing deals they have secured.
But, unlike OpenAI’s proposal, Google’s makes no mention of DeepSeek, almost no mention of China and only a handful of vague references to national security. Google wants “balanced” copyright laws, a remedy to a litigious situation that OpenAI framed like this: “applying the fair use doctrine to AI is … a matter of national security.”

Both companies, though, are on the same page when it comes to infrastructure, with Google writing: “While we are seeing significant efficiency improvements, widespread AI adoption may still result in large increases in electricity requirements, with projections of AI data center power demand rising by nearly 40 GW globally from 2024 to 2026. Current U.S. energy infrastructure and permitting processes appear inadequate to meet these escalating needs.”

Meanwhile, the energy and carbon intensity of today’s data centers already poses a steadily rising risk to public health, owing to a massive increase in air pollution.

While it carries similar themes, Google’s 2023 response to the Biden Administration’s request for information was framed very differently, with a strong focus on trust, safety, security and responsibility.

Anthropic submitted its own proposal last week, a document that carries many thematic similarities to the proposals above.

“To move forward responsibly, we must prioritize collaboration, inclusivity and a balanced approach that fosters both innovation and ethical governance,” Rejal said.

Altman and OpenAI’s consistent re-angling of whether AGI will be utopian or dystopian is evidence to me that AGI is nothing more than a tool. A tool not to cure cancer but to sway the public policy conversation, a tool to leverage OpenAI into a position that will guarantee a return on the billions of dollars it has invested in trying to make its technology work.

By positioning it all as a battle for control between the U.S. and China, by painting utopian vistas while highlighting the risk of any other country having access to such exceedingly powerful technology, OpenAI’s real goals shine through: protection from expensive litigation; an increase in government adoption, which would make OpenAI fundamental to governmental operations and give it a larger base of consistent enterprise users; and access to greater energy reserves sans pesky environmental reviews, allowing it to keep pushing on training and inference compute without limitations.

What OpenAI wants is clear.

This company is using the specter of AGI as a means of convincing legislators to grant it exactly that.

It’s a sleight of hand, wherein AGI is the bright light, or the loud bang, designed to divert attention while the company attempts to leverage regulation as a means of affirming its position.

I’ll add that a recent AAAI survey of a diverse array of AI researchers found that 76% think it unlikely that scaling up current architectures will lead to AGI.

Many researchers don’t see a clear path to AGI, since human cognition is vast, flexible and, well, not very well understood.

The human brain operates, according to Princeton Professor Timothy Jorgensen, on just 12 watts of power (and it hasn’t been “trained” on an internet-scale corpus of textual and visual data).

These companies are talking about unlocking gigawatts to power these data centers. And they’re seeking legal shields so they can leverage even more data.

But you can’t brute-force true intelligence.
The question now is whether the illusion of AGI is strong enough for governments to enable, rather than restrict, the operations of these companies.

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on AI search:

45% of you check the original sources in your AI searches, though only half of those think it’s needed.
20% don’t.

Today’s poll: Do you trust Big Tech?

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
|
|
|