Good morning. I watched Avengers: Age of Ultron last night on a whim, and I have to say, it encapsulates a lot of current X-Risk discourse quite well: a suddenly sentient AI, out of our control, that begins to self-replicate as it burns through the world and internet, intent on humanity’s destruction as the only way to accomplish its peaceful programming.

But Ultron was unachievable without a super fictional magical gemstone. The gap between ‘science fiction’ and ‘science’ is more than a little broad.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🌊 AI for Good: Ocean research
🏛️ The state of model compliance with the EU AI Act
🔋 Big Tech is going after nuclear power
🇬🇧 Britain is chasing AI gold
AI for Good: Ocean research

Source: NOAA
Covering some 70% of the planet’s surface, Earth’s oceans are a largely unexplored ecosystem full of species, knowledge and, of course, data waiting to be uncovered. As of June 2024, only 26% of the seafloor had been mapped with modern technologies, according to NOAA.

Some oceanographers are turning to artificial intelligence and machine learning to help out with the process.

To that end, there’s a relatively new platform called Ocean Vision AI, a program — led in part by the Monterey Bay Aquarium Research Institute (MBARI) and Purdue University — whose goal is to advance AI tools for analyzing visual ocean data. Right now, the program is working on building reliable training data sets to ensure that its vision models will be able to function accurately; it recently released a mobile game called FathomVerse that lets players identify marine images to improve the AI models in question.
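(For the curious: crowdsourced labeling of this kind typically reduces many noisy player votes per image to a smaller, trusted training set via consensus. The sketch below is a minimal illustration of that idea, not MBARI’s actual pipeline; the function name, vote counts and thresholds are all invented for the example.)

```python
from collections import Counter

def consensus_labels(annotations, min_votes=5, min_agreement=0.8):
    """Reduce noisy crowd annotations to a trusted training set.

    annotations maps image_id -> list of player-submitted labels.
    Returns image_id -> label for images where enough players agree.
    """
    trusted = {}
    for image_id, labels in annotations.items():
        if len(labels) < min_votes:
            continue  # too few votes to judge consensus
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            trusted[image_id] = label
    return trusted

# Example: 4 of 5 players agree, so the image makes the cut.
votes = {"img_001": ["jellyfish"] * 4 + ["siphonophore"]}
print(consensus_labels(votes))  # {'img_001': 'jellyfish'}
```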
Why it matters: MBARI Principal Engineer Dr. Kakani Katija has long been “frustrated by our lack of capacity to monitor life in the ocean,” she said. “There’s a lot of activities that are happening or will happen in the ocean that could have a significant impact on the life that lives there, and I want to ensure that we have all the information we need to conduct those activities in a really sustainable way.”
Real-Time Transcription in 50+ Languages

Speechmatics’ real-time transcription delivers over 90% accuracy with <1-second latency—no compromises.

With 25% fewer errors than their nearest competitor, Microsoft, enjoy the most reliable speech recognition available.

From customer service voice bots to television subtitling and critical healthcare transcriptions, Speechmatics offers unparalleled speed and accuracy in 50+ languages.

Try it for free today!
The state of model compliance with the EU AI Act

Source: European Commission
Europe, moving quickly compared to the rest of the world, was among the first to put AI-specific legislation on the books with its recently enacted AI Act. The legislation takes a risk-based approach, meaning more powerful models will be met with more intense restrictions.

But as the legislation begins to come into force, there has been a lack of clarity around how its regulatory requirements translate into trackable technical benchmarks. (Noncompliance can result in fines of up to €35 million, about $38 million, or 7% of global annual turnover.)

What happened: This is the gap that LatticeFlow AI, alongside research institute partners ETH Zurich and INSAIT, aims to bridge. The group mapped the Act’s requirements onto a set of 18 technical benchmarks, which in turn enables the evaluation of large language models (LLMs) in the context of the EU AI Act.

The team observed that, across a suite of 12 state-of-the-art models — including GPT-4, Llama and Mistral — “smaller models generally score poorly on technical robustness and safety … and almost all examined models struggle with diversity, non-discrimination and fairness.” They further found that, since details about training practices and training data remain obscured by the developers, none of the models are currently in compliance with the Act.
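(To make the idea concrete: a requirements-to-benchmarks mapping groups individual technical tests under the Act’s high-level principles and aggregates scores per principle. The sketch below is a loose illustration of that pattern, not LatticeFlow’s actual framework; the principle names, benchmark names and threshold are invented for the example.)

```python
# Hypothetical mapping of EU AI Act principles to benchmark scores in [0, 1].
BENCHMARKS = {
    "technical robustness and safety": ["adversarial_qa", "consistency"],
    "diversity, non-discrimination and fairness": ["bias_probe", "fairness_gap"],
    "transparency": ["training_data_disclosure"],
}

def compliance_report(scores, threshold=0.75):
    """Average benchmark scores per principle; flag principles below threshold."""
    report = {}
    for principle, benches in BENCHMARKS.items():
        avg = sum(scores[b] for b in benches) / len(benches)
        report[principle] = "pass" if avg >= threshold else "fail"
    return report

# A model that is robust but weak on fairness and transparency,
# mirroring the pattern the study reports.
model_scores = {
    "adversarial_qa": 0.82, "consistency": 0.88,
    "bias_probe": 0.55, "fairness_gap": 0.48,
    "training_data_disclosure": 0.20,  # undisclosed training data scores low
}
print(compliance_report(model_scores))
```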
The European Commission told Reuters: “The Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements.”
Our friends at Innovating with AI just welcomed 170 new students into The AI Consultancy Project, their new program that trains you to build a business as an AI consultant. Here are some highlights...

The tools and frameworks to find clients and deliver top-notch services
A 6-month plan to build a 6-figure AI consulting business
A chance for AI Tool Report readers to get early access to the next enrollment cycle
Click here to get early access to The AI Consultancy Project
AI-powered social media app promises to ‘shape reality’ (404 Media).
Nvidia and TSMC’s alliance shows signs of stress (The Information).
FTC announces final ‘click to cancel’ rule, making it easier for people to cancel subscriptions (FTC).
Australia is planning a social media ban (Reuters).
Anyone can turn you into an AI chatbot. There’s little you can do to stop them (Wired).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Big Tech is going after nuclear power

Source: Unsplash
Just yesterday, we were talking about Big Tech and nuclear energy, specifically the recent announcement that Google will be bringing a series of small modular reactors (SMRs) online within the next decade in order to meet its spiking energy demand.

Not to be outdone, Amazon on Wednesday made a similar announcement.

Amazon said that it has signed three new agreements to support the development of nuclear SMR projects. The first, with Energy Northwest, a group of state public utilities in Washington, will bring four SMRs online in the early 2030s. The power generated by these SMRs will help support Amazon’s operations in addition to the local grid. (The SMRs will be owned by Energy Northwest, not Amazon.) The second is with X-Energy, a developer of nuclear fuel and reactors. The third, with Virginia’s Dominion Energy, will develop an additional SMR near Dominion’s existing North Anna nuclear power station.
This all comes in addition to the nuclear-powered data center Amazon purchased from Talen Energy earlier this year.

In total, according to CNBC, Amazon is sinking about $500 million into these projects.

“We see the need for gigawatts of power in the coming years, and there’s not going to be enough wind and solar projects to be able to meet the needs, and so nuclear is a great opportunity,” Matthew Garman, CEO of AWS, said.

The context: We’ve now got at least three of the biggest tech companies in the world — Google, Microsoft and Amazon — inking major deals for nuclear energy in response to their newly voracious energy appetites.

I would be surprised if more don’t follow … this could be the beginning of a paradigm shift.
Britain is chasing AI gold

Source: Unsplash
At its International Investment Summit this week, Britain’s new Labour government announced around $82 billion worth of new investments across a variety of sectors, including life sciences, infrastructure, technology and, of course, artificial intelligence.

Prime Minister Keir Starmer, vowing to “rip up the bureaucracy that blocks investment,” said that “this is the moment to back Britain.”

A series of cloud providers, including ServiceNow and CoreWeave, announced plans to invest a total of around $8 billion in the U.K. in a data center expansion that is sorely needed if Britain is serious about becoming an AI hotspot. Earlier this year, Salesforce opened a global AI center in the U.K.; last year, Microsoft announced a multi-billion-dollar investment in the country to expand its data center infrastructure.
Speaking at the summit — alongside former Google CEO Eric Schmidt — Starmer said that AI will bring “incredible change” over the next decade, and that his government will seek to embrace it.

“Are we leaning in and seeing this as an opportunity, as I do? Or are we leaning out, saying ‘this is rather scary, we better regulate it,’ which I think will be the wrong approach,” Starmer said. “Obviously, there's always a question of balance, but your primary posture really matters on AI, it is a game changer that has massive potential on productivity, and on driving our economy, and we need to run towards it.”

He added that, in light of the significant energy requirements posed by AI-enabled data centers, the demand at play here could be a boon for clean energy: “On the face of it, there's a tension, but I actually think if we're smart about this, we can turn that apparent tension into a massive advantage.”

Starmer’s approach represents something of a departure from the previous administration, which more heavily emphasized the potential harms and risks of AI. A year ago, former Prime Minister Rishi Sunak said that “AI will bring new knowledge, new opportunities for economic growth, new advances in human capability and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears … doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”
The U.K., as we’ve reported in the past, is also very curious about AI advancements in healthcare, funding projects and penning strategies that center on AI-enabled genomic health prediction, a technology that, according to a recent report, carries a number of inherent risks and ought not to be deployed until there is ample regulation.

What concerns me here is an overly simplistic conflation — made by Starmer — between a regulation-inducing fear of AI and proper caution about an impactful technology.

In Starmer’s telling, the words of the AI opposition, ‘this is scary, we should regulate it,’ are deeply misinformed, referencing any number of vague, hypothetical risks. But this framing misses the environment we are currently faced with, in which generative artificial intelligence poses a large number of clear, harmful realities.

Issues of algorithmic bias and discrimination remain top of mind here, alongside problems with transparency, explainability, interpretability, sustainability and hallucination. The impacts unregulated AI models could have on enhanced surveillance, deepfake harassment and carbon emissions, to name just a few, are enormous.

In the example of genomic health prediction, imagine — in an underregulated environment — insurance companies gaining access to genomic health prediction data and refusing coverage (or increasing the cost of coverage) for people who are genetically predisposed to certain conditions. Genomic health prediction could be amazing, but without proper guardrails and oversight, it will cause harm.
There are fundamental, society-changing issues at play here that need to be addressed through regulation designed to protect ordinary people, rather than massive corporations. If the goal is to actually apply GenAI at scale, it is not a case of one or the other; it is a case of necessary and focused regulation to ensure the proper, safe application of the technology.

It won’t work without that.

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on unsafe responses from LLMs:

40% of you haven’t really encountered any obvious safety violations in your interactions with LLMs; 15% encounter such violations often. The rest aren’t really sure.

How are you feeling about the Big Tech nuclear energy push?
|
|
|