| | Good morning. Yesterday, AI image generator Midjourney made its website freely accessible to everyone. At the same time, Ideogram launched 2.0, its “most advanced text-to-image model” to date, and made the service available on the iOS App Store. | Yeah, this sounds like a good idea. | — Ian Krietzberg, Editor-in-Chief, The Deep View | In today’s newsletter: | |
| |
| AI for Good: Chasing storms and extreme weather | | Source: Nvidia |
| In the midst of a hurricane season NOAA promised would “rank among the busiest on record,” Nvidia has published new research that positions artificial intelligence as a valuable storm chaser. | The details: Nvidia this week unveiled a new generative AI model called StormCast, which can emulate “high-fidelity atmospheric dynamics.” | “This means the model can enable reliable weather prediction at mesoscale — a scale larger than storms but smaller than cyclones — which is critical for disaster planning and mitigation,” Nvidia said. The diffusion model autoregressively predicts 99 state variables at kilometer scale, a resolution that deep learning models have previously struggled with.
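| To make the idea of “autoregressively predicting” atmospheric state concrete, here is a minimal Python sketch of an autoregressive rollout: each predicted state is fed back in as the input for the next step. The model, grid size and step count below are illustrative stand-ins, not Nvidia’s actual StormCast code.

```python
# Minimal sketch of an autoregressive forecast loop (illustrative only).
import numpy as np

N_VARS = 99            # atmospheric state variables, as described for StormCast
GRID = (128, 128)      # small stand-in grid; the real model works at km scale

def predict_next_state(state: np.ndarray) -> np.ndarray:
    """Placeholder for one model step. A real diffusion model would sample
    the next atmospheric state here; we just perturb the input so the
    sketch runs end to end."""
    return state + 0.01 * np.random.randn(*state.shape).astype(np.float32)

def rollout(initial_state: np.ndarray, steps: int) -> list[np.ndarray]:
    """Autoregressive rollout: each prediction becomes the next input."""
    trajectory = [initial_state]
    for _ in range(steps):
        trajectory.append(predict_next_state(trajectory[-1]))
    return trajectory

forecast = rollout(np.zeros((N_VARS, *GRID), dtype=np.float32), steps=6)
print(len(forecast), forecast[-1].shape)  # 7 states, each (99, 128, 128)
```

| One reason kilometer-scale rollouts have been hard for deep learning models is that small per-step errors compound as each prediction is fed back in as input.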
| Though StormCast is only one step toward more advanced weather forecasting, researchers are already excited about its potential. Imme Ebert-Uphoff, machine learning lead at Colorado State University’s Cooperative Institute for Research in the Atmosphere, said that the development of high-resolution weather models requires AI models to resolve convection, something she called a “huge challenge.” | But StormCast, she said, represents a “significant step toward the development of future AI models for high-resolution weather prediction.” |
| |
| | Accurate, Explainable, and Relevant GenAI Results | | Vector search alone doesn’t get precise GenAI results. Combine knowledge graphs and RAG into GraphRAG for accuracy, explainability, and relevance. | Read this blog for a walkthrough of how to: | |
| |
| Washington Post publishes story based on AI-led investigation | | Source: Unsplash |
| On Sunday, the Washington Post published an article investigating TV advertising related to U.S. immigration and the border. The investigation involved the analysis of 745 campaign ads that ran between January and June of 2024. | And though the story was put together by a long list of Post journalists and editors, the analysis was built on the back of a large language model. | The details: The Post developed an in-house tool called Haystacker that extracts stills from videos, then labels the objects present in each still. Post reporters then analyzed the resulting text and visual information. | With Haystacker, the Post was able to determine that 20% of the ads showed images “that are outdated, lack context, or are paired with voice-overs and text that do not accurately depict what is shown on the screen.” The tool was in development for more than a year.
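| For a sense of the general shape of such a pipeline — Haystacker is an in-house tool, so everything below is a hypothetical sketch, not the Post’s actual code — here is a minimal Python example that pulls stills from a video with OpenCV and hands each one to a placeholder labeling step.

```python
# A rough, hypothetical sketch of the kind of pipeline described above:
# pull stills from an ad video, label what's in each still, and hand
# reporters a structured summary. Not Haystacker's actual code.
import cv2  # OpenCV, used here only for frame extraction

def extract_stills(video_path: str, every_n_seconds: float = 2.0) -> list:
    """Grab one frame every few seconds from a video file."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    stills, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            stills.append(frame)
        idx += 1
    cap.release()
    return stills

def label_objects(frame) -> list[str]:
    """Placeholder for the labeling step; a real system would call a
    vision model (object detector or multimodal model) here."""
    return ["person", "fence"]  # dummy labels so the sketch runs

def analyze_ad(video_path: str) -> dict:
    stills = extract_stills(video_path)
    return {
        "video": video_path,
        "stills": len(stills),
        "labels": [label_objects(f) for f in stills],
    }

# With a real file on disk, reporters would review the structured output:
# print(analyze_ad("campaign_ad_001.mp4"))
```

| The value here is triage: the model turns 745 videos into structured labels that reporters can then verify, rather than watching everything from scratch.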
| My view: This kind of video analysis is definitely a pain point in journalism. My only concern with applying LLMs to it is the risk of developing an over-reliance on these tools; if I were using Haystacker, I would constantly be wondering what the model isn’t showing me. It creates a degree of separation that, as a journalist, I’d rather not have. | But using the model as the backbone of an analysis that journalists later confirm certainly seems like the best approach. |
| |
| | Save Time & Scale Video Creation with AI | | | Say goodbye to video production bottlenecks. PlayPlay's AI Video Assistant removes the hassle of video editing — helping teams create professional-looking videos in seconds. | Simply describe your video needs in a sentence and watch the Assistant create your video using the perfect template, text, images, and music. | Cut down on time and money while delivering consistent video comms. Boost engagement and conversions with captivating videos. Streamline content creation workflows across your entire organization.
| Trusted by 3,000+ brands like Dell, CVS Health and L'Oréal. | Create engaging videos now — start your 14-day free trial. |
| |
| | | Story, a startup offering to use blockchain to protect artists’ content from being scraped by AI companies, raised $80 million in funding at a $2.25 billion valuation. Construction startup Trunk Tools raised $20 million in Series A funding.
| | Meta’s search for AI clout takes it into new terrain (The Information). US judge strikes down Biden administration ban on worker 'noncompete' agreements (Reuters). Anthropic asks court to “prune” Universal lawsuit (Music Business Worldwide). Fed minutes point to ‘likely’ rate cut coming in September (CNBC). Google’s AI ‘Reimagine’ tool helped us add wrecks, disasters, and corpses to our photos (The Verge).
| If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here. |
| |
| ElevenLabs launches program to help ALS victims reclaim their voice | | Source: ElevenLabs |
| ElevenLabs, the audio-genAI company, this week launched an Impact Program “to help one million people reclaim their voice.” | The details: To start, the firm is partnering with nonprofits to offer free pro licenses to patients with ALS and MND, which would allow them to clone and preserve their voice with ElevenLabs’ AI tech. | The base technology on offer here is almost identical to Apple’s Personal Voice, an iOS 17 feature that allows users to digitally clone their voice, which they can then use through text-to-speech on calls, on FaceTime and in person.
| Creative AI expert Alex Champandard said that, while the offer itself is good, the context of offering a free “license” is a red flag. The licenses — which lock people into the platform — could end at any time, he said, which would leave the people they’re trying to help in a worse state than before. | “This kind of thing should be open-source only,” Champandard said. “It's always dangerous to have profit-seeking companies use disabled people for PR, e.g. to fix the reputation problem caused by the rest of their business.”
| The context: In terms of the rest of their business, I’ve written about ElevenLabs a few times: first, when they were identified as the platform used to generate the fake robocall impersonating President Joe Biden in January, and more recently, in the context of digital necromancy. |
| |
| Exclusive interview: CEO of AI engineering startup Monumo talks LEMs | | Source: Monumo |
| We’ve talked often about the one area where artificial intelligence far surpasses human capabilities: deriving insights from truly enormous piles of data. This shows up everywhere from medicine, where AI is used to find new drug candidates, to environmental conservation, where it is used to track and analyze everything from pollution levels to the movements of endangered species. | British deep tech startup Monumo is applying the tech — specifically 3D generative AI (large engineering models, or LEMs) — to engineering, with the simple idea of revolutionizing electric motor design to produce cheaper, more sustainable and more capable machines. | Monumo, which was founded in 2021, came out of stealth with £10.5 million in funding in February 2024. | I sat down with Dominic Vergine, Monumo’s founder and CEO. | The details: The reason Monumo decided to start with the electric motor is twofold: first, it is at the heart of clean energy and so plays a big role in decarbonization efforts, and second, it’s a “nicely constrained multi-physics problem. It’s neither too big nor too small.” | Customers give Monumo their ideal motor specifications, and Monumo runs millions of motor simulations against those specs until the design is as optimized as possible. It’s generative AI in a different form: training a genAI model on engine textbooks might enable you to generate essays about engines, but training it instead on a narrow set of motor classes (which are in the public domain) allows Monumo to “explore and invent and experiment.”
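| As a rough illustration of what simulation-driven design search looks like in code — not Monumo’s actual method, and with entirely made-up parameters, simulator and scoring — here is a minimal Python sketch: generate candidate designs, score each one with a simulated evaluation, and keep the best.

```python
# Toy sketch of simulation-driven design search (illustrative only).
import random

def simulate_motor(design: dict) -> float:
    """Placeholder physics simulation returning an efficiency-style score.
    A real system would run a multi-physics solver here."""
    return 1.0 / (1.0 + abs(design["coil_turns"] - 120) / 100
                  + abs(design["rotor_radius_mm"] - 40) / 50)

def random_design() -> dict:
    """Sample one candidate from a made-up motor parameter space."""
    return {
        "coil_turns": random.randint(50, 300),
        "rotor_radius_mm": random.uniform(20, 80),
    }

def search(n_candidates: int = 100_000) -> tuple[dict, float]:
    """Brute-force search: simulate many candidates, keep the best one."""
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = random_design()
        score = simulate_motor(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best_design, score = search()
print(best_design, round(score, 3))
```

| Monumo’s real system is far more sophisticated — multi-physics simulation plus a generative model trained on motor classes — but the economic point Vergine makes holds even in the toy version: once evaluation is cheap, you can afford to explore design avenues no human team would budget for.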
| | Dominic Vergine, CEO, Monumo |
| “Super” human isn’t about intelligence: Vergine said that the system “is already finding things that are arguably beyond human in terms of some of the shapes and some of the waveforms that are being produced.” | The reality of this, though, isn’t about intelligence, according to Vergine. It’s about capability. In both the corporate and academic worlds, an engineer has to have some evidence that the “route you’re going to take is going to yield results.” | But the AI-driven simulation approach allows engineers to “go down avenues that no one, in either a business setting or in an academic setting, would ever normally explore, because it takes too long and it costs too much.” | The narrow approach ties into Monumo’s sustainability focus: smaller, narrower models are simply less energy-intensive than their massive, internet-trained counterparts. Monumo uses an in-house 300-core supercomputer that Vergine said is more than enough for their needs. “Given the potential energy savings and cost savings that we’re producing, the cost of finding these new designs is really minuscule.”
| “You have to start somewhere, but you can grow this out into being, ultimately, a system that can design the next aircraft, or the next car, or the next wind turbine in its entirety,” Vergine said. “You explore every conceivable route. Some of them are dead ends. Some of them are not, but ultimately, you come up with an apex design,” he said, referring to a design that uses the fewest materials and delivers the maximum result for its given task. | “We’re very, very far off apex designs of any kind of motor or airplane or wind turbine at the moment,” Vergine added. “I think we will get there in the next 50 years or so.” | | Which image is real? |
| |
| A poll before you go | Thanks for reading today’s edition of The Deep View! | We’ll see you in the next one. | Here’s your view on electing an AI to public office: | More than half of you — I’m with you guys on this — said “are you serious, absolutely not.” 10% said you would and 18% said you’re open-minded; it depends who the competition is. | Something else: | | Absolutely not: | “It is true that AI would not commit the same technical mistakes as humans. However, it is susceptible to fail in situations where a human wouldn’t, such as empathizing or understanding emotions, crucial aspects in the public sphere. In addition, we, as humans, don’t really understand AI ‘thoughts’ and cannot anticipate its decisions.”
| How do you feel about AI in journalism? | |
|
|
|