Good morning. Perplexity attracted yet another lawsuit on Monday. The New York Post and Dow Jones filed a joint lawsuit against the AI search startup alleging massive copyright infringement.

The litigation here is interesting considering that Dow Jones parent News Corp inked a massive partnership with OpenAI back in May. Clearly, News Corp isn't allergic to lawsuits.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today's newsletter:

🌊 AI for Good: Oceanic robots
🤖 Microsoft, Google and AI Agents
🖥️ IBM releases small new AI models (and actually tells us the details)
📊 Report: Europeans would let an AI vote for them
AI for Good: Underwater robots

Source: Minnesota Interactive Robotics and Vision Laboratory
While artificial intelligence is good at parsing vast quantities of data, it needs a way to access that data. And when it comes to oceanic research and marine conservation, there is a vast, unexplored sea of data, information and observations waiting to be gathered.

What happened: Researchers at the Minnesota Interactive Robotics and Vision Laboratory have developed an AI-powered solution called the Low-Cost, Open-Source, Autonomous Underwater Vehicle, or LoCO AUV.

The overriding idea is that this autonomous underwater robot is highly accessible and cheap to build; materials for the machine cost about $4,000 and consist largely of off-the-shelf and 3D-printed parts. The AI-powered robots can collect tons of data, providing detailed information about marine life and helping lay the foundations for detailed habitat maps.
The project has received nearly $1 million in funding from the U.S. National Science Foundation.

Why it matters: "Our project is about making underwater robots more effective tools for scientists and conservationists," Junaed Sattar, a lead investigator on the project, said in a statement. "With improved vision and localization, these robots can better understand and protect our underwater environments, which is crucial for ecological balance and human prosperity."
Meet your new AI assistant for work

Sana AI is a knowledge assistant that helps you work faster and smarter.

You can use it for everything from analyzing documents and drafting reports to finding information and automating repetitive tasks.

Integrated with your apps, capable of understanding meetings and completing actions in other tools, Sana AI is the most powerful assistant on the market.
Microsoft, Google and AI Agents

Source: Microsoft
Just a few weeks after announcing a host of new upgrades to its Copilot software, including the ability for customers to build custom "AI agents," Microsoft said Monday that it is rolling out these agentic capabilities in a public preview next month.

The details: Microsoft also announced that it is adding 10 new autonomous agents to its lineup for customers to choose from.

Anxious to demonstrate the efficacy of its agentic AI, Microsoft filled its blog post with testimonials, saying that Clifford Chance, McKinsey, Pets at Home and Thomson Reuters are already using agents to "increase revenue, reduce costs and scale impact." Microsoft also said that it is using its own agents to improve HR, marketing and sales.
At the same time, Honeywell, one of the Copilot customers Microsoft cited in its blog, announced a partnership with Google Cloud to build AI agents trained on Honeywell Forge, its Internet of Things platform.

Honeywell customers will be able to build customized agents using Google's tech to automate tasks and help resolve maintenance issues by 2025.

What neither company mentioned in its promotional announcement is that these agents are built on the language models each has developed, and language models are known to exhibit biases, hallucinations and general reliability issues.

Microsoft's only concession to this point was that customers can build guardrails into their agents to govern their processes.
NVIDIA, Databricks, HP, and more share GenAI tips

Learn how NVIDIA, Databricks, Twilio, HP, and ServiceNow get their GenAI apps into production!

Don't miss GenAI Productionize 2.0 for GenAI best practices, including:

How to design an enterprise-scale GenAI stack
Techniques for AI governance, evaluation, and observability
Proven strategies for getting GenAI apps into production
Free Registration
Perplexity is in talks to raise funds that would double its valuation to $8 billion (Wall Street Journal).
EU joins up with venture capital firms to boost the region's tech sector (Reuters).
Inside the FBI's secret phone company (404 Media).
Intuit asked us to delete part of this Decoder episode (The Verge).
Elon Musk, Tesla and WBD sued over alleged 'Blade Runner 2049' AI ripoff for Cybercab promotion (CNBC).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
IBM releases small new AI models (and actually tells us the details)

Source: IBM
IBM on Monday released Granite 3.0, the third generation of its Granite family of language models.

The details: The models, which specifically target enterprise users, have exhibited strong performance across retrieval-augmented generation (RAG), classification, summarization and tool use.

IBM's pitch is that if enterprises combine one of these small models with their own enterprise-specific data, "businesses can achieve task-specific performance that rivals larger models at a fraction of the cost (based on an observed range of 3x-23x less cost than large frontier models in several early proofs-of-concept)." A minimal sketch of that retrieval pattern follows below.

IBM at the same time released Granite Guardian 3.0, a new family of models that lets users check and oversee their existing AI systems across a series of reliability and harm dimensions, including bias, toxicity, profanity, violence and jailbreaking.
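To make that pitch concrete, here is a minimal sketch of the RAG pattern: retrieve the enterprise documents most relevant to a query, then hand them to a small model as context rather than fine-tuning it. The toy corpus, query and TF-IDF retrieval method here are illustrative assumptions on my part, not details of IBM's implementation.

```python
# Minimal RAG sketch: TF-IDF retrieval over a toy "enterprise" corpus,
# then prompt assembly for a small language model. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 5 business days of approval.",
    "Enterprise support tickets are triaged by severity, then age.",
    "Customer data is retained for 90 days unless law requires longer.",
]
query = "How long do refunds take?"

# Rank documents by cosine similarity to the query in TF-IDF space.
vectorizer = TfidfVectorizer().fit(documents)
scores = cosine_similarity(
    vectorizer.transform([query]), vectorizer.transform(documents)
)[0]
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

# Stuff the retrieved context into the prompt for a small model
# (e.g., a Granite 3.0 instruct model) instead of retraining it.
prompt = (
    "Answer using only the context below.\n\n"
    + "\n".join(f"- {d}" for d in top_docs)
    + f"\n\nQuestion: {query}"
)
print(prompt)
```

The design point is that the model stays small and generic; the enterprise-specific knowledge lives in the retrieved documents, which is what keeps the cost a fraction of a frontier model's.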
What makes this different: Developers are always releasing new AI models, but details about those models are often slim to nonexistent. That's not the case here. In an extensive research paper and technical report, as well as a responsible use guide, IBM provided details on its training data and data curation process, in addition to details on the energy use and carbon emissions associated with the models.

Training the family of four models took a total of 1.2 million GPU hours, consuming 945.6 megawatt-hours (MWh) of electricity and emitting 368.7 tons of carbon dioxide. For context, the average U.S. household consumes about 10.5 MWh of electricity each year, and per-capita carbon emissions in the U.S. amount to 14.9 tons annually; the rough equivalences are worked out below.

Because the models are relatively lightweight, they "can be run on a single GPU," meaning the energy cost of actually operating them shouldn't be enormous.

In terms of training data, the models were built on a mix of synthetic data and scraped internet data with "permissible licenses." That data was then carefully analyzed and cleaned before being used for training.
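Using only the figures quoted above, a quick back-of-the-envelope conversion puts those training numbers in household terms:

```python
# Back-of-the-envelope context for IBM's reported Granite 3.0 training
# footprint, using only the figures quoted above.
training_mwh = 945.6            # electricity to train all four models
household_mwh_per_year = 10.5   # average U.S. household, per year
training_co2_tons = 368.7       # reported training emissions
us_co2_tons_per_capita = 14.9   # annual U.S. per-capita emissions

print(f"{training_mwh / household_mwh_per_year:.0f} household-years of electricity")
print(f"{training_co2_tons / us_co2_tons_per_capita:.0f} person-years of U.S. emissions")
```

That works out to roughly 90 household-years of electricity and about 25 person-years of U.S. carbon emissions for the whole family of models.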
Europeans would let an AI vote for them

Source: European Union
IE University's Center for the Governance of Change (CGC) unveiled its annual European Tech Insights report on Monday. A key focus of the report was the increasing integration of generative artificial intelligence into societal institutions.

And Europeans are really mixed about the whole thing.

AI in elections: Around 40% of Europeans, and nearly 75% of public servants, are concerned about the misuse of generative AI in elections, and a third of those surveyed believe AI has already influenced their voting behavior.

Yet more than a third of Europeans between the ages of 18 and 34 would trust a hypothetical AI app, one that processes their personal data to identify political candidates they would likely approve of, to autonomously vote on their behalf. Nearly a third of Europeans under 45 trust AI systems to predict election results accurately more than they trust electoral surveys.
AI in public services: 60% of those polled had no idea that their governments are actively using AI to deliver public services.

Nearly 80% of Europeans support the use of AI to help job seekers find jobs; nearly 100% of public servants are okay with such a service operating without human oversight. Nearly 60% of Europeans trust governments to use AI to determine welfare eligibility and amounts, with only 46% preferring that the process include human supervision; 65% would be comfortable using an AI to process their tax returns.
And right around 75% of Europeans support the use of AI for police and military operations, "e.g. using facial recognition and biometric data for surveillance."

At the same time, 64% of those polled don't support the use of AI to determine parole eligibility, and 50.6% are against the use of AI to determine immigration visa eligibility.

And 70% support the passage of laws limiting automation in order to protect their jobs.

"European citizens' attitudes toward adopting emerging technologies can be summed up as 'yes, as long as core values are respected,'" the report reads. "This is evident in their cautious optimism about AI in public life, including elections, where many trust its potential but worry about misuse."

The report surveyed 3,006 adults across Estonia, France, Germany, Italy, the Netherlands, Poland, Romania, Spain, Sweden and the U.K.

AI literacy is clearly lacking and needs to improve.

The juxtaposition of people prepared to "trust" AI while worrying about its misuse is a hard one to grapple with. I see the potential of AI; I do not trust its output.

Trusting an AI tool to vote on your behalf, while at the same time acknowledging concern about electoral misuse, seems based on anything but an understanding of the technology.

Likewise, supporting the use of AI across a variety of government operations, including military and police operations, while wanting laws that limit automation to protect your own job, ignores the real, active harms and threats posed by integrating hallucinatory, biased systems into high-stakes environments. (I also find it unlikely that governments will pass such laws while taking advantage of these systems themselves.)

The risk here is that regulatory efforts could be swayed by civilians and government officials who are vastly misinformed about the genuine capabilities, promises, risks and active harms of current AI, something that could enable the entrenchment of faulty systems at scale, increasing the threats of unfettered surveillance, enhanced predictive policing, and ingrained biases and critical mistakes in crucial operations.
💭 A poll before you go

Thanks for reading today's edition of The Deep View!

We'll see you in the next one.

Here's your view on fake news:

The bulk of you (50%) just cross-check your information across a few different trusted sources. It really is that easy.

The rest aren't really sure what to do about it.

Would YOU let an AI vote for you?
|
|
|