Good morning. Apparently, OpenAI’s Strawberry model is launching within the next two weeks.

Little is known about the model, how it will be released or what it might cost. But we break down what we DO know, below.

— Ian Krietzberg, Editor-in-Chief, The Deep View
AI for Good: New AirPods double as hearing aids

Source: Apple

At its Monday iPhone event, Apple also unveiled new capabilities coming to AirPods Pro 2, namely a suite of hearing health features.

The details: The features cover three levels: awareness, protection and assistance.

It starts with a simple hearing test that allows each user to create a hearing profile. Hearing Protection features then work to reduce damaging exterior noise, something Apple said it achieved, through the use of a “dynamic” algorithm, while keeping sounds at live events “natural and vibrant.” And for those with mild to moderate hearing loss, the AirPods Pro 2 double as a “clinical-grade” hearing aid.

The hearing aid feature works from each user’s personalized hearing profile, making individual adjustments to different types of sound, both exterior and on-device. Apple said the feature is still pending FDA approval but is expected to release in the fall.

Why it matters: At $249, the AirPods aren’t cheap, but they’re significantly cheaper than the average hearing aid. Plus, many people with mild to moderate hearing loss forgo hearing aids because of the stigma associated with wearing one; an everyday consumer earbud carries no such stigma, which makes hearing assistance far more accessible.

This is an example of algorithms at work that can be genuinely helpful to a whole lot of people. I don’t love everything Apple does, but I like this.
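Apple hasn’t published how the feature works under the hood, but the core idea of a hearing profile is easy to illustrate: treat the results of a hearing test (an audiogram) as per-frequency gains and apply them to incoming audio. Below is a minimal sketch in Python; the audiogram values, the “half-gain” rule and every name in it are hypothetical toys, not Apple’s algorithm.

```python
# Toy illustration only -- NOT Apple's algorithm. Models a hearing profile
# as per-frequency gains derived from an audiogram, applied to an audio buffer.
import numpy as np

# Hypothetical audiogram: measured hearing loss (dB) at standard test frequencies (Hz).
AUDIOGRAM = {250: 10, 500: 15, 1000: 20, 2000: 30, 4000: 40, 8000: 45}

def profile_gains(freqs_hz):
    """Interpolate the audiogram into a linear gain per FFT bin, using a
    simple 'half-gain' rule: boost each band by half its measured loss."""
    test_freqs = sorted(AUDIOGRAM)
    loss_db = [AUDIOGRAM[f] for f in test_freqs]
    interp_loss = np.interp(freqs_hz, test_freqs, loss_db)
    return 10 ** ((interp_loss / 2) / 20)  # dB -> linear amplitude gain

def apply_profile(samples, sample_rate=48_000):
    """Boost the frequency bands where the profile shows hearing loss."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1 / sample_rate)
    return np.fft.irfft(spectrum * profile_gains(freqs), n=len(samples))

# One second of synthetic audio: a 440 Hz tone plus a quieter 4 kHz tone.
t = np.linspace(0, 1, 48_000, endpoint=False)
audio = 0.1 * np.sin(2 * np.pi * 440 * t) + 0.05 * np.sin(2 * np.pi * 4000 * t)
boosted = apply_profile(audio)  # the 4 kHz component gets the larger boost
```

Real hearing aids layer far more on top of this (compression, noise suppression, per-ear profiles), but mapping a personal hearing profile to per-band gain is the heart of the personalization.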
💥 Use AI to 10X your productivity & efficiency at work (free bonus) 🤯

Still struggling to achieve work-life balance and manage your time efficiently?

Join this 3-hour intensive workshop on AI tools & ChatGPT (usually $399), FREE for the first 100 readers.

🗓️ Tomorrow | ⏱️ 10 AM EST

An AI-powered professional will earn 10x more. 💰
An AI-powered founder will build & scale their company 10x faster. 🚀
An AI-first company will grow 50x more! 📊

In this workshop, you will learn how to:

✅ Make smarter decisions based on data in seconds using AI
✅ Automate daily tasks and increase productivity & creativity
✅ Skyrocket your business growth by leveraging the power of AI
✅ Save 1000s of dollars by using ChatGPT to simplify complex problems

👉 Hurry! Click here to register (FREE for the first 100 people only) 🎁
OpenAI’s Strawberry is incoming

Source: Created with AI by The Deep View

The Information reported that OpenAI’s new Strawberry model, for weeks the subject of plenty of extraordinary internet rumors, will be launching within the next two weeks as part of the ChatGPT interface.

The details: Anonymous sources told The Information that the difference between Strawberry and other models is its ingrained capacity to “think” before responding to queries.

Rather than immediately generating output, the Strawberry model will wait 10 to 20 seconds. The other big difference is that, at least initially, the model will handle text only; no multi-modal capabilities here. The report said the model is only “slightly better” than GPT-4o and will likely be priced differently from OpenAI’s other models.
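Nothing public describes how Strawberry actually works, but the general “think before responding” pattern the sources describe can be imitated with any text model by spending time on reasoning before committing to an answer. Here is a purely illustrative two-pass sketch; `ask_model` is a placeholder for whatever chat function you have, not an OpenAI API, and the prompts are hypothetical.

```python
# Illustrative only: a generic "deliberate, then answer" pattern, not
# a description of Strawberry's internals (which are not public).
from typing import Callable

def deliberate_answer(ask_model: Callable[[str], str], question: str) -> str:
    """Two-pass 'think before responding': draft reasoning first, then
    condition the final answer on that reasoning. ask_model is any function
    that sends a prompt to a text model and returns its reply."""
    reasoning = ask_model(
        "Think step by step about the question below. "
        "Write out your reasoning only, not a final answer.\n\n" + question
    )
    return ask_model(
        "Question:\n" + question
        + "\n\nPrior reasoning:\n" + reasoning
        + "\n\nUsing that reasoning, give a concise final answer."
    )
```

The extra pass is one plausible reason a model like this would sit silent for 10 to 20 seconds before answering: the time goes to tokens the user never sees.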
The context: The project was initially positioned in reports by Reuters as a significant step toward human-level reasoning (whatever that means), though this latest report suggests the reality won’t be quite so glamorous, something that could have implications for the future viability of OpenAI’s business (and its survival of a potential bubble burst).

GPT-5, meanwhile, remains nowhere in sight.

OpenAI didn’t return my request for comment.
Real-Time Transcription in 50+ Languages

Speechmatics’ real-time transcription delivers over 90% accuracy with <1-second latency, no compromises.

With 25% fewer errors than their nearest competitor, Microsoft, enjoy the most reliable speech recognition available.

From customer service voice bots to television subtitling and critical healthcare transcriptions, Speechmatics offers unparalleled speed and accuracy in 50+ languages.

Try it for free today!
Becoming ISO 42001 compliant shows your customers that you are taking the necessary steps to ensure responsible usage and development of AI. Learn how with the ISO 42001 Compliance Checklist.*

Want to become an AI consultant? Deep View readers get early access to The AI Consultancy Project. Request early access.*

Get high-quality meeting minutes, tasks and decisions for all your online and offline meetings without awkward meeting bots. Save 10 hours every week and try Jamie now for free.*

Still managing your inbox? Superfilter uses AI to turn important emails into tasks, and then offers to handle them for you, just like human assistants do. Click here to skip the waitlist.*

EU triumphs in court fight with Google and Apple, which now owe billions in fines and back taxes (AP).
AI-powered search startup Glean doubles valuation in new funding round led by Altimeter (CNBC).
SpaceX launches billionaire’s private crew on milestone spacewalk mission (Reuters).
Australia moves to ban young people from social media (Semafor).
Sony announces the $700 PS5 Pro (The Verge).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
Update: Countries endorse blueprint for AI in the military

Source: Government of Singapore

As a follow-up to our story yesterday about the South Korean summit on responsible AI in the military, around 60 countries, including the U.S., on Tuesday endorsed a “blueprint for action” that would regulate the responsible use of AI in the military. China did not endorse the blueprint.

The details: Government officials told Reuters the document is much more “action-oriented” than the call to action that resulted from last year’s summit. Still, it is not a legally binding document.

The document lays out detailed recommendations and practical guidelines on the importance of risk assessments and maintaining human control. It will be discussed at the UN General Assembly in October. Singapore’s Minister for Defence Dr. Ng Eng Hen encouraged other countries to “endorse the Blueprint. Only if we work together, can there be assurance that AI will improve and not harm our economies and people,” he said.
“This Blueprint represents a step forward by providing us with action-oriented guidelines to implement responsible AI within our militaries, ensuring appropriate human decision-making at crisis points,” he added.
Cohere is aiming to disrupt the finance industry

Source: Created with AI by The Deep View

The terms “AI” and “disruption” have closely followed each other around ever since ChatGPT launched back in 2022. There has been a pervasive idea that generative AI models like ChatGPT are destined to violently, and quickly, transform existing industries.

That prophecy has yet to really play out at scale. Sure, there’s been some adoption and a bit of displacement. But entire industries haven’t changed overnight, or over the course of the past (nearly) two years.

One industry that is supposedly ripe for AI disruption is finance, where the idea of AI-powered enhancements, from robo-advisors to stock selection, has taken off.

What happened: Enterprise AI firm Cohere on Tuesday announced a partnership with Japanese consulting firm Nomura Research Institute (NRI) to launch the NRI Financial AI Platform.

NRI said the platform will focus on three strategic areas: sales operations support, compliance operations support and advanced/autonomous back-office operations. The platform will integrate Cohere’s Command R+ and Embed models, though NRI said it will evaluate “additional models” to integrate into the platform. Cohere said the platform will launch in the first half of 2025.

Addressing concerns around security and data privacy, NRI said the platform will leverage Oracle Alloy, which allows the company to create dedicated environments for each organization within its data centers. This, according to the company, will ensure the security and privacy of data.

Trustworthy AI: The high-risk nature of this deployment highlights a few issues inherent to large language models (LLMs) that have yet to be resolved, if a solution to them is even possible.

The core of the issue here is trustworthy AI: how will NRI ensure that the model output is transparent, unbiased and explainable, so that users know when to trust it? And while the platform seems focused on aiding human advisors, are certain use cases deemed too risky for the platform? A Cohere spokesperson told me that its Command R model series comes complete with in-line citations so users can verify output, and is made more accurate through “retrieval-augmented generation (RAG) to mitigate hallucinations” (the general pattern is sketched at the end of this story).
“We also think it’s important to keep humans in the loop, and that this platform with NRI will help employees at financial organizations free up time to do their best work by increasing efficiency and productivity with daily tasks,” they said. “The platform will be designed to meet the high standards of a regulated industry like financial services.”

Risk vs. Reward: Despite saying that generative AI is poised to transform multiple areas within the finance sector, Deloitte has acknowledged that genAI models “don’t recognize when they could be wrong,” adding: “perhaps, one day, models will even have to pass a certification test to provide finance-related insights, advice and engagement. Regardless, Finance leaders would do well to remain vigilant in validating and certifying content.”

The root of this is the “black box” problem of LLMs: while scientists generally know how the technology functions, they don’t understand how a specific model works or how a particular output was derived from a given input. This, combined with obscured training data, means explainability is a near impossibility, something that cripples trust.

The black-box nature of LLMs has led some experts to argue that such models should never be used in high-stakes decision-making.

My concern is misplaced overreliance.

It’s not too much of a stretch to envision financial professionals, not properly trained or instructed on AI, assuming a level of reliability that does not exist and therefore not examining output closely enough to catch mistakes.

Sure, there’s room for automation. But in this industry, that automation needs to be trusted, and I see no evidence that it should be.
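For readers wondering what “RAG with in-line citations” looks like mechanically, here is a minimal sketch of the general pattern, not Cohere’s implementation: retrieve the most relevant snippets from a document store, then prompt the model to answer only from those snippets and to cite them. The document store, the keyword-overlap retriever and all identifiers below are hypothetical.

```python
# Minimal sketch of the generic RAG-with-citations pattern -- a toy,
# not Cohere's or NRI's implementation.

# A toy document store; in a real platform this would be the firm's
# internal financial documents (hypothetical IDs and contents).
DOCUMENTS = {
    "fee-schedule-2024": "Advisory fees are 0.25% of assets under management.",
    "kyc-policy": "New accounts require identity verification within 5 business days.",
}

def retrieve(query, k=2):
    """Rank documents by naive keyword overlap with the query.
    Real systems use embedding similarity; this is the simplest stand-in."""
    words = set(query.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query):
    """Prepend retrieved snippets, tagged so the model can cite them in-line."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return (
        "Answer using ONLY the sources below, and cite each claim with its "
        "[source-id]. If the sources don't cover the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What are the advisory fees?"))
```

The citation tags are what make verification possible: a financial professional can trace each claim back to a source document rather than trusting the model outright, which is precisely the trust gap discussed above.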
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI in the military:

Around a third of you said it’s fine but needs supervision; a third said it’s necessary. 22% of you said we need legal limits on the scope of applications, and 10% of you said it should be shut down.

Necessary:

“Of course, human supervision is strongly desired and almost always a good thing, but there are times, like anti-missile defense, where even a couple of seconds’ delay means the difference between a missile falling inside or outside a city.”
Would you be okay if your finance professionals used generative AI?
|
|
|