Good morning, and happy Friday … unless, that is, you’re a quantum computing company.

A range of quantum stocks took a beating Thursday after Nvidia said that it is opening its own quantum research lab, despite previous skepticism from Jensen Huang that usable quantum computers will be here anytime soon.

In totally related news, the Celtics just got sold for $6.1 billion, a record.

Big numbers, but all I care about is whether the Knicks can actually beat them next time.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🧠 AI for Good: Traumatic brain injuries
💰 OpenAI secures major partnership with UK-based bank
💻 Anthropic chases OpenAI in competing with Google, Perplexity
🏛️ A crisis of hallucination: ChatGPT tells a father that he killed two of his sons
AI for Good: Traumatic brain injuries

Source: Unsplash
Traumatic brain injury (TBI) is often described as “the silent epidemic” for a simple confluence of reasons: a steady lack of awareness of and research into it, its widespread reach — impacting nearly 70 million people each year — and its damaging, sometimes lethal, impact on its victims.

The causes of TBI are varied: car accidents, falls or, generally, blows to the head. The resulting levels of TBI are varied as well, ranging from mild concussions to severe injuries that can lead to permanent cognitive damage.

The details: In the midst of this crisis, one that kills nearly 70,000 Americans every year, researchers are increasingly studying the diagnostic impact of specifically trained and tuned machine learning algorithms. And though complicated, the integration is promising.

The predominant means of diagnosing TBI today involves the manual review of CT brain scans, a process that is necessarily time-consuming. A recent systematic review of algorithmic approaches to TBI diagnosis found that “each algorithm has a strong ability to automatically identify and quantify important CT findings caused by TBI.”
Why it matters: The implication is that a combination of different machine learning algorithms could work as a “good supporting tool” to reduce the workload of radiologists and clinicians while boosting the odds of early detection.

Still, it’s early days here. The algorithms — across both machine learning and deep learning — have certain limitations, namely algorithmic bias resulting from data sets that don’t capture the full range of TBI presentations. Much more research needs to be done before clinicians see the introduction of a reliable, usable tool.
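To make the approach concrete, here is a minimal, purely illustrative sketch of the kind of supervised classifier the review describes — a small convolutional network that flags CT slices with possible TBI findings. The architecture, input size and labels are hypothetical stand-ins, not the models evaluated in the cited study:

```python
import torch
import torch.nn as nn

class TBISliceClassifier(nn.Module):
    """Toy binary classifier over single-channel CT slices (1 x 128 x 128)."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64 -> 32
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling
            nn.Flatten(),
            nn.Linear(32, 2),  # logits: [no finding, TBI-related finding]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = TBISliceClassifier()
fake_batch = torch.randn(4, 1, 128, 128)  # four synthetic CT slices
probs = model(fake_batch).softmax(dim=-1)
print(probs)  # per-slice probability of a TBI-related finding
```

Notably, the review’s headline limitation bites exactly here: a classifier like this is only as reliable as the range of scans it was trained on, which is where the data-set bias concern comes from.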
Fyxer AI: Gain 1 hour every day with an AI executive assistant.

Meet Fyxer, your AI Executive Assistant. Fyxer AI will get you back one hour, every day. Begin your day with emails neatly organized, replies crafted to match your tone, and crisp notes from every meeting.

Email Organization: Fyxer prioritizes important emails, ensuring high-value messages are addressed quickly while filtering out the spam.

Automated Email Drafting: Fyxer drafts personalized email responses in your tone of voice. It’s so accurate that 63% of emails are sent without edits.

Meeting Notes: Fyxer listens to your video calls and sends a meeting summary with clear next steps.
Fyxer AI is even adaptable to teams, learning from team members’ communication styles and enhancing productivity across the board.

Setting up Fyxer AI is easy — it takes just 30 seconds to get started with Gmail or Outlook, no learning curve or training needed.

There’s no credit card required for the 7-day free trial, making it a risk-free productivity booster. Start your free trial now.
OpenAI secures major partnership with UK-based bank

Source: Unsplash
OpenAI’s list of enterprise partners expanded Thursday with a partnership with NatWest, the first collaboration of its kind with a U.K.-headquartered bank, according to a statement.

The details: The focus of the partnership is on the consumer end, with NatWest citing the development of digital assistants to enable “bank-wide simplification.” For NatWest, this represents a deeper push into chatbots; the bank has already deployed a customer-facing chatbot called Cora and an internal chatbot called AskArchie.

The bank will look for new ways to enable these assistants, with eyes on using chatbots as a means of reporting fraud, among other things. NatWest also wants to use chatbots to help customers better understand their specific financial situations.

NatWest said it is exploring 275 AI projects, 25 of which are actively in use today. Part of this involves analytical modeling, though it’s unclear where specifically OpenAI’s tech will be put to use or how it will be leveraged. The financial terms of the arrangement are also unclear, as is any data-sharing agreement between the two organizations.
NatWest’s code of conduct highlights commitments to transparency, explainability and data privacy — in this instance, dealing with highly sensitive financial data, it’s unclear how NatWest and OpenAI will ensure that the data is protected.

The landscape: OpenAI has been inking similar deals with firms across the U.S., recently securing a partnership with BNY, America’s oldest bank. OpenAI has also partnered with PwC. The terms of all of these deals — covering economics and data-sharing principles — remain unknown.
The copyright battle: In the midst of Meta’s steadily intensifying copyright infringement lawsuit, The Atlantic published a searchable dataset of the millions of books Meta pirated to train its GenAI systems. At around the same time, a former Meta employee-turned-whistleblower published a book detailing the inner workings of the company, a book that shot to the top of best-seller lists after Meta leveraged legal methods to prevent her from promoting it. I wonder if Meta’s team pirated and trained on that one, too?

Voice Mode: OpenAI released new text-to-speech models Thursday that are designed to be highly customizable. The company also released an upgraded version of its Whisper transcription model — this one, according to OpenAI, doesn’t hallucinate nearly as much as the previous model did.
Low-cost drone add-ons from China let anyone with a credit card turn toys into weapons of war (Wired).
Apple streaming losses top $1 billion per year (The Information).
The end of the EPA’s fight to protect overpolluted communities (Grist).
Apple shuffles AI executives in bid to turn around Siri (Bloomberg).
Microsoft chose not to exercise $12 billion CoreWeave option (Semafor).
At Athyna, we connect you with the innovators and changemakers who drive real growth — because exceptional people build exceptional companies.

From engineering and product to design and operations, we’ve got you covered.

Get AI-matched with vetted talent from companies like Google, AWS, Uber, Accenture, and more.

Find out more
Anthropic chases OpenAI in competing with Google, Perplexity

Source: Anthropic
The news: Anthropic on Thursday finally brought web search to Claude.

The startup launched a preview version of the real-time search integration for paid users in the U.S., saying that it will soon roll out both globally and to users on Anthropic’s free tier.

When the capability is enabled, Claude will tap into the web “when applicable,” according to Anthropic, providing “direct citations so you can easily fact check sources.” This, according to Anthropic, boosts the chatbot’s accuracy on queries and topics that benefit from real-time data.
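Anthropic hasn’t published the mechanics of that “when applicable” routing, but the general shape of search-augmented tool use is well established. The sketch below is a hypothetical illustration of that loop — the names (call_model, run_web_search) and the decision format are stand-ins, not Anthropic’s actual API:

```python
def call_model(messages: list[dict]) -> dict:
    """Stand-in for a chat-model call that may request a search tool."""
    # A real implementation would hit a model endpoint; this hard-codes a
    # search request so the loop below runs end to end.
    return {"tool": "web_search", "query": messages[-1]["content"]}

def run_web_search(query: str) -> list[dict]:
    """Stand-in for a search backend that returns results with source URLs."""
    return [{"url": "https://example.com",
             "snippet": f"Placeholder snippet about: {query}"}]

def answer(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    decision = call_model(messages)
    if decision.get("tool") == "web_search":  # the model opted to search
        results = run_web_search(decision["query"])
        # Results are fed back so the final answer can cite its sources,
        # mirroring the "direct citations" behavior Anthropic describes.
        sources = "; ".join(r["url"] for r in results)
        return f"{results[0]['snippet']} [sources: {sources}]"
    return decision.get("text", "")

print(answer("Who won the game last night?"))
```

The notable design choice is that the model itself decides when a query needs fresh data; everything else is a retrieval loop with the results appended to the model’s context.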
The landscape: Anthropic is a little late to the game on this one. Integrating a search function into a chatbot interface has become increasingly popular, beginning with Perplexity, the AI search startup, and diffusing out to the heavy-hitters.

OpenAI, for instance, incorporated search into ChatGPT months ago; Google said recently that it is doubling down on this format — conversational chatbots with integrated, and attributed, search results — through an experimental version of something called AI Mode.

Even as these alternative new search options steadily gain traction, Google’s dominance — AI Mode or not — is hard to shake. The search engine has held roughly 90% of the global search market for years, and that has yet to dramatically shift.

One more thing: The combination of chain-of-thought inference, hallucinations and web search seems likely to dramatically increase Claude’s energy intensity and associated carbon emissions, especially considering that many of these queries can be quickly, cheaply and reliably answered on a normal search engine, with its 10 blue links.
A crisis of hallucination: ChatGPT tells a father that he killed two of his sons

Source: Unsplash
When Arve Hjalmar Holmen searched his name on ChatGPT, he was just curious.

But the chatbot’s output — a blend of reality and fiction that presented Arve as a man who had been sentenced to 21 years in prison for the murder of two of his three sons — left him horrified.

The chatbot got some things right, including his hometown in Norway and the number and gender of his children. But the major component of that output — that he is a convicted murderer — was completely false.

What happened: The non-profit organization noyb has since filed a case against OpenAI with Norway’s data protection authority, seeking to rectify the issue. This case marks noyb’s second complaint against OpenAI over the same issue: hallucination.

The complaint makes three requests of the Norwegian data privacy watchdog: to require OpenAI to delete the output and, further, to fine-tune the model so it doesn’t happen again; to restrict OpenAI from processing Holmen’s personal data until an investigation is complete; and to impose a fine against OpenAI for what noyb is calling a violation of Europe’s GDPR.

noyb acknowledged that, now that ChatGPT doubles as a search engine, its original response to the simple question of ‘who is Arve Hjalmar Holmen?’ has changed; indeed, upon trying it out myself, ChatGPT, through web search, says that Holmen is a Norwegian citizen who has filed a complaint against OpenAI “after the chatbot mistakenly identified him as a criminal who had murdered two of his children.”
But the fact that the output is no longer showing up isn’t a cause for relief for Holmen.

“The incorrect data may still remain part of the LLM’s dataset,” according to noyb. “By default, ChatGPT feeds user data back into the system for training purposes. This means there is no way for the individual to be absolutely sure that this output can be completely erased.”

OpenAI did not return a request for comment.

“The GDPR is clear. Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth,” Joakim Söderberg, a data protection lawyer at noyb, said. “Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

Though this one comes in a remarkably specific form, hallucination — a term I like less and less each day — remains a paramount inhibitor to adoption.

We’ve all heard about the lawyers who have been sanctioned over the fake case law their generative AI systems produced. And we’ve all heard about the continual mistakes in Google’s AI Overviews.

Earlier this year, Apple had to roll back its AI-powered news summary tool in the U.K. after it was found relaying false information that was presented as fact and attributed to news organizations. Many of the lawsuits against the major developers additionally go beyond copyright infringement to address issues of reputational risk around false, misleading and partial reproductions of articles, or fictitious inventions of information connected to a prominent organization’s name.

Hallucination is the trillion-dollar question in AI.

If someone could come up with a system that doesn’t make these kinds of mistakes, or makes them in a predictable way, they would be the envy of the entire industry.

Perhaps that is what engineers actually mean when they talk about the pursuit of general intelligence. But there is no evidence at this stage that language models alone can get to a place where this is no longer a problem.

Arve Hjalmar Holmen is not the first — and he won’t be the last — recipient of false output that is presented and packaged as truth.

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on Tesla’s Texas Taxis (hypothetically):

42% of you would be happy to test out a Tesla Texas robotaxi, if it ever appeared.
35% won’t go near one.
The rest: not so sure.

Claude Search: A welcome addition, or too little, too late?

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
|
|
|