| | Good morning. Yesterday, as part of its ‘12 Days of OpenAI’ special, OpenAI began allowing people to call in to ChatGPT. Just dial 1-800-242-8478, and you can chat with the chatbot (for 15 minutes each month). | Didn’t see that coming. | — Ian Krietzberg, Editor-in-Chief, The Deep View | In today’s newsletter: | ⚕️ AI for Good: Sepsis detection ⚡️ Sam Altman-backed nuclear startup signs major data center energy deal 💰 AI startup SandboxAQ closes $300 million round at $5.6 billion valuation 🏥 Suki partners with Google Cloud to bring AI assistants to more hospitals
|
| |
| AI for Good: Sepsis detection | | Source: Unsplash |
| Sepsis — which occurs when an infection triggers a chain reaction in the body that can lead to organ failure — is easy to miss, since its symptoms are common to a number of other conditions. | What happened: In 2022, researchers at Johns Hopkins University developed a machine learning system designed to aid doctors in the early detection of sepsis. | The details: The system combines a patient’s medical history with their lab results and current symptoms; it warns doctors when a patient appears to be at risk of developing sepsis. | During a study, thousands of clinicians from five hospitals used the system to treat nearly 600,000 patients. The system caught 82% of sepsis cases and was accurate nearly 40% of the time. Previous attempts, according to Johns Hopkins, caught less than half as many cases and were only accurate in 2% to 5% of them.
"It is the first instance where AI is implemented at the bedside, used by thousands of providers, and where we're seeing lives saved," Suchi Saria, lead author of the studies, which evaluated more than a half million patients over two years, said. "This is an extraordinary leap that will save thousands of sepsis patients annually." |
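The Hopkins system (known as TREWS) is a trained model over full patient records, and its internals aren't reproduced here. Purely as an illustration of the early-warning pattern it automates, here is a toy rule-based score built on the standard SIRS screening criteria; the weights and alert threshold are invented for the example:

```python
# Toy illustration only: a rule-based early-warning score using the four
# classic SIRS criteria. The real Hopkins system (TREWS) is a trained ML
# model over full patient histories; weights/threshold here are invented.
def sepsis_risk_score(vitals: dict) -> float:
    """Combine four common indicators into a 0.0-1.0 risk score."""
    score = 0.0
    if vitals.get("heart_rate", 0) > 90:  # tachycardia
        score += 0.25
    temp = vitals.get("temp_c", 37.0)
    if temp > 38.0 or temp < 36.0:  # fever or hypothermia
        score += 0.25
    if vitals.get("resp_rate", 0) > 20:  # tachypnea
        score += 0.25
    wbc = vitals.get("wbc", 7.0)  # white blood cells, thousands/uL
    if wbc > 12.0 or wbc < 4.0:
        score += 0.25
    return score

def should_alert(vitals: dict, threshold: float = 0.5) -> bool:
    """Flag the patient for clinician review when the score crosses a threshold."""
    return sepsis_risk_score(vitals) >= threshold

patient = {"heart_rate": 112, "temp_c": 38.6, "resp_rate": 24, "wbc": 14.2}
print(should_alert(patient))  # True: all four criteria are met
```

The value of the real system lies in learning these thresholds and interactions from data rather than hard-coding them, which is what drove its accuracy well past earlier rule-based attempts.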
| |
| | Keep your SSN out of criminals' hands | | The most likely source of your personal data being littered across the web? Data brokers. | They're using and selling your information—home address, Social Security Number, phone number, and more. | Incogni helps scrub this personal info from the web and gives you peace of mind to keep data brokers at bay. | Protect yourself: | Before the spam madness gets even worse, try Incogni. It takes three minutes to set up. Get your data off 200+ data broker and people-search sites automatically with Incogni.
| Incogni offers a full 30-day money-back guarantee if you're not happy ... but you will be. | Don't wait! Use code DEEPVIEW today to get an exclusive 58% discount |
| |
| Sam Altman-backed nuclear startup signs major data center energy deal | | Source: Oklo |
| Oklo, a nuclear energy startup, said Wednesday that it signed a deal to power Switch, a data center firm specializing in AI and the cloud. | The details: Under the terms of the non-binding arrangement, Oklo will develop nuclear reactors — which it calls Aurora powerhouses — through 2044, with a total capacity of 12 gigawatts. As Oklo pointed out, this marks one of the largest corporate nuclear energy agreements signed to date. | It is not yet clear when the first reactors are slated to come online. Oklo, despite being a publicly traded company, has yet to complete construction of its first power plant or bring in any revenue. Oklo has spent the past year inking deals to power data centers once its systems come online. In line with Big Tech’s increasingly common push toward nuclear power, the announcement specifically calls out the “growing electricity demands” of AI, adding that the deal will position Switch to “handle AI workloads well into the future.”
| The landscape: Though nuclear is becoming an increasingly popular energy option for Big Tech players including Amazon, Google, Meta and Microsoft, it’s not the silver bullet solution it sounds like. A big drawback to nuclear is the amount of time and money it takes to get nuclear power plants operational; this deal, like all the others, is looking at a time horizon of years, not months, which means that the grid will keep on emitting carbon in the meantime. | Oklo is chaired by OpenAI’s Sam Altman, who took the startup public in May through his special purpose acquisition company (SPAC). Altman has said that his impending Age of AI will require an energy “breakthrough.” | Shares of Oklo have spiked some 80% this year. |
| |
| | Here's How to Speed Up Your App by Reducing TTFB | | Time to First Byte (TTFB) is a critical metric for your app’s performance. A slow TTFB frustrates users, hurts your SEO, and clogs up workflows—but you don’t have to settle for sluggish speeds. | In Sentry's latest blog, Lazar Nikolov shares actionable strategies to pinpoint and reduce TTFB issues, from optimizing server response times to tackling bottlenecks. | Here’s what you’ll learn: | The key factors that cause slow TTFB How to monitor and debug TTFB issues effectively Real-world solutions to streamline performance
| With the right tools and techniques, you can turn TTFB from a blocker into an edge. | Read the blog now to learn how to get your app running at full speed. |
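Monitoring comes before optimizing, and TTFB is easy to measure yourself. As a minimal sketch (not taken from Sentry's blog), here is one way to time it with only the Python standard library; a throwaway local server with an artificial delay makes the example self-contained, but you can point `measure_ttfb` at any host:

```python
# Minimal sketch: measure Time to First Byte (TTFB) with only the Python
# standard library. A throwaway local server simulates a slow backend so
# the example is self-contained.
import http.client
import http.server
import threading
import time

def measure_ttfb(host: str, port: int, path: str = "/") -> float:
    """Seconds from issuing the request (including connect) until the
    first response byte, the status line, arrives."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()  # blocks until the status line is read
    ttfb = time.perf_counter() - start
    resp.read()
    conn.close()
    return ttfb

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.2)  # simulate 200 ms of server-side work
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), SlowHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

ttfb = measure_ttfb("127.0.0.1", server.server_address[1])
print(f"TTFB: {ttfb * 1000:.0f} ms")  # roughly the 200 ms artificial delay
server.shutdown()
```

Here the delay sits in the handler itself, which mirrors the most common real-world culprit: slow server-side work before the first byte is written.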
| |
| | | Cohere has quietly partnered with Palantir; its models are already in use by a number of unnamed Palantir customers. It’s the latest in the recent industry push toward AI use in military and defense, with OpenAI signing a deal with Anduril and Anthropic signing a deal with Palantir. Google has changed its terms of service, which now explicitly approve the use of its generative AI products in high-risk domains so long as there remains a human in the loop. This enables human-supervised, automated decision-making in areas such as healthcare.
| | Honda-Nissan in merger talks to create the world’s third-biggest automaker (Semafor). China’s AI elite rethink their Silicon Valley dream jobs (Rest of World). U.S. Supreme Court agrees to hear challenge to TikTok divestment law (CNBC). The edgelord AI that turned a shock meme into millions in crypto (Wired). Nvidia says it can build a cloud business to rival AWS. Is it possible? (The Information).
| If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here. | | |
| |
| AI startup SandboxAQ closes $300 million round at $5.6 billion valuation | | Source: SandboxAQ |
| AI startup SandboxAQ said Wednesday that it closed a $300 million funding round, one that included Meta’s Chief AI Scientist Yann LeCun, former Google chief Eric Schmidt and billionaire Marc Benioff as investors. | The round, according to a statement, valued the company at $5.6 billion. | The company: Spinning out of Google just three years ago, SandboxAQ has been working to bring quantum computing and artificial intelligence together (hence the “A” and the “Q”). | The company is developing something it calls an LQM — a Large Quantitative Model. Unlike the Large Language Models (LLMs) that power generative AI applications such as ChatGPT — which are trained on content scraped from the internet — SandboxAQ’s LQMs are trained on data gathered from physics-based equations and physics-based sensors. The funding will be used to accelerate the development of these LQMs and other AI-related applications.
| Focused on B2B applications, the company draws a significant portion of its revenue from the biopharma, life sciences and chemicals industries, where its tech is used to help accelerate drug discovery, among other things. | SandboxAQ currently has solutions in cybersecurity, aerospace navigation, cardiac diagnostics, material sciences and drug discovery. | “While LLMs are very helpful tools for consumers, it is quantitative AI that will define work in large sectors of the economy including biopharma, chemicals and financial services,” LeCun said in a statement. “SandboxAQ has emerged as a leader in novel applications of AI that solve the most pressing challenges in the world and their technical success is impressive.” |
| |
| Suki partners with Google Cloud to bring AI assistants to more hospitals | | Source: Suki |
| Healthtech firm Suki announced this week that it has partnered with Google Cloud to launch a few updates to its generative AI assistant, which on Wednesday rolled out to a “select group of clinicians at health systems.” | The company: Suki’s main offering is the Suki Assistant, a GenAI-powered system that, before this latest round of updates, was designed to automatically generate clinical documents by ambiently listening in on clinician-patient interactions. | The idea behind Suki is one of administrative burden and burnout; the problem of doctors drowning in paperwork is well documented, and Suki’s pitch is that its assistant will let doctors spend far less time on paperwork, time that can instead be spent caring for patients and for their own well-being. Suki has already secured numerous partnerships, including one, announced in August, with Ascension Saint Thomas, a major hospital in Tennessee.
| The details: Suki said the two updates to its GenAI assistant add patient summarization and Q&A functionality. Clinicians, Suki said, will be able to interact with patient documentation through the Suki Assistant, asking questions such as “What medications is this patient taking for diabetes?” According to Suki, this will better support the medical decision-making process. | The announcement raises quite a few questions. With the new updates, it is not clear how Suki will guarantee reliability in the system’s responses and summarizations, or otherwise present its information in a manner that discourages clinician overreliance. Hallucination — or confabulation — and algorithmic bias remain fundamental flaws of generative AI, creating prominent reliability issues, specifically in high-risk applications. It isn’t clear what mechanisms Suki has employed — obvious information attribution, for example — to address these issues. The scope and scale of the rollout, as well as the specific hospitals involved at this stage, likewise remain unknown.
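To make "information attribution" concrete: the idea is that every answer a clinical Q&A system returns carries a pointer back to the chart entry it came from, so a clinician can verify the claim rather than trust the model outright. This toy sketch is purely illustrative (Suki's implementation is not public), with invented field names and data:

```python
# Toy illustration of information attribution in a clinical Q&A lookup.
# Not Suki's implementation; all field names and data are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttributedAnswer:
    answer: str       # the claim shown to the clinician
    source_id: str    # chart entry the claim came from
    source_text: str  # verbatim excerpt, so the claim can be verified

def answer_medication_question(chart: list, condition: str) -> Optional[AttributedAnswer]:
    """Answer 'what is this patient taking for <condition>?' from
    structured chart data, always attaching the source record."""
    for entry in chart:
        if entry["type"] == "medication" and condition in entry["indications"]:
            return AttributedAnswer(
                answer=f"{entry['name']} ({entry['dose']})",
                source_id=entry["id"],
                source_text=entry["raw"],
            )
    return None  # refuse rather than guess when nothing matches

chart = [
    {"id": "rx-42", "type": "medication", "name": "metformin", "dose": "500 mg",
     "indications": ["diabetes"], "raw": "Metformin 500 mg BID for T2DM."},
]
result = answer_medication_question(chart, "diabetes")
print(result.answer, "->", result.source_id)  # metformin (500 mg) -> rx-42
```

The two design choices that matter here are surfacing the source alongside every answer and returning nothing when no record matches, rather than letting a generative model fill the gap.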
| Suki did not return a request for comment regarding these points. | When I met with Suki several months ago, the company told me that it employs an “extensive clinical evaluation framework to assess the quality of LLM output.” Bel Srikanth, Suki’s VP of product and engineering, told me at the time that, since it is providing patient-specific data, rather than general medical information, it doesn’t have the same hallucinatory risks as other chatbots. | The company said that it stores conversation recordings in a “secure cloud environment” for seven days before deleting them, and that it “uses industry-leading security measures to ensure the authenticity, integrity and privacy of data, both at rest and in transit.” | The company also said that patients have the option to opt out if they choose, though it remains unclear how this option — and the use of Suki, and the description of its underlying technology — is presented to patients. | Suki closed a $70 million funding round in October. | | I remain concerned about overreliance, not necessarily on a flawed system — I have not evaluated Suki — but on a flawed technology. There are mitigation processes that can be deployed to guard against hallucination, but there’s no way to erase it, and if the technology is not properly presented (as a mostly accurate automation tool prone to mistakes) then clinicians won’t understand how much — or how little — they ought to trust it. | Clinicians are, after all, not computer scientists. And healthcare decision-making really shouldn’t be outsourced in a way that could provide seemingly accurate, but untrue, information. Imagine a system that hallucinates in responding to a query about what drugs the patient is currently on … | Another element to this, as one nurse has said, is that, while efficiency sounds good, you don’t want it in a hospital. | “When you’re optimizing for efficiency, you’re getting rid of redundancies,” they said. 
“But when patients’ lives are at stake, you actually want redundancy. You want extra slack in the system. You want multiple sets of eyes on a patient in a hospital.” | There’s a multi-layered cost to efficiency, and we must not lose sight of the balance. | | | Which image is real? | |
| |
| 💭 A poll before you go | Thanks for reading today’s edition of The Deep View! | We’ll see you in the next one. | Here’s your view on AI search: | 22% of you don’t see a need for AI search and 8% wish Google would just turn AI Overviews off. But 20% love AI Overviews, and another 30% don’t use AI Overviews, but do use Perplexity and ChatGPT search. | Turn it off: | “AI overviews should be an opt in feature. It's a genuine waste of resources as it very rarely has helpful info for me (additionally, as someone who works in information science, I would always rather take the extra step and verify info I'm getting is from a reputable source, and not just a Reddit post telling me to eat glue).”
| Something else: | “I don't mind AI Overviews, but do double-check them. Sometimes they're very helpful quick recaps, but occasionally they're 1) not accurate, 2) provide info about a loosely affiliated topic that doesn't match my inquiry, or 3) are based off info from a Reddit user's post/blogs (not necessarily accurate, need additional sources)…”
| Are you gonna call ChatGPT? | |