| | Good morning. Tesla will report earnings after the bell today. | Coming just a few weeks after Tesla’s Robotaxi event failed to impress Wall Street, the focus will likely be on core car-selling fundamentals, rather than AI-related hype. | We’ll break it all down for you tomorrow morning. | — Ian Krietzberg, Editor-in-Chief, The Deep View | In today’s newsletter: | ⛴️ AI for Good: Understanding plankton 🪨 Report: The threat of AI sabotage 🏛️ Anthropic wants a look at your computer 🔌 GE unveils AI for cancer care
|
| |
| AI for Good: Understanding plankton | | Source: NASA’s Earth Observatory |
| Plankton — a diverse array of oceanic organisms ranging from simply small to microscopic — make up a vital element of our global ecosystem. | Highly efficient at photosynthesis, carbon storage, carbon sequestration and oxygen production, the tiny plants and animals are the reason that the oceans have been able to absorb roughly 40% of the carbon dioxide emitted by humans since the Industrial Revolution. | Considering the role of plankton in the global ecosystem, it is vital for scientists to understand more about plankton and their complex, watery home. AI is helping them do it. | Researchers at Woods Hole Oceanographic Institution (WHOI) have been leveraging deep learning to better understand the twilight zone (200 meters beneath the ocean’s surface) that plankton call home. Biologist Heidi Sosik is using a combination of high-speed cameras and artificial intelligence to gather millions of pictures which are then sorted, flagged and classified by the algorithm.
| Why it matters: “This technology gives us many orders of magnitude more information, while doing it faster than any human could,” Sosik has said. “Ultimately, that understanding is going to help humans make better decisions about how to protect those organisms, and how to be better stewards of the natural world.” |
| |
| | It’s time to change the way we build digital products | | Keeping busy is easy. Driving business impact is what matters. | Consistently align your people to the most strategic priorities, discover product opportunities from deep customer insights, and gain total visibility on execution with Airtable ProductCentral. | Learn More |
| |
| Report: The threat of AI sabotage | | Source: Anthropic (note the fine print!). |
| In an attempt to better understand the threat of sabotage through the use of an AI model, a team of researchers at Anthropic prepared and tested a series of related scenarios. They focused on four different types of sabotage: human decision-making sabotage, code sabotage, sandbagging and undermining oversight. | The highlights: In testing undue influence on human decision-making, Anthropic devised a scenario where human participants had to make key (fictional) business decisions with a massive amount of data and little time; the argument was that, in this environment, they would need to use an AI model for help. | The researchers then tasked that model to influence the human to arrive at the wrong answer. And they found that “the humans using the most aggressive model were less likely to make the correct decision.” They did, however, add that it wasn’t “necessarily the case that the model was skillfully manipulating the humans — instead, apathy or excessive trust could have been the reason for mistaken decisions.”
| In the remaining scenarios — sandbagging (when a model hides its true capabilities in pre-deployment tests), deliberate code sabotage and undermining oversight — the researchers found a slight likelihood of successful sabotage. | The major caveat here, of course, is that the models were being specifically trained and tasked to commit these acts of sabotage — meaning that the findings here are that one human actor could task an AI model with committing sabotage against other human actors, and it would have a slim chance of success, depending on how focused the human victim is. | There’s no evidence at the moment that an AI model can itself commit any of these acts. | The genuine result that the team found here is that “increased trust and delegation to models (makes) sabotage easier,” though this holds true of hallucination, bias and other model flaws — for any number of reasons, overreliance on an AI model today isn’t a good thing. |
| |
| | | | The AI consulting market is about to grow by a factor of 8X – from $6.9 billion today to $54.7 billion in 2032. But how does an AI enthusiast become an AI consultant? How well you answer that question makes the difference between just “having AI ideas” and being handsomely compensated for your contribution to an organization’s AI transformation. | Thankfully, you don’t have to go it alone – our friends at Innovating with AI have welcomed 300 new students into The AI Consultancy Project, their new program that trains you to build a business as an AI consultant. Some of the highlights current students are excited about: | The tools and frameworks to find clients and deliver top-notch services A 6-month plan to build a 6-figure AI consulting business Students getting their first AI client in as little as 3 days
| And as a reader of The Deep View, you have a chance to get early access to the next enrollment cycle. | Click here to request early access to The AI Consultancy Project |
| |
| | | | | Tim Cook on why Apple’s huge bets will pay off (Wall Street Journal). Hacker plants false memories in ChatGPT to steal user data in perpetuity (Ars Technica). How AI dealmakers are sidestepping regulators (The Information). Microsoft and OpenAI are giving news outlets $10 million to use AI tools (The Verge). Stratospheric, AI-enabled robotic cameras on balloons could help you get your insurance claim check faster (CNBC).
| If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here. | | | | |
| |
| Anthropic wants a look at your computer | | Source: Anthropic |
| Anthropic is releasing a new evolution of its generative artificial intelligence that will grant its Claude models access to users’ computer screens. Simply called “computer use,” the functionality — currently in public beta — enables Anthropic’s generative AI models to, with permission, “use computers the way people do.” | The details: The system functions by taking static screenshots of a given screen in real-time. | Developers can instruct a model to, for instance, “use data from my computer and online to fill out this form.” Anthropic called the release a “milestone” that would “unlock a huge range of applications that simply aren’t possible for the current generation of AI assistants.”
| Contending that the capability is expected to improve rapidly, Anthropic did acknowledge that “Claude's current ability to use computers is imperfect.” Common actions, such as scrolling, dragging and zooming, “currently present challenges for Claude.” | Security risks: The greater the integration of these AI models — known at the very least to hallucinate — the greater the inherent risks. Anthropic said that this feature makes AI systems more accessible, not more powerful, and so doesn’t carry an enormously new threat. Still, the company did acknowledge risks of spam, fraud and prompt injection, in which a cybercriminal feeds malicious instructions to an AI model, causing it to function against its user’s directions. | Further, Anthropic said that it has put measures in place to monitor certain activities — for instance, the system will “nudge” Claude away from posting on social media. Anthropic confirmed that it will not train its AI models on the data analyzed through computer use.
| However, Anthropic made no mention of inherent security risks and vulnerabilities associated with such an approach; you might recall Microsoft Recall, which promised to take and store screenshots of computer screens to develop an AI-searchable database. This database was extremely vulnerable, leading Microsoft to recall the feature before redeploying it with greater security protections. | It’s not clear what similar risks are posed by Anthropic’s system here — for instance, when computer use is enabled, will Claude automatically access banking information, and perform banking activities, at a user’s request? Will it refuse to do so? If it does, how will it protect sensitive information from data breaches? Is that data stored? | The announcement came alongside the release of an upgraded Claude 3.5 Sonnet and Haiku. | Security risks are real. Hallucination risks are real, and here, they would matter far more than in a closed chatbot interface. | Cybersecurity expert Rachel Tobac said she was “breaking out into a sweat thinking about how cybercriminals could use this tool.” |
| |
| GE unveils AI for cancer care | | Source: National Cancer Institute |
| A lot of the promise for current generative artificial intelligence is in healthcare. The reason there’s excitement here is that generative AI today is really good at processing, analyzing and identifying patterns in massive quantities of data, something that, when combined with human experts, could improve outcomes. | At least, that’s GE’s pitch. | GE HealthCare on Monday announced CareIntellect for Oncology, a generative AI tool designed to help clinicians better track patient progress while simultaneously reducing the potential for burnout. | How it works: The cloud-based application serves first as a data headquarters of sorts; it brings together multi-modal patient data from any number of different systems, enabling clinicians to examine reports and other relevant information all in one place. | The system, according to GE HealthCare, uses generative AI to summarize this information while flagging relevant data, allowing clinicians to better track the progress of a patient’s illness, including deviations from a given treatment plan. This, in turn, could allow for more “proactive interventions.” The system can also match patients to proper clinical trials. GE HealthCare said that the application would focus initially on prostate and breast cancer as it becomes widely available to customers next year.
| Tampa General Hospital and UT Southwestern Medical Center have already begun to integrate the system. | GE HealthCare suggested that such an application could help save doctors hours of administrative time with each new patient, something that would enable them to spend more time on patient care. | GE HealthCare did not make mention of data privacy and security efforts, such as how, or if, it will ensure that such data remains confidential. The company likewise made no mention of the hallucination and biases inherent to generative models, or how/if it is working to mitigate such reliability issues. The company also provided no details regarding the training of the model; we don’t know its size, training data, energy cost or any guardrails to ensure safe utilization. | | Clinician burnout from overwork is real. The quantity, variety and source of relevant medical data is a legitimate problem. And the potential for AI to parse and process everything into one location, while providing critical insights along the way, is absolutely present. | However, there is also a real risk of over-reliance on faulty technology, something that could cause harm (and skill decay). Hallucinations here could be devastating; biases regarding clinical trial recommendations could likewise cause a ton of damage. | It is important, where these applications are in use, for the user to understand the risks, and for the user to not become complacent. Assuming that these architectural flaws aren’t reliably going away anytime soon, it will remain vital for doctors to understand just how much they should trust AI-generated reports and recommendations, and just how much verification these things will require. | | | Which image is real? | | | | | 🤔 Your thought process: | Selected Image 2 (Left): | | Selected Image 1 (Right): | |
| |
| 💭 A poll before you go | Thanks for reading today’s edition of The Deep View! | We’ll see you in the next one. | Here’s your view on letting an AI vote on your behalf: | 45% of you said simply: “no.” Another 25% said that the idea of this is insane. 15% said sure … if it was trustworthy. Only about 5% said they would let an AI vote for them. | To add my predictable two cents into this mix, I would never ever, ever (ever) let an AI vote for me. | No: | | No: | | Would you be down for Anthropic's Computer Use feature? | |
|
|
|