Thursday
Good morning. I spoke with cybersecurity experts at HiddenLayer about the security vulnerabilities inherent to generative artificial intelligence. Read on for the full story.

— Ian Krietzberg, Editor-in-Chief, The Deep View
AI for Good: Tracking water ecosystems over time

Source: UNEP
One of the United Nations’ sustainable development goals focuses on the protection of water-related ecosystems. That protective effort begins, unsurprisingly, with data and trends.

The details: In 2021, the UN launched an AI-powered data platform called the Freshwater Ecosystems Explorer.

Why it matters: Freshwater is a vital, multi-faceted resource for humanity, but it faces a number of challenges and threats. The platform also allows water managers and scientists to better understand how to care for these ecosystems.

“For countries to develop sustainably we must restore and protect water-related ecosystems,” the UN said.
Protect yourself from data breaches

Identifying threats before they happen online is more important now than ever. Whether you’re looking to browse ad-free, protect yourself from viruses, or surf the web without being tracked, Surfshark has what you need.

Surfshark Alert provides real-time credit and ID alerts, instant email data breach notifications, and regular personal data security reports to keep you safe from threats before they happen. Plus, they are offering 80% off when you sign up now.

Get Surfshark and protect yourself!
Study: GenAI tutors harm learning

Source: Unsplash
Shortly after its launch in 2022, ChatGPT began invading schools, serving, at least initially, as a convenient essay writer for students who were uninterested in writing one themselves. Since then, though, ChatGPT and similar tools have been positioned as aids that can help students learn (GPT-4o was specifically pitched as a potential math tutor at its launch).

A recent study, however, found that AI tools don’t actually help students learn.

The details: Researchers at the University of Pennsylvania sought to understand how automation impacts human learning, given that the tech isn’t yet reliable and so requires vigilance and technical understanding from human users.

The study, which followed around a thousand high school students, found that, though the use of GenAI in school dramatically improved performance while the tools were in hand, students used them like a “crutch.” On an exam in which the tools were taken away, the students who had had access to GenAI tutors performed 17% worse than their non-AI-assisted peers.

“These results suggest that while access to generative AI can improve performance, it can substantially inhibit learning,” the researchers said.

The context: Despite this and other research pointing out the pitfalls of uncritically leveraging AI in education, a majority of students, parents and teachers hold a favorable view of the tech.
Nagish, a startup using AI to help deaf people communicate, announced $16 million in funding. Medal, known for its desktop AI assistant, announced a further $13 million in funding.
Elon Musk delays Tesla robotaxi unveil (Reuters). Britain’s Competition and Markets Authority has opened an initial investigation into Microsoft’s hiring of Inflection AI staff (CNBC). It’s never been easier for the cops to break into your phone (The Verge). A space startup’s moon mining plans get boost with NASA grant (The Information). Regrow hair in as few as 3-6 months with Hims’ range of treatments. Restore your hairline with Hims today.*
Morgan Stanley calls Apple a ‘top pick’ due to AI push

Source: Unsplash
Apple shares pushed to a record high on Monday, bolstered by a note in which Morgan Stanley analysts named the company a “top pick” stock. The reason for their optimism? Apple Intelligence.

The details: Morgan Stanley analyst Erik Woodring boosted his price target on the stock to $273 per share, saying that Apple Intelligence is a “clear catalyst” for a massive upgrade cycle.

He expects Apple to ship nearly 500 million iPhones over the next two years, a prediction based on his belief that Apple Intelligence will “deliver much improved, and unique-to-the-Apple-ecosystem utility value,” forcing device upgrades across Apple’s installed base.

“We believe that there is a record level of pent-up demand entering the iPhone 16 cycle later this year,” Woodring wrote. “Coming out of WWDC — where Apple debuted Apple Intelligence — we have even greater conviction that FY25 could be the start of a multi-year device refresh cycle.”
Apple unveiled its multi-pronged push into generative AI in June, highlighting the many ways it will incorporate AI into its iOS architecture, including generative writing assistants and coming improvements to Siri and voice memos, some of which will be powered by what Apple calls a secure cloud. But a lot of questions about user choice, data privacy and security still remain.

Prepare for high-paying AI-enabled roles

Ready to transition to top AI/ML roles or get AI-enabled in your domain? Interview Kickstart’s courses help you build practical AI/ML skills, taught live by FAANG+ experts and industry leaders.

✅ Hands-on capstone projects and case studies
✅ Taught live by FAANG+ AI expert instructors
✅ Tailored for engineers and managers
✅ 1:1 career support

Join the Free Webinar
Cybersecurity experts say artificial intelligence is an ‘incredibly vulnerable’ technology

Source: Unsplash
One of the most significant challenges to enterprise adoption of generative artificial intelligence has to do with security. A recent study conducted by Okta found that 74% of C-suite executives surveyed are concerned about data privacy and 71% are concerned about security risks. But these concerns aren’t enough to slow down adoption; a recent McKinsey report found that 65% of organizations surveyed are regularly using generative AI, a significant increase over last year’s numbers.

But this growing adoption of a tech that is not secure by design has already led to some complications. A report by cybersecurity company HiddenLayer found that 77% of companies have reported breaches of their AI systems in the past year. The rest were “uncertain whether their AI models had seen an attack.”
“Artificial intelligence is, by a wide margin, the most vulnerable technology ever to be deployed in production systems,” HiddenLayer CEO Chris Sestito said in the report. “It’s vulnerable at a code level, during training and development, post-deployment, over networks, via generative outputs and more.”

I sat down with Sestito and HiddenLayer’s chief security officer, Malcolm Harkins, to discuss the vulnerabilities inherent to GenAI and the steps companies should be taking to secure it.

Why AI is so vulnerable: Sestito said that the security side of GenAI isn’t too different from that of any other technology; nothing was really secure in the beginning. This is the beginning for AI. The problem is the speed at which it’s moving.

“The difference with AI is we are fast-forwarding like crazy. So we are just deploying faster, we’re adopting faster, we’re taking way more risks with it,” Sestito said. “And so it’s not actually inherently more or less vulnerable than other technologies are.”

But a big problem sweeping the enterprise at the moment, according to Harkins, is that security solutions for other technologies do exist (firewalls, endpoint detection, network detection, etc.), which can create a false sense of security, since those solutions “do not directly protect AI.”
This, they said, is why AI should be approached like every other technology, where third-party security solutions are more than a norm; they are a necessity.

AI isn’t, and shouldn’t be, built with security in mind: Historically, there has been a separation between developers and cybersecurity defenders. Sestito thinks that separation ought to remain in place. Developers could potentially find ways to make their models more robust and secure, but he said the efficacy of those models would likely drop and the cost would rise.

“I believe that separation of duties is really important, in general, because there’s going to be competing priorities there,” Sestito said. “But also security people are a lot better at security and data scientists are a lot better at data science. We should be kind of allowing that to happen.”
By applying security layers on top of an existing model architecture, the efficacy of that model isn’t compromised, and neither is its security.

Regulation and safety: Just because these AI systems are known to be ‘black boxes,’ according to Sestito, doesn’t mean you can’t enact certain measures to secure them (this could be as simple as requiring model interaction logs).

“We just think you should hold AI to the same standards you hold other software. No more, no less,” he said. “Don’t go overboard and slow things down. Don’t underdo it. Most people treat AI like they get intimidated. They say ‘oh, it’s a big black box. You can’t do anything.’ And that’s really not true.”

I think the important thing to note here is the difference between GenAI applications for the enterprise and for consumers. A lot of the AI tools we see today are targeted at consumers (ChatGPT, image generators, audio generators and the like). These consumer-facing generation tools pose a host of risks, not just in security, but in instigating harm against other people.

I’ve spoken to cybersecurity researchers who have said that these models were not built with security in mind, and so are ripe for abuse. There might be something to the idea of separating consumer-facing products from enterprise products: the consumer products designed to be safer, more secure and perhaps less efficacious, and the enterprise products shipping without security baked in, under the expectation that third parties will apply any necessary layers.

Whatever the approach, it’s yet another complex arena that needs to be secured at a variety of levels, in addition to every other piece of technology at our disposal today.

Which image is real?
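The simplest measure Sestito mentions, requiring model interaction logs, can be sketched as a thin wrapper around any model call. This is a hypothetical illustration, not HiddenLayer’s product; the names (`LoggedModel`, `export_log`) are invented for the example, and `model` stands in for any prompt-to-response function:

```python
import json
import time


class LoggedModel:
    """Wrap any callable model and record every interaction.

    Hypothetical sketch: `model` is any function mapping a prompt
    string to a response string. The log lives outside the model,
    so auditing it requires no access to the "black box" itself.
    """

    def __init__(self, model):
        self.model = model
        self.log = []

    def __call__(self, prompt):
        response = self.model(prompt)
        # Record timestamp, input and output for later audit.
        self.log.append({
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
        })
        return response

    def export_log(self):
        # Serialized record a security team (or third-party tool)
        # can review for prompt-injection or abuse patterns.
        return json.dumps(self.log, indent=2)


# Usage: wrap a stand-in "model" and audit its interactions.
echo = LoggedModel(lambda p: p.upper())
echo("hello")
print(len(echo.log))  # 1
```

The point of the design is the one Sestito and Harkins make: the security layer sits on top of the existing model, so the model’s behavior is untouched while every interaction becomes inspectable.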
A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Your view on exciting AI healthcare applications:

40% of the healthcare workers among you are excited about the application of AI for health monitoring; 30% are excited about the advent of automated note-taking (so long as it can be reliable); the rest chose something else.

Something else: “Note-taking with summarisation already exists (Nabla Copilot). Useless, too slow and hallucinate too often. We want only medical record summarization … Give us tools to manipulate, resume, collect, file, extract, fill the forms, automate whatever is possible. Don’t try to copy a doctor’s brain, but be the doctor’s arms, eyes and ears. Give me the right data at the right time and we will do the medical part of the job.”
What do you think about the use of AI as a crutch in education?

*Hims disclaimer: Based on studies of topical and oral minoxidil. Hair Hybrids are compounded products and have not been approved by the FDA. The FDA does not verify the safety or effectiveness of compounded drugs. Individual results may vary.