Good morning. I spent my day Wednesday at Axios’ AI summit (ironically held at the Altman Building in New York).

Overall, I was a little disappointed by the softballs that Axios’ reporters served up to folks from Meta and AWS. But Helen Toner (the former OpenAI board member) and Meredith Whittaker (the president of Signal) were the event’s saving grace.

In today’s newsletter:

🧯 AI for Good: Australian scientists are fighting fire with AI
⛑️ Helen Toner’s suggestion for you: ‘Don’t be intimidated by AI’
📽️ Meredith Whittaker: AI tech was born from Big Tech’s ‘surveillance business model’
🏛️ Gladstone AI presents a policy guide for Congress
AI for Good: Australian scientists are fighting fire with AI

Photo by Matt Palmer (Unsplash).
Wildfires have lately been getting worse. Last year, Canadian forest fires burned so fiercely that smoke drifted south to blanket the East Coast in an apocalyptic shroud. I had never witnessed anything like it.

The details: Methods already existed to detect fires from space; the problem was detecting them early enough to act. One of the scientists said that geostationary satellites take images of wildfires every 10 minutes, but “the resolution is too coarse to detect small fires.”

Researchers at the University of South Australia developed an algorithm that allows cube satellites, equipped with AI image-processing tech, to detect fires “500 times faster.” The team hopes to get the system into orbit sometime next year.

Why it matters: “This would allow us to detect fires less than an hour after they occur,” researchers said, meaning that firefighters would potentially be able to use information from these systems to respond to fires before they grow too large.

Helen Toner’s suggestion for you: ‘Don’t be intimidated by AI’

Photo by Ian Krietzberg (The Deep View).
The last question Helen Toner was asked was the one I found most interesting: “What is our role as citizens? What can people who care about this … do?”

The nuance: Her biggest suggestion, though, was a little different: don’t “be intimidated by this technology.”

“People hear AI and they think it’s way too complicated,” she said, adding that though there is a bit of complicated math involved, current systems are pretty straightforward (it’s just computers making predictions based on data).
Toner went on to list a series of questions ordinary citizens should absolutely be asking themselves about AI:

How is this affecting my life?
How do I want it to affect my life?
What would I hope for my computer to be able to help me with?
How is it harming me or taking away things that I enjoy?
“The core underlying thing is not to think of this as a distant, mysterious, magical technology that you could never have a view on,” Toner said.

Do you have any answers to Toner's questions on AI? Leave a comment and let us know!

Together with Vanta

When it comes to ensuring your company has top-notch security practices, things can get complicated, fast.

Vanta automates compliance for SOC 2, ISO 27001, ISO 42001, NIST AI, and more, saving you time and money while helping you build trust.

Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing Trust Center, all powered by Vanta AI.

Over 7,000 global companies use Vanta to manage risk and prove security in real time.

The Deep View readers can get $1,000 off Vanta here.

Meredith Whittaker: AI tech is a result of Big Tech’s ‘surveillance business model’

Photo by Matthew Henry (Unsplash).
AI, as we’ve discussed numerous times, isn’t a new technology. But it has had a powerful resurgence over the past few years, and this current obsession, according to Meredith Whittaker, is the result of surveillance business empires that have spent the past 20 years entrenching themselves in society.

Key points: Whittaker said that at the core of today’s AI are the Big Tech giants, which make up the bulk of the largest companies in the world.

The landscape: That we live in a world of surveillance capitalism is at this point clear. If I mention a product in conversation, I’ll see an ad for that product on Instagram a few hours later. We are tracked closely by websites, apps, devices (even our cars) and government agencies. The resulting data has become the big commodity of today’s digital era. For a while, companies relied on it to conduct targeted advertising. Now, terms of service are changing, and they want to use it to train AI models, too.

What it all means: Whittaker noted that this reality doesn’t mean AI is not useful.

But it does mean that the business of AI favors the “Big Tech monopolies that … do have that infrastructure, that data. It is not so much a product of scientific progress, it is a product of a recognition that you can do new things with old algorithms when you have compute and data.”
If you’ve ever heard the term ‘regulatory capture,’ this is what it refers to. And that entrenched advantage is what opponents of regulation designed to protect the incumbents are up against.
💰 AI Jobs Board:

AI Inference Engineer: Tether · United Kingdom · Remote · Full-time · (Apply here)
AI Engineer LLM: Tether · United Kingdom · Remote · Full-time · (Apply here)
ML Researcher: DeepRec.ai · United States · Remote · Full-time · (Apply here)
📊 Funding & New Arrivals:

Modern CMS company Storyblok announced the close of an $80 million Series C round.
AI-powered digital physical therapy startup Sword Health raised $30 million and let employees sell $100 million in equity, bringing its valuation to $3 billion.
Market research company GetWhy raised $34.5 million in a Series A round.
🌎 The Broad View:

Top news app in the U.S. has roots in China and publishes ‘fiction’ with the help of AI (Reuters).
European privacy advocacy group files 11 complaints against Meta over its AI-training policies (NOYB).
Elon Musk accused of selling $7.5 billion of Tesla stock before releasing disappointing sales data (Fortune).
SpaceX conducts fourth Starship flight (Watch).
US regulators to open antitrust inquiries of Microsoft, OpenAI and Nvidia (Reuters).
*Indicates a sponsored link
Together with Brilliant

Unlock your AI potential

We talk a lot about two things here: Large Language Models (LLMs) and the steadily growing adoption of AI technology across global industries and businesses.

Understanding LLMs and the concepts behind them, however, is often a challenge. That’s where Brilliant comes in.

Offering bite-sized, personalized courses in everything from math to coding to LLMs, Brilliant lets you dive into the world of AI (at your own pace) and develop real, actionable knowledge in each of these critical areas.

It’s fun, it’s interactive, and, most importantly, it’s easily accessible.

With Brilliant, you won’t get left behind by the AI boom.

Join 10 million other proactive learners around the world and start your 30-day free trial today. Plus, readers of The Deep View get a special 20% off a premium annual subscription.
Gladstone AI presents a policy guide for Congress

Created with AI by The Deep View.
In March, Gladstone AI published a report that detailed the myriad risks posed by AI. The report was the result of more than a year of research, during which its authors spoke to hundreds of executives and researchers at all the major AI labs.

Yesterday, Gladstone published the first installment of a series of briefings, distilled from the initial report, designed to serve as national policy guides for lawmakers.

The details: The report pushes for what it calls a “safety-forward” deployment path.

Right now, AI models are first trained and then deployed. Often, the report says, developers don’t really find out about a model’s capabilities until after it gets released (Google’s AI Overviews is a good example of this). This method of “public experimentation,” Gladstone says, is “backwards. Instead, developers of the most powerful AI systems should build a clear and compelling safety case before development occurs.”
In order to implement this, Gladstone says we need:

Clarity on requirements through a licensing framework;
Capacity to respond to the landscape with a dedicated regulatory body;
Consequences for companies that do not meet obligations.
Gladstone also called for strong whistleblower protections, as well as Congressional hearings to explore licensing and liability regimes.

Zoom out: Regulatory efforts have been playing catch-up to AI innovation from day one. Thus far, the EU’s AI Act is the only major piece of AI legislation out there, and it only just passed. The U.S. currently has no federal legislation specific to AI, and it doesn’t seem close to getting there (Big Tech lobbying efforts remain strong).

At the same time, California is close to passing SB 1047, a bill that would broadly require organizations developing models above a certain compute threshold to pass safety tests before deployment. It would also codify certain liabilities for developers of these more powerful systems.
My thoughts: At the core of these conversations is the impression that over-regulation will “stifle innovation.” But that’s not really the case. As several researchers have told me, regulation will create demand for a responsible AI ecosystem.

The right regulation will not stifle innovation; on the contrary, it will fuel the right kind of innovation, resulting in tech that’s hopefully more helpful than harmful. And in any case, looking at the under-regulated environment of social media, I don’t know that over-regulation is such a bad thing. We tried it the other way for a long time.

It hasn’t worked.
Which image is real?

[Image 1] [Image 2]
MindEcho: A Google Chrome browser extension to search saved bookmarks and web content.
LaterOn: A tool to aggregate newsletters into one summarized email.
Durable: AI-powered website creator for small businesses.
Have cool resources or tools to share? Submit a tool or reach us by replying to this email (or DM us on Twitter).

*Indicates a sponsored link
SPONSOR THIS NEWSLETTER

The Deep View is currently one of the world’s fastest-growing newsletters, adding thousands of AI enthusiasts a week to our incredible family of over 200,000! Our readers work at top companies like Apple, Meta, OpenAI, Google, Microsoft and many more.

If you want to share your company or product with fellow AI enthusiasts before we’re fully booked, reserve an ad slot here.
One last thing 👇

Abeba Birhane (@Abebab): “so apt” (Jun 4, 2024)
That's a wrap for now! We hope you enjoyed today’s newsletter :)

What did you think of today's email?

We appreciate your continued support! We'll catch you in the next edition 👋

-Ian Krietzberg, Editor-in-Chief, The Deep View