Platformer - Why I'm having trouble covering AI
Here’s a free edition of Platformer that addresses a subject I’ve wanted to write about for a while: how should journalists cover AI? As AI becomes a bigger and bigger story, I worry that some of the deeper risks posed by the technology are fading into the background.

If you want to get the most out of Platformer, consider upgrading your subscription today to get our scoops in real time. Recently, paid subscribers learned how employees reacted to Elon Musk’s demolition of the old verification program at Twitter. We’d love to share scoops like these with you every week. Subscribe now and we’ll send you the link to join us in our chatty Discord server.
Why I'm having trouble covering AI

If you believe that the most serious risks from AI are real, should you write about anything else?

It’s going to be a big week for announcements related to artificial intelligence. With that in mind, today I want to talk a bit about the challenges I’ve found in covering the rise of generative AI as it works its way into the product roadmaps of every company on my beat. Unlike other technological shifts I’ve covered in the past, this one has some scary (and so far mostly theoretical) risks associated with it. But covering those risks is tricky, and doesn’t always fit into the standard containers for business reporting or analysis. For that reason, I think it’s worth naming some of those challenges — and asking for your thoughts on what makes for good journalism in a world where AI is ascending.

To start with, let’s consider two recent perspectives on the subject from leading thinkers in the field. One is from Geoffrey Hinton, an AI pioneer who made significant strides with neural networks, a key ingredient in the field’s recent improvements. Last week Hinton left his job at Google in part so he could speak out about AI risk, and told the New York Times’ Cade Metz that “a part of him … now regrets his life’s work.”

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said. Among his concerns: a flood of misinformation that makes it impossible to discern what is true; massive job losses through automation; and killer robots.

So that’s one set of possible outcomes. Here’s another, from Jürgen Schmidhuber, who is sometimes called “the father of artificial intelligence.” He argues that AI fears are misplaced, and that whatever bad actors do with AI can likely be countered by good actors using AI. Here’s Josh Taylor in the Guardian:
Whether you find yourself more inclined to believe Hinton or Schmidhuber seems likely to color how you might cover AI as a journalist. If you believe Hinton’s warnings, and we are starting down a path that leads to killer robots or worse, it could make sense to center that risk in all coverage of AI, no matter how seemingly benign the individual announcement. If, on the other hand, you’re more sympathetic to Schmidhuber, and think that the problems created by AI will resolve themselves without causing much damage to society at all, you’d probably spend more time covering AI at the level of products and features and how people are using them in their lives.

The reason I’m having trouble covering AI lately is that there is such high variance in the way the people who have considered the question most deeply think about risk. When the list of possible futures ranges from fully automated luxury communism to a smoking ruin where our civilization used to be, where is the journalist supposed to begin? (The usual answer is to talk to a lot of people. But the relevant people here are saying very different things!)

All of this is on my mind lately for a couple of reasons. One is that I recently spent some time talking with AI safety researchers who I thought made a convincing case that, no matter how much time executives and regulators spend warning us about the risks here, the average person still probably hasn’t grappled with them enough. These folks believe we essentially need to shut down AI development for a long while, invest way more money into safety research, and prevent further commercial development until we’ve developed a strategy to avoid the worst outcomes.

The other reason it’s on my mind is that Google I/O is this week. On Wednesday the company is expected to showcase a wide range of new features drawing on its latest advancements in generative AI, and I’ll be there to cover it for you. (The Wall Street Journal and CNBC appear to have scooped some of the announcements already.)

The Google announcements represent the fun side of AI: the moment when, after years of hype, average people can finally get their hands on new tools to help them with their work and daily lives. Even the most diehard believer in existential risk from AI can’t deny that, at least for the moment, tens of millions of people are finding the tools extremely useful for a broad range of tasks.

One of my biases is that I started writing about tech because I love stuff like this: incremental advances that help me research faster, write better, and even illustrate my newsletter. Even as I’ve increasingly focused my writing on business coverage and tech policy, the instinct to say “hey, look at this cool thing” remains strong within me. And if — please! — Schmidhuber’s benign vision of our AI world comes to pass, I imagine I’ll feel fine about any incremental product coverage I did along the way to point people to useful new tools.

But what if Hinton’s vision is closer to the mark? (And it seems noteworthy that there are more AI researchers in his camp than in Schmidhuber’s.) Will I feel OK about having written a piece in 2022 titled “How DALL-E could power a creative revolution” if that revolution turns out to have been a step on the road to, uh, a worse one?

Thinking through all this, I have in mind the criticism folks like me received in the wake of the 2016 US presidential election.
We spent too much time hyping up tech companies and not enough time considering the second-order consequences of their hyper-growth, the argument went. (It’s truer to say we criticized the wrong things than that we criticized nothing at all, I think, but perhaps that’s splitting hairs.) And while opinions vary on just how big a role platforms played in the election’s outcome, it seems undeniable now that, if we could do it all over again, a lot of us, myself included, would cover tech from 2010 to 2016 differently than we actually did.

The introspection we did after 2016 was easier in one key respect than the question we face now, though. The tech backlash of 2017 was retrospective, rooted in the question of what social networks had done to our society. The AI question, on the other hand, is speculative. What is this thing about to do to us?

I don’t want to set up a false dilemma here. The question is not whether AI coverage should be generally positive or generally negative. There is clearly room for a wide range of opinions. My discomfort, I think, comes from the heavy shadow that all AI coverage has looming in the background — and the way that shadow often goes unacknowledged, including by me. So many of the leading researchers and even AI executives spend a great deal of time warning of potential doom. If you believe that doom is a serious possibility, shouldn’t you mention it all the time? Or, as Max Read has written, does that sort of warning only end up hyping the companies building this technology?

I haven’t come to any solid conclusions here. But today I offer a couple of minor evolutions as my thinking changes. One: I updated Platformer’s About page, a link to which gets emailed to all new subscribers, to add AI as a core coverage interest. On that same page, I also added this paragraph to the section on what I’ve come to believe:
Adding a few lines to an About page isn’t of great use to readers who happen upon the odd story from me here or there. But the nice thing about writing a newsletter is that many of you are dedicated readers! And now, hopefully, you have a more complete understanding of how I’m thinking about a subject I expect to return to often in the coming years.

At the same time, I am going to keep writing about the AI products that platforms release along the way. Understanding how AI will shape the future requires having a good sense of how people are using the technology, and I think that means staying up to date with what platforms are building and releasing into the world. When I write about these tools, though — even the most fantastically useful of them — I’ll strive to maintain the baseline skepticism that I tried to bring to this piece.

I’ll end what has been a long and uncharacteristically meta reflection by saying that the situation I’m describing isn’t unique. Plenty of journalism is rooted in uncertainty about how events will play out, from US politics to climate change. Take your pick of potential catastrophes, and there’s probably a group of journalists figuring out how to capture the full range of perspectives in 1,200 words.

And personally, I started writing a daily newsletter because of the way it freed me from having to write a definitive take in every story. Instead, I could just show up a few times a week, tell you what I learned today, and give you some ways to think about what might happen next. It’s not perfect, but it’s the best approach I’ve come up with so far. If you have other ideas, though, I’m all ears.
Those good tweets

For more good tweets every day, follow Casey’s Instagram stories.

Talk to us

Send us tips, comments, questions, and your thoughts on AI coverage: casey@platformer.news and zoe@platformer.news.

By design, the vast majority of Platformer readers never pay anything for the journalism it provides. But you made it all the way to the end of this week’s edition — maybe not for the first time. Want to support more journalism like what you read today? If so, click here.