The AI industry really should slow down a little
Here’s this week’s free Platformer — a look at the much-discussed letter from technologists arguing that the AI industry needs to slow down. If you want to get the most out of Platformer, consider upgrading your subscription today and get our scoops in real time. This week, paid subscribers learned about the secret list of Twitter VIPs whose accounts have been boosted even as Elon Musk pledges to make the service feel more equitable to all users. We’d love to share scoops like these with you every week. Subscribe now and we’ll send you the link to join us in our chatty Discord server.
This year has given us a bounty of innovations. We could use some time to absorb them.

I.

What a difference four months can make. If you had asked me in November how I thought AI systems were progressing, I might have shrugged. Sure, by then OpenAI had released DALL-E, and I found myself enthralled with the creative possibilities it presented. On the whole, though, after years of watching the big platforms hype up artificial intelligence, few products on the market seemed to live up to the more grandiose visions that have been described for us over the years.

Then OpenAI released ChatGPT, the chatbot that captivated the world with its generative possibilities. Microsoft’s GPT-powered Bing, Anthropic’s Claude, and Google’s Bard followed in quick succession. AI-powered tools are quickly working their way into other Microsoft products, and more are coming to Google’s.

At the same time, as we inch closer to a world of ubiquitous synthetic media, some danger signs are appearing. Over the weekend, an image of Pope Francis that showed him in an exquisite white puffer coat went viral — and I was among those who were fooled into believing it was real. The founder of the open-source intelligence site Bellingcat was banned from Midjourney after using it to create and distribute some eerily plausible images of Donald Trump getting arrested. (The company has since disabled free trials in an effort to reduce the spread of fakes.) Synthetic text is rapidly making its way into the workflows of students, copywriters, and anyone else engaged in knowledge work; this week BuzzFeed became the latest publisher to begin experimenting with AI-written posts.

At the same time, tech platforms are cutting members of their AI ethics teams. A large language model created by Meta leaked and was posted to 4chan, and soon someone figured out how to get it running on a laptop. Elsewhere, OpenAI released plug-ins for GPT-4, allowing the language model to access APIs and interface more directly with the internet, sparking fears that it would create unpredictable new avenues for harm. (I asked OpenAI about that one directly; the company didn’t respond to me.)

It is against the backdrop of this maelstrom that a group of prominent technologists is now asking the makers of these tools to slow down. Here’s Cade Metz and Gregory Schmidt at the New York Times:
If nothing else, the letter strikes me as a milestone in the march of existential AI dread toward mainstream awareness. Critics and academics have been warning about the dangers posed by these technologies for years. But as recently as last fall, few people playing around with DALL-E or Midjourney worried about “an out-of-control race to develop and deploy ever more digital minds.” And yet here we are.

There are some worthwhile critiques of the technologists’ letter. Emily M. Bender, a professor of linguistics at the University of Washington and an AI critic, called it a “hot mess,” arguing in part that doomer-ism like this winds up benefiting AI companies by making them seem much more powerful than they are. (See also Max Read on that subject.) In an embarrassment for a group nominally worried about AI-powered deception, a number of the people initially presented as signatories to the letter turned out not to have signed it. And Forbes noted that the institute that organized the letter campaign is primarily funded by Musk, who has AI ambitions of his own.

There are also arguments that speed should not be our primary concern here. Last month Ezra Klein argued that our real focus should be on these systems’ business models. The fear is that ad-supported AI systems will prove more powerful at manipulating our behavior than anything we are currently contemplating — and that will be dangerous no matter how fast or slow we choose to go here. “Society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions,” Klein wrote.

These are good and necessary criticisms. And yet whatever flaws we might identify in the open letter — I apply a pretty steep discount to anything Musk in particular has to say these days — in the end I’m persuaded of the signatories’ collective argument. The pace of change in AI does feel as if it could soon overtake our collective ability to process it. And the change the signatories are asking for — a brief pause in the development of language models larger than the ones that have already been released — feels like a minor request in the grand scheme of things.

Tech coverage tends to focus on innovation and the immediate disruptions that stem from it. It’s typically less adept at thinking through how new technologies might cause society-level change. And yet the potential for AI to dramatically affect the job market, the information environment, cybersecurity, and geopolitics — to name just four concerns — should give us all reason to think bigger.

II.

Aviv Ovadya, who studies the information environment and whose work I have covered here before, served on a red team for OpenAI prior to the launch of GPT-4. Red-teaming is essentially a role-playing exercise in which participants act as adversaries to a system in order to identify its weak points. The GPT-4 red team discovered that, if left unchecked, the language model would do all sorts of things we wish it wouldn’t, like hire an unwitting TaskRabbit worker to solve a CAPTCHA. OpenAI was then able to fix that and other issues before releasing the model.

In a new piece in Wired, though, Ovadya argues that red-teaming alone isn’t sufficient. It’s not enough to know what material the model spits out, he writes. We also need to know what effect the model’s release might have on society at large. How will it affect schools, or journalism, or military operations?
Ovadya proposes that experts in these fields be brought in prior to a model’s release to help build resilience in public goods and institutions, and to see whether the tool itself might be modified to defend against misuse. He calls this process “violet teaming”:
If adopted by companies like OpenAI and Google, either voluntarily or at the insistence of a new federal agency, violet teaming could better prepare us for how more powerful models will affect the world around us.

At best, though, violet teams would only be part of the regulation we need here. There are so many basic issues we have to work through. Should models as big as GPT-4 be allowed to run on laptops? Should we limit the degree to which these models can access the wider internet, the way OpenAI’s plug-ins now do? Will a current government agency regulate these technologies, or do we need to create a new one? If so, how quickly can we do that?

I don’t think you have to have fallen for AI hype to believe that we will need an answer to these questions — if not now, then soon. It will take time for our sclerotic government to come up with answers. And if the technology continues to advance faster than the government’s ability to understand it, we will likely regret letting it accelerate.

Either way, the next several months will let us observe the real-world effects of GPT-4 and its rivals, and help us understand how and where we should act. But the knowledge that no larger models will be released during that time would, I think, give comfort to those who fear AI could be as harmful as some believe.

If I took one lesson away from covering the backlash to social media, it’s that the speed of the internet often works against us. Lies travel faster than anyone can moderate them; hate speech inspires violence more quickly than tempers can be calmed. Putting brakes on social media posts as they go viral, or annotating them with extra context, has made those networks more resilient to bad actors who would otherwise use them for harm.

I don’t know if AI will ultimately wreak the havoc that some alarmists are now predicting. But I believe those harms are more likely to come to pass if the industry keeps moving at full speed. Slowing down the release of larger language models isn’t a complete answer to the problems ahead. But it could give us a chance to develop one.

Coming up on the podcast tomorrow morning: Kevin and I sit down in person with Google CEO Sundar Pichai to talk about launching Bard, the AI arms race, and how he thinks about balancing AI risk with competitive pressures. And, of course: did he order the code red? If you’re not listening to Hard Fork yet — this is the moment.

Apple | Spotify | Stitcher | Amazon | Google

Governing
Industry
Those good tweets

For more good tweets every day, follow Casey’s Instagram stories.

interviewer: what do u bring to the table
me: potato salad if it’s like a family thing
interviewer: i meant to work
me: [clearing my throat] i would bring regular potatoes. none of that funny business

it’s been completely silent on zoom for the past 20 mins so i just asked the professor what we're supposed to be doing and apparently we were taking a test

Talk to us

Send us tips, comments, questions, and slow AI: casey@platformer.news and zoe@platformer.news.

By design, the vast majority of Platformer readers never pay anything for the journalism it provides. But you made it all the way to the end of this week’s edition — maybe not for the first time. Want to support more journalism like what you read today? If so, click here: