Platformer - The case for a little AI regulation
Here’s a preview of a piece looking at the intense debate over this week’s AI regulations coming out of the United States and the United Kingdom. To read the full account and to get the complete Platformer experience, consider upgrading your subscription today.

The case for a little AI regulation

Not every safety rule represents "regulatory capture" — and even challengers are asking the government to intervene

Today, as a summit about the future of artificial intelligence plays out in the United Kingdom, let’s talk about the intense debate over whether it is safer to build open-source AI systems or closed ones — and consider the argument that government attempts to regulate them will benefit only the biggest players in the space.

The AI Safety Summit in England’s Bletchley Park marks the second major government action on the subject this week, following President Biden’s executive order on Monday. The UK’s minister of technology, Michelle Donelan, released a policy paper signed by 28 countries affirming the potential of AI to do good while calling for heightened scrutiny of next-generation large language models. Among the signatories was the United States, which also announced plans Thursday to establish a new AI safety institute under the Department of Commerce.

Despite fears that the event would devolve into far-out debates over the potential for AI to pose existential risks to humanity, Bloomberg reports that attendees in closed-door sessions mostly coalesced around addressing nearer-term harms. Here’s Thomas Seal:
A focus on the potential for practical harm characterizes the approach taken by Biden’s executive order, which directs agencies to explore individual risks around weapons development, synthetic media, and algorithmic discrimination, among other harms. And while UK Prime Minister Rishi Sunak has played up the potential for existential risk in his own remarks, so far he has taken a light-touch, business-friendly approach to regulation, the Washington Post reports.

Like the United States, the UK is launching an AI safety institute of its own. “The Institute will carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models,” the British Embassy told me in an email, “including exploring all the risks, from social harms like bias and misinformation, to the most unlikely but extreme risk, such as humanity losing control of AI completely.”

Near the summit’s end, most of the major US AI companies — including OpenAI, Google DeepMind, Anthropic, Amazon, Microsoft, and Meta — signed a non-binding agreement to let governments test their models for safety risks before releasing them publicly.

Writing about the Biden executive order on Monday, I argued that the industry was so far being regulated gently, and on its own terms. I soon learned, however, that many people in the industry — along with some of their peers in academia — do not agree.

II.

The criticism of this week’s regulations goes something like this: AI can be used for good or for ill, but at its core it is a neutral, general-purpose technology. To maximize the good it can do, regulators should work to get frontier technology into more hands. And to address harms, regulators should focus on strengthening defenses in the places where AI could enable attacks, whether legal, physical, or digital.

To use an entirely too glib analogy, imagine that the hammer has just been invented. Critics worry that this will lead to a rash of people smashing each other with hammers, and demand that everyone who wants to buy a hammer first obtain a license from the government. The hammer industry and its allies in universities argue that we are better off letting anyone buy a hammer, while criminalizing assault and using public funds to pay for a police force and prosecutors to monitor hammer abuse. It turns out that distributing hammers widely leads people to build things more quickly, that most people do not smash each other with hammers, and that the system basically holds.

A core objection to the current state of AI regulation is that it sets an arbitrary limit on the development of next-generation LLMs: in this case, models trained with more than 10^26 floating-point operations, or FLOPs. (A back-of-the-envelope sketch at the end of this excerpt shows what kind of training run that threshold actually captures.) Critics suggest that had similarly arbitrary limits been imposed earlier in the history of technology, the world would be impoverished. Here’s Steven Sinofsky, a longtime Microsoft executive and board partner at Andreessen Horowitz, on the executive order: …
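To make the 10^26 figure concrete, here is a minimal back-of-the-envelope sketch. It uses the widely cited approximation that training a dense transformer costs roughly 6 FLOPs per parameter per training token; the model sizes and token counts below are illustrative assumptions, not disclosed figures for any real system.

```python
# Back-of-the-envelope training-compute estimates, using the common
# ~6 * parameters * tokens heuristic (forward + backward pass) for
# dense transformers. All model configurations below are hypothetical.

EO_THRESHOLD_FLOPS = 1e26  # reporting threshold in the executive order


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 * parameters * tokens."""
    return 6 * params * tokens


# (description, parameter count, training tokens) -- illustrative only
runs = [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("70B params, 2T tokens", 70e9, 2e12),
    ("1T params, 20T tokens", 1e12, 20e12),
]

for name, params, tokens in runs:
    flops = training_flops(params, tokens)
    side = "above" if flops > EO_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e26 threshold)")
```

Under this heuristic, the open models in wide circulation today land orders of magnitude below the line; only a genuinely next-generation training run crosses it.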