Platformer - The case for a little AI regulation
Here’s a preview of a piece looking at the intense debate over this week’s AI regulations coming out of the United States and the United Kingdom. To read the full account and to get the complete Platformer experience, consider upgrading your subscription today.

The case for a little AI regulation

Not every safety rule represents "regulatory capture" — and even challengers are asking the government to intervene

Today, as a summit about the future of artificial intelligence plays out in the United Kingdom, let’s talk about the intense debate over whether it is safer to build open-source AI systems or closed ones — and consider the argument that government attempts to regulate them will benefit only the biggest players in the space.

The AI Safety Summit at England’s Bletchley Park marks the second major government action on the subject this week, following President Biden’s executive order on Monday. The UK’s minister of technology, Michelle Donelan, released a policy paper signed by 28 countries affirming the potential of AI to do good while calling for heightened scrutiny of next-generation large language models. Among the signatories was the United States, which also announced plans Thursday to establish a new AI safety institute under the Department of Commerce.

Despite fears that the event would devolve into far-out debates over the potential for AI to create existential risks to humanity, Bloomberg reports that attendees in closed-door sessions mostly coalesced around the idea of addressing nearer-term harms. Here’s Thomas Seal:
A focus on the potential for practical harm characterizes the approach taken by Biden’s executive order, which directs agencies to explore individual risks around weapon development, synthetic media, and algorithmic discrimination, among other harms. And while UK Prime Minister Rishi Sunak has played up the potential for existential risk in his own remarks, so far he has taken a light-touch, business-friendly approach to regulation, reports the Washington Post.

Like the United States, the UK is launching an AI safety institute of its own. “The Institute will carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models,” the British Embassy told me in an email, “including exploring all the risks, from social harms like bias and misinformation, to the most unlikely but extreme risk, such as humanity losing control of AI completely.”

Near the summit’s end, most of the major US AI companies — including OpenAI, Google DeepMind, Anthropic, Amazon, Microsoft, and Meta — signed a non-binding agreement to let governments test their models for safety risks before releasing them publicly.

Writing about the Biden executive order on Monday, I argued that the industry was so far being regulated gently, and on its own terms. I soon learned, however, that many people in the industry — along with some of their peers in academia — do not agree.

II.

The criticism of this week’s regulations goes something like this: AI can be used for good or for ill, but at its core it is a neutral, general-purpose technology. To maximize the good it can do, regulators should work to get frontier technology into more hands. And to address harms, regulators should focus on strengthening defenses in the places where AI could enable attacks, whether they be legal, physical, or digital.

To use an entirely too glib analogy, imagine that the hammer has just been invented. Critics worry that this will lead to a rash of people smashing each other with hammers, and demand that everyone who wants to buy a hammer first obtain a license from the government. The hammer industry and its allies in universities argue that we are better off letting anyone buy a hammer, while criminalizing assault and using public funds to pay for a police force and prosecutors to monitor hammer abuse. It turns out that distributing hammers more widely leads people to build things more quickly, that most people do not smash each other with hammers, and that the system basically holds.

A core objection to the current state of AI regulation is that it sets an arbitrary limit on the development of next-generation LLMs: in this case, models trained with more than 10^26 floating-point operations, or FLOPs. (For a back-of-envelope sense of where that threshold sits, see the sketch at the end of this excerpt.) Critics suggest that had similar arbitrary limits been issued earlier in the history of technology, the world would be impoverished. Here’s Steven Sinofsky, a longtime Microsoft executive and board partner at Andreessen Horowitz, on the executive order: …
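To make the 10^26 figure concrete, here is a minimal sketch, assuming the widely used “6ND” approximation for dense transformer training compute (roughly six FLOPs per parameter per training token). Only the threshold itself comes from the executive order; the heuristic, the function name, and the parameter and token counts below are illustrative assumptions, not anything the order specifies.

```python
# Back-of-envelope check of a training run against the executive order's
# 10^26 FLOP reporting threshold. Uses the common 6ND heuristic:
# total training FLOPs ~= 6 * parameters * training tokens.
# The heuristic and the model sizes below are assumptions for illustration.

THRESHOLD_FLOPS = 1e26  # reporting threshold set by the executive order

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute via the 6ND rule of thumb."""
    return 6.0 * n_params * n_tokens

# Hypothetical frontier run: 1 trillion parameters, 15 trillion tokens.
flops = estimated_training_flops(n_params=1e12, n_tokens=15e12)

print(f"Estimated compute: {flops:.1e} FLOPs")            # ~9.0e+25
print(f"Over the 10^26 line? {flops > THRESHOLD_FLOPS}")  # False
```

By this rough math, even a hypothetical trillion-parameter model trained on 15 trillion tokens lands just under the line, which fits the article’s framing that the rule targets next-generation models rather than systems already on the market.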