Astral Codex Ten - Most Technologies Aren't Races
[Disclaimer: I’m not an AI policy person; the people who are have thought about these scenarios in more depth, and if they disagree with this I’ll link to their rebuttals]

Some people argue against delaying AI because it might make China (or someone else) “win” the AI “race”. But suppose AI is “only” a normal transformative technology, no more important than electricity, automobiles, or computers.

Who “won” the electricity “race”? Maybe Thomas Edison, but that didn’t cause Edison’s descendants to rule the world as emperors, or make Menlo Park a second Rome. It didn’t even especially advantage America. Edison personally got rich, the overall balance of power didn’t change, and today all developed countries have electricity.

Who “won” the automobile “race”? Karl Benz? Henry Ford? There were many steps between the first halting prototype and widespread adoption. Benz and Ford both personally got rich, their companies remain influential today, and Mannheim and Detroit remain important auto manufacturing hubs. But other companies like Toyota and Tesla are equally important, the overall balance of power didn’t change, and today all developed countries have automobiles.

Who “won” the computer “race”? Charles Babbage? Alan Turing? John von Neumann? Steve Jobs? Bill Gates? Again, it was a long path of incremental improvements. Jobs and Gates got rich, and their hometowns are big tech hubs, but other people have gotten even richer, and the world chip manufacturing center is in Taiwan now for some reason. The overall balance of power didn’t change (except maybe during a brief window when the Bombes broke Enigma), and today all developed countries have computers.

The most consequential “races” have been for specific military technologies during wars; most famously, the US won the “race” for nuclear weapons. America’s enemies got nukes soon afterwards, but the brief moment of dominance was enough to win World War II.
Maybe in some sense the British won a “race” for radar, although it wasn’t a “race” in the sense that the Axis knew about it and was competing to get it first. Maybe in some sense countries “race” to get better fighter jets, tanks, satellites, etc than their rivals. But ordinary mortals don’t concern themselves with such things. No part of US automobile policy is based on “winning the car race” against China, in some sense where consumer car R&D will affect tanks and our military risks being left behind.

I think some people hear transhumanists talk about an “AI race” and mindlessly repeat it, without asking what assumptions it commits them to. Transhumanists talk about winning an AI “race” for two reasons:

First, because if you believe unaligned AI could destroy humanity at some point, it’s important to align AI before it gets to that point. Companies that care about alignment might race to reach that point before companies that don’t care about alignment. Right now this is all academic, because nobody knows how to align AIs. But if someone figured that out, we would want those people to win a race.¹

Second, because some transhumanists think AI could cause a technological singularity that speedruns the next several millennia worth of advances in a few years. In a more gradual technological singularity (sometimes called a “slow takeoff”) there’s some incentive to race. Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (about 19% year-on-year growth). This is faster than any country has achieved in real life, fast enough that wealth would increase 100x in the course of a generation. China is currently about two years behind the US in AI. If they’re still two years behind when a slow takeoff happens, the US would get a ~40% GDP advantage. That’s not enough to automatically win any conflict (Russia has a 10x GDP advantage over Ukraine; India has a 10x GDP advantage over Pakistan).
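The slow-takeoff arithmetic is easy to check directly. A minimal sketch, assuming only the figures stated above (a four-year GDP doubling time and a two-year US lead):

```python
import math

# Slow takeoff per Christiano's definition: GDP doubles every 4 years.
DOUBLING_TIME = 4  # years
LEAD = 2           # assumed US lead over China, in years

# Implied year-on-year growth rate: 2^(1/4) - 1
annual_growth = 2 ** (1 / DOUBLING_TIME) - 1

# A 2-year lead means the leader's economy is 2^(2/4) times larger,
# i.e. roughly a 40% GDP advantage.
lead_advantage = 2 ** (LEAD / DOUBLING_TIME) - 1

# Time for wealth to grow 100x: solve 2^(t/4) = 100 for t.
years_to_100x = DOUBLING_TIME * math.log2(100)

print(f"annual growth ≈ {annual_growth:.1%}")            # ≈ 18.9%
print(f"2-year-lead advantage ≈ {lead_advantage:.1%}")   # ≈ 41.4%
print(f"years to 100x ≈ {years_to_100x:.1f}")            # ≈ 26.6
```

So the numbers hang together: ~19% annual growth, a ~40% advantage from a two-year lag, and 100x growth in roughly one generation (~27 years).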
It’s a big deal, but it probably still results in a multipolar world. Slow-takeoff worlds have races, but not crucial ones.

So the one case in which losing an AI race is fatal is what transhumanists call a “fast takeoff”, where AI speedruns millennia worth of usual tech progress in months, weeks, or even days. This probably only happens if superintelligent AI can figure out ways to improve its own intelligence in a critical feedback loop. I’m pretty skeptical of these scenarios in the current AI paradigm, where compute is often the limiting resource, but other people disagree. In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.

We remember the race for nuclear weapons because they’re a binary technology - either you have them, or you don’t. When the US invented stealth bombers, its enemies had slightly worse planes that were slightly less stealthy. But when the US invented nukes, its enemies were stuck with normal bombs; there is no slightly-worse-nuke that can only destroy half a city.

Everywhere outside the most extreme transhumanist scenarios, AI is more like the stealth bomber. You may have GPT-3, GPT-4, or some future GPT-5, but a two-year gap means you have slightly worse AIs, not that you have no AI at all. The only case where there’s a single critical point - where you either have the transformative AI or nothing - is the hard-takeoff scenario where, at a certain threshold, AI recursively self-improves to infinity. If someone reaches this threshold before you do, then you’ve lost a race!²

Everyone I know who believes in fast takeoffs is a doomer. There’s no way you go to sleep with a normal only-slightly-above-human-level AI, wake up with the AI having godlike powers, and the AI is still doing what you want. You have no chance to debug the AI at level N and get it ready for level N+1. You skip straight from level N to level N + 1,000,000.
The AI is radically rewriting its code many times in a single night. You are pretty doomed.

If you don’t believe in crazy science fiction scenarios like these, fine. But then why are you so sure that it’s crucial to “win” the AI “race”? If you’re sure these kinds of things won’t happen, then you should treat AI like electricity, automobiles, or stealth bombers. It might tip the balance of a badly timed war, but otherwise you can just steal the tech and catch up.

I’m harping on this point because people think “we need to win the race with China” is an argument against worrying about alignment at all. I’m on board with claims like “as we worry about alignment, one thing we should consider is whether we’re losing a race against China”. But if you’re using the idea of a race to argue against alignment worries, I think you’re confused. In most scenarios, you don’t care that much about the race. And in the scenario where you do, you really want to worry about alignment.

¹ Or, rather, we’d want everyone to cooperate in implementing their solution. But if we can’t get this, then second-best would be for the good guys to win a race.

² Even in the unlikely scenario where AI causes a singularity and remains aligned, I have trouble worrying too much about races. The whole point of a singularity is that it’s hard to imagine what happens on the other side of it. I care a lot how much relative power Xi Jinping, Mark Zuckerberg, and Joe Biden have today, but I don’t know how much I care about them after a singularity. “Wouldn’t Xi Jinping put people in camps?” Why? He put the Uighurs in camps because he was afraid they would revolt against Chinese rule. Nobody can revolt against someone who controls a technological singularity, so why put them in camps? “Wouldn’t Joe Biden overregulate small business?” There won’t be small business!
If you want to build a customized personal utopian megastructure, you won’t hire a small business, you’ll just say “AI, build me a customized personal utopian megastructure”, and it will materialize in front of you. Probably you should avoid doing this in a star system someone else owns, but there will be enough star systems to go around. If people insist on having an economy for old time’s sake, you can just build a Matrioshka brain the size of Jupiter, ask it which policies are good for the economy, then do those ones.

“Wouldn’t Mark Zuckerberg perpetuate structural racism?” You will be able to change your race, age, gender, species, and state of matter at will. Nobody will even remember what race you were. If for some reason the glowing clouds of plasma that used to be black people have smaller customized personal utopian megastructures than the glowing clouds of plasma that used to be white people, you can ask the brain the size of Jupiter how to solve it, and it will tell you (I bet it involves using slightly different euphemisms to refer to things; that’s always been the answer so far).

People come up with these crazy stories about “winning races” that don’t matter without a technological singularity - then act like any of their current issues will still matter after a technological singularity. Sorry, no, it will be weirder than that. Whoever ends up in control of the post-singularity world will find that there’s too much surplus for dividing-the-surplus problems to feel compelling anymore. As long as they’re not actively a sadist who wants to hurt people, they can just let people enjoy the technological utopia they’ve created, and implement a few basic rules like “if someone tries to punch someone else, the laws of physics will change so that their hand phases through the other person’s body”. And yeah, that “they’re not actively a sadist” clause is doing a lot of work.
I want whoever rules the post-singularity future to have enough decency to avoid ruining it, and to take the Jupiter-sized brain’s advice when it has some. I think any of Xi, Biden, or Zuckerberg meets this low bar. There are some ideologues and terrible people who don’t, but they seem far away from the cutting edge of AI.

This isn’t to say the future won’t have controversial political issues. Should you be allowed to wirehead yourself so thoroughly that you never want to stop? In what situations should people be allowed to have children? (Surely not never, but also surely not creating a shockwave of trillions of children spreading at near-light-speed across the galaxy.) Who gets the closest star systems? (There will be enough star systems to go around, but I assume the ones closer to Earth will be higher status.) What kinds of sims can you voluntarily consent to participate in?

I’m okay with these decisions being made by the usual decision-making methods of the National People’s Congress, the US constitution, or Meta’s corporate charter. At the very least, I don’t think switching from one of these to another is a big enough deal that it should trade off against the chance we survive at all.