Astral Codex Ten - Kelly Bets On Civilization
Scott Aaronson makes the case for being less than maximally hostile to AI development:
Read carefully, he and I don't disagree. He's not scoffing at doomsday predictions; he's arguing more against people who say that AIs should be banned because they might spread misinformation or gaslight people or whatever. Still, I think about this argument a lot.

I agree he's right about nuclear power. When it comes out in a few months, I'll be reviewing a book that makes this same point about institutional review boards: that our fear of a tiny handful of deaths from unethical science has caused hundreds of thousands of deaths by delaying ethical and life-saving medical progress. The YIMBY movement makes a similar point about housing: we hoped to prevent harm by subjecting all new construction to a host of different reviews - environmental, cultural, equity-related - and instead we caused vast harm by creating an epidemic of homelessness and forcing the middle classes to spend increasingly unaffordable sums on rent. This pattern typifies the modern age; any attempt to restore our rightful utopian flying-car future will have to start with rejecting it as vigorously as possible.

So how can I object when Aaronson turns the same lens on AI?

First, you are allowed to use the Inside View. If Osama bin Laden is starting a supervirus lab, and objects that you shouldn't shut him down because "in the past, shutting down progress out of exaggerated fear of potential harm has killed far more people than the progress itself ever could", you are permitted to respond "yes, but you are Osama bin Laden, and this is a supervirus lab." You don't have to give every company trying to build the Torment Nexus a free pass just because they can figure out a way to place their work in a reference class which is usually good.

Second, AI fails differently. All other technologies fail in predictable and limited ways. If a buggy AI exploded, that would be no worse than a buggy airplane or nuclear plant.
The concern is that a buggy AI will pretend to work well, bide its time, and plot how to cause maximum damage while undetected. Also, it's smarter than you. Also, this might work so well that nobody realizes they're all buggy until there are millions of them. But maybe opponents of every technology have some particular story about why theirs is a special case. So let me try one more argument, which I think is closer to my true objection.

There's a concept in finance called Kelly betting. It briefly gained some fame last year as a thing that FTX failed at, before people realized FTX had failed at many more fundamental things. It works like this (warning - I am bad at math and may have gotten some of this wrong): suppose you start with $1,000. You're at a casino with one game: once per day, you can bet however much you want on a coin flip, double-or-nothing. You're slightly psychic, so you have a 75% chance of guessing the coin flip right. That means that, on average, each bet increases whatever you stake by 50%. Clearly this is a great opportunity. But how much do you bet per day? (For the sake of argument, let's say you have completely linear marginal utility of money.)

Tempting but wrong answer: bet all of it each time. After all, on average you gain money each flip - each $1 invested in the coin flip game becomes $1.50. If you bet everything, then after five coin flips you'll have (on average) about $7,594. But if you just bet $1 each time, then (on average) you'll only have $1,002.50. So obviously bet as much as possible, right?

But after five all-in coin flips, there's a 76% chance that you've lost all your money. Increase to 50 coin flips, and there's a 99.99994% chance that you've lost all your money. So although technically this has the highest "average utility", all of it is coming from one super-amazing sliver of probability-space where you own more money than exists in the entire world. In every other timeline, you're broke. So how much should you bet?
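Before answering, the all-in arithmetic above is easy to verify. Here's a quick sketch (my own illustration, not from the post), using the post's numbers: a $1,000 stake, a 75% chance of winning each double-or-nothing flip.

```python
# Checking the all-in arithmetic from the coin-flip example.
# Each all-in flip multiplies your wealth by 1.5 on average
# (you double with p = 0.75, lose everything with p = 0.25).
start, p, flips = 1000, 0.75, 5

expected_all_in = start * (2 * p) ** flips  # average wealth after 5 all-in flips
ruin_prob_5 = 1 - p ** flips                # chance of going broke within 5 flips
ruin_prob_50 = 1 - p ** 50                  # chance of going broke within 50 flips

print(f"average wealth after 5 all-in flips: ${expected_all_in:,.2f}")  # $7,593.75
print(f"chance of ruin within 5 flips:  {ruin_prob_5:.1%}")             # 76.3%
print(f"chance of ruin within 50 flips: {ruin_prob_50:.5%}")            # 99.99994%
```

The huge average is driven entirely by the single branch where all five flips win; in the other ~76% of branches you have nothing.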
$1 is too little. These flips do, on average, increase your money by 50%; it would take forever to get anywhere betting $1 at a time. You want something that's high enough to increase your wealth quickly, but not so high that it's devastating and you can't come back from it on the rare occasions when you lose. In this case, if I understand the Kelly math right, you should bet half each time.

But the lesson I take from this isn't just the exact math. It's: even if you know a really good bet, don't bet everything at once.

Science and technology are great bets. Their benefits are much greater than their harms. Whenever you get a chance to bet something significantly less than everything in the world on science or technology, you should take it. Your occasional losses will be dwarfed by your frequent and colossal gains. If we'd gone full-speed-ahead on nuclear power, we might have had one or two more Chernobyls - but we'd save the tens of thousands of people who die each year from fossil-fuel-pollution-related diseases, end global warming, and have unlimited cheap energy.

But science and technology aren't perfect bets. Gain-of-function research on coronaviruses was a big loss. Leaded gasoline, chlorofluorocarbon-based refrigerants, thalidomide for morning sickness - all of these were high-tech ideas that ended up going badly, not to mention all the individual planes that crashed or rockets that exploded. Society (mostly) recovered from all of these. A world where people invent gasoline and refrigerants and medication (and sometimes fail and cause harm) is vastly better than one where we never try to have any of these things.

I'm not saying technology isn't a great bet. It's a great bet! But you never bet everything you've got, even when the bet is great. Pursuing a technology that could destroy the world is betting 100%. It's not that you should never do this.
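As an aside, the "bet half" claim can be checked concretely. For an even-money bet won with probability p, the Kelly fraction is p minus the losing probability, i.e. 2p − 1 = 50% when p = 0.75. Here's a small simulation (my own sketch; the function names are mine, not from any post) comparing the median outcome of betting the Kelly fraction against betting everything or betting almost nothing:

```python
import random

def kelly_fraction(p, b=1.0):
    """Kelly stake fraction for a bet paying b-to-1, won with probability p."""
    return p - (1 - p) / b  # = 0.5 for p = 0.75 at even money

def median_wealth(fraction, p=0.75, flips=50, start=1000.0,
                  trials=2001, seed=0):
    """Median final wealth across many simulated betting careers."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        wealth = start
        for _ in range(flips):
            stake = wealth * fraction
            wealth += stake if rng.random() < p else -stake
        finals.append(wealth)
    finals.sort()
    return finals[trials // 2]

print(kelly_fraction(0.75))                 # 0.5 -- bet half each time
print(median_wealth(kelly_fraction(0.75)))  # typical timeline: enormous growth
print(median_wealth(1.0))                   # betting everything: median is 0.0
print(median_wealth(0.001))                 # betting trivially: barely moves
```

Maximizing average wealth would say bet everything; Kelly instead maximizes the long-run growth rate of the typical timeline, which is why the all-in strategy's median is zero while the half-stake strategy's median compounds.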
Every technology has some risk of destroying the world; the first time someone tried vaccination, there was a 0.000000001% chance it could have resulted in some weird super-pathogen that killed everybody. I agree with Scott Aaronson: a world where nobody ever tries to create AI at all, until we die of something else a century or two later, is pretty depressing. But existential risks have to be weighed differently than other risks.

A world where we try ten things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is probably a world where a handful of people have died in freak accidents but everyone else lives in safety and abundance. A world where we try ten things like AI, same odds, has a 1/1024 chance of so much abundance we can't possibly conceive of it - and a 1023/1024 chance that we're all dead.
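Those closing odds are just ten independent coin flips where any single loss is fatal (a trivial check of my own, not from the post):

```python
# Ten 50-50 technology bets where a loss is survivable (nuclear-style):
# expect about five to go well, and no single failure ends the story.
# Ten 50-50 bets where any loss ends the world (AI-style):
# you only survive the timeline in which all ten go well.
p_all_ten_go_well = 0.5 ** 10
print(p_all_ten_go_well)      # 0.0009765625, i.e. 1/1024
print(1 - p_all_ten_go_well)  # 1023/1024 chance at least one bet is fatal
```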