Desperately Trying To Fathom The Coffeepocalypse Argument
One of the most common arguments against AI safety is:
I always give the obvious answer: “Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn’t more like those?” The people I’m arguing with always seem so surprised by this response, as if I’m committing some sort of betrayal by destroying their beautiful argument.

The first hundred times this happened, I thought I must be misunderstanding something. Surely “I can think of one thing that didn’t happen, therefore nothing happens” is such a dramatic logical fallacy that no human is dumb enough to fall for it.

But people keep bringing it up, again and again. Very smart people, people who I otherwise respect, make this argument and genuinely expect it to convince people!

Usually the thing that didn’t happen is overpopulation, global cooling, etc. But most recently it was some kind of coffeepocalypse:

You can read the full thread here, but I’m warning you, it’s just going to be “once people were worried about coffee, but now we know coffee is safe. Therefore AI will also be safe.”¹

I keep trying to steelman this argument, and it keeps resisting my steelmanning. For example:
So my literal, non-rhetorical question is: “how can anyone be stupid enough to think this makes sense?” I’m not (just) trying to insult the people who say this; I consider their existence a genuine philosophical mystery.

Isn’t this, in some sense, no different from saying (for example):
The coffee version is:
Nobody would ever take it seriously in its halibut form. So what part of reskinning it as being about coffee makes it more credible?

Whenever I wonder how anyone can be so stupid, I start by asking whether I myself am exactly this stupid in some other situation. This time, I remembered one of Stuart Russell’s pro-AI-risk arguments. He pointed out that physicist Ernest Rutherford declared nuclear chain reactions impossible less than twenty-four hours before Leo Szilard conceived of the nuclear chain reaction.

At the time, I thought this was a cute and helpful warning against being too sure that superintelligence was impossible. But isn’t this the same argument as the coffeepocalypse? A hostile rephrasing might be:
And an only slightly less hostile rephrasing:
How is this better than the coffeepocalypse argument? In fact, how is it even better than the halibut argument? What are we doing when we make arguments like these? Some thoughts:

As An Existence Proof?

When I think of why I appreciated Prof. Russell’s argument, it wasn’t because it was a complete proof that superintelligence was possible. It was more like an argument for humility: “You may think it’s impossible. But given that there’s at least one case where people thought that and were proven wrong, you should believe it’s at least possible.”

But first of all, one case shouldn’t prove anything. If you doubt you will win the lottery, I can’t prove you wrong - even in a weak, probabilistic way - by bringing up a case of someone who did. I can’t even prove you should be humble - you are definitely allowed to be arrogant and very confident in your belief that you won’t win!

And second of all, existence proofs can only make you slightly more humble. They can refute the claim “I am absolutely, 100% certain that AI is/isn’t dangerous”. But not many people make this claim, and it’s uncharitable to suspect your opponent of doing so.

Maybe this debate collapses into the debate around the Safe Uncertainty Fallacy, where some people think that if there’s any uncertainty at all about something, you have to assume it will be totally safe and fine (no, I don’t get it either), and other people think that if there’s even a 1% chance of disaster, you have to multiply it out by the size of the disaster and end up very concerned (at the tails, this becomes Pascalian reasoning, but nobody has a good theory of where the tails begin).

I still don’t think an existence proof that it’s theoretically possible for your opponent to be wrong goes very far. Still, this is sort of what I was trying to do with the diphyllic dam example here - show that a line of argument can sometimes be wrong, in a way that forces people to try something more sophisticated.

As An Attempt To Trigger A Heuristic?

Maybe Prof. Russell’s argument implicitly assumes that everyone has a large store of knowledge about failed predictions - no heavier-than-air flying machine is possible, there is a world market for maybe five computers. You could think of this particular example of a prediction being false as an attempt to trigger people’s existing stock of memories that very often people’s predictions are false.

You could make the same argument about the coffeepocalypse. “People worried about coffee but it was fine” is intended to activate a long list of stored moral panics in your mind - the one around marijuana, the one around violent video games - enough to remind you that very often people worry about something and it’s nothing.

But - even granting that there are many cases of both - are these useful? There are many cases of moral panics turning out to be nothing. But there are many other cases of moral panics proving true, or of people not worrying about things they should have worried about. People didn’t worry enough about tobacco, and then it killed lots of people. People didn’t worry enough about lead in gasoline, and then it poisoned lots of children. People didn’t worry enough about global warming, OxyContin, al-Qaeda, growing international tension in the pre-WWI European system, etc, until after those things had already gotten out of control and hurt lots of people. We even have words and idioms for this kind of failure to listen to warnings - like the ostrich burying its head in the sand.
(And there are many examples of people predicting that things were impossible, and they really were impossible - e.g. perpetual motion.)

It would seem like, in order to usefully invoke a heuristic (“remember all these cases of moral panic we all agree were bad? Then you should assume this is probably also a moral panic”), you need to establish that moral panics are more common than ostrich-head-burying. And in order to usefully invoke a heuristic against predicting something is impossible, you need to establish that failed impossibility proofs are more common than accurate ones. Establishing either of these rigorously seems somewhere between “something nobody has ever done” and “impossible in principle”. Insisting on it would eliminate 90%+ of discourse.

See also Caution On Bias Arguments, where I try to make the same point. I think you can rewrite this section to be about proposed bias arguments (“People have a known bias to worry about things excessively, so we should correct for it”). But as always, you can posit an opposite bias (“People have a known bias to put their heads in the sand and ignore problems that it would be scary to think about or expensive to fix”), and figuring out which of these dueling biases you need to correct for is the same problem as figuring out which of the dueling heuristics you need to invoke.

What Is Evidence, Anyway?

Suppose someone’s trying to argue for some specific point, like “Russia will win the war with Ukraine”. They bring up some evidence, like “Russia has some very good tanks.”

Obviously this on its own proves nothing. Russia could have good tanks, but Ukraine could be better at other things. But then how does any amount of evidence prove an argument? You could make a hundred similar statements - “Russia has good tanks”, “Russia has good troop transport ships”, “the Russian general in the 4th District of the Western Theater is very skilled” […] - and run into exactly the same problem. But an argument that Russia will win the war has to be made up of some number of pieces of evidence. So how can it ever work?

I think it has to carry an implicit assumption of “…and you’re pretty good at weighing how much evidence it would take to prove something, and everything else is pretty equal, so this is enough evidence to push you over the edge into believing my point.”

For example, if someone said “Russia will win because they outnumber Ukraine 3 to 1 and have better generals” (and then proved this was true), that at least seems like a plausible argument that shouldn’t be immediately ignored. Everyone knows that having a 3:1 advantage, and having good generals, are both big advantages in war. It carries an implied “and surely Ukraine doesn’t have some other advantage that counterbalances both of those”. Maybe this is so plausible that we accept it (it’s hard to counterbalance a 3:1 manpower advantage). Or maybe it works as a challenge to pro-Ukraine people (if you can’t name some advantage of your side that sounds as convincing as these, then we win).

And it’s legitimate for someone who believes Russia will win, and has talked about it at length, to write one article about the good tanks without explicitly saying “Obviously this is only one part of my case that Russia will win, and won’t convince anyone on its own; still, please update a little on this one, and maybe as you keep going and run into other things, you’ll update more.”

Is this what the people talking about coffee are doing?

An argument against: you should at least update a little on the good tanks, right?
But the coffee thing proves literally nothing. It proves that there was one time when people worried about a bad thing and then it didn’t happen. Surely you already knew this must have happened at least once!

An argument in favor: suppose there are a hundred different facets of war as important as “has good tanks”. It would be very implausible if, of two relatively evenly-matched competitors, one of them was better at all 100 and the other at 0. So all that “Russia has good tanks” is telling you is that Russia is better on at least one axis, which you could have already predicted. Is this more of an update than the coffee situation?

My proposed answer: if you knew the person making the argument was deliberately looking for pro-Russia arguments, then “has good tanks” updates you almost zero - it would only convince you that Russia was better in at least 1 of 100 domains. If you thought they were relatively unbiased and just happened to stumble across this information, it would update you slightly (we have chosen a randomly selected facet, and Russia is better).

If you thought the person making the coffee argument was doing an unbiased survey of all the times people had been worried, then the coffee fact (in this particular case, people worried and it was unnecessary) might feel like sampling a random point. But we have so much more evidence about whether things are dangerous or safe that I don’t think sampling a random point (even if we could do so fairly) would mean much.

Conclusion: I Genuinely Don’t Know What These People Are Thinking

I would like to understand the mindset of people who make arguments like this, but I’m not sure I’ve succeeded. The best I can say is that sometimes people on my side make similar arguments (the nuclear chain reaction one) which I don’t immediately flag as dumb, and maybe I can follow this thread to figure out why they seem tempting sometimes.

If you see me making an argument that you think is like coffeepocalypse, please let me know, so I can think about what factors led me to think it was a reasonable thing to do, and see if they also apply to the coffee case.

. . . although I have to admit, I’m a little nervous asking for this. Douglas Adams once said that if anyone ever understood the Universe, it would immediately disappear and be replaced by something even more incomprehensible. I worry that if I ever understand why anti-AI-safety people think the things they say count as good arguments, the same thing might happen.

¹ And as some people on Twitter point out, it’s wrong even in the case of coffee! The claimed danger of coffee was that “Kings and queens saw coffee houses as breeding grounds for revolution”. But this absolutely happened - coffeehouse organizing contributed to the Glorious Revolution and the French Revolution, among others. So not only is “fears about coffee were dumb, therefore fears about AI are dumb” a bad argument - the fears about coffee weren’t even dumb.