Astral Codex Ten - Absurdity Bias, Neom Edition
Alexandros M expresses concern about my post on Neom. My post mostly just makes fun of Neom. My main argument against it is absurdity: a skyscraper the height of WTC1 and the length of Ireland? Come on, that’s absurd! But isn’t the absurdity heuristic a cognitive bias? Didn’t lots of true things sound absurd before they turned out to be true (eg evolution, quantum mechanics)? Don’t I specifically believe in things many people have found self-evidently absurd (eg the multiverse, AI risk)? Shouldn’t I be more careful about “this sounds silly to me, so I’m going to make fun of it”?

Here’s a possible argument why not: everything has to bottom out in absurdity arguments at some level or another. Suppose I carefully calculated that, with modern construction techniques, building Neom would cost 10x more than its allotted budget. This argument contains an implied premise: “and the Saudis can’t construct things 10x cheaper than anyone else”. How do we know the Saudis can’t construct things 10x cheaper than anyone else? The argument itself doesn’t prove this; it’s just left as too absurd to need justification.

Suppose I did want to address this objection: I carefully researched existing construction projects in Saudi Arabia, checked how cheap they were, calculated how much they could cut costs using every trick available to them, and found it was less than 10x. My argument would still contain the implied premise “there’s no Saudi conspiracy to develop amazing construction technology and hide it from the rest of the world”. But this is another absurdity heuristic - I have no argument beyond the claim that such a conspiracy would be absurd. I might eventually be able to come up with an argument supporting this, but that argument, too, would have implied premises depending on absurdity arguments.

So how far down this chain should I go? One plausible answer is “just stop at the first level where your interlocutors accept your absurdity argument”.
Anyone here think Neom’s a good idea? No? Even Alexandros agrees it probably won’t work. So maybe this is the right level of absurdity. If I were pitching my post towards people who mostly thought Neom was a good idea, then I might try showing that it would cost 10x more than its expected budget, and see whether they agreed with me that the Saudis being able to construct things 10x cheaper than anyone else was absurd. If they did agree with me, then I’ve hit the right level of argument. And if they agree with me right away, before I make any careful calculations, then it was fine for me to just point to it and gesture “That’s absurd!”

I think this is basically the right answer for communications questions, like how to structure a blog post. When I criticize communicators for relying on the absurdity heuristic too much, it’s because they’re claiming to adjudicate a question with people on both sides, but then retreating to absurdity instead. When I was young, a friend recommended a book on ESP, full of pseudoscientific studies purporting to prove ESP was real. I looked for skeptical rebuttals, and they were all “Ha ha! ESP? That’s absurd, you morons!” These people were just clogging up Google search results that could have been giving me real arguments.

But if nobody has ever heard of Neom, and I expect my readers to immediately agree that Neom is absurd, then it’s fine (in a post describing Neom rather than debating it) to stop at the first level. (I do worry that this might create an echo chamber: people start out thinking Neom is a bad idea for the obvious reasons, then read my post and take “ACX also thinks it’s a bad idea” as additional evidence. I think my obligation here is not to exaggerate the amount of thought that went into my assessment, which I hope I didn’t.)

But the absurdity bias isn’t just about communication. What about when I’m thinking things through in my head, alone?
I’m still going to be asking questions like “is Neom possible?” and having to decide what level of argument to stop at. To put it another way: which of your assumptions do you accept vs. question? Question none of your assumptions, and you’re a closed-minded bigot. Question all of your assumptions, and you get stuck in an infinite regress. The only way to escape (outside of a formal system with official axioms) is to trust your own intuitive judgment at some point. So maybe you should just start out doing that.

Except that some people seem to actually be doing something wrong. The guy who hears about evolution and says “I know that monkeys can’t turn into humans; this is so absurd that I don’t even have to think about the question any further” is doing something wrong. How do you avoid being that guy?

Some people try to dodge the question and say that all rationality is basically a social process. Maybe on my own, I will naturally stop at whatever level seems self-evident to me; then other people might challenge me, and I can reassess. But I hate this answer. It seems to be preemptively giving up and hoping other people are less lazy than you are. It’s like answering a child’s question about how to do a math problem with “ask a grown-up”. A coward’s way out!

Eliezer Yudkowsky gives his answer here:
This is all true as far as it goes, but it’s still just rules for the rare situations when your intuitive judgments of absurdity are contradicted by clear facts that someone else is handing you on a silver platter. But how do you, pondering a question on your own, know when to stop because a line of argument strikes you as absurd, versus when to stick around, gather more facts, and see whether your first impressions were accurate? I don’t have a great answer here, but here are some parts of a mediocre answer: