Astral Codex Ten - Contra DeBoer On Temporal Copernicanism
Freddie deBoer has a post on what he calls “the temporal Copernican principle.” He argues we shouldn’t expect a singularity, apocalypse, or any other out-of-distribution thing in our lifetimes. Discussing celebrity transhumanist Yuval Harari, he writes:
(I think there might be a math error here - 100 years out of 300,000 is 0.033%, not 0.33% - but this isn’t my main objection.) He then condemns a wide range of people, including me, for failing to understand this:
I deny misunderstanding this. Freddie is wrong.

Since we don't know when a future apocalypse might happen, we can sanity-check ourselves by looking at past apocalyptic near-misses. The closest that humanity has come to annihilation in the past 300,000 years was probably the Petrov nuclear incident in 1983¹, ie within Freddie's lifetime. Pretty weird that out of 300,000 years, this would be only 41 years ago!

Maybe you're more worried about environmental devastation than nuclear war? The biggest climate shock of the past 300,000 years is . . . also during Freddie's lifetime². Man, these one-in-three-thousand coincidences keep adding up!

"Temporal Copernicanism", as described, fails basic sanity checks. But we shouldn't have even needed sanity checks as specific as these: common sense already tells us that new apocalyptic weapons and environmental disasters were more likely to arise during the 20th century than, say, the century between 184,500 BC and 184,400 BC.

What's Freddie doing wrong, and how can we do better? The following argument is loosely based on one by Toby Ord. Consider three types of events:

First, those uniformly distributed across calendar time. For example, asteroid strikes are like this. Here Freddie is completely right: if there are 300,000 years of human history, and you live 100 years, there's a 0.03% chance that the biggest asteroid strike in human history happens during your lifetime. Because of this, most people who think about existential risk don't take asteroid strikes too seriously as a potential cause of near-term apocalypse.

Second, those uniformly distributed across humans. This is what you might use to solve Sam Bankman-Fried's Shakespeare problem - what's the chance that the greatest playwright in human history is alive during a given period? Freddie sort of gets this far³, and provides a number: 7% of humans who ever lived are alive today⁴.

Third, those uniformly distributed across techno-economic advances. You'd use this to answer questions like "how likely is it that the most important technological advance in history thus far happens during my lifetime?" This seems like the right way to predict things like nuclear weapons, global warming, or the singularity. But it's harder to measure than the previous two.

You could try using GDP growth. At the beginning of Freddie's life, world GDP (measured in real dollars) was about $40 trillion per year. Now it's about $120 trillion. So on this metric, about 66% of absolute techno-economic progress has happened during Freddie's lifetime.

But we might be more interested in relative techno-economic progress. That is, the Agricultural Revolution might have increased yields from 10 bushels of corn to 100; some new tractor design invented yesterday might increase them from 10,000 bushels to 10,100. The tractor's absolute gain is larger, but that doesn't mean it was more important than the Agricultural Revolution. Here I think the right measure is log GDP growth; by this metric, about 20% of techno-economic progress has happened during Freddie's lifetime.

Freddie sort of starts thinking in this direction⁵, but shuts it down on the grounds that some people think technological growth rates have slowed since the mid-20th century. The metric usually brought out to support this is total factor productivity, whose changes do show the mid-20th century as a more dynamic period than today. So fine, let's do the same calculation with total factor productivity.
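Before switching metrics, here is the arithmetic so far in one place - a minimal sketch, not anything from the original post. The $40/$120 trillion GDP figures come from the text above; the world-population figures and the baseline GDP that starts the "progress clock" are my own assumptions, and the log-share answer is quite sensitive to that baseline.

```python
import math

# Type 1: uniform across calendar time.
print(100 / 300_000)  # ~0.0003 -> ~0.03% of human history falls in one lifetime

# Type 2: uniform across humans. Assumed figures (not from the post):
# ~117 billion humans ever born, ~8 billion alive today.
print(8e9 / 117e9)  # ~0.07 -> ~7% of humans who ever lived are alive now

# Type 3: uniform across techno-economic progress, GDP version.
gdp_birth, gdp_now = 40e12, 120e12  # real dollars/year, from the post
print((gdp_now - gdp_birth) / gdp_now)  # ~0.66 -> ~66% of absolute progress

# Log version. gdp_baseline is an assumed starting point for the clock;
# a ~$0.5 trillion baseline reproduces the post's ~20%, while pushing the
# baseline further back into prehistory shrinks the share.
gdp_baseline = 0.5e12
print(math.log(gdp_now / gdp_birth) / math.log(gdp_now / gdp_baseline))  # ~0.20
```

The same ratio-of-logs template carries over to the TFP version below.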
My impression from eyeballing this paper is that about 35% of all absolute TFP growth, and 15% of all log TFP growth, has still happened during Freddie's lifetime.

So what's our prior that the most exciting single technological advance in history thus far will happen during Freddie's lifetime? I think a conservative number would be 15%⁶.

How do we move from "most exciting advance in history" to questions about the singularity or the apocalypse? Robin Hanson cashes out "the singularity" as an economic phase change of equal magnitude to the Agricultural or Industrial Revolutions. If we stick to that definition, we can do a little better at predicting it: it's a change of a size that has happened twice before, so we can roughly double our previous number and estimate a ~30% chance that such a change happens in our lifetime. (Sanity check: the last such earth-shattering change was the Industrial Revolution, about 3 - 4 lifetimes ago.)

What about the apocalypse? This one is tougher.

Freddie tries an argument from absurdity: suppose the apocalypse happened tomorrow. Wouldn't it be crazy that you, of all the humans who have ever existed, were correct when you thought the apocalypse was nigh? No, it's not crazy at all. If the apocalypse happens tomorrow, then 7% of humans throughout history would have been right to predict an apocalypse in their lifetime. That's not such a low percent - your probability of being born in the final generation is about the same as (eg) your probability of being born in North America.

Here's a question I don't know how to answer: the number above (7%) is about how surprised you should be if the apocalypse happens in your lifetime. But I don't think it's the overall chance that the apocalypse happens in your lifetime, because the apocalypse could be millions of years away, after there had been trillions of humans, and then retroactively it would seem much less likely that the apocalypse happened during the 21st century. So: is it possible to calculate this chance? I think there ought to be a way to leverage the Carter Doomsday Argument here, but I'm not quite sure of the details.

Speaking of the Carter Doomsday Argument…

…Freddie is re-inventing anthropic reasoning, a well-known philosophical concept. The reason the hundreds of academics who have written books and papers about anthropics have never noticed that it disproves transhumanism and the singularity is that Freddie's version has obvious mistakes that a sophomore philosophy student would know better than to make. (Local Substacker Bentham's Bulldog is a sophomore philosophy student, and his anthropics mistakes are much more interesting.)

The world's leading expert on anthropic reasoning is probably Oxford philosophy professor Nick Bostrom, who literally wrote the book on the subject. Awkwardly for Freddie, Bostrom is also one of the founders of the modern singularity movement. This is because, understood correctly, anthropics provides no argument against a singularity or any other transhumanist idea, and might (weakly) support them.

I think if you use anthropic reasoning correctly, you end up with a prior probability of something like 30% that the singularity (defined as a technological revolution as momentous as agriculture or industry) happens⁷ during your lifetime, and a smaller percent that I'm not sure about (maybe 7%?) that the apocalypse happens during your lifetime.
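For readers who haven't seen it, here is the textbook Carter-Leslie version of the Doomsday Argument referenced above - a minimal sketch of the standard machinery, not the calculation the post says it doesn't know how to do. The birth-count and birth-rate figures are my assumptions.

```python
# Vanilla Doomsday Argument: treat your birth rank r as a uniform draw
# from the N humans who will ever live. Then with probability c you fall
# within the last fraction c of all births, i.e. r >= (1 - c) * N.
r = 117e9            # assumed number of humans born so far
c = 0.95             # confidence level
N_max = r / (1 - c)  # with 95% confidence, N <= ~2.3 trillion humans
print(f"N <= {N_max:.3g} humans at {c:.0%} confidence")

births_per_year = 130e6  # assumed roughly-current global birth rate
years_left = (N_max - r) / births_per_year
print(f"-> at most ~{years_left:,.0f} more years at current birth rates")
```

Whether this machinery can be converted into the "overall chance the apocalypse happens in your lifetime" is exactly the open question above.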
None of these probabilities are lower than the probability that you're born in North America, so people should stop acting like they're so small as to be absurd or impossible.

But also, prior probabilities are easy-come, easy-go. The prior probability that you're born in Los Angeles is only 0.05%. But if you look out your maternity ward window and see the Hollywood sign, ditch that number immediately and update to near-certainty. No part of anthropics should be able to prevent you from updating on your observations about the world around you, and on your common sense.

(Except maybe the part about how you're in a simulation, or the part about how there's definitely a God who created an infinite number of universes, or how there must be thousands of US states, or how the world must end before 10,000 AD, or how the Biblical Adam could use his reproductive decisions as a shortcut to supercomputation, or several other things along those same lines. I actually hate anthropic reasoning. I just think that if you're going to do it, you should do it right.)

Footnotes:

1. The Toba supervolcano is overrated. You could argue the Cuban Missile Crisis was worse than Petrov, but that just brings us back 60 years instead of 40, which I think still proves my point.

2. Something called "the Eemian" 130,000 years ago was larger in magnitude, but happened gradually over several thousand years.

3. If he got this far halfway down, why did he even present the obviously-wrong 0.03% number as his headline result? Was he hoping we wouldn't read the rest of his post?

4. This is slightly wrong for the exact framing of the question: your life is a span rather than a point, so probably by the time you die, about 10% of humans will have been alive during your lifespan. The exact way you think about this depends on how old you are; I'll stick with the 7% number for the rest of the essay.

5. Again, I don't understand why he bothered giving the earlier obviously-wrong-for-this-problem numbers, vaguely half-alluded to the existence of this one in order to complain that someone could miscalculate it, and then put no effort into calculating it correctly - or at least admitting that he couldn't calculate the number that mattered.

6. Some of these numbers depend on how you're thinking of "lifespan" vs. "lifespan so far", and how much of your actually-existing foreknowledge about the part of your life you've already lived you're using. I'm going to handwave all of that away, since it depends on how you're framing the question and doesn't change the results by more than a factor of two or three.

7. Realistically, the Agricultural and Industrial Revolutions were long processes rather than point events. I think the singularity will be shorter (just as the Industrial Revolution was shorter than the Agricultural Revolution), but if this bothers you, imagine we're talking about the start (or peak) of each.