Astral Codex Ten - Contra DeBoer On Temporal Copernicanism
Freddie deBoer has a post on what he calls “the temporal Copernican principle.” He argues we shouldn’t expect a singularity, apocalypse, or any other out-of-distribution thing in our lifetimes. Discussing celebrity transhumanist Yuval Harari, he writes:
(I think there might be a math error here - 100 years out of 300,000 is 0.033%, not 0.33% - but this isn’t my main objection.) He then condemns a wide range of people, including me, for failing to understand this:
I deny misunderstanding this. Freddie is wrong.

Since we don’t know when a future apocalypse might happen, we can sanity-check ourselves by looking at past apocalyptic near-misses. The closest that humanity has come to annihilation in the past 300,000 years was probably the Petrov nuclear incident in 1983¹, ie within Freddie’s lifetime. Pretty weird that out of 300,000 years, this would be only 41 years ago!

Maybe you’re more worried about environmental devastation than nuclear war? The biggest climate shock of the past 300,000 years is . . . also during Freddie’s lifetime². Man, these one-in-three-thousand coincidences keep adding up!

“Temporal Copernicanism”, as described, fails basic sanity checks. But we shouldn’t have even needed sanity checks as specific as these: common sense already tells us that new apocalyptic weapons and environmental disasters were more likely to arise during the 20th century than, say, the century between 184,500 BC and 184,400 BC.

What’s Freddie doing wrong, and how can we do better? The following argument is loosely based on one by Toby Ord. Consider three types of events:

First, those uniformly distributed across calendar time. For example, asteroid strikes are like this. Here Freddie is completely right: if there are 300,000 years of human history, and you live 100 years, there’s a 0.03% chance that the biggest asteroid in human history strikes during your lifetime. Because of this, most people who think about existential risk don’t take asteroid strikes too seriously as a potential cause of near-term apocalypses.

Second, those uniformly distributed across humans. This is what you might use to solve Sam Bankman-Fried’s Shakespeare problem - what’s the chance that the greatest playwright in human history is alive during a given period? Freddie sort of gets this far³, and provides a number: 7% of humans who ever lived are alive today⁴.

Third, those uniformly distributed across techno-economic advances. You’d use this to answer questions like “how likely is it that the most important technological advance in history thus far happens during my lifetime?” This seems like the right way to predict things like nuclear weapons, global warming, or the singularity. But it’s harder to measure than the previous two.

You could try using GDP growth. At the beginning of Freddie’s life, world GDP (measured in real dollars) was about $40 trillion per year. Now it’s about $120 trillion. So on this metric, about 66% of absolute techno-economic progress has happened during Freddie’s lifetime.

But we might be more interested in relative techno-economic progress. That is, the Agricultural Revolution might have increased yields from 10 bushels of corn to 100. And some new tractor design invented yesterday might increase them from 10,000 bushels to 10,100. But that doesn’t mean the new tractor design was more important than the Agricultural Revolution. Here I think the right measure is log GDP growth; by this metric, about 20% of techno-economic progress has happened during Freddie’s lifetime.

Freddie sort of starts thinking in this direction⁵, but shuts it down on the grounds that some people think technological growth rates have slowed down since the mid-20th century. Usually the metric that gets brought out to support this is change in total factor productivity, which does show the mid-20th century as a more dynamic period than today. So fine, let’s do the same calculation with total factor productivity.
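Before the TFP version, here’s a minimal numeric sketch of the three priors above. Every figure is from the post except the “start of the series” GDP baseline, which is my assumption, chosen only because it reproduces the ~20% log-growth figure:

```python
# A minimal sketch of the three priors. All figures come from the post except
# GDP_BASELINE, which is an ASSUMED starting point for the whole series,
# picked to reproduce the ~20% log-growth figure; the post gives no baseline.
import math

LIFETIME_YEARS = 100
HISTORY_YEARS = 300_000

# 1. Uniform across calendar time (asteroid-strike logic)
p_calendar = LIFETIME_YEARS / HISTORY_YEARS
print(f"uniform over time:   {p_calendar:.2%}")      # ~0.03%

# 2. Uniform across humans (Shakespeare logic) - the post's cited figure
P_HUMANS = 0.07
print(f"uniform over humans: {P_HUMANS:.0%}")        # 7%

# 3. Uniform across techno-economic progress, proxied by world GDP
GDP_BIRTH = 40      # $T, real, at the start of Freddie's life
GDP_NOW = 120       # $T, real, today
GDP_BASELINE = 0.5  # $T, real, ASSUMED start of the series

absolute_share = (GDP_NOW - GDP_BIRTH) / GDP_NOW  # baseline is negligible here
log_share = math.log(GDP_NOW / GDP_BIRTH) / math.log(GDP_NOW / GDP_BASELINE)
print(f"absolute GDP share:  {absolute_share:.1%}")  # ~66%
print(f"log GDP share:       {log_share:.0%}")       # ~20%
```

The log measure is what keeps the tractor example in its place: a given percentage improvement counts the same whether it happens at $1 trillion of GDP or $100 trillion.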
My impression from eyeballing this paper is that about 35% of all TFP growth, and 15% of all log TFP growth, has still happened during Freddie’s lifetime. So what’s our prior that the most exciting single technological advance in history thus far will happen during Freddie’s lifetime? I think a conservative number would be 15%⁶.

How do we move from “most exciting advance in history” to questions about the singularity or the apocalypse?

Robin Hanson cashes out “the singularity” as an economic phase change of equal magnitude to the Agricultural or Industrial Revolutions. If we stick to that definition, we can do a little better at predicting it: it’s a change of a size such that it’s happened twice before. Using our previous number, we estimate a ~30% chance that such a change happens in our lifetime. (Sanity check: the last such earth-shattering change was the Industrial Revolution, about 3 - 4 lifetimes ago.)

What about the apocalypse? This one is tougher. Freddie tries an argument from absurdity: suppose the apocalypse happened tomorrow. Wouldn’t it be crazy that you, of all the humans who have ever existed, were correct when you thought the apocalypse was nigh? No, it’s not crazy at all. If the apocalypse happens tomorrow, then 7% of humans throughout history would have been right to predict an apocalypse in their lifetime. That’s not such a low percent - your probability of being born in the final generation is about the same as (eg) your probability of being born in North America.

Here’s a question I don’t know how to answer: the number above (7%) is about how surprised you should be if the apocalypse happens in your lifetime. But I don’t think it’s the overall chance that the apocalypse happens in your lifetime, because the apocalypse could be millions of years away, after there had been trillions of humans, and then retroactively it would seem much less likely that the apocalypse happened during the 21st century. So: is it possible to calculate this chance? I think there ought to be a way to leverage the Carter Doomsday Argument here, but I’m not quite sure of the details.

Speaking of the Carter Doomsday Argument…

…Freddie is re-inventing anthropic reasoning, a well-known philosophical concept. The reason the hundreds of academics who have written books and papers about anthropics have never noticed that it disproves transhumanism and the singularity is that Freddie’s version has obvious mistakes that a sophomore philosophy student would know better than to make. (Local Substacker Bentham’s Bulldog is a sophomore philosophy student, and his anthropics mistakes are much more interesting.)

The world’s leading expert on anthropic reasoning is probably Oxford philosophy professor Nick Bostrom, who literally wrote the book on the subject. Awkwardly for Freddie, Bostrom is also one of the founders of the modern singularity movement. This is because, understood correctly, anthropics provides no argument against a singularity or any other transhumanist idea, and might (weakly) support them.

I think if you use anthropic reasoning correctly, you end up with a prior probability of something like 30% that the singularity (defined as a technological revolution as momentous as agriculture or industry) happens⁷ during your lifetime, and a smaller percent that I’m not sure about (maybe 7%?) that the apocalypse happens during your lifetime.
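To make those headline numbers concrete, here’s how I read the arithmetic - a loose paraphrase of the reasoning above, not a formal model from the post:

```python
# A loose sketch of the headline numbers - my paraphrase of the reasoning
# above, not a formal model from the post.

# Prior that history's single most exciting techno-economic advance falls in
# your lifetime (the conservative log-TFP figure).
p_top_advance = 0.15

# A Hanson-style singularity is a phase change of a magnitude that has already
# happened twice (the Agricultural and Industrial Revolutions), so treat it as
# roughly twice as likely per lifetime as the single biggest advance.
p_singularity = 2 * p_top_advance
print(f"singularity in your lifetime: ~{p_singularity:.0%}")         # ~30%

# Sanity check: a ~30%-per-lifetime rate implies one phase change every ~3
# lifetimes, and the Industrial Revolution was about 3-4 lifetimes ago.
print(f"lifetimes between phase changes: ~{1 / p_singularity:.1f}")  # ~3.3

# The apocalypse "surprise" number: if the world ended tomorrow, the share of
# all humans ever born who were alive to see it - the post's 7% figure.
p_final_generation = 0.07
print(f"alive at the end, if the end is tomorrow: {p_final_generation:.0%}")
```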
None of these probabilities are lower than the probability that you’re born in North America, so people should stop acting like they’re so small as to be absurd or impossible.

But also, prior probabilities are easy-come, easy-go. The prior probability that you’re born in Los Angeles is only 0.05%. But if you look out your maternity ward window and see the Hollywood sign, ditch that number immediately and update to near certainty. No part of anthropics should be able to prevent you from updating on your observations about the world around you, and on your common sense.

(Except maybe the part about how you’re in a simulation, or the part about how there’s definitely a God who created an infinite number of universes, or how there must be thousands of US states, or how the world must end before 10,000 AD, or how the Biblical Adam could use his reproductive decisions as a shortcut to supercomputation, or several other things along these same lines. I actually hate anthropic reasoning. I just think that if you’re going to do it, you should do it right.)

1. The Toba supervolcano is over-rated. You could argue the Cuban Missile Crisis was worse than Petrov, but that just brings us back 60 years instead of 40, which I think still proves my point.

2. Something called “the Eemian” 130,000 years ago was larger in magnitude, but happened gradually over several thousand years.

3. If he got this far halfway down, why did he even present the obviously-wrong 0.03% number as his headline result? Was he hoping we wouldn’t read the rest of his post?

4. This is slightly wrong for the exact framing of the question; your life is a span rather than a point, so probably by the time you die, about 10% of humans will have been alive during your lifespan. The exact way you think about this depends on how old you are, and I’ll stick with the 7% number for the rest of the essay.

5. Again, I don’t understand why he bothered giving the earlier obviously-wrong-for-this-problem numbers, vaguely half-alluded to the existence of this one in order to complain that someone could miscalculate it, and then put no effort into calculating it correctly or at least admitting that he couldn’t calculate the number that mattered.

6. Some of these numbers depend on how you’re thinking of “lifespan” vs. “lifespan so far” and how much of your actually-existing foreknowledge about the part of your life you’ve already lived you’re using. I’m just going to handwave all of that away, since it depends on how you’re framing the question and doesn’t change results by more than a factor of two or three.

7. Realistically the Agricultural and Industrial Revolutions were long processes instead of point events. I think the singularity will be shorter (just as the Industrial Revolution was shorter than the Agricultural), but if this bothers you, imagine we’re talking about the start (or peak) of each.