Astral Codex Ten - Mantic Monday 4/18/22
Warcasting

Changes in Ukraine prediction markets since my last post March 21:
If you like getting your news in this format, subscribe to the Metaculus Alert bot for more (and thanks to ACX Grants winner Nikos Bosse for creating it!)

Nuclear Risk Update

Last month superforecaster group Samotsvety Forecasts published their estimate of the near-term risk of nuclear war, with a headline number of 24 micromorts (millionths of a chance of death) per week. A few weeks later, J. Peter Scoblic, a nuclear security expert with the International Security Program, shared his thoughts. His editor wrote:
In other words: the Samotsvety analysis was the best that domain-general forecasting had to offer. This is the best that domain-specific expertise has to offer. Let’s see if they line up:

Superficially not really! In contrast to Samotsvety’s 24 micromorts, Scoblic says 370 micromorts, an order of magnitude higher. Most of the difference comes from two steps.

First, conditional on some kind of nuclear exchange, will London (their index city for where some specific person worrying about nuclear risk might be) get attacked? Samotsvety says only an 18% chance. Scoblic raises this to 65%, saying:

Second, what is the probability that an “informed and unbiased” person could escape a city before it gets nuked? Samotsvety said 75%; Scoblic said 30%.

I think this is a fake disagreement. Some people I know were so careful that they had already left their cities by the time this essay was posted; the odds of these people escaping a nuclear attack are 100%. Other people are homebound, never watch the news, and don’t know there’s a war at all; the odds of these people escaping a nuclear attack are 0%. In between are a lot of different levels of caution: do you start leaving when the war starts to heat up, or do you wait until you hear that nukes are already in the air? Do you have a car? A motorcycle for weaving through traffic? Do you plan to use public transit? My guess is that the EAs whom Samotsvety were writing for are better-informed, more cautious, and better-resourced than average, and the 75% chance they’d escape was right for them. Scoblic seems to interpret the question as requiring people to escape after the nuclear war has already started, and his 30% estimate seems fine for that situation.

If we halve Scoblic’s estimate (or double Samotsvety’s) to adjust for this “fake disagreement” factor, then it’s still 24 vs. 185 micromorts, a difference of roughly 8x.
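To make the arithmetic concrete, here is a minimal back-of-envelope sketch of how those numbers fit together. The 24, 370, 75%, and 30% figures are the ones quoted above; the assumption that each estimate factors neatly into “risk if you stay” times “chance you fail to escape” is a simplification of mine, not either group’s actual model.

```python
# Back-of-envelope sketch using only numbers quoted in the post.
# Assumption (mine, not the forecasters'): each estimate factors as
#   micromorts = (risk if you don't escape) * P(fail to escape),
# which simplifies both groups' real decompositions.

samotsvety = 24    # micromorts/week, Samotsvety's headline number
scoblic    = 370   # micromorts, Scoblic's figure

p_escape_scoblic = 0.30   # his odds an informed person escapes in time
p_escape_eas     = 0.75   # Samotsvety's odds for their better-prepared readers

# If the only real disagreement were the escape odds, rescaling Scoblic's number
# to the EA readers' escape probability would look like this:
scoblic_adjusted = scoblic * (1 - p_escape_eas) / (1 - p_escape_scoblic)
print(round(scoblic_adjusted))               # ~132 micromorts

# The post's simpler adjustment (just halving Scoblic's estimate) gives:
print(scoblic / 2)                           # 185.0 micromorts
print(round((scoblic / 2) / samotsvety, 1))  # ~7.7, the remaining ~8x gap
```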
What do we want - and what do we have the right to expect - from forecasting? If it’s order-of-magnitude estimates, it looks like we have one: we’ve bounded nuclear risk to the order of magnitude between 24 and 185 micromorts (at least until some third group comes around with something totally different from either of these two). Or maybe what we want is a better understanding of our “cruxes” - the places where the real disagreement lies, which account for almost all of the downstream uncertainty. In that case, this exercise is pretty successful: everyone is pretty close on the risk of small-scale nuclear war, and the big disagreement is over whether small-scale nuclear war would inevitably escalate.

The Samotsvety team says they plan to meet, discuss Scoblic’s critiques, and see if they want to update any of their estimates. And they made what I consider some pretty strong points in the comments that Scoblic may want to adjust on. Both sides seem to be treating this as a potential adversarial collaboration, and I’d be interested in seeing whether it can bound the risk even further.

AI Risk “Update”

Everyone’s been talking about the Metaculus question on when “weakly general AI” will arrive.

“Weakly general AI” in the question means a single system that can perform a bunch of impressive tasks - passing a “Turing test”, scoring well on the SAT, playing video games, etc. Read the link for the full operationalization, but the short version is that this is advanced stuff AI can’t do yet, though it still doesn’t necessarily mean “totally equivalent to humans in every way”, let alone superintelligence.

For the past year or so, this had been drifting around the 2040s. Then last week it plummeted to 2033. I don’t want to exaggerate the importance of this move: it was also on 2033 back in 2020, before drifting up a bit. But this is certainly the sharpest correction in the market’s two-year history.

The drop corresponded to three big AI milestones. First, DALL-E 2, a new and very impressive art AI. Second, PaLM, a new and very impressive language AI. Third, Chinchilla, a paper and associated model suggesting that people have been training AIs inefficiently all this time, and that probably a small tweak to the process could produce better results with the same computational power.
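For flavor, here is a rough sketch of the kind of “small tweak” Chinchilla argues for: under its analysis, a fixed compute budget is best spent by scaling model size and training data roughly together, at something like twenty training tokens per parameter, rather than pouring almost everything into model size. The constants below are the commonly cited rules of thumb, not numbers from this post, so treat them as illustrative.

```python
# Illustrative only: commonly cited Chinchilla-style rules of thumb, not this post's numbers.
# Training compute is roughly C ~ 6 * N * D FLOPs (N = parameters, D = training tokens),
# and the compute-optimal recipe keeps D/N at roughly 20 tokens per parameter.

def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Split a compute budget into a (parameters, tokens) pair under the ~20:1 heuristic."""
    # Solve C = 6 * N * (tokens_per_param * N)  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example with a hypothetical budget in the rough vicinity of large 2022-era training runs.
params, tokens = chinchilla_optimal(3e23)
print(f"~{params / 1e9:.0f}B parameters, ~{tokens / 1e9:.0f}B tokens")
```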
(There’s also the Socratic Models paper, which I haven’t even gotten a chance to look at but which looks potentially impressive.)

This raises the eternal question of “exciting game-changer” vs. “incremental progress at the same rate as always”. These certainly don’t seem to me to be bigger game-changers than the original DALL-E or GPT-3, but I’m not an expert and maybe they should be. It’s just weird that they used up half our remaining AI timeline (i.e. moved the date when we should expect AGI by this definition from 20 years out to 10 years out) when I feel like there have been four or five things this exciting in the past decade.

Or is there another explanation? A lot of AI forecasters on Metaculus are Less Wrong readers; we know that the Less Wrong Yudkowsky/Christiano debate on takeoff speeds moved the relevant Metaculus question a few percent. Early this month on Less Wrong, Eliezer Yudkowsky posted MIRI Announces New Death With Dignity Strategy, in which he said that after a career of trying to prevent unfriendly AI he had become extremely pessimistic, and now expects it to arrive within a few years and probably kill everyone. This caused the Less Wrong community, already pretty dedicated to panicking about AI, to redouble its panic. Although the new announcement doesn’t really say anything about timelines that hasn’t been said before, the emotional framing has hit people a lot harder. I will admit that I’m one of the people who is kind of panicky. But I also worry about an information cascade: we’re an insular group, and Eliezer is a convincing person. Other communities of AI alignment researchers are more optimistic. I continue to plan to cover the attempts at debate and convergence between the optimistic and pessimistic factions, and to try to figure out my own mind on the topic. But for now the most relevant point is that a lot of people who were only medium-panicked a few months ago are now very panicked. Is that the kind of thing that moves forecasting tournaments? I don’t know.

Shorts

1: Will Elon Musk acquire over 50% of Twitter by the end of 2022? Why is this market priced so differently from the acquire-by-June-1 version? Do lots of people expect Musk to acquire Twitter after June 1 but still in 2022? (A quick arithmetic sketch of what the gap would imply follows the list.)

2: Will Marine Le Pen win the 2022 French presidential election? Beautiful correspondence, beautiful volume numbers.

3: Will cumulative reported deaths from COVID-19 in China exceed 50,000 by the end of 2022?
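On the first short: the actual market prices were in the screenshots, which aren’t reproduced here, so the numbers below are purely hypothetical placeholders. The point is just that the gap between a “by June 1” market and a “by end of 2022” market is itself an implied probability of the acquisition closing in the second half of the year.

```python
# Hypothetical placeholder prices, NOT the actual market numbers (those were in the
# screenshots in the original post). Illustrates the arithmetic the question gestures at.

p_by_june1   = 0.25   # hypothetical price of "acquires >50% of Twitter by June 1"
p_by_eoy2022 = 0.60   # hypothetical price of "acquires >50% of Twitter by end of 2022"

# If both markets are coherent, the difference is the implied probability of the
# acquisition happening after June 1 but still within 2022.
p_between = p_by_eoy2022 - p_by_june1
print(p_between)                  # 0.35 under these made-up prices

# Conditional version: given it hasn't happened by June 1, how likely is it by year-end?
p_conditional = p_between / (1 - p_by_june1)
print(round(p_conditional, 2))    # ~0.47 under these made-up prices
```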