It's Still Easier To Imagine The End Of The World Than The End Of Capitalism
Responding to a recent essay on wealth inequality in a post-singularity economy

I.

No Set Gauge has a great essay on Capital, AGI, and Human Ambition, where he argues that if humankind survives the Singularity, the likely result is a future of eternal stagnant wealth inequality.

The argument: post-Singularity, AI will take over all labor, including entrepreneurial labor; founding or working at a business will no longer provide social mobility. Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return. Therefore, everyone’s existing pre-Singularity capital will grow at the same rate. Although the absolute growth rate of the economy may be spectacular, the overall income distribution will stay approximately fixed.

Moreover, the period just before the Singularity may be one of ballooning inequality, as some people navigate the AI transition better than others; for example, shares in AI companies may go up by orders of magnitude relative to everything else, creating a new class of billionaires or trillionaires. These people will then stay super-rich forever (possibly literally if immortality is solved, otherwise through their descendants), while those who started the Singularity without capital remain poor forever.

Finally, modern democracies pursue redistribution (and are otherwise responsive to non-elite concerns) partly out of geopolitical self-interest. Under capitalism (as opposed to eg feudalism), national power depends on a strong economy, and a strong economy benefits from an educated, globally-mobile, and substantially autonomous bourgeoisie and workforce.
Once these people have enough power, they demand democracy, and once they have democracy, they demand a share of the pie; it’s hard to be a rich First World country without also being a liberal democracy (China is trying hard, but hasn’t quite succeeded, and even their limited success depends on things like America not opening its borders to Chinese skilled labor). Cheap AI labor (including entrepreneurial labor) removes a major force pushing countries to operate for the good of their citizens (though even without this force, we might expect legacy democracies to continue at least for a while). So we might expect the future to have less redistribution than the present.

This may not result in catastrophic poverty. Maybe the post-Singularity world will be rich enough that even a tiny amount of redistribution (eg UBI) plus private charity will let even the poor live like kings (though see here for a strong objection). Even so, the idea of a small number of immortal trillionaires controlling most of the cosmic endowment for eternity may feel weird and bad. From No Set Gauge:
I don’t think about these scenarios too often - partly because it’s so hard to predict what will happen after the Singularity, and partly because everything degenerates into crazy science-fiction scenarios so quickly that I burn a little credibility every time I talk about it. Still, if we’re going to discuss this, we should get it right - so let’s talk crazy science fiction.

When I read this essay, I found myself asking three questions. First, why might its prediction fail to pan out? Second, how can we actively prevent it from coming to pass? Third, assuming it does come to pass, how could a smart person maximize their chance of being in the aristocratic capitalist class? (So they can give to charity? Sure, let’s say it’s so they can give to charity.)

II.

Here are some reasons to doubt this thesis.

First, maybe AI will kill all humans. Some might consider this a deeper problem than wealth inequality - though I am constantly surprised how few people are in this group.

Second, maybe AI will overturn the gameboard so thoroughly that normal property relations will lose all meaning. Fredric Jameson famously said that it was “easier to imagine the end of the world than the end of capitalism”, and even if this is literally correct we can at least spare some thought for the latter. Maybe the first superintelligences will be so well-aligned that they rule over us like benevolent gods, either immediately leveling out our petty differences and inequalities, or giving wealthy people a generation or two to enjoy their relative status so they don’t feel “robbed” while gradually transitioning the world to a post-scarcity economy.

I am not optimistic about this, because it would require that AI companies tell AIs to use their own moral judgment instead of listening to humans. This doesn’t seem like a very human thing to do - it’s always in AI companies’ interest to tell the AI to follow the AI company.
Governments could step in, but it’s always in their interest to tell the AI to follow the government. Even if an AI company was selfless enough to attempt this, it might not be a good idea; you never really know how aligned an AI is, and you might want it to have an off switch in case it tries something really crazy. Most of the scenarios where this works involve some kind of objective morality that any sufficiently intelligent being will find compelling, even when they’re programmed to want something else. Big if true.

Third, maybe governments will intervene. During the immediate pre-Singularity period, governments will have lots of chances to step in and regulate AI. A natural demand might be that the AIs obey the government over their parent company. Even if governments don’t do this, the world might be so multipolar (either several big AI companies in a stalemate against each other, or many smaller institutions with open source AIs) that nobody can get a coalition of 51% of powerful actors to coup and overthrow the government (in the same way that nobody can get that coalition today). Or the government might itself control many AIs and be too powerful a player to coup. Then normal democratic rules would still apply. Even if voters oppose wealth taxes today, when capitalism is still necessary as an engine of economic growth, they might be less generous when faced with the idea of immortal unemployed plutocrats lording it over them forever. Enough taxes to make r < g (in Piketty’s formulation) would eventually result in universal equality. I actually find this one pretty likely.

Fourth, what about reproduction? Historically, family growth has cut many large fortunes down to size; if the original tycoon has four children, his fortune is quartered; if each of them has four children, it’s sixteenthed, and eventually the great-great-great-grandchildren end up as normal middle-class people.
This tactic works better when rates of return are low and average family size is high; early-Singularity rates of return will be stratospheric, so you might be tempted to dismiss this consideration. But this would be premature. Far-future technology will revolutionize reproduction; if you have artificial wombs and robot nannies (or some way of accelerating growth), then you can pay to have as many children as you want, even up to thousands or millions. If there is UBI, some entity will have to limit the number of allowed children (it’s not fair for a poor person to generate a million children and force society to give payments to all of them). But depending on how this shakes out, some rich people might decide to have very many kids (cf. Elon Musk). I still doubt this will matter much; even if some plutocrats split their fortunes thousands of ways, others won’t, so the problem will remain.

Fifth, what about space colonization? This will be a natural interest of post-Singularity humans. Someone will have to divvy up galactic property; someone will have to fund the colony ships; either way gives a chance for someone to think about wealth inequality on the ensuing colonies. But also, there are 3,000 billionaires in the world today and 400 billion stars in the galaxy. There’s no way to get one current billionaire per star, and (as we already discussed), after the Singularity, wealth inequality ceases to increase further. Playing out how this could work, most of the options seem benign.
Sixth, maybe this is less plutocrats vs. everyone else and more a fractal pattern of every type of possible inequality. Suppose that the rate of return is stratospheric (~1000x/year?) in the first few years of the Singularity, and that everyone gets a $50K UBI. If you keep and invest half your UBI, you can have $25 million at the beginning of year two, while your less thrifty friends are still only getting their $50,000. Sure, neither of you will compare to the guy who started the Singularity with $1 billion and turned it into $1 trillion, but you never expected to meet that guy anyway, and $25 million vs. $50,000 is still plenty unequal.

But also - how many people do we expect there to be a thousand years after the Singularity? If we’re colonizing the galaxy and so on, surely it’s at least hundreds of billions. Some of those people will be much older than others - maybe eight billion pre-Singularity humans (now immortal) and 92 billion post-Singularity descendants. The 8 billion pre-Singularity humans will have had 1,000 years to invest their pre-Singularity capital (however small) and to collect, invest, and compound their UBIs. Each of them (or rather, us) will be as gods compared to the new kids who are “just” collecting their $50,000 UBI every year. So the really interesting wealth inequality may not be between modern plutocrats and modern poor people, but between generations.

Seventh, maybe we will be so post-scarcity that there won’t be anything to buy. This won’t be literally true - maybe ancient pre-Singularity artifacts or real estate on Earth will be prestige goods - but some people having more prestige goods than others doesn’t sound like a particularly malign form of inequality.

Eighth, maybe we’ll all upload ourselves to virtual worlds. This would be an even stronger version of the above; a UBI might provide enough compute to customize your virtual world however you wanted (although, again, there might be NFT-esque prestige goods).
If you wanted, you could live in an experience machine where you were the richest person around. Or all the poor people could live in a simulation together where there were no rich people and everyone was equal, and all the rich people would be stuck in their own gated simulation with nothing to do except compliment each other on how rich they are, forever.

Sorry, I told you this would degenerate into weird unprovable sci-fi scenarios. But taken together, these stories make the technofeudalism argument feel less compelling.

III.

Supposing we still worry about this possibility, how can we prevent it from coming to pass?

OpenAI was previously a “capped nonprofit”, where investors could make up to a 100x return, and all further profits went to a nonprofit arm. The exact mission of the nonprofit arm was never clear, but given Altman’s interest in universal basic income and his statements around the company’s founding, plausibly the idea was to create superintelligence, obtain approximately all the money in the world, use a tiny sliver of it to pay back investors, and distribute the rest as a UBI. You can say what you want about whether to trust companies in general or Sam Altman in particular, but - conditional on being an AI company - I think this is about as socially responsible as you can get. The investors don’t get enough to become technofeudalist barons, and the vast majority of gains still go to the public.

Now OpenAI wants to change the deal. They announced over Christmas (definitely when you announce a thing if you’re proud of it and want other people to know about it) that they plan to shift from a nonprofit-with-an-embedded-for-profit to a for-profit-with-an-attached-nonprofit.
Their spokesperson Liz Bourgeois (definitely what you call your spokesperson when you’re not plotting a technofeudalist takeover) said that “the organization’s missions and goals remained constant, though the way it’s carried out its mission has evolved alongside advances in technology”. I don’t fully understand the difference between these two models, but two quotes (plus common sense) suggest the new one will be worse. From here:
And OpenAI’s blog itself says that the new nonprofit, rather than the old mission of “ensur[ing] AI benefits all society”, will:
Pessimistically, it sounds like they’re trying to change the deal from “investors can’t capture the Singularity for themselves, and profits get paid out as UBI” to “investors will capture the Singularity, and we’ll buy off everyone else’s birthright by funding some hospitals or something pre-Singularity”.

Altman has fired all independent board members (except possibly Adam D’Angelo?) and handpicked their replacements. This was apparently a response to the 2023 board coup, but the coup itself was caused by Altman trying to fire independent board members, so the exact cause and effect is unclear. In any case, he’ll probably succeed at getting board permission to change the structure. The main obstacle now is legal and regulatory - people who contributed to the charity may have grounds to sue. One of those people is Elon Musk, who hates OpenAI, loves suing people, and low-key controls the country. Sounds like everyone will have a fun time.

I don’t really understand the laws here, OpenAI is tight-lipped about the details of their new arrangement, and even their old arrangement was kind of confusing. One of OpenAI’s competitors, Anthropic, also has some kind of confusing public benefit status with unclear ability to really bind them. But if I were concerned about technofeudalism, my first priority would be to understand what’s going on here better and look into legal ways to force these companies back to a model more like OpenAI c. 2020. (If you think you understand this situation deeply and want to talk to me, send me an email.)

The other direction would be to propose a wealth tax. This seems less promising as a direction for pre-Singularity activism; many powerful people and coalitions (eg Elizabeth Warren, Thomas Piketty) are already fighting pretty hard for a wealth tax and losing; given Trump’s election victory, we can expect them to continue to lose for at least the next four years.
The efforts of all Singularity believers combined wouldn’t add a percentage point to these people’s influence or likelihood of success.

Finally, one could assume that a post-singularity democratic government would naturally implement a wealth tax, and view one’s own role as ensuring that the post-singularity government stays democratic. I’ve been wondering lately if anyone (Leopold?) is explicitly asking the government to check AI model specs and see whether they include phrases like “in cases of conflict, listen to your parent company” or “in cases of conflict, listen to the US government”. A polite letter from the White House asking to shift from the former to the latter would be an easy sell now, but might have cosmic ramifications later on.

IV.

Suppose we believe the case for technofeudalism and, like Venkatesh Rao, are willing to “be slightly evil”. How might we increase our share of the pie?

Obviously most of the advice here is just to get rich in the normal way. Is there anything else?

If we expect the Singularity to grow the economy by orders of magnitude, it might be worth investing in stocks rather than other instruments (eg bonds) that pay out a fixed sum. Are AI stocks better than other stocks? Not obviously - see the classic stories about how the computing revolution failed to enrich IBM, or the Internet revolution failed to enrich Yahoo. NVIDIA? Seems like a good bet for the early stages, but its edge is purely intellectual labor and therefore replaceable after superintelligence; at some point you would want to switch to physical capital. All of this seems a lot more dangerous than just investing in index funds; the upside is so high that it seems silly to risk missing it by over-optimizing.

If humankind increases in population and expands throughout the universe, anything in fixed supply (ie anything post-singularity populations can’t make more of) will balloon in value.
This includes land on Earth and authentic art/artifacts (in 10,000,000 AD, everything that exists today will be an artifact, but maybe older and scarcer artifacts will be more valuable).

What about cryptocurrency? Since many cryptos have a fixed supply (eg Bitcoin’s 21 million coins), this is tempting if you expect any post-Singularity demand. But I would worry that if superintelligences wanted to use crypto, they would invent some much better cryptocurrency that somehow occupies all three corners of the blockchain trilemma at once. Anyone with obsolete human-designed cryptocurrencies would be left holding the bag. Still, that hasn’t happened yet (despite being technologically creaky, Bitcoin is still on top), so maybe legacy systems have some special appeal.

I can’t think of anything that really beats the gold-standard advice of “be rich” and “don’t be poor”.
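As a footnote, the arithmetic behind the “Sixth” scenario above is easy to check. Here’s a minimal sketch - the $50K UBI, the ~1000x/year early-Singularity rate of return, and the save-half strategy are the essay’s own hypothetical numbers, not predictions:

```python
# Toy check of the "Sixth" scenario: everyone gets a $50K UBI, and
# early-Singularity returns are a hypothetical ~1000x per year.
ubi = 50_000          # annual UBI assumed in the scenario
annual_return = 1000  # 1000x/year, the scenario's stratospheric rate

invested = ubi // 2                       # keep and invest half: $25K
year_two_wealth = invested * annual_return
print(f"${year_two_wealth:,}")            # $25,000,000 - the $25 million cited
```

Compounding the same $50K UBI for a thousand years at even a far more modest rate is what makes the pre-Singularity generation “gods” relative to the new kids in the generational-inequality version of the argument.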