Highlights From The Comments On The Repugnant Conclusion And WWOTF
Original post here.

1: Petey writes:
This is a good point, but two responses. First, for me the conclusion’s repugnance doesn’t hinge on the lives of the people involved being especially bad. It hinges on people having to be sadder and poorer than the alternative, their standard of living forever capped, just in order to tile the world with as many warm bodies as possible. I genuinely don’t care how big the population is. I don’t think you can do harm to potential people by not causing them to come into existence. Hurting actual people in order to please potential people seems plenty repugnant to me regardless of the exact level of the injury.

Second, MacAskill actually cites some research about where we should put the zero point. Weirdly, it’s not in the section about the repugnant conclusion, it’s in a separate section about whether we should ascribe the future positive value. In one study, researchers ask people to rate their lives on a 1-10 scale; in another, they ask people to rate where they think the neutral point is, below which being alive no longer has positive value. These aren’t the same people, so we can’t take this too seriously, but if we combine the two studies then about 5-10% of people’s lives are below neutral.

Another study contacted people at random times during their day and asked them whether they would like to skip over their current activity (eg sleepwalk through work, then “wake up” once they got home). Then they compared these in various ways to see whether people would want to skip their entire lives, and about 12% of people did. I don’t entirely understand this study and I’m only repeating it for the nominative determinism value - one of the authors was named Dr. Killingsworth.

There are also a few studies that just ask this question directly; apparently 16% of Americans say their lives contain more suffering than happiness, 44% say even, and 40% say more happiness than suffering; nine percent wish they had never been born. A replication in India found similar numbers.

Based on all of this, I think if we trust this methodology about 10% of people live net negative lives today, which means the neutral point the Repugnant Conclusion would force us down to is at about the tenth percentile of the current population. This doesn’t quite make sense, because you would think the tenth percentile of America and the tenth percentile of India are very different; there could be positional effects going on here, or it could be that India has some other advantages counterbalancing its poverty (better at family/community/religion?) and so tenth-percentile Indians and Americans are about equally happy.

Happiness isn’t exactly the same as income, but if we assume they sort of correlate, it’s worth pointing out that someone at the tenth percentile of the US income distribution makes about $15,000. So maybe the average person in the Repugnant Conclusion would live a life similar to, or have a happiness level the same as, the average American who makes $15,000. Another way of thinking about this: about 8% of Americans are depressed, so the tenth-percentile American is just barely above the threshold for a depression diagnosis; we might expect the average Repugnant Conclusion resident to be in a similar state.

2: Jack Johnson writes:
I think this way of thinking about things is understandable but subtly wrong, and that the “now equalize happiness” step in the Repugnant Conclusion is more defensible than communism or other forms of real-life equalizing. In the Repugnant Conclusion, we’re not creating a world, then redistributing resources equally. We’re asking which of two worlds to create. It’s only coincidence that we were thinking of the unequal one first.

Imagine we thought about them in the opposite order. Start with World P, with 10 billion people, all happiness level 95. Would you like to switch to World Q, which has 5 billion people of happiness level 80 plus 5 billion of happiness level 100? If so, why? You’re just choosing half the people at random, making their lives a little better, and then making the lives of the other half a lot worse, while on average leaving everyone worse off. MacAskill calls the necessary assumption “non-anti-egalitarianism”, ie you don’t think equality is so bad in and of itself that you would be willing to make the world worse off on average just to avoid equality. While you can always come up with justifications for this (maybe the lack of equality creates something to strive for and gives life meaning, or whatever) I don’t think most people would naturally support this form of anti-egalitarianism if they didn’t know they needed it to “win” the thought experiment.

Communism wants to take stuff away from people who have it for some specific reason (maybe because they earned it), and (according to its opponents) makes people on average worse off. In the thought experiment, nothing is being taken away (because the “losers” never non-counterfactually had it), there was never any reason for half the population to have more than the other half, and it makes people on average better off. So we can’t use our anti-communism intuitions to reject the equalizing step of the Repugnant Conclusion.
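To make the “worse off on average” step explicit, here is a trivial sketch using the happiness numbers from the thought experiment (my own arithmetic, not anything from MacAskill’s book):

```python
# Hypothetical happiness levels from the World P / World Q comparison above.
world_p = [95] * 10              # World P: 10 (billion) people, all at happiness 95
world_q = [80] * 5 + [100] * 5   # World Q: 5 (billion) at 80 plus 5 (billion) at 100

avg_p = sum(world_p) / len(world_p)  # 95.0
avg_q = sum(world_q) / len(world_q)  # 90.0

# Moving from P to Q lowers both average and total happiness, so preferring Q
# requires valuing inequality for its own sake ("anti-egalitarianism").
print(avg_p, avg_q)  # 95.0 90.0
```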
3: Regarding MacAskill’s thought experiment, intended to show that creating happy people is net good, Blacktrance writes:

This is a fascinating analogy, but I’m not sure it’s true. If playing Civ and losing were genuinely exactly equal in utility to going to the museum, then it might be true that playing Civ and winning dominates it. I agree with Blacktrance that this doesn’t feel true, but I think this is just because I’m bad at estimating utilities and they’re so close together that they don’t register as different to me.

4: MartinW writes:
I should stress that even the people who accept the repugnant conclusion don’t believe that “preventing the existence of a future person is as bad as killing an existing person”; in many years of talking to weird utilitarians, I have never heard someone assert this.

More generally, I think that talk of “moral obligation” is misleading here. If you accept the repugnant conclusion, creating new people is good. Other things that are good include donating money to charity, being vegetarian, spending time with elderly people, donating your kidney, and living a zero-carbon lifestyle. Basically nobody does all these things, and most people have an attitude of “it is admirable to do this stuff but you don’t have to”. Anyone who did all this stuff would be very strange and probably get a Larissa MacFarquhar profile about them. If having children is good, it would be another thing in this category.

In fact, it’s worth pointing out how incredibly unlikely it is that your decision to have children has an expected utility of exactly zero. Either you believe creating happy people is good in and of itself, or you believe in the underpopulation crisis, or you believe in the overpopulation crisis, or maybe your kid will become a doctor and save lives, or maybe your kid will become a criminal and murder people. When you add up the expected utilities of all of that, it would be quite surprising if they came out to exactly zero. But that means having a child is either mildly-positive-utility or mildly-negative-utility. Unless you want to ban people from having kids / require them to do so, you had better get on board with the program of “some things can have nonzero utility but also be optional”.

Also, a few commenters point out that even if you did have an obligation to have children, you would probably have an even stronger obligation to spend that money saving other people’s children (eg donating it to orphanages, etc).

5: Rana Dexsin writes:
You are the first person I’ve ever met or heard of who genuinely has average utilitarian philosophical intuitions. I feel like you should be in a museum somewhere. Also, I hope no one ever puts you in charge of Hell.

6: Magic9Mushroom writes:
Thanks, I had forgotten about that. I think I am going to go with “morality prohibits bringing below-zero-happiness people into existence, and says nothing at all about bringing new above-zero-happiness people into existence; we’ll make decisions about those based on how we’re feeling that day and how likely it is to lead to some terrible result down the line.”

7: hammerspacetime writes:
There’s a pragmatic discount rate, where we discount future actions based on our uncertainty about whether we can do them at all. I am near-certain that if I give a beggar $100 today, he will get the $100. But if I leave $100 in a bank with a will saying that it should be given to a poor person in the year 5000 AD, someone could steal it, the bank could go out of business, the bank could lose my will, humankind could go extinct, etc. If there’s only a 1% chance that money saved in this way will really reach its target, then we have an implicit 99% discount rate per 3000 years.

There’s been some debate about whether we should additionally have an explicit discount rate, where we count future people as genuinely less important than us. Most people come out against, because why should we? It doesn’t intuitively seem true that the suffering of future people matters less than the suffering of people today. Eliezer Yudkowsky and Robin Hanson had an interesting debate about this in 2008; you can read Eliezer here and Robin here. I think Robin later admitted that his view meant people in the past were much more valuable than people today, so much so that we should let an entire continent’s worth of present people die in order to prevent a caveman from stubbing his toe, and that he sort of kind of endorses this conclusion; see here.
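For concreteness, here is a minimal sketch (my own, not from the original post) of how a survival probability over a long horizon translates into an implied per-year pragmatic discount rate:

```python
# If a $100 bequest aimed at the year 5000 AD has only a 1% chance of ever
# reaching its target, what per-year discount rate does that imply?
# (Illustrative numbers from the paragraph above; the 3000-year horizon is approximate.)

def implied_annual_discount_rate(survival_prob: float, years: float) -> float:
    """Return the annual rate r such that (1 - r) ** years == survival_prob."""
    return 1 - survival_prob ** (1 / years)

rate = implied_annual_discount_rate(0.01, 3000)
print(f"Implied pragmatic discount rate: {rate:.4%} per year")  # ~0.15% per year
```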
8: Hari Seldon writes:

I also thought about that when reading this! My main concern is that it’s hard to come up with a model where the future doesn’t have more beings in it than the present. The universe is still relatively young. Suppose humankind wipes itself out tomorrow; surely most intelligent life in the universe will be aliens who live after this point?

But I think something like the Grabby Aliens model could explain this: intelligent species arise relatively young in the universe’s history, get replaced by non-conscious AIs, and the AIs spread across the universe until there are no more uncolonized stars to spawn biological life. This is really awkward because it suggests AIs can’t be conscious - not just that one particular AI design isn’t conscious, but that no alien race will design a conscious AI. An alternative possibility is that AIs naturally remain single hiveminds, so that most individuals are biological lifeforms even if AI eventually dominates the galaxy. But how could an AI remain a single hivemind when spread across distances so vast that the lightspeed limit hinders communication? I’m not sure how to resolve this except that maybe some idiot destroys the universe in the next few hundred million years.

9: David Chapman and many other people took me to be attacking philosophy:

.@slatestarcodex vs. philosophy
[philosophy is bad. don’t do it. gently ridicule anyone who takes it seriously] astralcodexten.substack.com/p/book-review-…

I disagree with this. I feel like I was doing philosophy. In a sense all attacks on philosophy are doing philosophy, but I feel like I was doing philosophy even more than the bare minimum that you have to in order to have an opinion at all.

I’m not sure how moral realist vs. anti-realist I am. The best I can do is say I’m some kind of intuitionist. I have some moral intuitions. Maybe some of them are contradictory. Maybe I will abandon some of them when I think about them more clearly. When we do moral philosophy, we’re examining our intuitions to see which ones survive vs. dissolve under logical argument.

The repugnant conclusion tries to collide two intuitions: first, that the series of steps that gets you there are all valid, and second, that the conclusion is bad. If you feel the first intuition very strongly and the second one weakly, then you “have discovered” that the repugnant conclusion is actually okay and you really should be creating lots of mildly happy people. I have the opposite intuitions: I’m less sure about the series of steps than I am that I’m definitely unhappy with the conclusion, and I will reject whatever I need to reject to avoid ending up there.

In fact, I’m not sure what to reject. Most of the simple solutions (eg switch to average utilitarianism) end up somewhere even worse. On the other hand, I know that it’s not impossible to come up with something that satisfies my intuitions, because “just stay at World A” (the 5 billion very happy people) satisfies them just fine. So I think of this as a question of dividing up a surplus. World A is very nice. It seems possible that we can do better than World A. How much better? I’m not sure, because some things which superficially appear better turn out to be worse. Someone who is smarter than I am might be able to come up with a proof that the best we can do according to my intuitions is X amount better - in which case I will acknowledge they are a great philosopher.

Nobody knows exactly what their moral system is - even the very serious utilitarians who accept the Repugnant Conclusion can’t explain their moral system so precisely that a computer could calculate it. We all have speculative guesses about which parts of our intuition we can describe by clear rules, and which ones have to stay vague and “I know it when I see it”. I prefer to leave this part of population ethics vague until someone can find rules that don’t violate my intuitions so blatantly. This isn’t “anti-philosophy”, it’s doing philosophy the same as we do it everywhere else.

10: Mark Lutter:

Nonononono, that’s not what I’m saying, and I’m really upset that everyone thinks this is what I’m saying! C.S. Peirce was a famous logician who was also a racist. He wrote about how you shouldn’t trust natural-language logic in real life, and gave the following example of how it could go wrong:
Peirce is a tragic figure because he was smart enough to discover that logic disconfirmed his biases, but decided to just shrug it off instead of being genuinely open to change.

Now, Peirce was sort of right in that most of the time when “logic” seems to give you a crazy result, you’re doing something wrong - there are plenty of “proofs” that 1 = 2 where the errors are tough to find. Most of the time when you use logic to discover something crazy, you should be skeptical that you’re applying logic right. But if you keep doing it, and it keeps giving you the same answer, then yeah, this is one of the only ways you can ever overcome your own biases and discover new moral truths. If you just laugh it off, you run the risk of ending up like Peirce, spending your life promoting evil, or at least doing less good than you otherwise could.

I’m not saying the Repugnant Conclusion doesn’t matter. I’m saying it’s wrong. I’m saying we should treat it like Russell’s Paradox, where we admit that the system we were working on implies it, agree that this is bad, and try to figure out how best to contain the damage without throwing out the stuff we want, like the ability to do arithmetic or have sets at all.

I hope I’m not doing the C.S. Peirce thing here. I’m relatively confident I’m not, because it’s pretty obvious where the trouble comes from (we are treating bringing new people into existence the same as improving the lives of existing people) and I feel okay saying that is not morally correct (the only people harmed by this are nonexistent people, who don’t exist). But this is really different from just saying “Logic is stupid, never use it”, or even retreating back to some kind of “Oh, morality is just the vague indescribable things we all owe one another” which is deliberately designed to be so weak and unquantifiable that it can never challenge any of your existing beliefs. Don’t do that!

11: Siberian Fox writes:

excellent meme from astralcodexten.substack.com/p/book-review-…
but I still disagree. I'm open to being wrong because it means I get my eyes pecked by seagulls, but I do believe a galactic civilization with trillions of barely worth living meh lives > a bubble utopia of 5000 people around wasteland

I’m also sympathetic to the galactic civilization, but only because it’s glorious. This is different from “it has a lot of people experiencing mild contentment”. Isaac Asimov wrote some books about the Spacers, far-future humans who live the lives of old-timey aristocrats with thousands of robot servants each. Suppose we imagine a civilization of super-Spacers with only one human per thousand star systems - even though all of these star systems are inhabited, full of beautiful monuments, and doing (AI-run) scientific research and creative work. Overall there are only five thousand humans in the galaxy, but galactic civilization would be super-impressive and getting better every day. Sometimes some people die and others are born, but the population stays around five thousand.

Or you can have the city of Jonesboro, Arkansas (population: 80,000) exactly as it currently exists, preserved in some kind of force field. For some reason the economy doesn’t collapse even though it has no trade partners; maybe if you send trucks full of goods into the force field, it sends back trucks full of other goods. Sometimes some people die and others are born, but it never changes much or gets better.

I find that the same part of me that prefers the galactic supercivilization in Siberian Fox’s example also prefers the galactic supercivilization in my example, even though it’s hard to justify with total utilitarianism (there are fewer than 10% as many people; even though their lives are probably much better, I don’t think the intuition depends on them being more than 10x better).

12: Alexander Berger writes:

Interesting/surprising to me that the Repugnant Conclusion is where @slatestarcodex gets off the crazy train:

You can probably predict my response here - I don’t think I’m doing anything that could be described as “getting off the crazy train”. Like if someone is thinking “Scott believes in so many weird things, like AI risk and deregulating the FDA and so on, it’s weird that this is where he’s choosing to stop believing weird things”, I think you’re drawing the weird-thing category in the wrong place. I believe in AI risk because I think it is going to happen. If I’m a biased person, I can choose to bias myself not to believe in it, but if I try to be unbiased, the best I can do is just follow the evidence wherever it leads, even if it goes somewhere crazy.

But in the end I am kind of a moral nonrealist who is playing at moral realism because it seems to help my intuitions be more coherent. If I ever discovered that my moral system required me to torture as many people as possible, I would back off, realize something was wrong, and decide not to play the moral realism game in that particular way. This is what’s happening with the repugnant conclusion.

Maybe Berger was including my belief in eg animal welfare as a crazy train stop. I do think this is different. If my moral code is “suffering is wrong”, and I learn that animals can suffer, that’s a real fact about the universe that I can’t deny without potentially violating my moral code. If someone says “I think we should treat potential people exactly the same as real people”, and I notice my moral intuitions don’t care about this, then you can’t make me.
On questions of truth, or questions of how to genuinely help promote happiness and avoid suffering, I will follow the crazy train to the ends of the earth. But if it’s some weird spur line to “how about we make everyone worse off for no reason?”, I don’t think my epistemic or moral commitments require me to follow it there.

13: Long Disc writes:
Several people had this concern, but I think the chart isn’t exponential, it’s hyperbolic. An exponential chart would have the same growth rate at all times, but I think the growth rate in ancient times was more like 0.1% per year, compared to more like 2% per year today.
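A minimal sketch of the difference (my own illustration, with a made-up “blow-up year” parameter, not the book’s actual chart): under exponential growth the percentage growth rate is constant, while under hyperbolic growth it rises as you approach a finite blow-up date, which roughly matches the move from ~0.1% per year in ancient times to ~2% per year today.

```python
# Exponential: P(t) = P0 * exp(r * t)  -> growth rate (dP/dt)/P = r at all times.
# Hyperbolic:  P(t) = C / (T - t)      -> growth rate (dP/dt)/P = 1 / (T - t),
#                                         which rises as t approaches T.

def hyperbolic_growth_rate(year: float, blow_up_year: float = 2050.0) -> float:
    """Instantaneous growth rate of P(t) = C / (T - t); independent of C."""
    return 1.0 / (blow_up_year - year)

# blow_up_year = 2050 is a hypothetical choice, purely for illustration.
for year in (1, 1000, 1900, 2000):
    print(f"Year {year:>4}: {hyperbolic_growth_rate(year):.2%} per year")
# Year    1: 0.05% per year ... Year 2000: 2.00% per year -- rates rise over
# time, which a single exponential curve cannot reproduce.
```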
14: David Manheim writes:

15: BK writes:
I agree this is a consideration, but I don’t think we should elevate good feedback mechanisms into the be-all and end-all of decision-making criteria.

Consider smashing your toes with a hammer. It has a great feedback mechanism; if you’re not in terrible pain, you probably missed, and you should re-check your aim. In contrast, trying to cure cancer has very poor feedback; although you might have subgoals like “kill tumor cells in a test tube”, you can never be sure that those subgoals are really on the path to curing cancer (lots of things that kill tumor cells in a test tube are useless in real life). But this doesn’t mean that people currently trying to cure cancer should switch to trying to smash their toes with a hammer. If something’s important, then the lack of a good feedback mechanism should worry you, but not necessarily turn you off entirely.

16: Mentat Saboteur writes:
Thanks, now I don’t have to be a long-termist! Heck, if someone can convince me that water doesn’t really damage fancy suits, I won’t have to be an altruist at all!