Astral Codex Ten - More Drowning Children
I.

People love trying to find holes in the drowning child thought experiment. This is natural: it's obvious you should save the child in the scenario, but much less obvious that you should give lots of charity to poor people (as the scenario seems to imply). So there must be some distinction between the two scenarios. But most people's cursory and uninspired attempts to find one fail. For example, some people say the difference is distance; you're close to the drowning child, but far from people dying in Africa. Here are some thought experiments that challenge that:
Here it seems obvious that you should save them, even though they’re all the way in China. Is the problem that “you” are sort of “in” China via your robot, even if not physically? Here’s another example:
Again, the answer is clear even though you’re 3000 miles away. At this point, saying that you’re “virtually” in Dublin seems like a stretch. Here the issue seems to be some sort of entanglement. But it’s hard to say exactly how the entanglement works, and it doesn’t seem to be a simple one-to-one correspondence where you’re the only person who can help. For example:
Here it seems like the sociopathic jerks might as well be furniture - their presence doesn't change your situation compared to the scenario where you're there alone.

II.

TracingWoodgrains draws on a now-deleted essay by Jaibot which discusses the "Copenhagen interpretation of ethics". It argues that by "touching" a situation - a vague term having something to do with causal entanglement - you gain moral obligation for it. If you simply avoid touching it, your moral obligation goes away. I think this explains half the problem, but there's another half it doesn't explain. Consider:
Assume that all unmentioned details are resolved in whatever way makes the thought experiment most unsettling - so for example, maybe the megacity inhabitants are well-intentioned, but haven't hired their own lifeguards because their city is so vast that this is only #999 on their list of causes of death and nobody's gotten around to it yet.

Here I'm split on whether the Copenhagen hypothesis works. A person who lives in the cabin and fails to rescue every child seems much less monstrous than someone who only ever encounters the situation once, even though both of them "touch" the situation exactly as much. Still, as the hypothesis predicts, we are less comfortable with this situation than the normal one where you live far away from the cabin and never worry about it - living near the cabin ("touching" the situation) seems to have some moral impact. Here's somewhere I think Copenhagen more clearly fails:
Here Copenhagen fails to predict a difference between refusing to rescue the 37th kid going past the cabin, vs. refusing to rescue the single kid in your hometown; you are "touching" both equally. But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable.

Again sticking to a purely descriptive account of intuitions, I think this represents a sort of declining marginal utility of moral goods. The first time you rescue a kid, you get lots of personal benefits (feeling good about yourself, being regarded as a hero, etc). By the 37th time, these benefits are played out. If you refuse to rescue a child even when the personal benefits are at their first-time high, we think you must have no moral sense at all. But if you fail to rescue the 37th child, we think this is pretty understandable and similar to what we would do in the same situation.

(This "declining marginal utility" explanation is less natural than something like "the obligation to rescue all those children is ruining my life". But I think it's more accurate; if we come up with a thought experiment where it doesn't ruin your life in any way - where it only takes a few hours from your day and you have enough left to accomplish everything you need - then it still seems harsh to demand someone rescue 37 children every day. And when there is an actual moral obligation - like parenting your own children - we don't accept "it will ruin my life" as an excuse to get out of it.)

III.

So these two descriptive theories - the Copenhagen hypothesis, and the declining marginal utility of moral goods - do a good job explaining our intuitions. But some people leap from there to saying they're also the right prescriptive theories - that they determine what morality really is, and what rules we should follow. I think this is a gigantic error, the worst thing you could possibly do in this situation. These are essentially rules for looking good to other people. To follow them is to say that you will always optimize for seeming cool, no matter how many people you have to kill in order to do it. So for example:
This is all awkward enough that maybe you want to push the Copenhagenness back a step and just refuse to touch the cabin at all. Refuse to inherit it, lock your door, tell the lawyer who says you own it now that he needs to get off your property or else you’ll shoot. But we can still make your life difficult:
The Copenhagen theorist would be in a bind here. You really want to avoid the dam forcing you to “touch” the situation, and then either spend your whole life saving children, or be culpable for failing to do so. But it seems both heartless and pointless to waste your one lobbyist favor on a river redirection which doesn’t change anything about the real world (as many children will die as ever) when you could instead use it to do lots of good. My best bet for how a thoughtful Copenhagener would respond is that they would say you had terrible moral luck by happening to end up where the dam was going to redirect the drowning children; however, this itself caused you to “touch on” the situation and now you can be judged for how you respond (including your cowardly response of trying to redirect the river somewhere else). I don’t buy it.
Here it seems obvious that you are a better person than your neighbor. But then what remains of the "moral luck" explanation? What remains of Copenhagen, where you are blamed for a situation if you touch it? Maybe you have to choose to touch it for it to count? But this seems false; in Singer's original drowning child experiment, you didn't choose to be the only person near the lake when the kid was drowning. It was just a weird coincidence.

In fact, it seems like we all benefit from the same sort of moral luck as the neighbor. Suppose Alice is born in a gated community in the US, to a family making $200,000/year; she goes to her local college, stays in her rich hometown, and eventually makes $200,000/year herself. There are no poor people near her, so she has few moral obligations. But Bob is born in Zimbabwe, to a rare upper-class well-connected Zimbabwean family making $200,000/year; he inherits his father's business and also makes $200,000/year himself. But he lives in the middle of horrible poverty. His housekeeper is dying of some easily-cured disease, all of his school friends are dying of easily-cured diseases, and every day when he goes to work he has to walk over half-dead people screaming for help.

It seems like Alice got lucky by not being Bob; she has no moral obligations, whereas he has many. Suppose that Bob only helps a little bit, enough that we would consider him pretty stingy given his situation - maybe he helps his absolute closest school friend, but lets several other school friends die. And suppose that if Alice were in Bob's situation, she would do even less, but in fact in real life she satisfies all of her (zero) moral obligations. If there's only one spot in Heaven, should it go to Alice or Bob?

Someone who's still desperately trying to preserve Copenhagen would have to say that the "one spot in Heaven" prompt isn't fair - God presumably has His own criteria which exploit His perfect omniscience, but we humans must think about morality on a merely human level. I still don't buy it. For one thing, God isn't using any special omniscient knowledge that we (the people reading this thought experiment) don't also have and use easily. For another, if you're even slightly religious, actually getting the literal spot in Heaven should be one of the top things on your mind when you're deciding whether to be moral or not. And even if you're an atheist, trying to be the sort of person who would get a spot in Heaven, if it existed, seems like a worthier goal than whatever the Copenhagen-follower is doing.

IV.

So again, the question is - what is the right prescriptive theory, one that doesn't just explain moral behavior, but would let us feel dignified and non-idiotic if we followed it? My favorite heuristic for thinking about this is John Rawls' "original position": if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible? So for example, we would probably agree not to commit rape, because we wouldn't know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender. By the same logic, we would agree to save drowning children, because if we were involved in the situation at all, we would have a 50% chance of being the rescuer (minor inconvenience) and a 50% chance of being the child (life or death importance).
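To make the veil-of-ignorance arithmetic concrete, here's a minimal sketch in Python. The 50/50 split comes from the essay; the specific utility numbers are invented for illustration and nothing turns on their exact values, only on the asymmetry between a ruined afternoon and a death:

```python
# Minimal veil-of-ignorance sketch. All utility numbers are
# illustrative assumptions, not anything from the essay; only the
# asymmetry between a minor inconvenience and death matters.

U_RESCUE_COST = -1        # ruined suit, lost afternoon: minor inconvenience
U_DROWN = -1_000_000      # dying as the child: catastrophic
U_SAVED = 0               # being rescued restores the status quo

def expected_utility(p_child: float, deal_in_force: bool) -> float:
    """Ex-ante expected utility of the drowning-child scenario,
    given the probability you turn out to be the child."""
    p_rescuer = 1 - p_child
    if deal_in_force:
        # Rescuers pay a small cost; children get saved.
        return p_rescuer * U_RESCUE_COST + p_child * U_SAVED
    # No deal: rescuers pay nothing; children drown.
    return p_rescuer * 0 + p_child * U_DROWN

# The essay's case: conditional on being in the scenario at all,
# you're equally likely to be the rescuer or the child.
print(expected_utility(0.5, deal_in_force=True))   # -0.5
print(expected_utility(0.5, deal_in_force=False))  # -500000.0
```

On these (arbitrary) numbers the deal wins ex ante by a huge margin, which is the whole force of the original-position argument.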
But we would also agree to save people dying of easily-cured diseases in the Third World, because we wouldn't know if we would be those people either. Everyone would agree to a proposed deal that rich people donate a small fraction of their income to charity, because it would be only a mild inconvenience if they turned out to be rich, but a life-saver if they turned out to be poor. Further, since we wouldn't know whether we would be Alice (low level of moral obligation) or Bob (very high level of moral obligation), we would take out insurance by agreeing that everyone needed to pay the same modest amount into a general pot for helping people. (How much should they pay? Enough to pick the low-hanging fruit and make it so nobody is desperately poor, but not enough to make global capitalism collapse. I think the angelic intelligences would also consider that rich people could defect on the deal after being born, and so try to make the yoke as light as possible.)

A final deal might look like this: we'll all cooperate by sending a bit of our money to a general pot for helping people in terrible situations. And if there's a more urgent situation that group contributions can't help - because, for example, a child is drowning right now and there's only one person close enough to save them - then we'll deputize that one person to save them, and assume it will all even out in the end. (Actually, even better would be to pay that person a reward for their trouble out of the general pot - then there's no unfairness or special obligation on one person rather than another!)

Here we're able to bring back all of those things we rejected earlier - proximity, urgency, being the only person available - not because they determine who is worthy of being saved, but because a coalition that plans to save everybody needs, in an emergency, to act through whoever is available. This is no different from a police force which, learning of a serious crime in progress, asks the officer closest to the scene to respond, even if that officer isn't a specialist in that particular type of crime, or is one minute away from clocking out and it's unfair to make them work overtime.

All of this makes perfect sense - except that the coalition is in arrears, there is no general pot, and most bad things go unprevented. Only the extra "save people close to you" rule, tacked on as an afterthought, still functions, because that one makes people look good when they follow it and is easier to enforce through reputational mechanisms. I think you should probably still save someone close to you (eg drowning), partly because this rule is valuable even on its own (ie it's better to do it than not to), and partly because, since other people are following it, you have a reciprocal obligation to your fellow coalition members (ie you expect that if your child were drowning, someone else would help, so you're free-riding if you don't help them). If you end up at the death cabin, you don't have an obligation to save every single child who passes by, because the coalition never intended the "save drowning children" obligation to be an unusual burden on anyone in particular, and because nobody else is doing this, so you're not betraying fellow coalition members. People may incorrectly think less of you if you don't do it, and you might want to take action to avoid reputational damage, but this isn't a moral obligation.
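The aside about paying the deputized rescuer out of the general pot is just insurance arithmetic. Here's a minimal sketch of it, with every number invented for illustration (coalition size, premium, emergency rate, and reward are all assumptions, not figures from the essay):

```python
# Sketch of the coalition-as-insurance idea, with invented numbers.
# Everyone pays a small premium into a general pot; whoever happens to
# be nearest an emergency is deputized to act and compensated from the
# pot, so proximity assigns a role rather than a special burden.

N_MEMBERS = 1_000_000    # coalition size (assumption)
PREMIUM = 50             # modest contribution per member per year (assumption)
N_EMERGENCIES = 10_000   # yearly emergencies needing an on-the-spot rescuer
REWARD = 200             # compensation paid to each deputized rescuer

pot = N_MEMBERS * PREMIUM
payouts = N_EMERGENCIES * REWARD
remainder = pot - payouts   # left over for lifeguards, bed nets, etc.

print(f"pot: ${pot:,}; rescuer payouts: ${payouts:,}; remainder: ${remainder:,}")
# pot: $50,000,000; rescuer payouts: $2,000,000; remainder: $48,000,000
```

The point of the toy numbers is that compensating the unlucky on-the-spot rescuer is cheap relative to the pot, so the deal can make being near an emergency a paid job rather than a piece of bad moral luck.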
The real answer to this problem is that the coalition should split the cost of hiring a lifeguard - or, if for some reason you are the only person who can be in the area, compensate you for your time. Given that the coalition isn't strong enough to actually do these things, your obligations are limited, and not made any better or worse by living in the cabin vs. further away.

I think it's virtuous, but not obligatory, to behave as if the coalition is still intact, and try to give a portion of your income to some sort of virtual version of the general pot. You could also think of the government as some sort of very distorted, flawed real-life version of the coalition and consider your obligations fulfilled by paying taxes, but I think this is an insult to the angelic intelligences, and you should just go with whatever seems like the closest thing to their original plan without waiting for it to actually be instantiated. I think this is more dignified than the thing where you try to hire someone for $525 to move your cabin to a different location so you don't feel like you're "touching" the problem, or whatever.