Astral Codex Ten - Contra DeBoer On Movement Shell Games
Contra DeBoer On Movement Shell Games"Lots of alcoholics want to quit in principle, but only some join AA"Followup to: In Continued Defense Of Effective Altruism Freddie deBoer says effective altruism is “a shell game”:
In other words, everyone agrees with doing good, so effective altruism can’t be judged on that. Presumably everyone agrees with supporting charities that cure malaria or whatever, so effective altruism can’t be judged on that. So you have to go to its non-widely-held beliefs to judge it, and those are things like animal suffering, existential risk, and AI. And (Freddie thinks) those beliefs are dumb. Therefore, effective altruism is bad. (As always, I’ve tried to sum up the argument fairly, but read the original post to make sure.)

Here are some of my objections to Freddie’s point (I already posted some of this as comments on his post):

1: It’s actually very easy to define effective altruism in a way that separates it from universally-held beliefs

For example (warning: I’m just mouthing off here, not citing some universally-recognized EA Constitution Of Principles):

1. Donate a fixed and considered amount of your income (traditionally 10%) to charity, or take a job in a charitable field.
2. Think very hard about which charities do the most good - actually do the math, rather than going off vibes.
3. Do this together with a community of other people trying to do the same.
I think less than a tenth of people do (1), less than a tenth of those people do (2), and less than a tenth of people who would hypothetically endorse both of those get to (3). I think most of the people who do all three of these would self-identify as effective altruists (maybe adjusted for EA being too small to fully capture any demographic?) and most of the people who don’t, wouldn’t.

Step 2 is the interesting one. It might not fully capture what I mean: if someone tries to do the math, but values all foreigners’ lives at zero, maybe that’s so wide a gulf that they don’t belong in the same group. But otherwise I’m pretty ecumenical about “as long as you’re trying”.

This also explains why I’m less impressed by the global poverty / x-risk split than everyone else. Once you stop going off vibes and you try serious analysis, you find that (under lots of assumptions) the calculations come out in favor of x-risk mitigation. There are assumptions you can add and alternate methods you can use to avoid that conclusion. But it’s a temptation you run into. Anyone who hasn’t felt the temptation hasn’t tried the serious analysis.

Real life keeps proving me right on this. When I talk to the average person who says “I hate how EAs focus on AI stuff and not mosquito nets”, I ask “So you’re donating to mosquito nets, right?” and they almost never are. When I talk to people who genuinely believe in the AI stuff, they’ll tell me about how they spent ten hours in front of a spreadsheet last month trying to decide whether to send their yearly donation to an x-risk charity or a malaria charity, but there were so many considerations that they gave up and donated to both.

2: Part of the role of EA is as a social technology for getting you to do the thing that everyone says they want to do in principle

I talk a big talk about donating to charity. But I probably wouldn’t do it much if I hadn’t taken the Giving What We Can pledge (a vow to give 10% of your income per year) all those years ago. It never feels like the right time. There’s always something else I need the money for. Sometimes I get unexpected windfalls, donate them to charity while expecting to also make my usual end-of-year donation, and then - having fulfilled the letter of my pledge - come up with an excuse not to make my usual end-of-year donation too.

Cause evaluation works the same way. Every year, I feel bad free-riding off GiveWell. I tell myself I’m going to really look into charities, find the niche underexplored ones that are neglected even by other EAs. Every year (except when I announce ACX Grants and can’t get out of it), I remember on December 27th that I haven’t done any of that yet, grumble, and give to whoever GiveWell puts first (or sometimes EA Funds).

And I’m a terrible vegetarian. If there’s meat in front of me, I’ll eat it. Luckily I’ve cultivated an EA friend group full of vegetarians and pescetarians, and they usually don’t place meat in front of me. My friends will cook me delicious Swedish meatballs made with Impossible Burger, or tell me where to find the best fake turkey for Thanksgiving (it’s Quorn Meatless Roast). And the Good Food Institute (an EA-supported charity) helps ensure I get ever tastier fake meat every year.

Everyone says they want to be a good person and donate to charity and do the right thing. EAs say this too. But nobody stumbles into it by accident. You have to seek out the social technology, then use it.
I think this is the role of the wider community - as a sort of Alcoholics Anonymous, giving people a structure that makes doing the right thing easier than not doing it. Lots of alcoholics want to quit in principle, but only some join AA. I think there’s a similar level of difference between someone who vaguely endorses the idea of giving to charity, and someone who commits to a particular toolbox of social technology to make it happen. (I admit other groups have their own toolboxes of social technology to encourage doing good, including religions and political groups. Any group with any toolbox has earned the right to call itself meaningfully distinct from the masses of vague-endorsers.)

Linking this back to the three-point definition above, this is why I went overboard with descriptors like “fixed and considered amount of your income” or “think very hard about the problem”. Lots of people want to do good; what separates EAs is trying to do it systematically. We do it systematically because we’ve found the systems are the only thing preventing us from half-assing them or not doing them at all. (Or at least this is my experience; some other people are saints and don’t need any of this.)

3: It’s worthwhile to distinguish the people who focus on a belief from the people who hold it

Everyone wants to end homelessness. But there’s a group near me called the Coalition To End Homelessness. Are these people just virtue-signaling? Is it bad for their coalition to appropriate something everyone believes?

Everyone wants to end homelessness. But I assume the Coalition does things - like run homeless shelters, hold donation drives, and talk to policy-makers - that not everyone does. If the people in groups like that called themselves Homelessness Enders, and had Homelessness Ender meetups, and tried to convince you that you, too, should become a Homelessness Ender and go to their meetings and participate in their donation drives - this seems like a fine thing for them to do, even though everyone wants to end homelessness.

I want to end homelessness, but I don’t claim to be a Homelessness Ender. It’s not something I put much thought into, or work hard on. If the Homelessness Enders tried to recruit me, I would be facing a real choice about whether to become a different kind of person - one who prioritizes ending homelessness above other things - and whether to apply social pressure to myself to turn into the kind of person who works on the problem.

4: It’s tautological that once you take out the parts of a movement everyone agrees with, you’re left with controversial parts that many people hate

…

5: The “uselessness” of effective altruism as a category disappears when you zoom in and notice it’s made out of parts

“Why do we need effective altruism? Everyone agrees you should do good charity!” Effective altruism is composed of lots of organizations like GiveWell and GivingWhatWeCan and 80,000 Hours and AI Impacts. Ask the question for each one of them:

Why do we need GiveWell? To help evaluate which charities are most effective. There’s no contradiction between universal support for charity and needing an organization like that.

Why do we need GivingWhatWeCan? To encourage people to donate and help them commit. There’s no contradiction there either.

Why do we need 80,000 Hours? To help people figure out what jobs have the highest positive impact on the world. Still no contradiction.

Why do we need AI Impacts? To try to predict the future course of advanced AI. No contradiction there either.
Why do we need the average effective altruist who donates a little bit each year and tries to participate in discussion on EA Forum? Because they’re the foundation that supports everyone else, plus they give some money and occasionally make good comments.

You could imagine a world where all these same organizations and people existed, but none of them used the label “effective altruism”. But it would be a weird world. All these groups support each other, always in spirit but sometimes also financially. Staff move from one to another. There are conferences where they all meet and talk about their common interest of promoting effective charitable work. What are you supposed to call the conference? The Conference For The Extensional Set Consisting Of GiveWell, GivingWhatWeCan, 80,000 Hours, AI Impacts, And A Few Dozen Other Groups We Won’t Bother Naming, But This Really Is An Extensional Definition, Trust Us?

Freddie has a piece complaining that woke SJWs get angry when people call them “woke” or “SJW”. He titles it Please Just F@#king Tell Me What Term I Am Allowed To Use For The Sweeping Social And Political Changes You Demand. His complaint, which I think is valid, is that if a group is obviously a cohesive unit that shares basic assumptions and pushes a unified program, people will want to talk about them. If you refuse to name yourself or admit you form a natural category, it’s annoying, and you lose the right to complain when other people nonconsensually name you just so they can talk about you at all. I was tempted to call this post “Please Just F@#king Tell Me What Term I Am Allowed To Use For The Sweeping Social And Political Changes I Demand”.

6: The ideology is never the movement

I admit there’s an awkwardness here, in that EA is both a philosophy and a social cluster. Bill Gates follows the philosophy, but doesn’t associate with the social cluster. Is he “an EA” or not? I lean towards “yes”, but it’s an awkward answer that would be misleading without more clarification.

But this isn’t EA’s fault. It’s an inevitable problem with all movements. Camille Paglia calls herself a feminist and shares foundational feminist beliefs, but she hates all the other feminists and vice versa. She thinks feminists should stop criticizing men, admit gender is mostly biological, stop talking about rape culture, and teach women to solve their own problems. She also has some random right-wing political beliefs like doubting global warming. So is she “a feminist” or not? I don’t know. Marginally yes? She sure seems to think a lot about women, but probably wouldn’t be welcome at the local NOW chapter dinner.

I sometimes describe myself as “quasi-libertarian”. On most political issues, I try to err on the side of more freedom, and I think markets are pretty great. But I really don’t care about taxes, I have only the faintest idea how guns work, I voted for Obama and Biden, and I find the sort of people who go to Libertarian Party meetings to be weird aliens. Am I a libertarian or not? This is why I just say “quasi-libertarian”.

Freddie deBoer thinks we need to build more housing. But he hates most YIMBYs (1, 2, 3, 4). He writes:
I agree with Freddie: it’s better to define coalitions by what people believe than by social group. If that’s true, Bill Gates is an EA. But I also agree with Freddie that this is hard, and the social group matters a lot in real life too. In that sense, Bill Gates isn’t an EA.

EA probably screwed this up worse than most other groups. I don’t think a movement our size is capable of rebranding. We just have to eat the loss. If we were optimizing entirely for clarity and not for attractive-soundingness, I’d go for Systematic Altruism on the one side, and The Network Of People Who All Pursue Systematic Altruism Together In A Way Causally Downstream Of Toby Ord, Will MacAskill, And Nick Bostrom (TONOPWAPSATIAWCDOTOWMAANB) on the other. In real life I have no solution for these kinds of ambiguities; language is an imperfect medium of communication.

7: Maybe the solution is to look at the marginal effect of more vs. less of a movement

Yesterday I argued that effective altruism had saved hundreds of thousands of lives, so people should celebrate its successes rather than focusing on SBF and a few other failures. I checked to see if I was being a giant hypocrite, and came up with the following: wokeness is just a modern intensification of age-old anti-racism. And anti-racism has even more achievements than effective altruism: it’s freed the slaves, ended segregation, etc. But people (including me) mostly criticize wokeness for its comparatively-small failures, like academics getting unfairly cancelled. Why should people judge effective altruism on its big successes, but anti-racism on its small failures?

One answer: don’t have opinions on movements at all, judge each policy proposal individually. Then you can support freeing the slaves, but oppose cancel culture. This is correct and virtuous, but misses something. Most change is effected by big movements; a lot of your impact consists of which movements you join and support, vs. which movements you put down and oppose.

Maybe a better answer is to judge movements on the marginal unit of power. An anti-woke person believes that giving anti-racism another unit of power beyond what it has right now isn’t going to free any more slaves, it’s just going to make cancel culture more powerful. I don’t know exactly what it means to give effective altruism another marginal unit of power, although if we hammered it out I’d probably support it.

Instead, I’ll make the weaker argument that you should, personally, think about how to make the world a better place, and if you notice you’re not doing as good a job as you want, consider using effective altruism’s tools. I think on the margin this is good, and EA’s past successes are a good guide to what another marginal unit of support would produce. The problems of the world are so vast that all of EA’s billions of dollars have barely budged the margin; an extra bed net still does almost as much good today as it did in 2013 when the movement was founded. A marginal AI safety researcher is worth less now than in 2013, but there are still only a few hundred (maybe a thousand now) in the world. You get different answers if you apply the marginal unit of support to broadening the movement’s base or intensifying the true believers; maybe this is part of why all debates are bravery debates.