Astral Codex Ten - Contra Stone On EA
I.
Lyman Stone wrote an article, Why Effective Altruism Is Bad. You know the story by now; let's start with the first argument:
Stone finds that Google Trends shows that searches for “effective altruism” concentrate most in the San Francisco Bay Area and Boston. So he’s going to see if those two cities have higher charitable giving than average, and use that as his metric of whether EAs give more to charity than other people. He finds that SF and Boston do give more to charity than average, but not by much, and this trend has if anything decreased in the 2010 - present period when effective altruism was active. So, he concludes,
What do I think of this line of argument?

According to Rethink Priorities, the organization that keeps track of this kind of thing, there were about 7,400 active effective altruists in 2020 (90% CI: 4,700 - 10,000). Growth rate was 14% per year but has probably gone down lately, so there are probably around 10,000 now. This matches other sources for high engagement with EA ideas (8,898 people have signed the Giving What We Can pledge).

Suppose that the Bay Area contains 25% of all the effective altruists in the world. That means it has 2,500 highly engaged effective altruists. The total population is about 10 million. So effective altruists are 1/4000th of the Bay Area population. Suppose that the average person gives 3% of their income to charity per year, and the average effective altruist gives 10%. The Bay Area with no effective altruists donates an average of 3%. Add in the 2,500 effective altruists, and the average goes up to . . . about 3.002%.

Stone's graph is in 0.5 pp intervals. So this methodology is way too underpowered to detect any effect even if it existed. How many effective altruists would have to be in the Bay for Stone to notice? If we set the "not just noise" threshold at 0.5 pp, it would take almost 300x this amount, or over 700,000 in the Bay alone. For comparison, the most popular book on effective altruism, Will MacAskill's What We Owe The Future, sold only 100,000 copies in the whole world.

But all of this speculation is unnecessary. There are plenty of data sources that just tell us how much effective altruists donate compared to everyone else. I checked this in an old SSC survey, and the non-EAs (n = 3118) donated an average of 1.5%, compared to the EAs (n = 773) donating an average of 6%.

In general, I think it's a bad idea to try to evaluate rare events by escalating to a population level when you can just check the rare events directly. If you do look at populations, you should do a basic power calculation before reporting your results as meaningful.
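The dilution arithmetic above is easy to check explicitly. Here's a minimal sketch using the post's assumed figures (10 million people, 2,500 EAs, 3% baseline giving, 10% EA giving); it models the EAs as replacing ordinary donors in the population:

```python
# Back-of-envelope check: how much do 2,500 EAs move a metro area's
# average donation rate? All inputs are the post's own assumptions.
bay_population = 10_000_000   # rough Bay Area population
num_eas = 2_500               # assumed highly engaged EAs in the Bay
baseline_rate = 0.03          # average person donates 3% of income
ea_rate = 0.10                # average EA donates 10%

# Swap 2,500 ordinary donors for EAs and recompute the mean.
blended = ((bay_population - num_eas) * baseline_rate
           + num_eas * ea_rate) / bay_population
print(f"blended average: {blended:.5%}")   # ≈ 3.00175%

# How many EAs would it take to shift the average by a detectable 0.5 pp?
detect_threshold = 0.005
eas_needed = detect_threshold * bay_population / (ea_rate - baseline_rate)
print(f"EAs needed for a 0.5 pp shift: {eas_needed:,.0f}")  # ≈ 714,286
```

Either way you round it, the shift from 2,500 EAs is hundreds of times smaller than the 0.5 pp resolution of Stone's chart.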
Stone could potentially still object that movements aren't supposed to gather 10,000 committed adherents and grow at 10% per year. They have to take hold of the population! Capture the minds of the masses! Convert >5% of the population of a major metropolitan area! I don't think effective altruism has succeeded as a mass movement. But I don't think that's its main strategy - for more on this, see the articles under the EA Forum tag "value of movement growth", which explain:
Aren’t movements that don’t capture the population doomed to irrelevance? I don’t think so. Effective altruism has managed to get plenty done with only 10,000 people, because they’re the right 10,000 and they’ve influenced plenty of others.

Stone fails to prove that effective altruists don’t donate more than other people, because he’s using a methodology too underpowered to detect the difference even if it exists. His critique could potentially evolve into an argument that effective altruism hasn’t spread massively throughout the population, but nobody ever claimed that it did.

II.
A few responses:

Technically, it’s only correct to focus on the single most important area if you have a small amount of resources relative to the total amount in the system (Open Phil has $10 billion). Otherwise, you should (for example) spend your first million funding all good shrimp welfare programs until the marginal unfunded shrimp welfare program is worse than the best vaccine program. Then you’ll fund the best vaccine program, and maybe they can absorb another $10 million until they become less valuable than the marginal kidney transplant or whatever.

This sounds theoretical when I put it this way, but if you’ve ever worked in charity it quickly becomes your whole life. It’s all well and good to say “fund kidney transplants”, but actually there are only specific discrete kidney transplant programs, some of them are vastly better than others, and none of them scale to infinity instantaneously or smoothly. The average amount that the charities I deal with most often can absorb is between $100K and $1 million. Again, Open Phil has $10 billion.

But even aside from this technical point, people disagree on really big issues. Some people think animals matter and deserve the same rights as humans. Other people don’t care about them at all. Effective altruism can’t and doesn’t claim to resolve every single ancient philosophical dispute on animal sentience or the nature of rights. It just tries to evaluate whether charities are good. If you care a lot about shrimp, there’s someone at some effective altruist organization who has a strong opinion on exactly which shrimp-related charity saves shrimp most cost-effectively. But nobody (except philosophers, or whatever) can tell you whether to care about shrimp or not.

This is sort of a cop-out: effective altruism does try to get beyond “I want to donate to my local college’s sports team”. But I think that’s because that’s an easy question.
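The allocation logic at the top of this section - fund the best program until it's saturated, then move to the next - is just a greedy loop over marginal value with capacity caps. A toy sketch, where the program names, cost-effectiveness numbers, and capacities are all invented for illustration:

```python
# Each program has a marginal value per dollar and a cap on how much
# money it can actually absorb. All figures below are made up.
programs = [
    # (name, value per dollar, max dollars it can absorb)
    ("best shrimp welfare program", 9.0, 1_000_000),
    ("second shrimp welfare program", 6.5, 500_000),
    ("best vaccine program", 7.0, 10_000_000),
    ("marginal kidney transplant program", 5.0, 2_000_000),
]

def allocate(budget, programs):
    """Greedily fund the highest-value program until it's full, then the next."""
    allocations = {}
    for name, value, capacity in sorted(programs, key=lambda p: -p[1]):
        if budget <= 0:
            break
        grant = min(budget, capacity)   # a program can't absorb past its cap
        allocations[name] = grant
        budget -= grant
    return allocations

print(allocate(12_000_000, programs))
```

With a $12 million budget, the top shrimp program fills up first, the vaccine program absorbs its $10 million, and the remainder spills down to the next programs on the list - the same cascade described above. Real grantmaking is messier (capacities are uncertain, value curves slope rather than cliff), but this is the shape of the problem.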
Usually if somebody says they want to donate there, you can ask “do you really think your local college’s sports team is more important than people starving to death in Sudan?” and they’ll think for a second and say “I guess not”. Whereas if you ask the same question about humans and animals, you’ll get all kinds of answers, and no amount of brief prompting will resolve the disagreement. I think this puts EAs in a few basins of reflective equilibrium, where non-EAs are scattered across the map.

So is there some sense, as Stone suggests, that “so broad a range of priorities [can’t] reasonably be considered a major gain in efficiency”? I think if you compare donations by the set of non-effective-altruist donors and the set of effective-altruist donors, there will be much, much more variance, and different types of variance, among the non-EAs than the EAs. Here’s where most US charity money goes (source):

I don’t think Stone can claim that an EA version of this chart wouldn’t look phenomenally different. But then what’s left of his argument?

III.
The IPA mentioned here is Innovations For Poverty Action, a group that studies how to fight poverty. They’re great and do great work. But IPA doesn’t recommend top charities or direct donations. Go to their website, try to find their recommended charities. There are none. GiveWell does have recommended charities - including ones that they decided to recommend based on IPA’s work - and moves ~$250 million per year to them. If IPA existed, but not GiveWell, the average donor wouldn’t know where to donate, and ~$250 million per year would fail to go to charities that IPA likes. I think from the perspective of people who actually work within this ecosystem, Stone’s concern is like saying “Farms have already solved the making-food problem, so why do we need grocery stores?” (Also, effective altruism funds IPA.)

I’m focusing on IPA here because Stone brought them up, but I think EA does more than this. I don’t think there’s an IPA for figuring out whether asteroid deflection is more cost-effective than biosecurity, whether cow welfare is more effective than chicken welfare, or which AI safety institute to donate to. I think this is because IPA is working on a really specific problem (which kinds of poverty-related interventions work) and EA is working on a different problem (what charities should vaguely utilitarian-minded people donate to?). These are closely related questions, but they’re not the same question - which is why, for example, IPA does (great) research into consumer protection, something EA doesn’t consider comparatively high-impact.

And I’m still focusing on donation to charity, again because it’s what Stone brought up, but EA does other things - like incubating charities, or building networks that affect policy.

IV.
Suppose an EA organization funded a cancer researcher to study some new drug, and that new drug was a perfect universal cure for cancer. Would Stone reject this donation as somehow impure, because it went to a cancer researcher (a white-collar PhD holder)?

EA gives hundreds of millions of dollars directly to malaria treatments that go to the poorest people in the world. It’s also one of the main funders of GiveDirectly, a charity that has given money ($750 million so far) directly to the poorest people in the world. But in addition to giving out bednets directly, it sometimes funds malaria vaccines. In addition to giving to poor Africans, it also funds the people who do the studies to see whether giving to poor Africans works. Some of those are white-collar workers. EA has never been about critiquing the existence of researchers and think tanks.

In fact, this is part of the story of EA’s founding. In 2007, the only charity evaluators accessible to normal people rated charities entirely on how much overhead they had - whether the money went to white-collar people or to sympathetic poor recipients. EAs weren’t the first to point out that this was a very weak way of evaluating charities. But they were the first to make the argument at scale and bring it into the public consciousness, and GiveWell (and to some degree the greater EA movement) was founded on the principle of “what if there was a charity evaluator that did better than just calculate overhead?” In accordance with this history, if you look at Giving What We Can’s List Of Misconceptions About Effective Altruism, their #1 misconception about charity evaluation is that “looking at a charity’s overhead costs is key to evaluating its effectiveness”.

This is another part of my argument that EA is more than just IPA++. For years, the state of the art for charity evaluators was “grade them by how much overhead they had”.
IPA and all the great people working on evidence-based charity at the time didn’t solve that problem - people either used CharityNavigator or did their own research. GiveWell did solve that problem, and that success sparked a broader movement to come up with a philosophy of charity that could solve more problems. Many individuals have always had good philosophies of charity, but I think EA was a step change in doing it at scale and trying to build useful tools / a community around it.

V.
I think if you have to write in bold, with four exclamation points at the end, that you’re not explicitly advocating terrorism, you should step back and think about your assumptions further. So: Should people who worry about global warming bomb coal plants? Should people who worry that Trump is going to destroy American democracy bomb the Republican National Convention? Should people who worry about fertility collapse and underpopulation bomb abortion clinics? EAs aren’t the only group who think there are deeply important causes. But for some reason, people who can think about other problems in Near Mode go crazy when they start thinking about EA.

(Eliezer Yudkowsky has sometimes been accused of wanting to bomb data centers, but he supports international regulations backed by military force - his model is things like Israel bombing Iraq’s nuclear program in the context of global norms limiting nuclear proliferation - not lone wolves. As far as I know, all EAs are united against this kind of thing.)

There are three reasons not to bomb coal plants/data centers/etc. The first is that bombing things is morally wrong. I take this one pretty seriously.

The second is that terrorism doesn’t work. Imagine that someone actually tried to bomb a data center. First of all, I don’t have statistics, but I assume 99% of terrorists get caught at the “your collaborator is an undercover fed” stage. Another 99% get eliminated at the “blown up by poor bomb hygiene and/or a spam text message” stage. And okay, 1/10,000 will destroy a data center, and then what? Google tells me there are 10,978 data centers in the world. After one successful attack, the other 10,977 will get better security. Probably many of these are in China or some other country that it’s not trivial for an American to import high explosives into.

The third is that - did I say terrorism didn’t work? I mean it massively, massively backfires.
Hamas tried terrorism, they frankly did a much better job than we would, and now 52% of the buildings in their entire country have been turned to rubble. Osama bin Laden tried terrorism, also did an impressive job, and the US took over the whole country that had supported him, then took over an unrelated country that seemed like the kinds of guys who might support him, then spent ten years hunting him down and killing him and everyone he had ever associated with.

One f@#king time, a handful of EAs tried promoting their agenda by committing some crimes which were much less bad than terrorism. Along with all the direct suffering they caused, they destroyed EA’s reputation and political influence, drove thousands of people away from the movement, and everything they did remains a giant pit of shame that we’re still in the process of trying to climb our way out of.

Not to bang the same drum again and again, but this is why EA needs to be a coherent philosophy and not just IPA++. You need some kind of theory of what kinds of activism are acceptable and effective, or else people will come up with morally repugnant and incredibly idiotic plans that will definitely backfire and destroy everything you thought you were fighting for. EA hasn’t always been the best at avoiding this failure mode, but at least we manage to outdo our critics.

VI.
Stone moves on to animal welfare:
Again, this is part of why I think it’s useful to have people who think about philosophy, and not just people who do RCTs.

People having kids of their own instead of donating to sperm banks is in some sense an “error” in our evolutionary program. The program just wanted us to reproduce; instead we got a bunch of weird proxy goals like “actually loving kids for their own sake”. Art is another error - I assume we were evolutionarily programmed to care about beauty because, I don’t know, flowers indicate good hunting grounds or something, not because evolution wanted us to paint beautiful pictures. Anyone who cares about a future they will never experience, or about people on far-off continents who they’ll never meet, is in some sense succumbing to “errors” in their evolutionary programming. Stone describes the original mechanisms as “about intra-human dynamics”, but this is cope - they’re about intra-tribal dynamics. Plenty of cultures have been completely happy to enslave and kill people outside their tribes, and nothing in their evolutionary mechanism has told them not to. Does Stone think this, too, is an error?

At some point you’ve got to go beyond evolutionary programming and decide what kind of person you want to be. I want to be the kind of person who cares about my family, about beauty, about people on other continents, and - yes - about animal suffering. This is the reflective equilibrium I’ve landed in after considering all the drives and desires within me, filtering them through my ability to use Reason, and imagining having to justify myself to whatever God may or may not exist.

Stone suggests EAs don’t have answers to a lot of the basic questions around this. I can recommend him various posts like Axiology, Morality, Law, the super-old Consequentialism FAQ, and The Gift We Give To Tomorrow, but I think they’ll only address about half of his questions. The other half of the answers have to come from intuition, common sense, and moral conservatism.
This isn’t embarrassing. Logicians have discovered many fine and helpful logical principles, but can’t 100% answer the problem of skepticism - you can fill in some of the internal links in the chain, but the beginning and end stay shrouded in mystery. This doesn’t mean you can ignore the logical principles we do know. It just means that life is a combination of formally-reasonable and not-formally-reasonable bits. You should follow the formal reason where you have it, and not freak out and collapse into Cartesian doubt where you don’t. This is how I think of morality too.

Again, I really think it’s important to have a philosophy and not just a big pile of RCTs. Our critics make this point better than I ever could. They start with “all this stuff is just common sense, who needs philosophy, the RCTs basically interpret themselves”, then, in the same essay, digress into:
Morality is tough. Converting RCTs - let alone the wide world of things we don’t have RCTs on yet - into actionable suggestions is tough. Many people have tried this. Some have succeeded very well on their own. Effective altruism is a community of people working on this problem together. I’m grateful to have it.

VII.
Stone’s final complaint:
I’ll be excessively cute here: Stone is repeating one of the most common critiques of EA as if it’s his own invention, without checking the long literature of people discussing it and coming up with responses to it. I’m tired enough of this that I’m just going to quote some of what I said the last time I wrote about this argument:
You can find the rest of the post here. I’ve also addressed similar questions at In Continued Defense of Effective Altruism and Effective Altruism As A Tower Of Assumptions.