The real scandal inside Facebook's cross-check program
Here’s your free edition of Platformer for the week — a look at a long-awaited new opinion from Meta’s Oversight Board about how to handle harmful posts from its most prominent users. We love coming to you with a big story once a week. But in recent days, paid subscribers are getting lots more: big scoops about Twitter’s new product roadmap, Elon Musk ordering employees to print out their code, his plans to charge verified users to keep their badges, and the secret history of encrypted DMs on Twitter. We also wrote the definitive account of Twitter’s first round of layoffs. If you value this work, it would mean a lot to us if you bought a monthly or annual subscription. Subscribe now and we’ll send you the link to join us in our chatty Discord server, where we’ve been putting ChatGPT through its paces in our new generative AI channel.
It’s not the program itself — it’s the inequality it perpetuates

Today, let’s talk about a new opinion from Meta’s Oversight Board that attempts to square a tension at the heart of any big social network: on one hand, a desire to treat its users with equality; and on the other, an acknowledgment that in practice some groups of users deserve special treatment.

I.

No company wants to say outright that some of its customers matter more than others. But in practice, on social networks in particular, some customers do — at least when it comes to how their accounts and posts are treated. If the president of a country regularly communicates with their citizens on a platform — particularly a country that might ban or otherwise restrict your business — you want to be extra careful with any steps you take to remove their posts or suspend their account. In short, you don’t want to make a mistake.

It was in this desire not to make mistakes that the company then known as Facebook created “cross-check” — sometimes styled “XCheck” — and implemented it around the world. The program’s existence was first documented by the Wall Street Journal as part of Frances Haugen’s revelations last year. Jeff Horwitz’s article revealed that some employees felt the program offered undue protections to some of the platform’s most prominent users, often allowing harmful content to stay live on the platform for many days before Facebook intervened. Horwitz wrote:
Days after the article’s publication, the Oversight Board — an independent body funded by Meta with the intention of serving as a check on its content moderation and policy decisions — said it would investigate. A week after that, Meta formally asked the board to issue an advisory opinion on what it ought to do with cross-check and the basic idea of giving some posts and accounts an extra layer of review in an effort not to screw up.

More than a year later, on Tuesday, the board delivered its opinion. I’ve recently been critical of the board’s slowness — its 23 members managed to produce just three opinions last quarter — but in this case it wasn’t entirely the board’s fault. As described in the 57-page opinion, Meta dragged its feet during the board’s fact-finding process, in one case waiting almost five months to answer a question about what sorts of accounts qualify for cross-check protection. Even then, Meta offered only “limited aggregate data about each listed entity on the current lists,” the board said. (Meta cited privacy concerns, as companies often do in these situations, but the board said it “believes, and has pointed out to Meta, that these concerns could have been mitigated and more extensive disclosures provided.” In the relatively dry, somber world of communications between Meta and the board, I thought this qualified as a fairly sick burn.)

Despite the dodges and delays, the board eventually obtained a great deal of information about cross-check, and the opinion lays it out in admirably thorough detail. And after laying out the shape of the program, the board reached some important conclusions.

II.

We’ll get to those conclusions in a minute. But first, it’s worth explaining a bit more what cross-check is. Notably, after the Journal’s report, Meta broadened the scope of the program in an effort to reduce the inequality inherent in the system. The board summarizes all this in one neat paragraph:
Creating a second kind of cross-check program and nominally expanding it to all users had obvious public-relations benefits for Meta, which could then argue that the protections it offers against wrongful removal of content extend to all users. In practice, though, the board found that the “general secondary review” queue had too few people working on it, and that posts marked for such review often never receive one.

That leads us to the board’s conclusions. The first and most important of these tracks with the central claim in the Journal’s reporting: the program leads to “unequal treatment of users.” The board writes:
And it’s not just that the program leads to unequal treatment. It’s that, because Meta wouldn’t share complete data on who has qualified for ERSR — the “early response secondary review” track for listed entities — we don’t even know how unequal it is. It’s possible that, given how little Meta eventually chose to share on this subject, the inequality runs much deeper than even the board assumes here.

A second conclusion is that this inequality causes harm. If a prominent account posts something awful on Facebook, ERSR means it won’t be removed by automated systems. Facebook’s goal in these cases is to remove violating content within 12 hours to five days. In practice, though, the board found that even in the United States it takes the company “approximately 12 days on average to make a decision.” In one case, the board found that the company took a staggering 222 days to evaluate a post. To state the obvious, that’s a long time for hate speech, violence, and other violations of the company’s community standards to be live on the site.

A third important conclusion the board reaches is that, however important cross-check is to the company, it doesn’t really measure the program’s effectiveness. “For example, Meta did not provide the board with information showing it tracks whether its decisions through cross-check are more or less accurate than through its normal quality control mechanisms,” the board says. And if you’re going to leave potentially harmful stuff up on Facebook for 222 days, wouldn’t you want to know whether that extra review is actually more accurate?

In short, while Meta often dresses up its policy decisions in the language of human rights, the board finds that — as you would probably expect — cross-check is primarily designed to serve business needs. So what do you do about it?

III.

If it were up to the board, Meta would do a lot about it. Members made 32 recommendations, to which the company is now obligated to respond.
(The company requested, and received, an extra-long 90-day period to consider all of the board’s requests.)

Importantly, though, the board fundamentally supports the idea of a secondary review system. It acknowledges that large-scale content moderation systems will inevitably make mistakes, and that Meta should act to reduce them. And it accepts that this means designating some accounts for special protections.

At the same time, the board finds that Meta’s implementation of such a system is deeply problematic, and arguably a scandal of its own. For example, while just 9 percent of Facebook’s daily active users are in the United States or Canada, 42 percent of all the ERSR content the company reviewed came from those two countries. It’s a powerful illustration of something we have long known: as inadequate as content moderation can feel to us North Americans, the situation is much worse in the rest of the world. And that can and does lead to very real harms.

The board calls for developing much stronger criteria for which accounts should be eligible for ERSR protections, making those criteria public, and allowing individuals to proactively apply for the program. It says Facebook should require anyone who is included to first review the community standards and agree to uphold them. And it says Facebook should proactively communicate changes to those standards so users know how to comply with them.

Once an account has been accepted into the program, the board says, its status should be visually communicated to users. That way, if you see a harmful post from a government official or celebrity left up for 222 days, you’ll have some idea of why it hasn’t yet been removed. And if you report a post from such an account, the board says you should be told that the account has special protections. These are really good ideas!
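To make the asymmetry concrete, here’s a minimal sketch, in Python, of the two tracks as the board describes them. Every name, list entry, and number below is an illustrative assumption on my part — this is not Meta’s implementation, just the shape of the system the opinion lays out.

```python
from collections import deque

# Hypothetical sketch of the two cross-check tiers the board describes:
# ERSR for listed entities, whose posts stay live pending human review,
# and GSR, a queue for everyone else that is often never worked.
# All identifiers and data here are illustrative, not Meta's actual code.

ERSR_LIST = {"head_of_state", "major_advertiser"}  # who is listed is opaque

ersr_queue = deque()  # routed to specialized human reviewers
gsr_queue = deque()   # under-staffed; items can languish unreviewed

def route_flagged_post(author, post_id, auto_verdict):
    """Decide what happens when automated systems want to act on a post."""
    if author in ERSR_LIST:
        # Enforcement is deferred: the post remains live until a human
        # reviews it -- on average 12 days, per the board, up to 222.
        ersr_queue.append((author, post_id, auto_verdict))
        return "deferred_to_ersr"
    # Ordinary users: the automated action applies immediately, with only
    # a chance of after-the-fact secondary review via the GSR queue.
    gsr_queue.append((author, post_id, auto_verdict))
    return auto_verdict

print(route_flagged_post("head_of_state", "post_1", "remove"))  # deferred_to_ersr
print(route_flagged_post("ordinary_user", "post_2", "remove"))  # remove
```

The unequal treatment the board objects to is visible in the branch itself: the same verdict on the same content produces immediate enforcement for one author and indefinite deferral for another.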
As for the other half of cross-check — GSR, which attempts to elevate potential mistakes for secondary review no matter who posted them — the board recommends that Facebook build more capacity to ensure it’s actually reviewing those posts, rather than just funneling them into a queue that in practice is often ignored. That’s a direct request for Facebook to shift funding resources, and I’m fairly sure it’s a first for the board.

How will the company respond? I talked with Meta yesterday under ground rules that prevent me from naming whom I spoke with or quoting them directly. Mostly I wanted to know what the company makes of the critiques and recommendations in the opinion, on which it had also been briefed. But the company didn’t want to say anything until the opinion was public. A statement today from Nick Clegg, the company’s president of global affairs, outlined changes the company has already made to the program and said only that it would share more within 90 days.

Last year, the mere existence of cross-check struck some observers as an outrage. But to the board, the idea of secondary review turned out to be a practical necessity for a large-scale social network. At the same time, there’s plenty in cross-check that’s ugly to see. And the board’s recommendations for transparency, auditing, and heightened focus on the system’s design would go a long way toward making content moderation more equitable.

Of course, these can seem like minor concerns in a world where some states have passed laws decreeing that content moderation of nearly any sort should be illegal. But as long as platforms still have the right to remove bad posts — and frequently make mistakes in doing so — the rest of us deserve better than we’re currently getting, on Facebook and everywhere else. Here’s hoping the company takes its board’s recommendations seriously, and brings more equality to a platform where that remains in disappointingly short supply.
Talk to us

Send us tips, comments, questions, and the full list of cross-check beneficiaries: casey@platformer.news and zoe@platformer.news.

By design, the vast majority of Platformer readers never pay anything for the journalism it provides. But you made it all the way to the end of this week’s edition — maybe not for the first time. Want to support more journalism like what you read today? If so, click here.