Facebook's big new experiment in governance
Here’s your second free edition of Platformer for the week — a story about how Facebook may begin letting users effectively write some of its policies, in a significant new experiment in platform governance. Do you value the labor that goes into this sort of independent, ad-free journalism? If so, it’s a great day to subscribe. I’m about to announce a big hire, and your support will ensure we can continue to explore and illuminate the world’s most consequential platforms. Plus, join now and you can access our chatty Discord server, where I’ve been answering reader questions all day about yesterday’s anniversary post. What do you say?

What if platform policies were written in part by their users?
In June, I wrote that to build trust, platforms should try a little more democracy. Instead of relying solely on their own employees, advisory councils, and oversight boards, I wrote, tech companies should involve actual users in the process. Citing the work of Aviv Ovadya, a technologist who recently published a paper on what he calls “platform democracy,” I suggested that social networks could build trust by inviting average people into the policymaking process.

I didn’t know it at the time, but Meta had recently finished a series of experiments that tried to do just that. From February to April, the company gathered three groups across five countries to answer the question: what should Meta do about problematic climate information on Facebook?

The question came as watchdogs increasingly scrutinize the company’s approach to moderating misleading information about the environment. Last year the Guardian reported on an analysis by the environmental group Stop Funding Heat that found 45,000 posts downplaying or denying the climate crisis. And in February, after Meta promised to label climate misinformation, a report from the watchdog group Center for Countering Digital Hate found that “the platform only labeled about half of the posts promoting articles from the world's leading publishers of climate denial,” according to NPR.

Against that backdrop, Meta hired a policy consulting firm, the Behavioural Insights Team, or BIT, to bring Facebook users into the policy development process. Specifically, users were asked what Meta should do about “problematic information,” which BIT defined as “content that is not necessarily false, yet expresses views that may contain misleading, low quality, or incomplete information that can likely lead to false conclusions.” Meta wouldn’t give me any examples of what it considers problematic climate speech.
But I can imagine panels being asked whether Facebook should intervene if, for example, a user with a big following asks sometime this winter, “if climate change is real, why is it cold outside?”

At all the big platforms today, average users have no say in how this question gets handled. Instead, it’s left to company executives and their policy teams, who often do consult experts, human rights groups, and other stakeholders. But the process is opaque and inaccessible to platform users, and in general it has undermined confidence in the platforms. It’s hard to put trust in a policy when you have no idea who made it or why. (Not to mention who enforces it, or how.)

For its experiment, Meta and BIT worked to find about 250 people who were broadly representative of the Facebook user base. They brought them together virtually across two weekends to educate them about climate issues and platform policies, and offered them access to outside experts (on both climate and speech issues) and Facebook employees. At the end of the process, Facebook offered the group a variety of possible solutions to problematic climate information, and the group deliberated and voted on its preferred outcomes.

Facebook wouldn’t tell me what the groups decided — only that all three groups reached a similar consensus on what ought to be done. Their deliberations are now being taken under advisement by Facebook teams working on a policy update, the company told me. In a blog post today, BIT said participants expressed high satisfaction with the process and its outcomes.
Meta was impressed with the results, too, and plans to run further experiments in platform democracy. “We don't believe that we should be making so many of these decisions on our own,” Brent Harris, vice president of governance at the company, told me in an interview. “You've heard us repeat that, and we mean it.”

Harris helped to oversee the creation of the Oversight Board, a somewhat controversial but (I’ve argued) useful tool for delegating authority on some matters of content moderation and pushing Meta to develop more open and consistent policies. Now Harris has turned his attention to platform democracy, and he says he’s encouraged by the early results.

“We think that if you set this up the right way, that people are in a great position to deliberate on and make some of the hard decisions (around) trade-offs, and inform how we proceed,” Harris said. “It was actually really striking how many folks, when they came together, agreed on what they thought the right approach would be.” In a survey after the process, 80 percent of participants said Facebook users like them should have a say in policy development. (I’d love to ask the other 20 percent a few questions!)

Promising though the early results may be, platform democracy is not a guaranteed feature of Facebook in the years to come. More executives and product teams need to buy into the idea; the process needs to be refined and made cheaper to run; and there are more experiments to conduct on using deliberative processes with specific groups or in specific geographies. But in a world where, thanks to Texas and the rogue 5th Circuit Court of Appeals, platforms are at risk of losing the right to moderate content at all, Meta and its peers have every incentive to explore bringing more people into the process. With trust in tech companies at or near all-time lows, it’s clear that relying solely on in-house policy teams to craft platform rules isn’t working as intended for them.
It may be time to give people more of a voice in the process — before the Supreme Court decides that, when it comes to regulating speech, the platforms don’t deserve any voice at all.
Talk to me

Send me tips, comments, questions, and deliberative processes: casey@platformer.news.

By design, the vast majority of Platformer readers never pay anything for the journalism it provides. But you made it all the way to the end of this week’s edition — maybe not for the first time. Want to support more journalism like what you read today? If so, click here.