Before Uvalde, a platform fails to answer kids' alarms
Here’s your 22nd free edition of Platformer for the year: a look at how platforms build tools to let kids report threats online, but too often ignore them. The Uvalde shooting is only the latest example of young people attempting to report bad behavior only to see those reports go nowhere. If you value real-time reporting on how platforms are reshaping the world around us, I hope you’ll consider becoming a member today and helping to fund my work. Join now and you can come hang out with us in our chatty Discord server, where today we talked about the Supreme Court’s ruling in the Texas social media case. 👉

Before Uvalde, a platform fails to answer kids' alarms

Tech companies keep building systems to detect violent threats. Why didn't Yubo's work?

A week ago today, an 18-year-old man walked into an elementary school in Uvalde, Texas, and committed the latest in our nation’s never-ending series of senseless murders. And in the aftermath of that horror — 19 children dead, two teachers dead, 18 more injured — attention once again turned to what role platforms might have played in enabling the violence.

This question can feel both urgently necessary and also somehow beside the point. Necessary because people (often teenagers) are constantly being arrested after making threats on social media, and the Uvalde case shows once again why those threats must be taken more seriously. And yet it’s also clear that America’s gun violence problem will not be solved at the level of platform policy or enforcement. It can be solved only by making it harder for people to acquire and use guns, particularly the assault weapons that figure in every single story like this one.

But around here we focus on platforms. And with that in mind, let’s take a look at what we’ve learned about the shooter’s online behavior in the week since the shooting.
It speaks to issues around child safety and platforms that I’ve reported on here before — and points to some clear steps that platforms (and, if necessary, regulators) should take next.

Aside from a handful of private messages, the Uvalde shooter appears not to have used Facebook much. Facebook and Instagram were once the default platforms for making threats like these, but new platforms are growing in popularity with young people.

The Uvalde shooter liked one called Yubo, created by a French company called Twelve App. It’s a “live chilling” app similar to Houseparty, the app that Meerkat became after helping to launch the live-streaming craze in the United States in 2015. It’s also apparently quite popular, with more than 18 million downloads in the United States alone, according to the market research firm Sensor Tower.

Like Houseparty, Yubo lets users broadcast themselves live to a small group of friends. The twist is that Yubo focuses on making new friends — finding people with similar interests and letting them chat. Particularly young people. “Yubo is a social live-streaming platform that celebrates the true essence of being young,” the company says. (Perhaps for that reason, it seems to have attracted more than its share of older men and their unwanted sexual advances.)

In the days after the massacre, reporters discovered that Yubo appears to have been the shooter’s primary social app. He used it, among other things, to threaten rape — and school shootings. Here are Daniel A. Medina, Isabelle Chapman, Jeff Winter and Casey Tolan at CNN:
At the Washington Post, Silvia Foster-Frau, Cat Zakrzewski, Naomi Nix and Drew Harwell found a similar pattern of behavior:
Yubo told the network that it is cooperating with the investigation, but declined to offer any details on why the shooter was able to remain on the platform despite having been reported for making threats over and over again.

It can seem shocking that a person who repeatedly makes violent threats, and is reported to the platform for doing so, faces no consequences. And yet for years now, children have been telling us that this is a regular occurrence for them.

In May of last year, I wrote about a report based on a survey of minors by Thorn, a nonprofit organization that builds technology to defend children from sexual abuse. Here are two findings from that survey that are relevant to the Uvalde case, from my column about it:
In short: most kids use platform reporting tools instead of telling parents or other caregivers about threats online, but in most cases those reporting tools aren’t effective. In our interview last year, Julie Cordua, Thorn’s CEO, likened platform reporting tools to fire alarms that have had their wires cut. In the Uvalde case, we see what happens when those alarms aren’t connected to effective enforcement mechanisms.

If there’s any room for optimism here, it’s in the fact that criminals really do seem to be moving away from better-defended platforms to ones that are less established — and, in some cases, have fewer policy and enforcement tools. Surely part of that is simply evidence of changing tastes — Discord and Twitch are much more popular with the average teenager today than Facebook or perhaps even Instagram is. But part of it is also that Meta, YouTube, and Twitter in particular have invested heavily in content moderation, making it harder for bad actors to make threats with impunity and evade bans. That speaks to the value of content moderation, to both companies and the world at large.

Peruse Yubo’s website and history and you will see a company that appears to be committed to good stewardship. The app has clearly posted community guidelines, albeit ones that have not been updated since 2020. It has a policy on ban evasion. And it uses facial-recognition technology in an effort to prevent users younger than 13 from signing up.

The company also says that it uses machine learning to scan live streams in an effort to find bad behavior, and scans text messages to look for private information that users might be about to share unwittingly, such as phone numbers. These are good, useful, and expensive tools that many other platforms do not offer.
At the same time, these are voluntary measures in a world where regulators still have not established minimum standards for content policy, moderation, enforcement, or reporting what they find — aka “transparency.”

We know that Yubo had a policy against basically everything the Uvalde shooter did. We know that kids saw what he was doing online, grew concerned, and used the app’s reporting tools to try to prevent it from happening in the future. And, as is usually the case in these situations, we know nothing about what happened next. Were the reports reviewed? By humans or machines? What did they find?

Platforms that allow users to create accounts should be required to let people report those accounts for bad behavior. (Did you know you still can’t report an account on iMessage, one of the world’s biggest communications services?) Platforms should also be required to let us know what they do with those reports, both individually (to the person who reported it) and in the aggregate (so we can understand bad behavior on platforms overall).

Doing so will sadly do nothing to stop the epidemic of gun violence in this country. But it will make good on the promise that apps like Yubo make to their users when they let them report bad behavior — that those reports will be acted on, and further harm prevented.

Nobody forced Yubo to build the systems that Thorn’s Cordua rightly called “fire alarms.” But it did. The least that Yubo and other platforms can do now is offer us some evidence that those alarms are actually plugged in.

Elsewhere in bad vibes: How the right-wing misinformation machine is exploiting the shooting to promote false conspiracy theories. And here’s some dead silence from the firm hired by the Uvalde school district to monitor students for making threats on social media.
Governing

⭐ In a chillingly close 5-4 decision, the Supreme Court voted to stay the Texas social media law that would force platforms to carry hate speech and other harms. “Justice Alito said he was skeptical of the argument that the social media companies have editorial discretion protected by the First Amendment like that enjoyed by newspapers and other traditional publishers.” Fun! (Adam Liptak / New York Times)
Industry
Those good tweets

Talk to me

Send me tips, comments, questions, and Yubo reports: casey@platformer.news. By design, the vast majority of Platformer readers never pay anything for the journalism it provides. But you made it all the way to the end of this week’s edition — maybe not for the first time. Want to support more journalism like what you read today? If so, click here: