Here's this week's free edition of Platformer: a look at a critical decision by the Supreme Court this week that could reshape the 2024 election. Do you value independent reporting on government and platforms? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about the dismantling of the Stanford Internet Observatory. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice.
Programming note: With this edition, Platformer is on summer break. We'll be back with a special edition next Wednesday, and then return to our regular schedule July 15. As always, you can find our posting schedule here.

Here's a simple question with a surprisingly complicated answer: if the FBI discovers evidence of foreign interference in the 2024 election on Facebook or other social platforms, can it legally share that information with Meta or other companies?

Until recently, the answer would have been an unequivocal yes. But then came Murthy v. Missouri. The lawsuit, which was brought by the states of Missouri and Louisiana along with five individuals, accused federal officials of illegally pressuring platforms to remove content in violation of the First Amendment. It consisted largely of innuendo, baseless conjecture, and outright fabrications. But in September, the U.S. Court of Appeals for the Fifth Circuit ruled that the Biden administration had improperly pressured social networks in some cases. The practice at issue, known in legal circles as "jawboning," describes incidents in which governments attempt to pressure non-government actors into doing something they are not legally required to do.

The ruling deeply rattled federal agencies, which suddenly feared they would face legal consequences for publicly voicing their policy objectives. The National Institutes of Health even suspended a $150 million program that gives grants for communicating public health issues.

The administration appealed the Fifth Circuit's decision, and the Supreme Court heard the case earlier this year. On Wednesday, the justices voted 6-3 to overturn the Fifth Circuit. Here's Andrew Chung at Reuters:

Conservative Justice Amy Coney Barrett, who authored the Supreme Court's ruling, wrote that the two Republican-led states and the other plaintiffs lacked the required legal standing to sue the administration in federal court. [...]
Barrett wrote that the plaintiffs could not show a "concrete link" between the conduct by the officials and any harm that the plaintiffs suffered. They "emphasize that hearing unfettered speech on social media is critical to their work," Barrett wrote. "But they do not point to any specific instance of content moderation that caused them identifiable harm."

The decision would seem to pave the way for government agencies to resume their contacts with social platforms — though perhaps not for the reason you would imagine. In the end, the justices did not rule on the merits of the case, or establish under what circumstances the government can and cannot exert pressure on social platforms or other businesses. Instead, as Chung writes, they found simply that the plaintiffs could not prove officials' conduct had harmed them, and thus that they had no standing to sue.

One reason the plaintiffs were found not to have standing, the court wrote, is that platforms have many reasons to moderate content, and were already moderating the kind of content named in the case before the plaintiffs' posts (about COVID, vaccines, and other topics) were removed. That "complicates the plaintiffs' effort to demonstrate that each platform acted due to 'government-coerced enforcement' of its policies," Barrett wrote in the decision.

In the short term, the justices' decision appears to be a win for our shared sense of reality. With federal agencies' right to communicate reaffirmed, they can now alert social platforms to various risks without fear of being taken to court. And in fact, they already have. In March, NBC reported that the FBI is once again alerting social platforms about foreign propaganda efforts. Under a program developed during the Trump administration, the FBI is contacting big platforms to share evidence of covert influence operations in an effort to disrupt them.
"In coordination with the Department of Justice, the FBI recently implemented procedures to facilitate sharing information about foreign malign influence with social media companies in a way that reinforces that private companies are free to decide on their own whether and how to take action on that information," the FBI told NBC.

With the election less than five months away, platform sources I've spoken with tell me they expect that these efforts will now intensify.

What might they find? In recent months, Meta has disclosed significant ongoing influence operations around the world, including a major effort by Russia to decrease international support for Ukraine, and another from Israel designed to bolster support for the country in its war on Gaza. In May, OpenAI disclosed Chinese and Russian efforts to use its products in influence operations as well.

It's difficult to imagine an influence campaign like this deciding the outcome of the presidential election, if only because the major party candidates are both already so well known to voters. Still, to the extent that foreign countries do attempt to change the outcome here, we should now know much more about it in real time than we otherwise might have.

At the same time, First Amendment scholars are troubled by some aspects of the court's ruling. On the Moderated Content podcast, the University of Chicago Law School's Genevieve Lakier told host Evelyn Douek that Barrett's requirements for a person to gain standing in a jawboning case seemed almost impossibly high. Not only do they have to prove that they were harmed by the government — a difficult task, given that jawboning almost exclusively takes place in secret and is thus invisible to the victim — but they must also prove that they are likely to be harmed by the government again. In addition, they have to prove that the Supreme Court can effectively stop the harm from taking place.
Because platforms have multiple incentives to moderate content — commercial ones, moral ones, legal ones — it's not clear that the court saying "knock it off" would have any effect on the platforms' actions.

In Murthy, which was based on evidence that amounted to little more than a conspiracy theory, the court's high bar for standing doesn't offer much reason for concern. But the practice of jawboning is rampant around the world, and increasingly common in the United States. Both liberal and conservative lawmakers here now regularly make statements urging platforms to remove speech that is legal under the First Amendment.

A future US government — a near-future one, even! — might use this week's ruling to push the envelope even further. Perhaps it would work behind the scenes to get platforms to remove posts that are critical of the government or of one party's candidates. Or maybe it would pressure them to remove posts that inform women how to get abortions across state lines. In either case, those affected might never even know they had been harmed. And even if they did know, given that platforms remove posts for many interconnected reasons, it's possible this court would never grant them the standing they need to begin the discovery process and get the evidence necessary to make their case.

At the beginning of the current Supreme Court term, it seemed as if 2024 might be a year that made platforms permanently less effective at investigating propaganda campaigns and moderating their platforms as they see fit. It still might be: the court is expected to rule in the coming days on whether two laws restricting content moderation in Texas and Florida are constitutional. In the meantime, the Murthy decision offers some hope that a majority of justices are unwilling to impose radical and unpredictable changes on the status quo. But it may not be too long before we see that the fact that few can get standing in a jawboning case can cut both ways.
On the podcast this week: Kevin and I break down the record labels' lawsuit against Suno and Udio with RIAA CEO Mitch Glazier. Then, Christopher Kirchoff, who founded the Pentagon's Silicon Valley office, stops by to discuss his new book on how the relationship between tech and the military is changing. And finally: some HatGPT.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Governing

- The Center for Investigative Reporting is suing OpenAI and Microsoft for violating copyright law. The lawsuit focuses on how AI-generated summaries of articles threaten the media industry. (Sarah Parvini and Matt O'Brien / AP)
- Seven content-licensing firms have formed a trade group, the Dataset Providers Alliance, which they say will advocate for "ethical data sourcing" in AI training. (Katie Paul / Reuters)
- Amazon is investigating Perplexity AI to see whether the AI startup violated AWS rules by scraping websites that tried to block it from doing exactly that. (Dhruv Mehrotra and Andrew Couts / Wired)
- The House Energy and Commerce Committee abruptly canceled a session to discuss and vote on several bills, including the American Privacy Rights Act and the Kids Online Safety Act. (Lauren Feiner / The Verge)
- A group of nonprofits is arguing that the TikTok law is unconstitutional because it restricts free speech and makes it impossible for users to associate on the app. (Haleluya Hadero / Associated Press)
- Instagram and Threads’ political content filter only appeared to be reset, Meta said, but in reality the settings hadn’t changed. (Gaby Del Valle / The Verge)
- Meta's Oversight Board made 53 decisions in 2023, after receiving almost 400,000 appeals. Not great. (Karissa Bell / Engadget)
- AI will intensify antitrust abuses by Big Tech, top German antitrust official Andreas Mundt warned. (Karin Matussek / Bloomberg)
- How a covert Chinese propaganda campaign harassed Deng Yuwen, a frequent critic of China and Xi Jinping, and his teenage daughter. (Steven Lee Myers and Tiffany Hsu / New York Times)
- OpenAI’s sudden move to restrict ChatGPT access in China is shaking up the local AI scene. (Bloomberg)
- A Q&A with Zhang Hongjiang, a computer scientist and one of China's leading advocates for AI safety, on the opportunities and challenges facing China. (Ryan McMorrow and Nian Liu / Financial Times)
- Telegram has become a crucial platform for the LGBTQ+ community in Russia, which is increasingly under threat from the government. (Sassafras Lowrey / Wired)
Industry

- OpenAI struck a multiyear licensing deal and strategic partnership with Time, giving the company access to the magazine's 101 years' worth of archives. (Sara Fischer / Axios)
- OpenAI wants to use AI to help the people who fine-tune its large language models. It's a step toward recursive self-improvement, which could help AI development accelerate rapidly. (Will Knight / Wired)
- OpenAI is currently generating more revenue from selling access to its AI models than Microsoft is from its comparable business. (Aaron Holmes / The Information)
- ByteDance is continuing to bet on TikTok Shop, despite a looming ban. (Alexandra S. Levine / Forbes)
- Character.AI now lets users talk to AI avatars over calls, and supports multiple languages, including Spanish, English, and Russian. (Ivan Mehta / TechCrunch)
- Mark Zuckerberg said in a YouTube interview that he doesn’t believe there will be just one AI — and noted some of his closed source competitors seem to think they’re creating God. (Sarah Perez / TechCrunch)
- Meta is starting to test user-created AI chatbots on Instagram, with a rollout beginning in the US. (Ivan Mehta / TechCrunch)
- A review of Meta Ray-Bans while traveling in a French-speaking country: the AI translation does not work very well. (Kate Knibbs / WIRED)
- Google Cloud is partnering with Moody's to use credible financial data in enterprise chats. (Ina Fried / Axios)
- Google is reportedly testing facial recognition technology for office security, starting with its campus in Seattle. (Jennifer Elias / CNBC)
- Google Translate is adding support for 110 new languages, including Cantonese. (Jay Peters / The Verge)
- Google Sheets now performs “significantly faster” when doing calculations on desktop Chrome and Edge, the company says, thanks to some AI enhancements. (Abner Li / 9to5Google)
- YouTube is reportedly in talks with record labels to license their content for AI tools that can clone music. (Anna Nicolaou and Madhumita Murgia / Financial Times)
- YouTube is the largest streaming platform for connected and traditional TVs, beating both Disney and Netflix, according to a Nielsen report. (Alex Sherman / CNBC)
- Amazon is planning to launch a section on its site featuring cheap items shipped directly from warehouses in China. (Jing Yang and Theo Wayt / The Information)
- Amazon reached a $2 trillion market valuation for the first time, following an AI-fueled boost. (Carmen Reinicke / Bloomberg)
- Figma announced a major UI redesign and new generative AI tools for projects. (Jay Peters / The Verge)
- BeReal is reportedly set to lay off a significant portion of its staff following the acquisition by French gaming company Voodoo. (Riddhi Kanetkar and Kali Hays / Business Insider)
- AI work assistants require more management than expected, CIOs say. (Isabelle Bousquette / Wall Street Journal)
- A number of companies including Google, Snap, and Meta are changing their terms of service to make way for AI training. (Eli Tan / New York Times)
- An AI-generated voice of sportscaster Al Michaels is set to give daily, personalized recaps of the Paris Olympics on Peacock. (Jay Peters / The Verge)
- Landlords are increasingly using AI chatbots to handle tasks such as maintenance requests and questions from prospective tenants. (Julie Weed / New York Times)
- An experiment with three AI assistants on how well they can plan a trip to Norway. (Ceylan Yeginsu / New York Times)
Those good posts

For more good posts every day, follow Casey's Instagram stories. (Link) (Link) (Link)

Talk to us

Send us tips, comments, questions, and foreign propaganda: casey@platformer.news and zoe@platformer.news.