Plus: the TikTok ban slows down

In September I wrote here about a so-called “jawboning” case making its way toward the Supreme Court. “Jawboning” refers to speech from public officials that is meant to compel a person or business to take action. Embedded in the name is the idea that the public official is just running their mouth — their words don’t carry the force of law. But because the person is a public official, people and businesses often do what they say anyway.

Sometimes this is benign. If the FBI calls Meta and alerts it to a network of Facebook accounts that it has linked to a Russian troll farm, and leaves it up to Meta to decide what to do about it, the public arguably benefits from that dialog. Similarly, if the Centers for Disease Control gets in touch with YouTube to discuss how it is communicating about vaccine safety, the public health benefit seems obvious.

But not every communication is so innocuous. Governments around the world regularly call social networks and demand that various posts be removed, typically because they are critical of an elected official, head of state, ruling party, or some related power structure. And they love to do this precisely because jawboning takes place outside the legal system and leaves no public trace. This is the chief danger of jawboning: it tends to be both very effective and mostly invisible. And in the United States, there are cases where it could violate the First Amendment.

Over the past few years, jawboning has become a preoccupation of some forces on the right, who argue that it has been used to censor conservatives. Over the weekend, the New York Times profiled the rise of this movement, which has filed nuisance lawsuits against researchers and academics and sought to block almost all communications between the federal government and social networks in the run-up to the 2024 election. And the campaign is working, argue reporters Jim Rutenberg and Steven Lee Myers:

“The people that benefit from the spread of disinformation have effectively silenced many of the people that would try to call them out,” said Kate Starbird, a professor at the University of Washington whose research on disinformation made her a target of the effort. [...]
Officials at the Department of Homeland Security and the State Department continue to monitor foreign disinformation, but the government has suspended virtually all cooperation with the social media platforms to address posts that originate in the United States.

That brings us to today, when the Supreme Court heard a case filed by the attorneys general of Missouri and Louisiana. Murthy v. Missouri accuses federal officials of improperly pressuring platforms to remove content that is critical of the government. And if the justices affirm the lower courts’ ruling, it could dramatically limit the ability of federal agencies to share information with social networks about misinformation, public health, and other issues in the public interest.

It is the third major speech case to be heard this term, following the content moderation cases that were argued last month. And just as a majority of justices appeared to support businesses’ First Amendment right to set editorial policies in those cases, a majority now appears to support the government’s right to argue with platforms about those policies. Here’s Adam Liptak at the New York Times:

A majority of the Supreme Court seemed wary on Monday of a bid by two Republican-led states to limit the Biden administration’s interactions with social media companies, with several justices questioning the states’ legal theories and factual assertions.
Most of the justices appeared convinced that government officials should be able to try to persuade private companies, whether news organizations or tech platforms, not to publish information so long as the requests are not backed by coercive threats.

To be clear, this case does seem to have turned up evidence of the government abusing its authority. As J. Benjamin Aguiñaga, Louisiana’s solicitor general, told the justices:

“Behind closed doors, the government badgers the platforms 24/7,” he said. “It abuses them with profanity. It warns that the highest levels of the White House are concerned. It ominously says that the White House is considering its options.”

These do seem like cases where government officials went too far. As far as I can tell, though, there isn’t much jurisprudence on exactly how far officials are allowed to go in cases like these. And so, as you would expect, they mostly just try to see what they can get away with.

But it’s clear that federal agencies have been severely chastened by the campaign waged by conservatives over the past few years. And that could prevent them from doing important work in an election that will surely be targeted by our foreign adversaries. There should be clearer rules about when and how government officials can jawbone the platforms. But Missouri and Louisiana would ban even basic communication between the government and businesses, and a majority of the Supreme Court seems to think doing so would go much too far.
The TikTok ban slows down

Earlier this month, a new TikTok ban gained steam. While there had been many previous efforts to force ByteDance to sell the app, this one shocked the DC establishment by rocketing out of committee on a bipartisan vote of 50-0. On Wednesday, the full House of Representatives passed the bill.

But the momentum has cooled considerably since. “I’ve heard Majority Leader Chuck Schumer privately supports the bill but doesn’t want to waste time putting it to a vote,” Alex Heath wrote Friday in his newsletter, Command Line. “The Republicans will just fall in line with Trump, who recently flipped his stance because Facebook is the actual enemy or something. TikTok has to act concerned and resolute since the optics of the bill passing the House are still terrible. But make no mistake, this ban attempt will fail just like all the others.”

When asked, Schumer has been noncommittal about the bill. “I’ll have to consult, and intend to consult, with my relevant committee chairmen to see what their views would be,” he told reporters last week. That seems notable given the Washington Post’s analysis of how the bill might be defeated. Here are Cristiano Lima-Strong, Jacob Bogage and Aaron Schaffer:

The path of least resistance to defeat the legislation is for Senate Majority Leader Charles E. Schumer (D-N.Y.) to refer it to the Commerce Committee, said Akash Chougule, vice president of right-wing advocacy group Americans for Prosperity. The organization has lobbied lawmakers in support of the bill, though Chougule declined to discuss meetings with specific lawmakers.
“I think folks in Washington understand that if something like this gets referred to committee, it’s as good as dead,” he said. “If Leader Schumer were to do that, I think it’d be clear he’s not serious about the threat posed by TikTok.”

Given Congress’ inability to pass a single tech regulation since 2016, it would hardly be a surprise to see yet another effort like this flame out. But it would be telling if even a bill rooted in two of the top bipartisan fears of the moment — the well-being of teens and Chinese influence — couldn’t clear both houses of Congress.

In the meantime, a robust debate has broken out over the relative merits of the ban. “Banning TikTok Is Unconstitutional & Won’t Do Shit To Deal With Any Actual Threats,” writes Mike Masnick at Techdirt. Masnick argues that China can spread propaganda and obtain data about Americans in many ways that don’t involve surveillance via short-form video apps. Banning TikTok, he argues, represents an unconstitutional restriction on free speech. These concerns seem particularly relevant given that Congress appears to have been motivated to force the app’s divestiture over concerns that pro-Palestine posts were getting disproportionate engagement on the platform, despite the absence of any evidence that TikTok put its thumb on the scale.

I’ve been more sympathetic to efforts to force a sale. The critics who argue that there’s no evidence TikTok has misused Americans’ data seem to have forgotten that the company admitted to spying on reporters in an effort to stamp out leaks. And it seems ridiculous that we have rules around foreign ownership of broadcast media in this country but not digital media, where the bulk of political discourse arguably now takes place.

Commentators I respect, including Evelyn Douek and Alex Stamos at Moderated Content, argue that the bill is written so badly that it will almost certainly be thrown out by a judge even if it is signed. And perhaps it will be.

Assuming the bill is stalled for good, it would be nice to believe lawmakers would turn their attention to passing laws around less spectacular issues like data privacy and foreign ownership of digital media. But somehow whenever the dust settles on a bill like this, Congress gets back to doing what it does best, which is nothing.
Talk about this edition with us in Discord: This link will get you in for the next week.
Governing
Reddit

Lots of links over the weekend as we ramp up to the IPO.
Industry

- Apple is in talks to license Google’s Gemini for the iPhone. It has also talked to OpenAI. (Mark Gurman / Bloomberg)
- OpenAI’s GPT store is off to a slow start, with developers who offer chatbots through the store saying they’re disappointed with the low customer numbers. (Stephanie Palazzolo / The Information)
- Google unveiled a new AI system called VLOGGER that can generate lifelike videos from just a single photograph. (Michael Nuñez / VentureBeat)
- TikTok’s most logical buyers would be ByteDance’s non-Chinese investors, like Sequoia Capital and General Atlantic, as outside investors look to form investor groups. (Kerry Flynn and Dan Primack / Axios)
- Linda Yaccarino faces another setback in her effort to woo brands back to X, following the collapse of X’s partnership with Don Lemon. (Tim Higgins / Wall Street Journal)
- Elon Musk told Don Lemon that he takes a small amount of ketamine every other week to help with depression. (Jyoti Mann / Business Insider)
- X’s pivot to video — wherein the platform is trying to get users to upload and engage with videos from other platforms — appears to be a pivot to nowhere. (John Herrman / New York Magazine)
- Windows users are seeing new pop-ups in Chrome that look like malware, but are actually Microsoft’s renewed efforts to get people to switch to Bing. (Tom Warren / The Verge)
- LinkedIn is working on in-app games. (Ingrid Lunden / TechCrunch)
- YouTube now requires creators, in most cases, to label videos that were made using artificial intelligence. (Clare Duffy / CNN)
- YouTube creator MrBeast signed a deal (possibly worth $100 million) with Amazon MGM to produce a competition game show called Beast Games. (Taylor Lorenz / Washington Post)
- Dating apps like Tinder, Hinge, and Bumble are getting worse because of their paid subscription models, and there aren’t many good alternatives, this author argues. (Magdalene J. Taylor / New York Times)
- An Uber Eats driver built UberCheats, an algorithm-auditing tool, after failing to get an explanation from Uber about payment errors. (Madhumita Murgia / Financial Times)
- Small businesses and local governments are turning to college students in university clinics for free cybersecurity help. (Lindsey Choo / Wall Street Journal)
- Suno AI allows anyone to generate professional-sounding music and lyrics — and the impact on the music industry could be immense. (Brian Hiatt / Rolling Stone)
- Doctors are leveraging artificial intelligence to summarize visits with patients. (Ashley Capoot / CNBC)
Those good posts

For more good posts every day, follow Casey’s Instagram stories. (Link) (Link) (Link)
Talk to us

Send us tips, comments, questions, and forbidden jawboning: casey@platformer.news and zoe@platformer.news.