The AI Overviews debacle and leaked search ranking documents tell a common story about the web's future — and it's not pretty
Here's this week's free edition of Platformer: a look at Google's disastrous launch of AI Overviews and what it tells us about the future of the web. Do you value the sort of work we do around here? If so, consider upgrading your subscription today. We'll email you all our scoops first, and you'll be able to discuss each day's edition with us in our chatty Discord server.
Over the weekend, the AI Overviews that Google announced at its developer conference made international headlines — but not for the reasons the company hoped for. Across Threads, Bluesky, and X, users encountering the company’s AI-generated summaries atop search results found over and over again that Google was hallucinating or worse.

Most famously, there was the result that suggested putting nontoxic glue in your pizza. But AI Overviews also suggested putting gasoline in your spaghetti. And its sense of American history appeared deeply broken: it reported that just 17 American presidents were white, and that one was Muslim. I was able to confirm that AI Overviews were suggesting that people eat one to three rocks per day, an idea that turns out to have come from … The Onion.

The fact that many of the most viral screenshots of AI Overviews were fake seemed, for once, beside the point. When Google is recommending that you eat a rock every day, almost any search result shared on social media seems plausible enough. That’s the whole problem!

In the moment, all of this felt funnier than it did scary. But it also revealed the emptiness of Google’s new approach to search. Without any knowledge base of its own, the company’s large language model simply summarizes and regurgitates what it finds on the web according to unknown criteria — an approach Today in Tabs’ Rusty Foster accurately calls automated plagiarism.

Google blamed all this on its users, Kylie Robison reported at The Verge: Google spokesperson Meghann Farnsworth said the mistakes came from “generally very uncommon queries, and aren’t representative of most people’s experiences.” The company has taken action against violations of its policies, she said, and is using these “isolated examples” to continue to refine the product.

On one hand, some of these queries clearly were quite uncommon. “Can I use gasoline to make spaghetti” probably did not come up during internal red-teaming exercises.
The whole point of gradually rolling out big changes to search is to identify where it’s broken. On the other hand … plenty of these queries were common enough. Asking about the race or religion of US presidents, or how to get cheese to stick to pizza, are straightforward uses of Google that the previous, non-AI-degraded version of the search engine handled just fine. The company could have chosen to roll out AI Overviews in a few narrow categories. But instead it went broader, and now poor Katie Notopoulos is eating glue pizza for pageviews.

I expect that the quality of Google’s AI results will improve over time; it’s an existential issue for the company, and if it can’t make AI search work, someone else will. (The company could probably get a long way just by removing The Onion from its search engine’s news sources. In the meantime, I can report that as of today the company is no longer pushing a rock diet.)

But even then, Foster’s criticism will still stand: those “overviews” really are just slightly reworded versions of journalists’ copy, designed to give people ever fewer reasons to step outside Google’s walled garden.

This is what I mean when I say that the web has entered a state of managed decline: one company has outsized influence over when and how people visit any websites at all, and it has told us it plans to gradually ratchet those visits down by continuing to answer more questions on the search engine results page. And to the extent that it moves slowly, or occasionally pauses and temporarily reverses course, it will be because doing so benefits Google, rather than any of the sites and businesses that have come to rely on it. The company said last week that it is preparing to show ads in AI Overviews, as we always knew it would.

While we wait for any of this to get better, it seems worth noting that this is arguably Google’s third significant botched launch of an AI product. Bard, the predecessor to Google’s Gemini chatbot, debuted in February 2023.
When it did, a demonstration incorrectly stated that the James Webb Space Telescope “took the very first pictures of a planet outside of our own solar system.” It did not, and the incident was one of the first prominent cases of an LLM hallucinating on a global stage.

Then in February of this year, Google’s Gemini chatbot refused to make images of white people in many cases, resulting in racially diverse Nazis and Founding Fathers. After an outcry, particularly in conservative circles, Google removed image generation from the bot.

Each of those was embarrassing in its own way. And yet — it also seems obviously worse to tell people to eat rocks or make spaghetti with gasoline. In that respect, the most important story about Google’s AI launches is that they are deteriorating over time. When the Wall Street Journal tested the big chatbots across a wide variety of criteria, it ranked Google third, after the upstart Perplexity and OpenAI’s ChatGPT. (Anthropic’s Claude and Microsoft’s Copilot ranked fourth and fifth, respectively.)

There is still a lot we don’t know about how large language models work. There is even more we don’t know about how Google’s moves here will change the future of the internet. A web that thrived because of its openness and decentralization has now begun to wither.

On Tuesday, people who work on search engine optimization raced to pore over thousands of pages of documentation about the company’s search ranking systems that appear to have been accidentally published online. Google closely guards information about search ranking, both for integrity reasons (to prevent bad actors from manipulating results) and competitive ones (to maintain its edge over rivals). And so the SEO experts who got an early look at the documents are calling them a bonanza.

No one has yet fully digested the contents of the leak, and Google has not commented on the documents’ authenticity. Some of the systems referenced may no longer be operating.
Assuming the documents are real, though, I was struck by the first conclusion drawn from them by Rand Fishkin, who published the first report on the leaks. Surveying the documents, he concludes that Google’s organic search rankings have come to favor large, dominant brands over everything else. “They’ve been on an inexorable path toward exclusively ranking and sending traffic to big, powerful brands that dominate the web [over] small, independent sites and businesses,” he writes.

AI Overviews, of course, are intended to work the same way: identifying the relatively few credible publishers left on the web, then compressing their collective output into a slurry that can be served up in the place where search results once appeared. The trend is away from an open web where anyone can compete and toward a world with a smaller number of big winners. For the moment, that benefits large publishers. In time, though, it may favor only one publisher: Google itself.

In that way, the story of the AI Overviews debacle and the story of the search ranking leaks are the same. Each shows Google moving awkwardly toward the place it has been moving for years now. And at the moment it’s not clear what anyone can do about it.

Sponsored

Simplify your startup’s finances with Mercury

As a founder of a growing startup, you’re focused on innovating to attract customers, pinpointing signs of early product-market fit, and securing the funds necessary to grow. Navigating the financial complexities of a startup on top of it all can feel mystifying and incredibly overwhelming. More than that, investing time into becoming a finance expert doesn’t always promise the best ROI.

Mercury’s VP of Finance, Dan Kang, shares the seven areas of financial operations to focus on in The startup guide to simplifying financial workflows. It details how founders and early teams can master key aspects, from day-to-day operations like payroll to simple analytics for measuring business performance.
Read the full article to learn the art of simplifying your financial operations from the start.

*Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group and Evolve Bank & Trust®; Members FDIC. Platformer has been a Mercury customer since 2020.

Governing

- A US appeals court is fast-tracking the schedule for hearings on the challenges to the TikTok divest-or-ban law. (David Shepardson / Reuters)
- Elon Musk has increased his criticism of President Biden on X, an analysis showed, posting about the president almost 40 times this year. (Kate Conger and Ryan Mac / New York Times)
- Most image-based disinformation is now AI-generated, Google researchers found, and the problem could be worse than the company claims. (Emanuel Maiberg / 404 Media)
- The families of the Uvalde shooting victims are suing gunmaker Daniel Defense, Activision, and Meta, alleging that the companies exposed the shooter to the weapon and trained him on how to use it. (Arelis R. Hernández and Naomi Nix / Washington Post)
- AI firms must be regulated by a body other than themselves, two ex-OpenAI board members say. (The Economist)
- Today’s AI isn’t sentient, and LLMs aren’t going to achieve that anytime soon, these AI experts argue. (Fei-Fei Li and John Etchemendy / TIME)
- Election officials worldwide are running “prebunking” education campaigns that help people identify misinformation. (Cat Zakrzewski, Joseph Menn, Naomi Nix and Will Oremus / Washington Post)
- The impact of smartphones on teens’ mental health is much more nuanced than the common narrative suggests, this author argues. (David Wallace-Wells / New York Times)
- Meta added safety features to CrowdTangle in response to an EU investigation into the company’s phasing out of the tool. (Foo Yun Chee / Reuters)
- Telegram has become a tool for Russian disinformation and a major headache for EU regulators. (Alberto Nardelli, Daniel Hornak and Jeff Stone / Bloomberg)
- AI-generated misinformation largely went unverified by 11 chatbots on WhatsApp that promised to help voters in India identify misinformation. (Ananya Bhattacharya and Fahad Shah / Rest of World)
- Meta’s Oversight Board announced a new case for consideration, related to an Instagram post of a Pakistani political candidate accused of blasphemy. (Oversight Board)
- Threads is finding a large user base in Taiwanese activists, who use the app as a space to organize. (Viola Zhou / Rest of World)
Industry

- xAI announced its Series B funding round of $6 billion, with investors like Valor Equity Partners, Vy Capital, and Andreessen Horowitz. (xAI)
- Elon Musk and Meta’s chief AI scientist Yann LeCun are fighting on X about AI risks and Musk’s conspiracy theories. (Maxwell Zeff / Gizmodo)
- Google added new AI features to its Chromebook Plus line, including access to the Gemini chatbot. (Ivan Mehta / TechCrunch)
- YouTube’s “Playables” free game store is starting to roll out for all users. (Sarah Perez / TechCrunch)
- More publications are preparing to integrate into the fediverse, as social media platforms like Facebook and X become less reliable for traffic. (Sara Guaglione / Digiday)
- A look at the familiar questions being raised by the increasing number of deals made between news publishers and AI companies. (Pete Brown / Columbia Journalism Review)
- The new AWS CEO, Matt Garman, is facing pressure to keep up in the AI race. (Annie Palmer / CNBC)
- AI models that are small enough to run on phones or laptops are possible and could present new use cases for AI, Microsoft researchers say. (Will Knight / WIRED)
- GPT-4 has been able to outperform human analysts when analyzing financial statements and predicting future earnings growth, researchers found. (Michael Nuñez / VentureBeat)
- Former OpenAI safety lead Jan Leike is joining Anthropic to lead its “superalignment” team. (Kyle Wiggers / TechCrunch)
- AI tutors are changing the private tutoring industry, as more students turn to AI apps to help with school assignments. (Rita Liao / TechCrunch)
- Wall Street is cashing in on the technology driving AI, as stocks in the utilities, energy, and materials sectors surge. (Charley Grant / Wall Street Journal)
- Truth Social is struggling to keep its small US user base, with daily visits dropping more than 21 percent since April. (Kevin Breuninger / CNBC)
- Match Group and Bumble are exploring new features to attract Gen Z women, after reports of women ditching dating apps due to threats and unsolicited nudes. (Stephanie Stacey / Financial Times)
- Instant messaging app ICQ is shutting down and encouraging users to migrate to messaging apps from Russian parent company VK. (Michael Kan / PCMag)
Those good posts

For more good posts every day, follow Casey’s Instagram stories.

(Link) (Link) (Link)

Talk to us

Send us tips, comments, questions, and search ranking documents: casey@platformer.news and zoe@platformer.news.