With AI search results coming to the masses, the human-powered web recedes further into the background
Here’s this week’s free edition of Platformer: a report from the scene of Google I/O, where the company just announced the rollout of an ambitious plan to remake search. Will the company take the web down with it? Do you value independent journalism like ours? If so, consider upgrading your subscription today. We’ll email you all our scoops first, and you’ll be able to discuss each day’s edition with us in our chatty Discord server.
When Sundar Pichai took the stage at Google I/O on Tuesday morning, he said that the rise of generative artificial intelligence would provide new opportunities: for creators, developers, startups, and for everyone.

Two hours later, after the sun rising over the Shoreline Amphitheatre sent audience members in the cheap seats scurrying to the vanishingly few spots in the shade, the opportunities on offer did not seem entirely clear. Aside from the opening act, the TikTok DJ Marc Rebillet, no creator, developer, or startup took the stage. Instead, a handful of celebrities appeared in brief videos, with Wyclef Jean enthusing over AI music tools and Donald Glover experimenting with AI filmmaking.

To the extent there was an obvious opportunity, it was to use Google’s products, which now cover a bewildering array of tasks, from homework to shopping to moving to celebrating an anniversary. Many of these products promise real improvements in the lives of everyday people: a tool to automatically find and organize emailed receipts while creating a spreadsheet of expenses; one to ask questions about your images stored in Google Photos using natural language; another to create a lively interactive tutoring session from materials uploaded to NotebookLM.

When it comes to the company’s core search engine, however, the picture of progress looks far muddier. Like its much smaller rivals, Google’s idea for the future of search is to deliver ever more answers within its walled garden, collapsing projects that would once have required a host of visits to individual web pages into a single answer delivered within Google itself.

The company’s AI-powered search results, which it calls the Search Generative Experience, are coming to all users in the United States soon, Google announced today. By the end of 2024, they will appear at the top of results for 1 billion users.

As I noted when I wrote about the new, more extractive search engines from companies like Arc and Brave, Google’s move to answer more questions on the search engine results page is simply a continuation of a long-standing practice. But where the company once limited itself to gathering low-hanging fruit along the lines of “what time is the super bowl,” on Tuesday executives showcased generative AI tools that will someday plan an entire anniversary dinner, or cross-country move, or trip abroad.

A quarter-century into its existence, a company that once proudly served as an entry point to a web that it nourished with traffic and advertising revenue has begun to abstract all of that away into an input for its large language models.

This new approach is captured elegantly in a slogan that appeared several times during Tuesday’s keynote: let Google do the Googling for you. It’s a phrase that identifies browsing the web — a task once considered entertaining enough that it was given the nickname “surfing” — as a chore, something better left to a bot.

“People’s time is valuable, right? They deal with hard things,” Liz Reid, the company’s head of search, told Wired’s Lauren Goode ahead of the event. “If you have an opportunity with technology to help people get answers to their questions, to take more of the work out of it, why wouldn’t we want to go after that?”

It’s a fair question, and not remotely a new one. When I started covering Google a decade ago, the company talked often about evolving search until it resembled the computer from Star Trek: an omnipresent, omniscient digital assistant.
The Star Trek computer is notable for its ability to answer questions more or less instantly, and if the information it provides is supplied by something akin to the world wide web, that is not something its users ever see. Whatever labor funded the production of the knowledge it refers to goes unmentioned, and whatever sources it relies on go uncredited.

It is this vision of the future that Tuesday’s announcements move us ever closer to. And it is one that is understandably of concern to the many people who have come to rely on Google answering questions imperfectly, or partially, and funneling traffic to them since 1998.

“Web publishers brace for carnage as Google adds AI answers,” read an accurate headline in the Washington Post on Monday. Until now, publishers have been able to rely on significant volumes of traffic coming from the blue links that appear under many queries. But what the company is now calling AI Overviews often obscures those links, requiring users to click to see them, or simply abstracts them away in an automatically generated summary.

Analysts who have studied the company’s early experiments with SGE say a bloodbath is coming. Here are Gerrit De Vynck and Cat Zakrzewski:

Tech research firm Gartner predicts traffic to the web from search engines will fall 25 percent by 2026. Ross Hudgens, CEO of search engine optimization consultancy Siege Media, said he estimates at least a 10 to 20 percent hit, and more for some publishers. “Some people are going to just get bludgeoned,” he said.

Raptive, which provides digital media, audience and advertising services to about 5,000 websites, including Easy Family Recipes, estimates changes to search could result in about $2 billion in losses to creators — with some websites losing up to two-thirds of their traffic. Raptive arrived at these figures by analyzing thousands of keywords that feed into its network, and conducting a side-by-side comparison of traditional Google search and the pilot version of Google SGE.

Google has more reason than most to move cautiously here: it supplies advertising to many of the web pages that are about to lose all that traffic, and stands to lose revenue as visits to those pages disappear. But because the company maintains a stranglehold over much of the digital advertising market, it appears to be betting that it can ride out the transition and smooth out any bumps by pulling levers on its many other sources of revenue.

In his public comments, Pichai has sought both to emphasize the power of AI-enhanced search and to play down any potential disruption to the ecosystem that Google now supports. He told CNBC’s Deirdre Bosa that he did not think AI Overviews would disrupt the company’s business, or publishers’.

“In general, we find it’s both overall increasing usage, and when we look at it year on year, we have been able to grow traffic to the ecosystem,” Pichai told her. “We are prioritizing approaches which will generate traffic as well. So we are working hard on that.”

The company has many levers at its disposal here: it can choose when to show AI Overviews, and when not to; if outbound traffic were to drop precipitously, rousing the attention of regulators or other aggrieved parties, it could revert changes for a time.

Still, as the first day of I/O wound down, it was hard to escape the feeling that the web as we know it is entering a kind of managed decline. Over the past two and a half decades, Google extended itself into so many different parts of the web that it became synonymous with it.
And now that LLMs promise to let users understand all that the web contains in real time, Google at last has what it needs to finish the job: replacing the web, in so many of the ways that matter, with itself.

It remains to be seen how much this matters to the vast majority of people whose livelihoods do not depend directly on web traffic. I suspect billions of people will be happy to receive answers to complicated queries directly on the search results page, uninterested in where the information comes from, so long as it’s accurate enough. But to everyone who depended even a little bit on web search to have their business discovered, or their blog post read, or their journalism funded, the arrival of AI search bodes ill for the future. Google will now do the Googling for you, and everyone who benefited from humans doing the Googling will very soon need to come up with a Plan B.

Google I/O

As grim as I find the implications of this year’s I/O for the web, the company had a lot of new features, models, and experiments to discuss.

- Ask Photos is a forthcoming feature in Google Photos that will help users find specific images by talking to the chatbot in natural language. (Cherlynn Low / Engadget)
- Gmail will get a feature letting users search through their backlogs of emails, and quickly summarize and reply to messages. (Ron Amadeo / Ars Technica)
- Gemini 1.5 Flash, a new model, is lighter and less expensive than the Pro version and is optimized for speed and efficiency. (Pranav Dixit / Engadget)
- Gemini Nano, the smallest of the models, will be integrated directly into the Chrome desktop client, beginning with Chrome 126. It should be able to help with tasks like autofill. (Frederic Lardinois / TechCrunch)
- Gemini on Android, which will replace Google Assistant, will be more tightly integrated with Gmail, Google Messages, and YouTube. (Sarah Perez / TechCrunch)
- Gemini's context window is now 2 million tokens in a private preview, letting users analyze much longer documents, codebases, videos, and audio recordings. Paying users now get access to 1 million-token windows, the most of any publicly available model. (Kyle Wiggers / TechCrunch)
- An upgraded SynthID watermarking system will now mark AI-generated videos and text. Glad to see the company investing in this. (Umar Shakir / The Verge)
- A new “Gems” feature will let users create customized versions of the Gemini assistant with different personalities. Perhaps Google will go in a Her direction, too. (Umar Shakir / The Verge)
- Veo, an AI video generator, and Imagen 3, the latest image generator, are Google’s latest competitors to OpenAI’s Sora and DALL-E 3. (Devindra Hardawar / Engadget)
- Project Astra, a new camera-based AI agent, can identify objects that make sound and help people find misplaced or forgotten items. A demo of Astra was among the highlights of the keynote. (Cherlynn Low / Engadget)
- Google's newest chip, Trillium, is nearly five times as fast as its previous version, and Google says it will greatly improve AI data center performance. (Max A. Cherney / Reuters)
- Google is updating "circle to search," which will soon be able to solve more complicated math problems using the camera. (Julian Chokkattu / WIRED)
Governing

Industry

Those good posts

For more good posts every day, follow Casey’s Instagram stories.
an orca patiently sitting through a Geico commercial before it can watch a boat sinking tutorial on youtube — Ygrene ✔️ (@ygrene.bsky.social) May 13, 2024 at 2:55 PM
THE ORCAS GET POLICE TENSE NOW
— Robert Black (@hurricanexyz.bsky.social) May 13, 2024 at 2:36 PM
“jojo siwa” sounds like something a minion says right before it drops a 1200lb bag of hammers on your skull — street meat (@hotdog.ceo) May 13, 2024 at 7:54 PM
Talk to us

Send us tips, comments, questions, and Plans B: casey@platformer.news and zoe@platformer.news.