The villain in this game is the absence of AI

How to fight an invisible enemy in a game you never asked to play

I just tried to play a game called Caves of Qud. It's a deep, clever game with the aesthetics of a code editor, bedevilled by weird keyboard shortcuts and unfriendly UIs. During the tutorial, you are told to press random keys but never told why: "this frog might be hostile. Press L," says the game. But why? What does L do? Why do I care about the frog's hostility? What am I even doing here? It's frustrating to be told to do something when you have no idea what effect it will have on the world around you.

I've written about the magic circle before: the metaphorical space you enter when you briefly forget about reality and submit to the rules and logics of a game. The magic circle is what makes you actually care about the game; it's the thing that makes footballers cry when they lose an important match; it's the thing that makes people create subreddits to discuss their favourite tactics in Call of Duty.

I think when an industry as monstrous as the tech industry dishes out mediocre general-use products, such as generative AI tools, and leaves the task of figuring out use-cases up to society, we are forced into the magic circle of a game we never asked to play. The game asks us — including those outside of technology politics who don't have rotten brains and therefore shouldn't have to care about this — to morph ourselves around AI's lack of utility and to forgive how rubbish it is. We are told that we need AI for our lives to be better, but no one ever explains how it will be better, why it has to be AI, or why we can't just live our lives without it. The villain in this game is the absence of AI. It's tough in this magic circle — how do we step out of it?
Let's start here: the CEO of the most hated and hateful US healthcare insurance provider has just been assassinated for enacting some intensely inhuman policies that even the grizzliest eater of child souls would wince at. The people are happy he is dead. Everyone agrees that living under the opaque tyranny of health insurance is thoroughly undignified, and also that the killer, Luigi Mangione, is extremely good looking (try to change him, ladies). While we can pontificate endlessly about why he did it, we can also just read his very short 'manifesto', which I think explains it just fine.

But tbh we don't have to understand what kind of person Luigi Mangione is, in the same way that we don't really have to understand the predictive systems that drove him, in part, to kill a snivelling, shameless CEO. The only thing we have to understand is the harms. I recently read Ali Alkhatib's piece on Defining AI, where he cites the authors of AI Snake Oil, who try to draw a distinction between the predictive systems insurance companies might use (which are snake oil) and 'real' AI. Ali questions why this distinction even matters — the point is, the system is opaque, you have no idea how it reaches its unfair decisions, and then you are somehow harmed by it. Its true definition, if that is even knowable, is irrelevant. But the harms are obviously very real.

Simple-minded VC-bootlicker Casey Newton wrote a piece recently where he flattened the discourse around AI into two straightforward camps: one side thinks AI is 'real and dangerous', the other that it's 'fake and it sucks'. He is, obviously, on the 'real and dangerous' side, because he believes the direction of VC funding is a reliable indication of whether a product is good — or, in this case, whether it's even real. In other words, he's so deeply entrenched in Silicon Valley's magic circle that he can't see outside of it.
He thinks that anyone in the 'fake and it sucks' camp just doesn't understand the true capabilities of AI and is being a contrarian luddite. He's gone from licking the boot to just deep-throating the entire thing.

The magic circle we are placed in by top-down AI enthusiasm is not obscuring some mystical product — it is the product. It consists of the narratives that lock us in a race against the clock to overcome the existential risks of AGI, and the painted future of abundant knowledge, infinite new discoveries, and human immortality. Oversimplified analyses like Casey Newton's fail to see that all of this is about so much more than technical capabilities; it's about the underlying industrial and political strategies, and the kinds of futures we're being locked into with the development of new infrastructures and realities.

Who controls the magic circle? Currently it appears to be the various PR and lobbying machines propped up by monopolies. In a recent podcast episode I produced, competition lawyer Michelle Meagher discussed how monopolies control the pace of innovation and regulation alike, with a historical example: when we discovered that CFCs and other harmful chemicals had poked a giant hole in the ozone layer, new legislation was written to outlaw their use — but the companies producing and selling these chemicals used their huge lobbying forces to ensure they would only stop when viable alternatives became available. The harm reduction was all on their timeline.

As journalists and advocates and just generally people who care, I don't think we should leave the magic circle — rather, we should keep telling everyone that we are in one, and work together to discover game cheats. Commentators like Casey Newton are playing the game on a VR headset that they aren't even aware is affixed to their person. Thinkers like Ali Alkhatib are at least questioning whether the game is even good.
The role of monopolists and 'innovators' is to set the pace of production and tell stories about why their [insert product in this gaping hole] is so great. By that measure, we can't ask everyone to step out of the current magic circle and into plain reality — no one cares about the harms of gen AI and predictive systems because they are not directly visible, and being a working adult in 2024 is a horrible, energy-sapping grind. So, if you're an AI critic, don't criticise the technical particulars; look instead at the harms and the narratives, and make people care about them by telling your own stories.

Thank you for subscribing to Horrific/Terrific. You can now follow me on Bluesky. If you need more reasons to distract yourself, try looking at my website, or maybe this ridiculous zine that I write, or how about these silly games that I've made. Enjoy!