The villain in this game is the absence of AI
How to fight an invisible enemy in a game you never asked to play

I just tried to play a game called Caves of Qud. It's a deep, clever game that has the aesthetics of a code editor. It's bedevilled with weird keyboard shortcuts and unfriendly UIs. During the tutorial, you are told to press random keys but are never told why: "this frog might be hostile. Press L," says the game. But why? What does L do? Why do I care about the frog's hostility? What am I even doing here? It's frustrating being told to do something when you have no idea what effect it will have on the world around you.

I've written about the magic circle before: the metaphorical space you enter when you briefly forget about reality and submit to the rules and logics of a game. The magic circle is what makes you actually care about the game; it's the thing that makes footballers cry when they lose an important match; it's the thing that people create subreddits about to discuss their favourite tactics in Call of Duty.

I think when an industry as monstrous as the tech industry dishes out mediocre general-use products, such as generative AI tools, and leaves the task of figuring out use-cases up to society, we are forced into the magic circle of a game that we never asked to play. The game asks us — including those outside of technology politics who don't have rotten brains and therefore shouldn't have to care about this — to morph ourselves around AI's lack of utility and to forgive how rubbish it is. We are told that we need AI for our lives to be better, but no one ever explains how it will be better, why it has to be AI, and why we can't just live our lives without it. The villain in this game is the absence of AI.

It's tough in this magic circle — how do we step out of it? Let's start here: the CEO of the most hated and hateful US healthcare insurance provider has just been assassinated for enacting some intensely inhuman policies that even the grisliest eater of child souls would wince at. The people are happy he is dead. Everyone agrees that living under the opaque tyranny of health insurance is thoroughly undignified, and also that the killer, Luigi Mangione, is extremely good looking (try to change him, ladies). While we can pontificate endlessly about why he did it, we can also just read his very short 'manifesto', which I think explains it just fine. But tbh we don't have to understand what kind of person Luigi Mangione is, in the same way that we don't really have to understand the predictive systems that drove him, in part, to kill a snivelling, shameless CEO. The only thing we have to understand is the harms.

I recently read Ali Alkhatib's piece on Defining AI, where he cites the authors of AI Snake Oil, who try to explain the distinction between the predictive systems that insurance companies might use (which are snake oil) and 'real' AI. Ali questions why this distinction even matters — the point is, the system is opaque, you have no idea how it reaches its unfair decisions, and then you are somehow harmed by it. Its true definition, if that is even knowable, is irrelevant. But the harms are obviously very real.

Simple-minded VC-bootlicker Casey Newton wrote a piece recently where he flattened the discourse around AI into two straightforward camps: one side thinks AI is 'real and dangerous' and the other thinks it's 'fake and it sucks'. He is obviously on the 'real and dangerous' side, because he believes that the direction of VC funding is a reliable indication of whether or not a product is good, or in this case whether it's even real. In other words, he's so deeply entrenched in Silicon Valley's magic circle that he can't see outside of it. He thinks that anyone who belongs in the 'it's fake and it sucks' camp just doesn't understand the true capabilities of AI and is being a contrarian luddite. He's gone from licking the boot to just deep-throating the entire thing.

The magic circle we are placed in by top-down AI enthusiasm is not obscuring some mystical product — it is the product. It consists of the narratives that lock us in a race against the clock to overcome the existential risks of AGI, and the painted future of abundant knowledge, infinite new discoveries, and human immortality. Oversimplified analyses like Casey Newton's fail to see that all of this is about so much more than technical capabilities; it's about the underlying industrial and political strategies, and the kinds of futures we're being locked into with the development of new infrastructures and realities.

Who controls the magic circle? Currently it appears to be the various PR and lobbying machines propped up by monopolies. In a recent podcast episode I produced, competition lawyer Michelle Meagher discussed how monopolies control the pace of innovation and regulation alike, with a historical example: when we discovered that CFCs and other harmful chemicals had poked a giant hole in the ozone layer, new legislation was written to outlaw the use of these chemicals — but the companies producing and selling them used their huge lobbying forces to ensure they would only stop when viable alternatives became available. The harm reduction was all on their timeline.

As journalists, advocates, and just generally people who care, we shouldn't leave the magic circle — rather, we should keep telling everyone that we are in one, and work together to discover game cheats. Commentators like Casey Newton are playing the game on a VR headset that they aren't even aware is affixed to their person. Thinkers like Ali Alkhatib are at least questioning whether or not the game is even good. The role of monopolists and 'innovators' is to set the pace of production and tell stories about why their [insert product in this gaping hole] is so great. By that measure, we can't ask everyone to step out of the current magic circle and into plain reality — no one cares about the harms of gen AI and predictive systems because they are not directly visible, and being a working adult in 2024 is a horrible, energy-sapping grind. So, if you're an AI critic, don't criticise the technical particulars; rather, look at the harms and the narratives, and make people care about them by telling your own stories.

Thank you for subscribing to Horrific/Terrific. You can now follow me on Bluesky. If you need more reasons to distract yourself, try looking at my website, or maybe this ridiculous zine that I write, or how about these silly games that I've made. Enjoy!