Horrific-Terrific - The Vague Normative Framework of Good™️
Hello — thank you to those who came to the first Marked as Urgent last month! It was the first in a series of tech policy meetups in London, hosted by me, Ben Whitelaw, and Mark Scott. Subscribe to the calendar on Luma so you don’t miss future invites.

The other week I was at an event and found myself speaking with someone who turned out to be an effective altruist. I was with a friend: a lucky, uninitiated FOOL who knew nothing about what EA is, and so of course they asked very disarming, good-faith questions that plunged the EA Man into irreversible intellectual paralysis. Questions like “why have you made an entire movement out of what is basically just giving to charity?” or “why not skip the middle step of acquiring a huge salary and just put your skills and time directly into the charity sector?” I wish I could remember his answer to either of those, but honestly most of what he said was like conversational butt-dialling. At one point he became exasperated — understandable when a few basic questions obliterate your entire worldview into atomic dust — and finally said, “wouldn’t you want to do everything you can to increase good in the world, and decrease evil?” Naturally I totally lost my patience at this point and told him that he sounded like a character from the Marvel Cinematic Universe (I had to stop talking to him after that).

I know this man doesn’t represent all EAs, but conversations like this are very revealing: the culture of EA is infectious, and it inspires fully grown (lol) men to orient their lives around a child-like, zero-sum perspective of the world — one that contains quantifiable amounts of good and evil that you can tweak on a cosmic EQ panel. There is literally a straight line from these ideas to eugenics. Please, follow the line along with me.

The EA and/or AI Safety community’s core messaging is that The West is in a race to get 100% completion on ‘good’ AGI before the authoritarian superpower on the other side of the world builds an ‘evil’ one. Any conversation about safety is narrowed down to a normative conception of ‘good AI’ — there is no consideration of whether it’s safe to yank rare minerals from the earth for GPUs, or of the long-term health effects of living next to a data centre. Rather, ‘safety’ refers to protecting ourselves from some unknowable future, where a superintelligent mega-being might step on the human race like it’s a puddle of ants.

Beyond safety is the implicit quest for infinite human improvement. To make AI ‘good’ is to make ourselves ‘good’. We can use thoroughly dehumanising emotion-recognition technologies to catch early signs of autism in children, for instance. To what end? To scrub the world of all autistic people? Also, am I meant to believe that a machine can accurately read emotions on a human face and then use them to ‘diagnose’ health issues — a thing that humans themselves have never managed? Finding traits that sit outside that vague normative framework of ‘good’, and therefore categorising them as defects, is ableist. Doing this at scale — which is what technology enables — is literally eugenics. This addiction to measuring and categorising emotions and behaviours demonstrates a pretty strong disdain for humanity.
Flattening human nuance and complexity into oversimplified, machine-readable outputs in order to essentially label a child with autism as somehow ‘wrong’ is a form of curative violence: the idea that disabilities are things that need to be ‘cured’, rather than just a part of human existence. Working from this assumption leaves no room for our ground-breaking technologies to perhaps make the world more enjoyable — or even just liveable — for disabled people, or for literally anyone whose experience exists outside of the robotic cishet white male mould that grows in Silicon Valley.

Look at this recent study put out by Microsoft: it says that generative AI is atrophying our cognition and degrading our critical-thinking abilities. The good people at 404 Media asked on their podcast: why would Microsoft admit that its own technologies are making us dumber? Because it is in the business of scrubbing out our defects, not augmenting them. The research has been issued as a warning (we need critical thinking to be productive workers!) and as a promise that, with just a few tweaks, AI will stop eroding our skills in independent thought. AI is supposed to make us better, not worse.

As I wrote last time, tech leaders have a huge interest in garnering droves of highly intelligent workers whom they can instrumentalise to build the future we’ve all been waiting for. I think many critics are stuck on the idea that AI will fully replace human capability, making us functionally impotent. Really, the end goal is more of a “merge” or “co-evolution”, as Sam Altman puts it — and he says it’s already underway. ‘Where’s my flying car?’ will soon become ‘where’s my race of superhumans?’

Taking a step back, supremacy is kind of built into the mainstream white male approach to digital technology: social media platforms were designed and built by one small segment of our population, with their narrow set of human experiences, and deployed for everyone around the world to enjoy. The atrocities in Myanmar that Facebook played a part in reveal the one-size-fits-all approach for what it really is: a complete lack of care for humans, and a preference for an ahistorical, flattened-out understanding of culture. What’s worse, the people designing these systems truly believe that everyone is aligned with their worldview. As you may have heard, the CEO of Suno recently said that people don’t like making music anyway. Thank you, CEO, for rescuing us from the horrific burden of being creative. And thank you again for using computers to solve all our problems while also using them to figure out what our problems are! This is just great, honestly.

Technology leaders view the systems they design as necessary enhancements to humankind: without them, we would be lost, scared, and weak. These systems are sold to us like preventative medicine where the disease or illness is simply ‘being human’. Central to this is a thudding hatred for what makes us who we are: unpredictable behaviours, the desire to be creative, the need to be messy, and the right to bodily autonomy. As I’ve written before, it’s so clear that self-optimisation is rooted in self-hatred. Male wellness influencers like Bryan Johnson perceive their bodies with shame. They reject their human traits as disgusting defects and believe technology can help us live forever — not just as individuals, but as a species. Soon the weight of their own hubris will crush them, and then we can all finally relax. Hopefully.
I’m discussing this now because I think the EA community and its various splinter groups have blossomed out of control, and there isn’t nearly enough pushback. What may once have looked like a group of unserious men on message boards is now a full-blown, influential movement with clear, inspirational messaging and a lot of money. And it’s all based on a vague normative framework of ‘Good’. Many appear to be stuck in ‘hm, let’s just hear what they have to say’ mode, and my patience is absolutely wafer-thin. Talking with the EA Man at that event a while back really confirmed it all: their ideas are rooted in fascism and need to be shut down.

Thank you for subscribing to Horrific/Terrific. You can now follow me on Bluesky. If you need more reasons to distract yourself, try looking at my website, or maybe this ridiculous zine that I write, or how about these silly games that I’ve made. Enjoy!