Horrific/Terrific - The Vague Normative Framework of Good™️
Hello — thank you to those who came to the first Marked as Urgent last month! It was the first in a series of tech policy meet-ups in London, hosted by me, Ben Whitelaw, and Mark Scott. Subscribe to the calendar on Luma so you don’t miss future invites.

The other week I was at an event and found myself speaking with someone who turned out to be an effective altruist. I was with a friend, a lucky uninitiated FOOL who knew nothing about what EA is, and so of course they asked very disarming, good-faith questions that plunged the EA Man into irreversible intellectual paralysis. Questions like “why have you made an entire movement out of what is basically just giving to charity?” or “why not skip the middle step of acquiring a huge salary and just put your skills and time directly into the charity sector?” I wish I could remember his answer to either of those, but honestly most of what he said was like conversational butt-dialling. At one point he became exasperated — understandable when a few basic questions obliterate your entire worldview into atomic dust — and finally said, “wouldn’t you want to do everything you can to increase good in the world, and decrease evil?” Naturally I totally lost my patience at this point and told him that he sounded like a character from the Marvel Cinematic Universe (I had to stop talking to him after that).

I know this man doesn’t represent all EAs, but conversations like this are very revealing: the culture of EA is infectious, and it inspires fully grown (lol) men to orient their lives around a child-like, zero-sum perspective of the world; one that contains quantifiable amounts of good and evil that you can tweak on a cosmic EQ panel. There is literally a straight line from these ideas to eugenics. Please, follow the line along with me.

The EA and/or AI Safety community’s core messaging is that The West is in a race to get 100% completion on ‘good’ AGI before the authoritarian superpower on the other side of the world builds an ‘evil’ one. Any conversation about safety is narrowed down to a normative conception of ‘good AI’ — there is no consideration of whether it’s safe to yank rare minerals from the earth for GPUs, or of the long-term health effects of living in a community next to a data centre. Rather, ‘safety’ refers to protecting ourselves from some unknowable future, where a superintelligent mega-being might step on the human race like it’s a puddle of ants.

Beyond safety is the implicit quest for infinite human improvement. To make AI ‘good’ is to make ourselves ‘good’. We can use thoroughly dehumanising emotion-recognition technologies to catch early signs of autism in children, for instance. To what end? To scrub the world of all autistic people? And am I meant to believe that a machine can accurately read emotions on a human face and then use them to ‘diagnose’ health conditions, a thing that humans themselves have never managed to do? Finding traits that sit outside that vague normative framework of ‘good’, and therefore categorising them as defects, is ableist. Doing it at scale — which is what technology enables — is literally eugenics. This addiction to measuring and categorising emotions and behaviours demonstrates a pretty strong disdain for humanity.
Flattening human nuance and complexity into oversimplified, machine-readable outputs in order to essentially label a child with autism as somehow ‘wrong’ is a form of curative violence: the idea that disabilities are things that need to be ‘cured’, rather than just a part of human existence. Working from this assumption leaves no room for our ground-breaking technologies to perhaps make the world more enjoyable — or even just liveable — for disabled people, or for literally anyone whose experience exists outside of the robotic cishet white male mould that grows in Silicon Valley.

Look at this recent study put out by Microsoft: it says that generative AI is atrophying our cognition and degrading our critical-thinking abilities. The good people at 404 Media asked, on their podcast: why would Microsoft admit their own technologies are making us dumber? Because they are in the business of scrubbing out our defects, not augmenting our abilities. The research has been issued as a warning (we need critical thinking to be productive workers!) and a promise that, with just a few tweaks, AI will stop eroding our skills in independent thought. AI is supposed to make us better, not worse. As I wrote last time, tech leaders have a huge interest in garnering droves of highly intelligent workers whom they can instrumentalise to build the future we’ve all been waiting for. I think many critics are stuck on the idea that AI will fully replace human capability, making us functionally impotent. Really the end goal is more of a “merge” or “co-evolution”, as Sam Altman puts it — and he says it’s already underway. ‘Where’s my flying car?’ will soon become ‘where’s my race of superhumans?’

Taking a step back: supremacy is kind of built into the mainstream white male approach to digital technology. Social media platforms were designed and built by one small segment of our population, with their narrow set of human experiences, and deployed for everyone around the world to enjoy. The atrocities in Myanmar that Facebook played a part in reveal the one-size-fits-all approach for what it really is: a complete lack of care for humans, and a preference for an ahistorical, flattened-out understanding of culture. What’s worse, the people designing these systems truly believe that everyone is aligned with their world view. As you may have heard, the CEO of Suno recently said that people don’t like making music anyway. Thank you, CEO, for rescuing us from the horrific burden of being creative. And thank you again for using computers to solve all our problems while also using them to figure out what our problems are! This is just great, honestly.

Technology leaders view the systems they design as necessary enhancements to humankind: that without them, we would be lost, scared, and weak. These systems are sold to us like preventative medicine, where the disease or illness is simply ‘being human’. Central to this is a thudding hatred for what makes us who we are: unpredictable behaviours, the desire to be creative, the need to be messy, and the right to bodily autonomy. As I’ve written before, it’s so clear that self-optimisation is rooted in self-hatred. Male wellness influencers like Bryan Johnson perceive their bodies with shame. They reject their human traits like disgusting defects and believe technology can help us live forever — not just as individuals, but as a species. Soon the weight of their own hubris will crush them and then we can all finally relax. Hopefully.
I’m discussing this now because I think the EA community and its various splinter groups have blossomed out of control, and there isn’t nearly enough push-back. What before may have looked like a group of unserious men on message boards is now a full-blown, influential movement with clear, inspirational messaging and a lot of money. And it’s all based on a vague normative framework of ‘Good’. Many people appear to be stuck in ‘hm, let’s just hear what they have to say’ mode, and my patience is absolutely wafer-thin. Talking with the EA Man at that event a while back really confirmed it all: their ideas are rooted in fascism and need to be shut down.

Thank you for subscribing to Horrific/Terrific. You can now follow me on Bluesky. If you need more reasons to distract yourself, try looking at my website, or maybe this ridiculous zine that I write, or how about these silly games that I’ve made. Enjoy!