🦹♂️ Everyone is President Business from the Lego Movie
Superglue yourself to the continued fulfilment of your infinite desires

Riddle me this: is Google the ‘best’ search engine, or is it just the one that pays billions of dollars every year to be the default on everyone’s device? The answer is in the question… Anyway, on with today’s class. It’s a very short musing on a thought I had about AI recently. I mean, that’s all I write these days, but yeah, glad you’re still here.

I was recently having a conversation with Eric Wycoff Rogers, a good friend of mine who runs the London Night Cafe — a great quiet spot to hang out in late at night if you want to sit neck deep in a ball pit and discuss complex topics such as the future of AI (which is exactly what we did). Eric, whose brain is larger, wetter, and more able to retain information than mine, told me about coherent extrapolated volition: a conception of friendly AI that does not need to be told our desires explicitly, but rather anticipates them, and is able to act autonomously in the best interests of all humankind.

It’s very All Watched Over by Machines of Loving Grace, and it probably isn’t even slightly possible, because it would require humans to take the initial step of programming it, and humans are notorious for not being able to agree on what’s best for everyone/making software that doesn’t quite work as intended. Imagining alternative futures is important, though, so let’s just say for a moment that achieving coherent extrapolated volition (CEV) could be possible at some point down the line. Still, do we really need to make things like happiness, comfort, and caring entirely replicable by machines?

Thinking about this concept brings me back to the fact that it’s really not clear what tech leaders are trying to achieve with AI systems. They deploy them without clear use-cases, and then warn us about existential risks.
The general consensus here seems to be that we should work to stop things from getting really bad, instead of actively planning for ways in which AI could actually make things better overall. CEV — or something like it — does not appear to be the goal. The only goals I can extrapolate right now are short-term financial gains and the avoidance of total annihilation.

There’s a weird kind of grisly pride in openly admitting — or insinuating — that a thing you created has the potential to bring about an extinction event. Like, okay?? Thank you, I guess??? Hope you get the Nobel Peace Prize or whatever xoxo. This is abject fear-mongering that allows the creators of AI systems to control the narrative and set us on a narrow path towards a future that is defined by them.

You know, maybe I would love it if there was an AI out there that could cuddle me to sleep and make everything okay again, but those ideas don’t seem to make the headlines. Because cuddles are considered a lot less powerful and impressive than mass destruction (I disagree, but whatever).

I don’t think the overlords of our current top-down generative AI landscape are aiming for utopian cuddly outcomes OR dystopian machine-uprising outcomes — those are way too extreme. They want to keep everything contained within a watered-down, inoffensive midpoint, where we do nothing but generate mundane viral content and automated marketing workflows. Technocapitalists crave control and order; they fucking love prediction models, machine-readable data, and, I dunno… making every single process fully auditable like GitHub does with codebases. They’re all like President Business from the Lego Movie, who can’t stand the idea of people freely expressing themselves, and punishes society by demanding that everyone submit to his idea of perfection or face being superglued into place. This kind of explains why, right now, AI art just seems kind of lame and nothingy.
Max Read recently wrote about ‘Controlism’, a new form of AI art that uses a reference image to create a new image. It very much represents how some of the generative AI community have formalised a way to make extremely kitsch images that are a blurry grey approximation of what everyone likes to look at on social media. Yes, this stuff is going viral in 2023, but I do wonder how long it can last.

I think a likely progression from this is not human extinction or eternal happiness, but rather a future where the outputs you produce from generative tools will be so perfect for you personally that you won’t even bother sharing them with your networks to show off or to gain viral success, nor will you be interested in looking at anyone else’s content. You will be absolutely satisfied with what you can create yourself, because it will be 100% what you want to watch/read/listen to/generally experience. I mean, it will probably all be porn, but still…

💌 Thank you for subscribing to Horrific/Terrific. If you need more reasons to distract yourself, try looking at my website, or maybe this ridiculous zine that I write, or how about these silly games that I’ve made. Enjoy!