💼 Making AI ‘useful’ is our biggest mistake
Did you know that we could have just been having fun with it instead?

Hello screen addicts, and welcome to my braincast. Some predictions for a few of my readers who I know personally: Lisa, you are reading this on a coffee break. Alice, you are opening this up on Monday morning. Jo, you are not reading this at all because you have a tiny baby, and I respect that. Loads of people unsubscribed after I pointed out that Swifties represent the strongest social network we have at the moment in 'Elon Musk is just Taylor Swift for men'. To me, this is a good thing — I am filtering out the people who aren't ready to hear the TRUTH. Anyway, subscribe for more stuff like this. Ta.

I've made an effort to use and play with generative AI tools a lot over the last couple of months; I think it's generally better when I'm critiquing something that I actually have experience with, perhaps you agree lol. Something I can't help noticing is how, when confronted with a system that is purported to be almost limitless in its capabilities, its limitations become glaringly obvious.

I've realised that the main draw of a lot of generative AI tools — like ChatGPT and Stable Diffusion and whatever else — is not what they can produce, but that they can produce it with natural language. The problem is that the way we interface with gen AI systems at the moment is mostly by awkwardly typing out a prompt. The way prompts are constructed is far from natural; in fact, writing prompts is something you have to learn. There are subreddits and listicles like this one which tell you how to write the best prompts to achieve a certain goal. The listicle I linked is about how to reliably produce pixel art; the prompts all go something like this: "16-bit pixel art, cyberpunk cityscape, neon lights, and futuristic vehicles --ar 16:9 --s 2500 --upbeta --q 3" and, in some cases, don't even produce something that looks like pixel art.
So the language you use to interface with these machines is generally not natural, and you cannot guarantee good outputs. And, as usual, another problem is that generative AI tools tend to be muddled and show-boaty — which only adds to how underwhelming it all is. Even if Google's Gemini demo wasn't half-faked to look more impressive, and the machine could, in an instant, recognise a bad drawing of a duck on a post-it — who cares? Is that supposed to be… useful? Watching this 'demo' was like watching a parent praising their toddler for all the new things they'd learned at school that day.

And then you've got Grok, a text generator remade in Elon Musk's image (rude, obnoxious, unfunny), but it turned out to be too woke, reminding its redpilled incel userbase that trans women are in fact real women, and that no, it would not say a racial slur, even if it meant saving humanity from a nuke (which is pretty dumb). Fact: just as with other biases, the only way to scrub your training data of the woke mind virus is to scrub it out of humans first.

I think a lot of the disappointment also stems from the fact that generative AI systems represent the first piece of 'user-friendly' technology which is infinitely difficult to control. It's funny, because the people who create these systems are often violently addicted to controlling things with machine-readable categorisations and fully auditable breadcrumbs.

This reminds me of a very strange conversation I had about the game of Go with a tech guy the other day. He kept asking me if you could create an algorithm that could score a Go game accurately every time, and I said yes — because of course you can. For some reason, he just couldn't accept this; he continued to insist that 'Go doesn't actually have rules'. Actually, it has exceedingly simple rules: you can place a stone on any empty intersection of the board; it cannot be moved, unless it is captured by your opponent, at which point it is removed from the board entirely. That's it.
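To underline just how tiny that rulebook is, here's a rough Python sketch of the capture rule — entirely my own illustration, nothing from that conversation. A stone (or a connected group of stones) is captured when it has no liberties, i.e. no empty points adjacent to the group:

```python
# Minimal sketch of Go's capture rule. The board is a dict mapping
# (row, col) -> 'B' or 'W'; any absent point is empty. Suicide and ko
# checks are deliberately omitted to keep the sketch tiny.

def group_and_liberties(board, start, size=19):
    """Flood-fill the group containing `start`; return (group, liberties)."""
    colour = board[start]
    group, liberties, frontier = set(), set(), [start]
    while frontier:
        r, c = frontier.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue  # off the board
            if (nr, nc) not in board:
                liberties.add((nr, nc))  # empty neighbour = a liberty
            elif board[(nr, nc)] == colour:
                frontier.append((nr, nc))  # same colour: part of the group
    return group, liberties

def play(board, point, colour, size=19):
    """Place a stone, then remove any opposing groups left with no liberties."""
    board = dict(board)
    board[point] = colour
    enemy = 'W' if colour == 'B' else 'B'
    r, c = point
    for neighbour in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if board.get(neighbour) == enemy:
            group, liberties = group_and_liberties(board, neighbour, size)
            if not liberties:
                for stone in group:  # captured: removed from the board entirely
                    del board[stone]
    return board
```

That's more or less the whole game, mechanically speaking. Everything that made the tech guy's scoring algorithm feel impossible to him lives outside this code, in the play itself.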
The rest is just vibes. This man, clearly obsessed with reducing everything (even a fun game!) to a process, was unable to parse the fact that he just didn't understand Go well enough to create his pointless algorithm. Which is fine! The rules of Go are very, very easy to grasp — the gameplay and strategy are not. I'm telling you about this weird interaction because this man's attitude perfectly exemplifies the tech bro mindset: 'there is just no way that this thing isn't completely 100% knowable with the addition of machines'.

This attitude is incompatible with generative AI. The whole point is that you don't actually know exactly how the neural network operates, and therefore you have no idea what you're going to end up with when you generate an output. The unpredictability is what's good about it; it's not a shortcoming that we need to iron out. If you're trying to produce a new piece of writing with gen AI, and you know exactly what you want it to say, you may as well write it yourself. Getting AI to generate something exactly as you want it is impossible. The creators of AI systems conflate 'limitless capabilities' with randomness, and frustrated users have to deal with that randomness when they thought they were getting something that could reliably automate boring tasks.

The even dumber thing is that it's not even that random anymore. Since the launch of ChatGPT a year ago, OpenAI et al. have constrained their models with significant guardrails so that it's much harder to produce harmful content, and easier to create predictable outputs. This also protects companies from reputational damage and lawsuits, but whatever.

Sorry, I'm going to have to talk about Go again. Frank Lantz, one of my favourite people on Substack, recently wrote a piece about optimising for the best outcomes in games.
He talked about how AlphaGo (the AI which has beaten professional human players at Go) will certainly win very easily, but it will also make moves that look really rubbish and boring to human commentators. That's because AlphaGo is making moves that maximise its chance of winning, rather than its chance of completely obliterating its opponent. Humans, on the other hand, are much more likely to optimise for spectacle, rather than just winning: "It's better to be winning by 50 points than to be winning by 1 point because this bigger margin protects us against the variance in our noisy, imperfect predictions about how the game will unfold." Lantz goes on to say that humans will opt to "crush our enemies", and asks whether, in doing so, "are we overlooking the best way of maximizing the chance of getting the thing we want, because we have mistaken this barbaric proxy for the actual thing we want, like idiots?"

The way AlphaGo approaches a game of Go is similar to the way generative AI systems insist we should be making content: blandly. The content restrictions and guardrails embedded within models are only theoretically good for harm reduction, and are definitely good for ensuring the production of flat, boring, samey content. It has been proven over and over again that if someone wants to create something offensive or controversial with DALL-E, they can if they spend long enough on it. They could also do it in Photoshop if they wanted! This is not a new problem.

The problem is that we've decided that AI tools are meant to be useful. This is completely wrong. If we're just going to program a 'limitless' machine to generate predictable outputs for work purposes (ew), then what are we doing?
I thought the point was to have something that appears to express itself like a human, but does so in a hilarious and whimsical way; I really don't want an infinite content machine to only ever give me the closest, safest, most underwhelming approximation of what I ask for — I want it to give me something I never would have thought of. Otherwise it's just boring ffs.

💌 Thank you for subscribing to Horrific/Terrific. If you need more reasons to distract yourself try looking at my website or maybe this ridiculous zine that I write or how about these silly games that I've made. Enjoy!