💼 Making AI ‘useful’ is our biggest mistake
Did you know that we could have just been having fun with it instead?

Hello screen addicts, and welcome to my braincast. Some predictions for a few of my readers who I know personally: Lisa, you are reading this on a coffee break. Alice, you are opening this up on Monday morning. Jo, you are not reading this at all because you have a tiny baby and I respect that. Loads of people unsubscribed after I pointed out, in ‘Elon Musk is just Taylor Swift for men’, that Swifties represent the strongest social network we have at the moment. To me, this is a good thing — I am filtering out the people who aren’t ready to hear the TRUTH. Anyway, subscribe for more stuff like this. Ta.

I’ve made an effort to use and play with generative AI tools a lot over the last couple of months; I think it’s generally better when I’m critiquing something that I actually have experience with, perhaps you agree lol. Something I can’t help noticing is how, when confronted with a system that is purported to be almost limitless in its capabilities, its limitations become glaringly obvious.

I’ve realised that the main draw of a lot of generative AI tools — like ChatGPT and Stable Diffusion and whatever else — is not what they can produce, but that they can produce it with natural language. The problem is that the way we interface with gen AI systems at the moment is mostly by awkwardly typing out a prompt. The way prompts are constructed is far from natural; in fact, writing prompts is something you have to learn. There are subreddits and listicles like this one which tell you how to write the best prompts to achieve a certain goal. The listicle I linked is about how to reliably produce pixel art; the prompts all go something like this: “16-bit pixel art, cyberpunk cityscape, neon lights, and futuristic vehicles --ar 16:9 --s 2500 --upbeta --q 3” and, in some cases, don’t even produce something that looks like pixel art. So the language you use to interface with these machines is generally not natural, and you cannot guarantee good outputs.

And, as usual, another problem is that generative AI tools tend to be muddled and show-boaty — which only adds to how underwhelming it all is. Even if Google’s Gemini demo wasn’t half-faked to look more impressive, and the machine could, in an instant, recognise a bad drawing of a duck on a post-it — who cares? Is that supposed to be… useful? Watching this ‘demo’ was like watching a parent praising their toddler for all the new things they’d learned at school that day. And then you’ve got Grok, a text generator remade in Elon Musk’s image (rude, obnoxious, unfunny), which turned out to be too woke, reminding its redpilled incel userbase that trans women are in fact real women, and that no, it would not say a racial slur, even if it meant saving humanity from a nuke (which is pretty dumb). Fact: just as with other biases, the only way to scrub your training data of the woke mind virus is by scrubbing it out of humans first.

I think a lot of the disappointment also stems from the fact that generative AI systems represent the first piece of ‘user-friendly’ technology which is infinitely difficult to control. It’s funny, because the people who create these systems are often violently addicted to controlling things with machine-readable categorisations and fully auditable breadcrumbs. This reminds me of a very strange conversation I had about the game of Go with a tech guy the other day.
He kept asking me if you could create an algorithm that could score a Go game accurately every time, and I said yes — because of course you can (there’s a little sketch of what I mean further down). For some reason, he just couldn’t accept this; he continued to insist that ‘Go doesn’t actually have rules’. Actually, it has exceedingly simple rules: you place a stone on any empty intersection; it never moves, and if your opponent completely surrounds it, it is captured and removed from the board entirely. That’s it. The rest is just vibes. This man, clearly obsessed with reducing everything (even a fun game!) to a process, was unable to parse the fact that he just didn’t understand Go well enough to create his pointless algorithm. Which is fine! The rules of Go are very very easy to grasp — the gameplay and strategy are not.

I’m telling you about this weird interaction because this man’s attitude perfectly exemplifies the tech bro mindset: ‘there is just no way that this thing isn’t completely 100% knowable with the addition of machines’. This attitude is incompatible with generative AI. The whole point is that you don’t actually know exactly how the neural network operates, and therefore you have no idea what you’re going to end up with when you generate an output. The unpredictability is what’s good about it; it’s not a shortcoming that we need to iron out. If you’re trying to produce a new piece of writing with gen AI, and you know exactly what you want it to say, you may as well write it yourself. Getting AI to generate something exactly as you want it is impossible. The creators of AI systems conflate ‘limitless capabilities’ with randomness, and frustrated users have to deal with that randomness when they thought they were getting something that could reliably automate boring tasks.

The even dumber thing about this is that it’s not even that random anymore. Since the launch of ChatGPT a year ago, OpenAI et al have constrained their models with significant guardrails so that it’s much harder to produce harmful content, and easier to create predictable outputs. It also protects companies from reputational damage and lawsuits, but whatever.

Sorry, I’m going to have to talk about Go again. Frank Lantz, one of my favourite people on Substack, recently wrote a piece about optimising for the best outcomes in games. He talked about how AlphaGo (the AI which has beaten professional human players at Go) will certainly win very easily, but it will also make moves that look really rubbish and boring to human commentators. That’s because AlphaGo is making moves that maximise its chance of winning, rather than its chance of completely obliterating its opponent. Humans, on the other hand, are much more likely to optimise for spectacle rather than just winning: “It’s better to be winning by 50 points than to be winning by 1 point because this bigger margin protects us against the variance in our noisy, imperfect predictions about how the game will unfold.” Lantz goes on to say that humans will opt to “crush our enemies” and, in doing so, asks: “are we overlooking the best way of maximizing the chance of getting the thing we want, because we have mistaken this barbaric proxy for the actual thing we want, like idiots?”

The way AlphaGo approaches a game of Go is similar to the way generative AI systems insist we should be making content: blandly. The content restrictions and guardrails embedded within models are only theoretically good for harm reduction, and are definitely good for ensuring the production of flat, boring, samey content.
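A quick aside for the nerds, since I promised a sketch: this is roughly what ‘of course you can score a Go game with an algorithm’ looks like. It’s a minimal Python sketch of area scoring in the Tromp-Taylor / Chinese-rules style, and it assumes the game has been played all the way out so that every dead stone has actually been captured (it also ignores komi). It’s a flood fill, not AlphaGo.

```python
# Minimal sketch of area (Tromp-Taylor style) scoring for a finished Go game.
# Assumes dead stones have already been captured and ignores komi.
from collections import deque

SIZE = 19

def neighbours(p):
    """Yield the orthogonally adjacent points that are still on the board."""
    r, c = p
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            yield (nr, nc)

def score(board):
    """Return (black_points, white_points) under area scoring.

    `board` maps (row, col) -> "black" or "white"; empty points are absent.
    An empty region counts as territory for a colour only if it touches
    stones of that colour alone; regions touching both colours score for no one.
    """
    totals = {"black": 0, "white": 0}
    for colour in board.values():
        totals[colour] += 1                      # each stone on the board is a point

    seen = set()
    for point in ((r, c) for r in range(SIZE) for c in range(SIZE)):
        if point in board or point in seen:
            continue
        # Flood-fill one empty region, noting which colours it borders.
        region, borders, queue = set(), set(), deque([point])
        while queue:
            p = queue.popleft()
            if p in region:
                continue
            region.add(p)
            for n in neighbours(p):
                if n in board:
                    borders.add(board[n])
                else:
                    queue.append(n)
        seen |= region
        if len(borders) == 1:                    # surrounded by exactly one colour
            totals[borders.pop()] += len(region)

    return totals["black"], totals["white"]
```

Count the stones, flood-fill the empty regions, and give each region to whichever colour completely surrounds it. The scoring was never the hard part of Go; the playing is. Anyway.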
It has been proven over and over again that if someone wants to create something offensive or controversial with DALL-E, they can if they spend long enough on it. They could also do it in Photoshop if they wanted! This is not a new problem.

The problem is that we’ve decided that AI tools are meant to be useful. This is completely wrong. If we’re just going to program a ‘limitless’ machine to generate predictable outputs for work purposes (ew), then what are we doing? I thought the point was to have something that appears to express itself like a human, but does so in a hilarious and whimsical way; I really don’t want an infinite content machine to only ever give me the closest, safest, most underwhelming approximation of what I ask for — I want it to give me something I never would have thought of. Otherwise it’s just boring ffs.

💌 Thank you for subscribing to Horrific/Terrific. If you need more reasons to distract yourself, try looking at my website or maybe this ridiculous zine that I write or how about these silly games that I’ve made. Enjoy!