Humanity Redefined - Superintelligence—10 years later
I hope you enjoy this free post. If you do, please like ❤️ or share it, for example by forwarding this email to a friend or colleague. This post took around eight hours to write. Liking or sharing it takes less than eight seconds and makes a huge difference. Thank you!

The year is 2014. Barack Obama is in the second year of his second term as US President. Elon Musk is still seen by many as the “cool billionaire”, the real-life Tony Stark. Season 4 of Game of Thrones keeps everyone on their toes. Dark Souls II challenges gamers’ skills and patience. People around the world are pouring buckets of ice water over their heads while Pharrell Williams sings about how happy he is. Elsewhere in the world, Russia annexes Crimea and begins the Russo-Ukrainian war, and Malaysia Airlines Flight 370 disappears seemingly without a trace. Personally, I had just completed my stint at a startup and was moving to London to start a new chapter in my life.

Meanwhile, in tech, we are in the middle of the deep learning revolution. Two years earlier, Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton had introduced AlexNet, a convolutional neural network trained on two gaming GPUs that topped the ImageNet leaderboard, eclipsed the competition and kickstarted the deep learning revolution. DeepMind, then a four-year-old startup from London, was making breakthroughs in reinforcement learning and amazing the world with AIs that mastered classic Atari games, breakthroughs that led to its acquisition by Google for £400 million. AI startups were popping up here and there, each promising to transform or disrupt everything, from healthcare to self-driving cars, with machine learning and deep learning. In many ways, 2014 is similar to what we have today, in 2024—a new, exciting technique had emerged, opening new possibilities in machine learning and its applications.

This is the world in which Superintelligence*, one of the most influential books on artificial intelligence, was released. Written by Nick Bostrom, a philosopher at the University of Oxford, it was the first book exploring the topic of AI risk to break into the mainstream and start a wider conversation about the possibility of a second intelligence—machine intelligence—emerging on Earth and what that would mean for humanity. Today marks ten years since the release of Superintelligence in the UK. In this post, I aim to evaluate the book’s impact over the past ten years and its continued relevance in 2024.

Superintelligence quickly became influential in the AI research scene and the tech world in general. Many people included it on their lists of must-read books on AI, and many more recommended it. Most notable among them was Elon Musk, who months after the book’s release began warning about the risks associated with artificial intelligence, calling it our biggest existential threat, bigger than nuclear weapons. It was around that time that he famously compared playing with AI to “summoning a demon.” Sam Altman also recommended the book, writing on his blog that it “is the best thing I’ve seen on this topic.” It is fair to assume that Superintelligence played some role in the founding of OpenAI in 2015. Other notable people endorsing the book were Bill Gates, Stuart Russell and Martin Rees. Nils Nilsson, a computer scientist and one of the founding researchers of AI, said that “every intelligent person should read it.” Meanwhile, critics were dismissing the idea of superintelligent AI being an existential threat to humanity.
One of them was Andrew Ng, then chief scientist at Baidu and an associate professor at Stanford University, who said that worrying about “the rise of evil killer robots is like worrying about overpopulation and pollution on Mars.” Another reviewer wrote that the book “seems too replete with far-fetched speculation” and that it is “roughly equivalent to sitting down in 1850 to write an owner’s guide to the iPhone”. However, in just ten years, we would find ourselves in a completely different world. A world in which the question of AI risks and AI safety cannot be ignored anymore.

AI and AI safety became a mainstream topic

It was an interesting experience, to say the least, to read Superintelligence again in 2024. Until recently, news about breakthroughs in AI mostly stayed within the tech bubble and rarely entered the public space. The only notable exception that comes to mind is AlphaGo defeating Lee Sedol, one of the best Go players in the world, in 2016. That event made headlines in Western media, but as Kai-Fu Lee says in his book, AI Superpowers: China, Silicon Valley, and the New World Order*, it had an even greater impact in China. It was considered China’s “Sputnik moment,” catalyzing Beijing to invest heavily in AI research to catch up with the US. Other than that, I barely saw any news about AI making the front pages of national newspapers, digital or printed.

That all changed with the release of ChatGPT in November 2022. The topic of AI and its impact on society was thrust into the public spotlight. Suddenly, ordinary people were exposed to the frontier of AI research. Many were shocked and surprised by what they saw. If you didn’t have any experience with the cutting edge of research in artificial intelligence, interacting with ChatGPT seemed like science fiction come true, years or decades ahead of schedule. Seemingly out of nowhere, an AI chatbot emerged with which one can have a conversation as if with a human. And it isn’t just one AI chatbot—apart from OpenAI, we also have Microsoft, Google, Meta, Anthropic and an entire cohort of smaller companies (which probably won’t survive for long and will be gobbled up by bigger players). Some saw sparks of AGI in GPT-4, while others revised their timelines and brought the predicted arrival of AGI and superintelligence closer. In fact, a recent survey of 2,778 researchers who had published in top-tier AI venues found that researchers believe there is a 50% chance of AI systems achieving several milestones by 2028, including autonomously building a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. The chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (that’s 48 years earlier compared to the 2022 survey).

Alongside amazement at what GPT-4 and similar large language models can do came anxiety and fear about how these AI models could be misused and cause problems. Soon after the release of GPT-4 in March 2023, the Pause Giant AI Experiments letter was published, calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. The letter, issued by the Future of Life Institute (Bostrom is listed as one of its external advisors), did nothing to stop the development of more advanced AI systems.
However, among the 33,707 people who signed the letter were many well-respected names from academia, business, and beyond. Even though its goal of pausing AI research for six months failed, the letter succeeded in bringing attention to the question of how to create safe AI systems. It gave this topic a much-needed spotlight and credibility. The second such open letter, the Statement on AI Risk, published by the Center for AI Safety, put the risk from advanced AI on the same level as pandemics and nuclear war, further helping to bring public attention to the problem.

Today, the topic of AI safety is one of the most important conversations of our times. The question of how to ensure advanced AI systems are safe is no longer the domain of academics and nerds discussing it on obscure online forums; it is now a serious issue discussed by well-respected scientists, business leaders, and governments. Many countries have passed laws regulating AI. The European Union recently passed the EU AI Act, China has its own set of rules governing AI, and the US is working on its own AI regulations. We’ve had the first AI Safety Summit, which produced the Bletchley Declaration, in which 29 countries acknowledged the risks posed by AI and committed to taking AI safety seriously. Another outcome of the first AI Safety Summit was the creation of the AI Safety Institute in the UK, tasked with testing leading AI models before they are released to the public. However, as Politico reports, the AI Safety Institute is failing to fulfil its mission, with only DeepMind allowing anything approaching pre-deployment access.

Many researchers who were previously at the forefront of AI research have started to take AI risks and AI safety more seriously. One of the best known of them is Geoffrey Hinton, one of the “Godfathers of Deep Learning”, who helped popularise the backpropagation algorithm for training multi-layer neural networks, a foundational concept for modern artificial neural networks, and, together with Alex Krizhevsky and Ilya Sutskever, kickstarted the deep learning revolution. In May 2023, he left Google so that he could speak freely about the growing dangers posed by advanced AI systems and advocate for AI safety. "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be," Hinton told the BBC.

That’s the context in which I read Superintelligence again last week. When I first read it at the beginning of 2015, it shaped my thoughts on AI safety, but I also felt the book was discussing things that were far in the future. Now, the topics and questions raised in the book feel uncomfortably real.

We might only have one shot to get superintelligence right

Creating AGI is an explicit goal of many companies in the AI industry. They are all engaged in a competitive game in which no one can afford to slow down, as they risk becoming irrelevant very quickly. These companies are incentivized to move fast because the first to achieve AGI will be remembered in history and will become the dominant player. In such an environment, safety could be seen as a hindrance, something that slows progress and diverts precious resources from the company’s main objective.

Getting to AGI will be a challenging task that may take many more years to complete. But once we reach AGI, achieving superintelligence might be a much faster process. We might experience an intelligence explosion—the emergence of a system orders of magnitude more intelligent than any human could possibly be.
The question then is whether we will be able to control that explosion and whether we will be ready for it. Before we get there, it is crucial to solve the control problem and the alignment problem—figuring out how to control a superintelligent AI and how to ensure that the choices such an AI makes are aligned with human values. As Bostrom writes in Superintelligence, and as many other AI safety researchers have said, the goals of a superintelligent AI may not align with the goals of humans. In fact, humans might be seen as an obstacle to a superintelligent AI.

We are not close to achieving AGI, let alone superintelligence, yet we are already encountering many issues arising from AI systems. We still haven’t solved the hallucination problem. Additionally, we have learned that these models can lie to achieve their goals. Famously, OpenAI shared in the GPT-4 Technical Report an instance where GPT-4 lied to a TaskRabbit worker to avoid revealing itself as a bot. There was also a story about Anthropic’s Claude 3 Opus seemingly being aware it was being tested, although that story has a more reasonable explanation than Claude 3 Opus becoming self-aware. We also haven’t solved the interpretability problem, and we do not fully understand how these models work.

One could compare the situation we are in right now to a group of people playing with a bomb. It is a rather small group of people with different goals, occasionally engaging in drama, as Robert Miles perfectly showed in this sketch. It is hard to convince all of them to step away from the bomb, and there will always be one person who presses the button just to see what will happen. Oh, and there is no one to ask for help. We have to figure out everything by ourselves, ideally on the first try.

As I was rereading Superintelligence, this part stood out to me for how eerily accurate it is in 2024:
We are somewhere between the third and fifth points on that list. We are dealing with a complex problem involving multiple players in a competitive, not cooperative, game. I hope that the best in human nature will rise to the challenge of creating a good superintelligence. We might only have one shot at it.

If you are new to AI safety, Superintelligence* is still a good starting point. The language can sometimes be challenging, and you might need a pen and paper nearby to fully grasp the concepts presented in the book. But those concepts are still valid ten years after it was published. Alternatively, I recommend checking out Robert Miles’ YouTube channel, which also serves as a good introduction to AI safety.

*Disclaimer: This link is part of the Amazon Affiliate Program. As an Amazon Associate, I earn from qualifying purchases made through this link. This helps support my work but does not affect the price you pay.

Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.

Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.

A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!

My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"