🎙 Jeff Hawkins, author of A Thousand Brains, on the path to AGI
Learning from practitioners and thinkers is inspiring: the experience of researchers, engineers, and entrepreneurs doing real ML work is an excellent source of insight. Share this interview if you like it; no subscription is needed.

👤 Quick bio / Jeff Hawkins
Jeff Hawkins (JH): I studied electrical engineering in college and started my career at Intel. But soon after, I began reading about the brain. I was struck by the fact that scientists had amassed many details of the brain’s architecture, but how it worked was a mystery. It was as if we had a circuit diagram of the brain but no idea how it functioned. I felt we could solve this mystery in my lifetime, and when we did, we would have a much better idea of what intelligence is and how to build intelligent machines. I found this challenge exciting, and I have pursued that goal ever since.

My career path has not been linear. Along the way, I founded two mobile computing companies, Palm and Handspring, and I created and ran the Redwood Neuroscience Institute, which is now at U.C. Berkeley. But throughout my career, my long-term goal has always been the same: understand the brain and then create machines that work on the same principles.

Today, I am co-founder and chief scientist at Numenta. We spent a decade reverse-engineering the neocortex, and we had a lot of success. We are now applying what we learned to improve existing neural networks and create a new form of AI based on sensory-motor learning.

🛠 AI Work
JH: As I said, I believe the quickest and surest way to create AGI is to study brains. This didn’t have to be the case. Perhaps we could have created truly intelligent machines by paying no attention to neuroscience. The early attempts at symbolic AI took this approach. They failed. Today’s artificial neural networks have achieved some remarkable results, and they are loosely modeled on brain principles. But today’s AI is still far from being intelligent.

We are all familiar with the shortcomings of today’s neural networks. They are difficult to train. They are brittle. They don’t generalize. If we want to claim that a deep learning system understands something, we at least have to admit that its understanding is very shallow. No AI system today has the kind of general knowledge that a human has.

Numenta dove deep into neuroscience, not because we want to emulate a human brain but to discover the principles of how it works. It seems obvious that there are some basic things that brains do that we are missing in today’s AI. Once we understand the brain’s operating principles, we can leave neuroscience behind. Fortunately, we have already learned most of the techniques used by the brain that I believe will be essential for AGI. I provide a list of these in my recent book.

Let me give you one example here: the brain is a sensory-motor learning system. We learn by moving and sensing, moving and sensing. We move our bodies, our eyes, and our touch sensors. The brain keeps track of where our sensors are relative to things in the world using a type of neural reference frame. This allows the brain to integrate sensation and movement to quickly learn three-dimensional models of the environments and objects we interact with. Learning through movement and storing knowledge in reference frames is essential. I believe that all intelligent machines in the future will work this way.
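To make the reference-frame idea concrete, here is a minimal toy sketch in Python. It illustrates the concept only and is not Numenta’s implementation; all names here (ObjectModel, learn_object) are hypothetical. Learning amounts to moving a sensor and storing (location, feature) pairs in an object-centric reference frame.

```python
# Toy sketch of sensory-motor learning with a reference frame.
# Illustration of the idea only, not Numenta's code; all names
# (ObjectModel, learn_object) are hypothetical.

class ObjectModel:
    """Stores features keyed by location in an object-centric reference frame."""
    def __init__(self):
        self.features_at = {}  # location (x, y, z) -> sensed feature

    def learn(self, location, feature):
        self.features_at[location] = feature

    def matches(self, location, feature):
        return self.features_at.get(location) == feature

def learn_object(model, movements_and_sensations):
    # Learning = moving a sensor and storing (location, feature) pairs.
    for location, feature in movements_and_sensations:
        model.learn(location, feature)

# Learn a toy "coffee cup" by touching it at three locations.
cup = ObjectModel()
learn_object(cup, [((0, 0, 0), "curved_surface"),
                   ((0, 5, 0), "rim"),
                   ((3, 2, 0), "handle")])

# Later, a single (location, feature) observation provides evidence:
print(cup.matches((3, 2, 0), "handle"))  # True
```

The point of the sketch is that knowledge about the cup lives in the mapping from locations to features, not in a flat list of features.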
JH: Right, Vernon Mountcastle was the first scientist to propose that the neocortex is made up of tens of thousands of nearly identical units he called cortical columns. A cortical column is about the size of a grain of rice. Imagine standing 150,000 grains of rice on end, side by side, and you get a picture of the human neocortex. Cortical columns are complex; they have many types of neurons arranged in multiple layers, connected with hundreds of millions of synapses. Mountcastle proposed that each cortical column performs the same intrinsic function, applied to different problems, such as vision, hearing, and touch.

We believe that each cortical column is a complete sensory-motor learning system. Each cortical column uses reference frames to learn models and store knowledge. Therefore, the entire neocortex is a distributed sensory-motor modeling system. For example, the brain has separate models of what a coffee cup looks like, sounds like, and feels like. This is why we call it the Thousand Brains Theory. These separate models communicate with each other to reach a consensus on what is happening in the world.

There are several advantages to this type of distributed architecture from an AI perspective. It makes it easy to build AI systems using multiple sensors and multiple sensor modalities. For example, a car manufacturer could easily swap different types of sensors in and out to provide different capabilities, without retraining. A distributed architecture also makes an AI system robust to noise, occlusions, and the complete loss of one or more sensors. But perhaps most importantly, it allows us to create smaller and larger AI systems by adding and deleting cortical columns. The primary difference between a rat’s neocortex, a monkey’s neocortex, and a human neocortex is the number of cortical columns.

In the future, I believe we will create silicon equivalents of cortical columns. Chips will contain varying numbers of these modeling units, analogous to how CPU chips contain varying numbers of cores. I see no reason why we can’t make machines that have more cortical-column equivalents than a human.
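Here is a toy sketch of the consensus idea, again purely illustrative: each column keeps its own learned models and casts a vote for the objects consistent with what it senses, and agreement emerges by pooling votes. The models and observations below are made up for the example.

```python
# Toy sketch of "voting" across cortical-column-like models.
# Hypothetical illustration: each column has its own learned models
# and votes for the objects consistent with its own observation.

from collections import Counter

def column_candidates(column_model, location, feature):
    """Objects consistent with what this column sensed at this location."""
    return {name for name, model in column_model.items()
            if model.get(location) == feature}

# Three columns (e.g., three fingertips), each with its own learned models.
columns = [
    {"cup": {(0, 0): "curved"}, "stapler": {(0, 0): "flat"}},
    {"cup": {(1, 0): "rim"},    "stapler": {(1, 0): "hinge"}},
    {"cup": {(2, 0): "handle"}, "stapler": {(2, 0): "spring"}},
]
observations = [((0, 0), "curved"), ((1, 0), "rim"), ((2, 0), "handle")]

votes = Counter()
for col, (loc, feat) in zip(columns, observations):
    for candidate in column_candidates(col, loc, feat):
        votes[candidate] += 1

print(votes.most_common(1))  # [('cup', 3)] -- consensus across columns
```

Note how the robustness Hawkins describes falls out of the structure: dropping one column (sensor) merely removes one vote rather than breaking the system.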
JH: Incorporating time is absolutely critical. And you’re right, most of today’s ANNs don’t incorporate time at all. But think about how you interact with the world. If I ask you to learn what a new object feels like, you move your fingers over the object's surface. Similarly, when you look at something, your eyes are constantly moving, about three times a second, attending to different parts of the world. When we move any part of our body, the inputs to the brain change. Therefore, the inputs to the brain are constantly changing over time. The brain is able to make sense of this time-based stream of inputs by associating each input with locations relative to objects, environments, and the body. The key point is that time-changing inputs are not a problem to be compensated for but are the essence of how we learn and infer.
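A minimal sketch of how a time-based stream drives inference, under the same toy assumptions as the sketches above: each movement yields a new (location, feature) pair, and the set of objects consistent with the whole sequence shrinks over time.

```python
# Toy sketch: time-varying input as the basis of inference.
# Hypothetical illustration: each movement yields a (location, feature)
# pair, and the candidate set shrinks as evidence accumulates.

models = {
    "cup":     {(0, 0): "curved", (0, 5): "rim",   (3, 2): "handle"},
    "bowl":    {(0, 0): "curved", (0, 5): "rim",   (3, 2): "curved"},
    "stapler": {(0, 0): "flat",   (0, 5): "hinge", (3, 2): "spring"},
}

candidates = set(models)
stream = [((0, 0), "curved"), ((0, 5), "rim"), ((3, 2), "handle")]

for t, (location, feature) in enumerate(stream):
    candidates = {name for name in candidates
                  if models[name].get(location) == feature}
    print(f"t={t}: candidates = {candidates}")
# The candidate set narrows to {'cup'} by t=2: the moving, time-changing
# input is what makes the inference possible.
```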
JH: Yes. The way I view it, there are many people trying to build intelligent machines. Ultimately, we will all reach a consensus on the key principles and key attributes needed for AI. Two of those attributes will be self-supervised learning and continual learning. We have shown that ANNs that use neurons with dendrites can learn continuously and rapidly, without catastrophic forgetting. We have also shown that a system with predictive models can use prediction error for self-supervised learning. Perhaps there are other methods for achieving self-supervised and continual learning, but it is clear to me that these will be essential for AI.
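As a toy illustration of prediction error as a self-supervised signal (a generic sketch, not Numenta’s dendritic model): a linear predictor learns a repeating sequence with no training signal other than the next input itself.

```python
# Toy sketch of self-supervised learning from prediction error.
# Hypothetical illustration: a linear model predicts the next input in a
# stream; the prediction error itself is the training signal, so no
# external labels are needed.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))  # predicts next input from current
lr = 0.1

# A repeating sensory sequence the model can learn to anticipate.
sequence = [np.eye(4)[i] for i in (0, 1, 2, 3)] * 50

for x_now, x_next in zip(sequence, sequence[1:]):
    prediction = W @ x_now
    error = x_next - prediction        # self-supervised signal
    W += lr * np.outer(error, x_now)   # reduce future prediction error

print(np.round(W @ np.eye(4)[0], 2))   # ~[0, 1, 0, 0]: predicts element 1
```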
JH: I love this question, but the answer isn’t as simple as picking a, b, c, or d. First, although I believe deep learning is not on the path to AGI, it is not going away either. Deep learning is a useful technology that can outperform humans on many tasks. At Numenta, we have shown that we can dramatically improve the performance of deep learning networks using principles of sparsity that we learned by studying the brain. As far as I can tell, we are leading in this area. We have a team dedicated to this because we think it is environmentally and financially valuable.

But when it comes to AGI, we need something different. The symbolic AI pioneers had the right idea but the wrong execution. They argued, correctly in my opinion, that intelligent machines need to have everyday knowledge about the world. Thus, they believed that achieving the intelligence of a five-year-old was more important than, say, creating the world’s best chess player. Where they went wrong is that they tried to manually collate knowledge and then encode that knowledge in software. This turned out to be impossible. I recall one AI researcher saying something like, “how to represent knowledge in a computer is not just a difficult problem for AI, it is the ONLY problem of AI.”

The Thousand Brains Theory provides the solution. Our brains learn models of everything we interact with. These models are built using the neural equivalent of reference frames, allowing them to capture the structure and behavior of the things we observe. Knowledge is stored in the models. Take, for example, a stapler. How can we store knowledge of staplers, such as how the parts move and what happens when you press down on the top? In the old days, AI researchers would make a list of stapler facts and behaviors and then try to encode them in software. The brain, by contrast, observes the stapler, learns a model of it, and, when asked how it works, uses the model to mentally play back what it previously observed. I would say these models are symbolic, but not in the way AI researchers used the term in the past.

In summary, AGI systems will use sensory-motor learning to create models of environments and objects. Knowledge is encoded in these models via reference frames. Whether we call this form of knowledge representation symbolic or not is unimportant. Today’s deep learning networks have nothing equivalent to this.
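Earlier in this answer, Hawkins mentions improving deep networks with brain-derived sparsity. One commonly cited way to enforce activation sparsity, which Numenta has explored in its published research, is a k-winners-take-all activation; the sketch below is a generic illustration of that mechanism, not Numenta’s code.

```python
# Toy sketch of activation sparsity via k-winners-take-all (k-WTA).
# Generic illustration, not Numenta's implementation: after a layer
# computes its activations, keep only the k largest and zero the rest,
# so each input is represented by a small, sparse set of active units.

import numpy as np

def k_winners(activations, k):
    """Zero out all but the k largest activations."""
    out = np.zeros_like(activations)
    top = np.argsort(activations)[-k:]   # indices of the k winners
    out[top] = activations[top]
    return out

dense = np.array([0.1, 0.9, -0.3, 0.7, 0.2, 0.05, 0.8, -0.1])
sparse = k_winners(dense, k=3)
print(sparse)  # only 0.9, 0.8, and 0.7 survive; the rest are zero
```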
💥 Recommended book

I wrote both of my books (On Intelligence and A Thousand Brains) with this reader in mind. There are many good books about machine learning, but there are few places you can read about intelligence from a broader perspective, including what the brain tells us about intelligence. That’s why I wrote A Thousand Brains. I am not trying to sell books, but I seriously believe a young ML student would benefit from reading A Thousand Brains early in their education.