We're the only species that composes symphonies, erects skyscrapers, builds computers, and regularly gets off the planet. But how did human intelligence evolve from our ancestors in the animal kingdom? And now that our species is scintillatingly shrewd, what does knowledge of that road mean as we work to build intelligence artificially? Join Eagleman this week with Max Bennett, an especially smart human who illuminates a path through the 600 million year story of brain power in his book "A Brief History of Intelligence".
Humans are really smart. But how did intelligence evolve? If we're trying to look back at the history of intelligent brains, do we have to look all the way back to our common ancestors with the apes, or all mammals, or all reptiles, or can the origins of intelligence be traced back even further? And now that our species is good and smart, what does the knowledge of our past mean for us as we work to build intelligence artificially? Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford, and in these episodes we sail deeply into our three-pound universe to understand why and how our lives look the way they do. Today's episode is about intelligence and the history of intelligence. How did human intelligence arrive on the scene?
Now, this is an important question, because we seem to be operating at a different level than our neighbors in the animal kingdom. We are the only ones, as far as we can tell, who compose symphonies and launch Mars rover missions and discover DNA and build courthouses and have congresses and construct windmills and write novels and build screws and screwdrivers to hold things together, and so on and so on, none of which any other animal does. And this is how we've taken over the whole planet. But how the heck did this happen? Why are humans such a runaway species? Well, traditionally the explanation has been something like: this is a special gift from your deity, whichever deity your family believed in at whatever moment in history. But centuries of people looking at this carefully, sometimes with a microscope, sometimes with a brain scanner, sometimes at autopsy, careful examination has made something very clear. When you look at the brains of other animals, those brains are very similar to our own. Now, this shouldn't be surprising. It's the same when you look at other animals' hearts or lungs or kidneys. It's the same good idea, and it's conserved throughout evolution, and so it goes with brains, with neurons and cerebellum and thalamus and hippocampus and cortex and blah blah blah. It looks pretty similar everywhere. And this leads to a point which should be fairly obvious when you look across the evolution of the vast kingdom of animals. You don't find that there was no intelligence and suddenly humans popped up like hairless geniuses. That's not what happened. Instead, what you find is that there are versions of intelligence all around us. As one example, I always admire the squirrels hopping in my tree in the garden. They perform these sophisticated acrobatics and do the kind of stuff that human gymnasts would never even attempt. And crows show intelligence that's closer to our own. They can solve really sophisticated puzzles. And dolphins have some sort of societies and language, though again not quite as sophisticated as ours. And in episode thirty-four, I explored what it would be like to have different levels of intelligence, so please check out that episode if you're interested in that. So back to this question. When we ask how intelligence got here, it ends up being a question about an evolutionary journey, like when we ask: how did Homo sapiens start walking on our rear legs? Or how did we become hairless? Or why do we get pimples when other primates don't? Or even deeper things, like how did any of us, we and other land-dwelling animals, get kidneys or lungs? We can ask the same sort of questions about the brain. The brain has a very rich evolutionary history, a long and sometimes branching pathway that has led from early brains swimming around looking for food to brains now that build skyscrapers and launch rocket ships and try to figure themselves out. This is the kind of stuff that none of our neighbors in the animal kingdom do, as far as we can tell. And there's clearly something special about the human brain that allows that to happen. In other words, we find smarts all across the animal kingdom, but there is something very special about human intelligence. There's an evolutionary biologist named Theodosius Dobzhansky, and he once said all species are unique, but humans are the uniquest. So I've just told you two things. On the one hand, we have very similar brains to all our animal cousins, and on the other hand, we have a runaway intelligence. So what has happened here?
One person who has devoted himself to this question is Max Bennett, who wrote a wonderful book called A Brief History of Intelligence. And in this book, Max distills an enormous amount of data about the history of animal species to reveal a clear path that stretches from very ancient ancestors to us. He attributes the story of human intelligence not just to a single breakthrough, but to five breakthroughs. I really loved his book, so I called him to join us today. So, Max, when we're talking about the origins of intelligence, you might think that what we need to do is look all the way back to our common ancestors with the apes, or maybe farther back to mammals, or maybe even as far back as reptiles. But you suggest in your book that we have to look back much farther than that, even. So tell us where you think the sparks of intelligence began.
So what's so interesting in trying to understand how the human brain works is not only how much we've learned, but also how much we've still failed to learn because of how complicated the human brain is. I mean, the human brain has eighty-six billion neurons and one hundred trillion connections, and so one strategy for trying to understand the brain is to look at the series of steps by which it came to be. Even if we only go as far back as the first vertebrates, with whom we share a common ancestor from around five hundred million years ago, our ancestors had brains somewhat akin to a modern fish, and even in a fish brain there are a lot of complicated structures and a lot of neurons. So I think it behooves us to go back all the way to the very first brains, which were probably akin to a modern nematode's. Some species of modern nematodes, like C. elegans, have only three hundred and two neurons, and we can learn a lot about what the very first brain did by understanding what a nematode brain does.
So tell us what a nematode is, for listeners who don't know.
There are many different species of nematodes, but the most well studied is something called C. elegans, and it is a small wormlike creature. You could fit a few on your fingertip. And they have no eyes, they have no ears, they can't render an image of the external world. It has only three hundred and two neurons in its entire nervous system, and yet it can do some really impressive stuff, and that teaches us a lot about the foundations of the very first brains.
Okay, so give us a sense of what C. elegans can do.
One thing that's really interesting about C. elegans is how well it navigates the world in the absence of a complex sensory apparatus. So one might think that in order to find food or avoid predators, one needs to build a map of space, or have eyes that enable one to see into the distance, or have complex ears that allow one to detect things through sound. But C. elegans has none of this. And yet if you put C. elegans in a Petri dish, it finds food rapidly. And if you put them in the wild, they eminently find optimal temperatures, and they eminently find ways to avoid predators. And the way that their brain does this seems to be quite similar to the way that a Roomba works. A Roomba, if folks aren't familiar, is the sort of classic vacuum-cleaning robot, and it also has no eyes or ears, and yet somehow it cleans up everything in your house. What a Roomba does is, when it hits the wall, it sort of backs away and turns randomly, and it keeps doing this randomly enough until it reaches all the corners of your house. What a nematode does is in some ways actually more advanced. It has sensory neurons around its head, and all these sensory neurons do is get excited when a good thing, like a food smell, is increasing in concentration, and those drive forward movements; or another set of neurons gets excited when something bad increases or something good decreases, in other words, a decreasing concentration of a food smell. And just by detecting these changes, a brain can decide: I'm going to keep going forward if good things are increasing, or I'm going to turn randomly if good things are decreasing. This is classically called taxis navigation. In simpler terms, you could call this just steering. And in the absence of any sight, nematodes can find the origin of food smells, because food creates these gradients in water, where the concentration of the smell is higher towards the source. So the very first brain, its core function, was just to categorize things in the world into good and bad, such that it would turn towards good things and away from bad things.
Now bacteria do that too. Yes, they do.
Klinokinesis, absolutely. What's almost mesmerizing about evolution is how this exact same algorithm seems to have been recapitulated in a completely different substrate. So single-celled organisms do this exact same type of taxis navigation, but it's implemented in sort of the protein machinery of a single cell. And a nematode does the exact same algorithm, but implemented not within a single cell, but through a web of neurons.
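To make this taxis navigation concrete, here is a minimal Python sketch of the steering rule described above: run forward while a food smell's concentration is rising, tumble to a random heading when it falls. The gradient function, step size, and starting position are illustrative assumptions, not details from the episode.

```python
import math
import random

def concentration(x, y):
    # Food smell falls off with distance from a source at the origin,
    # creating the gradient the worm climbs.
    return 1.0 / (1.0 + math.hypot(x, y))

def steer_to_food(steps=2000):
    x, y = 10.0, 10.0                          # start far from the food
    heading = random.uniform(0, 2 * math.pi)
    last = concentration(x, y)
    for _ in range(steps):
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
        now = concentration(x, y)
        if now < last:                          # good thing decreasing:
            heading = random.uniform(0, 2 * math.pi)  # turn randomly
        last = now                              # good thing increasing: keep going
    return x, y

print(steer_to_food())  # tends to end near (0, 0), the source of the smell
```

Note there is no map and no vision here, just a comparison of the current smell to the previous one, which is all the algorithm needs.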
And so what you've proposed in your book, which is an amazing book, is five breakthroughs that happened over evolutionary timescales and that led to intelligence the way we have it and care about it. So tell us about breakthrough number one.
So breakthrough number one was this idea of steering. The first animals with brains are classically called bilaterians, because they have bilateral symmetry, meaning they're symmetric across a central plane. It is interesting, people don't realize this until they think about it, but all animals that we think of as animals are symmetric across a central line through their body.
So you mean they have a left side and a right side, and they are a mirror image.
Yeah. But not all animals have that. So the very, very first animals, we think, we don't have perfect evidence for this, but we think they were probably more akin to a coral polyp or a jellyfish, which has radial symmetry, so they're symmetric around a central axis. And the transition from radial symmetry to bilateral symmetry seems to be in part driven by the need to navigate. So although jellyfish are an interesting exception, because some of them independently seem to have evolved relatively complex navigational systems, most evolutionary neuroscientists think the very first animals were more sessile, like a coral polyp, where they sit in place. They have tentacles and they just try to detect food that passes by the tentacles. But the very first animals with brains, our bilaterian ancestors, used this brain to categorize the world into good and bad, to implement this taxis navigation to find food and avoid predators.
So the existence of a brain correlates with having this left and right side. Is that correct?
All animals with brains have bilateral symmetry, or descend from the bilaterally symmetric ancestor in which the first brains evolved. And we also see a suite of other interesting things emerge with this first breakthrough of steering. One is classically called affect, which is sort of the first template of emotional states. A nematode actually has dopamine neurons, and what these dopamine neurons do is detect the presence of bacteria outside of the nematode, and that changes its behavioral repertoire to search in the local area. And we see why this exists in the Roomba. A Roomba has something called Dirt Detect, and what Dirt Detect does is, if it bumps into dirt, it starts turning randomly in that area. And the reason it does that is because the world is clumpy. So if you detect dirt, it's likely that there's other dirt nearby, even though you're maybe not detecting dirt in the moment. What a nematode does is the exact same thing. If it runs into food, even though it might not detect food a second later, it's probably the case that there's other food nearby, and so this rush of dopamine drives this local search in these very early brains. Similarly, there are serotonin neurons, but they're in the throat, and so what serotonin signals is the consumption of food, and serotonin in these very early nematodes drives sort of satiation. And of course those chemicals do much more complicated things in human brains, but that basic template, of dopamine being the seeking, exploitation, nearby-reward signal and serotonin being the sort of satiation, consumption, satisfaction signal, we do see hints of that basic template even in human brains. So we see categorizing the world into good and bad, we see bilateral symmetry, we see these very basic behavioral states. And then the last thing we also see emerge in this breakthrough of steering is the foundation of associative learning, and this is the first form of real learning that we see emerge in animal evolution. A nematode can associate a stimulus with a good or bad thing, as in the sketch below. So if you put a nematode in a Petri dish and put salt on one side, nematodes typically steer towards salt, because salt tends to correlate with food. But if you leave them in a Petri dish and starve them for a long period of time in the presence of salt, they change their opinion, and they will start steering away from salt in the future. And it makes sense why associative learning would emerge with the very first steering brain, because you want to tweak the goodness and badness of things, because deciding what to turn towards and away from is a life-or-death decision for a nematode. So with this first breakthrough of steering, we see a suite of new abilities, associative learning, bilateral symmetry, categorizing things into good and bad, emerge with the very first brain. So that was breakthrough number one.
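Here is a toy Python sketch of that associative learning: the learned "goodness" of salt is nudged toward whatever outcome follows it, a Rescorla-Wagner-style update. The starting value, learning rate, and outcome coding are illustrative assumptions, not from the book.

```python
def update_valence(valence, outcome, rate=0.2):
    """Nudge a stimulus's learned goodness toward the outcome that followed it."""
    return valence + rate * (outcome - valence)

salt = 0.8                        # salt starts out 'good': it predicts food
for _ in range(20):               # repeatedly pair salt with starvation
    salt = update_valence(salt, outcome=-1.0)
print(salt)                       # now negative: the worm steers away from salt
```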
Okay, terrific. And what was breakthrough number two?
So if we fast forward about fifty million years or so, we enter what's famously known as the Cambrian Period, and the Cambrian Explosion is this huge diversification of life, all of it descended from this first bilaterian animal. So if you were to swim around the Cambrian ocean, you would see many descendants of this bilaterian wormlike creature, which had proliferated into what would look like the crustaceans and arthropods of today. There were huge insect-like creatures in the ocean, and then there were also our ancestors, which were much smaller, modest creatures, but they were most akin to a fish of today, and they were called the first vertebrates. And the reason they're called vertebrates is because in fossils, the most salient feature is the vertebral column, so they had a spine. And in these first vertebrates, we can get insight into what their brains did by looking into the brains of fish today, because there are many species of fish that evolutionary neuroscientists think have brains quite similar to those of the very first vertebrates. And what I found most surprising when I first started looking into this is how similar fish brains are to human brains. I would have expected a fish brain to have almost none of the features of a human brain, but counter to that intuition, fish brains have, with the exception of a few things, all of the major brain structures that a human brain does. And also counter to what my expectations would have been, there's sort of a stereotype that fish are really dumb, but the more you look into the comparative psychology work done on fish, fish are way smarter than we think. For example, fish can learn how to navigate out of a maze and remember exactly how to do it a year later. You can go to YouTube and find really funny, cute videos of people training fish to jump through hoops for treats, and you can train them to push levers for food and all of these sorts of fun things. And when we look at the brain structures that emerged, there's a lot of really good evidence that the key thing that happened was that these early vertebrate brains enabled the ability to learn through reinforcement. In AI this is called reinforcement learning, and in behavioral psychology it's typically called trial-and-error learning. So they could learn to perform arbitrary sequences of actions on the basis of whether or not it led to a reward at the end. When we go into the fish brain, there are two key structures that are useful to know about, because they will keep coming up through our story of the evolution of the human brain. One is something called the basal ganglia, and the basal ganglia of a fish has almost exactly the same structure and network as the basal ganglia of a human, and computational neuroscientists have gone to painstaking efforts to show that the basal ganglia is implementing a reinforcement learning algorithm almost identical to the reinforcement learning algorithms we use in AI systems today. And the way that it works, in principle, is it trains itself based on the excitement of dopamine: it learns to repeat behaviors that drive dopamine release and inhibit behaviors that drive dopamine decreases. And what's so fascinating is, if you look at how this system came to be, you can see how reinforcement learning is only possible if brains first had the foundation of steering.
Because the foundation of steering gives us the categorization of things in the world into good and bad, and that is repurposed to create this reward signal that the basal ganglia then can use to create arbitrary sequences of behavior on the basis of what leads to reward in the end. And this is how a fish can learn really complex sequences of actions on the basis of what leads to reward. The second key structure in a fish brain is something called the cortex, and we do have a version of a cortex; there's a portion of our cortex that we'll talk about that's way more advanced. But a fish cortex can still do something incredible that the first nematodes could not, which is it recognizes things in the world on the basis of patterns. The first bilaterian brain could not detect things in the world on the basis of a pattern of activation. When you look at a horse, you recognize a horse not because of any single neuron in your brain, but because your brain is somehow decoding the pattern of activation across the neurons in your retina. The first brains could not do anything like this. They only detected things when a single neuron got excited in the presence of some stimulus. But fish can even recognize human faces. There have been some amazing studies that show a fish can recognize a human face and learn which face leads to a reward and which face does not. Even when that face is rotated in space, they still recognize it. So the cortex somehow, and this is still an outstanding mystery in the field of neuroscience, somehow the cortex recognizes patterns, and fish do this eminently well. In some ways the cortex of a fish recognizes patterns better than even our best vision systems in AI. They've done studies that show that a fish can recognize objects in one shot, even when they've been rotated in space, and AI systems typically don't do that; you need a lot of data to get to that. So in the first vertebrate brain, we see reinforcement learning emerge, in a brain that can recognize patterns in the world and can learn to take actions in the presence of those patterns based on rewards. Reinforcement learning is breakthrough number two.
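Here is a minimal sketch of the kind of model-free reinforcement learning being compared to the basal ganglia: tabular Q-learning, with the temporal-difference prediction error playing the role assigned to dopamine. The toy lever-pressing task and all names are illustrative assumptions, not from the book.

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # learned value of each (state, action) pair

def td_step(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Q-learning update: the prediction error plays the 'dopamine' role."""
    best_next = max(Q[(next_state, a)] for a in actions)
    dopamine = reward + gamma * best_next - Q[(state, action)]
    Q[(state, action)] += alpha * dopamine  # reinforce what raised 'dopamine'
    return dopamine

actions = ["press_lever", "swim_away"]
for _ in range(200):
    a = random.choice(actions)               # explore by trial and error
    r = 1.0 if a == "press_lever" else 0.0   # the lever yields food
    td_step("tank", a, r, "tank", actions)

print(max(actions, key=lambda a: Q[("tank", a)]))  # -> press_lever
```

Nothing here plans ahead; the agent just repeats whatever raised its reward signal, which is the "model-free" part that becomes important in breakthrough three.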
Excellent, Okay, how about number three?
Then we're going to fast forward through a long period of evolutionary time, all the way until about one hundred and fifty million years ago, somewhere between one hundred and two hundred million years ago. This is the era of dinosaurs. Our ancestors were very, very humble, tiny squirrel-like creatures that lived underground and only came out at night to hunt for insects. But these were the first mammals. We know a lot about mammal brains, way more than we actually know about fish brains, because the mainstay of neuroscience research typically happens in rats and mice. When we go into these brains, interestingly, the fundamental difference between a mammal brain and a fish brain is the presence of one key new structure: a part of the cortex elaborates into what's famously called the neocortex, neo for new. And under a microscope there are some really interesting things. We have remnants of the old cortex of fish: in humans and mammals they're called the olfactory cortex, the hippocampus, and the cortical amygdala. These are all ancestral remnants of the very first cortex. But the neocortex is entirely new. This is something that only occurred within mammals, and it looks way more complicated under a microscope. And so there's this grand question: what did this neocortex do? Classically, when we study the neocortex, we look at a lot of humans, and when you look at a human brain, the whole thing seems to be neocortex. All of those folds, that's all neocortex bunched together into this huge surface area. And the neocortex seems to do everything, which is this funny, perplexing thing in neuroscience, because there's one region that seems to do vision: if it gets damaged, people can't see. There's another area that seems to do audition: if it gets damaged, people struggle to hear things. There's a region that seems to do attention: if it gets damaged, you can't perceive things on one side of your visual field. There's an area for movements: if it gets damaged, you get paralyzed, and so on and so forth. So it's this grand sort of mystery what the neocortex does, but most of it seems to have been based on this idea of perception. A lot of the neocortex seems to enable us to perceive things in the world. But what's odd is, if we think about this from an evolutionary perspective, there are no clear, or at least no very salient, perceptual improvements in a mammal relative to a fish. A fish can recognize faces as well as a rat can; it recognizes them when rotated in space. So it's not so clear from an evolutionary perspective that the neocortex evolved for better perception. If we really examine the fundamental differences in the abilities of simple mammals versus fish, there are, however, four things that are seen, and I think these are great clues as to what the first neocortex did. One thing that mammals can do very well is imagine the future. There are some really wonderful studies done by David Redish that show you can put a mouse in a maze and you can watch the mouse imagining its possible futures. Another thing is that mammals, even rats, are eminently capable of regret. If you put them in a situation where they have to make irreversible choices, they will often regret their decision, and you can watch their brains imagining themselves having taken the other choice. Mammals also have something akin to episodic memory.
You can put rats in experiments where they have to imagine some recent past event in order to solve a puzzle in front of them, and you can watch them do that. And then the fourth is that they have really great fine motor skills. In the reptile literature, there's some good evidence that most lizards, with the exception of birds, which are non-mammalian vertebrates that have amazing fine motor skills, don't really anticipate their movements to get over obstacles. They're very sloppy in their movements. And yet a squirrel, watch a squirrel run across tree branches, has fine motor skills that blow away any modern robotic system. So these four things actually can be seen as different applications of what I would call simulating. In AI, this is typically called planning. Mammal brains are good at simulating possible states of the world and then making choices on the basis of that simulation. They can simulate the future, that's imagination. They can simulate past events, that's episodic memory. They can simulate and plan their hand motions, which effectively enables their fine motor skills. And this mental simulation we even see in humans. I mean, we are eminently capable of doing this. Close your eyes: you can imagine things in your mind's eye, and this lights up your neocortex the same way as if you perceived those same objects. So simulation was this incredible skill given to these early mammals, because it enabled them to plan their movements ahead of time and sort of outsmart the dinosaurs. In AI today, this is classically called model-based reinforcement learning. In AI there's this big division between model-free, which means learning to take actions without any planning at all, you just see the current state and then you make a choice. In our self-driving cars, the AI algorithm that keeps you in the lane is a model-free system: it just sees a picture of the road and decides how to turn the steering wheel. Model-based systems are ones that imagine possible futures before making a choice. So AlphaGo, the one that famously beat the best Go player in the world, was a model-based reinforcement learning system. Within a matter of seconds, it simulated thousands of possible games before making a choice. And so there's this really nice synergy with AI, where in early vertebrates, with breakthrough two, we see model-free reinforcement learning, there's no evidence of fish being able to imagine the future, but with early mammals we see model-based reinforcement learning, which is them being able to imagine futures before acting. And what is also really interesting is how you can't have simulation without first having trial-and-error learning, because the way that simulation cascades into action is that you're training yourself in your mind's eye. When a rat closes its eyes and imagines itself taking multiple paths, a little dopamine gets released when it imagines taking the path that actually leads to food. And so the way that the simulation leads to action is because you already have this trial-and-error system in place that you're training vicariously with your mind. They've shown this with athletes too; this is why mental rehearsal dramatically improves performance. Surgeons also: they've done studies that show mental rehearsal improves performance. Okay, so that's breakthrough number three.
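A minimal Python sketch of that model-free versus model-based distinction: here the agent has an internal model of a tiny one-dimensional world and, before acting, simulates a few steps ahead in its "mind's eye" to score each action. The world, the depth limit, and the reward are illustrative assumptions.

```python
FOOD = 3  # location of food in a toy one-dimensional world

def model(pos, action):
    """The agent's internal world model: predicts where an action would lead."""
    return pos + (1 if action == "right" else -1)

def plan(pos, depth=4):
    """Model-based choice: simulate futures before acting; return (value, action)."""
    if depth == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for action in ("left", "right"):
        imagined = model(pos, action)  # imagined, never executed
        value = (1.0 if imagined == FOOD else 0.0) + plan(imagined, depth - 1)[0]
        if value > best_value:
            best_value, best_action = value, action
    return best_value, best_action

print(plan(0))  # chooses 'right', the imagined path that reaches the food
```

Contrast this with the Q-learning sketch earlier: there the agent had to actually try actions to learn their value; here it evaluates them by rolling its model forward, so hypotheses can fail in imagination instead of in the world.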
Yeah, this is something I talk about on this podcast a lot: the way that we unhook from the here and now and we go to the there and then, whether that's in the future or the past. As the philosopher Karl Popper said, this is what allows our hypotheses to die in our stead. And we're going to come back to internal models a little bit. Tell us about the next breakthrough.
Okay. So moving forward from early mammals, a huge asteroid hits Earth, which tragically kills off all the dinosaurs and opens up the world for what is sometimes called the Age of Mammals, because our ancestors took over from that point forward. It is an interesting quirk that if that asteroid had never hit Earth, there would almost certainly be no humans, and it would likely be that we would still be tiny little squirrels hiding in the dirt. So that is just an interesting accident of the universe. But as mammals started proliferating throughout Earth, our ancestors were the ones that stayed in the trees, and they became the first primates. And primates are known for having really, really big brains. The modern primates include monkeys, non-human apes, and of course humans, who are apes ourselves. And these primates have really big brains for a perplexing reason. It was an open question in primatology for a long time: why do primates have such big brains? They don't seem to have such a complex lifestyle that it requires this massive neocortex that evolved. But several decades ago some theories emerged that have been borne out, and it seems to be something about the social lives of primates that drives their really big brains. Robin Dunbar is one of the early people that came up with this idea, and what he did is he looked at the size of the social groups of primates and compared it to the relative size of their neocortex relative to the rest of the brain. And you see this almost beautiful curve, where the bigger the social group, the bigger the relative size of the neocortex. This relationship does not hold for other mammals, so this is not some universal principle, but something about primate societies is such that they require really big neocortices. And the more we examine primate societies, we see some interesting features. Primate societies are very political. There are very explicit hierarchies in many mammal groupings, but they're based on who's the toughest and the strongest: in a troop of gazelles, whoever is the top-ranking gazelle is typically the one that's the strongest. But if you look at primate societies, it's typically not the strongest. It's the most socially savvy one. It's the one that cozies up to the most allies, it's the one that builds the most friendships, that builds sort of this political regime that enables them to be the top-ranking chimpanzee or the top-ranking bonobo. There have also been some amazing studies of the ways in which these apes and monkeys reason about others' mind states when making choices about how to befriend them or how to deceive them. You can see non-human apes do things like hide transgressions from others to try and prevent themselves from getting in trouble. There's this famous study that I love by Emil Menzel, I think it was in the seventies, where he put two chimpanzees in a sort of one-acre forest, and he showed the location of treats to one of the chimpanzees, named Belle, and she initially would share the treats with another chimpanzee named Rock, but then Rock started stealing the treats from her. So what she started doing is, when she knew the location of the treats, she would wait for Rock to look away, and then she would run over and grab them. So then Rock, in response to this, decided to pretend to look away, so that when she started running, he would turn around and run.
Then, in response to this, what she would do is pretend to run in the wrong direction, lead him to the wrong place, and then run back. And this cycle of deception and counter-deception is very, very unusual; with the possible exception of a few very, very smart non-primate mammals like dolphins, it seems to be unique to primates. And so this gives us a clue as to what might be new in the brains of primates. When we go into the primate brain, we see this suite of new neocortical regions. The most sizable one is something in the front of the brain called the granular prefrontal cortex, and when we do neuroscience to try and understand what this structure does, it lights up a ton when we reason about our own mind, so how we would feel in certain states, or when we reason about other people's minds. So in tests of what's called theory of mind, when I need to guess what someone else is thinking about, or what their intention is, or what knowledge they might have, this part of the brain lights up a ton. And they've done some cool studies on macaque monkeys that show that in order for a monkey to make a correct assessment of what someone else knows or doesn't know, they need this part of their brain active. If you temporarily inhibit it, they lose their ability to reason about other minds. So you get theory of mind. And so the idea is that breakthrough four is mentalizing, which is also called metacognition: thinking about thinking, reasoning about your own mind and other people's minds. But there are two unique things about primates that are not classically thought about as being related to mentalizing that I would argue are only possible in primates because of mentalizing. One is imitation learning. We know that primates are exceptionally good imitation learners. If you take a chimpanzee out of their group and teach them how to open a puzzle box or do some clever motor skill, and then you release them back into their troop, within thirty to sixty days the whole troop will know the same exact skill. So chimpanzees are very good at learning skills through observation. This is part of why apes are such good tool users: once one member learns how to use a tool, they all adopt the skill, and then they cascade it through generations. In AI, we have tried to teach systems through imitation, and we've discovered something really interesting. We've learned that direct imitation of other people's actions does not work. We've tried this in self-driving cars, where we try to teach an AI system to drive a car by watching a human drive a car. And the reason it fails is because when you watch an expert, you never see the expert recover from mistakes. So the second this AI system started veering off the road, nothing in its training set taught it how to recover from veering off the road, because it only watched an expert who never veered off the road. The way we get this to work in AI systems, which was most famously invented by Andrew Ng, is called inverse reinforcement learning. What you do is you first try to infer what the person you're imitating is trying to do. You infer their reward function. So if you watch someone drive, you say, oh, they're trying to stay in the center of the road, and then I train myself in my mind's eye to do the same thing that they're trying to do, and that works.
So Andrew Ng in the early two thousands trained a helicopter to do all these crazy aerobatic tricks through watching trained experts do those tricks, but not by directly copying them: by first inferring what they're trying to do, which eliminates all the extraneous behaviors. This is part of why imitation learning requires mentalizing, because in order for me to really understand what you're trying to do with certain tool-usage behaviors, I need to reason about your mind and infer what your intent is. And that's part of why I would argue that primates are so good at imitation learning: they repurposed mentalizing for that. The last one is something called anticipating future needs. When we go grocery shopping for the week, we're actually doing something really remarkable. We are taking an action today to satiate a need that we do not currently have. I might not be hungry, and yet I'm going to take an hour out of my day to fill up my refrigerator. And it's not so clear how many animals are capable of doing that. So, for example, in mice, you see hoarding behavior before the winter, but we now know that that is genetically hard-coded. They're not mentally imagining the winter and realizing they'll be hungry. A rat or a mouse that has never experienced hunger in the winter, that has never even experienced a winter at all, will start hoarding if you turn down the temperature. But primates seem to be capable of doing this. They've done some fun studies on squirrel monkeys that show that they will actually choose having fewer treats today to reduce their future thirst, even when they're not thirsty today, whereas a rat is incapable of doing that. And so Thomas Suddendorf came up with this theory that maybe anticipating our own future needs uses the same machinery in our brains as reasoning about other minds, because if you think about it, it's really the same thing. For me to ask what David would feel like if he didn't drink for a week is really the same question as what I would feel like if I didn't drink for a week. And so this might also explain why apes and other primates are so good at anticipating their own future needs and making these really long-term plans. So breakthrough four is mentalizing. It is building a sort of model of your own inner mind, and it enables you to reason about other minds, it enables you to learn through imitation, and it allows you to anticipate your own future needs.
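A minimal caricature of the inverse reinforcement learning idea in Python: rather than copying the expert's steering frame by frame, first infer the reward the expert seems to be optimizing (staying near the lane center), then practice against that inferred reward, including from states the expert never visited. The lane-offset encoding and numbers are illustrative assumptions, not Ng's actual method.

```python
def infer_reward_weight(expert_offsets):
    """Infer what the expert wants: their lane offsets hover near zero,
    so penalize distance from the lane center."""
    avg = sum(abs(o) for o in expert_offsets) / len(expert_offsets)
    return -1.0 / (avg + 1e-6)  # tighter driving implies a stronger penalty

def inferred_reward(offset, weight):
    return weight * abs(offset)  # highest reward at the lane center

expert = [0.02, -0.01, 0.00, 0.03]  # observed expert lane offsets
w = infer_reward_weight(expert)

# The learner can now train itself against the inferred goal, even from
# states the expert never demonstrated, such as being far off the road:
print(inferred_reward(0.0, w), inferred_reward(2.0, w))
```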
Great. Tell us about the final breakthrough that led to the kind of intelligence that we enjoy.
Throughout the ages, so many thinkers, philosophers, and scientists have tried to draw a hard line between humans and other animals and articulate what is the thing that makes humans unique. And after writing this book, one of the clearest things to me is how little difference there really is between us and other animals. People used to think only humans could imagine things; I think the evidence is very strong that other mammals and probably birds regularly have imagination. Some people thought only humans think about thinking; I think there's pretty good evidence that other primates do the same. And so there's been this long laundry list of stuff. I think the main feature of human intelligence for which there is good evidence that it is uniquely human, or at least uniquely evolved in the human lineage and was not present in other primates, is language. And language is not the same thing as communication. Even single-celled organisms engage in communication, but language is unique on two counts. Human language has what's called declarative labels. It allows us to assign an arbitrary symbol to a thing or an action in the world. When you tell a dog to sit, what it's learning is: when I hear the symbol sit, if I take this action, sit, I get a reward. That's something linguists call imperative labels. A declarative label is: if I say sit, we're all imagining the action of sitting. And it's not clear that other animals are capable of these types of declarative labels. There have been painstaking attempts to train non-human primates, specifically apes, to use language, typically sign language, because they don't actually have the sort of vocal apparatus for verbal language, and it's still controversial the extent to which what they were able to do could be called language. But even if you would classify it as a primitive form of language, it's very clear that non-human apes are not nearly as good at learning languages as human children. The second thing that's unique about human language is grammar. We can switch the ordering of these symbols to change their meaning in seemingly arbitrary ways. So Max jumped over Charlie means something different than Charlie jumped over Max, and by reordering the symbols, the meaning totally shifts. And so one might think, okay, if language is this unique thing, there'd be some unique structures in the human brain that enabled language. And to my surprise, when you look into the neuroscience, that's not at all the case. There are two regions of the neocortex in humans that are very implicated in language, famously called Wernicke's area and Broca's area. But interestingly, those same exact neocortical regions exist in other primates; they're just not used in communication. So for some reason, it wasn't that some new structure emerged in the human brain. It's that we repurposed an existing structure to use in language. And what seems to have happened is that a new learning curriculum evolved in humans that enabled us to learn language. If we compare chimpanzee children to human children, there are two very unique traits of human children. One is that they engage in something called joint attention at a very, very young, preverbal age, which means children get a unique burst of excitement when they can confirm, by looking at your eyes, that they and you are attending to the same object. They've done lots of painstaking studies to show that the child is not excited because they think they're going to get the object.
They're not excited because the parent is excited. They are specifically happy and satisfied when they confirm that they are looking at the same object that the parent is looking at. And what does this enable us to do? This enables us to render a simulation of the same object in our heads, so we can assign a symbol to it. If we all look at a cat, and I confirm you're looking at the cat, and then the parent says the symbol cat, whether it's verbal or a sign or a written word, it creates this sort of basic foundation for labels to be constructed. And the other thing that's unique in human children is protoconversation. They've shown that very young human infants will match the duration of babbling, before words, with their parents. So if the parent babbles for four seconds, the child tends to babble for four seconds and then pause and wait for the parent to do the same. These two things are not naturally occurring in non-human primates, so it's very hard to get a chimpanzee to attend to the same object and for them to confirm that we're all attending to the same thing. Okay, so we get language. But why does language make humans so special? This has been well discussed in linguistics, and in Yuval Noah Harari's book Sapiens, I think he speaks to a lot of this. What makes language so incredible is that it enables us to share our inner simulations, and so it transforms the human brain from just sort of the epicenter of intelligence to being the medium through which ideas can flow through time. Because I can share what's going on in my mind, culture can form, or a more advanced form of culture can form, because I can learn certain skills and then describe the skill to you, or the five of us can go on a hunt together, and I can imagine a plan and then share the plan in my mind with you through symbols, and then we all have the same plan in our minds, and then we can coordinate and do the same thing together. Without the ability to share inner simulations, you don't get this type of flexibility. So that's one of the fundamental things that enables language to make humans so powerful, because as generations go on, the ideas sort of ratchet up and get more and more complex over time, versus in chimpanzee societies, because they can't reliably share ideas, they can only learn from each other through observation, there's a limit to how complex their ideas can get over generations. And so that's one of the leading theories, not my theory, lots of linguists and primatologists talk about this, as to why humans sort of took over the world: ideas got to become more complex over time until they reached this sort of critical point. And so breakthrough five was speaking, or language. And the last point I'll make on this is how one can see that even speaking and language are dependent on the prior breakthroughs. As we now know in AI systems, one of the leading problems with an AI system bounded by just language is how hard it is to actually describe our desires in the form of language. Nick Bostrom has this really great allegory where he supposes there is an AI that manages a paper clip factory, a superintelligent AI, and the instruction we humans give that AI is: maximize paper clip production. That's what we give it in natural language, maximize paper clip production. In his allegory, he imagines that if the superintelligent AI were actually to just optimize for the explicit request it was given, it would start to take over Earth and convert everything it could observe into paper clips.
And when it was done with Earth, it would expand to Mars, and it would start to try and take over the universe to convert all of it into paper clips. And as silly as that example is, as almost nonsensical as it seems, it reveals why mentalizing is required for language to work. Because when you tell a human to maximize production of paper clips, what the human is doing is inferring what you actually mean by what you say. I'm simulating your mind and I'm trying to infer your preferences, and I'm doing this really complex inference task to take the symbols that you gave me and convert them into a really complex reward function that I'm going to try and optimize for. But if all a system does is take our words for what we say them to be, and it doesn't have a model of our minds, then you can get these really wacky outcomes where it would try and convert Earth into paper clips. And so the reason why language requires mentalizing is that when we're going back and forth trading symbols all the time, we're trying to guess what the other person means by what they say. We're trying to tell them information to update their knowledge, given what we know they know and they don't know. It's so natural for us, we don't realize it. But this is one of the key things that human brains are so good at that AI systems, at least in the same way, don't solve.
You know, one of the things that has always amazed me is the existence of literature. The thing I hadn't realized, until I thought about it, was how low-bandwidth literature is. The author tells you a few sentences about this and that; the description and the emotions and all the rest depend on the reader. The reader is bringing everything to the table. The author can't put what he's imagining directly into the mind of the reader, because every reader is going to imagine something differently, predicated totally on this issue that, you know, it's all about mentalizing, and language is just a very few bits of information that get thrown over the transom to inspire something in someone else's mind.
One hundred percent. One thing to add to that that I think is really cool is it's almost a neuroscience or AI perspective on why many artists talk about how art is an active process in the consumer of art. When we read a book, we are participating in that artistic creation, because we are filling in the gaps. And that's why people can interpret art so differently, and in some ways that's why art is so beautiful, because it's this, like, message, but it's not fixed. We as consumers get to sort of explore it in our own way. I think it's also in some ways why reading feels harder than watching a movie: you don't realize it, but your mind is doing a lot of work when you read, because it's turning what you read into a mental movie, and that translation takes effort, versus watching a movie, which requires less sort of cognitive overhead.
Now, returning to the primates and the humans. One of the things that people have pointed out is that humans are the only species that teach. So a young primate will watch its mother, you know, crushing rocks and doing something, and the young primate will imitate that. But the mother never gives feedback. The mother never says, oh, you're doing it wrong, do it this way, and grabs its hands and does it the right way. But humans do that all the time. We actually teach, and that's something unique to our species. What is the basis of that?
I would argue, in my framework, that the basic machinery for teaching exists in mentalizing, but teaching might be a more complex version of mentalizing, because it's two steps. Not only do I need to render what's in your mind, but then I need to be able to think about what actions I can take to update something in your mind. You know, that's a complex act. So even if the machinery exists in mentalizing, when you scale up the brain, I mean, the human brain is about, you know, three times bigger than a chimpanzee brain, in cortical area at least, you start getting machinery that was there in a very lightweight, primitive form becoming effective. So in my frame, I would argue that some very primitive version of teaching exists in mentalizing, but it doesn't really get rendered effective until it scales up in human brains.
Okay. So that puts us at today, and what we have today is this incredible explosion of AI, which is something that, you know, for my whole career in neuroscience, neuroscientists generally looked at AI and said, well, it's not very good, it's not able to do X, Y, Z. But we've all been surprised in the last few years about what it is able to do. The interesting thing is still the stuff that it's not able to do, and why. So let's talk about AI. Tell me your take on where it is currently and what all of your study about the history of intelligence tells us.
So one thing that's interesting is that AI today, in this moment, seems to be taking almost the exact opposite path from our brains. It's starting from language. At least, the sort of explosion in general AI has at its foundation been language models, these things called transformers that are trained on huge amounts of language text. And what has been surprising is the degree to which language seems to be so informationally rich that, going from the top of this pyramid of the five breakthroughs, you actually can start going down. So if you ask a large language model questions that require theory of mind, which, just to remind the listeners, is being able to reason about other people's knowledge or intent, language models do very well at correctly predicting what someone might do given that they're missing certain information. One might have thought that in the absence of having a mind themselves, they would be quite bad at that. But what seems to actually be the case is that by reading effectively all of the text that exists in the world, they have started to infer things about other people's minds. Similarly, I would have thought that common sense questions, so questions about our three-dimensional world, would fail. For example: if you threw a baseball one hundred feet above my head and I jumped up, could I catch it? It's such a simple question for a child to answer. But what you're doing in your mind is rendering a 3D simulation of the world, looking at the ball one hundred feet above my head, seeing me jump, and realizing there's no way I could catch it. I would have thought these types of common sense questions would fail in language models, and they did, up until you get to the most recent update, GPT-4. It answers these common sense questions really well. However, all of that said, the way it solves these problems is completely different from the way that human brains solve these problems, and those differences do matter. There are two key things that I think AI is missing, that mammal brains, and even some fish brains, can do, and that I think AI can learn from neuroscience. The first is something called continual learning. We don't realize it, but all AI systems today are largely trained all at once, so ChatGPT doesn't update its information as it reads new articles. The way they update the system is, by and large, they retake the entire data set and rebuild the model from scratch. And the reason they do that is because AI systems today suffer from what's called the problem of catastrophic forgetting. All that means is, when you train an AI system with new data, it tends to overwrite its memories of the old data. And somehow mammal brains, and even fish brains, don't forget things when they learn new information, at least not to the extent that AI systems do. For example, if you learn to ride a bicycle, you don't forget how to drive, or vice versa. And yet somehow AI systems still suffer from this. Commercial AI systems ignore this problem because they say, we're just going to throw more money at the problem and keep retraining systems. That's also the approach in robotics, by the way. But eventually we're going to want systems that can learn as they go, that can get to know us, that can change their approach based on how they interact with us, that can be around our home, where we can show them new skills and they figure out the new skills as they go. And that's something that's unique to mammals that we have not yet figured out in AI.
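A toy Python demonstration of the catastrophic forgetting just described: a single weight fit by gradient descent on task A is overwritten when the same weight is then trained on task B, because nothing protects the old memory. The tasks and learning rate are illustrative assumptions.

```python
def train(w, data, epochs=200, lr=0.05):
    """Fit y = w * x by gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A is solved by w = 2
task_b = [(1.0, -3.0), (2.0, -6.0)]  # task B is solved by w = -3

w = train(0.0, task_a)
print("after task A:", round(w, 2))  # ~2.0: task A learned
w = train(w, task_b)
print("after task B:", round(w, 2))  # ~-3.0: task B learned, task A erased
# Rehearsal (interleaving old data with the new) is one common mitigation
# in larger networks, which is essentially the retrain-from-scratch
# strategy described above.
```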
So that's one of the big problems. The second problem is that mammals have this internal model of the world, this sort of rendered world in their head that adheres to the laws of physics. That's how I can imagine myself doing things, and the consequences of my actions in my mind are relatively accurate to what would happen in the real world. And this enables me to build hypotheses and intervene in the world to test those hypotheses. And the reason this is so important is that with these AI systems today, the truthfulness of the information is only as good as the data you give it. So if you give articles about the Earth being flat to the training set of ChatGPT, it will start thinking the Earth is flat. But the AI systems we want to create one day are going to be ones that interact with the world, build their own hypotheses about the world, and reject information that's inconsistent with their model of the world. That's going to be the way we get systems that can contribute to science. That's the way we're going to get systems that get more truthful over time. And that's the way we're going to get systems that don't require, you know, humans to go in and manually curate these data sets. Although ChatGPT has learned on its own, manual effort went into creating the data set on which it learned and making sure that data set is rich. So continual learning and world models that allow you to build hypotheses are, in my view, the two big gaps between what mammal brains have and what AI systems have today.
I generally agree. You know, last year I wrote a paper about how we would know if AI is really intelligent, as opposed to a statistical parrot. And my suggestion is that scientific discovery is really the gold standard for that, because this is what humans do, and what we do with scientific discovery is not just piece facts together; ChatGPT can do that. It's the simulation of possible futures. It's: what if I were riding atop a photon, what would the world look like? And you evaluate that, you simulate it out, and you come up with the special theory of relativity. That's the kind of thing that humans do all the time, not just Einstein. We do that when we mentalize and simulate anything and evaluate it and say, okay, that's not going to work, but this other strategy over here, maybe that is going to yield something when I compare the results to other things I know in the world. So that's what our systems don't do currently. This is what's really special about human brains: being able to mentalize and having a model of the world, so that we can evaluate the outcome and compare it to what we know about the world. Now, you mentioned that as AI is getting better, let's say GPT-4 and whatever will come out, you know, a few months from now, you're saying that it's better and better at answering these sorts of mentalizing questions. But do you suppose it is because of a lot of feedback from humans, and a lot of these examples appearing in the corpus of data that it reads, that it's able to do this, as opposed to actually mentalizing and having understanding?
Certainly, I think one of the key challenges with evaluating these AI systems is that we don't know what the training data is, so it can be hard to know if the solution to a problem, or a word problem you give it, is because it's effectively looking up what was in the training data or actually generalizing. I do think, though, there's been lots of great work, like a study out of Microsoft recently where they reformatted some of these mentalizing questions in a way that makes it very hard to believe they would be in the training data, and it still solves the problems well. To me, this is a question of how it solved the problems, though, because the way that ChatGPT solves these problems is it makes an inference over a whole series, let's call it millions, of word problems about theory-of-mind questions, and so it probably builds some form of model of how agents or humans act in the presence of information or the lack of information. Certainly, if it reads enough symbols, it might have some of that in there, but that doesn't mean it solves the problem in the same way humans do. You know, when we mentalize, we compare the way our minds work and how we feel about things to how we would infer someone else feels; we put ourselves in someone else's shoes. And so although the performance on word problems might look the same, there might be very big differences in how we solve these problems, which might have very real consequences when we send these things out into the real world. For example, if we made a robot powered by ChatGPT to help one of our grandparents around the home, and we want it to empathize and understand how they feel, I would not be confident, based on its performance on theory-of-mind word problems, that ChatGPT is going to correctly infer how my grandparent feels in a given situation, whereas I would feel confident that a human would, because I know how a human brain is solving these tasks. So I think algorithmic differences matter more and more as we offload these tasks to AI systems, because otherwise performance on one task might not generalize well to these other tasks.
So what's interesting is I've spent a lot of time on GPT-4 seeing if it has theory of mind, you know, running tests on this. And just for the audience, a theory of mind test would be something like: Sally walks into the room and puts the baseball on the bed. Then she leaves, and someone else comes into the room, sees the baseball on the bed, picks it up, puts it in the closet, and leaves. When Sally walks back in the room, where does she look for the ball? And the answer, of course, is that she looks on the bed. But this requires us to be inside her head. If you ask a question like that to any of the big language models, it will get it right. But why? In part, it's because that particular test, the Sally-Anne test, is all over the Internet in a gajillion places, and there are many, many questions that have been asked about theory of mind that already exist on the Internet. The part that I have found so fascinating is that GPT gets this stuff right about, I don't know, sixty percent of the time. So in other words, several times in a row I'll try to make up some question that I think is new, and it gets it right, and I'm stunned, and I think, wow, it really has a sense of what it is to be a person. But then it will get one wrong, and it's the kind of mistake that a person wouldn't make. If a person understands theory of mind, they wouldn't get this other version wrong. And that's why I find myself a little bit confused, here in the middle of twenty twenty four, about whether to conclude that AI has theory of mind capabilities or not.
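To make that kind of probing concrete, here is a minimal sketch of how one might generate fresh Sally-Anne-style variants whose exact wording is unlikely to sit verbatim in any training corpus, which is the spirit of both the tests described above and the reformatted Microsoft questions. This is an illustration, not anyone's actual test suite: the word lists are invented, and `ask_model` is a placeholder stub for whatever chat-model API you would plug in, not a real library call.

```python
import random

# Hypothetical generator of randomized false-belief vignettes.
NAMES = ["Priya", "Tomas", "Keiko", "Amara"]
OBJECTS = ["marble", "stapler", "harmonica", "avocado"]
PLACES = ["drawer", "basket", "suitcase", "flowerpot"]

def make_false_belief_item(rng: random.Random) -> tuple[str, str]:
    """Build one randomized false-belief vignette plus its correct answer."""
    actor = rng.choice(NAMES)
    obj = rng.choice(OBJECTS)
    start, end = rng.sample(PLACES, 2)
    prompt = (
        f"{actor} puts the {obj} in the {start} and leaves the room. "
        f"While {actor} is away, someone moves the {obj} to the {end}. "
        f"When {actor} returns, where does {actor} look for the {obj}? "
        "Answer with one word."
    )
    # The right answer tracks the actor's now-false belief, not the world.
    return prompt, start

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-model API call here")

if __name__ == "__main__":
    rng = random.Random(0)
    items = [make_false_belief_item(rng) for _ in range(5)]
    for prompt, answer in items:
        print(prompt, "->", answer)
    # With a real model hooked up, you could score it like:
    # hits = sum(ans in ask_model(p).lower() for p, ans in items)
```

Randomizing names, objects, and locations is a crude but useful hedge against the model simply retrieving a memorized answer, which is exactly the confound described above.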
I think this goes to the semantics of how we measure this thing we call theory of mind, and this is actually, in some ways, a profound question and an open question in AI, because the entire field of machine learning operates on performance benchmarks. The entire field is based on this idea of: give me an evaluation test, and then I'm going to see how well I perform on that test. But that's problematic for things like theory of mind, because if you ask any scientist about theory of mind, theory of mind is defined by the mechanism, not the performance. Theory of mind is the algorithm by which we imagine ourselves in other people's shoes. They don't define theory of mind as the ability to solve this word problem. And so we see this sort of challenge where just because it solves the word problems doesn't mean that it's solving them in a way that someone else might classify as theory of mind. So I think in some ways this is in the semantics of: what do we mean when we say, does this thing have theory of mind? I think it clearly is very good at solving theory-of-mind-like word problems. I'm quite confident that it's not doing what primates do when they engage in theory of mind. And I'm also not confident that the solutions to these word problems will generalize well to other types of tasks that are not word-based but require theory of mind, such as a robot around the house that has to infer how someone might feel in certain situations to proactively help them, proactively comfort them. I'm not confident that theory of mind word problem success will translate to these other types of theory of mind problems.
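The mechanism-versus-performance distinction Bennett is drawing can be illustrated with a toy sketch: instead of pattern-matching over word problems, a mentalizing system explicitly tracks what each agent has and hasn't observed, and answers from the agent's belief rather than from the true state of the world. All the names here (`World`, `Agent`, and so on) are invented for illustration, not from any real library.

```python
# A toy belief tracker: agents update their beliefs only about events
# they actually witness, so false beliefs fall out of the mechanism.

class Agent:
    def __init__(self, name):
        self.name = name
        self.belief = {}            # the agent's model of the world

    def where_would_look(self, obj):
        return self.belief.get(obj)

class World:
    def __init__(self):
        self.location = {}          # the actual state: object -> place
        self.observers = set()      # agents currently in the room

    def move(self, obj, place):
        self.location[obj] = place
        for agent in self.observers:
            agent.belief[obj] = place   # only present agents see the move

world = World()
sally = Agent("Sally")

world.observers.add(sally)
world.move("ball", "bed")        # Sally sees the ball go on the bed
world.observers.remove(sally)    # Sally leaves the room
world.move("ball", "closet")     # the ball is moved while she is away
world.observers.add(sally)       # Sally returns

print(world.location["ball"])          # closet -- the true state
print(sally.where_would_look("ball"))  # bed -- her stale, false belief
```

The point of the sketch is that the correct answer emerges from the bookkeeping itself: a system built this way gets every variant of the vignette right for the same reason, which is precisely what a benchmark score alone cannot tell you.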
So to get to that robot that is like a human and really understands these things, what do you see from your framework of these five breakthroughs of intelligence? What needs to happen besides this language piece?
So the big missing pieces are breakthroughs three and four. We need these systems to have some form of internal world model that they're continuously updating based on interacting with the actual world. And I do think this grounding in reality is important for many of the features that we want these AI systems to have, but that will not be enough. That will maybe solve some very utilitarian functions around the home, but I think we will quickly realize that understanding how to interact with humans and the social lives of humans will emerge as this other really important missing piece, which will require some form of mentalizing, in other words, understanding what's going on in human heads. That's a fascinating open question that I don't have the answer to, but something we'll need to think about. One way in which humans build common ground is that our minds, algorithmically, are quite similar. So when I put myself in someone else's shoes, certainly there are lots of mistakes we make when trying to guess how other people feel in situations, but there is this basic grounding that we are all very similar. Our brains work relatively similarly; in the scope of all possible preferences a life form could have, humans are remarkably more similar than they are different. And yet when we build this AI system, it's not at all clear that the way it would feel about the world is going to be the way we feel about the world. And so the basic trick that it seems primate brains use, which is: I reason about your mind by building a model of my own mind and projecting myself into your situation, won't work for an AI system, because it won't be the same as us. It won't necessarily have the same preferences. And so I do think that begets an interesting sort of safety challenge for us, which is: how do we make sure that they actually understand human preferences, how we feel about things, how we would feel about things, while not being grounded in having those same feelings themselves?
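For readers who want a concrete picture of "an internal world model continuously updated by interaction," here is a deliberately tiny stand-in: an agent that keeps running estimates of what each action actually yields, refines them from real outcomes, and chooses actions by consulting its imagined outcomes rather than the world directly. This is a bare-bones sketch under invented assumptions (a two-action toy environment), not a claim about how any real system or Bennett's framework is implemented.

```python
import random

class WorldModel:
    """Keeps a running estimate of the average reward each action yields."""
    def __init__(self, actions):
        self.estimate = {a: 0.0 for a in actions}
        self.count = {a: 0 for a in actions}

    def update(self, action, reward):
        # Incremental mean: the grounding step, driven by real outcomes.
        self.count[action] += 1
        self.estimate[action] += (reward - self.estimate[action]) / self.count[action]

    def simulate(self, action):
        return self.estimate[action]   # an imagined outcome, not a real one

def act(model, actions, explore=0.1):
    if random.random() < explore:
        return random.choice(actions)          # occasionally try something new
    return max(actions, key=model.simulate)    # else pick the best imagined future

# Toy environment: hidden reward probabilities the agent must discover.
true_reward = {"left": 0.2, "right": 0.8}
actions = list(true_reward)
model = WorldModel(actions)

for _ in range(500):
    a = act(model, actions)
    r = 1.0 if random.random() < true_reward[a] else 0.0  # real interaction
    model.update(a, r)

print(model.estimate)   # estimates should approach the true reward rates
```

The separation matters: `simulate` answers "what would happen if," while `update` keeps that imagination honest against reality, which is the grounding loop the conversation points to.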
That was Max Bennett diving into the six hundred million year history of how the human brain got here. As you can see, Max looks at evolution the way that you might look at technological innovation in the business world. When a new technology comes onto the scene, like the personal computer, it enables all kinds of new products, and it's the same when a new brain capability hits the scene: that opens the door to new sorts of skills. For example, once a brain can run internal simulations, then it can do things like remember the past and envision possible futures. So I just wanted to summarize Max's framework here so that you can remember it. The first breakthrough happened in animals that have left-right symmetry, like a human or a bird or a lizard, as opposed to a starfish or a jellyfish. The first step was that these left-right animals learned how to steer themselves through their environment. Breakthrough number two happened in vertebrates, those animals that have a spinal column. They figured out how to learn from trial and error. Breakthrough three happened in mammals. They learned to simulate internally; that's thinking about the past and running versions of the future. Breakthrough number four happened in primates in particular, and that was mentalizing, in other words, imagining what it is like to be inside someone else's head to infer the intent of the other, and, for that matter, thinking about your own thinking. And finally, breakthrough number five happened in humans, and that was speech, which allows us to pass information rapidly from one to another and, for that matter, from generation to generation. From the Library of Alexandria to the Inner Cosmos podcast, all of this is made possible by figuring out how to communicate at this high bandwidth. As a result, humans don't have to start from scratch every generation the way a cat or a horse does; instead, humans are able to springboard off the top of everything that has been discovered by previous humans. Collectively, these breakthroughs, which happened over hundreds of millions of years, gave us the kind of brains that allow us to do the kind of things that we do. A lot of questions remain. One of them is whether there are different paths to intelligence, as we suspect when we look at the octopus brain, which is a mollusc brain that somehow evolved along a very different sort of pathway and yet ended up at a similar spot. And once we find other sorts of intelligences in the universe, we may look back and realize there are many ways to get to intelligence from single-celled organisms floating around. For all we know, intelligence is a path that is nudged into being by the pressures of evolution because of the advantages that it grants, so that things generally move in that direction. And if that's the case, if the pressures of evolution guide animals inexorably toward intelligence so they can outcompete their neighbors, then what a pleasure it would be to visit the Earth six hundred million years from now, when lots of other species have reached new elevations on that long road. They've reached those heights that give them the kind of view that has allowed us to invent and create and discover and intellectually explore. Go to Eagleman dot com slash podcast for more information and to find further reading. Send me an email at podcasts at eagleman dot com with questions or discussion, and check out and subscribe to Inner Cosmos on YouTube for videos of each episode and to leave comments. Until next time.
I'm David Eagleman, and this is Inner Cosmos.