Ep78 "Does your brain have one model of the world or thousands?"

Published Sep 30, 2024, 10:00 AM

Why do you see a unified image when you open your eyes, even though each part of your visual cortex has access to only a small part of the world? 
What is special about the wrinkled outer layer of the brain, and what does that have to do with the way that you explore and come to understand the world? Are there new theories of how the brain operates? And in what ways is it doing something very different from current AI? Join Eagleman with guest Jeff Hawkins, theoretician and author of "A Thousand Brains," to dive into Hawkins' theory of many models running in the brain at once.

What is special about the wrinkly outer layer of the brain, the cortex? And what does this have to do with the way that you come to explore and understand the world?

And by the way, why do you see a whole image when you open your eyes, even though each part of your visual cortex has access to only a tiny bit of the image? And for that matter, the brain is divided into different areas for sight and sound and touch and so on. So why, when you're petting a cat, does the cat seem unified? Why doesn't the sight of the cat seem separate from the purring and the feel of the fur? Can we build a new model of how the brain works, and in what ways is what the brain is doing something very different from what's happening in current AI? Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist at Stanford, and in these episodes we sail deeply into our three pound universe to understand why and how our lives look the way they do.

Today's episode is about a new model of the brain developed by my friend and colleague, Jeff Hawkins, and we'll get into an interview with him shortly. But let me preface by saying that for centuries people have stared at the brain and tried to figure out how this thing works, because when you stare at it, it's just a huge lump of cells. You can see that there's a wrinkled layer on the outside. And when people dissect that, they can see that that part is about three millimeters thick, and it looks a little different, looks grayer. So that part is called the gray matter, and we call this the cortex, which means bark, like tree bark. The stuff below that thin layer is called white matter. It looks white because the tiny data cables coming off the cells, the axons, are wrapped in a little sheath called myelin, which makes them look white.

Okay. Now, what you immediately notice by looking at brains across different mammals is that all the stuff you find under the cortex, all the subcortical stuff, looks essentially the same. Horses and elephants and mice, they all have the same architecture going on that we do. They all have a thalamus and hippocampus and cerebellum and so on. But there's one thing that really distinguishes us from our cousins, and for that we return to the gray matter, the cortex. It's not that our cousins don't have a cortex. What distinguishes us is the absolute enormity of our cortex. We humans have a ton of this stuff. So take four pieces of paper from your printer and place them next to each other to make one really large piece. That's how much cortex a human has, if you were to spread out the wrinkles. Now, our nearest cousins, the great apes, only have about one piece of paper's worth, and most mammals have a lot less than that. So something about the story of the runaway human success has to do with the fact that we have way more cortex for our body size than any other creature. And side note, I'm really talking about what's called the neocortex, or new cortex, because we also have a little bit of paleocortex, or old cortex. But the thing that really makes us outstanding is the amount of neocortex that we have.

But what is this neocortex doing?

Well, if you look at any neuroscience textbook, you'll see that this part of the brain, the cortex, is often drawn with different colored regions: this red region over here is devoted to vision, and this green one is devoted to hearing, and this yellow one to touch, and so on. But something I've been obsessed with, and write about in my latest book, Livewired, is that this is the wrong way to think about it, because the neocortex is remarkably flexible. It's not a fixed map. If you are born blind, the part of your cortex that we would have thought of as visual cortex gets taken over by hearing and touch and so on. Now let me be really clear about what I mean by taken over. The neurons there are the same. The cortex looks exactly the same from the outside, but the function of those particular neurons is now not visual. They have nothing to do with visual information anymore. Instead of firing when it detects a moving object, that same neuron now responds to a touch on your toe, or hearing a B flat note, or whatever. So the little labels that we draw onto the brain, these maps that we impose, these are actually massively flexible.

And as you may know, I gave a talk at TED about this a while ago, where I showed that you can feed in new kinds of information, let's say through the ears or the skin, and the brain will figure out how to deal with that data. It will flexibly devote part of its cortical real estate to that. And this line of thinking led some scientists, like Vernon Mountcastle some decades ago, to realize that the cells of the cortex are a one-trick pony: no neuron is inherently a visual neuron or a neuron devoted to hearing or touch or smell or taste or memory or whatever. All parts of the cortex are perfectly capable and willing to take on any job. So that suggests they're all running some sort of basic algorithm. And it doesn't matter what kind of data you feed in. Different parts of the cortex will say, cool, I'll build a representation of that data. I don't care if it comes from photons or air compression waves or temperature or whatever. I'm on the job here to build an understanding of whatever is coming in locally. Now, it's not individual neurons that are building models, but instead groups of many tens of thousands of neurons arranged in a six-layered cylinder. So think about this like you're a geologist and you drilled out a cylinder of rock and you saw six layers in it, six sedimentary layers.

That's what the neocortex looks like: six layers. And it's built out of these columns, which have the same types of neurons with the same connection patterns in each column. So think about the cortex as being made of lots of these columns, like taking hundreds of thousands of grains of rice and standing them up on end and packing them all next to each other. Now, people have known about cortical columns for many decades, since Vernon Mountcastle first discovered these in nineteen fifty seven. But recently someone has pulled together several different threads to propose how this could underlie what the cortex is all about. And that someone is Jeff Hawkins and his team. And so I met with Jeff in my studio. Now, Jeff is one of my favorite people because he does theoretical neuroscience. He really tries to figure out the big picture of what the brain is doing. Jeff has a very interesting history, so I'll just mention that in the nineteen eighties he was a graduate student at Berkeley, where he proposed a PhD thesis on a new theory of the cortex, but his proposal was rejected. So he ended up pursuing his vision for mobile computing instead, and in nineteen ninety two he launched the company Palm, which made the Palm Pilot, if you remember that. This was a little handheld device, and you could write on it with a stylus and it would translate your handwriting into text. You could use it for your address book and your calendar and your contacts and note taking. This was the first entrant into the world of portable computing, and it really changed the world.

Anyhow, a decade later, Jeff returned to his original love, which was theoretical neuroscience, trying to figure out what's going on with the brain, and he wrote a book in two thousand and four called On Intelligence, which was very influential on me and lots of other thinkers I know. So I was very excited when Jeff recently came out with his next book, which represents his last decade and a half of research. It's called A Thousand Brains: A New Theory of Intelligence, and it describes his framework for thinking about the brain. So, without further ado, let's dive into a very cool new model of the brain. Okay, Jeff. So you are a theoretician. You think about the brain from a high level. We're in this era now of AI, where AI is doing all kinds of things that are amazing and that no one expected. But you see the brain as being very different from what is going on with, let's say, large language models. So tell us about that.

That's absolutely true. You know, the current AI wave is really amazing, but those models don't work at all like the brain. And I think you could start with one really fundamental difference: brains work through movement. We move our bodies through the world. We move our hands over objects to touch them and learn what they are. We move our eyes constantly. So the inputs to the brain are constantly changing, mostly because we're moving through the world. The term for that is a sensorimotor system. And the brain can't understand its inputs unless it knows how it's moving through the world. So we learn by exploring, by moving to different places, picking things up, touching, and so on. All animals that move in the world learn this way. This idea that the brain is a sensorimotor system has been known since back in the late eighteen hundreds, but it's pretty much ignored by everybody. And it leads to a fundamentally different way of understanding how we acquire knowledge and how knowledge is represented in the brain. Whereas today's AI, most of it is built on deep learning and transformer technologies, which essentially means we feed data to it; it doesn't explore. And with large language models, we just feed in language. So there's no inherent knowledge about what these words mean, only what these words mean in the context of other words. Right? But you and I can pick up a cat and touch it and feel it and know its warmth, and we understand how its body moves, because no one has to tell us that; we just experience it directly. So this is a huge gap: pretty much all brains work by sensorimotor learning, and almost all AI doesn't. And you can just peel the layers apart and see what the differences are, and it makes a huge difference. So I'm a fan of AI today, but I don't think it's the future of AI. I don't think it's going to get you to what people really want, truly intelligent machines.

Okay, terrific. And we'll dive into that more in a little bit. Now, when we look at, let's say, the human brain, there are lots of areas that we can point to. There's the cortex, the wrinkly outer bit, and there are all these subcortical areas. When you think about intelligence and the stuff that we're going to talk about today, what is the part that you concentrate on?

Well, we concentrate first and foremost on the neocortex, which is about seventy five percent of the volume of your brain. I mean, it's what you see, as you said: if you open up the skull, what you see is the neocortex. So it's a pretty dominant part of what we think of as intelligence. You can't consider it completely on its own; it's connected to all these other things. And so we also study those other things, in service to the neocortex. We study the thalamus, and we study the cerebellum, we study the basal ganglia, just because you have to know how the cortex works with these other things. But primarily our goal, and many neuroscientists' goal, is to understand the neocortex, because that's what mammals have, and we've got a big one. Most of what we think about being intelligent, our ability to understand the world and generate language and see and hear and so on, is the neocortex. Not one hundred percent, but most of it. Fortunately, not only is it the biggest structure, but it's a very, very regular structure. You can look at this thing, the neocortex; it's like a sheet of cells, about the size of a large dinner napkin and only a few millimeters thick, and it gets wrinkly because it's stuffed into your head. And everywhere you look on it, it looks remarkably complicated and remarkably the same. The areas doing vision look like the areas doing language, which look like the areas doing touch, which look like the areas doing everything, really. And so it has long been speculated that there's a common algorithmic principle that applies to everything we do: all of our sensory inputs, all of our thinking, all of our language. It's hard to believe, but the evidence is overwhelming. And so our research has really been to understand what that cortical algorithm is, embodied in this repeated structure often referred to as a cortical column, that seems to underlie vision and hearing and touch and thought and everything we do. And that's just an appealing thing to try to understand. And we've cracked it. We actually cracked it. We understand what's going on. That's awesome.

Okay, so a couple of things. The way I sometimes phrase this to people is that if I had a magical microscope and could show you a part of the brain, and you could see all the activity running around in the cortex there, could you tell me whether that's visual cortex or auditory or somatosensory? And the answer is you couldn't tell me, and I couldn't tell you.

Right, it all looks the same. And as you know, there are these experiments people have done where, first of all, if you have trauma to one part of the cortex, other parts will pick up the same function. People have also rerouted sensory inputs to different parts of the cortex in animals, and they seem to work.

So, for example, you have visual information that, instead of going to the visual cortex in the back, gets rerouted to the auditory cortex, and that auditory cortex becomes visual cortex.

Right, it's an incredibly powerful and flexible system. And mammals, you know, we have a set of sensors, quite a few actually, more than most people think, because the skin has a lot of different sensors. But other animals have different sensors, and they have cortex too. So there seems to be this universal algorithm that can be applied, and now we know it's a sensorimotor algorithm. We've spent decades trying to figure this out, and we've cracked it. Oh, that's awesome.

Just before you tell us about that, tell us: what is a cortical column?

Okay. So imagine, as we talked about, the neocortex is a sheet of cells about three millimeters thick. A cortical column is a little section of that, going through the full three millimeters. It varies from about a third of a millimeter to a millimeter in diameter. It's not something you would see; it's not like it's sitting there to be plucked out. But we know they exist. So within that, let's say it's a cylinder three millimeters tall and half a millimeter wide that goes through the cortex and contains all the neural machinery that you would see anywhere in the cortex. Each cortical column, and they look like little grains of rice in some sense, so you can imagine lots of little grains of rice stacked next to each other, gets input from somewhere. In parts of the brain they get patches of sensory input, so from patches of the retina, from patches of the cochlea, or patches of the skin. Other parts get information from other parts of the neocortex, so the cortex is connected to cortex. But each one is looking at a small area. If you think about the primary sensory regions of the cortex, which are quite large, each column is getting input from a small sensory area. And so people used to think, well, if this column is only getting input from a small part of the retina, it can't really be doing very much, right? It can't be very smart. All it could do is process a little piece of information there, and therefore maybe it's going to detect an edge or something like that, and there's a lot of evidence for that. But we now know what happens is that the cortical columns get input, over time, from different parts of the world. The eyes are moving like three times a second, so that cortical column may be looking at three different things every second, and it can integrate how the sensor is moving, how your eyes are moving, with what it's sensing, to build models that are much larger than what it can sense at any moment. In the same way, you could take your finger in a dark room and say, okay, David, I want you to learn this new object, let's call it a coffee cup, which you've never touched. What you could do is touch the coffee cup and move your finger along and around it, and as you do, you build a three dimensional model of the cup, even though you're only getting input from one fingertip. The eyes are doing the same thing; it's surprising, you don't realize this. So every cortical column, we understand now, is doing this sort of processing: combining movement information and sensory information, building what we call structured, or three D, models of things in the world. So it's quite different from how even most neuroscientists think about it, and there are a lot of reasons we can talk about for how it was missed all these years.

So in the cortex, you have essentially six layers of cells, and a column is all six layers. It goes up and down. Think of it like layers of a cake, and the column is like taking a straw and shoving it through the top, and so you've got this.

Okay, got it, a straw through a cake.

Okay, great. And so the idea is, if you're looking at some column in, you know, primary visual cortex, your point, Jeff, was that it's like looking at the world through a straw. It only sees a little tiny piece of the world. But because the eyes are moving around, because you're exploring the world, it's actually getting lots of pieces of information. It's exploring the world in the same way that your fingertip does.

Right, and it has to integrate information over time; that's the key. And you can literally do this. You can look at the world through a straw, and you can say, oh, what am I looking at? Well, you can't tell. But you start moving the straw around, and then you can. And you can also learn objects that way. So literally you can learn by looking through a straw, which is sort of what one column is doing. Got it?
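To make the straw idea concrete, here is a minimal sketch in Python of a single column-like learner. It is an illustration under made-up representations (named locations and feature labels are assumptions for the example), not the actual Thousand Brains or Numenta code: the column stores each sensed feature at the location the moving sensor reports, and later recognizes an object by keeping only the stored models consistent with a new set of feature-at-location observations.

```python
# A toy "column": it learns an object as a set of (location, feature) pairs
# gathered while a single sensor moves over the object, and recognizes objects
# by checking which learned model matches new observations.
# Illustrative sketch only; not the Thousand Brains implementation.

class Column:
    def __init__(self):
        self.models = {}  # object name -> {location: feature}

    def learn(self, name, observations):
        """observations: list of (location, feature) collected as the sensor moves."""
        model = self.models.setdefault(name, {})
        for location, feature in observations:
            model[location] = feature

    def infer(self, observations):
        """Return the set of learned objects consistent with every observation."""
        candidates = set(self.models)
        for location, feature in observations:
            candidates = {n for n in candidates
                          if self.models[n].get(location) == feature}
        return candidates

# Learn a "coffee cup" by moving one fingertip over it in the dark.
col = Column()
col.learn("coffee cup", [((0, 0, 0), "curved surface"),
                         ((0, 0, 3), "rim edge"),
                         ((2, 0, 1), "handle")])

# Two later touches are enough to settle on a hypothesis.
print(col.infer([((0, 0, 3), "rim edge"), ((2, 0, 1), "handle")]))  # {'coffee cup'}
```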

And in your model, there are thousands of such columns, and each one of these is learning a model of the world as it goes. So tell us about that. Right, right.

So this idea that there are all these columns, and that they share a fundamental algorithm, is not a new idea. But we were, I think, the first people to figure out what that algorithm is and what it's doing. So here's the trick of this thing. When you look out at the world, you have a sense of where things are. I have a sense of where you are relative to me. I have a sense of where this microphone is relative to me. I know where my hand is relative to this cup. Now, it turns out that if you have any kind of sense of location in space, you have to have neurons representing it; nothing goes on in the brain unless there are neurons firing to do it. And it turns out most of the machinery in the neocortex is keeping track of where things are relative to other things. So in those six layers, all those cells, at least half of that circuitry is tracking where the sensory input is coming from in the world. If I move my finger over this coffee cup, the part that's getting the sensory information, say I'm sensing an edge, has to keep track, as I move my finger, of where my finger is, its location and its orientation relative to this cup. That's quite complicated, but that's what it has to do to build the models. And now we know how it does it; there's all this evidence for it. So the brain is just keeping track of where all of its inputs are in the world, all relative to other things, and then it builds up these three dimensional models of the world. So tell us about how it does that, then. Right. So think about when you were in high school: you learned about Cartesian coordinates, x, y, and z. If I wanted to say where something is, where you are relative to me, I might say, okay, your nose is the origin, and this thing is at some distance from there in x, y, and z. Well, you have to have something like that, but brains don't do it that way. They do it another way. And this comes from some very clever research over the last twenty years, in which people discovered, in the entorhinal cortex and hippocampus, cells called grid cells and place cells, which actually operate as reference frames. They are a way for neurons to represent locations, and they work differently than x, y, and z; there's no origin. It's really clever how they work; nature has discovered a different way of doing this. Yeah, make sure you tell us a little bit about that. Okay, well, these are well known things. Grid cells are in the entorhinal cortex. What these cells do is this: if you take a set of them, individual cells are not unique, any individual cell might fire at different locations in space, but if you take a set of them, they are unique, and so you can encode a unique location in space. And the key thing about them is that these cells automatically update as you move. The original grid cells encode where your body is in a room, and as you move, that's called path integration: it says, okay, you're moving in this direction at this speed, so we'll just automatically update these neurons as if we know where you are. It's what sailors used to do with dead reckoning: you say, I've been heading north for an hour at three knots, therefore I've gone three miles in this direction. So we know that these cells exist. They've been well studied; people won the Nobel Prize for these things.
So we speculated that the same neural mechanisms, these grid cells and their equivalents, would be in the cortex, in every cortical column, and sure enough, they're finding that now. All kinds of research is now finding, in humans and other animals, that there are grid-cell-like structures in cortical columns. And so what does that tell you? It tells me that that's the mechanism the brain uses for reference frames. So literally, when you build a model of something in the world, like a model of a cup or a model of anything, what you're essentially doing is saying: here's a sensation, and here's its location; here's another sensation at a different location; here's another sensation at a different location. You add all these together and you get a three dimensional model. You can say this thing consists of these features at these locations relative to each other. And so literally, in our heads we build models of the world that are three dimensional analogs of the physical things we interact with. That's why you appear three dimensional to me. You're not an image; you're a three dimensional structure, because I have a three dimensional model of humans, and I have a special model for you, David.
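The dead-reckoning comparison can be sketched directly. Here is a simplified path-integration example (my own illustration, not a model of real grid cells, which use an origin-free population code): a location estimate is updated from self-motion alone, with no external landmark.

```python
import math

# Minimal dead-reckoning / path-integration sketch: update an estimated
# location purely from movement signals (heading, speed, time).
# Math convention used here: 0 degrees = east, 90 degrees = north.

class PathIntegrator:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def move(self, heading_degrees, speed, duration):
        """Integrate one movement segment into the location estimate."""
        distance = speed * duration
        self.x += distance * math.cos(math.radians(heading_degrees))
        self.y += distance * math.sin(math.radians(heading_degrees))
        return self.x, self.y

# "Heading north for an hour at three knots" -> roughly three miles north.
tracker = PathIntegrator()
print(tracker.move(heading_degrees=90, speed=3.0, duration=1.0))  # (~0.0, 3.0)
```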

Okay, great, okay. So you've got these columns in the cortex. They're building three dimensional models, or keeping track of where your fingertips are, where your eyes are. So we've got these different windows into the brain. You've got these data cables coming in carrying spikes. It's all spikes, but some of them carry visual information, some auditory, some touch.

The brain doesn't know that, by the way. Exactly right, it's all just spikes. And so any particular column might only be getting a subset of those. Tell us about that. Right, right. Well, I'm not sure it's going to be a subset.

But what I mean is, if I am a cortical column that happens to be sitting in the visual cortex, then I happen to be getting visual information, but I'm not getting auditory information.

Right. One of the first things we had to address with this theory is, why does the world appear unified? I don't feel like, oh, I'm touching something with my hands and I'm looking at something else with my eyes; it's all one thing. There's this cup, and I feel the warmth of it, and I know it. It's one thing. And yet we have all these different models. So it turns out you have models of cups that are tactile models; they're based on how the cup feels. You have models of how it looks. You might even have a model of how it sounds: for this particular ceramic cup, I have an expectation of what it would sound like if I put it down on this counter here, or on a different ceramic counter. And yet these models are all independent. But they're not completely independent, because there are these long range connections in the cortex. They go all over, from the left side of the brain to the right side of the brain and everywhere in between; there are lots of different types. What they're essentially doing is voting. My finger says, I think I'm touching something that feels like a cup, but I may not be certain. Another column says, I'm sensing something too, but I'm not really certain. And they very quickly reach a consensus: the only thing that makes sense given all our inputs is that we're all sensing the same object. And so, across these long range connections, it settles into a percept, and that's what you perceive. You don't normally perceive the individual sensations from your eye or your fingers. You just say, I'm holding this cup in my hand, and it's one percept. So it's these long range connections and how these columns vote all the time. This is also why I can flash an image in front of your eyes; each column is looking at part of that image, so who decides what the whole image is? And by the way, you don't even have time to move your eyes. Once I've learned objects, I don't have to move my eyes to recognize them. That's what we call flash inference. The reason is that each part of the visual cortex has a hypothesis about what it might be seeing, and they vote, and the only thing that makes sense is the final thing they agree upon. So I have to learn by moving my eyes, by attending to different things, and by moving my fingers, but I don't always have to infer, or recognize, things by movement. I can just flash an image in front of you and you say, I know what that is, and you don't have time to move your eyes. This fooled a lot of vision researchers for many years, because they assumed movement wasn't necessary, since I can flash an image in front of you and you recognize it. But you can't learn that way. You have to learn by attending to different things. Quite right. Just so it's clear to the audience.

So, this issue about voting: it's not that they're all submitting their votes to some central agency. It's that they're all talking with one another simultaneously. Simultaneously, right. And something about the spike patterns settles into shape.

Right, right. Well, we know exactly how this occurs. We have models of it, and we've simulated it, and it matches the neuroscience. It takes a little while for people to get a sense of it. You're right, there's no central voting tally. And all the columns don't have to talk to all the other columns. It turns out they only have to talk to a few other columns; as long as everyone talks to somebody and the whole thing is connected, you don't need a zillion connections. It's more like this: imagine neurons spiking. Say I have five thousand neurons representing what I'm seeing; that's not that many, actually. So, five thousand neurons. And in the brain, we're getting a little technical here, activations are typically sparse, meaning that of those five thousand cells, maybe only two percent, about one hundred, are active at any point in time. The others are silent. So I'm representing something by saying there are one hundred neurons active out of five thousand. Now, if I wasn't certain, I might say, oh, well, let's do this: I'm going to say it could be object A, it could be object B, it could be object C, and I'm going to activate them all at the same time. So now I have three hundred neurons out of five thousand simultaneously active. That might seem confusing, but it isn't; there's no trouble with this, and everybody's doing the same thing. They're all holding multiple hypotheses, and very quickly it settles: you're supporting this hypothesis, and you're supporting this one. It happens simultaneously. No one has to go through it serially; there's no counting of the votes. It all settles very, very quickly.
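The voting idea can be pictured with a toy example. In this sketch (a simplification supplied for illustration, not Numenta's mechanism), each column carries a union of candidate hypotheses, and the shared percept is whatever remains when the candidates are intersected across columns:

```python
# Toy sketch of cross-column voting: each column holds a set of candidate
# objects (a union of hypotheses), and the percept is the hypothesis that
# every column can support. Illustrative only; not the real neural mechanism.

def vote(columns_hypotheses):
    """columns_hypotheses: one set of candidate object names per column."""
    return set.intersection(*columns_hypotheses)

touch_column   = {"coffee cup", "bowl", "vase"}    # feels a curved edge
vision_column  = {"coffee cup", "bowl"}            # sees a round rim
thermal_column = {"coffee cup", "teapot"}          # feels warmth

print(vote([touch_column, vision_column, thermal_column]))
# {'coffee cup'} -- the only hypothesis consistent with every column
```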

It's kind of a cool thing to think about, what happens when you settle on a hypothesis and then you switch. For example, looking at the Necker cube, this cube made out of, yes, twelve lines. You know, you see it one way, then you see it the other way. What is it that allows it to switch?

All right. Now, a Necker cube is a two dimensional image, right? It's a two dimensional image of a three dimensional wireframe cube, or something like that. So it's not three dimensional, it's really two dimensional, but your brain wants to make it three dimensional, because it doesn't know two dimensional things that look like that. Everything we perceive gets fit into our models. We don't say, oh, that's a two dimensional image, it can't be a cube. No, the brain says, that's got to be a cube, because I know cubes. So it wants to settle on a hypothesis: okay, this corner is in front of that corner, and this corner is behind that corner, and this corner is to the left of that corner. It just has to do that to fit its models. That's right.

But why doesn't it land on a hypothesis and stick there?

Well, I don't really know, but other people have hypotheses about this. The evidence goes both ways; there are multiple hypotheses. And neurons have a way of getting tired of what they're doing. After a while they say, literally, I'm not going to keep firing on this forever. Things change in the world; we don't just get stuck. So there are various speculated mechanisms, and it's been observed, by which neurons will say, okay, I'll be active for a little while and then I'm going to stop and let someone else try something.

Yeah. But what it means is that the other hypothesis has to be kept alive somewhere somehow.

Well, maybe not. Maybe it's just, I have this hypothesis, I've locked in on it, and now I'm going to say that's no longer possible; go back to square one. What is possible? So it's not like I have these two images in my head, conceptually or perceptually. You don't feel that way, right? You only see one or the other. You lock in one, and the other is forgotten. But then if you disable the first hypothesis, we're not going to allow that to be the answer anymore, then it's, okay, what's possible? This one's possible, I'll switch to that one. It's not like they're both active. One's active, and then it gets tired, and then the other one takes over.

So, coming back to the main thing, one part that I want to return to is just this issue that a particular column might only be receiving touch information, another column might be receiving only auditory information, and so on.

Well, they build independent models, right. I could have a tactile model of an object and a visual model of an object; they're not the same. The visual model of the object will have color, perhaps; the tactile one will have temperature and texture and things like that. So they're different models. But because they can vote, you have a single percept of it. Yeah, okay.

And one of the things that's important here, which you and I both emphasize a lot in our books, is that all we are ever seeing is our model of the world, and so we don't have any direct access to what's actually out there. And so, in fact, you mentioned earlier the binding problem, though not by name. The binding problem is this issue that when the coffee cup is here and it's moving, how come the color doesn't bleed off the cup?

And how come it seems like one thing, and so on? The binding problem is a poorly defined problem. Exactly. It means a lot of different things to a lot of different people. So you've got to be really careful if I say, oh, let's talk about the binding problem; I might have a different conception of what the binding problem is.

To me, the binding problem is the one I've already discussed, which is that you have these different sensory inputs, but somehow they lead to a single percept, and you can switch back and forth. It's like, how do I bring these things together? How do I say these are all the same thing? And people used to think of the binding problem like, oh, if I have the auditory cortex and the visual cortex and the somatosensory cortex for touch, then they must all project to some place where they are bound together into a single model. And we flip that on its head. They don't bind that way. They bind together just through long range connections, but there's no one place where it has to happen. Nobody's sitting on top of it and saying, hey, what's your vote? What's your vote? So we don't need a model that incorporates all the aspects of objects. We have independent models that we can invoke as needed, and they all vote to reach a common consensus. So I have no problem navigating and doing things in the dark. I have no trouble doing things just by vision. I can sometimes do things by audition alone. The same things are going on; I have the same model of the world. If I'm walking at night between my bed and the bathroom and it's pitch black, I still have the same model of the house. I still know where the door is going to be and everything else; I can do it with touch instead of vision. So there isn't a central model that says, here's a model of my house for touch and vision and hearing. It's all these independent models. Right.

And now, the reason you called your book A Thousand Brains, and you call this hypothesis...

The Thousand Brains Theory, right.

...is precisely because you've got all these cortical columns and they're each making a little model of the world, and they're all talking to one another. Right? You know, hi, this feels like a coffee cup, this looks like a coffee cup, this sounds like a coffee cup when it's set down, this is the temperature of a coffee cup, and so on. These are all talking with one another.

So the reason we call it the Thousand Brains Theory is that each cortical column is doing what the entire brain is doing. Each cortical column is a sensorimotor learning system. And when we ask where the model of something is, we've been talking about this, where is the model of this cup or the microphone or whatever, so many things we know, where is that model? It's in many different places. So there are a thousand models of coffee cups, a thousand models. You don't perceive that, but they exist. The name was really trying to capture that original idea, that cortical columns share a common algorithm and that there are all these different models out there that are different, and they vote to reach a consensus.

Yeah, and it's certainly consistent with the idea that, for example, if someone is born blind and their visual cortex gets taken over by hearing and touch and so on, they are better at hearing and touch, presumably because they just have a lot more real estate.

Devoted to it, right. More real estate, and a lot more practice too. Right, right. So it's amazing how flexible it is.

Yes. Given your model of the brain, let's talk about AI, what you think is going on currently with LLMs, and what that is missing.

Right, LLMs, that's interesting. Well, let me start with a criticism of AI in general. AI has always been focused on what they call benchmarks: how well can you solve problems? How well can this system recognize images? How well can it play chess? How well can it play Go? How well can it translate from one language to another? And you have all these benchmarks, and everyone competes against these benchmarks, which are kind of scattered all over the place. That's the wrong way to think about it. Let's use the computer as an analogy. When we say something is a computer, we don't base that on what it's doing; we base it on how it works. Alan Turing and John von Neumann defined what we now call a universal Turing machine, which is like, okay, if a system has memory and a processor, and the memory holds data and instructions, and you can change the instructions and change the data, it can do anything. And that is a computer. So I can say my toaster is a computer, even though it's a very limited computer, because it has one of those things inside. If it were hard coded with springs and wires and stuff, it wouldn't be a computer. But because it has a little microprocessor that fits that definition, it's a computer. So that's how we do it in the computer world. We say these are the functions it has to perform, and you can apply it to big problems, little problems, different types of problems all over the place. In AI, we've been focused on this idea of benchmarks, and we always want to beat some human. Well, take a dog. Almost everyone who has a dog says it's intelligent, right? But it doesn't have language, it doesn't play chess, it doesn't play Go. So why do we say it's intelligent? Because we can tell that the dog has an internal model of the world, kind of like my internal model. It knows where the door is, it knows how to go on a walk. So why focus on this issue of, well, it's not intelligent because it doesn't play Go better than the best human player? I think part of the problem was that people didn't know how brains worked, and if you don't, what are you going to do? Well, now we know enough to build this stuff. So I think that's what the future is going to be. We're going to say AI systems don't have to be like humans. They don't even have to do the same things humans do. Some of them are going to be dedicated to very focused tasks, and some are going to be very broad; some might be, you know, engineers building space stations. This huge range. But they're all going to work on the same principles that biology has discovered. Today's AI doesn't work on those principles, most of it. If you talk about the large language models, these are transformer models: we feed in a string of tokens, basically words or word-like things, and the model just learns the structure of that string, and it's very good at what it does. But there's no inherent knowledge of the actual world. It doesn't have three dimensional models of the world. If someone has written about something, it'll tell you about it, but it can't experience it itself. So you couldn't send one of these AI systems into space and say, go to Mars, explore, see what's out there that we can build things with, here are some tools, start building a structure. There's just no way it's going to do that; it's not going to happen. But in contrast, the kinds of systems we're working on can do that.
That's what humans do, and that's the promise of AI. It's not just targeting things that humans can do, high level things like translating language or writing poems. It's really: how do you build a system that understands the world and knows how to act in that world? That's the key.
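As a side note on the stored-program definition Hawkins appeals to above, here is a toy sketch (not a faithful universal Turing machine, and not anything from the episode) of the key property: a single memory holds both instructions and data, so changing that memory changes what the machine does.

```python
# Toy sketch of the stored-program idea: one memory holds both instructions
# and data, and changing that memory changes the machine's behavior.
# Illustration of the concept only.

def run(memory):
    """memory: list whose cells are instruction tuples or plain data values."""
    pc = 0  # program counter
    while True:
        op, *args = memory[pc]
        if op == "ADD":            # ADD src1 src2 dst: memory[dst] = memory[src1] + memory[src2]
            a, b, dst = args
            memory[dst] = memory[a] + memory[b]
        elif op == "PRINT":        # PRINT src: output the value in a memory cell
            print(memory[args[0]])
        elif op == "HALT":
            return
        pc += 1

program = [
    ("ADD", 4, 5, 6),   # cell 6 = cell 4 + cell 5
    ("PRINT", 6),
    ("HALT",),
    None,               # unused cell
    2,                  # data
    3,                  # data
    0,                  # result goes here
]
run(program)  # prints 5; editing the data or instruction cells changes the behavior
```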

One of the things you wrote in your book that I thought was great was where you address this issue of the existential threat of AI that a lot of people are banging on about, and you don't think it's a threat.

I don't think it's a threat. I mean, you have to tease it apart, because there are different existential threats people talk about. One is called the alignment problem: all these AI agents, you're going to tell them what to do, but they won't be aligned with our values. And I'm just saying they don't have any values; it's just so far from reality. Once you understand how brains work, you see it's not doing any of that stuff. It's hard for me to give a succinct answer to this, but I don't think that today's AI systems have any of these problems. They're not going to run away. They're not going to have their own desires. They're not going to say, hey, I'm awake, I need to survive.

Because these current large language models are just statistical parrots that are taking in language and spinning language back out.

Right, and you can apply them to robotics and other things, but they're still going to be statistical parrots, exactly. And by the way, they lack most of the human brain. As we talked about earlier, the neocortex is the biggest part of the brain, but we have a lot of other parts of the brain, our emotional centers, and much of what makes us human, our drives and motivations, is mostly not the neocortex. There are these other things. If you provided an AI system with those other things, I might start worrying about it. But if you're just trying to model the world, it's no threat. We just assume that some AI system, because it can spew back language, is going to think like us and be like us and have our same motivations. It's nothing like that at all.

So tell us about the Thousand Brains Project and how you're going to make this happen.

Right. So we've been working on this theory for decades, really, and maybe five or six years ago we had some breakthroughs and it all sort of came together. And then we said, well, I always thought this is the way we're going to build truly intelligent machines. And this was at the same time that deep learning and transformers were taking off and there was all this excitement about it. But that didn't distract us. We said, okay, let's see if we can start building this stuff. So for a couple of years we had a small team that was trying to implement the Thousand Brains Theory: modeling cortical columns, the voting, multisensory things, all this stuff. And we decided earlier this year that the best way to go forward was to do this as an open source project. We haven't actually told people we're doing this before. So we've created the Thousand Brains Project. We're taking all of our code and putting it in open source. We're taking the patents, we have a lot of patents, and we've put a non-assert clause on them; we've done that. We've hired a team of people, like an open source project manager, for the outside contributors. We've already got quite a few people interested. We've already received some funding from the Gates Foundation for this, significant funding, which will help fund the project for a couple of years. There's a guy named John Shen at Carnegie Mellon University who's building silicon to implement cortical columns. So there are people around the world who've been excited about our work and following it and who want to join in on this. So we figured, let's get them all together, let's build a framework, an open source project. And so we built out this team. It's run by a woman named Viviane Clay, who's just brilliant, with the technical side done by Niels, and we're just starting this. We've talked about it, but we haven't officially launched yet, because not everything is open yet, and there's a lot of stuff you have to put in place to make the whole thing work. But we're going full bore on this, and my hope is that anyone who's excited about the work, and there are quite a few people, can join us and work on this and propel it forward, and really create not just an alternate form of AI, sensorimotor AI based on brain principles, but what I think is actually going to be the primary form of AI, which is brain-based modeling: the Thousand Brains Project. This is amazing. So how do people get involved in this?

Uh?

You can just go to our website, Numenta dot com, and there's a lot of information already, tons of information. We have all the stuff we've accumulated: documentation, code, videos, all of that is up there, plus tutorials and so on. So you just go to Numenta dot com. It'll be obvious how to sign up to be informed of what's going on or how to get involved.

Great. So a listener to this podcast who says, I want to get involved and understand more about this thing, they go to Numenta dot com and they can...

They can. First of all, they can sign up to get notified about things that are happening. They can get educated on the whole project. I don't think they can contribute code quite yet, but that will happen within a month or so. It'll be obvious how to get started. There's a lot of information to learn. If you haven't already, you might want to start just by reading the book, A Thousand Brains, because it gives you not only the basics of the theory, but also the vision for how this is going to play out over time.

And so, just so I'm straight on this, the idea is that a person can download the code and run this model.

Right. The first thing we're making available is exactly that: you can do that. You can run our current experiments, you can recreate them, you can apply them in different ways. Great.

So something that you and I have in common, that we're both obsessed about, is this idea that we're living inside our own internal models; this is all a construction. And you had a line in the book that I loved, which is that if we had different sensors for picking up different information in the world, we would have a different perceptual experience, a completely different experience of the universe.

Well, maybe not completely. A blind person who is learning the world through touch, or a person who is deaf, or a person who maybe has sensory problems in his hands, they will all end up with a similar structure.

Sorry, but what I mean is not in terms of how we pick up on visible light, but rather if I pick up on infrared and you pick up on radio waves.

Okay, right. If you really did that, then you would have a different view of the world. Take the issue of color. It's often said that bees see in the ultraviolet and we don't. So what looks to us like a white flower is, to them, this beautifully colorful, variegated flower.

But let's say you saw a totally different part of the electromagnetic spectrum, so you see in the microwave range. The question is, would we have color at all?

I don't know, it's hard to say. There's an underlying, really interesting philosophical problem called qualia, which is: why does color feel like color? And why doesn't it feel like sounds or tactile sensations? It's an interesting challenge to understand that. I've written about it a bit.

Yeah, do you have a hypothesis about this? I'll tell you what mine is, but it's only ever been sort of half of one, which is that I think the structure of the data coming in defines the qualia.

I don't know why or how that's true.

But you know, with the eyes, you've got two two-dimensional sheets of data coming in, and so vision feels like something. With hearing, it's a one dimensional signal just going up and down, vibrating your eardrum, and that feels like something else. You don't confuse vision with hearing; those are like completely different worlds to you. My interest has been in what happens when we feed in new data structures. We've done a lot of interesting stuff in this area.

Exactly. Would you have a completely new quale? Is it possible? I mean, certainly you can imagine it. First of all, I agree with you: again, it's all spikes. There are no color spikes, there are no heat spikes, it's just spikes. So obviously the different qualia have to come about somehow from the structure of the data, spatially and temporally, and also in the sensorimotor sense, how things change as you move, and I think that's a big part of it. So I agree on a fundamental level that it has to be something in the data and nothing else. And then we can ask ourselves something like, well, imagine you've been blind your whole life. You don't have a sense of color. You've never experienced color, and so it would be kind of mysterious to you. Someone can say, well, can't you tell that's this type of orange and that's another? And you'd say, what are you talking about? They'd have to accept that you have some super sense, and the world looks different to you because you have vision and I don't. And they may be able to sense things that I don't. I could try to read braille, but if you're not a braille reader, what does that feel like? It's a blur, right? And they'd say, oh no, I feel everything there. So we can ask ourselves questions like, what is the world like to different people? Sometimes we'll end up with a similar model: no matter what sensors you and I have, we'd have a model of the physical structure of a coffee cup. But other times it could be quite different, certainly if you start sensing parts of the radio spectrum or other things. One of the things I always wondered is what it would be like if you had smell sensors on the ends of your fingers, and then everything you touch... Well, we kind of have that with temperature sensors, and we have touch of all kinds, but what if I could smell, so you could tell the chemicals that were on the surface of objects? This is what dogs do. Dogs don't just smell the air. They stick their nose right on the thing and they smell it, then they move to the next spot and smell it. Dogs build this three dimensional structure of smells. We don't have that; smell for us is kind of like something wafting in from some direction. Dogs have this incredible smell model of the world, and it's hard to imagine what it is, but I'm sure they have it. So I think it's fun to think about these things. In the future we'll build machines that perceive the world differently than we do. And that'll be great.

Yeah, Okay, Jeff, this has been wonderful.

Thank you for being here. Thanks, David. It's always great talking to you, and I enjoy it. It's a lot of fun, and I love your podcast.

So that was Jeff Hawkins, theoretician and author of A Thousand Brains.

Now.

I love his model because it builds on previous research and gives us a possible starting point for how this whole system might be working. This is a view of the brain in which you don't have just a single model of the world being constructed, but hundreds of thousands of little models, each viewing the world through its little straw. And these models are independent, but not completely independent, so they communicate with each other and they vote, and in this way the whole system converges on its best guess of what's going on out there in the world. And by this mechanism we construct a full three dimensional representation of the environment around us, with its sights and sounds and three dimensional structure. So this gives us a clear framework for thinking about the neocortex. Now, we might not know for a while if this answers everything, or if it needs some tweaking, or if there are far better models coming down the pike. But what I absolutely love about this is that this is where the endeavor of science shines: taking something that seems insanely complex, eighty six billion neurons with two hundred trillion connections, something of such vast complexity that it bankrupts our language, and saying, wait, what if there's a really simple principle at work here? What if there's a way that we could reduce all that complexity just by looking at this from a new angle? So let me give an analogy here. Just think about what it would be like if you had a magical microscope with which you could look into a cell and into the nucleus in the middle. What you would see is mind boggling complexity. You'd see millions or billions of molecules racing around and interacting and doing god knows what, and you'd say, wow, there's no way we're ever going to understand this.

But then Crick and Watson come along and say, actually, the important thing is this DNA molecule and the order of these base pairs, and all the rest is housekeeping. And suddenly the fog of confusion lifts. Now something that seemed well beyond us can be described in a sentence or two, and science leaps forward, and things move fast from there. I worked with Francis Crick when I was in my postdoctoral years, and now I look around me at Stanford and Silicon Valley, and there are thousands of laboratories and companies doing amazing work with genomes, and their existence results entirely from that one simplifying insight about DNA in nineteen fifty three, the new model that suddenly clarified what is happening inside the nucleus. By the same token, this is what we're trying to do with the brain. Brains appear to be ferociously complex, and yet we have lots of brains running around the planet.

We've got eight point two billion of them.

So something must be straightforward about their architecture, or else Mother Nature wouldn't be able to build these over and over with such reliability. You couldn't drop this massive quantity into the world and have them all functioning well unless there was something pretty uncomplicated about building and running a brain. So that is the overarching game of science: to take the overwhelming complexity around us and to find new angles to look at things that reveal simplicity. Go to eagleman dot com slash podcast for more information and further reading. Send me an email at podcasts at eagleman dot com with questions or discussion, and check out and subscribe to Inner Cosmos on YouTube for videos of each episode and to leave comments. Until next time, I'm David Eagleman, and this is Inner Cosmos.

Inner Cosmos with David Eagleman
