A brain's 86 billion neurons are always chattering along with tiny electrical and chemical signals. But how can we get inside the brain to study the fine details? Can we eavesdrop on cells using other cells? What is the future of communication between brains? Join Eagleman with special guest Max Hodak, founder of Science Corp, a company pioneering stunning new methods in brain-computer interfaces.
Why is it so hard to reverse engineer the brain? Can't we just measure the signals in all of the brain cells and then figure out the neural code? And if not, why not? And what does this have to do with solving vision loss, and eavesdropping on the activity of cells using other cells, and communication between brains using something other than conversation, and observing and understanding and maybe changing our own experience of the world? Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and author at Stanford, and in these episodes we sail deeply into our three-pound universe to understand the mysterious machinery inside: the eighty-six billion neurons that are chattering along with tiny electrical and chemical signals, producing our experience. Now, today's question is: how do you actually get inside the brain to study it? After all, we know that the brain is the root of all of our thoughts and hopes and dreams and aspirations and our consciousness. And the reason we know this is because even very small bits of damage to the brain change who you are and how you think and whether you're conscious. Note that other parts of your body, like your heart, can get completely replaced by a machine and you are no different. Or you can lose your arms and your legs and still be conscious, or you can get a kidney replacement and you're still thinking about your life and your family and what you need to do tomorrow. But even a tiny bit of damage to the brain, caused by, let's say, a stroke or a tumor or a traumatic brain injury, can change you entirely. Even if you don't lose your consciousness, you might lose your ability to think clearly, or to speak, or to move, or to recognize animals, or understand music, or understand the concept of a mirror, or a thousand other things that have taught us over the centuries about the complex landscape of this three-pound inner cosmos.
So we know the brain is necessary for our cognition and experience, but we didn't get to that understanding through detailed studies of the intricate circuitry; instead, it was mostly through observations of crude damage. So there's still an enormous amount that we don't understand about how the whole system works. We only have a sense of how it breaks. It would be like if you were a space alien and you looked at cell phones and discovered that if you zap the phone with your laser, then it doesn't make calls anymore. Okay, that's important, but it doesn't tell you how telecommunication works in terms of base stations and frequency bands and compression and SIM cards and everything else. For that, you would need to take off the cover of the cell phone to figure out what the billions of transistors are actually doing. And that's really our modern challenge in neuroscience: to study this incredibly detailed system more directly. So why is progress still so slow on that front? Well, it turns out it's very hard to study the brain's billions of neurons directly, this pink, magical computational material that mother nature has refined through hundreds of millions of years of evolution. Why? Because this is the computational core, and so mother nature has protected it in armored bunker plating. So that's the first challenge: the brain is tightly protected inside the prison of our skull. But that's only part of the challenge, and that can be addressed by careful neurosurgery. The bigger difficulty is that even when we can get in there by drilling a little hole in the skull, what we find is an incredibly densely packed device made of very sophisticated units that are microscopically small, and there are almost one hundred billion of them, which is more than ten times the number of people on the planet.
And each one of these neurons is sending very tiny electrical signals tens or hundreds of times per second, and these signals zoom down axons and cause chemicals called neurotransmitters to be released. And it's not generally clear how to read this insanely dense circuitry to understand how these trillions of incredibly small signals racing around in there lead to a particular outcome at the scale of a human, like you move your arm, or you have a craving for pistachios, or suddenly you're reminded of the poem Ozymandias, or whatever. What is the relationship between this small scale and the large scale? So how do neuroscientists try to decode this incredible complexity? The answer is by marrying the technology that we have, like computers, directly to the cells of the brain. And this is what we generally call a brain-computer interface, or BCI. We use that term to refer to essentially anything that allows direct communication between the brain and an external device. So people use these to control wheelchairs or robotic arms, or type directly onto a screen, or speak through a synthetic voice. The idea is to use BCIs to restore functions in people who have lost them, through paralysis or blindness, and someday perhaps to enhance the capabilities of healthy people. Now, how does a BCI actually work? People sometimes think about BCIs as measuring electrical activity on the scalp with an EEG, an electroencephalogram, and that counts, but you don't get very much detail from the outside of the skull. So the more sophisticated forms of BCIs involve measuring brain activity directly from the cells. And the main way to do this is with small metal electrodes that you insert into the brain tissue. And with these electrodes you can send little electrical zaps to stimulate the neurons, and you can also listen to hear when the neurons themselves are giving off small electrical signals.
Now, this is a technology that researchers and neurosurgeons have used for many decades, but it's still a challenge, because you have to drill a hole in the skull, and these little metal electrodes, although they're tiny, are actually pretty big from the point of view of neurons. From the point of view of the neurons, it's like inserting a tree trunk. It damages the tissue. Now, you've probably heard of companies like Neuralink. They're still inserting electrodes just like neurosurgeons have done for decades, but they're working to make them smaller and finer and robotically inserted, and also wireless in their communications, so the information can go back and forth without having a cable there. So it's a better version of the same idea of sticking electronics into the brain. But are there new ideas about how to read and write to brain cells, about how to interface with the brain? Today we're going to talk about what is at the cutting edge, and for that I called a colleague of mine who is shaping the future of BCI technology, Max Hodak. Max is an unusually brave thinker. He started studying brain-machine interfaces as an undergraduate, and while most people would be thrilled to simply be a part of that, he was already thinking about the ways that parts of the science were inefficient and could be improved. Some years later, he went on to be a part of the co-founding team at Neuralink, and he became the president, and then four years ago he left to found his own company, Science Corporation. When I visited him at Science Corporation recently, many of the things I saw there would have seemed like science fiction fantasy just a few years ago. So here's my interview with Max Hodak. You started a company called Science Corp, which we'll refer to as Science. Tell us about Science, because it's so exciting what you're doing there.
Our main focus at Science is restoring vision to people who have gone blind because they've lost the rods and cones in the retina. This was not something I'd worked on before; I hadn't worked on the retina, but I had this thesis that the technology was there, that this would be possible. There are, I think, two different ways to do this that people have been thinking about in the retina. There's a technique called optogenetics, where you use a gene therapy to deliver a little bit of DNA to the cells of the optic nerve to make them light sensitive, so that you could then activate them with a laser. Or you could put an electrical stimulator under the retina and drive the cells that are still there electrically.
And let me just say, for the listener: the retina is the lawn of cells at the back of the eyeball, catching the photons that are coming in through the front. And so if you've got a problem where, let's say, those cells have died, for whatever reason, lots of reasons, then what you're talking about is how do you get the remaining cells to catch the photons and send their signals back along the optic nerve.
Yeah, so to take a step back: if you're thinking about getting vision into the brain, there's a couple of different places you could think to do it. The first is the retina. So the back of the eye is the retina, which is this really nice two-dimensional sheet of neurons with a big cable going into the brain. So in some ways this is a really ideal interface to the brain; evolution has done this to give us vision. The first stop of the optic nerve out of the eye is a structure in the thalamus called the lateral geniculate nucleus, which is a very deep structure in the brain. It's very old evolutionarily, and there's about one point five million cells in the optic nerve, and about the same number of cells in the thalamus. And then from there you go out to a much larger number of neurons in cortex, in primary visual cortex. And so if you want to supply vision to the brain synthetically, in some sense your choices are really in the retina, in the LGN, or in V1, and everywhere past the optic nerve gets much, much harder. Nobody has ever really shown the restoration of form vision by directly stimulating either the LGN or V1. I mean, people haven't even really shown the restoration of form vision stimulating the optic nerve. The device that we're bringing to market now, which just recently finished a phase three clinical trial, sits under the retina and stimulates a layer of cells called the retinal bipolar cells, which are the first cells past the rods and cones. And so this is really, in many ways, the first opportunity to get a visual signal back into the signaling pathways into the brain.
So let's back up. How does your device work?
So the device is called Prima. It's a pretty cool idea. It's a tiny little solar panel chip, about two millimeters by two millimeters, so it's really very small, and if you look at it, you'll see all these little hex grids on it, these little hex tiles. Each one of those hex tiles is a photodiode and an electrode. So what we do is implant this under the retina in the back of the eye, where the rods and cones have degenerated, and the patient wears glasses that have a laser projector on them, and the laser projector projects the scene with laser energy onto the implant in the back of the eye. Wherever the laser energy is absorbed, it stimulates, and wherever there's darkness in the scene, it doesn't. And this is a cool idea because there's no implanted battery, there's no wires, there's no PCBs, there's no electronics other than this tiny little chip, because you send it both energy and information simultaneously in the laser pulse. It's tough to imagine how you would do this more simply. And when you look at past devices: a little over a decade ago, there was a company called Second Sight that had a retinal stimulator that is probably what people are most familiar with when they think about retinal prosthetics. It worked very differently than the Science Prima implant. First of all, it targeted a different layer of cells; it targeted the optic nerve rather than the bipolar cells, and that layer is just much harder to stimulate naturalistically in this way. And the second is that it was a conventional electrical implant. You had this big titanium box attached to the side of the eye, you had cables going in through the eyeball to power it. This was a four and a half hour surgery.
Being able to just put this little two by two millimeter chip of silicon fully wirelessly under the retina with a little insertion tool is a totally different game, and the clinical trial results, I think, really speak for themselves. It's the first time ever in the history of the world, as far as we know, that blind patients have been able to read again.
Oh that's so amazing. So all of the electronics and all that stuff is in the glasses themselves, which are capturing the scene like a camera and zapping it back with a laser to the chip.
Yeah yeah, powering it, Yeah, basically like a solar cell.
Congratulations on all your progress with that. It's an incredible device.
Yeah.
And also I should say, we didn't develop this from scratch ourselves. We acquired it from another company called Pixium, which was based in Paris and had started the clinical trial. Originally the technology came from a lab at Stanford; the scientist Daniel Palanker in the Electrical Engineering department came up with the idea and did the early work at Stanford, which was licensed by Pixium. They started the clinical trial, which we acquired and finished, and now we're bringing it to market.
Right, I mean, I'm so jazzed that you guys are bringing it to market and getting this across the finish line. So that's what you're doing in the retina for people who have lost vision. Tell me what you're doing with reading from neurons. But just before we get there: the challenge with brain-computer interfaces has always been, well, several things. One of them is that mother nature has wrapped the brain in this armored bunker plating, so it's hard to get to. But then when you get in there, you've got eighty-six billion neurons and you have to figure out who's saying what. And the traditional way to do this is to dunk an electrode in there, which really damages the tissue. So obviously people have been trying to make electrodes thinner and thinner. But you've got an idea that you're working on which is amazing. Tell us about that.
Yeah. So there's no free space in the brain. The brain is wet, it's squished together. Evolution has really compressed as much as it can into as small a space and energy budget as it possibly can, and so it has not really left holes that we can take advantage of in there. Evolution is extremely good at its job. And there are limits to how small you can make an electrode. You can't make a one-nanometer wire, because as any electrical wire gets smaller, its resistance increases. There are just real limits to how small you can make a recording electrode before you lose the ability to distinguish the signal that you care about, the biological activity, from the background noise. And then on the stimulation side, this is actually worse, because there are real limits to how small you can make a stimulating electrode before you start splitting water in the brain and producing hydrogen and oxygen, and you really don't want to be doing this. And so we think about: what does an ideal neural interface look like? I think one of the high-level intuitions that I started with was, yeah, the brain is encased in this dark vault of a skull, but it has to communicate with the world.
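The wire-resistance point can be sketched with a quick back-of-the-envelope calculation. This is illustrative only: the resistivity is a textbook value for platinum, the dimensions are made up for the example, and real electrode impedance is dominated by the electrode-tissue interface rather than the wire itself; the point is just how steeply resistance climbs as a conductor shrinks.

```python
# Illustrative sketch: resistance of a cylindrical wire, R = rho * L / A.
# Values are textbook constants and made-up dimensions, not specs of any
# real electrode.
import math

RHO_PLATINUM = 1.06e-7   # ohm*m, bulk resistivity of platinum
LENGTH_M = 0.01          # assume 1 cm of wire reaching into tissue

def wire_resistance_ohms(radius_m: float) -> float:
    """DC resistance of a solid cylindrical wire of the given radius."""
    area = math.pi * radius_m ** 2
    return RHO_PLATINUM * LENGTH_M / area

# Shrinking the radius from 25 microns down toward a nanometer takes the
# resistance from well under an ohm to hundreds of megaohms.
for radius_m in (25e-6, 1e-6, 1e-7, 1e-9):
    print(f"radius {radius_m:.0e} m -> {wire_resistance_ohms(radius_m):,.1f} ohms")
```

At a one-nanometer radius the resistance is on the order of hundreds of megaohms, which is one way to see why, as Max says, you can't just keep making wires thinner.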
The brain is not telepathically connected to the outside world. I mean, it's also important to realize that you're not seeing the world out there, right? You're only ever seeing and perceiving information that has arrived at the brain.
And so how does it get there?
All of the information that flows in or out of the brain flows through a relatively small number of cables. There are twelve cranial nerves and thirty-one spinal nerves. The optic nerve is cranial nerve two. The vestibulocochlear nerve, which carries hearing and balance, is cranial nerve eight. And thinking about how you've got this relatively small number of wires, we can think about attaching to those, like we do for getting vision into the brain through the remnants of nerve two. But this also got an idea going in the back of my mind: can we grow a thirteenth cranial nerve? That really feels like the ideal neural interface. Biology has given us these examples of fiber bundles that get information in and out of the brain for really any purpose that the brain needs. Is it possible to add a thirteenth biological wire that, instead of having an eye at the other end or a bunch of muscles at the other end, has a USB-C port, basically? And so the high-level intuition here is: what can we add to the brain? How does the brain do this? How does nature do this on its own? And the answer is, it uses neurons. And so this prompts a question: what happens if we add more neurons to the brain? And the answer is, they grow in and wire up and give you these bidirectional chemical synapses. And so this has led to an approach that we call biohybrid neural interfaces, and it really feels like it has the scalability that many conventional methods don't. Now, there are alternatives to electrodes. So tell us what a biohybrid interface is. So a biohybrid neural interface is when we take heavily engineered stem-cell-derived neurons in a dish, we load those into the electronic device, and then what you place into the brain is just the engrafted cells. So we're not placing any metal; no electronic or mechanical component goes into the brain.
Instead, you're growing.
We basically graft these cells onto the brain at an appropriate starting point, and then those grow out and form new connections, just as kind of more of the brain.
And this is because mother nature is really good at growing cells into groups of other cells and so on. So you're taking advantage of that.
Yeah, we're letting biology do as much of the heavy lifting as we can. Now, this creates other problems, and I think smart people can say, well, now you have a really complicated cell engineering problem to solve. But if you can solve that, in the meaningful way that you have to, then yeah, you can get biology to do a lot of work for you.
Yeah. So these cells that you're putting on there and growing in, you have heavily engineered these cells. So tell us about that. Yeah.
So there's a couple of things you need to do. The first is that they need to be matched to the immune system. Now, if you don't do this, you can still make a cell therapy for a patient, but you need to do it on an individualized basis, per patient. This is very expensive, and it can take a very long time to make the other edits that we need. And so the first set of edits that we do is to make the neurons hypoimmunogenic, meaning that they don't bother the immune system when you put them in a patient.
So how do you do that?
This is a much longer topic. There are these things called major histocompatibility complexes, and we need to suppress some protein expression and force some other protein expression, to basically tell the immune system not to eat you, and also that you're fine.
And how far along are you on that pathway? Is that solved?
I mean, I wouldn't say that that's a solved problem. I would say that, as a field, there are several standalone companies whose IP is hypoimmunogenic stem cells, and so we are, I'd say, pretty close to the state of the art in the field, but it's not perfect.
Now.
In the brain, the immune system tends to leave you alone more than in many other areas. For example, a lot of the work that's been done in gene therapy so far has been done in the eye, because the immune system tends not to overreact in the eye; when it does and a subject goes blind, that historically is a bad thing. And so there are some areas of anatomy where you tend to get more immune reactions and some where this happens less. The brain is one of the areas where, because around the time of the surgery you're treating the patient with systemic immunosuppressants anyway, once the blood-brain barrier has healed, the cells being approximately hypoimmunogenic is probably fine.
Okay. So you do that to these cells, you engineer them that way, and then you stick them on so that they grow in. But of course you're keeping the cell bodies outside, and then what are you doing with those?
Yeah. So the next edit that we make is we add a protein called a light-gated ion channel, also known as an opsin, to these cells, which allows us to fire them using light.
And this is pretty important.
So the device that the cell is embedded in has two components around each cell. It has a recording electrode, which allows us to detect the activity of the cell, and it has a tiny little micro-LED, kind of like you'd have in your phone screen, next to the cell. And so when we want to fire a neuron, we turn on the LED, and that depolarizes the cell and sends a pulse into the brain. And when that neuron receives input from the brain, because it's grown out both inputs and outputs, we can detect that with the electrode. And so being able to optically stimulate using light and electrically record using the electrode allows us to minimize crosstalk between these, so that we can do them both simultaneously.
And they're sandwiched in between. So the cell body is sandwiched in between the little light and the little recording electrode. And so you can say, for this guy, I want to turn him on now, and I want to record what he's doing through time.
Yeah, it's not quite exactly one to one, but it's pretty close.
Great, And how many neurons can you grow in there at once?
Well, so there's the number of electrodes in the device, or number of channels in the device, and then there's the number of cells, and then there's the number of synapses that you get in the brain, and these are slightly different things. So the chips that we're working with right now have four thousand electrodes per fin, and a fin is one of these little sandwiches. And it's actually really eight thousand per fin, because it's four thousand micro-LEDs and four thousand electrodes, but we call this a four-thousand-channel fin. And we're working on stacks of these to scale this up to hundreds of thousands of channels in a couple of millimeters by a couple of millimeters. But I mean, you could load this with half a milliliter of cells, which is easily millions of cells, and those can form many billions of synapses with the brain.
Do each of these cells form about, let's say, ten thousand synapses, or...
I mean, it's tough to count them. As an order of magnitude, people think it's maybe about a thousand synapses per cell, but these are tough to actually count.
Right. So if you had a million neurons in there, you'd get a billion synapses in the brain.
Yeah, back of the envelope.
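That back-of-the-envelope arithmetic can be written out explicitly. All the figures below are the rough estimates quoted in this conversation (channels per fin, an order-of-magnitude synapse count per cell), not measured values:

```python
# Back-of-the-envelope figures quoted in the interview, not measured values.
electrodes_per_fin = 4000      # recording electrodes in one fin
micro_leds_per_fin = 4000      # one micro-LED per electrode site
sites_per_fin = electrodes_per_fin + micro_leds_per_fin  # physical sites per fin

cells = 1_000_000              # "easily millions" of cells in half a milliliter
synapses_per_cell = 1_000      # order-of-magnitude estimate per cell
total_synapses = cells * synapses_per_cell

print(sites_per_fin)       # 8000 sites, marketed as a "4000-channel" fin
print(total_synapses)      # 1000000000, i.e. about a billion synapses
```

A million engrafted neurons at roughly a thousand synapses each is how the "billion synapses" figure in the exchange above falls out.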
Back of the envelope. And then what you'd be able to do is stimulate exactly as you want to: okay, fire number three hundred and seventy-nine now, fire number one hundred and fifteen, and so on, and then record the activity going on there. So you can read and write.
Yeah, so you can read and write. And it's a fairly complex...
So you've got this transform between the activities in the cells in your device and what's going on in the brain. We don't think of it in terms of single-unit activity. In the beginning of the field, we were really thinking in terms of single neurons, and in the very beginning, the first experiments that were done in animals didn't have a model of brain activity really at all. What they did is they placed electrodes in the brain and then said, when this neuron fires, the cursor should go up, and when this neuron fires, the cursor should go down, and you can just learn to separate these things, because the brain is very plastic under feedback. Now, that works for a very small number of channels, and of course the subject isn't learning to modulate those neurons specifically; they're actually modulating big groups of neurons around where the electrode is. And so as you go to higher-level control, that doesn't really work anymore. But the brain has these abstract informational representations of things like intended motor activity or face recognition or other objects that it thinks about, and we're still at the early stages of learning to use these devices; it's a really different type of BCI. But what we think we're seeing is that these cells would really join these cortical representations and then just become part of the brain, and you can do neuroscience on them like you would any other part of the brain, except that the cell body is right there in your device and really easy to observe.
What are the biggest challenges that you're facing in terms of bridging these digital systems and these biological systems.
Many of the hard problems here are not the really obvious, sexy ones. In fact, I realized the other week that the very first piece of writing that I put on the Internet was this kind of sophomoric, literally written as a sophomore, rant about how, back in circa two thousand and eight, everyone felt like the hard problems here were understanding the neural code, the real science to study these deep neuroscience questions, and it was kind of left for the technicians to figure out how to get the electrodes into the brain, whereas actually the problem is how do you get these electrodes into the brain. And certainly the neuroscience has advanced a lot, and the neuroscience is very cool, but a lot of the problems here are things like packaging, which is a fancy term for the fact that when you place an electronic device in the body, it's going to be attacked, it's going to be degraded, it gets encapsulated in scar tissue so that neurons are pulled away from it. There are these very harsh chemical environments that try to attack and destroy your device. It's important to realize that there are no truly passive surfaces anywhere in the body; even bone is constantly getting remodeled and turned over and regenerated, and so when you place one of these non-regenerating devices in the body, it's going to be attacked. And so now we have much better materials than we did ten years ago, specifically things like silicon carbide, which is a really annoying material to work with, but a very good encapsulant that does not degrade in the body in the same way as these older polymer encapsulants do. If you look at the history of Prima, part of it is, how did Pixium, the company we
bought this from, get here?
They actually had an approved device in, I want to say, twenty fourteen or twenty fifteen, called Iris, which was a different retinal prosthesis, and it worked very differently. It had a conventional electronics package, it required a battery, but it got on the market, and then it was withdrawn, and it was withdrawn because of packaging failures. Basically, the device didn't have an acceptable lifespan in human patients once on the market. They were using the materials that were available at the time, which was before we figured out, as a field, how to work with things like silicon carbide. And that is an example of a problem whose solution enabled Prima to work. So Prima is a full silicon carbide encapsulation, and it should last; I mean, there's now data out to six years in some patients, and it should outlast these patients.
It should last decades.
Amazing.
And so that's an example of like a big area of progress in the last few years that people wouldn't really think of.
And so, what are some surprising findings or unexpected obstacles that you've run into while doing, let's say, the biohybrid electrodes?
I mean, biology, when it works, can do a lot of things that we, humanity, are just not at the level of being able to do yet. But also in neural engineering, whether that means systems neuroscience or BCI, you'll start in mice, and then maybe you'll work in an intermediate species like pigs, and then eventually end up in monkeys, and then end up in humans. And an electrode, or even something like optogenetics, works basically the same in mice as it does in monkeys as it does in humans. But when you're engrafting neurons into the brain, I mean, there's a big difference between mouse neurons and human neurons, and macaque neurons are a different thing entirely. And so you end up having to redo a bunch of this work in each species that you work in, and every time we switch species, there's a lot to relearn. And fifteen years ago now, probably, something like that, there was a major discovery of the ability to turn any cell into a stem cell again. This was a discovery called induced pluripotency, which won the Nobel Prize a while ago. And that works really well in rodents, and it works really well in human cells. But turning a macaque skin cell into an iPSC, there's just a bunch of little tricks that don't work as well. And so the biology is pretty deep in all of these areas.
It's surprising that those are different, you know, just given the shared evolutionary history.
But yes, yeah, I mean there's a lot that's conserved, but there's also a lot of little things that are slightly different.
Yeah, quite right. So big congratulations on where Prima is right now. That's so exciting. On the biohybrid electrodes, where you have neurons growing into the brain and you're then able to read and write that way, when do you think that's going to be ready in humans? What's your prediction?
I think that the first human engraftment will happen around twenty thirty, so, like, probably five years.
And what is the first thing you're going to tackle once it gets into humans?
Well, I mean, it's a communication device, and so motor decoding, speech decoding, all of that should be possible. And so in the near term, the figure of merit for any brain-computer interface for communication is a bandwidth measured in bits per second. The record for keyboard-and-mouse, kind of low-dimensional motor decoding, is about seven bits per second, which I think is Neuralink's current participants. There's a group at UC Davis, led by Nick Card and Sergey Stavisky, who recently showed speech decoding from precentral cortex that gets about twenty to twenty-five bits per second. Human language is routinely rated at forty bits per second, so you'd think that you can asymptote towards that, and so I think in the near term what we're looking for is a forty-bit-per-second communication prosthesis. Longer term, this is where neural engineering and BCI diverge a little bit, and there's a lot of interest internally in looking at how this is applicable in stroke, or other areas where you've lost cells and where conventional BCI techniques really won't work in the same way, and potentially even organic neurodegenerative diseases. But those are very hard, and I don't want to overpromise on the timeline there.
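The bit-rate figures in that answer can be lined up side by side. These are the approximate numbers quoted in the conversation (and the speech-decoding value is the midpoint of the "twenty to twenty-five" range), not independent benchmarks:

```python
# Approximate communication rates quoted in the interview, in bits per second.
rates_bps = {
    "keyboard/mouse-style motor BCI (quoted record)": 7.0,
    "speech-decoding BCI (UC Davis group)": 22.5,  # midpoint of "20 to 25"
    "natural human language": 40.0,
}

# Express each rate as a fraction of the natural-language target that a
# communication prosthesis would want to asymptote towards.
target = rates_bps["natural human language"]
for name, bps in rates_bps.items():
    print(f"{name}: {bps:g} b/s ({bps / target:.0%} of natural speech)")
```

On these numbers, today's best motor decoding sits at under a fifth of the rate of natural speech, while the quoted speech-decoding results are already more than halfway there.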
Now, if we were just going to blue-sky here: part of the mythology about BCIs is that at some point everyone will have one of these for, you know, communicating faster with their cell phone or their computer or whatever. To what degree do you think that's hype? Versus, let's imagine one hundred years from now, where do you realistically think it's going to be in terms of the amount of market it has?
Yeah, I mean, one hundred years from now. I have this event horizon somewhere between twenty thirty and twenty thirty five that I just can't see beyond. For my entire life, I've always kind of been able to see the future, and we are clearly in the takeoff era now. I don't think I'm saying anything that contrarian, at least in Silicon Valley, but one hundred years from now is almost impossible for me to imagine.
Now.
With that said, I don't think that healthy forty year olds are going to be getting holes drilled in their skulls anytime soon. My view is that it'll be a long time before these things are really augmentative, much less elective procedures. But everybody eventually becomes a patient. There's some point as you get older. For example, the main indication for Prima is age related macular degeneration, which is very common, and if someone lives into their late seventies or eighties, is actually pretty prevalent. And so for many of these things, eventually there will come a time when it makes sense. I mean, we consider retinal prostheses and cochlear prostheses also BCIs. When I look at, say, twenty years from now, the things that are very much research, this is not a thing that's happening in the next five years. But the two hemispheres of your brain are connected by about one hundred million fibers that project across the midline to join them into a single thing. If you can get a neural interface with that bandwidth, which is probably only tens of megabits, then this takes you into really interesting territory about really being able to redraw the borders around brains, and gets at this thing called the binding problem.
And that feels less than twenty years away for me.
This feels not like the next five years, but also not the distant future, within people's lifespans today.
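The "tens of megabits" estimate for the interhemispheric link follows from simple arithmetic on the fiber count quoted above. A rough sketch; the usable information per fiber is an assumed ballpark, not a measured value:

```python
# Reproducing the "tens of megabits" estimate for the corpus callosum:
# ~100 million fibers cross the midline (figure quoted above). The usable
# information per fiber is an assumed ballpark, not a measured value.

fibers = 100_000_000

for bits_per_fiber in (0.1, 0.5, 1.0):
    total_bps = fibers * bits_per_fiber
    print(f"{bits_per_fiber} bit/s per fiber -> {total_bps / 1e6:.0f} Mbit/s")
```

Even at a tenth of a bit per second per fiber, the link comes out at ten megabits per second, consistent with the estimate in the conversation.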
So let's double click on that. Tell us about the binding problem and how you think this addresses it.
Well, I mean, I don't have a solution for the binding problem. The binding problem is: if the brain is made up of a lot of different neurons and a lot of different areas kind of connected together, where does this unified perception come from? You see the world, you can think about it, you hear things. All of this is fit together into a coherent whole for you.
When the bluebird flies past you, the blue doesn't come off of the bird, and the chirping doesn't seem like it's coming from somewhere else. It seems like a unified object. Yeah, exactly, even though blue is processed apparently in one part of your brain, and the motion in another part, and the chirping in a different part. Okay.
And so there's some sense in which almost all communication is about creating correlations between brains. We're having a conversation right now. There are concept spaces in my brain being activated that I developed from education, learning English, learning math, learning science, doing these things, and I can serialize these neural activations into vibrations over the air, send them over to you, where they're received through your ears and then activate these correlations in your brain that allow us to share these concepts. But our brains don't become one thing. And so there's some point between the types of correlations that you get between the hemispheres of one brain and the types of correlations that we get between brains that are in dialogue. Where is that crossing point? We don't know today, but I think that biohybrid devices have the potential to get close to there, and that takes us to really different regimes than kind of conventional BCI technology.
Let me just make sure I understand what you said. So the idea is, if you're reading and writing from my brain and from your brain, we can get closer to being a single brain.
Well, yeah, the question is, where does that happen? What makes it happen? I mean, this is done less commonly now, though it was never really done that commonly, but people used to cut the connection between the two hemispheres of the brain to treat epilepsy. You could prevent a seizure from spreading from one to the other. And those split brain patients were really interesting to study.
Because you could.
You could ask kind of the right hand a question, which would go to the left hemisphere, and then you could ask the other hand, which was controlled by the other hemisphere, to kind of answer, and you get the sense that there are two agents going on.
In one head.
Yeah, one in each hemisphere.
And so if you take that in the opposite direction, what do you get? That, I think, is really interesting.
You're saying, put four hemispheres together, and yeah, what do you get? Now who would do this? Who would volunteer? Two spouses, for example?
Yeah, exactly.
So I think in the beginning this is going to be something like, you've got a long married couple and one has a terminal disease. Can you make the loss of that brain like having a stroke you recover from, rather than lights out?
Oh wow. Let's double click on that story. What would the narrative be there?
Well, if you can build these super organisms and get kind of an equilibration of representation over some extended period of time... I mean, people already store memories in their spouse's brains that they can access and recall later, right? This is about creating correlations between brains, and I suspect that there's some nonlinearity in there where you get something really different, but of course we don't know exactly where that is yet. I mean, this is a tricky field, because there's a fine line here. Right now we're in the process of preparing twelve hundred pages of regulatory documentation that is very nuanced in exactly how you do these tests to verify things that have passed clinical trials in almost fifty patients in six countries. And then you play some of these technologies out not even that far, five, ten years, and you sound like a lunatic. But that's part of why this is such an exciting field.
Right. So I know the event horizon for both of us is, you know, not much more than a decade out. But what do you see as the societal benefits that could happen from this, at whatever time scale? For example, connecting brains. Have you thought about what that would turn into, not just for spouses, but for society?
I mean, at the end of that is this idea of substrate independence. When I see a person, there are two parts: there's the robot, and there's an agent. And I'm going to be pretty disappointed if I get murdered by my pancreas, which is basically a support structure for keeping the agent going. And I think this takes us to: okay, if we're serious about exploring the universe, I think we have to adapt ourselves to the environment, rather than bringing little pressurized bottles of Earth with us everywhere we go just because our great great grandparents grew up on a planet that happened to have those things. And so I think this is very profound technology.
So substrate independence, just for the audience, means getting off of this wet biological stuff and onto something more robust, like a silicon chip or something. In other words, getting your mind into something that can survive space travel.
Which could be other biological brains, or it could be an engineered system. Brains are composed of ordinary matter assembled by the rules of chemistry.
There's no magic in there.
They're very complicated, and we obviously don't have complete explanations for how they work. But they're ultimately physical systems, and so there's something that they're doing that's producing this experience that ultimately must be explainable.
And so what you're doing with the biohybrid electrodes in the brain, how does this lead to substrate independence?
Well, the idea is that if you can really, in some profound sense, lose track of where one brain ends and another begins, then where does this take you? I have no idea what that experience will feel like, but I'm pretty confident that that device is going to get made in the next decade.
And this is research.
There's nothing to sell here yet, but it's the type of frontier that is enabled by the types of devices that are getting made now, and I think there's enough near term commercial revenue from things like the visual prosthesis to fund this.
So if you're able to read from the brain, then you can take that data and put it into a different substrate.
To do that requires new physics that we don't understand today. We'd have to really understand what the brain is doing that produces this ordered experience that we have. But I strongly suspect that intelligence and consciousness are separate, or independent. It's possible to have a pure experience in the absence of adaptive behavior, and it's possible to have very apparent adaptive behavior in the absence of experience, so these things are separate. Now, in order to have true substrate independence, where you could build a silicon based system that is as good as our brains, this requires a physics and neuroscience breakthrough, one that will produce several Nobel prizes, that we don't have yet.
But I do think that that is not one hundred years away.
I think that there are really compelling threads of research being pulled on that have the potential to produce those equations. But even if we don't get those equations, if you can build brain to brain connections, then you don't need them, because you know that brains are good enough, and if you can connect them together, then that is another approach, with some drawbacks and some big head starts.
Do you think people would volunteer to connect their brain to someone else's? I'm not sure. I'm not sure I would enjoy connecting with everyone.
I don't know.
I mean, I don't think that this is for everybody. Also, this is not a thing that exists today. I think that this is a really interesting thing on the horizon that is close enough to notice: oh, if that's possible, what does that mean? But I think it's tough to really anticipate it too much right now.
Now, you once wrote that one of the main goals in neuroscience is to understand the physics of consciousness so that we can engineer experience. So tell us what you mean by that.
Yeah. So to be clear, I don't think that's the only goal of neuroscience. There are lots of people working in neuroscience who are thinking about other stuff and have never asked themselves those questions. But arguably one of the end goals of technology is recursion, in the sense that we gain the ability to observe and manipulate our own existence. Earth is small and intensely contested, and space is large, and the speed of light is low, and you never run out of real estate, like in the Matrix. So getting to a point where we really have control over the nature of our experience feels like kind of a logical endpoint of a lot of what we've seen since the beginning of the technological revolution.
So how will what you're doing with the biohybrid electrodes get us closer to understanding something about the physics of consciousness?
Well, one thing that I think is true about consciousness is that there's a good chance that to really know, you'll have to see it for yourself. One of the problems that has made it so hard to study is not that it's magic, or that there's some metaphysical thing that makes it inherently impossible, but that there are no measurements we can take that will tell us things. If you believe that intelligence and adaptive behavior are separate from phenomenal experience, then when you run a behavioral experiment in an animal, you can always see some explanation for what's happening without resorting to saying anything about consciousness. And when we do experiments in animals, we don't talk about what they see or perceive; we say they can use the information, or they can learn the information. And so when you think about what experiments you could really run that would let you know you've learned something, this often looks like: can we add a new sensory mode? It's pretty tough to imagine a sense that you don't have, because again, evolution is very good at its job and has really filled the available space. But one example of a sense that you don't have is a true vector field sense: the ability to see a field, like a three D field out in the environment. We don't have this because we don't have the sense organs to do it.
We don't make measurements at a distance. We only get measurements that arrive at you.
If we had some way to get this signal, say from remote sensors or other things, then you could get the information. So what would a true vector field sense feel like to experience? At the point where we can implement that and make it available to you, the way you'd know is: this was new information, and I'm experiencing it directly, and I can use it intuitively, and there's no other way I could have experienced this.
I think that is the type of proof of concept for knowing that you've gotten some of that model. And I don't think that you can do this with conventional electrodes. I think that you need something like a biohybrid neural interface to get to that level.
Why?
When you electrically stimulate vision into the brain, let's say you put an electrode in primary visual cortex, if you inject charge through it, you can absolutely get a flash of light somewhere in the visual field. And if you do this in an animal, you can get it to look to where you put the flash of light, and so you can say, okay, I got some visual signal into the brain. The problem is that these flashes of light are known as phosphenes. And what a phosphene really is, is what you get when you stimulate lots of neurons simultaneously and average them together. If you have a neuron that represents red in some part of the visual field, next to something that represents a spatial frequency, next to something that represents a motion, like an orientation of motion, and you drive all of these simultaneously, you kind of average them, and basically the only information that's remaining is a thing called retinotopy, which is where in the visual field it was. And if you do that, then you're limited: you throw away almost all of the information that you could have conveyed. Also, this very continuous stimulation tends to produce the most intense immune responses to electrodes that you get, and so these writing electrodes tend to become very encapsulated.
And so you want something that gives you access to hundreds of thousands or millions of neurons at single cell informational resolution, in ways that the brain will really adapt to informationally. Electrodes don't give you that type of specific stimulation, certainly not at those counts; nobody's ever done something like one hundred thousand electrodes for stimulation. And there's the other technique, optogenetics, where you do this with an optical stimulator.
This requires genetically modifying the host brain.
You have to use a gene therapy to deliver this new protein to the cells of the brain. This is not a thing that is really done in humans in cortex, and there are reasons it's going to be really difficult. And so, from where I sit, I don't see another technology that is really capable of getting hundreds of thousands or millions of neurons at single cell resolution, in a way that is long term stable, in a way that allows those neurons to learn the signal that you're trying to give them.
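The averaging that collapses rich feature codes into bare phosphenes can be sketched with a toy model. This is purely illustrative; the feature labels, counts, and location values are invented, not drawn from any real recording:

```python
import random

random.seed(0)

# Toy model of the phosphene problem described above (tuning values are
# invented for illustration). Each neuron in a small patch of visual
# cortex prefers one feature, but all share the same retinotopic location.
FEATURES = ["red", "green", "horizontal", "vertical", "low-freq", "high-freq"]
patch = [{"location": (3, 7), "feature": random.choice(FEATURES)}
         for _ in range(1000)]

# Cell-by-cell stimulation could in principle preserve the feature code:
one_cell = patch[0]
print("single cell conveys:", one_cell["location"], one_cell["feature"])

# Bulk electrical stimulation drives the whole patch at once, so the
# conflicting feature preferences average out and only the shared
# location survives -- a bare flash of light, i.e. a phosphene.
features_driven = {n["feature"] for n in patch}
locations_driven = {n["location"] for n in patch}
print("bulk stimulation mixes", len(features_driven), "features but only",
      len(locations_driven), "location")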
What philosophical questions keep you up at night?
So there's a question that, whenever I go to things where I see my friends, splits the table evenly every time, which is: is a destructively scanned upload you? This is part of a family of thought experiments that my friends and I call the transporter problems. And in some sense they're very simple: if you take a scan of a brain, but at the end the brain is no more, and then you can use this to build a perfectly biophysically accurate atomic simulation of that person, does this make you feel better about dying of cancer? And for me, the answer to that is no. And I think many people, actually faced with that situation, would conclude no.
This is no as in you feel you will have died if you got destroyed, yet there was a replica of you that got booted up a second later.
Yeah, exactly.
This is like I'll be survived by my friends, which is great, but doesn't necessarily make me feel a lot better about my specific situation.
Right. In other words, the replica that gets booted up a second later thinks, wow, I'm Max. I was just over there and now I'm over here. But the question is, do you get any benefit from that?
Exactly?
And from its perspective, it's probably right. I think people respond to this by saying, well, every night you lose consciousness and wake up the next morning, so you've broken some continuity there, which I think is also totally fair. That's true, but it still doesn't really make me feel better. And so the two camps here are: my agency living on in the world, which can be done through some model, some replication of me, which makes me feel like my influence will persist; versus, I will accept drift in the personality and the agency as long as I get continuity. The brain to brain connection is the latter: you'll get significant personality drift, because you're kind of averaging two people together to some degree, but you get continuity. Or there's living on in agency without continuity.
Is that good?
And what's interesting is, people's brains seem to make a choice on this early in life, and they're unable to see the other side. They're very convinced that one of these two things is nonsensical. My read is that there's a choice of metaphysics being made here, from which you reason. It's a choice that your brain has made that allows you to see something, and from there you start to reason. And so you can't really talk your way through this. But I think these are the two metaphysical tribes here, and my guess is that people get converted to continuity when it becomes a real thing. But that's a philosophical question for which I don't know that there's a right answer, and it keeps the debate going.
And do you feel any differently about the problem if you were disintegrated into your atoms, and then those atoms were beamed over somewhere and reconstructed? It's still you: you're disintegrated and then rebuilt. Does that make a difference for you?
Yeah?
I mean, this is the second transporter problem: if you send the atoms, does this make it better? So, there's a show that I love that recently came to Netflix, and was really hard to watch for a while, called Pantheon.
Highly highly recommended.
I think Pantheon is probably the best depiction of how the next fifteen years might go that I've ever seen in fiction.
It's adult animation.
It's based on a series of short stories by Ken Liu, who is probably best known as the English language translator of the Three Body Problem series. And that show is amazing, but also terrible metaphysics. It's a destructive upload, but the characters also realize this. There's graffiti on a building at one point that says something like, die to live forever, which I don't find that compelling a value proposition. But it's an interesting depiction of a world where you get to the other side of that choice of metaphysics, to the degree that people aren't worrying about it anymore, and from the backwards looking perspective it works out fine. So that's certainly one potential view. The other is, if you really believe what matters is continuity, then what you have to do is get a seed brain on both sides of the transporter, briefly establish a brain to brain link to get the continuity through it, and then that's enough. As long as there's a brief moment of continuity, that gets you through it philosophically.
Oh interesting, So this is where you might do your four hemisphere trick.
Exactly. Well, in the case where it's really an atom for atom reconstruction and the representations are already shared, you wouldn't need any time. If you did this with two people, for it to really make sense, there'd need to be some time to deal with the representational drift between them. It's funny, because these things are interesting and are genuinely moving from the realm of science fiction, where some of them still are today, to the realm of engineering. But to be clear, at work we don't really spend a lot of time thinking about the future of humanity. It is mostly, as I often say, debugging Linux drivers and writing regulatory documentation.
So what drives you in your work?
I mean, look, if you really believe that these things are possible within our lifetimes... I mean, AI is also very exciting. There are other exciting things happening in the world.
But when you really believe that these things could actually be possible, I think it is tough to think about a lot else.
That was Max Hodak, founder and CEO of Science Corporation. He's working on the challenge of how to read and write from the brain, and really there are only a handful of people who are doing that. With the smarts and entrepreneurial bravery of Max, he and his team are at the cutting edge of integrating with the brain, whether that's by turning pixels into lasers and stimulating a tiny implant in the back of the eye, or growing neurons into the brain that ingratiate themselves into the network in a way that lets you spy on the activity there. You can check out more about his company in the show notes at Eagleman dot com slash podcast, and Max's website is science dot xyz. So let's wrap up. At its core, the idea of growing cells into the brain as a brain computer interface challenges the common intuition of a division between biology and machinery. And more generally, however we make interfaces to the brain, these open the possibility that someday we'll be able not only to interpret what it is to be a human, but also to enhance that, and that in the future, even things like our thoughts, which seem unassailably private and ineffable, might traverse digital pathways the way any data flows through a network. What does it mean when a thought leaves the confines of the skull? The story of BCIs is just beginning, and it's not just a story about the technology. It's the story of a whole new channel of communication. It's about translating the language of neurons into the language of computers, or perhaps eventually into the brains of other people. It's about giving voice to the mute, giving movement to the paralyzed, and giving wings to our imagination. The work by Max and others in the BCI space invites us to consider whether our brains have to remain isolated entities, or whether they can interface with a broader universe.
This work reminds us that the brain doesn't have to be merely an imprisoned container for thought, but can instead be a living, dynamic interface with the world, one that's going to, soon enough, maybe in our lifetimes, reach far beyond the biological limits to which we have become accustomed. Go to Eagleman dot com slash podcast for more information and to find further reading. Send me an email at podcasts at eagleman dot com with questions or discussion, and check out and subscribe to Inner Cosmos on YouTube for videos of each episode and to leave comments. Until next time, I'm David Eagleman, and this is Inner Cosmos.