Daniel and Kelly talk to Prof. Megan Peters about the inner workings of the mind, and how much we do and don't understand it
On the podcast, we love to ask deep questions and then explore the edge of our knowledge. Today we're going to go a little bit meta and try to understand the workings of our own minds, the us that is asking those questions and yearning to understand the universe, from biology to physics, and yes, sometimes even a little bit of chemistry. Science gives us this incredible tool to consistently build knowledge about the universe, but it's not clear that it can tell us about ourselves. That it can help us understand why we have a first-person subjective experience in the first place, why we have an experience at all, why chocolate has a taste and pain has a negative valence, why there's something it's like to be me. How's that different from what it's like to be a bat? Is it like anything to be a rock or a puddle or a star? Science, like all human endeavors, is limited by the senses that we use to interact with the universe. It can help us understand what we sense and find patterns and build models, but it can't ever really tell us about the hard reality of the physical universe that exists out there beyond our minds and our senses, and that same limitation might apply to the other end of that sensory pipe. Can a scientist use science to probe the nature of the mind while trapped inside their own mind? That might just be the province of philosophy. Today on the podcast, we're getting philosophical and asking what we know and how we can know about how consciousness emerges. Welcome to Daniel and Kelly's Extraordinarily Self-Conscious Universe.
Hello, I'm Kelly Weinersmith. I study parasites and space, and today I'm wondering if we live in a simulation.
Hi, I'm Daniel. I'm a particle physicist, and I don't care if we live in a simulation. I want to eat simulated dark chocolate.
So that's my question. Is it possible that we live in a simulation? We're getting philosophical today, let's go all the way.
Yeah, absolutely, it's possible. But it also seems like a modern reflection on recent concepts about how computers work. You know, one hundred years ago, people thought the whole universe was like pipes and cogs, and these days we think it's all computers. So it's possible, but we also have zero evidence that it's a simulation, and dark chocolate tastes really really good anyway.
That's true, and so does pad see ew. I'm hungry today.
This is why we shouldn't record the podcast just before lunch.
That's right. I'm so glad that I have simulated taste buds and smell receptors so that I can enjoy all of those things very much.
But it is fascinating to think about where your consciousness, where your experience comes from, why it is that you even have one, why you can like or hate pad see ew, whether or not that's something that comes from the meat in your mind, or whether it's just the information circling through those neurons, which could be uploaded to the cloud.
Yeah. I mean, I think you'd have to be a machine to not like Thai food in general, personally, but I could be wrong. But I'm excited. So, you and I, we get a lot of emails from listeners. You can send those emails to questions at Daniel and Kelly dot org. And lately we've been getting a lot of questions from folks who have been asking about consciousness. I think we've been getting like one a week for a while, and we decided it was time to jump into the consciousness question, and you lucked out, and you happen to know the perfect guest. I do.
There's lots of really smart folks here at UC Irvine, and I'm good at googling and finding them and persuading them to come and talk to me on the podcast. I have a lot of fun, and this is a topic that's really close to my heart. I've been thinking about it for a long time, and also thinking about whether it's something you really can ask scientifically. You know, science is wonderful and powerful, and of course we are advocates for science here on the podcast. But just because it's a tool that does let us build knowledge about the universe doesn't mean it's a tool that can answer every question, right? You can't answer questions like, should I get out of bed in the morning? Why do people eat white chocolate anyway? Right, not every question is scientific, and so there are things about the universe that we can probe with science, and there are limits to what science can answer. And I've always wondered how much we can understand about consciousness itself using science. So I was really excited to talk to a cognitive scientist who thinks about this all day in a very serious way.
Yeah, I'm excited that consciousness is sort of at the interface of science and philosophy, which makes it a really fun topic to think about. And so I had a lot of fun today chatting with Megan Peters about this.
So before we dive into this wonderful conversation, we were curious what folks out there thought about their own consciousness, what ideas people have for why they even have an experience, how these few pounds of goo in your brain somehow generate all of this joy and frustration that we experience. So I asked our cadre of volunteers to chime in on the question: how does consciousness emerge? If you'd like to provide answers for future episodes, please don't be shy to write to questions at Daniel and Kelly dot org. So think about it for a minute. How do you think consciousness emerges? Here's what our listeners had to say.
I think we have to really think about what consciousness might mean before we can figure out where it comes from. But if I had to make a guess, I'd say consciousness is just energy sliding into different forms as it rolls along from our brain.
It's an exponential extension of the emergence of life. I don't think anyone knows. Human consciousness is an intersection of the more mechanistic functions of the human brain and the created soul. I think consciousness needs you to have both theory of mind and the complexity to apply it to yourself.
Very slowly, on a Saturday morning after a night out on the town, with several coffees.
Consciousness is fundamental, and we as humans are somehow tapped into it. Wow, consciousness, that's a biggie. Well, I think it's magic.
As the parent of twins who think I'm pretty much useless, I'll say that somehow it involves interaction with others.
Consciousness emerges when the penguin waddles in. I've always leaned towards the idea that consciousness is an illusion.
I believe consciousness emerges when the brain's electrical flow of plasma interacts with the noise and dangers in the brain.
I think the brain itself is a sensory organ that can sense its own processing.
A pint of strong fresh coffee is a necessary but not sufficient condition.
It's a very complicated dance in the brain. It's a very slow process. I think consciousness takes many, many forms, things that the brain is doing, neurons firing, reaching a critical consistency and an interaction number.
I don't think we can know how consciousness emerges until we know what consciousness is and if we have it all.
Right, We've got coffee, penguins, magic, all kinds of amazing things in there, and a large diversity of answers.
Mmm. And there's a thread here. I've noticed a lot of people suggesting that somehow it arises from the complexity of the brain, neurons reaching a critical number, or, you know, all these interactions. To me, that sort of frames the question but doesn't really answer it, right? Like, why is it that a complex network of cells can have a consciousness emerge, regardless of the complexity? Right, it's not obvious to me that complexity is enough. And that's really, I think, the focus of the question for today's episode.
I would personally love to know the evolutionary reason why we have consciousness. How does it benefit us? And would we expect other animals to have it as well? Yeah, and anyway, there were so many exciting things we talked about with Megan today.
Yeah, there's lots of really fun questions there, like, were there other species of humans which actually were more intelligent but didn't survive because our ancestors were more warlike or more cannibalistic? Right? Or if you ran the Earth experiment a thousand times, how often would you even get intelligence, or more or less intelligence? All right, is this a fluke, or is it a common outcome? Man, I'd love to know the answer to those questions.
And you're also getting into some of the complexity there, because you were calling it intelligence, and where does intelligence end and consciousness begin? And what is the difference in those things? And it gets complicated pretty fast.
It does. And fortunately we have an expert to help us dig through this. So let's jump right into our conversation with Professor Megan Peters from the Cognitive Science department here at UC Irvine. All right, so it's my pleasure to have on the podcast Professor Megan Peters. She's an associate professor here at UC Irvine in the Department of Cognitive Science. She has a PhD from UCLA in cognitive science, and her research aims to reveal how the brain represents and uses uncertainty. She uses neuroimaging, computational modeling, machine learning, and neural stimulation techniques to study all of this. And most importantly, she agreed to come on the podcast and talk to us about what consciousness is and answer all of our questions and fend off Kelly's poop references.
Ah, good luck, Thank you both for having me. It's really a pleasure to be here. As an aside, my PhD is in psychology with a focus on cognitive and computational neuroscience. If that matters, So you can cut this part out if you want to redo that intro.
No, we'll find out if that matters, we'll see. Yes, absolutely. Great, so let's dig right in. I want to know, and this is maybe the hardest question we're going to ask you all day, how do we define consciousness? We're going to be talking about it, we're going to be arguing about it, we're going to be saying, do rocks have it? Do dogs have it? But it's kind of slippery if we don't even know what it is we're talking about. So what is consciousness? How do we define this thing that we all sort of know intuitively but have a hard time describing?
Super important question, and it leads me into one of the first kind of technical things that we'll talk about today, which is the distinction between: are the lights on, but is anyone home? Essentially. So we talk in the fields of psychology and the philosophy of mind about the distinction between what's called access consciousness and phenomenal consciousness. These are terms that the philosopher Ned Block introduced a while back, and the idea is that access consciousness is about something like the global availability of information in a complex system to allow that system to behave usefully in its environment, to survive, to seek goals, to seek rewards, to get food, to not be eaten by something. And the distinction then is between, you know, access consciousness, which is information going all over in the brain, or in the mind if there isn't a brain present. But the difference between that and phenomenal consciousness is that phenomenal consciousness has to do with the phenomenology of the experience that the observer is having. So I'm going to break that down. It really just comes down to this: something that it is like to be you, the qualitative experiences that you have of the world. The fact that pain hurts. It isn't just a signal like your Tesla would send to itself, like damage in my right front tire or something like that. It's more like it has qualitative character to it. There's something that it's like to be in pain. If I whack you in the leg, it's not just a signal that I've damaged the tissue in your leg. There's more to it than that.
So are you saying that a Tesla has access consciousness because it notices damage in its tire, but it doesn't feel pain, doesn't have phenomenological consciousness? Or tell me if I missed it.
I wouldn't even go that far to say that a Tesla has any sort of access consciousness whatsoever. I think that that's a higher bar there. But I think that the distinction is important.
Right.
We can imagine a system that has all of the hallmarks of access consciousness, like a fancy future Tesla. Okay, you know, that has all of this global availability of information. It enables flexible adaptive behavior. It can change based on its context. It's not just going according to its programming, you know, that kind of thing. And yet there's nothing that it's like to be that fancy future Tesla. It's just a zombie. It's just engaging in behaviors that are useful for that organism, but there's no one home. So the someone-being-home aspect is also really important, because we can imagine, in, like, you know, not a future robot scenario, but just a medical situation where you've got a person who's in a coma or they're in a persistent vegetative state, and they might wake up seemingly and have sleep-wake cycles and maybe respond to external stimuli. But the critical factor in deciding whether, you know, someone's home, whether to keep them on life support, is whether there's anything it's like to be them inside. And the flip side is also really important in the context of medical science, because even if they don't exhibit any of the outward signs or symptoms of being conscious, of having access to that available information, of being able to behave in their environments, someone still might be in there. They might be locked in. And so it's this presence of phenomenal experience, of someone being home, so to speak, that I think is the important thing to remember when we're talking about consciousness in the context of biological systems or artificial ones.
So I'm still trying to wrap my head around the two different kinds of consciousness. Is there a kind of organism or a situation a person can be in where they would have one but not the other to help me sort of differentiate.
Well, for humans, the idea is that, presuming that you don't subscribe to, you know, philosophical zombie-ism, that, like, you have no idea that I'm conscious, right? I exhibit all the hallmarks, but you have no idea. But presuming that you don't want to go down that rabbit hole...
I do, actually, in a minute.
In humans, it seems like they go hand in hand, right? If you have access consciousness, you have phenomenal consciousness in general, as an awake, behaving organism. But there might be cases in specific scientific experiments where you can reveal evidence of access consciousness and phenomenal consciousness coming apart. So there are some very specific psychological experiments that suggest that these are separable entities. One classic example is a task that was developed by a cognitive scientist here in my department, George Sperling. This is the classic Sperling task, where you show someone an array of letters. There's like five letters in a row, and there are rows of letters, and you flash it very fast, and you ask people to kind of give their impression of the overall array, like, did you see it? Did you feel like you got the whole thing? And people say, yeah, I feel like I got the whole array. That's fine. But then when you ask them to report the whole array, they can't. But when you ask them to report a specific row, they can. It feels like there's a distinction then between this feeling that you've got phenomenal experience of the whole array, while your access might be limited, and so Ned Block has famously called this phenomenal consciousness overflowing access.
It seems to me sort of the crucial distinction, right, because access consciousness is something we can understand sort of on a fundamental level, Like we can trace signals into your eye and watch those proteins fold and then up the optic nerve and into your brain, but we can't know whether anybody's there like experiencing, or what that experience of seeing a red photon is, or whether my red is the same as your red and all this sort of dorm room philosophy kind of stuff, and so access consciousness is sort of more accessible scientifically than phenomenological consciousness, which is more philosophical. Is that fair?
I think that's a relatively fair characterization, and that there are a lot of kind of current scientific theories of consciousness that purport to be a theory of consciousness, and most of them, really, when it comes down to brass tacks, end up being about access consciousness, because the phenomenal part is really hard to get at. As you said, I think that there are some approaches that might be promising in this vein. So you could look for neural correlates or patterns of neural activity that are associated with reports of phenomenal experience. So, like, I can create conditions where I show you the same stimulus over and over and over again, and so the early parts of your visual brain are responding kind of similarly. I mean, there's noise and there's variability and so on. But if I flash the same thing at you over and over and over again, I'm going to get a consistent kind of pattern of responding in the back of your head, which is the early visual cortex. But you might have fluctuations in your subjective experience of that stimulus. Sometimes you feel like you see it, sometimes you feel like you can't. So, to the extent that I can hold most of the stuff in the back of your head constant, and I can measure that in relation to flashing something at you over and over and over again, I can then look for how your subjective experience fluctuates from one moment to the next. I feel like I saw that strongly. I feel very confident that I got that right. I feel like that was nothing at all, I didn't really experience anything. Then I can go look for neural correlates that are co-varying with the subjective experience that you're reporting to me. There are a lot of problems with that too, that maybe you don't have perfect access to your own subjective experiences and your reports are spurious, and blah blah blah.
But I think that there are ways that we can go about trying to get at how the brain constructs or supports or represents these subjective experiences that are different from just how your brain processes information about the external environment.
Right. And I do want to get into those experiments. But first I just want to make sure we're totally clear on these definitions. And I think the example that you were a little dismissive of a minute ago is actually helpful, at least to me, to clarify what we're talking about. And that's the example of philosophical zombies. Right? The idea here is, could you take Daniel and replace him with some machine, biological or whatever, that replicates all of my actions, seems the way I do to be conscious, but there's nobody home, right? There's nobody experiencing the red and eating the pizza and whatever, but it seems like it reacts exactly the same way. So philosophical zombies, as I understand them, and tell me if I'm wrong, the conjecture is that it's possible to build something like this, which means therefore that the phenomenological consciousness is totally unmeasurable. Right? That there's no way for us to know. We have, as you said, we have to sort of trust your first-person reports about your subjective experience, which makes it difficult to do any actual science. Right? Is that a fair characterization of philosophical zombies?
Or tell me if I missed something. Oh, I have like five things I want to say, let's make sure I get to all of them. Okay, so first, the idea of philosophical zombies has been hotly debated in the philosophical literature for a very long time, and there are a number of philosophers who say, yes, this is absolutely totally reasonable to posit, that this could happen. And then there are other people who say, no, like, that's not really a reasonable assumption. So, you know, go read Dan Dennett if you want to hear about all of this stuff. But I think that you've touched upon another important point too, which is this idea of testing for whether someone is in there or not, and how we don't currently have the capacity to do that even in other beings that look and behave precisely exactly like you do, like me or like.
Kelly, or chemists are like, are they really in there? Do they actually like chemistry? Is that possible?
They have to be a machine?
Right, are they just lying?
Yeah?
We don't have a way of testing it. We don't have, like, a consciousness-ometer. I can't pull out my hair-dryer-looking consciousness-ometer and point it at you and be like, are you conscious? Like, we don't have that.
How do you know what it would look like? We don't even have the device. Why does it look like a hair dryer?
At some point someone posited that it was going to look like a hair dryer, and it has kind of propagated throughout my thinking since then. But that seems about right, right? Like, it's kind of like a speedometer, like the, you know, cop pulls that out on the side of the road.
Too much consciousness, here's your ticket.
Something like that. But we don't have those, and in fact, like, we wouldn't even know how to build one, right? We don't even know what the relevant metrics are. Do we care about measuring brain stuff, or is that kind of spurious? But one of the arguments kind of against philosophical zombie-ism is the supposition that consciousness is not just this epiphenomenon, that it's not just this thing that kind of comes along for the ride over and above an organism behaving usefully in its environment and flexibly adapting to different conditions and being intelligent and seeking goals and not getting eaten by predators. That consciousness itself serves a function, that you cannot have all of that stuff, all of that useful stuff for staying alive, without consciousness. And this is an arguable position. It's not a fact. But for the folks who are going to argue that consciousness serves a function, there is a function of consciousness. It's not just that there are functions for creating consciousness somewhere in your head, but that consciousness itself is useful, that it allows some evolutionarily adaptive advantage. That is certainly a defensible position. And from that context, you actually couldn't have philosophical zombies at all, because it is actually impossible to get all of those other behaviors and cognitive processes and all of that stuff without the consciousness bit.
I see. If that's true, then the interaction with people who appear to be conscious is evidence that they are actually conscious.
Yeah, but these are all like arguments that we can have, right. There's no way of saying I'm right and you're wrong, depending on which position you're holding.
I think about this when I interact with my dog, you know, because I feel like my dog is in there, right? Like, my dog knows me, my dog loves me. I love my dog. My dog understands that I love him, that I'm nice to him. It's hard to imagine that there's nobody at home in my dog. But, you know, this whole concept suggests that it's possible for there to be a machine effectively, and I say a machine just to indicate, like, that there's nobody home, though I don't actually know if AI could have phenomenological consciousness. But it's hard to imagine that you're not actually in there, Kelly's not actually in there, that I'm the only one in the universe who's experiencing this. But I can't actually prove it, right? So it really is a fundamentally important point, even though it sounds absurd on the face of it, right, that we could be the only one aware, that we can't actually know if anybody else is in there. We feel people loving us, you know, and their experience and their pain reflects ours, but we can't actually know. And I think that's a fundamentally important point to hold on to, even though it feels ridiculous.
It is. I mean, the easiest solution here is that we all have consciousness, right? Like, that would be the most parsimonious explanation, and it's easy to make that leap because there are so many physical information-processing similarities between you and me. Right, our brains are not exactly the same, but they're pretty darn close, and so we can make the leap maybe to talking about great apes or monkeys or dogs or other vertebrates, other mammals, that kind of thing. I have a harder time when we get down into, you know, insects. Like, I don't know that it'd be a really.
But I don't know, like what are people who like live in Virginia, for example. That's just like.
I'm not going there. But the presence of, like, the idea of a philosophical zombie becomes maybe a more useful exercise from, like, an empirical science standpoint, or even from a philosophy standpoint, when we start talking about entities that are so fundamentally different from us. So not just you versus me, or us versus your dog or my cat, who, you know, is probably right on the borderline, but no, I think he's conscious. I think there's something that it's like to be him. But when we talk about octopuses, like, those are really weirdly different creatures that are basically aliens, or when we talk about, you know, the potential for silicon-based systems in the future, or alien species, or those kinds of things, things that are so fundamentally different from us. Then it starts to be like, okay, well, when we don't have the kind of one-to-one mapping between the structure of my biological robot that I drive around and your biological robot that you drive around, right, when we don't have that very similar structure, then it becomes maybe a little bit more useful to talk about, like, can you have all of these behaviors and all of these cognitive capacities in the absence of someone being home? And we don't know the answer, but it at least gives us more than just kind of a philosophical, as you said, dorm room argument. Like, I feel like I don't know that you're in there. But is that a difference that makes a difference right now? Probably not.
All right. So we're going to take a break, and when we get back, we're going to dig into octopuses a little bit more. So, for access consciousness, and you've sort of hinted at this already, the definition included, like, escaping predators, having different motivations. That to me does sound like it encompasses at least all vertebrates. So is it safe to say then that the consensus is that all vertebrates at least have access consciousness?
I don't want to speak for any big scientific community on anybody's behalf, I would say that my personal sense is that, yes, you're going to be hard pressed to find people who would die on the hill of saying that cats are not conscious. I think that would be hard to find.
Yeah, so you mentioned octopuses. They're smart, but their brain is different than ours. Would you say that they have phenomenological consciousness? Because it seems like someone's in there when you interact with them.
I have no idea. I have no idea. But I think that, you know, saying that we have no idea is reasonable in this situation, because I don't think that we have strong empirical evidence either way, because that strong empirical evidence is predicated on this whole conversation that we've just been having about what would that evidence even look like? And how do you separate evidence for phenomenal consciousness from evidence for not just access consciousness but just pure intelligent behavior? Right, never mind access consciousness and this global availability of information thing. It's really just that octopuses are very intelligent. Clearly, they can even be sneaky, you know, they get out of their cage and they go open something and steal the food and then they go home so that they don't get in trouble. So they're clearly highly intelligent creatures. I think that my margins of uncertainty, my cone of uncertainty, is just so wide there that I have no idea.
So how do you study the difference between intelligence behaviors and consciousness. We'll just stick with access consciousness in animals in the lab.
Yeah, so in animals that's even harder, too, because how do you query the phenomenal experience of a rat? It's kind of hard. In my line of work, we focus on one particular kind of subjective experience, which is that of metacognitive uncertainty. So one of the things that we ask people to do, like people for now, but we can do this potentially in rodents and monkeys as well, though I don't have those folks in my lab, I only have people. We'll ask you to do some task, like, we're going to show you some stuff on a computer screen, ask you to press buttons, tell us what you see, and then we'll ask for a subjective report on top of it. How sure are you that you got that right? How confident do you feel in your assessments? How clearly do you think you saw the thing? And some of those questions we can design experiments to ask in animals, too. And so those are kind of more about the subjective experience, especially if we do manipulations to the stimuli or task such that the animal's behavior is basically exactly the same between one condition versus another condition, but their subjective reports differ. So that tells us that we're trying to get at something that has to do with phenomenology, or maybe something that's facilitated by global access of information, rather than something that's just kind of the zombie part of their brain, like, you know, almost reflex-like responding to the stimuli out there in the world. But the distinction between, is that subjective report facilitated by globally available access to information, or are we actually tapping into phenomenology? That's also kind of sticky when you get to talking about rodents and that kind of thing. But that at least allows us to say, never mind all of phenomenal experience or subjective experience or consciousness.
I'm going to focus on this bit, just this one thing that I can operationalize. Very classic for a psychologist, right? I'm going to put you in a room and ask you to look at a computer screen and press one of two buttons for an hour, and that's what you're going to do. But that's how we try to get at this.
I think the example of an octopus really puts the finger or the tentacle on the question of like what it's like, which to me is the core of the question. Like, you know, the mechanics of how information is accessed or stored or whatever. That's fascinating and that's good science. But to me, the real question, which I think is what people call the hard problem, is how you get to have an experience from something which is you know, just made out of stuff. Like my desk doesn't have an experience, My computer, I think, doesn't have an experience. Most of the universe doesn't have an experience. But I have an experience, and you have an experience, and probably, for example, an octopus has a very different kind of experience. It's got like eight little brains arguing about what each leg will do, and you know, an alien out there with a different kind of sense. You know, a tongue that can taste quantum electrons or something might have a completely different kind of experience. And to me, the hard question is, you know, where does this come from, this phenomenological aspect. Is it fundamental to matter? Is it somehow emergent in some way we don't yet understand? How do you pose this question? Is that the central question you think, and how do you think it's best asked?
I think that's absolutely one of the central questions. You know, how is it that the something that it's like, that the conscious experience, arises from matter? That is, as you said, Dave Chalmers's hard problem, which is, you know, there seems to be this fundamental disconnect between patterns of brain response and the subjective experience. It's a very different natural kind. It's a very different kind of stuff, right? The subjective experience stuff is unlike any other kind of stuff in biology.
What do you mean by that?
That's the hard problem. That's the idea, is that it's not clear how you would get something like subjective experience, whatever that is, out of the interactions between physical neurons. The bridge, you know, might be something like: okay, well, one of the things that physical neurons do in talking to each other, similar to how computers do this, is that they create information. So now we're kind of moving from physical space into information space, but nevertheless we don't know how to even get from information to subjective experience. We can call them the same thing and say we're done, like, move on with our lives, but that doesn't feel satisfying. And there are people who think that the hard problem is not a problem at all, that subjective experience might not even exist at all, or that our belief that subjective experience exists is an illusion. So there's, like, a whole... this is really complicated, and we could talk for hours and hours about this.
Who's experiencing that illusion though? In that case?
Right, yeah, exactly. But if you want to read about, you know, illusionism from a very deep and powerful perspective, I'm going to mention him again: go read Dan Dennett. He wrote very extensively on this topic. So the fundamental question is, you know, how do we get something like consciousness out of something like brains? And is the substrate important for creating the subjective experience and the shape and nature of that subjective experience, or is there kind of one type of consciousness that all kinds of systems might create, regardless of their physical substrate? And you might think that the latter statement seems a little strange: like, how is it that an octopus could create a subjective experience that's very similar to mine, when our substrates are so fundamentally different? But we do see evidence of convergent function in evolution in terms of things like digestion, where, like, you've got lots of very different kinds of systems that all accomplish the same computation or function of digesting stuff. And so it is possible to think that different substrates might accomplish the same kind of function in creating the same kind of conscious experience.
Are you saying that the fact that, like, birds and bats evolved flight separately, but fundamentally it's the same thing, suggests that different kinds of wet matter could generate the same kind of subjective experience?
It is possible, I don't know. If you think that consciousness serves functions, then there might be one kind of subjective experience that best serves that function. The only thing I was going to say beyond that, though, is that I'm more sympathetic to the idea that there's probably different kinds of subjective experience across different substrates. It feels like the burden of proof for saying that all subjective experiences are kind of the same across lots of different systems would probably be on the people who are making that claim. I think it's much easier to say the kind of subjective experience you have depends on the substrate that's generating it.
How else to explain how some people actually eat white chocolate and claim to enjoy it, right, I mean, it's impossible to understand.
That'd be maladaptive.
So hey, I like white chocolate!
Oh oh, sorry, that's all right. I'm from Virginia. Daniel's just going around insulting everybody today.
All right.
So we were talking to Joe Wolf the other day. She's an evolutionary biologist who studies convergent evolution. I feel like if she were here, she would be explaining to us how, if you look at traits, like, you know, the evolution of the crab body plan, you'd really love to have an evolutionary tree where you can count how many times this thing popped up and how many times it disappeared, to try to understand its adaptive value, what kind of preconditions you need before this thing comes into existence. Could we ever have that in the study of consciousness, or will we never be able to know? Like, does an octopus have access consciousness? Will we ever be able to get there?
Great question. As of now, the path is very murky to me because, as we just talked about before, you can't pull out your hair dryer consciousness-ometer and point it at things. And the challenge is really because the thing that we're studying is by definition unobservable by anybody except for the observer who is experiencing the consciousness. Like, that is the definition of what we're talking about, and so it's really very different from anything else that we study in science.
And there's a second wrinkle there also: not only is this something which we can't observe or measure, so we have to rely on somebody reporting it, but it's also filtered through our own consciousness, right? Like, we sort of assume consciousness when we do science. We're saying we have hypotheses, we do experiments, so everything is filtered through sort of like two consciousnesses when we're even talking about this. And maybe you're about to answer this question, but is this something we can probe scientifically, or is it limited to philosophical discussions on the roof with banana peels?
Oh my gosh, there were like five questions there. Okay, so: is this something that we can study scientifically? Yes, I think so. But I think that we need to have a lot of help from philosophers. So a lot of the work that I do... and a lot of my scientific friends are not actually only scientific friends; they are philosophers as well. And I think that there's a lot of value that we get from that.
Yeah, And I didn't mean to suggest that philosophical exploration is like not valuable. It's absolutely like, you know, the wonderful cousin of scientific exploration, and fundamentally important. But it's also different, right, It gives different kinds of answers.
Yeah, it does. But I think that we need to be informed by our philosophical friends down the hall in understanding whether the experiments that we're designing are really getting at the target that we think we're interested in studying. And so there's a lot of work out there that's like, okay, I'm going to tell the difference between whether you're likely to wake up from a coma or not. And that's really relevant in the clinical setting, and it is so powerful and important that we do that. It doesn't necessarily tell us about the experience that the person is having. It just tells us, kind of binary, whether it's there or not. And we need that. We need to have measures that will allow us to predict: is someone in there right now? Are they awake? Are they likely to wake up? We need all that stuff, but that doesn't really get at the fundamental question of this subjective experience bit. And we also have a lot of studies out there that purport maybe to do research on conscious access, but really, if you changed the question, you might start to be skeptical about whether they're targeting that. So I'll give you an example, which is: I, as a psychologist, put you in a room and I ask you to tell me whether you saw something or not. And that's a subjective answer, right? Like, did you see it? Did you not see it? Right? Like, I'm asking you, are you consciously aware of this stimulus? And then I could go measure brain responses or whatever that go with cases when you said you saw it versus when you said you didn't see it. And I say, okay, now I've found, like, the neural correlates of consciousness. But now imagine that I replace you as the human observer with a photodiode. I could do exactly the same experiment, and I would never conclude that that photodiode has conscious experience. Probably. So I think that there are a lot of challenges here for the philosophical fields, not only philosophy of mind but also philosophy of science.
I think they really have a lot to say about how we're designing experiments and how we're interpreting their results. There was another question that you asked earlier in that stream, but now I don't remember what it was. Maybe it was about, like, can we ever develop a test for consciousness? So that's another thing that we should talk about. So there's been some work that I've contributed to recently where we're saying, well, we don't have a consciousness-ometer, and we wouldn't even know how to go about building one. We don't even know whether to point it at behavior or a brain or something entirely different, like, I don't know, some crazy other thing. We have no idea what to even point it at, never mind how to build it. But one way that we might make progress is to collect all of the potential consciousness-ometers that have been built over the years, in terms of behavioral signatures of awareness in humans and neural response patterns and so on. Collect them all, and then make a decision about which ones are applicable: first, how they all correlate with each other in terms of predicting whether someone is in there and what they're experiencing, and then whether they're applicable to a neighboring system. So I'm not going to jump straight to octopuses or Teslas or aliens, but I might jump to young children, because most of these studies and these metrics are developed on adults. And so I'm going to say, okay, well, I'm going to take all this stuff and then I'm going to point it at young children, which I also presume are probably in there. You know, when they stub their toe, they cry; like, they indicate that they are experiencing pain. And I'm going to see how much those metrics now continue to correlate with each other and continue to make useful predictions about whether that subject is aware of a stimulus or not, or, you know, wakefulness versus sleep, that kind of thing.
And then, if that seems to be okay, then I'm going to say, okay, now I'm going to point them maybe at great apes. Now I'm going to point them maybe at New World monkeys. And so on, so we can kind of go down the evolutionary hierarchy, so to speak, and say, well, the degree to which a particular candidate set of consciousness-ometers is applicable to a particular system is defined by their similarity to us, and then we have to make decisions about what metrics of similarity to use. But you can see that, at least conceptually, from a high level, this might be a path forward.
I have to admit I'm not convinced. Like, it feels to me like it's just making fuzzier what we're measuring about what's going on mechanistically inside people's heads. But it doesn't actually tell us anything about the first person experience, which is, almost because of the way we defined it, infinitely inaccessible, right? Like, there's no way for me to share my experience with you other than manipulating my mouth or whatever and filtering it through your consciousness to your first person experience. So it feels to me like it's something that's completely inaccessible, and the only way forward, I'm guessing, is to, like, try to dig into this emergent behavior and see if we can make a bridge between the microscopic details that we do understand and somehow come out mathematically with a realization of how this first person experience has to emerge. I'm skeptical.
I guess well, I want to ask you about the mathematical thing, because why is math the answer here?
Because I'm a physicist, because you're a physicist.
Okay, yeah, sure, I knew the answer to that question before I asked it. This isn't a zero-sum game. We don't have to do one or the other. I think that an analogy that a lot of folks in consciousness science like to use is that of early investigations into the nature of life and vitalism. And so this idea that we had to discover this very specific, fancy thing that was almost magical in nature, that was like, why is this alive and why is this not alive? And like, let's maybe do some math, or maybe do a bunch of empirical experiments to discover, like, the life force or the vitalism, like the life-ness there. Then over time we discovered that it turns out that life is just kind of a collection. It's like a bag of tricks, right? And the boundaries are maybe a little bit fuzzy, and like, what are viruses? Are those alive? I don't know. So maybe there's an analogy here, which is that if we keep pushing on multiple angles, there might be a convergence of approaches and information and evidence that will reveal that this hard problem is just going to go away. We don't have to discover a mathematical transformation or an emergent property or anything like that; by better describing the system, we will discover that that problem completely dissolves. I don't know, but it's possible. And we have historical examples of cases where something that seemed very mysterious and seemed like an emergent property has now been transformed into a series of really beautiful descriptions of how the system is working. And maybe that will happen with consciousness too.
All right, well, my consciousness needs a break from all these really heavy but amazing ideas. And when we get back, I want to talk to Megan about some of the theories people have to explain these deep mysteries. All right, we're back and we're talking to the apparently conscious Megan Peters, who tells us she is inside her body and driving it like a meat machine, and she's an expert on these questions of consciousness, so we should listen to her. And we've been talking about this sort of hard question of consciousness: how your first person experience is somehow generated from the meat inside your head, or if you're an AI and you're listening to this, the silicon inside your chips. And you know, to me, this question of emergent behavior is really important across science and especially in physics, you know, where we see so many examples where we understand the microscopic laws and then you zoom out and you need different laws. You know, like we understand how particles work, but then you zoom out and you need fluid mechanics, and those laws are very different but still applicable. And so it seems to me like there might be some progress to be made if we can somehow tackle this emergent question or think of it through this prism. But Megan, tell us, what are people doing? What are the sort of current leading theories of answers to the hard problem of consciousness?
Great question. So, yeah, there are a number of theories that kind of bridge between philosophy and neuroscience. So, you know, ultimately a lot of these theories are saying: the thing that we know is conscious is us, and so we're going to study consciousness in us, because that makes sense. And we can't go studying consciousness in rocks, because we don't know that they're conscious, and so that would kind of be a circular argument. So these theories are kind of bridging the gap between philosophy and neuroscience and psychology, and they come in a number of different flavors. And we are touching upon also this difference between access consciousness and phenomenal consciousness that we talked about before. But let's assume that we can study consciousness scientifically. What do those theories look like? So probably the most influential theory has been the global workspace theory. So Bernard Baars and then Stanislas Dehaene started this idea that consciousness is about the global accessibility of information in kind of a centralized processing space. So you have these different modules that either take in information from the external world through your sensory organs, or they have other functions like memory storage and retrieval. They have other functions like integration of information from different sensory systems, that kind of thing, executive function, decision making, that sort of thing.

Sort of mental analogies of organs, right? The way, like, your liver has a function and your stomach has a function.
Yeah, to think about them as modules is typically how they're described, you know, encapsulated modules. But then these modules share information with a central global workspace, where there's a translation that happens between whatever representation is happening in that module of the relevant information in that module, and then it gets pushed into a global workspace. And if it makes it into that global workspace, it's available to all the other modules for processing, so it can influence the processing in each of those other modules. And so the idea is that this is kind of a computational level theory, where this global availability of information then facilitates goal directed interaction with the environment. Run away from that thing, don't get eaten, do eat that thing, et cetera. And we can also see hallmarks of this global broadcast or global availability of information in the brain, where, in cases when someone becomes aware of something because the signal is strong enough or so on, you actually see all of the information propagate throughout the brain. You can see the information in, say, the visual stream not only land at the back of your head. If you reach back and touch the back of your head, you find that little bump that's about where your visual cortex is. It's called the inion. So if you're not conscious, the information stays back here in your visual cortex, and if you are conscious of that information, you can actually see, with electrophysiology, with EEG, electroencephalography, with fMRI, you can see that information travel forward and end up elsewhere. And so this global availability of information, both from a neurophysiological standpoint and from a computational standpoint, seems very useful for an organism and seems very related, in us, to whether we're conscious of something or not.
And this seems like a helpful way to sort of take apart what might be happening mechanistically inside the brain. The way you might take a piece of code and look at it and be like, oh, how are they organizing it? Oh, this is a database, and they access it with a hash table, whatever. Oh, this makes sense. But does it get at the question of why is this thing experiencing itself?
Not necessarily. And that has been one of the criticisms of global workspace theory, or global neuronal workspace theory, which is the neural version of it: that it's kind of more about access consciousness, or maybe even just about global broadcast of information, and not anything having to do with the C word, consciousness, at all. And so there have been other theories that kind of compete with this one as well. So one of them is local recurrence, so like kind of local feedback within a given module, specifically the visual cortex, that the strength of that local feedback, that kind of recurrent processing looping, is something that somehow gives rise to the experience that we have of the world. And then there's another group of theories that I happen to be partial to, which is that there's an additional step that needs to happen beyond broadcast into a global workspace or local recurrence or anything else like that. And that additional step is that you've got a second order or higher order mechanism that's kind of self monitoring your own brain.
Why was higher order in scare quotes there, for those of you who are listening?

Not scare quotes, but to indicate that this is a specialized term. So there are representations that your brain builds about the world, and we would call those first order, and then the representations that your brain or your mind builds about itself or its own processing would be higher order: second order, higher order. But the idea here is that, let's say that you have information in these modules, and it gets globally broadcast into a central workspace. Your brain has to kind of make a determination of: is this information that's available in the global workspace, is it reliable? Is it stable? Is it likely to represent the true state of the environment? So this is where you can see that I'm a metacognition researcher too. But it turns out that higher order theories like this, that are about re-representation of information that's available in some workspace or some first order state, they actually came from philosophy originally. They didn't come from the metacognition literature originally. So a philosopher named David Rosenthal and another one named Richard Brown, they're both in New York, these are the folks who kind of pushed forward this idea of higher order representations being related to conscious experience, such that you are conscious of something when you have a representation that you are currently representing that thing. I'll say it again. Yeah. So if I have a representation that consists of, I am currently in a state that is representing apple, then I am conscious of the apple. It's not enough to just have a representation of apple. I need to also have, on top of that, a state in my head that is: I am currently representing apple.
But wouldn't the philosophical zombie also have that state inside their cognition somewhere. How do we know that that actually generates the first person experience?
We don't.
I want to be a hard skeptic on this.
No, you can be a hard skeptic. That's great. Even these theories that say that you have a mechanism that says, I am currently representing this, or that representation is good and stable and likely to reflect the real world, they don't answer the phenomenal consciousness question.
Right. I remember reading Dan Dennett's really fun book Consciousness Explained, and I'll admit that I was on a roof and I was smoking banana peels at the time, but I found it compelling in the sense that it sort of changed the question, right? Sort of like his theory. Maybe you can describe it more accurately. You know, his multiple drafts model sort of convinces you that your account of your own consciousness might be wrong. You know, that there is no present moment. It's all just memories of the recent past that are later constructed to convince you that you were aware when you never really were. And that's something I like about that theory, because even though I don't believe it, it did make me think differently about my own conscious experience. What do you think of that theory? And are people still taking it seriously?
People are still taking it seriously. Yeah. And I think that we don't currently have the empirical protocols or evidence to say that that's wrong. And this is why I want to continue to push for the close integration between philosophy and neuroscience. This is another one where I'm going to be agnostic. I don't like taking a stance on all this, because, quite frankly, I think that anybody who says that they have solved anything about consciousness is just full of it. Like, there's no way, because we just don't know, right? We've got a lot of theories, and we can build up empirical evidence in support of this theory or that theory and this function of consciousness and that kind of thing. But ultimately, if you say that you've kind of created the solution, that your theory is the right one, and you know that for a fact, then, sorry, like, I don't know what banana peels you're on, but I don't think that's useful.
Well, it was a bold title for a book, though, Consciousness Explained.
Yeah, Dan was a bold guy. Yeah.
Yeah, So a lot of the theories that you were talking about a moment ago were using words that sort of remind me of computers and AI. How do we think about whether or not AI is conscious? Does it matter? Is that an interesting question? What do you think about that?
Great question. And I think that until, I don't know, ten, fifteen years ago, this was a fun thought experiment in science fiction, and now I think it's not quite so much anymore, right? Because now we have machines that behaviorally really do pass the Turing test, which I assume we're all quite familiar with, but just in case: the test is that the machine has to convince a human observer or a human player that it is a person. And we have machines that quite handily pass that under most scenarios. Or just get out your ChatGPT app on your phone, and, like, it feels very convincing that there might be someone in there, right? Until maybe fifteen years ago, ten years ago, this was really science fiction, and now I think it's not anymore. And the questions not only bear on, let's figure out the ontological truth of whether the AI is in fact conscious, but there are also really strong implications, regardless of whether it's conscious or not, for what happens if we think it is, and what happens if it is but we think it's not. Right? So there are, like, strong moral and ethical considerations here. I'll kind of have two ways of answering this. One is from the perspective of current theories of how we think consciousness arises, from a functionalism perspective, which is that there is some brain or computational function that gives rise to consciousness in some capacity. A lot of the things that we're talking about, as you've rightly pointed out, have direct analogies in computer processing. We can certainly build even a little simulation that monitors itself, sure, that says, is this representation reliable or stable? I can build a set of computer modules that then send information into a global workspace.
Sure.
In fact, maybe your iPhone sort of does that already when you talk to Siri, the virtual assistant.
Right.
And recurrency, in like local feedback or recurrent processing, that's absolutely a thing: recurrent neural networks are a thing. They've been a thing for a long time. So we now have all of these kind of hallmarks from these theories that we can use to build something that looks like a checklist. That's like, if you've got an artificial system that has all of these things, well, I'm not going to say then it's conscious, but I'm going to say it might raise our subjective degree of belief that it could potentially have consciousness.
A lot of good qualifiers.
There, yeah, there are. Because we wrote this big paper, me and a whole bunch of other people, in twenty twenty three, that's up on arXiv, that's called Consciousness in Artificial Intelligence, and it goes through this checklist. It kind of develops the theoretical arguments for building this checklist, and then kind of ultimately says, look, we've got systems that tick an awful lot of these boxes already. Should we think that they're conscious? Maybe not, because clearly the theories are not complete. Some of the things on that checklist are going to be completely irrelevant to consciousness, and we're probably missing a whole lot of things also. But to the degree that there is a system that ticks more or fewer of these boxes, well, maybe the ones that tick more of these boxes we might want to consider a little bit more closely. But then the other thing we do in that paper is talk about the ethical implications of false positives and false negatives.
I think AI is a great way to differentiate between intelligence and consciousness, because it's not hard for me to believe you could build a very intelligent system, one that surpasses humans, that even, like, manipulates us and takes over the planet and runs us as slaves. It could be super intelligent without actually being conscious, without anybody being in there, right? And there's a dark dystopian future there, you know, where super intelligence actually extinguishes consciousness. But to me, you could maybe get an answer to the question, again, mathematically, if you could somehow build up a description of consciousness from the little bits inside. The way, for example, you could say, hey, if you describe all the motion of water droplets, I can tell you whether or not a hurricane is going to form. Like, if we can master the understanding of the dynamics at a small scale and compute it all the way up to the bigger scale, then we can say, yes or no, there is a hurricane. In the same way, then, I could, like, analyze your brain and I could tell you, oh, yes, this does emerge into some first person experience, or it does not, if we had that mathematical bridge. And then we could apply it to AI and say, oh, yes or no, there is or is not somebody in there. And I'm really attracted to these kinds of theories. I think they're called physicalism, or do you call them functional theories? How much progress have we made in that direction, and are we likely to make any more? Or is it too intractable a problem?
I don't think it's too intractable a problem, necessarily. I'm not going to fully subscribe to the hard problem of consciousness. I do think that if we continue to push on this kind of idea of emergent properties and the translation between a physical substrate and the information that it produces, that emerges from that physical substrate in terms of its interactions in space and in time, I think that will take us forward. I am a reductionist or a physicalist myself. I don't think that there's anything magical or spiritual about consciousness, personally. I know that there are others who are going to disagree with me there, but I do think that if we could build such an explanation, it would get us a long way to understanding consciousness, the kind of emergent properties that you're talking about. I think that it's really important for us to recognize that the success of such an endeavor is going to need, at its core, the assumption that the system that we are studying is actually conscious. Otherwise it becomes circular, right? Like, if I want to create an explanation of how matter gives rise to whatever I think is consciousness and then start pointing that at everything that I can think of, then I'm not building an explanation of consciousness. I'm building an explanation of, I don't know, physical interactions in the environment, or information processing in the environment. But I don't know that it's going to get me all the way to consciousness. But if we could do that in us, sure. In a thousand million years, when we have that explanation and we have a full, what the philosopher Lisa Miracchi Titus would call a generative explanation of how a physical system and interactions in that physical system fully give rise to conscious experience, yeah, that would be great. I don't know how we're going to do that, but I do think that it does require a connection with that physical substrate.
We're getting to the end of our time together. We've talked about why this is a difficult problem. Let's end on why it's important to keep studying this difficult problem.
I think I'll answer this from three perspectives. One, as human beings, we want to understand our worlds. We want to understand the basic science of how things happen. We are curious, and we have this compelling drive to understand our environments. And we can see this both from the perspective of modern science, but also just from the perspective of: this is literally what brains do. They build internal models of the world and they predict stuff that's going to happen from those internal models of the world, and we use that to drive ourselves forward. That's what evolution has done for us. So I think that we are hardwired to do this, to be natural scientists in a way. So I think that that's important, like, to give in to that feeling, that compulsion. But also, from a practical and societal benefit perspective, we can take multiple prongs on this. So one is the medical perspective, which is that it's really important for us to understand the presence or absence of suffering in folks who have disease or injury, and we want to understand the diversity and heterogeneity of those perspectives. So here's a very concrete example. There are a number of disorders or conditions out there that bring with them chronic pain, but you don't have any physical substrate that you can identify. You don't know why that person is in chronic pain, and so you don't know how to fix it. But that doesn't mean that the pain isn't real, that the suffering isn't real. And so from that perspective, understanding the nature of subjective experience is really critically important. Fear and anxiety is another one. We know the fear circuitry. We've mapped that. Neuroscientist Joseph LeDoux has been instrumental in driving forward the mapping of the amygdala circuit, the fear circuit in the brain. But he's very careful to distinguish between processing of threatening stimuli and the experience of fear.
So if we develop pharmacological interventions that fix the circuitry bit and fix the behavioral bit in rats, one of the reasons they might not translate to humans is that they don't reduce the fear. They change the behavior, but the fear persists. So I think that from a clinical perspective, it's really important for us to understand this, and from the perspective of depression, from the perspective of those who have autism spectrum conditions, to understand their subjective experiences, there's just a huge amount of clinical benefit that we can build. And then finally from the perspective of the artificial systems that we've just been talking about. So we're in a position now where, whether or not we build systems that have phenomenal awareness or consciousness, whether anybody's home in there, maybe we're not going to be able to answer that question for a long time. But the way we interact with systems depends on whether we think that they have consciousness. And the way that we build guardrails and legislation and assign responsibility in legal settings depends on whether we think that these things can think for themselves, and whether they have moral compasses and all those things which are not necessarily related to conscious experience per se. But there's an argument to be made that, at least in a lot of the systems that we know about, ascribing responsibility is related to the capacity for that agent to be self directed, and that that seems intimately related to seeking out goals that are not just defined by a programmer, but ultimately decisions that that thing might make that might be driven by its intrinsic reward seeking. And there's something that it's like to seek reward, because it feels good. So there are a lot of moral and ethical and legislative and societal implications for getting this right, and a lot of medical reasons to get this right.
And then you know, from a basic science curiosity perspective, this is literally what we evolved to do, figure out the world, and so let's keep doing that as well.
And when we meet aliens, does it matter if they're conscious, Like, if they're intelligent and they're interesting and they want to share with us their space warp drives? Does it matter if when we point our hair dryer at them it says yes or no? What do you think?
I think so. But there's also a lot of evidence that consciousness and what's called moral status can be disentangled. We ascribe moral status to things that we think are conscious, but we don't need to require consciousness in order to ascribe moral status, and we certainly treat things very badly even when we know that they're conscious. So these are conceptually disentanglable things. I think it will matter, though, for the rest of the machinery around an encounter. Maybe from an individual astronaut's perspective it doesn't matter so much. But from the perspective of the laws and regulations and societal implications of what that would mean, and what kind of a people we want to be, I think that then it really does matter. And so having folks at the helm who are paying attention to the moral implications of such a weighty determination would be very good.
Well, I definitely want the aliens to know that we are conscious before they decide whether or not to nuke us from orbit or have us as a snack.
Assuming they ascribe moral value and moral status to beings with consciousness.
And maybe they developed a consciousness-o-meter and they can share it with us.
Maybe. Here's hoping. All right.
Well, thanks for being on the show today, Megan. This was fascinating.
Thank you so much for having me. This was really fun and engaging and it's been a real pleasure.
Thank you very much.
Daniel and Kelly's Extraordinary Universe is produced by iHeartRadio. We would love to hear from you, We really would.
We want to know what questions you have about this Extraordinary Universe.
We want to know your thoughts on recent shows, suggestions for future shows. If you contact us, we will get back to you.
We really mean it. We answer every message. Email us at questions at danielandkelly dot org, or you can find us on social media. We have accounts on X, Instagram, and Bluesky, and on all of those platforms you can find us at D and K Universe.
Don't be shy, write to us.