Ep57 "When should new technologies enter the courtroom?"

Published May 6, 2024, 10:00 AM

Can we measure a lie from a blood pressure test, or pedophilia from a brain scan? And how should a judge decide whether the technology is good enough? What does this have to do with Ronald Reagan, or antisocial personality disorder, or how the television show CSI has impacted courtrooms? Today’s episode lives at the intersection of brains and the legal system. When are new neuroscience techniques allowed in courts, and when should they be?

Can you measure pedophilia in a brain scan? Can you measure a lie from somebody's blood pressure? And how should a judge in court who's not an expert in science decide these things? What does any of this have to do with President Ronald Reagan, or antisocial personality disorder, or how the television show CSI has impacted courtrooms? Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford, and I've spent my career at the intersection of our brains and our lives. Today's episode is about an aspect of the intersection between brains and the legal system, and it's a tricky one. The question is: when are neuroscience techniques allowed in courts? When should they be allowed? What bars need to be passed for a technology to be accepted? So let's start on March thirtieth, nineteen eighty one, when the President of the United States, Ronald Reagan, has just delivered a speech, and afterwards he and his team are returning to his limousine, and he gives a big two-armed wave to the crowd, and suddenly there are gunshots ringing out and everyone's diving, and President Reagan is hit with a ricochet off his limousine, and Press Secretary James Brady falls, and Secret Service agent Tim McCarthy falls, and a DC police officer named Thomas Delahanty is also wounded, and the President arrives at the emergency room in critical condition and almost dies. And for those of you who weren't alive in nineteen eighty one, or for whom this has receded in memory, just try to picture the horror that this entailed. Now, you may remember that the gunman, John Hinckley, had a deep psychosis. He was divorced from reality, and he believed that if he shot the president, he would win the love of the actress Jodie Foster. There's a lot to say about this case, and in episodes thirty six and thirty seven I talked about the insanity defense, but here I want to zoom in on a very particular aspect.
The thing most salient to us today is the fact that this was the first high-profile case to use a form of brain imaging. Hinckley's lawyers pled not guilty by reason of insanity, and to support their defense they introduced brain imaging evidence. His defense counsel argued that he was schizophrenic, and they argued they could prove this by showing CAT scans, or CT scans. CT stands for computed tomography, and CAT for computerized axial tomography. Now, the lawyers on both sides agreed that CAT scans had never before been admitted as evidence in a courtroom. Neuroimaging was brand new at this time. So should the judge allow this newfangled technology to be accepted or not? Well, it's not obvious. Can you really tell if someone suffers from schizophrenia just by looking at an anatomical picture of the brain? It's not obvious. So the judge decided to dismiss the jury so that he could hear the arguments about whether or not the technology was relevant and should be admitted. An expert witness, a physician, pointed out that Hinckley's sulci, which are the valleys running along the outside of the brain, were wider than average, and this physician cited a paper suggesting a connection between schizophrenia and wider sulci. So the assertion was: if you have schizophrenia, you can see that just by looking at a CAT scan of the brain. This doctor said, quote, the fact that one third of schizophrenic participants in the study had these widened sulci, whereas in normals probably less than one out of fifty have them, that is a very powerful fact. But the prosecution rebutted this. They said, no way, it has not been proven that a CAT scan can aid in the diagnosis of schizophrenia, and therefore this evidence should not be presented to the jury. In other words, they argued the technology should be excluded from the courtroom because it was not yet ready for prime time. The judge listened to the arguments, and he finally decided that he would not admit the CAT scan.
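It's worth pausing to run the expert's own numbers, because they show both why the testimony sounded powerful and why it was limited. The two conditional probabilities below are taken from the testimony quoted above; the fifty-fifty prior is purely an illustrative assumption, not a claim about the actual case. A minimal sketch of the Bayesian arithmetic:

```python
# How much does a "widened sulci" finding shift the odds of schizophrenia?
# The two conditional probabilities come from the quoted testimony;
# the 50/50 prior is an illustrative assumption.

p_finding_given_schiz = 1 / 3    # one third of schizophrenic participants
p_finding_given_normal = 1 / 50  # "less than one out of fifty" in controls

# Likelihood ratio: how much more likely the finding is under schizophrenia.
likelihood_ratio = p_finding_given_schiz / p_finding_given_normal  # about 16.7

# Bayes' rule with the assumed 50/50 prior.
prior = 0.5
posterior = (p_finding_given_schiz * prior) / (
    p_finding_given_schiz * prior + p_finding_given_normal * (1 - prior)
)
print(f"likelihood ratio: {likelihood_ratio:.1f}")               # 16.7
print(f"posterior P(schizophrenia | finding): {posterior:.2f}")  # 0.94
```

Run the other direction, though, the same numbers cut against the defense: two thirds of the schizophrenic participants did not show widened sulci, so the absence of the finding would have said almost nothing. And, as we'll see later in the episode, the underlying correlation itself didn't survive scrutiny.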
Then, nine days later, he heard more expert testimony and confirmed that he would not admit the CAT scan. And then he changed his mind and ruled that he would admit the CAT scan. Okay, so what does this back and forth illustrate? It illustrates the difficulty for a judge of deciding what makes meaningful evidence and what does not. In the end, Hinckley was found not guilty by reason of insanity, although that had little or nothing to do with the CAT scan. But this high-profile case is just one of hundreds where this question comes up: should neuroimaging be allowed in the courtroom? There's no single answer to this question, and in part that's because there are many different guises in which it comes up. And so that's what we're going to talk about today. We're going to talk about how any technology gets into courtrooms. So to motivate this, imagine that we start seeing advertisements for a new Silicon Valley company that has developed a new mind-reading technology. They call it the Palo Alto three thousand, and they strap it to your head and they measure some brain waves and pass that through a large language model, and they print out in words to a screen what you are thinking. So you might be thinking about wanting a hot dog with pickles, and this machine will print to the screen, I want a hot dog with pickles. Now, this is totally made up, but pretend it's true: in a few years, said company launches, and let's say the technology looks pretty good. It captures the gist of what you are thinking about. Now, the question is, should this be admissible in a court of law? Let's imagine that someone puts it on and states under oath that on April twenty fifth he was getting dinner with his family, but suddenly the screen prints out, I committed the crime. Now, how do you know whether to believe that or not? The company is started by a handful of young people who dropped out of college, and they claim to be experts in neuroscience. But how do you know whether it really works?
And especially in a high-stakes situation, should you accept this in a court of law or not? Well, some people, in order to judge the quality of the technology, ask, well, are they charging for this technology? But that's not a meaningful measure. Of course they're charging. They can't develop new technologies for free, any more than you would expect Apple to not charge for their laptops. But the fact that they're charging certainly doesn't rule in or out anything about its efficacy. So how do you know whether the technology is efficacious? Can it be used in a court of law? How do you know whether it works and provides what the legal system calls probative value, which means: can it do what it's supposed to do? Can it provide sufficiently useful evidence to prove something in a trial? So this is what we're going to talk about today. Most of the time we don't realize that new technologies always have to be assessed by courtrooms to know whether they should be accepted or rejected. And some get in, and then we take that as background furniture, and others never make it. And what we're going to see today is how and why. So fast forward some decades from the Hinckley trial. Where are we now? What is allowed in the courtroom? Well, we have more sophisticated technologies to image the brain now. For example, we can get a picture of the brain with an MRI scan. Magnetic resonance imaging, MRI, gives you a snapshot of what the brain of a person looks like. You're not seeing activity there, you're just seeing the anatomy. Think of this as an analogy to the way you would look at someone's skeleton with an X-ray. You can't see anything moving around; what you see is a snapshot. So with MRI you can hope to see abnormalities like a tumor, or evidence of a stroke, or the consequences of a traumatic brain injury. Now, I've been called by many defense lawyers over the years who say, I have a client who's going up for trial.
Can you take a brain scan and see if you can find something wrong with his brain, so this can serve as a mitigating factor? But I always tell them the same thing: if you find something wrong with your client's brain, that can serve as a double-edged sword. The jury might think, okay, I'm convinced there's something different about this man's brain. But this presumably means he'll be predisposed to committing this kind of crime again, so we should probably lock him up for a longer time. So a defense lawyer has to utilize this argument with care. In any case, what MRI gives you is an anatomical snapshot. And now I want to tell you about the next level of technology, called fMRI, where the F stands for fancy MRI. Okay, I'm kidding. It stands for functional magnetic resonance imaging, fMRI. And this is because it's telling you about the function of the brain. It's measuring blood flow to show you where the activity in the brain just was. This works because when brain cells are active, they consume energy, and the blood flow to that specific region needs to increase so that fresh oxygenated blood can be brought to the area to restore the used energy. So in fMRI, we see where the new oxygenated blood is going, and we say, aha, there must have just been some activity there a few seconds ago. So that's the difference between an anatomical snapshot and a functional picture of what's going on. Now, part of the reason that you can use the static snapshot, the MRI, in court is because it's generally seen as hard science. This is the guy's brain. But when we're talking about fMRI, what we're looking at is the activity in the brain, and we're generally asking something about the person's mental state, and can that be the same kind of hard science? On the one hand, it's a clear question with a clear answer if someone has a stroke or a brain tumor. But this isn't the case if you want to pose a question like, did this defendant intend to kill the victim?
fMRI doesn't and can't give you clear answers like that to questions that are useful for the legal system. So we're going to dig into this now. First, let's start with the question of whether fMRI has been used in courts. The answer is yes, but the technology can be used in different ways. It doesn't always have to involve an individual's brain, but can sometimes be about brains in general. So let me give you an example. There was a murder case in Missouri where a young man named Christopher Simmons broke into the home of a woman named Shirley Crook. He covered her eyes and mouth with duct tape, he bound her hands together, and then he drove her to a state park and threw her off a bridge to her death. Now, this was a premeditated crime, and the evidence was overwhelming, and he admitted to the murder. So the judge and jury handed down a death sentence. But there was a complication: Christopher Simmons was only seventeen years old at the time he committed the crime. And so this case wound all the way up to the United States Supreme Court, and the question was, can you execute someone who was under the age of eighteen when they committed the crime? After all, the argument goes, adolescence is characterized by poor decision making, and young people should have the chance to grow up into a different life. Well, one of the things that happened is that the Supreme Court considered fMRI evidence. Now, this wasn't from Simmons's brain in particular, but from adolescents in general. The study compared young people and adults performing the same cognitive tasks, and what the researchers found, not surprisingly, is that young brains are not doing precisely the same thing as older brains. There are measurable differences. A juvenile's brain just isn't the same thing as an adult's.
So the Supreme Court justices saw this evidence, considered it, and presumably this is part of what led the court to conclude that it is unconstitutional to execute someone for a crime committed as a minor. Now, that's an example of fMRI making it into the court. It's been used in this way to compare groups of people, juveniles versus adults in this case. But things get a little trickier when you're trying to say something about an individual's brain, the brain of the one guy standing in front of the bench. So what can we and can we not say with the technology? Let's zoom in on a few examples. Many researchers and legal minds have been asking whether one can use brain imaging to diagnose whether someone has antisocial personality disorder, which is a condition in which a person has a long-term pattern of manipulating, exploiting, and violating other people. People with antisocial personality disorder, or ASPD, will commit crimes, they'll flout rules, they'll act impulsively and aggressively, they'll lie and cheat and steal. Now, this is a condition that is massively overrepresented in the prison population. But biologically it's not obvious what it's about. There's no single gene here, and there's not a single environmental factor. It's a complicated combination. And the legal system often cares to know whether someone has ASPD or not. And so researchers started to wonder a long time ago, could you use brain imaging to determine, in some clear categorical way, does this person have ASPD or not? So, in one study, researchers highlighted the brain regions that had a high probability of being anatomically different between people with ASPD and those without. And you can look in the cortex, what's called the gray matter, or below the cortex, what's called the white matter, and you can measure these small anatomical differences between those with and without.
So the question arose: can you use this technology in court as a diagnostic tool to say that this person has ASPD or not? Now, do you see any problems with this off the top of your head, about whether this technology can be used? The problem is that all the scientific results come from examining groups of people, like fifty people in each group, and the question is whether these group differences are strong enough to tell you about individual differences. This is known as the group-to-individual problem. In other words, you have data from groups of people that can be distinguished on average, but you're trying to say something about this individual. It would be like making an accurate statement that men on average are taller than women, and then asking whether some individual, like a tall woman, could be categorized as a man because her height clocks in at the average male's. The legal system is well aware of this group-to-individual problem, and so as technologies are introduced, the justice system always needs to ask: how specific is this technology, and how sensitive is it? Is it good enough for individual diagnosis? Brain imaging studies generally just give us group average results, and the question is whether that tells us enough, or anything, about the person who's standing in front of the bench right now. Now, the idea of bringing functional brain imaging to bear on questions of criminal behavior is an old one, and this group-to-individual problem is just as old. For example, there was a study in nineteen ninety seven where researchers imaged the brains of normal participants and murderers, and they found, on average, there was less activity in the frontal lobes in murderers. So you look at the activity in the front of the brain, behind the forehead, and you say, hey, on average, there's less going on here in the murderer group. But you can't use this on an individual. You can't say, oh, this person has less activity, so he must have been the murderer.
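The height analogy can be made concrete with a tiny simulation. Everything here is synthetic; the means and spreads are made-up illustrative numbers, not real measurements. The point is only that a group difference any statistical test would call overwhelming can coexist with a high rate of individual misclassification:

```python
# Group-to-individual problem: two groups differ clearly on average,
# yet sorting any one individual by a group cutoff fails often.
# All numbers are invented for illustration.
import random

random.seed(0)
men = [random.gauss(175, 7) for _ in range(10_000)]    # heights in cm (assumed)
women = [random.gauss(162, 7) for _ in range(10_000)]  # heights in cm (assumed)

mean_men = sum(men) / len(men)
mean_women = sum(women) / len(women)
cutoff = (mean_men + mean_women) / 2  # midpoint classifier

# "Anyone above the cutoff is a man" gets many individuals wrong.
women_called_men = sum(h > cutoff for h in women) / len(women)
men_called_women = sum(h < cutoff for h in men) / len(men)

print(f"group means differ by {mean_men - mean_women:.1f} cm")
print(f"women misclassified as men: {women_called_men:.0%}")
print(f"men misclassified as women: {men_called_women:.0%}")
```

With these invented numbers, the group difference is unmistakable, yet roughly one person in five lands on the wrong side of the line. That's the gap between "murderers show less frontal activity on average" and being able to say anything about the one defendant in front of the bench.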
In other words, it has no power in a court of law. You still face the problem of trying to say anything about an individual from a group average. And it's for reasons like this that brain imaging on individuals has not gotten very far in courtrooms. Let me give one more example. Another research group used brain imaging, fMRI, to see if they could identify pedophiles. They found twenty four pedophiles and thirty four controls, and they showed them images of naked men and women and boys and girls. And what they found is that they could, on average, separate the participants who were pedophiles from the participants who were not. In other words, the pedophilic brain shows a subtly different signature of brain activity than the non-pedophilic brain when shown these pictures. It turns out that heterosexual versus homosexual seems to be distinguishable as well. So you might think that sounds quite useful for the legal system, but when scientists and legal scholars take a closer look, it's not so clear. The first question is: what are these brain signals actually measuring? The assumption is that it's measuring a state of arousal, like sexual attraction, but what else might be going on? Well, the difference in brain signals could be driven by a stress response or an anxiety response by the pedophilic participants, who know they're being measured. Or perhaps what you're seeing is a measure of disgust by the non-pedophilic group, who know the purpose of the study and don't like gazing at pictures of children in this context. Or what if the pedophilic participants were just slightly more likely to avert their eyes, because of shame or not wanting to get measured? That would cause a statistical difference in the brain signals and could, in theory, explain the results.
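As a brief statistical aside before returning to these confounds: even if a scanner like this measured exactly what it claims, the sensitivity-and-specificity question from earlier bites hardest when the condition is rare in the population being screened. All three numbers below are illustrative assumptions, not figures from the study:

```python
# Base-rate arithmetic: how often is a "positive" scan actually right?
# Sensitivity, specificity, and prevalence are all assumed for illustration.

sensitivity = 0.90  # assumed: fraction of true cases the scan flags
specificity = 0.90  # assumed: fraction of non-cases it correctly clears
base_rate = 0.01    # assumed prevalence in the screened population

true_positives = sensitivity * base_rate
false_positives = (1 - specificity) * (1 - base_rate)

# Positive predictive value via Bayes' rule.
ppv = true_positives / (true_positives + false_positives)
print(f"P(actually a case | positive scan) = {ppv:.0%}")  # 8%
```

Under these assumptions, a test that looks ninety percent accurate in a balanced lab sample is wrong about eleven times out of twelve in the field, which is one concrete reason a court should ask not just whether a study found a group difference, but what the error rates imply for the individual being scanned.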
So there are lots of things that could yield this brain imaging result of a difference between the two groups, beyond the hypothesis that it's just measuring arousal. So stress, anxiety, disgust, shame, all these things might be what's getting measured here. And part of why this matters is because there are many brain imaging measures where it turns out it's easy to manipulate the results. So let's say you are a pedophile who doesn't want to be labeled as such. Can you purposely move your eyes whenever you see a picture of children, and does that mess up the ability of the scanner to measure something? If something can be faked or messed up, then the technology is useless. But let's say, for argument's sake, that you have a technology that can't be faked or manipulated, and that allows us to move on to the second point. Let's say you don't even care what's getting measured, like stress or anxiety or whatever. All you care to know is whether there is a neural signature that can distinguish the pedophiles from the non-pedophiles, irrespective of what is causing that signal. Well, there's also a legal problem here, which is that it's not illegal for a person to be attracted to children. It is only illegal if they act on that. What's illegal is committing a crime, not being attracted to children. So you can think about whatever attracts you, ostriches or jello or whatever, as long as you don't commit an illegal act. So whether you're talking about ASPD or murderers or pedophiles, you'll see that measuring something that matters for a court of law isn't as straightforward as it might have originally seemed. So now let's return to the Palo Alto three thousand. The question is: just because the company claims that it functions well, how do you know whether or not to admit it into the courtroom?
After all, remember what I said about the John Hinckley case, how CAT scans were admitted into the court to argue that he had schizophrenia. Well, it's now known that widened sulci in the brain have no relationship to schizophrenia. There are other, better anatomical signatures that we have now, like thinner cortices in the frontal and temporal lobes and a shrunken thalamus. But it turned out that the idea of widened sulci just didn't hold up. Now, there was nothing fraudulent going on with the claim. It was just a new technology at the time, and they were doing the best they could with small sample sizes. But it turned out the theory of widened sulci was scientifically unsound. Remember how I mentioned that the judge went back and forth several times about the issue of whether to accept Hinckley's CAT scan into the courtroom? That's exactly the right thing that should have happened. Not all claims are going to be correct just because a scientist says so. Despite best efforts, science can often be incorrect, and that is the importance of the scientific method: it's always knocking down its own walls. So what is a court to do about all this? Well, let's say that someone wants to introduce the Palo Alto three thousand into a court case, and you are the judge. You have expertise in the legal system, but you don't know the details of what's possible in neuroscience and large language models, and you have questions about whether this technology should be admitted, questions about whether it can accurately read people's thoughts. So how do you decide whether it should or should not be admitted? Let's step back to nineteen twenty three. There was a man named Mister Frye who said that he had developed a lie detection technology, and it relied on a measure of your blood pressure, and he wanted to introduce this into a court case the way you might want to get the Palo Alto three thousand into a case.
But it turned out that Mister Frye's claims were not widely accepted by anyone else in the scientific community, and so on those grounds the court decided not to admit it into the courtroom. What they said was, look, we'll accept expert testimony that comes from well recognized science, but if there's some new technology, it has to be sufficiently established so that it's gained general acceptance in the field in which it belongs. In other words, if other experts in the field don't believe that Mister Frye's systolic blood pressure measurement is actually good at detecting lies, then you can't admit it as evidence in the court. And that case set the bar for what came to be known as the Frye standard, which is that technologies need to be generally accepted by other experts in the field before they can be admitted into the courtroom. So under the Frye standard, the court would work to determine whether the Palo Alto three thousand has met the general acceptance of the scientific community. If science experts around the world say, I've never heard of this Palo Alto three thousand, I don't think that it can actually work, then you, as the judge, can exclude it from admissibility. So the court solves the problem by deferring to the expertise of other people in the field. But this isn't the only way to make that decision. The Frye standard is still the rule in about half the states in America, but the rest use a different rule to decide whether evidence should be admitted, and this is called the Daubert standard. So in nineteen ninety three there was a lawsuit from a man named Jason Daubert. He was born with severe birth defects, and his parents brought suit against Merrell Dow, the pharmaceutical company, and they said these severe birth defects were caused by the medication that the mother was on, called Bendectin.
So the pharmaceutical company said the birth defects were not caused by this medication, and it went to federal court, and Daubert said, look, here are animal studies showing that this drug is related to birth defects. And the pharmaceutical company's expert witnesses got up and said, look, this is not generally accepted in the field, because these are just animal studies and there's no conclusive evidence that shows the link between these and humans. So if you're the judge, how do you know how to arbitrate this? It's difficult, right? Here's some science from the laboratories, and here's the pharmaceutical company saying it's not generally accepted in the field that this causes birth defects. So what do you do? Well, what happened is the case was decided in favor of the pharmaceutical company. So Daubert took it on appeal to the Ninth Circuit, and the Ninth Circuit judges also awarded this to the pharmaceutical company. So Daubert brought this case to the Supreme Court, and the Supreme Court analyzed this carefully, and what came out of this was a new standard for when evidence should be admissible, and that's known as the Daubert standard. And the Daubert standard says, look, you accept expert testimony about, for example, these lab rat studies, if it will help the jury to understand the evidence better or determine a fact in issue. In other words, it doesn't demand general acceptance in the community. Under the Daubert standard, the key is just whether some piece of evidence is relevant and reliable. Now, the key difference is that the Frye standard made the scientific community the gatekeeper, but the Daubert standard makes the judge the gatekeeper. The judge gets to say from the beginning that they'll evaluate this and ask: is this evidence relevant and reliable? Does it pass my bar for that? So, regarding this hypothetical Palo Alto three thousand, the judge might ask: has the technique been tested in actual field conditions, as opposed to just in a laboratory?
Have there been any papers on the Palo Alto three thousand that were published in peer reviewed journals? What is the rate of error? Do standards exist for controlling the operation of the machine? And so on. These are often difficult questions. It's not always easy for a judge to make a decision about whether or not to accept a new technology. But this gives a pathway where the judge is the gatekeeper. So let's imagine for a moment that the Palo Alto three thousand passes the standard for admissibility. Is there any reason why the technology might still be excluded from the courtroom? There is one reason. Let's say that you're the defense lawyer and you say, gosh, this thing is so stunning that it's going to prejudice the jury, because they're going to look at this fancy technology, and even in the absence of really good evidence, they'll say, wow, this guy seems guilty, let's send him to the electric chair, without considering the other points. So to prevent that from happening, there's a special rule called Federal Rule of Evidence four oh three, and this just says you should exclude evidence if what you can learn from it is substantially outweighed by the risk of undue prejudice. In other words, does it sway the jurors more than it should? So what you'll see in courtrooms all the time is that if a lawyer tries to exclude a piece of evidence from being admitted based on, let's say, a Daubert objection, but the evidence gets past that, then the lawyer is going to take a second bite at the apple by calling on Federal Rule of Evidence four oh three, saying, look, even if this is relevant and reliable, it's going to have too much sway on the jury. So why is this an issue? Are there technologies that have undue sway on jurors? Is that a concern? It is. And this brings us back to fMRI. In a court of law, where jurors are your neighbors and your community and probably not experts in neuroscience, a lot of people will be swayed by a colorful brain image.
They're going to put a higher weight on this than maybe they should, and possibly at the cost of not weighing this evidence appropriately in the context of the whole case. And this is part of the concern that some legal scholars have, and this has come to be known as the CSI effect. So you remember the television show CSI. This stood for Crime Scene Investigation, and it's a television drama about a team of forensic scientists and detectives in Las Vegas who use cutting edge scientific techniques to solve murders. So they go around each week and meticulously gather and analyze evidence from crime scenes, and each episode features a complex case with an intricate puzzle, and the CSI team has to solve this to bring the criminals to justice. Well, the idea with the real life CSI effect is that jurors come to expect what they've seen on TV in terms of magical machinery, where you hit a button to enhance the picture and then the computer enhances it and they see everything with clarity, where the plot twist requires that the investigator pull out some magical technology that suddenly solves the crime, or looking at the pedophile's brain with neuroimaging and knowing whether he did the crime or not. So jurors have come to expect this sort of thing, because you don't spend all your time in a courtroom if you're not a lawyer, and something like the television show CSI is their only window into that world. The problem is that it often turns out to be a false window, and when researchers do studies on this, they generally find that jurors see neuroimaging as the truth of the matter asserted. So we just spent a minute looking at the claim that you can measure pedophilia, and we noted that the brain signals might represent that you're a pedophile, or they might represent stress or anxiety, or disgust or shame or averting the eyes, or all kinds of things.
But that kind of nuanced analysis doesn't usually get done, and so neuroimaging often comes to be interpreted by the jury as the truth of the matter asserted. This is what scholars sometimes call the fallacy of neurorealism, and the fallacy is just that what you see in these pretty false-color images is the truth. In other words, somebody thinks, oh, you're capturing the moment of pedophilia in its raw form there, whereas, of course, the truth is that fMRI signals are not direct proof of the experience itself. As a side note, these questions of bringing visual evidence into the courtroom are not unique to brain imaging. They've been around for a long time. It goes back at least to X-rays. So when X-rays got introduced in the eighteen nineties, they immediately started showing up in court, and everybody was absolutely blown away by the idea of being able to see inside of a body. It's like magic. So what happened over a century ago is people asked this question of, can we use this as evidence in court? And the judges said at the time, as long as it was scientifically reliable, it could be introduced. But the same questions about influence on the jury came up, because there's a real power to seeing something. And of course, what we have currently with brain imaging is an even deeper issue, because it touches on all our notions of being human. For example, I saw a Time magazine cover a while ago, and the title read, What Makes Us Good or Evil? And the cover image was a huge picture of a brain scan, and there was a little picture of Mahatma Gandhi with a pointer to a part of the brain, and there was a little picture of Adolf Hitler with a pointer to a different part of the brain. And in case you haven't heard my other episodes on this, I want to make it clear: there is no such thing. You can't measure some spot in the brain to determine whether someone is good or evil.
And by the way, Friedrich Nietzsche wrote about this over a century ago: the words good and evil don't even represent something fundamental, but instead these words end up getting defined by your moment in time. What is good right now may be seen as evil in a century. These terms are defined by your culture. What you think is good might be seen as sacrilege by another group. So the idea that you could just measure something in the brain and say whether the person is good or evil really makes no sense. However, millions of people see this kind of Time magazine cover, and this is why legal scholars worry that brain images could be persuasive past the point that they should be. In the legal argot, this is known as something having undue influence. Brain images are influential because they take some abstract issue, like evil intent, and seem to nail it down to the physical. So this is why something like Federal Rule of Evidence four oh three plays an important role in asking whether something has undue influence, whether it sways people more than it should. Now, at the extreme, some people say functional brain images should never be allowed in the courtroom because of their influence. One solution that a colleague of mine suggested is that you ban the visual aspects of brain images from the courtroom, so you just have expert witnesses come onto the stand and tell you what they think is going on, as best they can. They're verbally presenting the results, not showing them. But these are tough issues, right? Because you can show a gory photograph from a crime scene, which can also prejudice an entire courtroom, or you can show a reenactment of a murder. But if you can't show a brain scan, that seems like maybe a double standard. So should you rule out all visual images, or allow everything? And if you heard episode nineteen, I talked about eyewitness testimony and how massively swaying that is to jurors.
You can have all sorts of expert scientific testimony, but then you have the person get up on the stand with tears and a cracking voice and say, I don't care what they say, I know that's the guy. And we're all moved and influenced by that, even though eyewitness testimony is so deeply fallible. So this is all just to say that the question of undue influence always has to be asked: compared to what? Compared to other technologies, compared to gory photographs of the crime scene, compared to acting out a rape scene or a murder scene, do those unduly sway a jury? So I hope what you see is that these are tough issues, perhaps tougher than you had intuited at the beginning of the episode. So let's wrap up. We often think that when a new technology comes along, like a new brain technology, it always gives useful information, and we might assume that courts start leveraging it right away. But there are complexities around this. For example, in an earlier episode, I talked about lie detection. How do you know when somebody is actually lying? There are lots of technologies that try to measure some version of this, but nothing can simply tell you the answer, because the whole concept of a lie is complex. Sometimes you might be telling the truth but you're factually incorrect, for example because you're honestly misremembering how something went, but you believe your memory. Or someone else might have no associated stress response, because they just don't care that they're lying. So when somebody comes to the courts and says, hey, I have a new lie detection technology, the judge can't just say, great, bring it to the case, because the judge first has to decide whether it should be admitted, or instead whether its promise will sway the jurors more than its value. We're all enthusiastic about the next stages of technology and being able to make important measures about what's happening in the brain.
But the legal system has to be very careful about this, whether by standards of general acceptance in the scientific community or by the choices of the judge as gatekeeper. Each new technology has to be weighed carefully for admissibility, every time, before it can enter the esteemed halls of justice. Go to Eagleman dot com slash podcast for more information and to find further reading. Send me an email at podcast at eagleman dot com with questions or discussion, and check out and subscribe to Inner Cosmos on YouTube for videos of each episode and to leave comments. Until next time, I'm David Eagleman, and this is Inner Cosmos.
