Boston University School of Law Professor Danielle Citron says that deepfakes are just going to get more and more convincing, but there are still certain things we can do to stop their spread.
Pushkin. From Pushkin Industries, this is Deep Background, the show where we explore the stories behind the stories in the news. I'm Noah Feldman. We used to say seeing is believing, but that was before the invention of deep fakes. Increasingly affordable technology can actually create manufactured audio and video that no human eye or ear will be able to distinguish from the real thing. Does that mean we can't trust our eyes anymore, that we'll have to develop a pervasive sense of insecurity about what's actually true? Or could it open the door to something a little healthier, like a modest skepticism, or a world where we will be more cautious about what we believe? To discuss these hard questions, I'm joined by Danielle Citron, a professor at Boston University School of Law. Danielle is the vice president of the Cyber Civil Rights Initiative, which is a nonprofit devoted to the protection of civil rights and liberties in the digital age. She works very closely with lawmakers, law enforcers, and big social media platforms. She's written extensively about deep fakes, and in June twenty nineteen she testified before the House Intelligence Committee on the subject. Danielle, I'm just so thrilled that you're here to talk to us about deep fakes. So let's just start with some definitions for those who are just encountering deep fakes for the first time over the last few months. What counts as a deep fake, in your definition? Deep fakes involve machine learning technology that manipulates or fabricates, out of sort of digital whole cloth, video and audio recordings that show people doing and saying things that they never did or said. So they seem really authentic and realistic, but they're totally false. And they look really good, don't they? You know, right now the technology is still pretty much developing, so they look good, but they're not perfect. If you look closely enough, you can sometimes see some of the flaws, even just to the human eye, and technologists can easily identify, for the most part, that there are some flaws that show it's not real. But what's going to happen in the next six to nine months is that the technology is advancing so rapidly that, soon enough, technologists think it's going to be really hard, if not impossible, to detect or discern the difference between what's fake and what's real. And that's when we're going to run into serious trouble, when that happens. Yeah, I mean, when that happens, that means that any public figure, or not just a public figure, any person, could be depicted in a relatively convincing looking video clip as saying or doing something that they never said or did. So we have to, one way or another, start preparing ourselves for that world. I guess, is there any credible, realistic way just to ban this technology? I mean, could this just be a piece of technology that was flat out prohibited? I mean, I'm not allowed to have a surface-to-air missile in my backyard, so why do I have to be able to have this on my computer? Right? No, it's a great question about, you know, the difference between single use technologies whose only function is essentially illegality in your hands, right, if you had a surface-to-air missile. Deep fakes are different because, you know, we can make deep fakes for good, we can make deep fakes to create art. You know, there's a deep fake of President Obama teaching us about the phenomenon of deep fakes. 
We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things. And so you can imagine so many interesting ways that deep fakes can help teach history. There's a man who has ALS, and he sort of spearheaded the use of the technology to create audio that matches his thoughts so that others can hear him speak, because he's unable to speak now. And there's a story about a man who's a quadriplegic who can't make love to his wife, and they used photographs of him to create a deep fake sex video of him, consensually, and his wife, so they can see themselves making love. So there are all sorts of ways that deep fakes can be pro social. And so, you know, the question is, should we ban the technology? And I would say we're going to have some problems with banning it for all sorts of practical reasons, like the cat's out of the bag. I don't think we can ban it, and I don't think we should ban it. It sounds like in the world that you're describing, we get very quickly to a category that you've written a lot about, and that's a very complicated and contentious topic, and that's the topic of consent. You know, in the case of the person suffering from ALS that you mentioned, it's not just that they could create a video, but I suppose they could create an avatar, and then the man's wife could consensually have virtual sex with him, and they would be happy about that, and we would perceive that as a pro social result. The problem arises when it's an image of a person taken without his or her consent. So are we, at least in this realm, the realm of, let's call it, sex and deep fakes, before we get to politics and deep fakes and other kinds of deep fakes, is this a realm where we need to fall back on the familiar, albeit imperfect, rules that we're used to using to regulate sexual conduct? I mean, sexual conduct itself is something that we see as pro social, except if it's not consensual, then it's very much antisocial, and it's rape and it's a crime. It's true that with deep fakes, as with real life physical interactions, so often where the rubber hits the road, and where it's legally and normatively significant, is where the person does not consent. It's much in the way that we think of sexual interactions that are physical between two individuals, or three, or however many. When we're talking about video and audio, much in the way there's a big difference between porn and obscenity, right, or porn and non-porn, it all has to do with consent. So, am I pro pornography? Go for it if that's what you want to do. But if you are made into a sex object without your consent, then by my lights, it's something that we can regulate and should regulate. So obviously this is an important and significant developing area of law, and I want to ask you what I think is the natural question about that, which is, how do you draw the line between forms of fiction that have always implicated taking a person's image or their identity, and which historically we've treated as protected by freedom of expression, from what you're setting out to prohibit? So Curtis Sittenfeld writes a novel, a pretty good novel, about Laura Bush, the former First Lady, and, you know, she's more or less appropriated her whole character for the purposes of the novel. And it's not a pornographic novel by any stretch of the imagination. But such a novel could include sex or other forms of intimacy, depicted literarily, and we think of that as protected speech in some way. 
How would, how do you think about drawing that kind of line? So, a few things. The first is that audio and video are different from text in important ways. Now, audio and video have this visceral hold on us. We assume that they're true because we trust what our eyes and ears are telling us, whereas how human beings respond to text is mediated in many respects through our thoughts. And so, unlike text, audio and video we react to in a way that bypasses thinking. We just immediately assume it is true. It has a different kind of impact than text does. So I think that's the first difference between the novel, which imagines Laura Bush doing and saying things, versus video that represents to the world that it actually happened. Right? One is clearly fiction. It has to be mediated by our thoughts. We recognize it's being conveyed as fiction. And by contrast, audio and video in the deep fake context, so I'm not talking about audio and video that is clearly parody, and that we are using in ways that make clear it is, let's pretend there's a label on it that says this is a parody. We're talking about audio and video that's designed to hide the fact that it's false. It's being passed off as real, and the way we human beings perceive audio and video is as if it is a true representation of things that have happened in the world, as if it's real evidence. So that is, I think, one really important piece of why digital impersonations using audio and video not only have a powerful impact and therefore can have significant harms, it's how the creators are passing them off as real rather than as fiction. We see this play out with defamation. We can be as opinionated as we want to be, and everyone knows it's what we think. But when it comes to factual falsehoods that we're saying are true, we can punish those. So that's another piece of it, why there's a big difference between the deep fake that's passing itself off as real, soon we won't even be able to tell the difference between fake and real unless we do some really serious journalistic digging into where someone was at a particular time, versus the novel. Danielle, there are several really fascinating things in what you just said, and I want to break it up into pieces if we could. So I want to start with this question of the disclaimer, or how the information is presented. In the way you were describing it now, it sounds like, though, if someone produces a video where they take my image and then they do something with me that I don't want to have done, you know, they put me into a sex video or something like that, if they put a little note on it that says, disclaimer, this is a deep fake, it's not Feldman, then in principle it sounds like maybe it wouldn't be covered by the prohibitions that you're talking about. And my instinct is that that wouldn't be very satisfactory. If our concern is about the appropriation of my identity, then a disclaimer, even if it's a chyron running the whole time that the video is playing, I don't think it's going to make me feel that much better about the appropriation of my image, is it? So, yeah, let me back up for a second. I was using defamation, and the kinds of defamation harms, harm to reputation, may be alleviated with labeling. And so the question is a good one, pressing me on whether disclosure in and of itself would be a remedy here. And I always think of it as kind of like a half measure, and it doesn't remedy the invasion to, let's say, for example, sexual privacy. 
So the example that you gave, Noah, is one that I would immediately use as well, which is, you take someone's sexual identity and you insert their face into pornography. And what you've done is you've appropriated their ability to express themselves sexually in ways that they do not want to be seen or depicted, and that in itself is a harm. It's a harm that resonates both with autonomy and with the consequential emotional harms. And so you're right that even with the label there, in the case of an invasion of sexual privacy, there's harm both to reputation, which may be somewhat solved by a label like, hey, this isn't real, friends, I'm not passing this off as fact (that's the harm to reputation in defamation), and there are other harms, and that includes invasions of sexual privacy that labels cannot cure. And as well, if it's in a Google search of your name, a label ain't doing you any good, because people won't see it, and there isn't a meaningful way to respond to this fabricated video and audio that people are going to believe is you. That's the second thing that I really wanted to ask you about in your fascinating comment before, and that is this idea that audio and video are somehow different because we really believe something about them. And here, too, I think there are maybe two different parts. One is the idea that maybe there's something almost neurochemical about viewing audio and video that's different from other forms of representation. And I wonder if you really believe that. I'm not sure I believe that. I mean, if I watch, you know, a Jason Bourne movie, I know it's not real even though it's got audio and video in it, because of the context in which I'm experiencing it. So we're perfectly capable of being skeptical about truth in the cases of audio and video. And I'm not completely convinced that if you put us under a functional MRI scan, our brains would look so radically different in the two contexts. And then the second possibility is that you were saying that we have a kind of cultural expectation that if it's on audio, or if it's on a video and it looks like me, that it's me doing it. And there I want to raise the topic for us to talk about of how that's going to change. It seems almost inevitable that that expectation that we still have as of twenty nineteen may be gone even by twenty twenty. So those are two questions in one. But maybe start with the first one: is there something unique about audio and video? And then we'll go on to whether our expectations are going to change. So what is unique, and this is really important, I want to underscore, is the notion of context and content. When a deep fake is most effective, meaning when it convinces you that somebody's done something and said something they've never done, it's all about context and content. The Jason Bourne movie you mentioned, in that context and content, I know it's a movie, I go in understanding that it's all this alternative universe that I'm watching on screen. But in a context in which the video is being passed off as real and looks damn real, then people are going to take it as real. So let me just give an example. Rana Ayyub is a journalist in India, an investigative reporter, and her work has attracted lots of vitriol because she challenges people in power. And there was a deep fake sex video made of her. 
She looked at it and she knew intellectually it wasn't her, but immediately after seeing it, she went and vomited, and she couldn't sleep or eat for, like, weeks, because seeing herself demeaned and engaging in a sex act she never engaged in hit her, she explains, in the gut. She could not shake the image of it in her head. And when thousands and thousands of people saw it, because it was shared to, like, half of the phones in India, people apparently had it, people believed it was her and they confronted her offline. There was the suggestion, you know, people were urging others to confront her and have sex with her, to rape her. So in that context, the video was taken as true and it moved people. It moved people to write to her; her inbox was so full with sex solicitations from men she didn't know, she was overwhelmed. I realize, you know, we're talking about how do we distinguish between circumstances, and you ask, rightfully so, you know, isn't it true now that our radar is up and we know when to distinguish what's false and what's real? Well, in the case of a movie, we do, our radar is up and we have all the indicia that it's fake. But you know, with deep fakes that are really convincing, we won't have our radar up, and especially in certain contexts and with certain content, the whole point of it is to trick us into believing it's true. It's an incredibly horrifying story about Rana Ayyub. I want to ask, though, about how we gradually change our expectations. So, you know, it's a really interesting subject in the history of photography, how people certainly believed that photographs were always real, not only in the late nineteenth century but well into the twentieth century. And two of the great examples that the historians of photography like to use are the photographs of spirits, of ectoplasmic angels and fairies and sprites, which became super popular at the end of the nineteenth century. Arthur Conan Doyle, who wrote the Sherlock Holmes stories, was interested in them, and J. M. Barrie, who wrote Peter Pan, was interested in them. And lots of people were fascinated by these photographic images, which appeared to give actual form to these supernatural spirits. And of course now we look at it and we say, well, that's obviously a fake photograph. The other example that they like to give is Stalinist Russia, and there's a famous example of a photograph of a whole group of senior Communist Party figures, and then one by one by one, as each was purged, he, they were all men, was airbrushed out, or, as I say, maybe they didn't have airbrushing, but he was removed by earlier technological means from the photograph, until in the end you basically just had Stalin on his own. And at every moment, you know, the Communist Party kept putting these photographs out there, and the public sort of in some way believed them, even if some people might have remembered that this photograph looked a little different than it had looked the last time. So those are both examples from the history of photography about how the technology could elicit the expectation of reality for some period of time, but then, as people get familiar with the technology, our expectation changes. And so I'm wondering if the tragic story that you describe for Rana Ayyub isn't something that will last for as long as it takes for people around the world to realize what deep fakes are. And let's say it takes six months or a year or a couple of years for people to figure that out. 
But maybe we're on the cusp of a change where no one's really going to believe what they see or hear on audio or video anymore. And I think that leads us to two really troubling places, like we're at a fork in the road. And the first is, if we just decide that we can't believe anything, then we're going to just believe what we want to believe, just forget the truth. You know, our confirmation biases: we believe information that accords with our worldviews. And so one possibility, in a world in which we are so skeptical of audio and video evidence, is that we simply throw our hands up and say, uh, I'm just going to believe what I want to believe, truth be damned. Right, that's one possibility, and that's one nightmare for the pursuit of truth. And sometimes it seems like we're already in that world. If you watch Fox and CNN at the same time, or Fox and MSNBC, and you hear their alternative versions of reality, it sometimes seems like everyone's already gone down that rabbit hole. Right. So that's one rabbit hole. That's one of my nightmares. You know, it's like Alice through the looking glass. That, to me, is one really bad path that, you're right, we are already on. And then the second challenge for truth, and you alluded to this earlier, is that we might also be in a space, and this could be liminal, it doesn't mean it's forever, and we've seen it already, with politicians looking at real evidence of wrongdoing and saying, ah, you can't believe your eyes and ears, it's a fake. Think about what President Trump tried to do with the Access Hollywood tape. He sort of wises up to the idea of a deep fake after he admits to having, of course, been in the conversation with Billy Bush. More recently, he said, that wasn't me, I'm not sure that was me. He tried it out, you know. And Bobby Chesney, a professor at UT Austin and my co-author, and I call that the liar's dividend, the possibility that liars will leverage the phenomenon of deep fakes to run away from and to escape accountability for their wrongdoing. Yeah, that liar's dividend seems almost inevitable unless there's some magic, you know, super technology that will enable us to distinguish and figure out, through careful forensic analysis, whether a particular audio or video clip is or is not a deep fake. So let me take a concrete, relatively recent example, and this is not the deep fake but the so-called cheap fake, the Nancy Pelosi video, which slowed down her speech and distorted her speech to make her sound like she was either disoriented or maybe drunk. We want to give this president the opportunity to do something historic for our country. In that case, what was the technology that enabled observers to prove that this was not actually Nancy Pelosi's voice? I mean, I guess it was partly finding the original video and audio where she sounded normal. We want to give this president the opportunity to do something historic for our country. Yes, the key to uncovering that it was a cheap or shallow fake was that there was existing audio of what really happened. And so once you can check the real audio and visuals next to the fabrication, well, it was a manipulation, really, not a fabrication, then you could say they played with it, they slowed down the speech. But what's challenging, and some of the worries that Bobby and I have are about privacy, is that we don't have self-surveillance all the time. You know, Nancy Pelosi, when she's giving public remarks, somebody's taping it. 
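To make concrete just how low-tech the manipulation being described is, here is a minimal Python sketch of that kind of slowdown and of the simple comparison against the authentic recording that debunks it. The filenames and the 75 percent factor are hypothetical, and this is only an illustration, not a claim about the tooling actually used on the Pelosi clip.

```python
import wave

# Minimal sketch of a "cheap fake" slowdown: no machine learning, just the
# same audio samples replayed at a lower rate. Filenames are hypothetical.
with wave.open("remarks_original.wav", "rb") as src:
    params = src.getparams()
    frames = src.readframes(params.nframes)

with wave.open("remarks_slowed.wav", "wb") as dst:
    dst.setnchannels(params.nchannels)
    dst.setsampwidth(params.sampwidth)
    # Writing the identical samples at 75% of the original sample rate makes
    # the clip play about a third longer and lower pitched, the slurred
    # effect described in the conversation.
    dst.setframerate(int(params.framerate * 0.75))
    dst.writeframes(frames)

# Debunking relies on the authentic recording existing: even a crude
# duration comparison exposes this kind of manipulation.
with wave.open("remarks_original.wav", "rb") as real, \
        wave.open("remarks_slowed.wav", "rb") as fake:
    real_secs = real.getnframes() / real.getframerate()
    fake_secs = fake.getnframes() / fake.getframerate()
    print(f"duration ratio: {fake_secs / real_secs:.2f}")  # ~1.33 for a 25% slowdown
```

The check only works because the original recording exists; as the conversation goes on to note, most private individuals have no authentic baseline to compare against.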
But the everyday person doesn't have perfect surveillance, one hopes, right? Although if you're carrying your phone around, you may actually de facto have perfect surveillance of yourself, whether you know it or not, or if you're speaking while Alexa is on. I mean, we usually think that's a problem, that you're constantly being surveilled wherever you go, but in your account it might actually be a good thing. I mean, it would be very easy to build a feature into your phone, I mean it's already in there, that listens to everything you say, and then you could always pull it out and say, well, as a matter of fact, I didn't say that. And we might. I mean, so what Bobby and I imagine in our work together is that possibility. You know, our phones right now are basically GPS trackers, and they track everything we do and say with those phones, through text and through our sending photos. But they're not on in the sense of, they're not microphone recording devices, unless somebody installed a cyberstalking app on our phone. Right, so we're not, although that's not true if you have Hey Siri on there. Oh please, didn't you turn that off on your phone? Sorry? Well, I was going to until our conversation, because, I guess, you know, I thought to myself, I wouldn't want everything that I say to be recorded. But now I think to myself, if someone's going to make a fake video of me saying or doing something, maybe it wouldn't be so terrible for me to have a twenty-four seven record of everything I've said and done. Right, and what Bobby and I are worried about are market mechanisms. That, let's say, for people who have a very public persona, having a lifelogging record of everything you do and say and every interaction, sort of like the novel The Circle, you know, by Dave Eggers, that possibility may not be so much a choice for people whose actions have high potential for harm for very powerful entities like a company. So maybe someday, you know, you have a company CEO for whom part of the deal of getting a very high salary is that you have to be under surveillance twenty-four seven, so that you can debunk the deep fake that, if timed just right, could throw off the IPO. And my worry is that in the broader term, we may have an unraveling of privacy. It's a concept that Scott Peppet wisely wrote about, like, eight years ago, the notion that we'll see market pressures that would require us to give up our privacy for cheaper insurance, right, for our cars or health. We may see that same move. It's small now, you know, wearing a Fitbit for your insurance company, but it may then become something more pervasive in self-surveillance. And it is true, it would be able to debunk the deep fake that's timed just right in an attempt to hurt the CEO and an attempt to hurt the company. But it's got longer term consequences that I want us all to think about before we rush to embrace what I think are coming, market solutions. Because, you know, folks get in touch with both Bobby and I; companies are now seeking our advice on what we should tell CEOs about what they should do to protect themselves. And what are you telling them? What we often say is there's a range of possibilities, and one of them, of course, is a kind of lifelogging service, and I want them to think about the broader implications of doing that, because I think it's a bad idea. Do the technologists think that in principle it will not be possible to detect a true fabrication? Because, after all, what is digital audio and video? 
A bunch of zeros and ones. And if it's possible to make these things, maybe it's possible to mask the fact that they've been made. Or do the technologists believe that, as with much technology, there will always be someone one step ahead? So someone makes a really good deep fake, someone else develops a technology to detect that deep fake, because that would seem to be a possible solution that would get you around the lifelogging alternative, where you always have to be prepared to say, that never happened. That is to say, if there were some magic bullet technology, it would have to change over time, like in an arms race, but it could actually detect that. Does that seem, I mean, what do the technologists say when you ask them about that? Do they think it's pointless? Do they think it's possible right now? I wish I could say I had a firm answer, and normally I am pretty bullish about the potential role for technology to at least intervene in some of these problems. But my sense of the broader view of technologists is that they're skeptical, you know, about the possibility of a perfect tool of detection. And we certainly need, as human beings, better radar for fakery. We just believe what we want to believe, and we click and we share and we don't think, and we need to become better digital citizens in that way, and think before we share, think before we like, not be such immediate consumers and sharers of information. And we need to do a whole lot of education with the media, too, because the media is a source of amplification. You know, I know that the Wall Street Journal has begun educating journalists about the phenomenon of deep fakes and what they're going to do about it, and I imagine that, you know, the most reputable outlets are going to be doing the same, because there isn't an easy technical way to figure it out. At least now there is, but soon enough there won't be. And so how we're going to figure out the truth is, God bless journalists, because most people won't have, you know, constant surveillance on themselves. Even if they have their GPS tracker or their phone, right, it won't have recorded the interaction between people unless they've done it consensually. And so it's journalism, it takes hard work, right, to interview people. But it's not like there's nothing we can do. So many people say, Danielle, you're a nihilist, you're saying deep fakes is just a crisis we can't tackle, and that's not true. It's a lot of the same old problems, but exponentially, you know, spread. Well, thank you very much, Danielle. I'm super grateful to you, and I certify that this is a real conversation that we actually had and was not created by deep fakes, cheap fakes, or anything in between. Right? Yep, I certify this is real audio. Thank you, Danielle. That was really fun. I really enjoyed it. Thank you, Noah, for having me on. From talking to Danielle Citron, I'm convinced we're really only at the beginning of finding our collective solution to the challenges posed by deep fakes. Think of the example of photography, where we once believed that whatever we saw was true, and we gradually came to see things in a more complex way. Should our response be to pass laws that regulate what kinds of images can be created, fake or real? Or should the answer be to rely on self-regulation or technological solutions? These are challenges that would look very, very different if we were in the midst of the invention of the technology of the photograph 
than they look today, at a distance of one hundred and fifty or one hundred and seventy-five years. So we're going to have to drill down on just how much deep fakes do fool people, and that's something that we're going to have to watch very closely going forward. The first couple of very popular deep fakes are going to lead to great confusion. But over time, we may observe that the public becomes acclimated to the potential uses of deep fakes and becomes more skeptical. As our skepticism rises, the need for regulation may well go down. Danielle remains extremely concerned about the sexual privacy of individuals that can be violated by deep fakes. That's an independent and serious concern, and it will require careful regulatory attention to balance human beings' interests in privacy and dignity against the countervailing concerns of freedom of expression. Now, our sound of the week. That was the Attorney General of Israel announcing the indictment of Prime Minister Benjamin Bibi Netanyahu on a range of corruption charges. Benjamin Netanyahu is not just another Israeli politician. He has served longer in the office of Prime Minister than any other person in Israel's history, including the legendary first Prime Minister of Israel, David Ben-Gurion. He's become a kind of permanent fixture in the minds of many people on the Israeli political scene. He's so hard to displace that, even though not only one but two recent elections led to his failing to get a meaningful majority, he's still the Prime Minister, because although he couldn't form a government, neither could his main opponents. The idea that a sitting prime minister in a democracy, admittedly a struggling democracy in Israel's case, but a democracy nevertheless, will actually be put on trial for crimes in real time is a kind of extraordinary reality for Israeli politics, which are always full of surprises. Is this a good or a bad thing for the state of Israeli democracy? This could really go in one of two different ways. There is a pessimistic view of it, and it's hard not to sympathize with that pessimistic view. It says that Israel has some tendencies that are pushing it in the direction of other democracies, not excluding the United States under Donald Trump, but also including places like Poland and Hungary, where increasingly dominant single leaders serve in office for a long time and call on impulses of nationalism and populism to retain power. In this context, the suggestion that Netanyahu may actually also be guilty of crimes of corruption would tend to suggest that Israeli democracy is not going in a good direction. After all, what could be more embarrassing for a democracy than evidence that a sitting prime minister is guilty of charges like these? Yet there is an alternative picture, and it's this: instead of being tied in knots as a consequence of charges that its leader has acted corruptly (sound familiar, anyone?), Israel actually seems to be taking on the question directly. An independent attorney general actually chose to charge the prime minister with a crime. This is after Netanyahu had announced his hopes that the Knesset, Israel's legislature, would actually pass a law rendering him immune from criminal prosecution in the middle of his term, a law that so far has not been enacted. In some ways, that could be seen as a big win for a democracy. After all, the democracy must be pretty strong on the dimension of fighting public corruption 
if the attorney general can actually bring charges against a sitting prime minister, and if that prime minister is one of the most powerful in the history of the country. That's what you would ideally want to see in a democracy: the principle that nobody is above the law. That can't happen in the United States, where a sitting president, as a practical matter, can't be charged with a crime, because the Attorney General in the United States works for the president and is pretty unlikely to charge his boss. Whether this entire episode will turn out to be good or bad for Israeli democracy is something we will learn in the coming months. Of course, it's possible that Netanyahu could be charged, tried, and acquitted. It's also possible that, through some complex combination of results, he might actually avoid going to trial. Yet it's also simultaneously possible that he will be subject to judicial process like any other citizen, and the consequences of that development for Israeli democracy could be very far reaching. This is a story that we'll keep on watching going forward. It's one that can only continue to be fascinating. Deep Background is brought to you by Pushkin Industries. Our producer is Lydia Jean Kott, with engineering by Jason Gambrell and Jason Roskowski. Our showrunner is Sophie McKibben. Our theme music is composed by Luis Guerra. Special thanks to the Pushkin brass: Malcolm Gladwell, Jacob Weisberg, and Mia Lobel. I'm Noah Feldman. You can follow me on Twitter at Noah R. Feldman. This is Deep Background.