A Solution for Algorithmic Bias

Published Sep 7, 2019, 7:05 AM

Algorithms can determine everything from what ads you see on the internet to the interest rates on your loan. And they aren't always exactly fair. Nicol Turner Lee, a fellow at Brookings, and Talia Gillis, a Harvard graduate student, discuss what to do about algorithmic bias. Plus, Noah reflects on the latest Brexit news. 

Pushkin, from Pushkin Industries. This is Deep Background, the show where we explore the stories behind the stories in the news. I'm Noah Feldman. How did you find this podcast? Did you see an ad for it on your phone? If so, that ad might have shown up for you because, based on information about you that's out there on the Internet, a computer algorithm decided that this show might be the kind of thing that you would like. Algorithms like that are all around us. Some are far more consequential than others. But I'm glad you're listening to the show. But hey, if you're applying for credit, an algorithm could actually evaluate your creditworthiness, and the stakes are a little higher there than whether you're listening to this podcast or to Trevor Noah's. If I'm looking for a job, an algorithm could go through all of the job applicants to try to do a first cut before the employer decides who they're going to interview. In some cities, algorithms are even being used by the police to try to predict the probability that there's going to be a crime in a particular place and to decide where they're going to focus the police efforts and actually send the cops. Nicol Turner Lee is a fellow at the Brookings Center for Technology Innovation. She's been studying how algorithms like this work and how they fail. She recently co-wrote a report for Brookings about algorithmic bias, or in other words, how computers can be racist. So if we look at an algorithm like a black box, it starts with an input and it ends with an output. When it comes to the input, you know, big data has made it very easy to actually harness volumes of data that are about us, these reference points about individuals, and to create, you know, some input, or what we call training data, that essentially trains the algorithm to adapt to what our behaviors are. In many cases, what goes into the algorithm can be accurate. There are certain things that your listeners do online that are discrete, are objective, are true, in terms of your search queries, in terms of your online profile. But when you have developers that put in training data that in some respects may be biased or skewed, it creates challenges, or what technologists have called garbage in, garbage out. So what I mean by that: if you're developing an algorithm, and this is actually a case, so this is not something that we're making up, like the COMPAS algorithm, which was designed to help judges make better predictions on the amount of time that a defendant should be detained before sentencing. And let's say the training data used to train that algorithm is based upon criminal justice stats or criminal behavior stats. It's no secret that in this country, African American men in particular are more likely to experience arrest. And if they are more likely to experience arrest, which oftentimes leads to incarceration, they will overwhelmingly make up the majority of the training data. So that input, when it gets to the output, may disproportionately affect African American defendants by suggesting that they have a longer detainment before sentencing. So that's an example where we have some background bias in our society. Right, the system is already rigged against African Americans. Arrests are disproportionately of African Americans, jailing is disproportionately of African Americans. And then once the data is trained, the data that emerges will also reflect those pre-existing biases. But how do you know the problem was the algorithm?
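To make the "garbage in, garbage out" point concrete, here is a minimal, purely illustrative sketch in Python. It is not the actual COMPAS model; all groups, numbers, and features are invented for illustration. It trains a simple classifier on synthetic data in which one group is labeled "arrested" more often at the same underlying risk, and shows that the trained model then scores that group as higher risk.

# Minimal sketch (hypothetical data, not COMPAS): biased labels in, biased scores out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # 0 = group A, 1 = group B (made-up groups)
true_risk = rng.uniform(0, 1, n)          # identical underlying risk in both groups

# Biased labels: group B gets labeled "arrested" more often at the same true risk,
# mimicking historically skewed enforcement in the training data.
arrest_prob = np.clip(true_risk + 0.25 * group, 0, 1)
label = rng.random(n) < arrest_prob

# A zip-code-like proxy correlated with group membership also goes into the model.
zip_proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([true_risk, zip_proxy])

model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

# Same underlying risk, different scores by group: the input bias is reproduced.
print("mean score, group A:", scores[group == 0].mean())
print("mean score, group B:", scores[group == 1].mean())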
How do you know the problem isn't rather the underlying deep structures of racism in the United States that created the circumstances where arrests and imprisonment are disproportionately acts that happen to African Americans rather than to white people? In other words, that is a form of racial bias. How do we know that the algorithm is actually making it worse as opposed to just reflecting the existing realities of race? You know, I think it's both. I mean, on the one hand, I think that we do have the issue where it is representative of the existing societal concerns that we have. A mathematical model is not necessarily going to correct or remedy, I think, the historical biases that many groups have suffered. We're talking about structural and systemic racism and discrimination that just won't go away from a computer model. But I also think that part of what we're seeing, and this is my attempt to not say that developers are racist, is that it all depends on who's at the table when developing that algorithm. And so there's two things that are going on when you look at the tech space. One, you have a very limited pool of diversity that happens in these professions; you know, the data science profession in and of itself is underrepresentative of historically disadvantaged groups, women, people of color, older Americans, etc. And that can be problematic as these algorithms become much more ubiquitous in our society. And then you have the other issue of implicit bias, which comes from this unconscious understanding of how the world works. Let me give a good example of that. Amazon just a few months ago released an employment algorithm that was trying to find candidates for their engineering department. The training data that was used by the developers went on the historical data of that department, which tended to be white men, and as a result, the algorithm kicked out any resume that had any hint of a person being from an all-women's college or having a women's group represented. So that word, in and of itself, because it's not necessarily associated with engineering professions, cut off the opportunity for them to become a more diverse workforce in terms of that department. You know, Amazon later retracted that and took it off the market, but you see what I mean? Right, I mean, so you mentioned a group of fascinating things there. So one is the composition of the tech world, and there I think every reasonable person can agree that it can only be better on its own terms, totally independent of whether it affects the implicit bias phenomenon. But in general, we would love to see, we need to see as a society, much greater diverse representation of previously disadvantaged groups, or currently disadvantaged groups, in the tech world. Then you have the implicit bias example, where you're drawing on existing data. So you could sort of imagine why: Amazon wants to figure out who to hire, and so they put in the data about the people they have hired, because they think, hey, we're pretty awesome, and sure enough, that just suggests that they replicate the thing that they have already. Let me ask about the flip side of that now, Nicol, the potentially positive side. So take a couple of examples. You've mentioned determination of either bail or of criminal sentence on the one hand. Another example would be employment determinations.
These are all cases where we know from years and years of collected data that human decision makers are systematically biased against people of color, and we try to debias people by different methods. We have an appeals process where you can appeal and say I've been discriminated against, very hard to win. We have rules that say don't be biased, and we even have lots of decision makers who in their hearts are not biased, and yet you show them statistically what they've done over the long run, and sure enough their behavior does reflect bias. Now, don't algorithms potentially offer a liberatory solution, an equality solution here? Because the one thing that we can say about an algorithm is, unlike a human, if you give it a rule, it will follow the rule, and there's no "Oh gee, I thought I was following the rule, but I really wasn't." So if you, in principle, tell the algorithm not to consider race, or you tell the algorithm not to consider various factors that look like proxies for race, and you can even train the algorithm so that it is less inclined to rely on those proxies, then presumably you could have a decision maker making decisions about criminal justice, making decisions about employment, that are less biased, less racist than the best-intentioned human being. Because humans have an unconscious, and in our unconscious we might be biased, but algorithms don't have an unconscious mind. Yeah, you know, I have to say I've got to push back on that, and I'll tell you why. I think because we see more of the digital economy rush to market, we're not dealing with an environment where we see this level of diligence when it comes to knowing what the bias impact is on certain groups. Have we been able to use race as a proxy to create anti-bias experimentation? Are we auditing our algorithm in ways that we can ensure, from its development to its execution, that we're identifying what that bias may look like? And I think we need to go forward and put together some framework, not on all algorithms. I think Netflix does a pretty good job recommending the types of movies that I like to watch. I mean, that's true, but I'm not so sure that we should even assume that those aren't biased. I think those are too, because they're also going to pick out features. They also know what your zip code is. If they take account of zip code, then at least implicitly, they're also recognizing race, you know. I mean, that's right. So I'll just take my case. I'm an African American woman who's middle-aged, who loves to watch, you know, Black romance flicks, and let me tell you, every time Netflix recommends one, I'm happy. I mean, the only problem I get, you know, challenged by is when the content runs out and they're not investing in, you know, more programmers or developers to develop more content for people like me. But maybe that's because they're not feeding those to people like me, right? I'm a middle-aged white guy. They're not telling me to watch those. But maybe if they did, I'd watch them, I'd like them, and if that happened, there would be more development of that content. So in that sense, you know, it may be that there is an implicit bias there. They're just assuming that the reason that I don't watch them is that I haven't watched that many, you know, African American romantic comedies in recent years. But that itself is not a neutral fact, right, because no one's advertising them to me. Netflix isn't telling me to watch them. If they told me, that might have a different impact.
That's right. And, I mean, I always say to people, when the alt-right movement and the white conservative movement became a big thing, and hate speech on Facebook, I was kind of surprised that I didn't know that this was happening in the same playground in which I also, you know, visit. And that's because my algorithm is not made up of white supremacists. It's, you know, more liberals that sort of speak the same language and feel the same way about certain issues. What about the hard case, Nicol? I mean, you know, let's say I'm applying for credit and they've got information, you know, that, let's say I've allowed it, I didn't check the box to make it private, that says, hey, Feldman's been searching for payday loans. And the algorithm notices something that is intuitively very plausible, which is that if I'm so desperate that I'm looking into payday loans, probably that means I'm a slightly less good credit risk than someone who hasn't yet been searching for payday loans, because, you know, eventually I'm going to start looking for that the minute I really need it. So if I'm trying to lend money, that's useful, and there's nothing inherently racially determinative about that. It may not even be about wealth in general. It's just about how much money I have right this minute, or how little I have right this minute. That might be great from the standpoint of the credit company, and they might actually be able to do a better job of setting the correct interest rate for me based on that information. Does that still disturb you? Does it still make you think that that's a problem, or is that more like Netflix, it's not that big a deal, they're just, you know, fitting the data to the objective? You know, I think it's interesting because I struggle with it. I think that there are cases where, you know, our online behavior will indicate certain characteristics about us, though it is somewhat problematic, because what if I was searching for the payday loan for my uncle, right, not necessarily for myself? But in the end, I think we have to be very sensitive, or the algorithmic operator has to be sensitive, to the extent to which they're denying credit to these groups versus other groups. Right. One thing, for example, we say in the paper, which I think is just profound, is, as an African American who may be served more high-interest credit card offers, what if I see that ad come through and I click it just because I'm interested to see why I'm getting this ad? Automatically, I will be served similar ads, right, so it automatically places me in that high credit risk category. The challenge that we're having now, Noah, is that as an individual consumer, I have no way of re-curating what my identity is. I want to ask you a kind of final big-picture question, and it's, when you survey this whole environment, the possibilities of regulation, you're testifying on the Hill about it, you're doing reports on it: are you in general optimistic about the future of the possibility to regulate algorithms and to also turn algorithms to good with respect to fairness and equality, or are you, on balance, pessimistic and think that the terrible legacies of discrimination that we have in our country are likely just to be either continued or even made worse by virtue of this technological development? You know, I'm a technologist who's optimistic about the use of technology. You know, I think of it this way.
I think as technology evolves, we are faced with this challenge of whether or not the technology co-opts the user or the user has something to do with the technology's agency, right? And so I'm one of those people, particularly in this case of algorithms, which has just become so interesting to many of us because it's got this explainability portion, and then it has stuff that we don't even know how to dissect and unpack. I think what we're trying to do in this particular case, Noah, is just get ahead of it and to be much more proactive in talking about it. I mean, my goal is to bring to the forefront those algorithms that are allowing older Americans to age in place, those algorithms that are catching chronic disease and some of the worst debilitating diseases ahead of time because of the precision of the technology. We're seeing, you know, better customization of educational curricula for students because algorithms are able to identify learning styles much faster than a teacher can. And so I don't want us to be a society which turns our back against the technology and the innovation, because that's part of this whole new revolution of our shift from manufacturing into, I think, this digital age, where it does matter. I'm gonna tell you honestly, what really concerns me is the fact that the less information or the less diffused that these algorithms are, the more likely you'll be on the wrong side of digital opportunity, and the more likely that your community may not get some of the services that have come out of an algorithmic economy. I mean, imagine living in a community where they don't have your data, your data is not being harnessed for any type of productive algorithm. You find yourself in a state where you have more chronic disease, more incarceration, less levels of educational achievement. Better to be in than to be out. Yeah, and I'll just say this finally. I mean, I think for those of us that are in this space, I think we take into consideration the fairness and accuracy conversations as well as the ethical conversations, but our main goal is to deploy this responsibly. And if we can come up with more responsible frameworks that incorporate many of the aspects that we've talked about today, I think we're on the brink of actually unpacking what could potentially become the next big game changer for people, you know, that have had to rely upon wrong decisions or humans who are biased to do such. So I want to say, you know, in all honesty, I do agree with you that there's a promise of algorithms to sort of break down the barriers, but it has to be done responsibly, with the right people at the table to talk about it. Nicol Turner Lee, thank you so much for a really fascinating and rich discussion and for sharing your knowledge and expertise with us. Thank you, Noah, thank you for having me, appreciate you. My conversation with Nicol made me want to talk to somebody who was doing the kind of work that she was just talking about, someone who was thinking about how algorithms can break down barriers rather than create them. So I called up Talia Gillis. Talia is a PhD student in business economics at Harvard. She's also a former student of mine who holds a law degree from Harvard, and she's been researching how banks and other lenders use algorithms to determine interest rates on loans. She thinks that the way they're doing it right now isn't working, but she has an idea for a better way. Talia, thank you so much for joining me.
It's great to have you, and it's great to talk to you, as it were, on air about something that we've talked about lots and lots of times in the office: your research, which is very much on how we can fix the problem of algorithmic bias. Tell me what it is that is the core of your approach. What is your original idea about what we can do to make things better? So I think the core of the approach is, first of all, to recognize that it's very hard to know a priori what exactly the bias is or what direction the bias is going to go in, and so I'm very much focused on the credit pricing context. And in the credit pricing context, it's true that a lot of the kind of input variables into a credit pricing decision suffer from some sort of bias. But what's important to keep in mind is that some biases might get worse in the algorithmic context, but actually big data might, for other types of biases, make things better in a way. I think there's two large, separate categories of bias. So the first is what I call kind of inputs that result from a biased world, and the idea there is that there's some kind of pre-existing discrimination, and so there might be disparities between men and women, or between Blacks and whites, that kind of originate partially from that discrimination. So that's what you're calling biased world, and that is: you're going to apply for a loan, and if you make less money and you have more debt, you're not going to get as good terms for the loan. But that's not because of the lender in particular; that's because you live in a society where there's background sexism and there's background racism. The world is already biased. And so in that sense, that's biased world. And then what's the second category? And so the second category is inputs that are biased because they result from some kind of biased measurement. You can think of that as, for example, the way we measure someone's income. You know, we might put a lot of weight on someone who has one regular job, a regular paycheck, and we fully capture their income, and compare that to someone who kind of has multiple jobs, maybe isn't in a formal employee-employer relationship, like an Uber driver, and then we kind of discount their income or don't measure it properly, or don't have the ability to fully capture what they're earning. And so you might look at two people whose underlying income is similar, but because of the way we're measuring a person's income, we consider kind of the second person to have a lower income. So that's an example of bias in the way we're measuring, where, whether we mean to or not, we're systematically giving an advantage to someone who works a nine-to-five as opposed to someone who's in the gig economy. And that's what you're calling bias in measurement. And now, how would you go about measuring which kind of bias, or what kind of bias, is in fact found in the algorithm? So it's quite difficult in reality to perfectly distinguish between these two biases, also because very often, like the example I gave with income, it might be a combination of those two. And what do you do if you're in not a biased world but a biased measurement situation, where you're worried not about the background discrimination in the world, but more worried that you're measuring the wrong things in the algorithm and, as a result, you know, having a bad effect on communities of color?
So with biased measurement, what's interesting about the algorithmic context is that it actually might mitigate a lot of the harms that we're concerned about in the context of biased measurement. So if you take, for example, credit scores, there have been many claims that credit scores are biased against minorities, and that's because they measure certain qualities of creditworthiness that are more representative of, let's say, white borrowers. So it puts a lot of weight on people repaying previous loans on time, but it might not give any weight to people who regularly made, let's say, rent payments, which might actually also be a very good measure of a person's creditworthiness. So in a world in which we put a lot of weight on a credit score, if we moved to a world of kind of machine learning and big data, we might get a whole new richness of indicators of a person's creditworthiness. So let's say the algorithm had your full history of payments or your full kind of consumer history. Then we might be getting a lot more information out about a person's creditworthiness that was before only limited to the credit score. So this is an example where, if we could identify the bias in the measurement, then we could do better with the algorithm. Yes. What about a situation where the opposite is happening, where the algorithm is taking into account things that are producing measurement bias? How do we know that that's happening? So I think that the key is that we never truly know what's going on. Say more about that, because I think that scares a lot of people with respect to these algorithms. What does it mean to say we never truly know what's going on? Well, on the one hand, you could say it's scary. But on the other hand, you could say that any attempt to say it's necessarily bad or necessarily going to hurt populations is going to be a difficult position to defend, because it's kind of more of an empirical question that requires investigation rather than something that you can determine ahead of time. Now, Talia, that sounds logically correct to me. You know, you don't know for sure something's bad or good until you test it out and you have to examine it, and in principle I agree with you. But what would you say to someone who said, well, look, we know how the world works in general, and the world doesn't turn out so well, often, for traditionally discriminated-against groups, and so our instinct is that we expect to find discrimination rather than to find magic, whereby an algorithmic measurement does better than a human? How would you respond to that kind of systemic skepticism, which I think one very reasonably hears from people who are concerned that existing bias will be made worse by algorithms, rather than being optimistic about the capacity of algorithms to block certain kinds of bias? You'd have to, I mean, particularly in the credit context, you'd have to be very sensitive to the fact that credit markets are not working for large segments of the US population. So many people in the US don't have access to credit. Many people don't have credit histories; they don't have credit scores. So if you were defending the status quo in credit pricing, then you would have a really big difficulty in terms of actually kind of blocking the potential that this technological move has in terms of expanding access and creating kind of access to credit markets for populations that before have just simply been excluded from these markets.
So things are so bad right now, you're saying, that it would be crazy not to at least give this a try, because the existing status quo is deeply discriminatory? Yeah, I think there's kind of a serious entrenchment of pre-existing disadvantage in credit markets. And when you think that credit markets are a very important tool, not just because we don't want to replicate disadvantage, but also because credit markets play a very important role at producing wealth or allowing people to kind of come out of some kind of situation in which they were blocked from kind of expanding their possibilities. Because if you can't borrow money, you can't invest in yourself? Exactly, exactly right. Okay, so go back then to the question of how we run a test to see whether we've got biased measurement. How do we make sure that we're making things better with the algorithm with respect to measurement bias, not making things worse? How would you test that in the real world? So I think what's key is to have a kind of baseline, in which the key question is: when I move from that baseline to a new situation, how are things changing? And so the key question to me is, if we're comparing kind of a traditional pricing situation to an algorithmic pricing situation, what's happening? And to do that, what I would do is I would take kind of the algorithmic pricing function. And again, the big advantage in a way of the algorithmic context is that even before you actually apply your decision rule or your prediction to a new borrower who comes through the door, you're able to say something about the algorithm itself. So it sounds like, overall, the key tool of social science that you think needs to be used to help us overcome the possibilities of different kinds of algorithmic bias is the experiment. It's to experiment by setting a baseline, then experiment and see what happens when the algorithm is applied, and then compare them and then make a judgment afterwards. It sounds like you're saying we never know from just looking at an algorithm what's going to happen, whether it's going to make the world a worse place or whether it's going to make the world a better place. We always have to test it. And in a sense that seems to me very scientific, right, very economic-scientific: run the experiment and see what comes out on the other side. Do other people agree with you? I mean, how far out are you on the edge in calling for experiment in every case? Well, I think there's several difficulties. I think the first difficulty is kind of maybe a legal theoretical difficulty, and that is that traditionally, the way we've always thought about kind of discrimination and evaluating a lender for discrimination purposes was kind of the exact opposite. It was considering what are the inputs into the decision to price a loan, and not what's the outcome. So this whole way of thinking about testing is very focused on the outcome of a pricing rule. So in legal terms, instead of asking about discriminatory intent by the person making the decision, you're asking about whether there's a disparate impact on people at the end, in the outcomes. That's right. You want us to focus on outputs? Exactly, exactly. So there's quite kind of this fundamental shift that I think needs to take place in moving from being very focused on what goes into a decision or what goes into an algorithm, and saying there's not much progress that we can make by focusing just on the inputs. We really need to go to the outcomes and consider the outcomes more seriously.
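Here is a rough sketch in Python of the kind of baseline-versus-algorithm outcome comparison described above. The data and both pricing rules are entirely hypothetical stand-ins, not any lender's actual model: the point is only to show an outcome-focused test, pricing the same pool of applicants under a traditional rule and an algorithmic one and then comparing the average rate gap between groups under each.

# Hypothetical sketch of an outcome-focused audit: compare group-level rate gaps
# produced by a baseline pricing rule versus an algorithmic one on the same pool.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)                        # protected-class indicator (illustrative)
income = rng.lognormal(10.5, 0.4, n)                 # synthetic incomes
credit_score = rng.normal(680 - 20 * group, 50, n)   # assume a score gap from a "biased world"

def baseline_rate(score):
    # Traditional rule: interest rate driven mostly by the credit score.
    return np.clip(0.25 - 0.0002 * score, 0.03, 0.25)

def algo_rate(score, inc):
    # Hypothetical algorithmic rule that also uses income-like cash-flow data.
    return np.clip(0.22 - 0.00015 * score - 0.01 * np.log(inc / 20_000), 0.03, 0.25)

def gap(rates):
    # Average rate for group 1 minus group 0: the outcome disparity we care about.
    return rates[group == 1].mean() - rates[group == 0].mean()

print("baseline disparity   :", gap(baseline_rate(credit_score)))
print("algorithmic disparity:", gap(algo_rate(credit_score, income)))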
I think Talia's idea for fixing algorithmic bias has some profound implications. One of the things about algorithms is you can't really know what all the inputs are, because often, in the case of a sophisticated algorithm which is based on machine learning, we don't know how it's learning from the data. We just know that it's looking at every aspect of the data, but we don't know exactly what its true inputs are. In other words, we know what data it's training on, but we don't know what features of the data it cares the most about. And that's why Talia sees her approach of running experiments and looking at the outputs as the only potential solution. If people begin gradually to see things the way that Talia does, I wonder if that could lead us to a new paradigm more broadly about how we think of discrimination. It might lead us away from the old way of asking, well, was the person making the decision a racist, and towards a newer way of thinking, which says, who cares what the person was thinking about? What we want to see is whether the system as a whole is producing outcomes that we think are fair and just. That's all in the future, and right now the Trump administration is proposing regulations that actually go in the opposite direction. The Department of Housing and Urban Development has proposed a new rule that would make it harder for banks, or landlords, or homeowners insurance companies to be sued for using algorithms that result in discriminatory lending practices, and the Trump administration has gone to the courts more broadly to suggest that they think there needs to be a stronger showing of actual racist intent before discrimination claims can be leveled. So the trend line is not the line that Talia is calling for, nor is it the line that machine learning and artificial intelligence would suggest for us. In its most extreme form, the Trump administration approach might actually allow racist bias to be imported into the functioning of algorithmic systems, in exactly the way that Nicol wants to avoid. We'll be watching very closely going forward to see how those proposed Trump administration regulations are treated, how the courts address the question of bias, and most profoundly, how algorithms shape justice in the future. Now, I want to move to a new segment of Deep Background, something we're calling Sound of the Week. For me, this week, a defining moment in sound was this: I want everybody to know, there are no circumstances in which I will ask Brussels to delay. We're leaving on the thirty-first of October, no ifs or buts. We will not accept any attempt to go back on our promises or scrub that referendum. That's Boris Johnson on Monday, making a public statement in front of Ten Downing Street, where for the moment he still lives and works as the Prime Minister of the United Kingdom. But things have changed a lot since then. First, in a remarkable development, the Parliament of Great Britain, including a group of rebels from Johnson's own Conservative Party, actually voted that the United Kingdom cannot crash out of the European Union with a no-deal Brexit. Johnson will actually be required by this law to seek an extension from the European Union so that Britain does not leave the Union without some kind of a deal. Johnson's position has been all along that this would be terrible for his negotiating position with the European Union, since he's got nothing to threaten, but Parliament didn't care. Having lost this vote, Johnson then turned around and did two things.
One more shocking than the next. First, he kicked out of the Conservative Party twenty-one members of Parliament who had voted against him. This was sufficiently shocking that his own brother, Jo Johnson, actually resigned from his seat in Parliament and from Johnson's cabinet, saying that he felt a conflict between the national interest and his family loyalties. Then, having taken that radical step, Johnson asked Parliament to vote for a snap election. Now, it takes two-thirds of Parliament to vote for a snap election for it to happen right away, and the Conservatives didn't get it. So where we are now is that Boris Johnson can't get out of the European Union on October thirty-first, whether he wants to or not, and so far, at least, he still doesn't have a general election in which he could try to ask the voters to change the government in order to change this law. Stunning developments, historically significant moments in the history of British politics. What's their more profound meaning? I'll tell you what's been on my mind. There's a deep contradiction between the idea of a referendum that would allow the public as a whole to decide on an important question like whether to leave the European Union, and parliamentary democracy, which is based on the idea that the people choose representatives who then exercise their practical judgment and their wisdom to implement the policies of the country. Notice how this contradiction has driven Britain into a kind of paralysis that I would say even veers occasionally on madness. First, the public says leave, but it doesn't say how to leave. Then it tells its elected representatives, figure out how to do it, and they can't figure it out. They can't agree. Proposal after proposal gets blocked. Good idea follows bad idea follows bad idea follows good idea, and nothing seems to work itself out. And the whole time the politicians are saying, well, we can't reach an agreement, but we know we have to give effect to the will of the people in the referendum. This is the product of a mismatch between the idea that you can take a snapshot of public opinion at a given moment and call that a referendum, and the idea that the best way to run a government is in fact through electing representatives and having them use their judgment. So is there a way out of this contradiction for Britain? If I were an optimist, I would say that the British could call a new election and that the outcome of that election would somehow clarify whether people favored a change or not. But I don't actually believe that a new election is going to make things any clearer with respect to the contradiction between the referendum and ordinary voting. So then you might imagine, how about another referendum that asks people, well, did you change your mind? Do you want us to leave without a deal? If so, what kind of a deal? Notice that almost immediately you get into the kind of details that a referendum cannot answer. There's no way that a referendum can do anything other than ask, up or down, do you want this or do you want that? The only way that that problem could be solved would be if a specific deal were put in front of the British people and they were asked whether to take it or not. And even then, that would leave the question of what to do afterwards.
It emerges that the British have simply gone down a rabbit hole of contradiction between these two modes of democracy, direct democracy by referendum and representative democracy by parliament, and the only way they're going to get out of it is if they abandon one of these two modes of action. They're not going to abandon Parliament, which is, by most accounts, the oldest continuously running political body in any democratic country. At least I hope they won't. What they might learn is that if you've got something as good as Parliament is, maybe you should stay away from referendums. And if that happens, then over time the British will be able to re-establish the norm of parliamentary supremacy and parliamentary sovereignty. It ain't perfect, but it's worked for a long, long time. And if there's one takeaway from the Brexit fiasco, it's that when the British try to deviate from it, they do not know what they're doing. Deep Background is brought to you by Pushkin Industries. Our producer is Lydia Genecott, with engineering by Jason Gambrell and Jason Roskowski. Our showrunner is Sophie McKibbon. Our theme music is composed by Luis Guerra. Special thanks to the Pushkin Brass, Malcolm Gladwell, Jacob Weisberg, and Mia Lobel. I'm Noah Feldman. You can follow me on Twitter at Noah R. Feldman. This is Deep Background.
