Creating trust and transparency in AI isn't just a business requirement; it's a social responsibility. In this episode of Smart Talks, Malcolm talks to Christina Montgomery, IBM's Chief Privacy Officer and AI Ethics Board Co-Chair, and Dr. Seth Dobrin, Global Chief AI Officer, about IBM's approach to AI and how it's helping businesses transform the way they work with AI systems that are fair and address bias, so AI can benefit everyone, not just a few.
This is a paid advertisement from IBM.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
Hello, hello. This is Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM about what it means to look at today's most challenging problems in a new way. I'm Malcolm Gladwell.

Today I'll be chatting with two IBM experts in artificial intelligence about the company's approach to building and supporting trustworthy AI as a force for positive change. I'll be speaking with IBM's Chief Privacy Officer, Christina Montgomery. She oversees the company's privacy vision and compliance strategy globally. "Looking at things like immunity certificates and vaccine passports: not what could we do, but what were we willing as a company to do? Where were we going to put our skills and our knowledge and our company brand in response to technologies that could help provide information in response to the pandemic?" She also co-chairs IBM's AI Ethics Board.

I'll also be talking with Dr. Seth Dobrin, Global Chief AI Officer at IBM. Seth leads corporate AI strategy and is responsible for connecting AI development with the creation of business value. Seth is also a member of IBM's AI Ethics Board. "We want to make sure that the technology behind AI is as fair as possible, is as explainable as possible, is as robust as possible, and is as privacy-preserving as possible."

We'll talk about the need to create AI systems that are fair and address bias, and how we need to focus on trust and transparency to accomplish this. What might the future look like with an open and diverse ecosystem and governance across the industry? There's only one way to find out. Let's dig in.

One of the things I'm curious about is the origin of this concern about the ethics and trust component of AI. Was it there from the start, or is this a later kind of evolutionary concern?

About ten years ago, when we started down this journey to transforming business using what we think of as AI today, the concept of trust came up, but not in the same context that we think about it today. The context of trust was really focused on: how do I know it's given me the right answer so that I can make my decision? Because we didn't have tools that help explain how an AI came to a decision, you tended to have to get into these bake-offs, where you had to set up experiments to show that the AI was at least as good as a human, if not better, and understand why. Over time that's progressed, as AI has started to come up against real human conditions, and I think that's when we started thinking about what is going on with AI when it relates to bias. About five to eight years ago there was an issue with mortgage lending, particularly related to zip code, that started introducing biases against people of certain races. I think those things combined have led us to the point where we are today. Plus, the social justice movement over the last two years has really accelerated a lot of the concern.

I noticed you're a lawyer by trade. It's an interesting subject, because it seems like this is where AI experts like Seth and lawyers work together. It sounds like a kind of classic cross-disciplinary endeavor. Can you talk about that a little bit?

It's absolutely cross-disciplinary in nature. For example, on our AI Ethics Board, I'm the co-chair. The other co-chair is our AI Ethics Global Leader, Francesca Rossi, who's a renowned researcher in AI ethics, so she comes with that research background.
So we had a board in place, an AI ethics board, before I stepped into this job, and there were a lot of great discussions among a lot of researchers and a lot of people who deeply understood the technology, but it didn't have decision-making authority. It didn't have all stakeholders, or many stakeholders across the business, at the table. So when I came into the job as a lawyer and as somebody with a corporate governance background, I was tasked with building out the operational aspects of it: making it capable of implementing centralized decision-making, giving it authority, and bringing in perspectives from across the business and from people with different focuses within the IBM corporation, lots of different backgrounds. We have very robust conversations, and we also engage individuals throughout IBM who, either out of advocacy because they care very much about the topic or because they're working in the space and have thoughts around it, are doing projects in the space or want to publish in the space. We have a very organic way of having them be involved as well. It's absolutely necessary to have that cross-disciplinary aspect.

At the beginning of your answer you talked about robust conversations, a phrase I love. Can both of you give me an example of an issue that's come up with respect to trust and AI?

So one example might be the technologies that we would employ as a company in response to the COVID-19 pandemic. There are a lot of things we could have done, and it became a question not of what we were capable of deploying from a technology perspective, but whether we should be deploying certain technologies, whether it be facial recognition for fever detection or certain contact-tracing technologies. Our Digital Health Pass is a good example of a technology that came through the board multiple times, in terms of: if we are going to deploy a vaccine passport, which is not necessarily what this technology turned out to be, but looking at things like immunity certificates and vaccine passports, not what could we do, but what were we willing as a company to do? Where were we going to put our skills and our knowledge and our company brand in response to technologies that could help to either bring about a cure or help to provide information in response to the pandemic?

COVID is a great example because it highlights the value and the acceleration that good governance can bring. The way that we as an ethics board laid out the rules, the guardrails if you will, around what we would and wouldn't do for COVID helped people just do things without worrying whether they needed to bring them to the board. It also made very clear that for certain types of use cases, we need to go have a conversation with the board. And it provided a venue for us as a company to make risk-based decisions: okay, this is in a little bit of a fuzzy area, but we think, given what's going on right now in the world and the importance of this, we're willing to take this risk so long as we go back and clean everything up later. So I think that's really important. Number one, governance is set up so that it accelerates things, not stops them. And number two, there's clear guidance: it's not just "no," it's "here's what you can do and here's what you can't do," helping the teams figure out how they can still move things forward in a way that doesn't infringe on our principles.
Yeah, I want to get a concrete sense of how a concern about trust and transparency would guide what a technology company might do.

Here's a real example. If I want to make sure that people are wearing face masks, and I just highlight that there is someone in this area who's not wearing a face mask, without identifying the person, I think we'd be okay with that. What we wouldn't be okay with is if they wanted to identify the person in a way that they did not consent to and that was very generic: I'm going to go through a database of unknown people and match them to this person. That would not be okay. A fuzzy area would be: I'm going to match this to a known person, so I know this is an employee and I know this is him. That's something we as a board would want to have a conversation about. If this employee is not wearing a mask, can I match them to a name, or do I just send security personnel over because the employee is not wearing a mask? That's a harder one, I think, and it's a real-world example that we faced during COVID.

Yeah. Let's talk a little bit about diversity and shared responsibility as principles that matter in this world of AI. What do those terms mean as applied to AI, and what's the practical effect of seeking to optimize those goals?

First of all, we need to have good representation of society doing the work that impacts society. A, it's just the right thing to do. B, there's tons of research out there that shows that diverse teams outperform non-diverse teams. There's a McKinsey report that says companies in the top quartile for diversity outperform their peers that aren't. So, tons of good research. The second thing is that you just don't get as good results when you don't have equal representation at the table. There are lots of good examples of this. There was a hiring algorithm that was evaluating applicants and passing them forward, but the vast majority of the applicants in the past for this company were male, and so female applicants were just summarily wiped out, regardless, to some extent, of their fit for the role.

I wanted to ask Christina: a project comes before the board, and a conversation might be, "the team you put together and the data you're looking at are insufficiently diverse; we're worried that you're not capturing the reality of the kind of world we're operating in." Is that an example of a conversation you might have at the board level?

Well, I think the best way to look at it is what the board is doing to try to address those issues of bias. For example, we've got a team of researchers who work on trusted technology, and one of the early things they did was deploy toolkits that help detect bias, that help make AI more explainable, that help make it trustworthy in general. Those tools were initially very focused on bias, and they deployed them to open source so they could be built on and improved. Right now, the board is focused more broadly, not looking at an individual problem in an individual use case with respect to bias, but instilling those ethical principles across the business through something we're calling Ethics by Design.
Bias was the first focus area of this Ethics by Design, and we've got a team of folks, led by the Ethics Board, who are working on the question you asked, Malcolm: how do we ensure that the AI we're deploying internally, or the tools and products we're deploying for customers, take that into account throughout the life cycle of AI? Through Ethics by Design, the guidance coming out from the board starts at the conceptual phase and then applies across the life cycle: in the case of an internal use of AI, up through the actual use, and in the case of AI that we're deploying for customers or putting into a product, up through that point of deployment. So it's very much about embedding those considerations into our existing processes across the company, to make sure they're thought of not just once, and not just in the use cases that the board has an opportunity to review, but in our practices as a company and in our thinking as a company. Much like companies did years ago with respect to privacy and security, that concept of privacy and security by design, which some may be familiar with, that stems from the GDPR in Europe. Now we're doing the same thing with ethics.

How unusual is what you guys are doing? I mean, if I lined up all the tech companies that are heavily into AI right now, would I find similar programs in all of them, or are you off by yourselves?

I think we take a bit of a unique perspective. In fact, we were recently recognized as a leader in the ethical deployment of technology and responsible technology use by the World Economic Forum. The World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University did an independent case study of IBM that recognized our leadership in this space because of the holistic approach that we take. We're a little bit different, I think, from some other tech companies that do have similar councils in place, because of the broad and cross-disciplinary nature of ours. We're not just researchers, we're not just technologists. We literally have representation from backgrounds spanning the company, whether it be legal, developers, researchers, HR professionals, and the like. So that makes the program itself a little bit unique. And then we hear from clients that are thinking for themselves about how to make sure that the technology they're deploying or using, externally or with their clients, is trustworthy. So they're asking us: how did you go about this, how do you think about it as a company, what are your practices?

On that point, our CEO is the co-chair of something called the Global AI Action Alliance, initiated by the WEF, and as part of that we've committed to essentially open-source our approach. So we've been talking a lot about our approach. I think it is a little bit unique, as I said, but we are sharing it, because we don't want to be the only ones that have trustworthy AI and that have this holistic, cross-disciplinary approach. We think it's the right approach. It's certainly the right approach for our company, and we want to share it with the world. It's not secret or proprietary.
And if you talk to the analyst community that serves the tech sector, they say far and wide that IBM is ahead in terms of things we're actually doing, as opposed to just talking about it, all while making sure that it is enforceable and impactful. For instance, we were talking about how we review use cases and can require that the teams adjust them. That's unique. Most of the other tech companies do not have that level of oversight in terms of ensuring that their outcomes are aligned. There's a lot of good talk, but I think the WEF case study that came out in, I think it was, September really supports that we're ahead. And if you look at companies in general that have an AI ethics board, my experience, and I interact with hundreds of leaders and companies a year, is that less than five percent of them have a board in place, and even fewer of those really have a rhythm going and know how they're going to operate as a board yet.

I wanted to talk a little bit about the role of government here. Is government leading or following?

I would say they're catching up. I think "following" is probably the more accurate way to put it, because over the last couple of years, or maybe it's been almost ten years at this point, as these issues have come to light, companies have largely been left to themselves to impose guardrails on their practices and their use of AI. That's not to say there aren't laws that apply; discrimination laws, for example, would apply to technology that's discriminatory. But on the unique aspects, to the extent there are unique aspects, or issues that get amplified through the application of AI systems, the government is really just catching up. The EU proposed a comprehensive regulatory framework for AI in the spring timeframe. In the US, we see the FTC starting to focus on algorithmic bias, and on algorithms in general being fair and the like. There are numerous other initiatives following the EU that are looking at frameworks for governing and regulating AI, and we've been involved; I mentioned our precision regulation recommendations. We have something called the IBM Policy Lab, and what differentiates our advocacy through the Policy Lab is that we try to make concrete, actionable policy recommendations, so not just articulating principles again, but really concrete recommendations for companies, governments, and policymakers around the globe to implement and follow. For example, out of our precision regulation of AI, our recommendation is that regulation should be risk-based, it should be context-specific, and it should allocate responsibility to the party that's closest to the risk, which may be different at different times in the life cycle of an AI system. We deploy some general-purpose technologies and then our clients train those over time, so the risk should sit with the party that's closest to it at different points in the AI life cycle.

You know, one of the interesting things about this issue today: we're now in a situation where a company like IBM, I'm guessing, would be as sensitive to public reaction to the uses of AI as it would be to government reaction to the uses of AI.
And I wanted to weigh those. This is a kind of fascinating development in our age: all of a sudden, it almost seems like whatever form public reaction takes can be a more powerful lever in changing corporate behavior than what governments are saying. Do you think this is true in the AI space?

I think the government regulation that we're seeing is responding to public sentiment, so I agree with you a hundred percent that this is being moved by the public. And oftentimes when we have conversations at the ethics board, if Christina and the lawyers say, okay, this is not a legal issue, then the next conversation is: what happens if this story shows up on the front page of the New York Times or the Wall Street Journal? So absolutely, we consider that.

I would add to that: we're probably the oldest technology company, we're over a hundred years old, and our clients have looked to us for that hundred-plus years to responsibly usher in new technologies and to manage their data, their most sensitive data, in a trusted way. So for us it's not just about the headline risk. It's about ensuring that we have a business going forward, because our clients trust us and society trusts us. Take the guardrails we put in place around the trust and transparency principles, or the guardrails we put in place around responsible data use in the COVID pandemic. There was nothing that, from a legal perspective, said we couldn't do more. There was nothing that said in the US we can't use facial recognition technology at our sites. But we made principled decisions, and we made those decisions because we think they're the right decisions to make. And when I look back at the Ethics Board and the analysis and the use cases that have come forward over the course of the last two years, I can think of very few where we said we're not going to do this because we're afraid of regulatory repercussions. In fact, I can't think of any, because it wouldn't have come to the board if it was illegal. But we did refine, and in some cases stop, actual transactions and solutions because we felt they were not the right thing to do.

Yeah. A question for either of you: can you dig a little more into the real-world applications of this? What are some of the very concrete kinds of things that come out of this focus on trust?

Some real-world examples of how trust plays into what we're doing get back to a couple of things Christina said earlier around how we're open-sourcing a lot of what we do. Our research division builds a lot of the technology that winds up in our products, and particularly related to this topic of AI ethics and trustworthy AI, our default is to open-source the base of the technology. So we have a whole bunch of open-source toolkits that anyone can use. In fact, some of our competitors use them as much as we do in their products.
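To make that concrete: the open-source bias toolkits described here (IBM's AI Fairness 360 is one example) automate checks like the one sketched below. This is a minimal, illustrative Python example of a single fairness metric, the disparate impact ratio, run on a made-up hiring dataset like the one in Seth's story; the data, group labels, and 0.8 threshold are assumptions for the example, not IBM's actual tooling.

```python
# Illustrative only: a hand-rolled check of the "disparate impact" ratio on a
# hypothetical hiring dataset. Open-source toolkits such as AI Fairness 360
# implement this and many richer metrics; the data and threshold here are
# made up for the example.

# Each record: (protected attribute value, model decision: 1 = advance, 0 = reject)
decisions = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
]

def selection_rate(records, group):
    """Fraction of applicants in `group` that the model advanced."""
    group_decisions = [d for g, d in records if g == group]
    return sum(group_decisions) / len(group_decisions)

unprivileged_rate = selection_rate(decisions, "female")  # 1/4 = 0.25
privileged_rate = selection_rate(decisions, "male")      # 3/4 = 0.75

# Disparate impact: ratio of selection rates. A common rule of thumb
# (the "80% rule") flags values below 0.8 for review.
disparate_impact = unprivileged_rate / privileged_rate
print(f"Disparate impact: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Potential bias flagged: the unprivileged group is advanced far less often.")
```

A ratio close to 1 means both groups are advanced at similar rates; the further it falls below the 0.8 rule of thumb, the more the data resembles the hiring example above. Toolkits of this kind surface that signal, and mitigation options, behind a common API rather than one-off scripts.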
And then we build value adds on top of those open-source toolkits, and that is something we advocate strongly for; the Ethics Board helps support us with that, as do our product teams. Because AI is one of those spaces where, when something goes wrong, it affects everyone. If there's a big issue with AI, everyone's going to be concerned about all AI, and so we want to make sure that the technology behind AI is as fair as possible, is as explainable as possible, is as robust as possible, and is as privacy-preserving as possible. Toolkits that address those are all publicly available, and then we build value-added capabilities on top of that when we bring those things to our customers in the form of an integrated platform that helps manage the whole life cycle of an AI. Because AI is different from software in that the technology under AI is machine learning, which means the machine keeps learning and adjusting the model over time. Once you write a piece of software, it's done; it doesn't change. So you need to figure out how to continuously monitor your AI over time for the things I just described, and integrate that into your security and privacy by design practices, so that models are continuously updated and aligned to your company's principles, societal principles, and any relevant regulations.
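That distinction between software and machine learning is why monitoring has to be a loop rather than a one-time test. Here is a minimal, hypothetical sketch of the practice: recompute a quality metric and a fairness metric on each new batch of predictions and raise an alert when either drifts past a threshold agreed on by the governance process. All function names, data, and thresholds below are invented for illustration.

```python
# Illustrative sketch of continuous model monitoring: recompute quality and
# fairness metrics on each new batch of scored data and alert when they drift
# past agreed thresholds. Names, metrics, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class MonitoringReport:
    accuracy: float
    disparate_impact: float
    alerts: list

# Thresholds a governance board might agree on up front (assumed values).
MIN_ACCURACY = 0.85
MIN_DISPARATE_IMPACT = 0.80

def evaluate_batch(y_true, y_pred, groups, unprivileged, privileged):
    """Compute accuracy and a group-fairness ratio for one batch of predictions."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    def rate(group):
        picks = [p for p, g in zip(y_pred, groups) if g == group]
        return sum(picks) / len(picks) if picks else 0.0

    di = rate(unprivileged) / rate(privileged) if rate(privileged) else 0.0

    alerts = []
    if accuracy < MIN_ACCURACY:
        alerts.append(f"accuracy {accuracy:.2f} below {MIN_ACCURACY}")
    if di < MIN_DISPARATE_IMPACT:
        alerts.append(f"disparate impact {di:.2f} below {MIN_DISPARATE_IMPACT}")
    return MonitoringReport(accuracy, di, alerts)

# In production this would run on a schedule; here is one batch as an example.
report = evaluate_batch(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 1],
    groups=["female", "male", "female", "male", "female", "male"],
    unprivileged="female",
    privileged="male",
)
print(report)
```

The specific numbers matter less than the shape of the practice: the thresholds come out of the governance process, and the check keeps running for as long as the model keeps learning, not just at release.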
Yeah. One last question: give me one prediction about what AI looks like five or ten years from now.

So that is a really, really good question. When we look at what AI does today: AI, while it's very insightful, helps us realize things that as humans we may not have picked up on our own, so it augments our intelligence. It surfaces insights and reduces complexity from almost infinite and incomprehensible to humans down to, say, five choices that I can make based on the output of an AI. But AI is unable, for the most part today, to provide context or reasoning. AI provides an answer, but there's no reasoning, as we think about it as humans, associated with it. There's a set of new technologies coming up that are lumped under something called neurosymbolic reasoning. What neurosymbolic reasoning means is using mathematical equations, AI algorithms, to reason similarly to the way a human does. So, for instance, the Internet contains all sorts of things, good and bad, and let's look at something that's relevant to me, at least, being of Jewish background. You want algorithms to know about the Nazi regime, but you don't want algorithms spewing rhetoric about the Nazi regime. Today, when we build an AI, it's almost impossible for us to get the algorithm to differentiate those two things. With a tool like reasoning around it, you could prevent an algorithm from learning rhetoric that is not conducive to norms. That's just one example. Those are the kinds of things you'll see over the next three to five years.

I think we'll see a lot more explainability and transparency around AI. So, for example, whether it be "you're seeing this ad because you searched for X, Y, and Z," or "you're seeing a shoe ad because you visited this site," or more disclosure that you're dealing with a chatbot, just when AI is being applied to you, I think you'll see a lot more transparency around that. And then the less practical, more aspirational answer, I think, is this: we know AI is changing jobs; it's eliminating some and creating new ones. And I think, hopefully, with principles around AI, that it be used to augment and help humans, that it be human-centered, that it put people first at the heart of the technology, it will make people better and smarter at what they do, and there will be more interesting work. So I'm hoping that will ultimately be something that comes out of AI, as there's more awareness around where it's being used in your life day to day, more transparency around that, more explainability around that, and then ultimately more trust.

Wonderful. I think that covers our bases. This has been really, really fascinating. Thank you for joining me, and I expect that we will be having, both as a company inside IBM and as a society, many, many more conversations about AI in the coming years. So I'm glad to be on the early end of that process, because we're not done with this one, are we? Not by a long shot. Just the beginning. Thank you again. Yeah, thanks for having us. Thank you.

Thank you again to Christina Montgomery and Seth Dobrin for the discussion about trust and transparency around AI, and for their insights about what may be possible in the future. It will be fascinating to see how IBM can help foster positive change in the industry.

Smart Talks with IBM is produced by Emily Rostak with Carly Migliori and Catherine Girardeau, edited by Karen Shakerdge, and mixed and mastered by Jason Gambrell. Music by Gramoscope. Special thanks to Molly Sosha, Andy Kelly, Mia Lobel, Jacob Weisberg, Heather Fain, Eric Sandler, and Maggie Taylor, and the teams at 8 Bar and IBM. Smart Talks with IBM is a production of Pushkin Industries and iHeartRadio. This is a paid advertisement from IBM. You can find more episodes at ibm.com/smarttalks. You'll find more Pushkin podcasts on the iHeartRadio app, Apple Podcasts, or wherever you like to listen. I'm Malcolm Gladwell. See you next time.