Scaling AI With Purpose

Published Oct 8, 2024, 10:30 AM

In this episode of Smart Talks with IBM, Jacob Goldstein speaks with Rebecca Finlay, CEO of Partnership on AI, about the importance of advancing AI innovation with openness and ethics at the forefront. Rebecca discusses how guardrails — such as risk management — can advance efficiency in AI development. They explore the AI Alliance’s focus on open data and technology, and the importance of collaboration. Rebecca also underscores how diverse perspectives and open-mindedness can drive AI progress responsibly.

This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions.

Visit us at https://ibm.com/smarttalks

Hello, hello, welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio and IBM. I'm Malcolm Gladwell. This season, we're diving back into the world of artificial intelligence, but with a focus on the powerful concept of open: its possibilities, implications, and misconceptions. We'll look at openness from a variety of angles and explore how the concept is already reshaping industries, ways of doing business, and our very notion of what's possible. In today's episode, Jacob Goldstein sits down with Rebecca Finlay, the CEO of the Partnership on AI, a nonprofit group grappling with important questions around the future of AI. Their conversation focuses on Rebecca's work bringing together a community of diverse stakeholders to help shape the conversation around accountable AI governance. Rebecca explains why transparency is so crucial for scaling the technology responsibly, and she highlights how working with groups like the AI Alliance can provide valuable insights in order to build the resources, infrastructure, and community around releasing open source models. So, without further ado, let's get to that conversation.

Can you just say your name? And your job.

My name is Rebecca Finlay. I am the CEO of the Partnership on AI to Benefit People and Society, often referred to as PAI.

How did you get here? What was your job before you had the job that you have now?

I came to PAI about three years ago, having had the opportunity to work for the Canadian Institute for Advanced Research, developing and deploying all of their programs related to the intersection of technology and society. And one of the areas that the Canadian Institute had been funding since nineteen eighty two was research into artificial intelligence.

Wow, early. They were early.

It was a very early commitment and an ongoing commitment at the Institute to fund long-term, fundamental questions of scientific importance in interdisciplinary research programs that were often committed to and funded for well over a decade. The AI, Robotics and Society program that kicked off the work at the Institute eventually became a program very much focused on deep learning and reinforcement learning, neural networks, all of the current iteration of AI, or certainly the pre-generative iteration of AI, that led to this transformation that we've seen in terms of online search and all sorts of ways in which predictive AI has been deployed. So I had the opportunity to see the very early days of that research coming together, and in the early two thousand and tens, when compute capability came together with data capability through some of the Internet companies and otherwise, and we really saw this technology start to take off, I had the opportunity to start up a program specifically focused on the impacts of AI in society. There were, as you know, at that time, some concerns both about the potential of the technology, but also in terms of what we were seeing around data sets and bias and discrimination and potential impact on future jobs. And so bringing a whole group of experts, whether they were ethicists or lawyers or economists or sociologists, into the discussion about AI was core to that new program, and it continues to be core to my commitment to bringing diverse perspectives together to solve the challenges and opportunities that AI offers today.

So specifically, what is your job now? What is the work you do? What is the work that PAI does?

I like to answer that question by asking two questions. First and foremost, do you believe that the world is more divided today than it ever has been in recent history? And do you believe that if we don't create spaces for very different perspectives to come together, we won't be able to solve the challenges that are in front of the world today? My answer to both of those questions is: one, yes, we're more divided, and two, we need to seek out those spaces where those very different perspectives can come together to solve those great challenges. And that's what I get to do as CEO of the Partnership on AI. We were founded in twenty sixteen with a fundamental commitment to bringing together experts, whether they were in industry, academia, civil society, or philanthropy, to identify what are the most important questions when we think about developing AI centered on people and communities, and then how do we begin to develop the solutions to make sure we benefit appropriately.

So that's a very big picture set of ideas. I'm curious, on a sort of more day-to-day level, I mean, you talk about collaborating with all these different kinds of people, all these different groups; what does that actually look like? What are some specific examples of how you do this work?

So right now we have about one hundred and twenty partners in sixteen countries. They come together through working groups that look at AI through a variety of different perspectives. It could be AI, labor, and the economy. It could be how do you build a healthy information ecosystem. It could be how do you bring more diverse perspectives into the inclusive and equitable development of AI. It could be what are the emerging opportunities with these very, very large foundation model applications and how do you deploy those safely. And these groups come together, most importantly, to say, what are the questions we need to answer collectively? So they come together in working groups. I have an amazing staff team who hold the pen on synthesizing research and data and evidence, developing frameworks, best practices, resources, all sorts of things that we can offer up to the community, be they in industry or in policy, to say, well, this is what good looks like, and this is how we can do it on a day-to-day basis. So that's what we do, and then we publish our materials. It's all open. We make sure that we get them into the hands of those communities that can use them, and then we drive and work with those communities to put them into practice.

You used the word open there in describing your publications. I know, in the world of AI, on the sort of technical side, there's a debate, say, or discussion about kind of open versus closed AI, and I'm curious how you encounter that particular discussion. What is your view on open versus closed AI?

So the current discussion between open and closed release of AI models came once we saw ChatGPT and other very large generative AI systems being deployed out into the hands of consumers around the world, and there emerged some fear about the potential of these models to act in all sorts of catastrophic ways. So there were concerns that the models could be deployed with regard to the development of viruses or biomedical weapons or even nuclear weapons, or through manipulation or otherwise. So there emerged, over the last eighteen months, this real concern that these models, if deployed openly, could lead to some level of truly catastrophic risk. And what we discovered, through a whole bunch of work that's been done over the last little while, is that releasing them openly has not led, and doesn't appear to be leading in any way, to catastrophic risk. In fact, releasing them openly allows for much greater scrutiny and understanding of the safety measures that have been put into place. And so what happened was the pendulum swung very much toward concern about truly catastrophic risk and safety over the last year, and over the last year we've seen it swing back as we learn more and more about how these models are being used and how they are being deployed into the world. My feeling is we must approach this work openly, and it's not just open release of models or what we think of as traditional open source forms of model development or otherwise, but we really need to think about how we build an open innovation ecosystem that fundamentally allows both for the innovation to be shared with many people and for safety and security to be rigorously upheld.

So when you talk about this kind of broader idea of open innovation, beyond open source or, you know, transparency in models, what do you mean, sort of specifically? How does that look in the world?

So I have three particular points view when it comes to open innovation, because I think we need to think both upstream around the research that is driving these models, and downstream in terms of the benefits of these models to others. So first and foremost, what we have known in terms of how AI has been developed, and yes, I had an opportunity to see it when I was at the Canadian Institute for Advanced Research is a very open form of scientific publication and rigorous peer review. And what happens when we release openly is you have an opportunity for the research to be interrogated to determine the quality and significance of that, but then also for it to be picked up by many others. And then secondly, openness for me is about transparency. We released a set of very strong recommendations last year around the way in which these very large foundation models could be deployed safely. They're all about disclosure. They're all about disclosure and documentation right from the early days pre R and D development of these systems, right in terms of thinking about what's in the training data and how is it being used all the way through to post deployment monitoring and disclosure. So I really think that this is important transparency through it. And then the third piece is openness in terms of who is around the table to benefit from this technology. We know that if we're really going to see these new models having being successful deployed into education or healthcare or climate and sustainability, we need to have those experts in those communities at the table charting this and making sure that the technology is working for them. So those are the three ways I think about openness.

Is there like a particular project that you've worked on that you feel like, you know, reflects your approach to responsible AI?

So there's a really interesting project that we have underway at PAI that is looking squarely at responsible practices when it comes to the use of synthetic media. And what we heard from our community was that they were looking for a clear code of conduct about what it means to be responsible in this space. And so we pulled together a number of working groups. They included industry representatives. They also included civil society organizations like WITNESS, a number of academic institutions, and otherwise. And what we heard was that there were clear steps that creators could take, that developers of the technology could take, and then also distributors, when we think about those generative AI systems being deployed across platforms and otherwise. And we came up with a framework for what responsibility looks like. What does it mean to have consent? What does it mean to disclose responsibly? What does it mean to embed disclosure into the technology itself? So, for example, we've heard many people talk about the importance of watermarking systems and making sure that we have a way to watermark them. But what we know from the technology is that it is a very complex and complicated problem, and what might work on a technical level certainly hits a whole new set of complications when we start labeling and disclosing to the public what that technology actually means. All of these, I believe, are solvable problems, but they all needed to have a clear code underneath them that was saying, this is what we will commit to. And we now have a number of organizations, many of the large technology companies, but also many of the small startups who are operating in this space, civil society, and media organizations like the BBC and the CBC, who have signed on. And one of the really exciting pieces of that is that we're now seeing how it's changing practice. So a year in, we asked each of our partners to come up with a clear case study about how that work has changed the way they are making decisions, deploying technology, and ensuring that they're being responsible in their use. And that is now creating a whole resource online that we're able to share with others about what it means to be responsible in this space. There's so much more work to be done, and the exciting thing is, once you have a foundation like this in place, we can continue to build on it. There's so much interest now in the policy space, for example, about this work as well.

Are there any specific examples of those sorts of case studies, of the real-world experiences that, say, media organizations had, that are interesting, that are illuminating?

Yes. So, for example, what we saw with the BBC is that they're developing a lot of content as a public broadcaster, both in terms of their news coverage but also in terms of some of the resources that they are developing for the British public as well. And what they talked about was the way in which they had used synthetic media in a very, very sensitive environment where they were hearing from individuals talking about personal experiences, but wanted to have some way to change the face entirely of the individuals who were speaking. So that's a very complicated ethical question, right? How do you do that responsibly, what is the way in which you use that technology, and most importantly, how do you disclose it? So their case study looked at that in some real detail: the process they went through to make the decision responsibly, what they chose, and how they intended to use the technology in that space.

As you describe your work in some of these studies, the idea of transparency seems to be a theme. Talk about the importance of transparency in this kind of work.

Yeah, transparency is fundamental to responsibility. I always like to say it's not accountability in a complete sense, but it is a first step to driving accountability more fully. So when we think about how these systems are developed, they're often developed behind closed doors inside companies who are making decisions about what and how these products will work from a business perspective, and what disclosure and transparency can provide is some sense of the decisions that were made leading up to the way in which those models were deployed. So this could be ensuring that individuals' private information was protected through the process and won't be inadvertently disclosed or otherwise. It could be providing some sense of how well the system performs against a whole level of quality measures. So we have all of these different types of evaluations and measures that are emerging about the quality of these systems as they're deployed. Being transparent about how they perform against those measures is really crucial to that as well. We have a whole ecosystem that's starting to emerge around auditing of these systems. So what does that look like? We think about auditors in all sorts of other sectors of the economy. What does it look like to be auditing these systems to ensure that they're meeting all of those legal but also additional ethical requirements that we want to make sure are in place?

What are some of the hardest ethical dilemmas you've come up against in AI policy.

Well, the interesting thing about AI policy, right, is that what works very simply in one setting can be highly complicated in another setting. And so, for example, I have an app that I adore. It's an app on my phone that allows me to take a photo of a bird, and it will help me to better understand what that bird is and give me all sorts of information about that bird. Now, it's probably right most of the time, and it's certainly right enough of the time to give me great pleasure and delight when I'm out walking. You could think about that exact same technology applied differently. So, for example, now you're a security guard and you're working in a shopping plaza, and you're able to take photos of individuals who you may think are acting suspiciously in some way and match that photo up with some sort of a database of individuals that may have been found, you know, to have some sort of connection to criminal behavior in the past. Right? So it goes from being a delightful, oh, isn't this an interesting bird, to a very, very creepy, what does this say about surveillance and privacy and access to public spaces? And that is the nature of AI. So much of the concern about the ethical use and deployment of AI is how an organization is making the choices within the social and systemic structure it sits in. So much about the ethics of AI is understanding what is the use case, how is it being used, how is it being constrained? How does it start to infringe upon what we think of as the human rights of an individual to privacy? And so you have to constantly be thinking about ethics. What could work very well in one situation absolutely doesn't work in another. We often talk about these as socio-technical questions. Right? Just because the technology works doesn't actually mean that it should be used and deployed.

What's an example of where the Partnership on AI influenced changes, either in policy or in industry practice?

We talked a little bit about the Framework for Synthetic Media and how that has allowed companies and media organizations and civil society organizations to really think deeply about the way in which they're using this. Another area that we focused on has been around responsible deployment of foundation and large-scale models. As I said, we issued a set of recommendations last year that really laid out for these very large developers and deployers of foundation and frontier models what good looks like, right from R&D through to deployment monitoring, and it has been very encouraging to see that work get picked up by companies and really articulated as part of the fabric of the deployment of their foundation models and systems moving forward. So much of this work is around creating clear definitions of what we mean as the technology evolves, and clear sets of responsibilities. So it's great to see that work getting picked up. The NTIA in the United States just released a report on open models and the release of open models, and it was great to see our work cited there as contributing to that analysis. Great to see some of our definitions in synthetic media getting picked up by legislators in different countries. It's important, I think, for us to build capacity, knowledge, and understanding in our policy makers in this moment as the technology is evolving and accelerating in its development.

What's the AI Alliance and why did Partnership on AI decide to join?

So you had asked about the debate between open versus closed models and how that has evolved over the last year, and the AI Alliance was a community of organizations that came together to really think about, okay, if we support open release of models, what does that look like and what does the community need? And so that's about one hundred organizations. IBM, one of our founding partners, is also one of the founding partners of the AI Alliance. It's a community that brings together a number of academic institutions in many countries around the world, and they're really focused on how do you build the resources and infrastructure and community around what open source in these large-scale models really means. So that could be open data sets, that could be open technology development. It's really building on that understanding that we need an infrastructure in place and a community engaged in thinking about safety and innovation through the open lens.

This approach brings together organizations and experts from around the globe with different backgrounds, experiences, and perspectives to transparently and openly address the challenges and opportunities AI poses. The collaborative nature of the AI Alliance encourages discussion, debate, and innovation. Through these efforts, IBM is helping to build a community around transparent, open technology.

So I want to talk about the future for a minute. I'm curious what you see as the biggest obstacles to widespread adoption of responsible AI practices.

One of the biggest obstacles today is an inability, and really a lack of understanding about, how to use these models and how they can most effectively drive forward a company's commitment to whatever products and services it might be deploying. So I always recommend a couple of things for companies really to think about to get started. One is, think about how you are already using AI across all of your business products and services, because AI is already integrated into our workforces and into our workstreams and into the way in which companies are communicating with their clients every day. So understand how you are already using it, and understand how you are integrating oversight and monitoring into those uses. One of the best and clearest ways in which a company can really understand how to use this responsibly is through documentation. It's one of the areas where there's a clear consensus in the community. So how do you document the models that you are using, making sure that you've got a registry in place? How do you document the data that you are using and where that data comes from? This is sort of the first line of defense in terms of understanding both what is in place and what you need to do in order to monitor it moving forward. And then secondly, once you've got an understanding of how you're already using the system, look at ways in which you could begin to pilot or iterate in a low-risk way using these systems, to really begin to see how and what structures you need to have in place to use them moving forward. And then thirdly, make sure that you have a team in place internally that's able to do some of this cross-departmental monitoring, knowledge sharing, and learning. Boards are very, very interested in this technology, so think about how you can have a system or a team in place internally that's reporting to your board, giving them a sense of both the opportunities it identifies for you and the additional risk mitigation and management you might be putting into place. And then, you know, once you have those things in place, you're really going to need to understand how you work with the most valuable asset you have, which is your people. How do you make sure that AI systems are working for the workers as they go into place? The most important and impressive implementations we see are those where the workers who are going to be engaged in this process are central to figuring out how to develop and deploy it in order to really enhance their work. It's a core part of a set of Shared Prosperity Guidelines that we issued last year.
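To make that documentation step a bit more concrete, here is a minimal sketch of what an internal model and data registry could look like in Python. The structure and field names below are illustrative assumptions for this sketch only, not a schema that PAI or any standard prescribes.

```python
# Minimal sketch of an internal AI model and data registry (hypothetical schema).
# The idea: record which models are in use, where their training data came from,
# what consent basis covers that data, and how each deployment will be monitored.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class DatasetRecord:
    name: str
    source: str          # where the data came from
    consent_basis: str   # e.g. "user opt-in", "licensed", "public domain"
    contains_pii: bool   # flags data that needs privacy protection


@dataclass
class ModelRecord:
    name: str
    owner_team: str
    use_case: str
    training_data: List[DatasetRecord] = field(default_factory=list)
    deployed_on: Optional[date] = None
    monitoring_plan: str = "to be defined"  # post-deployment monitoring commitment


# Hypothetical registry entry for one internal system.
registry: List[ModelRecord] = [
    ModelRecord(
        name="support-ticket-summarizer",
        owner_team="customer-success",
        use_case="summarize support tickets for agents",
        training_data=[
            DatasetRecord(
                name="ticket-archive-2023",
                source="internal CRM export",
                consent_basis="customer terms of service",
                contains_pii=True,
            )
        ],
        deployed_on=date(2024, 6, 1),
        monitoring_plan="weekly review of flagged summaries",
    )
]

# A registry like this gives an oversight team or a board a first line of defense:
# what is in use, what data it rests on, and what monitoring has been committed to.
for record in registry:
    print(record.name, "-", record.use_case, "- contains PII:",
          any(d.contains_pii for d in record.training_data))
```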

And then, from the side of policy makers, how should they think about the balance between innovation and regulation?

Yeah, it's so interesting, isn't it, that we always think of, you know, innovation and regulation as being two sides of a coin, when in fact so much innovation comes from having a clear set of guardrails and regulation in place. Think about all of the innovation that's happened in the automotive industry, right? We can drive faster because we have brakes; we can drive faster because we have seat belts in place. So it's often interesting to me that we think about the two as being on either side of the coin, when in actual fact you can't be innovative without being responsible as well. And so I think from a policy maker perspective, what we have been really encouraging them to do is to understand that you've got foundational regulation in place that works for you nationally. This could be ensuring that you have strong privacy protections in place. It could be ensuring that you understand potential online harms, particularly to vulnerable communities. And then look at what you need to be doing internationally to be both competitive and sustainable. There are all sorts of mechanisms in place right now at the international level to think about how we build an interoperable space for these technologies moving forward.

We've been talking in various ways about what it means to responsibly develop AI. If you're going to boil that down to, you know, the essential concerns that people should be thinking about, what are the key things to think about in responsible AI?

So if you are a company, if we're talking specifically through the company lens, when we're thinking about responsible use of AI, the most important difference between this form of AI technology and other forms of technology that we have used previously is the integration of data and the training models that go on top of that data. So when we think about responsibility, first and foremost, you need to think about your data. Where did it come from? What consent and disclosure requirements do you have on it? Are you privacy protecting? You can't be thinking about AI within your company without thinking about data, and that's both your training data, but then also, once you're using your systems and integrating and interacting with your consumers, how are you protecting the data that's coming out of those systems as well? And then secondly, when you're thinking about how to deploy that AI system, the most important thing you want to think about is, are we being transparent about how it's being used with our clients and our partners? So, you know, the idea that if I'm a customer, I should know when I'm interacting with an AI system, and I should know when I'm interacting with a human. So I think those two pieces are the fundamentals. And then of course you want to be thinking carefully about making sure that, whatever jurisdiction you're operating in, you're meeting all of the legal requirements with regard to the services and products that you're offering.

Let's finish with the speed round. Complete the sentence: In five years, AI will...

Drive equity, justice, and shared prosperity if we choose to set that future trajectory for this technology.

What is the number one thing that people misunderstand about AI.

AI is not good, and AI is not bad, but AI is also not neutral. It is a product of the choices we make as humans about how we deploy it in the world.

What advice would you give yourself ten years ago to better prepare yourself for today?

Ten years ago, I wish that I had known just how fundamental the enduring questions of ethics and responsibility would be as we developed this technology moving forward, So many of the questions that we ask about AI are questions about ourselves and the way in which we use technology, and the way in which technology can advance the work we're doing.

How do you use AI in your day to day life today?

I use AI all day, every day. So whether it's my bird app when I go out for my morning walk, helping me to better identify the birds that I see, or whether it is my mapping app that's helping me to get more speedily through traffic to whatever meeting I need to go to, I use AI all the time. I really enjoy using some of the generative AI chatbots, more for fun than for anything else, as a creative partner in thinking through ideas. Integrating it into all aspects of our lives is just so much of the way in which we live today.

So people use the word open to mean different things, even just in the context of technology. How do you define open in the context of your work.

So there is the question of open as it is applied to technology, which we've talked a lot about. But I do think a big piece of PAI is being open-minded. We need to be truly open-minded, to listen to, for example, what a civil society advocate might say about what they're seeing in terms of the way in which AI is interacting in a particular community. Or we need to be open-minded to hear from a technologist about their hopes and dreams of where this technology might go moving forward. And we need to have those conversations listening to each other to really identify how we're going to meet the challenge and opportunity of AI today. So open is just fundamental to the Partnership on AI. I often call it an experiment in open innovation.

Rebecca, thank you so much for your time.

It is my pleasure. Thank you for having me.

Thank you to Rebecca and Jacob for that engaging discussion about some of the most pressing issues facing the future of AI. As Rebecca emphasized, whether you're thinking about data privacy or disclosure, transparency and openness are key to solving challenges and capitalizing on new opportunities. By developing best practices and resources, Partnership on AI is building out the guardrails to support the release of open source models and the practice of post-deployment monitoring. By sharing their work with the broader community, Rebecca and PAI are demonstrating how working responsibly, ethically, and openly can help drive innovation.

Smart Talks with IBM is produced by Matt Romano, Joey Fishground, Amy Gaines McQuaid, and Jacob Goldstein. We're edited by Lydia Jean Kott. Our engineers are Sarah Bruguiere and Ben Tolliday. Theme song by Gramoscope. Special thanks to the 8 Bar and IBM teams, as well as the Pushkin marketing team. Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell. This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions.
