On this podcast exclusive episode of The Middle we've got Vilas Dhar, president of the Patrick J. McGovern Foundation, back with us to answer even more of your questions on artificial intelligence and explore how AI is impacting our society. #AI #artificialintelligence #chatGPT #generativelearning #skynet
Welcome to a special bonus edition of The Middle podcast. I'm Jeremy Hobson, and we've done a few live shows on the radio where we're getting so many great calls and great questions that we could easily go on for quite a while longer, but we can't, because the way The Middle works on the radio is we have to start at a specific time, we have to take breaks at a specific time, and we have to end at a specific time, because the show is on over four hundred and twenty public radio stations, and that's just the way it works. So we did a show a couple weeks ago that was one of those occasions. It was about artificial intelligence, which is becoming a bigger and bigger part of our lives every day, whether we like it or not, and we were asking for your questions about AI with two guests, one of whom is here to answer some more. I'm delighted to welcome back to this podcast extra episode of The Middle, Vilas Dhar, president of the Patrick J. McGovern Foundation, which is trying to make sure that AI is being used for good. Vilas, welcome back. Thank you so much for joining us.
Jeremy, thanks for having me. This is so much fun.
If I had a nickel for every time I read the word Skynet in our email inbox after that show. But anyway, there were a lot of people with a lot of questions and comments. So first of all, before we get into the extra questions that people had, did anything surprise you about what our listeners were asking?
You know, it didn't surprise me, but it's something I hear all the time, even for those of us who are in AI. There's one way of thinking about this that's about techno optimism: everything is going to be great and don't worry about it. It's not the school of thought I come from, but sometimes I reflect on how much public fear and concern there is about these technologies. Some of it's tied to what AI is and will be, and some of it isn't, Jeremy. Some of it is just a lot of public narrative about all the things we should fear, and I think one of the most worthwhile things we can do is not to try to confront those directly, but maybe make sure that we all understand where we are at this point in time on AI and where we're going, and realize that some of these narratives are hurting more than they're helping.
It was interesting that we got one call during the show where somebody said, are we being too negative about this? Is there a risk in even talking about the risk of AI?
It's a fair question. Look, again, I start from common sense in these conversations: technology is nothing more than an expression of our values, and we should be able to talk about the risks and fears. But if we don't balance it with an idea of what we're trying to solve for, we're just spinning in circles.
One thing that has happened since we did our show is that the chairman of the Chinese company Alibaba said he sees signs of an AI bubble in the United States because of all the investment that's being made here.
Do you think that's a concern?
You know, the funny thing, Jeremy, is just today there was an article in the MIT Tech Review that talked about the AI bubble in China. One of the things that's happened is they have invested so heavily in data centers and AI infrastructure, and they don't have the commercial demand to meet all of the things they've built. So AI tech bubbles will happen. I think we might be in one now, and I bet you that we'll have five more of them over the next ten years. To me, the focus is on what are the core principles and values, what are the ways we're thinking about policy in the space that's more than just responding to whatever hype cycle we're in at the moment. How do we think about five, ten, and twenty years in this debate?
All right, well, let's get to some of our listener questions, because they're not all doom and gloom.
Here is one of them.
My name is Josh, I'm calling from Denver. My question would be, how do you guarantee that everyone will participate in ethical use of AI? And what protections are in place when that ethical code is broken?
What do you think?
So I have a starting point to this journey that I think is really important, which is that ethical AI doesn't exist. There's no such thing, and over the last few years we've finally moved away from that term. Ethics is a human function; it's a product of human decisions. Now, one of the real questions that we have today is who's going to be responsible for building ethical decisions into how we make these technologies. And there's an easy and incorrect answer, which is that the people who build these tools should be responsible for their ethical use. But to me, that's too restrictive of a starting point. We can't just fix this by building ethics classes into university computer science curriculums. That's just not how the world works, because ethics is a product of where and how these tools are used, what they're built for, who they're supposed to benefit. So the question that this listener asked is exactly the right one: how do we make sure that ethics is something that we hold as a society? And that means a few different things. It's certainly the technologists who will build it; they have an integral role. It's also policymakers, and it's consumers, and it's creators who build content that has nothing to do with AI but is being used in these platforms: writers and artists and authors. And what we need to build, Jeremy, that we don't have today is a convening space that lets all these people come together and actually say, what's our shared vision of what ethical use of AI looks like? And we need to build that fast. We're seeing the seeds of it.
Now, how does that kind of a thing get built? Who builds it?
The first thing I think that would be really helpful is for us to see some action by the government, whether it's Congress or the White House, to say this is a priority for us to figure out at a whole-of-society level. I've spent today actually on the Hill meeting with legislators from across the country, and I'll tell you, this is one of those uniquely bipartisan issues, because everybody is concerned about the same topic. So the first thing we need is legislative action that says we need some sort of federal regulation of these tools. The second, and you'll hear my bias in this, because I lead a civil society institution, is to have civil society step in in a meaningful way: universities, nonprofits, governance think tanks stepping in and saying, we're going to start building the public infrastructure for a discourse about the ethics of AI. I spend a lot of my time, very unusually for a philanthropic CEO, in small towns and communities across America, asking people what they're concerned about, and when we open up space for those discussions, I learn just as much in those conversations as I do out in Silicon Valley talking to somebody who's building a tool. We need to take that and lift it up. And some of the places where I'm seeing it happen are great op-eds, great creators and public culture people like Fran Drescher, who you might remember from The Nanny, who came out to lead a public conversation about screenwriting and IP and how AI will intersect with how we make movies, and artists and musicians who are using it. We need to make sure that there's a pattern of these kinds of questions being asked in the public sphere.
Well, that's why we do this show, so that we can let everybody into that conversation. Let's get to another comment. This came in online at Listen to the Middle dot com. Sylvie in Atlanta, Georgia writes: AI uses a tremendous amount of water. Will AI ever realize it is competing with humans for water?
And what will it do?
You know, we heard a lot during the show about some of the environmental consequences of AI. But what about this question from Sylvie?
I love the question and I love the way it's framed: will AI ever realize that it's competing with us? Honestly, I hope not, because I don't want AI to realize anything. I don't want AI out there thinking on its own. Instead, what we need to make sure is that we build AI that's conscious of this. I'll give you an example. Just last September, I worked with one of the world's leading AI artists, a fellow named Refik Anadol, and if you haven't seen his work, it is absolutely stunning. It is beautiful. It was at MoMA, and we actually showed this work at the United Nations. But here's why this particular piece that we built in collaboration was so important. It has three elements, and I'll be very quick. One is ethically sourcing one hundred million images of underwater environments from around the world. These are corals and natural seascapes. We brought them together with the consent of communities, and then we trained a new model. You might think of it like a GPT, but we trained it only using power from renewable sources and only trained when the grid wasn't overwhelmed. Now, as you know, electricity is a direct connection to water, right? So if we could build AI tools that are conscious of how we're using water, we could make it a part of the design process, and we could show that you can actually build AI that protects water resources at scale. So those are the first two elements, ethical data use and renewable energy with sustainable water use. And the third was the product of this: a beautiful engagement for people with the visual ideas of what our natural seascapes look like, that actually caused people to walk by and say, wait, this is what it could look like, why aren't we doing more to conserve it? The takeaway from this is the connection between AI water use and a broader response to climate change and sustainability of our natural resources. We shouldn't just focus on AI. We should focus on why we can't actually have a public conversation about building renewable and sustainable power that actually protects our natural water resources.
But the way that you answer that question, it makes it sound like you think, oh well, AI would never do something that we don't want it to do. AI is never going to be mean to us and go down that road. As long as we build it the right way, it'll never figure anything out and start being evil.
Well, you're taking me down a different rabbit hole, and I love it, because where you've taken me to is where we've heard some of the most sensational and crazy talk coming out of Silicon Valley. This is Skynet all over again, right? What we're worried about is the specter of an AI that's going to sit there and say, I need to realize that I'm doing something that is in opposition to humanity. And as soon as you start at that point, you immediately get to what happens when AI decides it's more important than we are. But everything I've just said is still in the realm of science fiction. There's no credible science today that talks about whether AI will have agency, or consciousness, or an identity all its own. We should be thinking about it, we should be planning for that as a potential future. But I'm going to say it very clearly here: there is no scientific evidence or consensus that we are anywhere on a path to building AI that's going to have its own agency, its own sense of purpose, that's going to be oppositional to humans.
Let's get to another email comment. This is from David in Minnesota. He writes, how could AI advance quantum computing, and vice versa? And as you answer that question, you can be as condescending as you want, because I don't know anything about quantum computing.
Never condescending, I hope. But quantum computing is fascinating, and honestly, I have a bunch of degrees, and even to this day I'm still trying to figure out how quantum works, as is everybody on the planet. But let me give a little bit of a brief explainer, or 'splainer as we like to call it. So quantum computing, you know, it's in the news a lot. What it essentially is is a totally different way of thinking about computing entirely, and instead of being limited by what we've always thought computers would let us do, now there's this brand new set of things that quantum will let us do. Some of them are really esoteric. They're things like, well, we could break all cryptography that we've ever had, and there's no such thing as secure communication. Maybe some of the more interesting things are in foundational science. Quantum computing could unlock how we do drug discovery and understand proteins and biologic functions. There are two areas where quantum and AI are intersecting in really amazing ways. The first is AI, as it does with so many other disciplines, is changing the speed of scientific discovery and how we create quantum computing. It lets us test new ideas in virtual environments. It lets us design quantum chips faster so we don't actually have to build them. AI becomes a tool that supports our scientific discovery. On the flip side, and we don't really have any answers, and Jeremy, I'll tell you I have no answers in this space yet, but some speculation: quantum computing might change how quickly AI works. And coming back to our earlier questions, it might totally change the way we think of AI today, which is take a box, throw a lot of data into it, and somehow it's able to reason; that might fundamentally change because of the nature of quantum computing. It might mean that we don't need to train a model using enough power to power a small city for six months. We might have a totally power-efficient way to do it. This is all speculation, but in the next ten to twenty years this will be a really fun conversation for us to have.
All right, let's get to another caller.
Listen, hi, my name is Mary, I'm calling from Atlanta, Georgia. I'm wondering, for people utilizing ChatGPT to write legal documents or documents in school systems, how that's going to affect FERPA and other things like that.
She mentions FERPA, which is the Family Educational Rights and Privacy Act of 1974. This caller is worried about privacy.
What's your response?
Let me start with the absurdity of AI and legal documents. I love this thought experiment. I was a lawyer for a period of time, Jeremy, and when I talk to legal professionals, I explain this to them. Think about a world that's maybe five years away, where your lawyers use a GPT product to write their briefs. A judge has their clerks use GPT to summarize the briefs. The judge makes a decision, then has GPT write their decision down, and then that goes into the academic literature and legal scholars use GPT to understand it. At that point, what are humans really contributing to the process? Well, it's going to force lawyers to actually justify what they do in a way that maybe is going to be really good for the rest of us. So that's a little side comment on the legal profession at large: AI is going to have a transformative effect on it. But the privacy question is really important, because remember that the way it works today is all the GPTs, the AI products we use, they're started and owned by a very small handful of companies. There are a couple of open source ones like DeepSeek, but really it's often OpenAI or Google or Microsoft. And when those tools are used to evaluate your records and potentially be used by a lawyer to write something, remember that's your personal data that's being transmitted to that company's servers, that sometimes is then being used by them to train their models, and we have no visibility or control over that. So this privacy question that's being asked by this listener is really important. In the United States, as you well know, we have no structural constraint around privacy regulation. We don't have a single national law, and that's going to be one of the most important things that has to happen in the next five years: a way for us to be able to tangle with these questions. If my private data is going to an AI system, what are my rights to privacy, and how do we keep companies from using that material, that data, that information for their own good? For now, there's not a good answer, and I hate to say it, which is why if you're contracting a lawyer, you need to be very careful to tell them exactly how you're willing to let them use AI on your data.
Yeah, I have to say I'm not mad about it, because it gets me through the airport faster. But it is kind of amazing that I don't know what I signed. But now I just look into a camera, I don't even give the TSA agent my ID, and all of a sudden I'm through, and it's like, oh, okay, I guess we're done with that level of privacy.
You've got my eyes.
Now you've figured out how I look, and you're willing to trust that just to let me through security.
Look, I travel all the time, and I'm with you. And this is the real challenge that our listeners need to pay attention to. For the last thirty years, this has happened to us over and over again. We get a terms and conditions, we click "I accept," we rarely read it, and in that moment we are giving up some of our privacy rights and we get some value back for it. It's been easy when that was in order to go online and read a news article or be on a dating site. But when it gets to really fundamental information like our biometrics, our digital identity, now is it time for all of us to wake up and say, hey, wait a minute, I don't want to do that passive consent anymore, I want to actually know how you're going to use my data? And again it comes back to the baseline: we've got to call our legislators and tell them what we care about and tell them that we want privacy regulation.
I'm speaking with Vilas Dhar, who's the president of the Patrick J. McGovern Foundation, and you're listening to a special edition of The Middle podcast.
We will be right back. This is The Middle. I'm Jeremy Hobson.
I am talking with Patrick J. McGovern Foundation president Vilas Dhar on this podcast extra episode to answer more of your questions about artificial intelligence. And let's get to another voicemail.
Hi, my name is Jaye Martin, calling from Mount Pulaski, Illinois, and I was kind of curious. You all talked a little bit about how ChatGPT is bringing up kind of the bottom level of the entry level workers. That's not exactly screwing over the middle class, but the average worker might be having a more difficult time. You all talked about bringing the bottom up like it was just a negative, but I don't know if there's some positive in there too.
So it sounds like Jaye is talking about the fact that ChatGPT and other generative AIs are sort of making entry level work a little bit easier, while at the same time potentially disenfranchising people whose jobs are now potentially being streamlined or made obsolete because artificial intelligence can just do it for them. So is there a way to make it so that AI can benefit people's professional lives, and not just take their jobs, at all levels?
Jeremy, I have a very good friend named Jamie Merisotis, who leads the Lumina Foundation, and years ago, even before ChatGPT, he wrote a book called Human Work. And in that book he identifies all of the different parts of our jobs and makes some speculations about how AI might automate them. And he has some very clear insights in that book. One of them is he says, you know, there's a lot of our jobs that are mundane, banal kinds of tasks that you just kind of check the box on. But in every job, and it doesn't really matter whether it's entry level or super senior, there is something that actually indexes to human creativity, to human empathy, to something that we do as a part of our social connection to each other that, as far as we know, AI systems are never going to be able to replace. Let's take that proposition and try to answer this question. Even at an entry level job, there's gonna be something there that AI probably isn't going to do as well as a human. So the choice in front of us is: are we actually going to try to protect those tasks? Are we going to make sure that we leave open opportunities for people to do them, or are we okay with going to a society where we say we don't care? There's a great meme going around lately of somebody pulling up to a fast food restaurant where there's an AI order taker at the drive-through, and they just kind of mess with it a little bit. And you know what happens is, it might be funny for the first order, but at some point you realize that our lives are made up of hundreds of social connection points we have with people. Sometimes they're deeply frustrating or annoying, like when you call the customer service at your cable company. But we're social creatures, and a lot of what we do is engage in that way. So I think the answer is, can we actually find ways that even in those entry level roles, those first jobs, we're prioritizing human work, and we're training people to do those things so effectively that maybe it actually makes the world a better place, a more fun and enjoyable, happier place for all of us.
But if I just think about an example of that, let's think about, you know, a job that has definitely been replaced in many areas by machines, which is the checkout at the supermarket. What is the entry level worker's contribution that they can make?
I mean, they are faster, they are faster.
If I go and ring up everything myself, I'm not going to move as fast as the person who knows that the, you know, heirloom tomatoes are this code, and they've moved quickly. But what is the other benefit of that? Can you see a human element that is better than the machine in that way?
But this is the perfect example of a question of just because we could doesn't mean we should. I've been in those stores from one of the big tech companies where you walk in and you wave a credit card and there's not a human to be seen, and you walk out, and it's novel and fun, Jeremy. And look, I like technical things, I'm like, this is great. But if you give me my choice, I don't really want to go to that store. I want to go and see somebody and say, how's your day going, right, and have a chat about the weather. This is the point: a lot of these tools are going to give you more productivity and efficiency. They're going to drive profit margins for big employers. But when did you and I get to make a choice and say, you know what, we're happy with the fact that you got rid of all the tellers at the checkout line? I actually like talking to the tellers in the line. I'd rather go to the store and talk to them. The inertia of the moment is that all of this is going to get automated and we're all going to have to go along with it. But I don't believe that. I believe in the political power of people coming together and saying we want to advocate for a different choice. There's going to come a time pretty shortly when it's not just your teller, it's your nurse, it's your pharmacist, it's everybody who provides service in your life, and at some point people have to stand up and say, just because we can doesn't mean that's what we want.
Let's get to another email comment. This is a very interesting one from Fred in Pennsylvania. He says, are you concerned, in an ethical sense, not for the rights of a human being, but that inalienable rights aren't being offered to what is being built as an autonomous being? Do you think the same abuses levied upon humans by corporations historically will be more prevalent, if not easier to inflict, due to a lack of oversight into the rights of AI itself.
I'm going to give you a controversial answer, Jeremy. I've done a few things in my life, and you shared some of this: I was a human rights lawyer for a period of time. I think it's pretty easy for me to say to you, non-controversially, that human rights are human rights, and that's not even about AI. Let's just stop it at humans, and not talk about corporations as having human rights either. We are down a weird and winding path at the moment, but I don't care what we build, as autonomous as they might be: human rights are human rights, and that's where we should stop the conversation. If we're going to build tools that somehow are intended to increase human welfare, then that should be a part of the conversation, that the things we create are intended to help us. Now, I may regret this when our robot overlords come knocking in fifty years, but for the moment, I'm pretty confident that actually we should live in a world where we prioritize human interests.
When you're on the AI version of Meet the Press and they say, Vilas Dhar, you said in twenty twenty five that we don't have any rights?
Okay, Senator, I've got no recollection of the events in question.
Right, let's get to another listener comment. It's something I hear a lot about when it comes to the tangible benefits of AI. Tony in Grand Ledge, Michigan writes: I've heard that many doctors are either retiring or planning to retire in the next few years, and that US medical schools are not graduating new doctors fast enough to replace them. What role do you see for AI in the medical field in the next ten to twenty years?
You know, I had a chance to go out to Stanford Medical School, and I met with a high school classmate of mine, a woman named Sarah Midendor, who leads the emergency residency program out there. She's an exceptional doctor, a caring and empathetic human, and I got to spend almost a day with residents, students, and practitioners in medicine. I asked them the same question, and they said, you know, we're really excited for all the ways that AI will be used in medicine, and we had lots of use cases; I'm happy to share those with you. But at the end of every one of those conversations we got back to the same point, which is, we don't see a world in which AI is going to replace a doctor. The medical profession isn't just about technical knowledge. It's not just about being able to do diagnosis. It's about supporting somebody through some of the most vulnerable points in their life. So the question this listener is asking is the key one, which is, what's happening here? Why aren't people going into medicine? I don't think that has anything to do with technology. That's an indictment of our healthcare system, of the ways we've built a system that's so unjust and inequitable that it has reduced the prestige and status of medical practitioners. Maybe this sounds like an easy answer, and I apologize, I'm not trying to dodge the question, but I'll just say this isn't a question of technology displacing medical professionals. It's about whether we center our social values, about whether we honor and respect what these people who give up so much of their lives to serve us do, and whether we can turn that profession back into something that people aspire to do, and build a pathway where they can do it without a lot of financial and social harm attached to it.
But let me just push you on the issue of, like, what AI could do. I think about somebody that has dementia or Alzheimer's. Could AI play a role in eldercare in the future?
As I said, I love talking about this, and if you ever want to have a whole new conversation where we just talk about AI in healthcare, let's do it. But let me give you three examples I think are amazing. The first is basic things like, when you have a chronic condition, adhering to a clinical plan that a doctor gives you is really hard. You might see your medical professional once every three months or six months, but in between there's a lot of things they've given you to do as a checklist, and sometimes when you're facing that mental degradation or other things, it's very hard. AI can be an amazing partner in that work, because it'll be with you twenty-four seven. It can help you identify behaviors and practices you're doing that are problematic, and it can also just remind you to make sure you take your pills. The idea of a care companion who can help you adhere to the medical plan your doctor's given you is amazing. There's a second category of things around diagnostics, right? Medical radiologists are great at what they do. AI can make them even better, so that they can do earlier scans, they can identify issues earlier, and they can make sure that people are in better care. And the last thing around medical care that's super important has nothing to do with the actual delivery of care. It has to do with how inefficient our health system is. All of the back end work that goes on in billing and negotiating, and how insurance companies negotiate and decide whether or not to pay out claims. These are things that are perfect for AI efficiency. And if we could cut a lot of costs out of that dead weight that hangs over our medical system, we could get our providers to spend more time with their patients and deliver better care.
All right, I want to get to one final caller here. It's kind of a goofy one, but maybe we'll have some fun with it. This comes to us from Ariel in Boulder, Colorado.
Consider AI for managing workers, and let's do Star Trek. We have Kirk, Spock, and McCoy, each providing their unique personality and their unique skill set. Now, DOGE would think all I need is efficiency, so it would be Spock, Spock, Spock. But your average Trekkie would say that would not work, because each provided their unique solution to their problems in every episode. So AI may be biased towards something according to the person who's managing it, and therefore would fail.
All right. So I think at the core of that, I mean, besides the Star Trek stuff, the caller mentioned DOGE, which is the Department of Government Efficiency, Elon Musk's department, and they're trying to cut spending in the name of efficiency. It's tying into a lot of fears people have over AI, that efficiency and expediency are prioritized at the expense of the collaborative spirit. What do you say to the caller and this fear that AI excludes the human element from the work that it's trying to support?
I don't know why you're not letting me talk about Star Trek, Jeremy. I just want to spend a minute on it.
You can do that too. If you want to go there, go for it.
But let's answer the actual, important question that you've asked. You know, there's a lot of metaphors out there that people want to use about AI. Some people want to call it a lens or a mirror or all kinds of things, and I think they're all evocative. But at the end of the day, and this is the core conceit of AI, what AI becomes is what we build it to become. There is no special supercomputer out there that's trying to make AI into an evil genius or a villain or even our best friend. It's people who are making decisions about what AI will look like. And so, I think I said to you when we had our first conversation, I often think of AI as a trojan horse. We can get a lot of people to talk about AI because it's in the hype cycle and people want to talk about AI. But for me, every conversation about AI starts with speculation, and it goes to technology, and then it ends up with, what are the decisions we are making as people and as a society about what these tools will do for us? It's going to force us to have some really hard conversations. If AI allows us perfect access to healthcare, are we okay with a world where some people have it and some don't because of a political decision? If it's going to displace workers, are we okay with the fact that it's going to remove that teller from the store line that you and I talked about, or are we going to say we value human experience in the workplace? Are we going to be okay with companies saying our profit bottom line says that we're going to solve for efficiency over empathy and care? Well, I think absolutely not. AI is going to force us to have a real hard conversation about the society we've built and whether we're happy with it, and what kind of society and norms, values, and principles we want to have guide it, and it's our choice to make. This is the hardest part of the argument, because it feels so abstract. But this is the most important takeaway from this conversation: none of this is going to happen without us stepping forward and saying we want a certain kind of future, and as a community and a society, we're going to come together to shape it. Or the alternative is we don't do that, and we just go along with what the tech companies do. That's the choice in front of us. It's a moral choice, not a technological one.
We have the power, you're saying.
We have the power if we choose to use it.
I have one more question for you, just a personal question about this. You know, the way that I'm mainly using AI directly is through things like ChatGPT, and I wonder, is it learning from what I'm telling it, or is it just learning from what I'm telling it in regards to what it's doing for me? Like, does it take the information I give it and use it more broadly than that, or not?
It's a really good question. I'll give you a non-technical answer, which is: it's always learning, but it's learning in a more abstract form. So if you tell it something about yourself, it's probably not going to take your personal information right away and put it into its corpus. But if ten people ask a similar kind of question, it's going to learn that that's a question that's important. And when you tell it an answer is good or not, it's going to remember that as well. It takes and aggregates its interactions with several billion people into the central model, and then it plays it back to us.
Well.
Thank you so much, Vilas Dhar, for joining us and answering these listeners' questions. Vilas Dhar, the Patrick J. McGovern Foundation president, really appreciate it.
Jeremy. This has been such a pleasure and so great to hear from people across the country who are curious and committed about these issues.
Absolutely, and thanks to you for listening. Help us out: share this podcast with your friends on social media.
Sign up for our weekly newsletter at Listen to the Middle dot com, and while you're there, support us by buying a Middle mug or a Middle t-shirt. They're available in the Middle merch shop. And I'm allowed to say that on the podcast, but I can't say it on the radio show, so you are very important to supporting our merch shop. I'm Jeremy Hobson. I will talk to you later this week.