On this episode of The Middle, our guests try to answer your questions about artificial intelligence as it becomes an ever-increasing part of our lives. Jeremy is joined by Ina Fried, Chief Technology Reporter for Axios, and Vilas Dhar, President of the Patrick J. McGovern Foundation, which is trying to make sure AI is used for good. DJ Tolliver joins as well, plus calls from around the country. #ai #artificialintelligence #chatgpt #jobs #digital #machinelearning #technology
The Middle is supported by Journalism Funding Partners, a nonprofit organization striving to increase the sustainability of local journalism by building connections between donors and news organizations. More information on how you can support The Middle is at Listen to the Middle dot com.
Welcome to the Middle.
I'm Jeremy Hobson, along with our house DJ, DJ Tolliver. And Tolliver, so much new stuff this week! We have a brand new, fresh Middle website at Listen to the Middle dot com.
We have a new merch store with Middle T-shirts, and we have a new podcast that's been around now for a couple of weeks, an extra called One Thing Trump Did.
Uh yeah, you can actually get that on The Middle's podcast feed, as we dive into exactly one thing Trump did. Not two, not three, just the one. Also, when are you gonna get me a T-shirt, man? I didn't know that was coming.
It's in the mail, you know, Tolliver.
As we prepare to have a conversation about artificial intelligence, I'm going to come clean and say that the title of One Thing Trump Did was AI's idea.
I wanted to call the podcast extra Trump Tracker, but there already is a Trump Tracker. So I asked ChatGPT for a bunch of other names, and it came up with This Week in Trump Chaos; Breaking Trump, which is a play on breaking news; and WTF Did Trump Do?
So we settled on One Thing Trump Did, and it is available, as Tolliver said, in The Middle's podcast feed, in partnership with iHeart Podcasts, on the iHeart app, Apple Podcasts, wherever you listen to podcasts. So we're going to get to your questions about AI in just a minute. But first, last week on the show, we talked about transgender rights. A lot of calls came in. Here are some of the voicemails we got after the show.
Hey, my name's Aaron Castillo in Los Angeles. I'm a trans woman. This attack on the trans community has been really scary because I don't feel safe. The pervasive culture and the intense attacks have made it feel, to me at least, that there is a culture of less acceptance towards the trans community.
This is Carmela calling from Atlanta, Georgia. I hold a PhD in molecular genetics. People love to use basic shorthand to try to justify these political views and it's silly.
My name is Steven Hoyle. I'm calling from Hohenwald, Tennessee. I do think that there is reason to be cautious, particularly for very young children, about moving toward medical transition. And yet we don't need to be falling on the sword over a tiny minority of a tiny fraction of people who aren't causing problems or doing any harm to anybody.
Well, thanks to everyone who called in. So now to our topic this hour: artificial intelligence. We actually did a show like this last year and asked for your questions, but a lot has changed with AI in the last year. The technology is improving rapidly, and a new survey from Elon University finds more than half of Americans are now using AI tools like ChatGPT and Gemini and Claude. But there are a lot of questions, and that's where you come in. What are your questions about AI right now?
Tolliver, can you give the phone number, please?
Elon University, uh, has nothing to do with Elon Musk. Just let's be clear, okay? To clarify: it's eight four four four Middle. That's eight four four four six four three three five three. Or you can write to us at Listen to the Middle dot com, and you can also comment on our live stream on YouTube, TikTok, Facebook, and Instagram.
I kind of wonder if that's an issue at Elon University. Right now, let's meet our panel. Vilas Dhar is the president of the Patrick J. McGovern Foundation, which is trying to make sure that AI is being used for good.
Vilas, great to have you on the show.
Oh it's a delight, Jeremy. And now I gotta get ready for this T shirt swag. I think it's coming my way.
Yeah, every guest. Ina Fried is also with us, one of the best tech reporters out there, currently at Axios. Ina, welcome to the Middle.
Thanks Jeremy. Great to chat again.
And Ina, before we get to the phones, just give us a lay of the land for AI right now. What is the elevator pitch about how it's being used in America in twenty twenty five?
Well, I think what you did to come up with a name is a good example. I think AI is being used by most people at the edges, to do a particular project. It could be a personal thing, it could be a work thing, but it's still at the very early stages. A lot of businesses, for example, are still figuring out, you know, how are they really going to use it at scale? They're running a lot of experiments, and I think individuals are curious. I think most people are asking a question: if you're high school or college age, you might be seeing just how much help with that essay you can get, which has certainly caused a challenge for educators, which I'm sure we'll get into. Or maybe to draw a picture. But again, I think we're just scratching the surface. Everyone's still trying to figure it out.
Vilas, how would you answer that question?
Yeah, I think Ina's right.
You know, I often say it almost feels like the five stages of grief, Like we had AI come out into the world, and all of a sudden, everybody was curious, and then they were overly excited, optimistic, and then they were existentially scared, and then it feels like maybe now we're getting to a place of pragmatism where people are asking a foundational question, Look, I know what I need to do in this world.
Can AI actually help me do that better?
So I was just at South by Southwest in Austin, and there were like eight million AI sessions going on, and I went to one of them, which was about media. And the example that they used, and this was somebody from Amazon, they were showing off a product that they have that basically allowed them to take an entire season of a TV show and, in a few hours, create a Hollywood-style trailer to recap what happened in the last season, which would take marketing professionals weeks.
And a lot of money to do.
And it made me wonder, are these marketing professionals about to be out of a job? Ina Fried, what do we know about the job loss that is already occurring and may occur because of AI?
I think right now you're mostly seeing job losses that would have probably occurred anyway that are being slightly accelerated. But I don't think we've seen the real economic disruption that's coming. And again, the early learnings were that it's not like you could just have ChatGPT, this generic tool, replace an entire job. What you found is, one, you need more specialized tools like the one that you mentioned that Amazon's doing, that are designed to do a specific job. And two, you need the technology to be good enough. But it's coming. I mean, the pace of progress is faster than anything I've seen in my twenty five years of covering tech. And I think people are making a mistake when they look at today's technology, especially the most generic forms of it, and say, well, this isn't going to take away somebody's job. Look, even if it can do only half the tasks of a job, a company is not going to pay the same number of people to do half as much work. They're going to have half as many people in that department.
Jeremy, did they show you the trailer, or did they just talk about it?
Yeah, they did, actually, and it was okay. It wasn't probably as good as what humans would be able to do right now. But Vilas, on that point about jobs, how do you create a world that's filled with artificial intelligence, but that also includes human beings who are still doing work and getting paid for it?
You know, Jeremy, I'm going to be a little wonky with you in this first answer, which is: we often think about jobs as if they're either going to go away or we're going to keep them for humans. I think there's a different story to be told here. Our entire economic system is built on this interplay between capital and labor, where you have the resources that come in, the people who control those resources, and the people who do the work. One of the things I'm deeply concerned about is what happens when you fundamentally alter that balance.
I'll give you an example.
We are now talking about this word that's kind of everywhere, much like at your South by Southwest conversations: agentic AI, the idea that people are going to use automated workflows that are run by AI to go off and do really complex things. One of the things you might envision is a transformation of a business that goes from having a thousand workers to maybe having a set of agents that actually control the flow of work, that guide people to do their tasks, that make sure they're doing them well, that evaluate them. But this fundamentally shifts the balance of who has power in the workplace, because now, if you have capital, you can control the AI system, and if you can control the AI system, you can direct what people do in a much more authoritarian way.
Well, I was just going to add on, because I totally agree with what you're saying. And one of the things of particular concern is, in the past, technology has tended to make the best workers better, and it's disproportionately advantaged them. What we've seen that generative AI is really good at is bringing entry-level workers up to the median, and faster. On the one hand, that's great, it gets people trained. But to Vilas's point, and I think where you were going before I so rudely cut you off, I think it devalues and takes power away from the average worker. I think the risk is that workers become more fungible, and there's more power for the companies and employers that can afford to own these AI systems.
And who gets to decide that? Is it the AI that's evaluating everybody and telling you how good they are and how not good they are?
Well, I don't think that's inevitable, Jeremy. I think that's the point: a lot of what we're doing is just following the inertia of a few people who are creating a few systems that are out there changing a lot of things for all of us, people like you and me. But I don't think it's inevitable that that's the way we build this future. I actually think we could sit down and have a real conversation about what it means to have a worker-centered future, one where we actually talk about how these tools don't just think about controlling for efficiency or how these businesses become better and more productive, but actually center dignity in workers' hands. We could have a variety of alternative futures ahead, but it feels like sometimes we're starting from a very different place when we try to have that conversation, a place that sounds like: I just raised a couple hundred million dollars to go off and build a tool that lets me control my workplace more effectively.
Well, yeah, Ina Fried, is anybody you're talking to trying to build a worker-centric future of AI?
They are, but they tend not to be the ones raising the most capital, and, you know, it goes to this power dynamic. I think what Vilas said that is really the critical thing is: the AI future is not guaranteed. There will be a future with AI in it, but what that future looks like depends on societal norms, what regulations are passed, what we insist on as societies. And I do think right now we're sort of letting the tech companies take the lead. I think we need to be vocal about what we like and don't like. The nice thing about AI, or one of its benefits, is it's very accessible. People can try it out. And if there's one thing that I would encourage people to do, it's try it. Whether you like it or don't like it, you're going to be much better able to have that conversation with some sense of what the technology can do.
And when you say try it, are you just talking about, like, ChatGPT or something like that? Is that the easiest way to try it right now?
I think the easiest starting points are these chatbots. And it's not just ChatGPT; Microsoft and Google have them. A lot of the technology is free, at least at the basic level. You may not get all the features, but you get a lot of it. And I think it's figuring out, looking at your own career and saying, you know, how is this going to change my job? How is it going to change the way I raise my family? That sort of thing. This isn't a future we have to have thrust upon us.
Jeremy, again, our number?
That's right, our number, Tolliver, is eight four four four Middle. It's eight four four four six four three three five three. And you know, Tolliver, before there was ChatGPT or Claude or Alexa or Siri, and I apologize if I just made everyone's phone wake up by saying that word,
There was a chatbot called Eliza.
Yeah.
It was developed in the nineteen sixties at MIT by a scientist named Joseph Weizenbaum.
Eliza is a computer program that anyone can converse with via the keyboard, and it'll reply on the screen. We've added human speech to make the conversation more clear.
Men are all alike. In what way? They're always bugging us about something or other.
Can you think of a specific example? The computer's replies seem very understanding, but this program is merely triggered by certain phrases to come out with stock responses.
And by the way, that sound you just heard, for our Gen Z listeners, that's what keyboards used to sound like.
So yeah, exactly.
At some point we decided, you know what, it doesn't need to make noise when you press the buttons.
We'll be right back with more of your calls. On the Middle.
This is the Middle. I'm Jeremy Hobson. If you're just tuning in the Middle is a national call in show. We're focused on elevating voices from the middle geographically, politically and philosophically, or maybe you just want to meet in the middle. This hour, we're asking for your questions about artificial intelligence.
Are you using it? Does it concern you?
Are you excited for the possibilities? Tolliver, what is the number to call in?
It's eight four four four Middle. That's eight four four four six four three three five three. You can also write to us at Listen to the Middle dot com or on social media.
I'm joined by Vilas Dhar of the Patrick J. McGovern Foundation and Axios technology reporter Ina Fried. And let's go to the phones, to Renee, who is calling from Houston, Texas. Renee, welcome to the Middle. Go ahead with your question about AI.
Hello. Hi, yeah. So my question is this: in a world where it is becoming more and more difficult to understand the true nature and intent behind certain media products, for example, social media, where the algorithm is designed to keep you on there for as long as possible, and some news companies maybe are trying to push certain narratives, I'm wondering, first of all, how likely is it that AI, specifically chat AI programs, will be used to push certain narratives? And if this is likely, how are we to be able to discern this and protect ourselves from this, quote unquote, propaganda?
What a great question to start off with. I'll go to you first, Ina, on that.
You know, I think it's a very smart and reasonable fear. I think we haven't really seen how far this is going to go, in part because right now a lot of these services are at least built with the user in mind, in that basically you're paying a fee and it's delivering a product, and they want you to like the product, which, you know, means you're paying in many cases, although they do have free products. I do worry about a future in which advertisers are paying for the content and suddenly the interests of the chatbot are not aligned with me as the person using them. And then you think of how a political person might use it to reinforce their narrative. So I do think it's the right question to be asking, and there aren't really a lot of rules around that. And I do think these systems are very persuasive as they are, and right now they're not programmed necessarily to persuade us to a viewpoint or to buy something. But I think there will be chatbots that are, and they will probably be pretty effective.
Mm hmm.
There was an Ipsos poll that found that people are worried about AI, but that many Americans actually trust it more than humans to not discriminate or show bias. I wonder, you know, what you make of that contradiction.
Know, I think it tells us a little something about the sad state of where we are with public trust today.
Unfortunately, that's right.
But I gotta say, you know, I disagree with you just on the order of a few degrees here, which is: I still believe very deeply that there is a way that we build AI for you and me, for hundreds of millions or billions of people. But right now we build AI for the tech bros of Silicon Valley. And what I mean by that is we're building tools that promote the interests of these companies, often at the expense of users, and even when we invest in products that are good for users, there's an ulterior motive. They're trying to capture attention, they're trying to capture screen time, and they're trying to sell advertising. So I think there was a part of that question that was, what do we do about it? And to me, I think one of the things we have to do is invest not just in a response or reaction, but maybe even prospectively think about what kind of AI we would build that actually supports what you and I want to accomplish in the world. If it's about media, then what does it look like for me to have an AI system that evaluates the kind of news I'm being fed and actually helps me understand where there's bias, or whether there's misdirection or manipulation? And I have to say, I don't see the same kind of public investment in those tools as I do on the other side, which is why I think civil society needs to enter this conversation in a really meaningful way.
Let's go to another call, to Daniel, who is calling from Kansas City.
Daniel, welcome to the Middle. Go ahead.
Hey, thanks for taking my call. Yeah, I have been concerned recently in seeing how many people are being fooled by AI fakes, whether it's social media posts, like your previous caller talked about, or actual, like, scams and frauds; people are actually being really damaged by this. So my question for you, the experts, would be: how close do we think this is getting to human intelligence? And that's a big question of AI. If it's fooling people already in lots of different ways, do we think this is actually approaching, or encroaching on, human intelligence?
I have the exact same question, and I wonder when we're going to get to that point.
Ina, what do you think?
Well, I think, unfortunately, and this is just a sad fact about where we are, you don't need human-level intelligence to scam people. I don't think we're there in terms of deep fakes. Voice cloning is very good; you can make a voice that sounds very much like somebody. You can do videos. But most of the scams right now are a level below that, and they're already working. So to me, that says we need to really be cautious of what's coming and prepare for a world in which we can't necessarily just trust, because we saw a video, that that's what someone said. There are some technological answers, but I think media literacy is really the key. You know, I spend a lot of time telling my parents, look, you know, if you hear a phone call, it's not me, it's not your grandson calling; you've got to interrogate that. And I think that's the world we need to prepare for, because the technology is going to make it trivial. I do worry, particularly as the systems get more powerful, about their ability to do scams and fraud at scale, so not just being able to target one person at a time, but target everyone with a lot of personal information. In the past, you know, people would do a phishing scam and they'd basically try and gather the one or two people that are most gullible at the end. Now you can really target everyone with enough personal information to sound pretty darn convincing.
So when we talk about all these bad things about AI, and scamming people and all that kind of thing, Vilas Dhar, is it too late at this point if we decide we want to stop and just say, no more AI, we're done?
Well, there's a really important truth that I think is actually right in the question we were asked, which is, the question was, well, what about AI fooling us? I want to be very clear: what that question really is asking is, how are people using AI to fool us? And that came through in Ina's comment as well. These AI systems, at least so far, don't do anything by themselves. They don't go off and have their own ideas; they don't go off and try to hurt people. There is a fear around that I think we should just dispel upfront. They're still very much directed by people. And so the question becomes, as Ina put it, are people trying to scam each other, and are they using these tools more and more effectively to do so? Yeah, I think so, and I think that's a real problem. But the way we address it isn't to try to stop creating the tools; it's certainly not to try to put that genie back in the bottle. It's going to be to say, how do we build those new social norms and principles? How do we make sure that we think about whether our legal institutions, our law enforcement, are equipped to deal with these new kinds of threats? How do we make sure that our technologists are building safeguards into tools so they can't be used in ways that are just wildly abusive? And maybe most of all, how do we make sure that you and me, the people in my town in rural Illinois or wherever, have a sense of the reality of the situation instead of just the hype that's coming through the headlines, that they actually know what these tools are capable of, and how they can take on better practices that make them more robust and resilient in defending against these kinds of attacks?
John is calling from Chicago. John, welcome to the Middle. Your question about AI?
Yeah, my concern primarily is about education. I'm a high school teacher out this way, and I will often see, you know, there's all these debates about whether or not you're allowed to have cell phones in a classroom or whatever. But what I see AI predominantly being used for is my students trying to get out of doing the trivial, as they see it, you know, everyday homework assignments. And then we finally get around to an assessment of some kind, and they're bombing them horrifically, and I'm getting parent complaints and emails. And there's not a way for me to police whether or not a child is actually activating a brain cell prior to walking into, you know, the day of the exam or the day of the quiz. And so it's really doing a disservice to a lot of students who aren't using it effectively.
John, can you tell, when they turn in something, that it's from AI?
Not initially, no, because a lot of the stuff that I still collect is paper copies, or we might use, you know, an online platform where they can submit them digitally, but oftentimes it's like a photo of their work. And if they're getting copies of the work or just the answers, it's difficult to tell whether or not they arrived at it authentically themselves, or, you know, it's like the age-old copy-from-a-friend-prior-to-class sort of thing. It's right along those lines. And obviously I can tell when they take the assessment: you got A's on all the homework, but you clearly didn't understand the material when it happened. But it's opening a whole new avenue for all those same issues we've always had in education.
Yeah, there's one tell that I know of, where I can tell when something's from AI, which is that they use that rocket ship emoji that nobody ever uses, unless it's from ChatGPT.
I think that's something we humans like to use.
John, thank you for that. Ina, very interesting. You know, what about that in education? What do we know about how this is affecting our ability to learn things and just keep the students in line?
Well.
This is interesting, because a lot of people are touting AI as something of great promise for personalized education and for scaling education in places where the teacher model doesn't scale. Yet some of the first people to have to struggle with the implications of AI are teachers, especially high school and college teachers. I think there are a lot of answers short of banning it. What we saw early on was a bunch of school districts just ban ChatGPT, and that's not a long-lasting, permanent solution. I do think we're going to have to change some of the ways that we evaluate things. Some of that can be handled technologically. I wrote last week about Turnitin dot com, which used to build its big business on detecting plagiarism. They're creating a canvas where students can show their work, so they can use AI in whatever ways the teacher has permitted, but the teacher can see, or see a summary of, the work that they actually did. How much of that essay were they writing, versus how much fact checking were they doing using the AI tool, or were they just having the AI do the work? And I do think the future will probably look somewhat like that. It'll look like more oral exams and things where you can tell how much a student is learning. I do think educational systems need to adjust, and I think this is something we've done before as a society. And I think teachers are going to have to build the relationship with the students that says, look, at the end of the day, you are going to have to come in and take a test and prove you know it, so doing the homework with ChatGPT isn't going to help anyone.
Yeah, Tolliver, I know some comments are coming in online.
Yeah, okay, this first one is in all caps, so just know that: How do I tell these corporations to stop yapping on and on about it? Seriously, I'm fine with AI with limits, but still they keep going on and on with, oh, our new AI model is superior to yours. Please read, at least. I forgot dinner for this.
That's what that commenter says, exactly.
Tony from Grand Ledge, Michigan says: I have heard that many doctors are either retiring or plan to retire in the next few years, and that US medical schools are not graduating new doctors fast enough to replace them. What role do you see for AI in the medical field in the next ten to twenty years?
Vilas, what do you think about that role for AI in the medical field?
Yeah, I love it. You know, I just came from giving grand rounds at Stanford University, where I got to talk to some of the most amazing medical students at one of the greatest hospitals in the country. And I asked them what they think about AI, and you know what, to a T, they were excited about what it might mean for them five, ten, or fifteen years out. And yet they still said, you know, today, I'm still learning to practice medicine the same way I would have five, ten, or fifteen years ago. And so in that story, you're seeing what we see across society: people are excited for what AI might create for them, but today it hasn't yet totally transformed our lives. I think in that paradox, you've got something we really have to think about. How do we make sure that people are excited to become doctors when what we're doing is telling a lot of stories about what medicine might look like, instead of bringing it back home to what medicine is for: to make people healthier, to make sure that we're investing in the kind of social structures that let people live dignified lives. This is the distraction about AI that really scares me: sometimes we get so focused on talking about the tool that we forget to really analyze the problem that we care about through that lens of human experience and dignity. If we did that, I think we'd shape a very different kind of AI for the future.
And if I can jump in, I totally agree with Vilas. And I think medicine is a really good example of where the human and the AI can really complement each other. If you think about the career path of a doctor: a doctor goes to medical school at the beginning of their career, they get ninety percent of the training they'll ever get, and then they have a whole career. And so we still want that doctor. We want that human being; I don't want to just see a chatbot. At the same time, I think AI, when used properly, can help suggest things to the doctor; it can cooperate with them. I think some of the complexity and nuance, though, does come with how do you make sure that the humans still have, to Vilas's point, a valuable role, a meaningful role, so they go into the profession, and also enough training. We can't have the AI doing all the diagnostics, all the grunt work, and still have an experienced doctor. But I do have a lot of optimism that that is a field where humans and AI actually can complement each other quite well.
Let's sneak in a call here. Michael is in northwest Alabama. Michael, welcome to the Middle. Go ahead with your question.
Good evening, thanks tremendously for accepting my call. I'll make this as brief as possible. You had some wonderful questions over there, and when you talked about the possibilities of AI solving human problems, why don't we ask more of those questions? I'll challenge you on two of them. Safeguarding privacy: my worries about privacy under AI, everything from data being used and sold to companies and corporations and employers, going all the way to the way the Chinese government uses AI on ordinary people. And also, you talked about phony voices and phony accents. What about doctoring photos and video footage even more skillfully and seamlessly than Photoshop can do? A good example that I fear is not only companies, I mean media with biases using video footage that's been doctored up to buttress their biases, but also using videos with phony information and photos with phony information in court. And if you have anybody you can think of, or I can think of, as our enemies, putting their faces on pornography, sort of like those famous JibJab musical electronic Christmas cards, for their employer to see and get them fired. Thank you, I'll take the answer off the air. Thank you tremendously. I'll be with all of you.
Appreciate it. Vilas, what do you think? The privacy issue is obviously a huge one for people.
I gotta tell you, I love the throwback reference to JibJab to start off.
That makes me super happy.
Look, you know what's interesting is both of these questions have the same framework attached to them, which is: people are doing things using these technologies that fundamentally affect our rights. And we've got to have a conversation about the tech, about deep fakes, about watermarking, about the ways that we'll make sure that we can verify information. But at its core, there's something more fundamental, which is: you and I, our governments and our systems, don't have a single agreement about what's okay and what's not. And this is where we're getting stuck, because every time we come up with one of these, we can think of these extreme examples that really offend us. I don't want my face being put out there with a message that I didn't put on it. I was just this week with a Bollywood actress, a very famous young woman, who told me about the experiences she's had with people creating deep fakes of her, and they're terrifying. But the problem is we don't actually spend the time to think about what we want to make sure we allow and what we don't allow. And we need to invest in shared governance and the mechanisms to make sure that we can do that.
Well, Tolliver, it has been twelve years since Hollywood imagined a world where our digital assistants are so human-like we can even fall in love with them.
Yeah, the movie Her starred Joaquin Phoenix, who fell in love with an AI played by Scarlett Johansson. And how can we not play this iconic clip tonight?
After you were gone, I thought a lot about you and how you've been treating me, and I thought, why do I love you? And then I felt everything in me just let go of everything I was holding on to so tightly, and it hit me that I don't have an intellectual reason. I don't need one. I trust myself, I trust my feelings.
Haven't we all had a moment like that with Siri, Tolliver?
One or two.
She's a little bit intrusive sometimes.
Oh man, We'll be right back with more calls on the Middle.
This is the Middle.
I'm Jeremy Hobson. In this hour, we're asking for your questions about artificial intelligence.
You can call us at eight four four four Middle.
That's eight four four four six four three three five three. My guests are Vilas Dhar, president of the Patrick J. McGovern Foundation, and Ina Fried, chief technology reporter at Axios. And the phones are lit up, so let's go to them. Liz is in Birmingham, Alabama. Liz, welcome to the Middle. Go ahead.
Thank you very much. So this is more of a sort of personal take on this. I have a ten-year-old and a fourteen-year-old, and I find myself thinking, you know, how can I help them to make decisions about their future career, and college, trying to sort of future-proof for industries that will maybe go away or, you know, not be ones that are very large in the future. So it's a lot of information, and it's hard to know at this point, you know, in eight years or four years, where
it's going to be.
So maybe I'm just looking for some help.
I don't know.
Yeah, great, great question, Liz. Thank you. Ina Fried, where should Liz's, where should those children go to college? Or what should they study?
Well, I totally empathize. I have a twelve-year-old and am thinking about the same sorts of things. I think it is really hard to know. I wouldn't claim to know how the job market will have changed in four or eight years. I think we can know what are some of the skills that are going to be valuable in an AI world, and I think they're the things that we as human beings are uniquely good at doing: analyzing, bridging the gap between an answer that a book, or in this case an AI, can give and what a human needs to act on it. So I think it's a combination of critical thinking, media literacy, and also where people's passions are. I think ideally AI will bring us to a world where people will be able to better align their career with their passions. I'm not convinced that's the AI future we're building, but it's certainly the AI future I want: one where the AI allows us to take our own curiosity, our own interests, and use that knowledge in conjunction with technology.
That's my hope.
Yeah.
I have to say, I have been inspired by AI at times, when I'm trying to come up with ideas or I'm working on something and AI gets really excited about it and says, oh, you should do this and this and this. I'm like, oh, thank you. Okay, give me a little, you know, kick to get things going here.
Let's go to Watson, who's in Atlanta. Watson, what are your questions about AI?
Hi, I really appreciate you guys' nuanced perspective between, sort of, you know, acceleration and then also braking. So I'm really curious to know: what do you guys see as being the risk of talking about AI risk, and does it get in the way of actually steering or shaping this toward the goals that we want?
Great question. Vilas, what do you think?
Think I missed the middle of that, Jeremy, what.
Is the risk of talking about the risk of AI basically holding ourselves back from from you know, getting as far as we can with A by just worrying about what could go wrong?
Super good, I love this. I think two things happen.
One is, we started talking about the risk of AI as if it was something existential:
AI is going to destroy humanity.
But there's a different risk to AI that we should be talking about, which is: how are we going to make sure that we are minimizing the risk of power and institutions that actually take the world that we have today and cement it for generations to come, even when it's unequal? So we do have to talk about risk, but we can't just talk about risk by itself. We have to talk about risk along with governance, along with management, along with who's making these decisions, and broadly about democratic participation. You know, the one thing I'll tell you, and I hear this all the time from a lot of the folks who are running these AI companies, is all you ever hear is an almost juvenile sense of bigger is better: more power, more compute, more data, build bigger AI systems, and everything else will figure itself out. And that's just not how the world works. We should be thinking about: let's take the AI we have today, figure out how to use it to make the world a better place, and in doing so, make sure a lot of people get to feel and see and use these tools and do something good with them.
Speaking of making the world a better place, Ina Fried, what do we know about the environmental impact of AI and all those servers? And how do we make sure that we can make the world a better place with AI without destroying the world in the process?
Yeah. And I think that's been one of the challenges with this risk conversation: it was so focused on the existential risk, it didn't deal enough with the risks that are here right now, and misinformation and bias are two of them. But as you point out, the climate impact is another, and I think there are reasons to give that time and attention right now. I do believe we tend to get better at making technology energy-efficient over time, so I may be a little less worried, but it won't solve itself, and we do have to place a priority on the environmental impact and be smart about how we use it in this moment where it is very energy-intensive. I think there is a sense among those that run these big data centers that they need to be powered sustainably, and so again, I think there are reasons for optimism. But I'm not the kind of person that subscribes to what I hear a lot from the tech companies, which is, oh, well, AI is going to let us develop this great climate solution that we don't know, it's going to magically appear, so we have to do AI. That's not, to me, a good approach. That's like a child saying, oh, it'll all work out in the end.
Forest is in Commerce City, Colorado. Forest, what is your question about AI?
Hi everyone, really appreciate you taking my call. My background is as a pediatric nurse, and lately I've been seeing an increasing number of patients that have been using AI to kind of fill in the gap of, like, their social connections, like using chatbots to make friends, as they describe it in their own words. And my question is: is it possible for us to safely regulate this so our kids can continue using this technology in a productive way?
Way interesting VELAs what do you think you know?
You know, there are some early and interesting bits of research that demonstrate that for a lot of folks, having these chatbots, particularly when they're designed by therapists and psychologists, can actually be really helpful in building towards an emotional maturity.
What does that mean?
Well, just like any other activity, sometimes practice makes perfect, and having somebody that you can talk to, that you can express yourself to, and making sure that there's a healthy and wholesome response coming back, can actually help us be better at connecting with each other. I want to go back to Ina's great response about what children should be thinking about as they think about careers. Well, one of the things we also should be thinking about is where we build empathy and connection with each other, so that we can do those jobs that machines will never be able to do, the ones that help us connect to each other and navigate difficulty and complexity, whether that's as commercial as customer support or, maybe much more meaningfully, helping each other guide ourselves through this transition that's coming.
Scott is calling from Boston, Massachusetts. Hi Scott, welcome to the Middle. Go ahead.
Thanks.
I just have two quick comments. First, my personal use of AI: I have a small business, and I like to use it to help me write product descriptions, since I'm not a very creative writer myself. And then second, to touch on the teacher from earlier with the writing prompts. I have a teacher friend who, when they type up their writing prompts, will put in size-one font and in white letters, right in the middle of it, something that says, like, mention Godzilla. So therefore, if the students just copy and paste the writing prompt into a chat AI, then the essay it writes will mention Godzilla.
Wow.
Yeah, people are figuring out all kinds of ways to get around it. Scott, thank you very much for that. Tolliver, what else is coming in online?
I was gonna say, this is like the most comments I've ever seen. Can we talk about why we haven't gotten to attribution in AI art yet? It's rather as simple as how we deal with music, is it not? How different is this from the PC on every desk? No more typing pools or admin assistants; attorneys type their own docs. I'm sixty-seven, and I use it to combine data into spreadsheets I had to hand-enter before. And then Nathan says, AI should be used to our advantage to move us toward a universal basic income.
That's a big one.
I mean, okay, let's start with that.
Vilas, I got to go to you on that, because this is something that has been brought up: that if we're going to make this work for people in the long run, we've got to figure out how they're all going to make money down the road.
And what about that?
How does that fit into what you're working on in terms of making AI work for the public good, the idea of a universal basic income?
You know, I came from a part of the Midwest that has a work ethic that's not just about making money. It's about finding dignity in what we do. And I want to be careful.
Look.
Universal basic income, this idea that we just give everybody a basic amount of money, whether they work or not, and that's enough to sustain them: that's not the answer to all of the problems we're talking about. If people say, well, I can't get a job and I want one, writing them a check isn't going to fix the problem.
So instead, what we need is to really conceptualize what a new economic model looks like when people don't get the jobs they want, but there are other things that need to be done, and done productively. How do we make sure that people are equipped with the tools to be able to go out and do that? UBI might be a great idea, and we should do some experiments around it, but we should also think about how we really invest in workers, in human power and dignity and agency in their jobs, in five, ten, or fifteen years. You know, I sometimes joke I don't even know what I'm going to be doing, much less what I should be counseling some young person to be doing. But when we go to figure it out together, we've got to do it with the right intentions in mind, and that's a good starting point.
Let's go to Allison, who's in Milwaukee, Wisconsin. Allison, welcome to the Middle. Go ahead with your question.
Hey, yeah, thanks for taking my question. First, my comment: I just think it's so naive to have us playing around with ChatGPT when it's very clear that large organizations are going to use AI to do every kind of reasoning task that computers can do better than humans, and that's a lot of things. And as far as, you know, what the imperative is right now: it's not play around and make images and avoid the risk. It's confront the risk and mobilize labor to put pressure, whether it's universal basic income or just regulation, and get out to vote for people who are going to protect labor. That's where I am. What do you think about that?
Yeah, okay, Allison, thank you. I mean, the idea that we are just feeding more data into these AI chatbots so that they can use it against us, I guess, is kind of part of what Allison was saying there at the beginning, you know.
Know, Yeah, I mean, I think there's very valid concerns that are prompting that. I'm not sure I one hundred percent agree with the approach though that by avoiding chat GPT we're somewhat how doing that. I do think we should be smart consumers of the technology and pay attention to privacy policies. There are really different settings you can use. You can decide, you know, hey, I want to you know, use an incognito mode like you might in your browser, or I want this data saved. I think you can decide on AI systems that will use your information to train future models and those that won't. I think the broader point of protecting human product work product is going to be really important, and we're already seeing it in the entertainment industry. I think, you know, there's a real divide. There's two legal arguments.
You know.
One is this idea that, you know, if you use my work to train your system, I should get credit and compensation. And then on the AI companies' side, the government asked for comment on its AI strategy, and both Google and OpenAI submitted comments today saying, we want the right to train on everything; anything that we can publicly find, we should have the right to train our systems on it. And that's a very profound discussion that we need to have as a society.
Let's get another call in. Alex is in Columbia, South Carolina. Hi, Alex, what's your question about AI?
Well, mine was really related to the moral issues and ramifications of AI. Like, you know, if AI can read Kant and Hippocrates, does it give it a soul? Or, you know, is there a soul as far as AI is concerned? You know, I'm sure there are deep religious concerns concerning it, especially in the human decision-making process for health care and specifically, you know, lawfare, so to say. Given that, also, you look at its ability to ignore racial factors in the decision-making process. In light of George Floyd and this massive outcry over systemic racism and implicit bias within the legal system, you know, are they planning on utilizing OpenAI, or a system similar to that, in the jury process or in the litigation process to reform it, let me ask you, which is removing the human element?
Yeah.
We mentioned earlier that there's a poll that said that people tend to trust AI more than humans to not have bias. But it sounds like you're not in that camp.
You think the AI is going to be more biased.
I don't feel either way. I'm not saying either way, because I don't know the system architecture. And I do understand that it's a product, and its creator is a capitalist, and capitalism preys upon the weak to reward the few, the rich. I mean, it's just a dog-eat-dog world, and that's all right, because that's how the big wheel turns, you know.
Yeah. And it's interesting that we've had so many calls, Vilas and Ina, that are sort of getting at the fact that these big corporations are the drivers of AI right now, you know.
Yeah.
And I think the piece that I took away from that, which I think is a really important thing that doesn't get talked about enough, is: when we're adding AI to these important decisions, are we really scrutinizing what's underlying the AI's decision? Because AI, you know, at its best, can apply more equality and equity to its decisions. But it's got to overcome a bunch to get there. First of all, the training data is often based on all the bias that's existed in the human world, so if we aren't careful, we're just codifying that bias. And we've seen that in early AI systems that decide things like parole and loans and housing, very consequential things. So we need to be really careful before we hand over even partial decision-making power, both as to the bias and as to how we are applying this, how we are using it. So I don't think it's an either-or thing. But I definitely think we need to be paying attention to noticing the bias that exists in the training data, because otherwise what you have is something that looks just and fair and has compelling-sounding reasoning attached to it, but is no better than somebody who has their own biases.
Let me just finally, and we've come to the end of the hour, but Vilas Dhar, let me go to you finally on the question of regulation. The US government, and all governments, are notoriously slow in figuring out how to regulate tech, because tech moves so fast and they've got to get their heads around it and all of that. I mean, we've just had a TikTok ban that didn't go into effect and maybe still will, but you know, it takes a while.
If you were to make one recommendation to the government right now in terms of regulating AI, what would it be?
You know, Jeremy, I've spent twenty five years working on AI. I'm probably one of the world's leading experts on the question you just asked me, and I wish I had a magic-bullet answer for you. But I'll tell you two things that have to change. The first is, we have to stop talking about government's role as reacting to tech companies, or limiting them, or changing the way they work. That can't be the point of regulation. The point of regulation should be to think about what a positive vision of an AI future looks like and put in place all the pieces necessary for that, from public funding and financing, to protecting privacy and autonomy, to maybe sometimes, when needed, restricting tech companies, but also fostering an ecosystem of positive growth.
That's one. And the second is, we can't just do this inside of the US alone. This has to happen as part of a global effort, and we're beginning to see the seeds of that. And again, I'm an optimist, so I'm gonna leave you with a bit of optimism. This might be the first topic where we can actually step above some politics and really think about the policy that's going to affect every person on the planet, because we all recognize what might happen if we get this wrong, and I think we can begin to hope for what we might be able to do if we get it right.
That is a great note to end on. Vilas Dhar, the president of the Patrick J. McGovern Foundation, and Ina Fried, chief technology correspondent at Axios. Thank you so much for coming on and answering our listeners' questions.
Thanks, it was a great discussion. What a joy. And thanks, Tolliver.
You're awesome.
Yeah he is. Thanks, everybody loves Tolliver.
Okay, next week we are live at Colorado Public Radio in Denver, in a state that is both a hub of renewable energy and also oil and gas. We're going to be talking about the future of American energy in the context of President Trump saying he wants to drill, baby, drill. As always, you can call in at eight four four four Middle, that's eight four four four six four three three five three, or you can reach out at Listen to the Middle dot com. You can also sign up for our free weekly newsletter and check out our new Middle merch shop. And I think I'm gonna do the same thing there, where every dollar that comes in goes back into the show.
The Middle is brought to you by Longnook Media, distributed by Illinois Public Media in Urbana, Illinois, and produced by Harrison Patino, Danny Alexander, Sam Burmis-Daws, John Barthonicadessler, and Brandon Condritz. Our technical director is Jason Croft. Thanks to our satellite radio listeners, our podcast audience, and the more than four hundred and twenty public radio stations that are making it possible for people across the country to listen to The Middle. I'm Jeremy Hobson.
I'll talk to you next week.