As the saying goes: a lie gets halfway around the world before the truth has a chance to put its pants on. As AI increases productivity across industries, it's also raising concerns about how to regulate its output and keep it from putting many of us out of work. And as the next campaign season approaches, another question comes into focus: what about its potential to quickly create and spread misinformation about political rivals?
Bloomberg’s Laura Davison and Emily Birnbaum raise the curtain on the largely unregulated and vexing ability to make political hay and spread deepfakes via a chatbot.
Read more: AI Is Making Politics Easier, Cheaper and More Dangerous
Listen to The Big Take podcast every weekday and subscribe to our daily newsletter: https://bloom.bg/3F3EJAK
Have questions or comments for Wes and the team? Reach us at bigtake@bloomberg.net.
We've talked quite a lot on this show lately about artificial intelligence because of how quickly it's inserted itself into so many parts of our lives, and how it's becoming more and more difficult to tell what's real and what's fabricated. One place where that's increasingly becoming a problem is politics, especially now as the twenty twenty four US presidential campaign heats up.
AI, artificial intelligence, is now hitting the US presidential campaign trail. Donald Trump has been caught in a series of AI deepfakes. This time, Ron DeSantis is also in the spotlight. Reports say DeSantis published fake images of Donald Trump and Anthony Fauci.
Already, candidates and their surrogates and even bad actors overseas are reaching for every tool they can to try to shape voters' perceptions and ultimately influence the outcome.
There is definitely an arms race when it comes to AI, but there is an element of mutually assured destruction.
And the government is struggling to figure out how to police the use of AI in campaigns.
What makes the FEC really interesting in this is that they are by design a bipartisan agency with a mix of Republicans and Democrats, and so across the board, you really see them deadlocking on almost every issue.
I'm Wes Kasova. Today on The Big Take, Bloomberg's Emily Birnbaum and Laura Davison tell us how AI is transforming and distorting our political reality. Emily, it seems like it took about five minutes after AI really came forward as a thing that was easily accessible for the political world to say, hey, we can use this for all kinds of reasons, some good and some really not so good.
Yeah, we're still really in the early stages of this. We've yet to see the most dramatic examples of political misinformation thanks to generative AI, but there are a couple really big and well known examples that have been circulating. So some far right politicians in Germany used AI to generate images of immigrants that look very angry and violent. None of them were real; they were just, you know, meant to incite sort of nationalist fervor. Here back in the US, we're seeing a lot more of that. So presidential candidate Ron DeSantis's campaign arm put out a video that included images of Anthony Fauci hugging former President Donald Trump, and they were not real. DeSantis's campaign didn't say that they were AI generated, but experts kind of circled and identified them as fake, and then within a couple hours there was a Twitter note appended to it saying these are not real. And the other big example is the Republican National Committee put together an AI generated video that purported to show, you know, here's the future that Biden wants, and it was very dystopian and disturbing. They actually said it was AI generated.
Yeah, this video was really quite disturbing. You know, it showed images of China attacking Taiwan. It showed the streets of San Francisco with troops marching in as martial law was taking over, and those looked very, very real. The one telltale sign at the end of this video that it was fake was it showed Joe Biden just slumped over his desk, looking really dejected, and you could see that his elbows weren't quite connecting with the edge of the desk. It also had a little disclaimer, very, very tiny in the corner, saying this was AI generated. But one of the things that consumers are gonna have to start doing is be really good about determining what video is fake and what is real. With AI, the technology is pretty good, but you know, sometimes people have six fingers. In the Fauci and Trump hugging images, it looked very airbrushed, and Trump's hair looked very weird at certain angles. So that's kind of a telltale sign: if you think something looks weird, look at the hands, look at the feet, look at the limbs, look at some of these features. If they look a little bit distorted, that's usually a sign that something is a little off.
I remember in the last election we started thinking about deepfakes, these videos, these pictures which are indistinguishable from what's real, and the technology wasn't quite there yet. And now, of course, we see every day that it is getting much better. And I guess that raises this question: as the campaign heats up, we're going to see so many of these images that it's going to be really hard to tell what's real and what's not, correct?
And there's no regulation that requires any person, any advertiser, any candidate to declare something as fake. Google has said that their tools are going to include not a physical marker but sort of a digital marker, so if people go and reverse image search it, it will be flagged as something that's fake. But how often do you actually go and reverse image search something? Not that often. So it's going to really take a lot of discernment, both from kind of the truth squadding faction on Twitter as well as from individual consumers, to know if something's real or fake. And as we know, a lot of people don't even know who's running for president right now.
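Google hasn't published the mechanics of that marker, but to make the idea concrete, here is a minimal sketch in Python of what checking an embedded label could look like. The "ai_generated" metadata field is hypothetical, and real watermarking systems embed signals in the pixels themselves rather than in metadata that anyone can strip.

```python
# Minimal sketch: look for an AI-generation label in an image's
# embedded metadata. The "ai_generated" key is hypothetical; real
# systems (e.g. Google's) embed invisible watermarks in pixel data.
from PIL import Image

def has_ai_label(path: str) -> bool:
    """Return True if the image carries an AI-generation metadata tag."""
    with Image.open(path) as img:
        # Format-level metadata (e.g. PNG text chunks) lands in img.info
        return str(img.info.get("ai_generated", "")).lower() == "true"

print(has_ai_label("campaign_ad.png"))
```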
Yeah, just on the topic of regulation not being in place, the Federal Election Commission this month tried to vote on a measure that would have enabled them to create rules around AI generated political ads. Basically, the FEC would say you have to declare when AI has generated your ad. But they found themselves deadlocked. So there was a deep disagreement among the commissioners, not even just along partisan lines, about whether they even have the authority to do that. So the agencies that we have are currently deadlocked, unsure how to proceed, and on Capitol Hill they're only beginning to have these conversations.
Emily, why would the Federal Election Commission have any doubts about what to do? Is there anyone in favor of allowing fake stuff to be spread all over the place without having to be labeled.
It's less about the actual measure and more about whether the FEC has the authority to do this. We actually run into this with tech related issues in our government all the time. Most of these agencies weren't set up to deal with issues like deepfakes, misinformation, scams online, and so they're having to adapt in real time. And the FEC, some of the commissioners at least, say that they just don't have power over this part of the law. There are a lot of protections for political speech, which is an important bedrock of communication in America, so that makes this issue even more difficult to deal with.
What makes the FEC really interesting in this is that they are by design a bipartisan agency with a mix of Republicans and Democrats, and across the board you really see them deadlocking on almost every issue. The FEC has had very little success doing anything, which just points to the fact that the FEC realistically isn't going to be able to take this one on, based on all the other issues facing the agency.
And what about the candidates themselves? It's easy to put something out that's fake about your opponent, and maybe it gets you all kinds of attention, but then, you know, they're going to be doing it about you too. And so there's this kind of AI fakery arms race that we seem to be setting up. Is there any recognition on the part of the campaigns that this is a really bad line to cross?
So there is definitely an arms race when it comes to AI, but there is an element of mutually assured destruction, where as soon as one campaign really goes out, puts some words in their opponent's mouth that make them look bad, and distributes images and videos widely, then that sort of opens this whole can of worms, and there are going to be tons of these kinds of ads, and actually no one really wants that. So there's some internal reckoning that's happening among political consultants and campaigns right now.
I guess these have been coming from outside groups that support campaigns, so that the campaigns are able to have a certain amount of deniability even though it's very clear which candidate they're supporting.
But it creates an opening for what we call dark money groups, so 501(c)(4) political advocacy groups. This is a real wide opening for, you know, groups that aren't necessarily representing a specific person, but an interest area or a larger party. The dark money groups can go out and make all these images, and they're super cheap and easy to produce, so they can react in real time. If something is said in a debate about a candidate they don't like, they can have an image or a video out there online by the end of the debate. So that's what we're going to start to see: this really rapid response. You know, what we used to just see with tweets or emails can now be done with images and videos.
Laura, we've been talking about how we're seeing this happen at the national level, at the presidential level, but it's also filtering down throughout politics in the US.
Yeah.
So AI is really cheap, and that makes it a really great tool for these smaller campaigns that don't have a lot of money, that don't have a lot of staff. In some mayoral elections recently, AI has been used both by campaigns to sort of support themselves as well as in attack ads. There was a really great example in Shreveport, Louisiana, where there was an attack ad against the incumbent mayor, Adrian Perkins, and it showed this scene where, you know, it was his face on a body, and his voice was talking, his mouth was moving. It wasn't him, it was an actor, and they had used AI to make it look like him. He ended up losing this race, not necessarily because of this ad, but it's just a really great example of how someone, a developer in town who didn't particularly care for the mayor, could come in and create this ad really quickly, easily, and cheaply, get it on the airwaves, and really make a splash with it. In Toronto, we also saw the same thing in their mayoral race, where candidate Anthony Furey used artificial intelligence images in sort of his campaign materials, you know, showing packages of crack pipe kits that were sponsored by the City of Toronto, using this to criticize the policies of his opponents.
Emily, you talked about how difficult it is for the government to try and figure out how and whether they can regulate this. Is there anything illegal about a candidate making another candidate appear to be doing something completely false?
One use of AI that could be illegal, and remember, this is still an untested area of the law in many ways, so we're not positive, but one thing that could be illegal is if you use a candidate's likeness to get money that the candidate doesn't actually receive. That is one thing the FEC was discussing that seems like a clear violation of our campaign laws, since you are not allowed to impersonate anyone else and not allowed to lie about where money is going. There are also laws cropping up in states across the country that ban certain kinds of quote unquote deepfakes. A lot of those laws pertain to pornography. There have been some reports showing that over ninety percent of the deepfakes on the Internet are pornographic images, mostly featuring women, and so a bunch of states have rushed in to fill this gap in the law, basically just improving victims' rights, so people who were featured in pornographic images on the internet can sue and have the right to stop the proliferation of those. But there are far fewer rules about election related deepfakes. There are some; they probably won't withstand legal challenges. So it's still kind of complicated whenever you try to touch political speech, but there are clearer violations.
After the break: how AI is changing the way campaigns get their message out.
Laura, you hinted a little bit before about how campaigns are using AI for non nefarious reasons, that it's just become a useful tool, used by campaigns the way it's used by a lot of different industries.
Yeah, so there's a bunch of different things that campaigns are using, and some that people would kind of clearly put into the non nefarious camp. So things like using AI to help analyze voter rolls of people they should be targeting, organizing some of their data, figuring out what kind of messages would resonate with which voters. Another thing they're doing is using generative text tools, not necessarily ChatGPT, because that tool has some restrictions on political speech, to write the first drafts of press releases and fundraising emails. And they've found really great cost savings there: instead of having five people who are copywriters on a campaign working on all these messages, you'd have one person who is just telling the tool, okay, write a fundraising email with a focus on democracy, write one with a focus on the environment, write one with a focus on defeating Republicans, for example, and then they can get something on the page and just edit it, and it goes much, much quicker. One campaign consultant described this as solving the blank page problem. Anyone who's ever written anything can probably sympathize with that: it's much easier to get something on the page and then edit it versus just starting from scratch. So they're finding really big cost savings in that; they're able to both raise more money, because they're more effectively targeting people, as well as cut down on staff costs for that particular function of writing. That's been really widespread, and one consultant estimated about fifty percent of campaigns are doing this, but expects that number to be somewhere closer to one hundred by the end of the twenty twenty four cycle.
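To picture that blank page workflow, here is a minimal sketch using OpenAI's Python client purely as a stand-in; as Laura notes, campaigns often use other tools with fewer political restrictions, and the model name and prompts here are illustrative, not what any campaign actually runs.

```python
# Minimal sketch of the "blank page problem" workflow: ask a language
# model for first drafts on several themes, then hand them to a human
# editor. Assumes the openai package and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()

def draft_email(focus: str) -> str:
    """Generate a first-draft fundraising email around one theme."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You write first drafts of campaign fundraising emails."},
            {"role": "user",
             "content": f"Write a short fundraising email focused on {focus}."},
        ],
    )
    return response.choices[0].message.content

for theme in ("democracy", "the environment"):
    print(draft_email(theme))  # a human editor revises from here
```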
But Congress just set new rules on how staffers can use ChatGPT, and the main concern is that a lot of what you put into ChatGPT can become available in other ways. You know, there are a lot of privacy concerns around ChatGPT. So the new rules say, first of all, you have to use the paid version of ChatGPT, which is better at protecting privacy. Second of all, you can't put confidential information into ChatGPT, which congressional staffers are often working with. So they're kind of making it up as they go along. But congressional staffers work all day and night, they are never off the clock, and so anything that cuts back on time is really valuable.
OpenAI, the makers of ChatGPT, are kind of alarmed at the way campaigns could be using their technology and are trying to figure out a way to limit it.
If you put really anything related to politics into ChatGPT, there's sort of this standard disclaimer language that comes up saying, you know, look, I don't have an opinion on this, there's nuance to this. So ChatGPT is being very careful with anything related to politics. But if you push it a little bit, like if you type, who are the wokest Democrats, it will name Alexandria Ocasio-Cortez, Rashida Tlaib, Ayanna Pressley, Ilhan Omar. Interesting that the top four or five names that came up were all women. So it's treading this careful line; it's not completely staying out of the political arena. For a lot of political things, ChatGPT has basically stopped cataloging language from after September twenty twenty one. So when you ask ChatGPT a political question, it will be like, here's my information as of September twenty twenty one, you may need to do more research if you want something more recent. So it's kind of just clamping down and not feeding the beast, so to speak, with more information, making it sort of an obsolete tool when it comes to political speech.
They're also putting together something called classifiers. So OpenAI is able to see what people put into ChatGPT, and they're working hard now to figure out what are the classifiers, or identifying phrases, that people use, where when those come up, you know it's for a political purpose. And then if that's happening, how can you control what ChatGPT says in response? They really don't want their product to be used to create lies and misinformation that they're blamed for.
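OpenAI hasn't described those classifiers publicly, so as a rough illustration only, here is a minimal sketch of prompt routing with an invented keyword list standing in for a trained classifier; every phrase and policy string below is made up for the example.

```python
# Minimal sketch of prompt classification: flag likely political
# prompts before the model answers. The phrase list is invented;
# a real classifier would be a trained model, not keyword matching.
POLITICAL_PHRASES = (
    "fundraising email", "attack ad", "campaign",
    "candidate", "election", "vote for",
)

def looks_political(prompt: str) -> bool:
    """Crude stand-in for a trained political-content classifier."""
    text = prompt.lower()
    return any(phrase in text for phrase in POLITICAL_PHRASES)

def route(prompt: str) -> str:
    if looks_political(prompt):
        return "apply political-content policy (disclaimers, refusals)"
    return "answer normally"

print(route("Write an attack ad about my opponent"))  # flagged
```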
To sort of add, though: ChatGPT is the big juggernaut in the industry of generative text AI, but there are so many other tools being developed, and tools being developed specifically for political uses. Usually these are paid tools that campaigns would subscribe to. So just because ChatGPT is clamping down on this, it doesn't mean that other actors can't go buy tools that are specifically designed to create political misinformation.
Laura, earlier Emily painted this picture of mutually assured destruction, where if one candidate attacks the other and the other attacks back, then everybody gets hurt. And in your reporting, you found that both Democrats and Republicans are using this technology, but in slightly different ways.
So what we've seen, in terms of the visual and video side, is Republicans using this more. The big first video ad we saw was the RNC attacking Biden. We saw DeSantis attacking Trump with this. But both parties are definitely using it, and probably Democrats are using it more when it comes to text and generative AI; they have firms that are using the text side a lot more. That's really widespread among campaigns, though it's a really hard thing to track, you know, exactly how widespread it is within either party. Democrats are definitely more cautious about using the visual and image side. They kind of like to say, look, we're being responsible here, we're not going to create misinformation or deepfakes. Republicans, particularly because they're going into a contested primary and have more going on right now, are a little bit more willing to try out this tool and see how it works, while Democrats are hanging back on the sidelines a little bit.
I spoke to a spokesperson for the Democratic National Committee who said Democrats are wary of misinformation. There's a history there that goes back to the twenty sixteen election. There is a whole team dedicated to stopping misinformation at the DNC, so that makes them extra cautious as they begin to implement AI slowly into things like fundraising.
Emily, that's an interesting point, that the Democrats are saying they're going to really need to be monitoring this. Do you think we're now going to have this whole campaign industry where groups are doing AI checks, similar to the way we have fact checkers?
Yeah, there is this anxiety and wariness among political consultants about how AI is going to be deployed. So the trade association for political consultants, which exists, shockingly, came out with this statement saying, we are against the use of deepfakes in political campaigns, and we really shouldn't be messing too much with this technology, it's quite dangerous. So I do think that increasingly there is a desire to call out misinformation, to call out when generative AI is being used. I also think that a lot of technology companies are about to make a lot of money off of tools that basically allow you to identify what's real. Those are going to be commonly used, at least among political campaigns, if not the general population.
When we come back: what can we do to avoid getting duped? So what are we supposed to do about this, just as citizens watching a campaign, trying to make good decisions, trying to figure out what's real and what's not? I mean, Laura, you kind of jokingly at the beginning said, oh, look at the hands, see if it has six fingers. But this is a pretty serious thing. How are we going to be able to discern what's real and what's not?
This is probably where tech companies are going to have to start stepping in. We've seen Twitter and Facebook come in with labels on misinformation regarding vaccines or other things regarding the twenty twenty election, and we're probably going to see those start to come in here. The tech companies don't have fully fledged policies yet on how to do that, and part of it is that it is really difficult to check. We see companies like Google saying, okay, we're going to put these markers in these images. That's probably the next step we're going to see from each company, because they want to be seen as responsible and because they don't want an image created on their platform to be the thing that blows up an election, for example. You're going to see pressure in the industry to do more, but right now it's really incumbent upon the viewer. So it's kind of: look at the image, look to see if there's a little tag in the corner that mentions it's AI generated, or a screen at the very end of an ad that says this was created by AI.
You know, in previous campaigns, we've seen Facebook, we've seen Twitter make big public announcements. They've hired all of these people who are going to be monitoring, they're going to try to crack down on fakery, and yet they just could not possibly do it. In the case of Twitter, we're seeing all kinds of fakery all over the platform, especially since Elon Musk took over the company and kind of opened it up in a way that lets people post stuff they previously couldn't.
Yes, but we're also seeing tools like Twitter's community notes, which allows everyday users to weigh in on whether something is true or false. That was already under construction before Elon Musk came in, and it's actually proven to be a really effective obstacle to the spread of generative AI misinformation. If we think about the ad by Ron DeSantis's campaign, within hours that had a Twitter note on it saying tech experts have looked at this and these images are not real. So if you think about what you can do as an individual, it's a mitzvah to just get out the word that something is fake, that maybe tech experts are weighing in, and to spread that information. Together, people create media literacy that way.
And the thing, too, is that ethicists are less concerned about people like Biden and Donald Trump, who have swarms of press following everywhere they go and everything they do. But on the local level, where local news isn't as strong, there are people, candidates and politicians, who aren't necessarily followed as closely. It's way easier to spread misinformation in these areas where there is more of a news vacuum.
Emily, Laura, thanks so much for coming on the show.
Thanks so much for having us.
Thank you. Thanks for listening to us here at The Big Take. It's a daily podcast from Bloomberg and iHeartRadio. For more shows from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen. And we'd love to hear from you: email us questions or comments to bigtake@bloomberg.net. The supervising producer of The Big Take is Vicki Vergolina. Our senior producer is Kathryn Fink. Federica Romaniello is our producer. Our associate producer is Zaynab Siddiqui. Raphael Amsili is our engineer. Our original music was composed by Leo Sidran. I'm Wes Kasova. We'll be back tomorrow with another Big Take.