Surveying the AI Threat Landscape

Published Dec 22, 2023, 1:47 PM

Chester Wisniewski, Director, Global Field CTO at Sophos, discusses the use of artificial intelligence to spread misinformation.
Hosts: Tim Stenovec and Jennifer Ryan. Producer: Paul Brennan.

This is Bloomberg Businessweek with Carol Massar and Tim Stenovec on Bloomberg Radio.

Jennifer, I don't know if you saw this making the rounds on Twitter last week, but this video went viral.

It racked up more than a million views.

It purports to be from a company that calls itself, quote, "the world's first AI-powered news network." Now, I wasn't able to independently confirm that this video is indeed all AI, but it's still worth taking a listen to kind of see where we are in this day and age.

Check this out.

Hello, and welcome to Channel 1, a new way of consuming, reporting, and thinking about the news, powered by artificial intelligence, all presented by our team of AI-generated reporters.

We'll show you how AI powers and empowers our newsroom to deliver journalism that's fast, trustworthy, and accurate. Let's start with our reporters. You can hear us and see our lips moving, but no one was recorded saying what we're all saying, powered by sophisticated systems behind the scenes.

Okay, so again, I was not able to independently confirm that this is indeed all AI, because if you were watching it, it looks like actual people. But there are some clips of the actual anchors, like, very quickly speaking different languages.

It's all pretty wild.

So depending on your view, it could either be really exciting or, if you react like me and everybody I showed this video to in the newsroom, it's pretty scary stuff. Let's see what our next guest has to say about all this, because he's got more than twenty-five years of experience on the front lines when it comes to cybersecurity, which in this day and age increasingly looks at threats posed by so-called deepfakes, AI, and misinformation. We've got Chester Wisniewski, Global Field CTO at Sophos. He joins us on Zoom from Vancouver.

Chester, good to have you with us this afternoon. How are you?

I'm well, good to be here.

It's getting close to the holiday times, so it's kind of an exciting time to get a chance to have a chat about all the things we're going to be seeing in our social media feeds the next few weeks.

Yeah.

The reason I brought that up is because this is something that I saw in my social media feed over the last couple of weeks, and again, I reached out to this company and I wasn't able to confirm whether or not it was all AI. But you got to see some of it, you got to listen to some of it. Is that the type of deepfake that we're seeing these days?

Well, I think that's on the sophisticated end of the spectrum. And it's important to remember that it probably wasn't created in real time. You know, it wasn't something that they could just whip up in thirty seconds, right? I mean, there was clearly some effort.

Very high, yeah.

Yeah. But I mean, on the other hand, like, this is how far this has come, right? If you have the resources to want to create realistic audio and video, and certainly, as we've all seen with ChatGPT in the last year, text, if you've got the resources, it is getting really close to the point where it's indiscernible from real content. I suspect that that entire video is all done in AI. While you were playing the clip in there, I was hearing some telltale things in the audio that my ear kind of knows to listen for. But it's pretty convincing, isn't it?

What were some of the telltale things to listen for? Because I must confess they passed me by completely.

Yeah, well, it's getting better all the time. Certainly when we're looking at AI-generated photos and even video, AI has historically had a pretty bad approach to creating straight lines. You'd think straight lines are the simplest thing for a computer to draw, ever, but the way AI is trained, it's actually really hard to teach it to make straight lines. And with human faces, it often makes mistakes, like one ear lobe being attached and the other one being detached, or too many teeth in the mouth. There are some telltale things it frequently messes up, too many fingers on a hand, for example. But these are all getting better all the time. And in the audio, the things I was listening for, there's a crispness to each word in the way that they're separated, and it almost had a slightly robotic tone when it's trying to inflect emotion that just feels a little bit off when I'm listening to it. Right? When I heard the anchor saying regular words, it sounded pretty plausible. But then if I listened really carefully when she was trying to inflect some emotion, it had a bit of a robotic, unnatural feel to it, still.

So, Chet, it all raises the question of how this stuff can be used for malicious purposes moving forward. I mean, my first thought is misinformation and disinformation, the idea that videos can be created that you just can't trust, whether that's of a world leader saying something or doing something, or of a celebrity saying something that just isn't real.

Yeah, and I think that's really the real problem here: this allows this content to be created at a scale that hasn't been possible before. Right? If you and I had been speaking last year at Christmas, or even the year before at Christmas, it would have been pretty unlikely that a commercial company would have been able to produce that video that you played a little clip of. And it's accelerating so fast now, and it's getting so hard to tell the good from the bad. It also means it can be done at a volume that humans would be incapable of creating. Right? When we think about a video like that, that's something that now can be done in hours and with a modest amount of money, whereas before, you know, it would have required a ridiculous amount of resources to even attempt to generate something that believable. So while you and I are not going to be able to create that newsroom clip tomorrow, maybe if we talk again a year from now, we might even be able to do that as individuals.

So this is going to be very challenging moving forward, because it's not something that's easy to disprove. So I think we need to be working with our media sources toward making sure the real content can be proven to be real, as a way of, you know, allowing us to more easily discern what's real and what isn't.

You know what? This is actually leading up to my next question, because, as you say, a year from now, who knows what the capabilities are going to be, but from what you're saying, they're going to be much better. And the thing that will have just happened before next Christmas, when we have this conversation again, is the US election, and it's already very contentious. And we've got to ask, for the average voter, for us journalists, for policymakers, how are we supposed to be able to tell what's real and what's fake as the election approaches? And what are the risks to a free and fair election if we're not able to really get on top of this problem straight away?

Well, I think we just need to get in a better habit of going to trusted, verifiable sources. Whether it's about a candidate or an election campaign, that means making sure we're getting the information from the campaign itself, so that we know it's real from that candidate. And you know, we've been seeing on television ads for years that "this ad was approved by" whoever the candidate is, right? There is an official channel for receiving that information so that we can verify it's definitely from a campaign or a party or that kind of thing. And of course our news sources, I mean, Bloomberg is a great example of that. But no matter what your politics are, there's a lot of trusted journalism out there, and going to those sources to verify a story before we repost things on social media, I think, is going to be a really important step, because trusting random accounts on X and Meta and TikTok is probably going to lead us in a dangerous direction.

Yeah, that is certainly true, and I'm not optimistic about the way that these platforms have allowed misinformation to flourish. Hey, Chet, I promised I'd ask you, as somebody who's been on the front lines of cybersecurity for twenty-five years, what worries you? What keeps you up at night?

Well, the quality of this stuff and the quantity that can be created is going to really blur the lines for a lot of people between, you know, reality and this misinformation, and a lot of us have gotten too comfortable. You know, we don't specifically study people trying to manipulate elections. We're worried about people being manipulated into scams, getting their money stolen from them, getting their computers infected with viruses, that kind of thing.

Right? And when we look at the things that people used to pay attention to, you know, they read their email and they look for the spelling mistakes, so, "this email doesn't really look like it's from Bank of America." All those telltale signs are out the window now, right? There's really no discernible difference between computer-generated content that a criminal can make tens of thousands of in minutes and the real thing that you're really getting from your financial institution or anybody else that you do commerce with. So I think us educating society to be more suspicious and to double-check things before believing them is going to be an important part of it. And more than that, I think a lot of it comes back on the industry itself, on the AI industry, and on us people in security, to build better tools to sort through this stuff so that the majority of what ends up in your inbox is signal, not noise.

Just real quick, thirty seconds: what do you think regulators and social media companies should be doing about this?

Yeah, that's challenging.

I mean, the Biden White House published an executive order at the end of October that covers a lot of AI usage and is moving toward marking AI-generated content with a watermark, certainly when it's produced by the federal government. That's encouraging. But of course, what I'm really worried about is people who aren't going to follow the law.

Right? If we're worried about Russians influencing our next election, they're not going to abide by an executive order from the White House to be good boys and good girls. That's just not going to fly, right? We have to, I think, invest our effort in proving authenticity rather than trying to prove content inauthentic.

Chester, really appreciate you taking the time. That's Chester Wisniewski, Global Field CTO at Sophos, joining us from Vancouver.