Fake news detection just got a harder test: the cartels raised the bar. You're watching Mexico unravel in your feed and the images are terrifying: skylines on fire, planes on tarmacs reduced to ash, buildings exploding. But those planes weren't burning at that airport. Those images were AI-generated by the cartels themselves, seeded deliberately to amplify fear during real violence. You couldn't tell. Almost nobody could.
That's not a personal failure, it's a systemic one. An Ipsos poll of 25,000 people found 90% of Canadians have fallen for fake news, even though only 10% believe they have. Math turns out to be a surprisingly useful filter: when a post claims a million people attended a protest in a small town square, you can run the numbers on whether that crowd is physically possible. University of Washington professors Carl Bergstrom and Jevin West built a whole framework around exactly this.
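The crowd-size check is a back-of-the-envelope calculation anyone can run. A minimal sketch, with assumed numbers: the 50 m × 50 m square and the density figure of roughly 5 people per square metre (a commonly cited upper limit for a tightly packed crowd) are illustrative, not from the episode.

```python
# Plausibility check: could the claimed crowd physically fit in the space?
# Assumption: a packed crowd tops out near 5 people per square metre.

def max_crowd(length_m: float, width_m: float, density_per_m2: float = 5.0) -> int:
    """Upper bound on how many people fit in a rectangular area."""
    return int(length_m * width_m * density_per_m2)

claimed = 1_000_000
capacity = max_crowd(50, 50)   # a hypothetical small town square
print(capacity)                # 12500
print(claimed // capacity)     # the claim overshoots by a factor of 80
```

If the claim exceeds the physical ceiling by an order of magnitude or more, the post fails the test before you ever check the source.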
This is a calibration problem, not a gullibility problem. The skeptic on the beach dismissing everything was as wrong as the accounts posting AI explosions. Buses burned. Stores were looted. The fear was real. Finding the middle ground requires building new habits for reading your feed.
Topics: fake news detection, AI-generated images, cartels Mexico, media manipulation, information literacy
Originally aired on 2026-02-25
