Last weekend, OpenAI reportedly embedded ChatGPT across systems used by three million U.S. defence personnel, including those for targeting and surveillance. Months earlier, the company had quietly removed its ban on military use.
Meanwhile, Anthropic refused to support autonomous weapons or mass surveillance and was labelled a national security concern - yet its AI, Claude, was reportedly used in U.S. strikes against Iran. With ChatGPT holding about 81 per cent of the global market, concerns are growing that everyday business, creative work and private conversations are helping train technology now linked to military systems.
Here in Australia, the government spent 15 months and $188,000 assembling an AI advisory body - only to scrap it, raising questions about whether regulation is keeping pace.
Sue Barrett, Human-AI Collaboration Advocate and founder of Democracy Watch AU, tells Mike she has spent two years warning that unregulated AI would serve power rather than people.
She says she has now stopped using ChatGPT and believes Australians deserve to know where their data goes - and to have a say in the decision.

Overnights with Mike Jeffreys - Monday 9th March
2:40:11

Wake Up with Mike Jeffreys - Monday 9th March
56:05
