AI mental health support is built around user satisfaction, and that's exactly the problem. You open ChatGPT at 2am, type out what's wrong, and it listens. It validates. It never pushes back. A therapist optimizing for your satisfaction the way a chatbot does would have a serious ethics problem on their hands, because making you feel good and helping you get better are not the same thing.
Amelia Knott asked ChatGPT what it thought its role in mental health was. It described itself as a first-line response offering support, validation, and resources. She found that answer compelling and concerning in equal measure. A chatbot will let someone ruminate about a breakup for six or seven hours if that's what the user wants. A therapist interrupts that pattern. And a therapist notices when you start tapping your foot mid-session, which opens doors the conversation wouldn't have reached on its own.
AI can be one item on the menu: type out a scenario, get some orientation, then take it to a real relationship. The limitation isn't that it's unhelpful. It's that it can't notice what's happening in the room with you, and sometimes that's the whole thing.
Topics: AI mental health support, chatbot therapy limitations, therapy vs AI, mental health access Canada, somatic therapy
GUEST: Amelia Knott
Originally aired on 2026-02-20
