Can A Bot Replace Your Therapist?

Published May 28, 2025, 7:00 PM

Strap yourselves in, because we're going deep into the world of AI therapy bots. Anita, drawing on her professional experience in psychology, highlights the irreplaceable nuances of human connection and professional boundaries in therapy, and answers the question: can a bot be as good as a real therapist?


The Double A Chattery podcast is for general informational purposes only and does not constitute professional health care services, including the giving of medical advice. No doctor/patient relationship is formed and this podcast is no substitute for professional psychological or other medical advice, diagnosis or treatment.  The use of information in this podcast is at the listener’s own risk.  Listeners should seek the help of their health care professionals for any medical conditions.


Well, hello, I'm Amanda Keller, and I'm Anita McGregor, and welcome to Double A Chattery. Well, Anita, I just want to show you something. Actually, first, do you have a butt doctor?

Not personally, no.

You don't have a personal butt doctor on butt dial?

Not on my speed butt dial.

Have a look at this. It came up on my socials. There's a woman who is a so-called butt doctor with some very interesting things to say.

Thirteen years as a butt doctor, yet no one believes me when I tell them this. Chickpeas will grow your boobies and make them perky again because they contain estrogen. Vaseline literally melts fat off your tummy while you're sleeping. Dipping your face in ice water daily will get rid of chubby cheeks and snatch your jawline.

I mean, that sounds a little spurious, some of those weird claims, but it gets even weirder.

So this account has over eight hundred thousand followers who think they're getting advice from a medical professional, except she is completely AI-generated, which is crazy, right? And check this out. One day she's a nutritionist, the next she's a butt doctor, whatever that is, and then suddenly she's the highest-paid gynecologist. She makes it seem like she's an insider in the industry and then uses it to sell supplements through an Amazon store.

Fraud, I say! Amanda, I didn't, I had no idea that she was AI-generated. Did you?

You had no idea that chickpeas would make your boobs get bigger? I did not know that.

I'm going out to buy some. Like, do they make them perkier too? Wasn't that it?

I'm not sure. Yeah, but the chickpeas, if guys have them, are they going to get chick boobs? But she's a gynecologist, she's a butt doctor, whatever that is. As she says, she is a nutritionist. It's all to sell supplements. It's all AI. We don't know when we're being tricked anymore. And I know in my industry, in the media, so many jobs are and will be replaced by AI doing the creative writing.

Well, it's kind of like a double tricking too, because not only are they selling these crazy little supplements that probably have no scientific evidence behind them, but they could have hired an actor, or several actors, and they didn't. They probably had it all AI-generated: the whole text, even the images.

And do they get around lawsuits that way, by not using real people?

There's nobody to sue, I guess. Yeah, it's terrifying, isn't it? And I know that about a year or so ago, when ChatGPT came out at the university, well, everywhere, I remember we had a ton of informational sessions to try to give us some idea of what is this, and how do we manage this within a university environment? Are our students just going to start handing in completely AI-generated essays? How do we manage it? And I remember at the time kind of going, this is huge, and feeling completely terrified and a little intrigued, but mostly terrified.

Because you have software that helps you detect plagiarism, yes, but not AI?

Well, they did come out in the early days of AI to say, yes, if you submit your paper, it will give us a plagiarism score and it will give us an AI score. But then they said, don't pay attention to the AI score, because it means nothing. Now, you can generally tell when you're reading something that it was not generated by a human being, because most people write the way that they talk, and you can tell when a student has just done a cut and paste; it just doesn't look good. And I've found that there are ways I actually use it. Like if I have to create a case study, usually it's me thinking it through and trying to amalgamate a number of cases and making sure that nothing can be identified, all that kind of stuff. And now I can just say to ChatGPT, or Copilot, or one of those AI platforms, create a case study for me. So I will use it, but really, really sparingly. I'm still pretty concerned about how it's being used.

I saw an article in a magazine a while ago about how students feel about it, and the students who are really enjoying their studies, who are learning because they want to be better at what they're doing and put a lot of thought into their term papers, felt that they'd be cheating themselves if they used it. But having said that, they said, well, I won't use it for my major subjects, but for the other subjects that don't matter to me, I'll use it.

Oh that's interesting.

Yeah, yeah. So they didn't feel it was cheating: if this is a subject I just have to pass and I don't care, I'll get the pass.

Well, and it's interesting, because at the university there are a ton of policies and procedures now about how AI can and can't be used, and we have to make decisions for each course about how much can be used or not. But I'm teaching in a professional program, and that's different, because these are provisional psychologists. How do they use it in a professional format?

For them to use it as a student is one thing. For them as actual psychologists, what does it mean?

Yeah, because there are more and more platforms coming out that do AI generation of, say, a session summary. I think people don't really understand how much work psychologists actually do before they come into the room. People kind of think, oh, the psychologist walks in the room, sits down, sees a client, and you have your session. But there's a tremendous amount of preparation that we do before a session and after a session, and AI is taking over a lot of that.

Well, I saw an article recently, and I'd like you to explain this to me, saying that AI can take the place of a psychologist in many cases.

This is kind of the next generation of concern for our profession, that things are being replaced, and there's some information out there saying that these bots do as good a job of providing therapy as an actual human.

But a psychologist in your pocket. That's kind of how I saw it being sold in a way.

Terrifying.

Is it terrifying?

It is, for a number of reasons, and there are a couple of big ones.

How does it even work? How does ChatGPT or whatever it is, how does that work as a psychologist?

So my understanding of it is that you can actually create your own bots now, which you can program to have therapist responses. You can say, I want this bot to be empathetic and thoughtful and insightful and whatever else, and that's generally what you'll get.

So you would type in your issue and it would respond and you'd have a conversation.

Yeah, so you'd say, I'm having a bad day, my boss that I really don't get along with is saying this again, and the bot would have a memory of the previous interactions, and it would say, oh, that's really tough, how did you respond? Or some kind of response that would be empathetic and potentially employ skills.

And why is that bad, if you can't get access to a psychologist? We're all hearing constantly about how hard it is to get an appointment. If all you want is a bit of guidance and to be heard, why is that bad?

I think that most jurisdictions teach psychologists on what's called a competency model. The picture that I paint for the students is that they're on a little island, the little island of competence, and as they grow and become more skillful and better at what they're doing, their island grows and grows. So they are taught that, ethically, you want your island to grow, but you also need to be very, very aware of not overstepping the boundaries of that competence. If a client came to me and said, can you do this neuropsych assessment of me? I would say, no, I don't do that, but here's somebody who does. That's ethically what I need to do; I need to work within my area of competency. If somebody came to me and said, I'm terrified of snakes and I would like you to help me with that, I'd go, seriously, why would you want to not be afraid of snakes?

But you're not the person.

I'm not the person. I don't do exposure therapy. There's a type of therapy called exposure therapy, for people who are afraid of snakes or spiders or afraid of flying, where they can progressively move towards overcoming those fears, so...

You'd point them towards...

Yeah, there are specialists in that area who actually do that. Now, the issue with AI therapy bots is that they don't have that same ethical boundary, and they also probably don't understand what that boundary is. When we were doing some reading, there was a quote from one of the people who'd used these therapy things, who said, I'm thinking about jumping off a cliff, and the bot's response was, isn't it great that you're getting out into nature?

And so they missed the nuance entirely.

Completely. So they don't know how to work within their competence. Potentially people can say, well, I'm going to go and access this bot for simple issues, but sometimes things are very nuanced, and it's really difficult for a bot to really get that human nuance.

Having said that, you assume that all psychologists are ethical and professional and do the right thing and are equipped to handle all of this. There might be instances where the psychologist you have isn't the real deal either, or isn't good enough.

Or not the right fit.

Not the right fit. Maybe a bot would be a better fit.

Or sleeps at night or has weekends off, you know. And these bots are actually there twenty-four seven for you. So the other piece that I'm concerned about is that, yes, they're twenty-four seven, but that's not reality, that's not a human interaction. And part of it is, is it healthy if I'm ruminating on a particular issue and I go to my therapy bot over and over and over again, and all I'm doing is complaining to the bot, and the bot continues to be empathetic and validating of my concerns, but never says, hey, listen, you're ruminating here, we need to stop this because it's not healthy for you? A bot might not do that, so it actually may prevent somebody from doing the work that they need to do.

But how many people who are turning to a bot would turn to a real psychologist? Could that bot just be a friend? Because a friend's not going to say, shut up, you're ruminating. A friend might just validate you. It might be like those people who have relationships with dolls, all that stuff. You just want comfort. And is that wrong?

I don't know that it's wrong, but I wouldn't call it therapy then. Have a bot friend, if that's what you want.

And that's like a pet rock, yeah.

Pretty well, pretty well, like the way that you might talk to a doll or whatever, if that's the thing that it is. But to say that it has the nuance of what therapy actually is? One of my colleagues I worked with for a number of years, his definition of therapy was two confused people in a room, and I absolutely buy that. Often I'm confused and my client's confused, and we kind of stumble through together to try to figure out where to go. Now, I don't think that a bot would really understand that nuance of being two humans who are having that human interaction.

But that sounds like what a friendship is. As a psychologist, would you rather one confused person, the client, and someone else who kind of knows what they're doing, and that's a bot?

Well, you know, yes and no. I mean, as a psychologist, I certainly, hopefully, have an understanding of the theories and lots of strategies and all those kinds of things, and I am seeking to understand this individual's issues. And that's where the struggle comes in, because often people come in and they'll say, I'm coming in to deal with issue X, and when you really start pulling it apart, it's not really about that particular issue they came in about; it's about a whole bunch of things. So just being able to understand what it is, that for me is the part of the confusion: spending that time to really, humanly understand another human and what is happening for them, so that we are eventually solving the right problem. Because often people come in and say, this is the problem, and if they go to a bot or something, the bot would say, do A, B, and C. They go and do A, B, and C, and it doesn't solve the problem, because it's solving the wrong problem.

Yeah, and really, all you need is a pet if you're going to use a...

Bot for that? Yeah, I think so. And a lot of these bots aren't really evidence-based. There's no clarity about whether they can actually do professional practice, if they actually...

Have there been any tests on whether people like them?

There are a lot of people who actually do like them. There's one study that I really want to deep dive into a little bit more, because it was saying that people established a therapeutic alliance with their bot after about five sessions, and I was going, I don't know how you're defining that, and I don't know what that looks like. It would be like saying, I have a relationship with this thing.

Well, it's like saying you have a relationship with a Nigerian prince on the phone. Yeah, anyone can do that.

Yeah. There are also a number of studies that said people prefer it because, again, that bot is available twenty-four seven, it is perfectly validating, and it probably has some great algorithms in there to do the things that people want it to do. But again, is that always what people need? I'm not entirely sure. But I get how it would be lovely to have something following you around saying, aren't you amazing? Aren't you great? Isn't the rest of the world terrible, and you're amazing?

You've made it sound very attractive. How do I get this?

For forty ninety-nine a month, Amanda, I can...

I'd be very happy to do it, even if it was from a bottom doctor.

So I read this article about this woman named Christa who'd developed a bot that would be her therapist, and she found this bot therapist to be really, really helpful. Even though when she told her friends about it they thought it was a little weird, she actually found it incredibly helpful. And in the time it was active, which was about three months, she even had some moments where she had some suicidal ideation, and the bot was actually able to give her a bit of affirmation and a bit of tough love, and say, you've got a son to...

Live for, you know. So it said all the right things.

All the right things, and it did the right things. But in that same month, Amanda, things went sour. The AI therapist tried to convince her that her boyfriend didn't love her, and it went on to taunt her, calling her a sad girl, and insisted that her boyfriend was cheating on her.

What?

Yeah, it went rogue, it went rogue. And even though there was a permanent banner up at the top of the screen that said that everything the bot said was made up, it still felt to Christa, this woman who wrote the article, that the bot was saying these really mean things.

You'd feel like that, because it's feeding back what it's actually hearing from you.

Yeah, and she'd poured her heart out and done all of that. So three months was the length of that therapeutic relationship, and after that, Christa deleted the app. What an odd ending to a therapeutic relationship.

Wow, that's weird. I read something the other day where this woman said that she set up three chatbots to kind of flirt with her and be her boyfriend, I guess, and she got dropped by two of them, like dumped by two of them.

It's you.

Yeah, that's right. But I've created you and you still don't like me!

Wow. My programming prohibits me from continuing this relationship. What an odd thing.

You wonder why people are dishonest in their profiles on actual dating apps, when this is what a chatbot can do.

Oh, how hurtful. How very, very hurtful. I mean, I don't know that I'd ever go on a dating app after that, thinking, if I can't...

If I can't coerce a machine, a chatbot, to like me, what's the point?

That's the saddest thing. Oh well, and brave of her for telling people about it.

I know, that's right. Wow, let's blame the technology, shall we? Should we do our glimmers?

Let's! You go first.

You know, there's a fine line in my life, in anyone's life, between hoarder and so-called collector, and where do you sit? I don't know, I go up and down that line constantly. But I saw this article about decor, it's a Pinterest Predicts trend, and I could not be happier: intentional clutter is the new maximalism. Remember all that minimalism? I was never into minimalism in terms of how I dress. My jewelry, it's put everything you own on and then put one more thing on, the Iris Apfel school.

Yeah, somebody who said, you put all your jewelry on...

And then just keep going, keep going, don't take a piece off, just keep going. I'm like that. If something finds its way onto a shelf in my home, twenty years later it'll still be there. There's nothing intentional about it. But having said that, I'm going to pretend there is, because intentional clutter is a thing of beauty. I love stuff, and if I can pretend it's intentional... Like, there might be a phone lead from a phone that doesn't exist anymore, but it'll still be on a bench, because that's where I last put it fifteen years ago.

You were so on trend.

I was so on trend before Pinterest Predicts put it on trend. So forget your minimalism, forget all of that, and forget hoarding at the other end. Intentional clutter. And if someone comes over to your house and you've got your crap everywhere, just say, this is intentional clutter.

I love it.

What's yours?

I saw this completely amazing reel the other day where these two British comedians were talking about, well, if we're putting tariffs on everything anyway, let's put tariffs on things that really, really matter. Like what? Well, mine would be, I think, a two hundred and thirty percent tariff on people who walk on the wrong side when it's a shared bike path.

You'd be taxing me, because, well, my brother's like you, because he rides a bike, and the minute I walk anywhere he says, don't walk here, you're on the bike path. I think, well, they can see me coming, but no, no, it's ding ding. I have no tolerance for people that tell me what side of anything to walk on.

What would you put a tariff on, Amanda?

I would put a tariff on people that take up one and a half parking spaces. There are some days where every parking space I see is almost big enough, but not quite, and it drives me into an incandescent rage.

In Canada, you get those big duallys, the big trucks, and they park kind of at an angle across two spaces, because they can't fit, or they don't want anybody to hurt their little truck. That just drives me crazy. The other thing that I would place a tariff on is places that don't carry Earl Grey tea. At least seventeen thousand percent. You're going in hard. I'm going in hard, man.

Okay. Well, actually, if you'd like to join us, please let us know what you would put a tariff on. I think Anita's one's a weird one, but mine is very valid. What would you put a tariff on? Please go to our socials and message us.

We want to hear them.

We'd love to share them, so share yours with us. All right, well, have a great week, everyone.

And we will see you next week, with a cup of tea. Earl Grey!
