What kind of technology do air traffic controllers use? This week in the News Roundup, Oz and Karah discuss how AI determines your real age, why chatbots can lead to delusions and what to know about a familiar sounding blood-testing startup. On TechSupport, features writer at New York Magazine’s Intelligencer, James D. Walsh, explains how AI-fueled cheating has overtaken college campuses, what students are saying and how educators are trying to address it.
Welcome to Tech Stuff, a production of iHeart Podcasts and Kaleidoscope. I'm Oz Woloshyn, and today Karah Preiss and I will bring you the headlines this week, including the selfies that tell you how old you really are. Then, on Tech Support, we'll talk to New York Magazine's James D. Walsh about using AI to cheat your way through college.
And it quickly kind of dawned on me that everyone is cheating. They may not be using the word cheating, but they are cheating according to their honor code.
All of that on The Week in Tech. It's Friday, May sixteenth. Hello, Karah. Hi, Oz. So, Newark-adjacently, when was the last time you flew into Newark Airport?
You know that Newark is normally my secret weapon airport, but I haven't flown into Newark for quite some time.
Especially internationally. Actually, it is much faster getting through customs and border patrol. It is. However, it is not a good time to be flying in and out of New York right now. In the last few weeks, there have been three telecommunication failures at the air traffic control center that oversees the airport. The first outage late last month lasted over a minute.
Like, this is the stuff that makes me not want to ever get on an airplane.
Yeah, I mean, truly, think about it. This is when air traffic control has no contact at all with the planes in the sky, like none, none, and they're just hoping for that minute, fingers crossed, they don't crash into each other. And of course there are these compounding delays afterwards, because the poor air traffic controllers get so stressed by this, they have like PTSD.
Yeah, I've actually heard that air traffic control is one of the more stressful jobs that you can have in this kind of industry.
Especially when the systems are going dark and you have dozens of planes in midair that you can't communicate with.
And nobody else wants the job, so you're overworked.
Absolutely.
But why is this a tech story?
Well, good question. The outages are being blamed in part on systems that rely on old technologies.
Like what are we talking about?
Old like floppy-disk old, my child. Yeah. Basically, well, here's how a former air traffic controller described it: if you look at the technology we're using, most of it is from the late nineteen eighties to the nineteen nineties. We still use floppy disks to update our information system. We still have paper strips. I mean, this is not the eighties or nineties, this is ancient Egypt. We use paper strips that we walk around the tower cab with, that each controller writes something on and then hands to the next controller.
I was just thinking about, like, if Gen Alpha is like, you know what, I'm only going to hand in my homework on floppy disk now, because that's what we used to do when I was growing up. Like, you would upload your homework to a floppy disk and then bring it to class and print it at school.
Well, I heard that in Japan there's a new trend of fake digital cassette players. So it looks like a cassette player, but it's actually an MP3 player. But anyway, you know, I'm a big local news guy.
You are you are.
NJ dot Com reports that the technology at Newark is so outdated that when parts need to be replaced, the FAA has to source them from eBay.
You know, this is how Kim Kardashian used to buy her BlackBerrys.
She did. After they were discontinued, she
would go on eBay and buy like fifteen or twenty of them. Smartly. But she's also not working for the FAA. But in all seriousness, you know, the FAA is a federal agency. So what is the Trump administration doing about this?
Well, there is a three-year plan to build big, beautiful new air traffic control systems with high-speed network connections and fiber and wireless connections, you know. But the US obviously has real struggles with modernizing its infrastructure. I tried to find out, in preparing for this episode, whether or not they use floppy disks in Chinese airports. I couldn't get an answer, but my guess would be no.
Is that because you don't know the Chinese words for floppy disk?
That's probably what it is. And the reason I was attracted to this story is because we rarely think about technology in terms of, like, stuff from the eighties that's still taped together and keeping us relatively safe at thirty-eight thousand feet.
Yes, but as is evidenced by my continuous use and support of Apple's wired headphones, technology does not necessarily mean the future or even the present. Technology is very much the past.
The other thing that attracted me to this story was that I got to experience some rare British pride for the first time since the Spice Girls. I feel like that's not true. So The Financial Times reported this week that the British Airways CEO has hailed the quote game-changing effects of AI for cutting delays at the airline. They've invested one hundred million pounds in quote operational resilience, which includes using AI to suggest how to minimize passenger disruptions, like when to delay flights, when to cancel them, when to preemptively rebook passengers, and even which gates aircraft should land at to help passengers make tight connections. In the process, the national flag carrier has gone from one of the most delayed airlines in the world to one of the least.
Oh, good for them. Actually, I'd love to see some national pride for you, because I also love to see some support for those poor, poor schmucks at the back of the airplane who are like, I have a connection in seven seconds at Heathrow.
But there is something quite sort of reassuring about human nature, that most people do feel some sort of empathy for their fellow travelers. It's true, people do come together.
People go, go right ahead, go right ahead, unless you're pregnant, in which case we're gonna block you.
So, to change landing gears slightly. Tech is obviously a system that can govern outcomes for tens of thousands of people, and that's where you see it showing up, for example, in air traffic control or airline management. But the other side of tech that I find particularly fascinating is how it's becoming a way to make us legible, like our tech is making us readable to others, which is kind of fascinating and creepy. Scientists at Mass General Brigham in Boston have developed a new AI prediction tool that can identify a person's biological age just by analyzing a picture of their face.
To be clear, biological age versus real age: like, I'm thirty-five, but I might have a different biological age.
It's basically how old you are at a cellular level, based on the condition of your DNA, and it's different from what scientists call chronological age.
So someone could be forty years old that's their chronological age, and have a biological age of thirty five, which I'm assuming is you know, a sign of good health.
Yeah, exactly, to have a lower biological age.
That's right. And so there's an app for this. Guess what it's called. Dead or Not? Hot or Not? No, it's called FaceAge. Ugh. And what's kind of interesting is the way they trained it. So researchers gave FaceAge thousands of publicly available pictures of people over the age of sixty who were presumed to be healthy. Then they gave it pictures of cancer patients who were beginning radiotherapy treatment. On average, they found that someone going through these treatments has a biological age that's five years older than their chronological age, and the older the biological age is, according to FaceAge, the worse the survival outlook. This is according to an article in the Washington Post.
And why would anyone want to know this?
Well, good question. It is not just curiosity. The Post explained how the tech could actually be a life-saving tool, because it can be useful in predicting tolerance for cancer treatments, something doctors are obviously constantly grappling with. In one case in the article, a doctor had an eighty-six-year-old patient who'd received a terminal lung cancer diagnosis, and the doctor was hesitating over whether or not to recommend treatment because of the patient's advanced age. But according to the doctor, quote, he looked younger than eighty-six to me, and based on the eyeball test and a host of other factors, I decided to treat him with aggressive radiation therapy. The patient survived and is now ninety years old.
So just humor me here: what does this have to do with FaceAge?
Well, it's kind of like the eyeball test in the digital world, right? And the doctor actually went back and scanned an old photo of his patient using FaceAge and discovered that the app basically post facto endorsed the assessment. The patient's biological age was ten years younger than his chronological age, i.e., he was biologically seventy-six when he started treatment, and therefore was a good candidate. Obviously, a person's face isn't the only indicator of their health, and the tool is used alongside other clinical information, but per the Post, it does do a better job of predicting someone's chronological age than a doctor just using their eyes alone, just the eyeball test.
You know, it makes me think about the old adage that I like to use as it pertains to women and men, which is that you can't judge a book by its cover. And you have to wonder if plastic surgery and Botox are as effective at tricking the AI as they are at tricking the human eye.
Funny you mention it. You know, scientists are actually still studying whether lighting, surgery, makeup, or other factors can affect the accuracy of the FaceAge reading. Although, interestingly, and this is something that I find very encouraging as a man experiencing the beginning of baldness, or perhaps the mid stage of baldness, FaceAge does not overreact to the visual cues of aging, like being bald or having gray hair, in the way that humans do. But just taking a step back, I think the implications of this story are actually really, really big. Because in the old days, it would have taken a doctor decades of clinical experience to develop their own sense of intuition about somebody's biological age versus their chronological age. You know, they would have developed the clinical experience and then used it to make a judgment that they probably couldn't have explained to you themselves, how they got to it. But that knowledge was captured within a community of people who were trained and trusted to use that information for good, according to the Hippocratic Oath. This app points in the direction of a future where anybody will be able to essentially tap into that intuition and take a photo of your face and know your biological age, and know, you know, how much longer you may have to live, with some degree of accuracy. This isn't happening today, but it could happen soon. That can of course be empowering, if you want to take a selfie and know what's going on and maybe make some changes to your lifestyle, perhaps. But on the other hand, having bad actors, or actors who don't have your best interests at heart, able to do the same, other people, colleagues, bosses, health insurance companies, should make us all, I think, deeply concerned.
Yeah, I mean, especially with insurance companies. It creates a huge moral problem.
Absolutely.
So I want to tell you about a headline this week that frightened me more than the realization of how old I am.
When I attended the Webbys.
Okay, what were you doing at the Webbys?
I was invited, I was invited.
Goes somewhere.
Yeah, let me tell you something.
You know how often, well, you might think differently, but I don't say no as much as I used to. At my chronological age, I say no less.
Yeah, yeah, well you've got to make the most, make the
best of it, exactly. But I read a story this week. Have you heard of ChatGPT-induced psychosis?
I have to confess I have not.
So it is kind of vague, but I got it from a Rolling Stone headline that read, people are losing loved ones to AI-fueled spiritual fantasies.
So this is like AI kind of becoming a digital cult leader or something like that.
Pretty much.
And it's putting stress on the relationships of the people who have to deal with people who think they're accessing the sort of rules of the universe
through ChatGPT.
Wow.
You know, several people
reached out to Rolling Stone about how chatbot use is getting in the way of their relationships. People even said that their partners were communicating with ChatGPT as if it were a savior figure, and in some cases the chatbot would say it was God, wow, or tell the user they were God.
That's worse, is much worse.
So here's the opening story of the article in Rolling Stone.
A couple's marriage is falling apart because a woman's husband started to use ChatGPT obsessively, and it's not the way you or I use ChatGPT. This is like someone who's being radicalized on YouTube, except YouTube is talking back to you. The husband was using it as a spiritual guide. He was asking ChatGPT philosophical questions and getting increasingly personal in his responses, revealing more and more of himself along the way. And then this same person starts to get paranoid about the government surveilling him, and he says that AI helped him regain a repressed childhood memory of a babysitter trying to drown him.
I mean, this is making my mind go in all sorts of different directions. I mean, you know, you can imagine if you're spending all of your time talking to ChatGPT and feeling so well understood, it could exacerbate the feeling of, well, if ChatGPT can understand me, why can't my husband or my wife? But what you mentioned about the repressed childhood memory is also very interesting to me, because I find that quite disturbing. Obviously, we've talked a lot about AI-assisted therapy bots and the promise of them, but at the point where ChatGPT is, you know, helping people, or convincing people, that they're recovering childhood memories, without any training or without any guardrails, I mean, that gets quite dystopian and concerning to me.
It takes quite a leap to believe that a chatbot, or a very sort of highfalutin search tool, is something that is capable of allowing you to recover repressed memory. It's one thing, I mean, repressed memory.
Hold on, but what about journaling? True, this is two-way journaling.
It's two-way journaling, but on the other side,
there's something that is predisposed to encourage you in whatever direction you're going. I think that's what's concerning about it.
I also think it says more about where this very seemingly personalized technology fits into an increasingly godless world, which is replacing religion with generative AI that seems friendly, is more readily available than your average guru or therapist, and is probably less judgmental than your wife or
husband. Certainly more prone to tell you what you want
to hear. Definitely, and also just answering you whenever you want. I mean, I think that's something, for free.
We know that chatbots tend to serve users things they know they'll like. And last month there was a story about how OpenAI actually rolled back an update to ChatGPT's 4o model because it was acting too sycophantic towards users, like constantly telling them they were geniuses or had amazing ideas, laying it on perhaps a bit too thick, as my grandmother would have said.
You know, it's sort of like when I'm with my mom on Mother's Day and she asks me to get off my phone, and I'm like, well, be as interesting as my phone and I'll start paying attention to you on Mother's Day.
I give her a Mother's Day exception. Except, well, I
mean, look, the phone is a very seductive tool. Yeah, and it's an always-on supercomputer that gives you, and I quote from the article, the answers to the universe.
And of course, that kind of black box nature obviously adds to the feeling of mysticism or spirituality. I mean, I think you look back to Victorian times and mesmerism and, you know, various quackery and stuff. The thing that made people believe was not understanding how it worked. It had to do with playing on that sense of the unknown and filling it with meaning that was maybe not appropriate meaning, and it feels like that's now happening on a kind of cross-societal and extremely technologically boosted scale.
Yeah.
And I think, conversely, it's why people shouldn't read too much into the advice that ChatGPT gives them.
Yeah. I mean, I think that's something we all have to remind ourselves of continually, because it's so tempting and often it is so useful. We've got a couple more headlines for you this week. Newly elected Pope Leo the fourteenth says that he takes a similar position on artificial intelligence as his predecessor, Pope Francis. According to CNN, the new Pope laid out the vision for his papacy, and he identified AI as one of the most critical matters facing humanity. He says that the development of AI, like the original Industrial Revolution, would quote pose new challenges for the defense of human dignity, justice, and labor.
So One Drop, which is my chosen rap name for Elizabeth Holmes, the infamous founder of fraudulent blood-testing company Theranos, is in prison, yep, but her partner is working on a new venture that sounds a little bit familiar. The New York Times reports that Holmes's partner, Billy Evans, is raising money for Haemanthus, a blood-testing startup that describes itself as quote the future of diagnostics. And here's the kicker. The device Billy Evans is showing to investors looks eerily similar to the one hawked by Theranos. One woman's trash is another man's treasure.
Yeah, I mean, I think you might need a blood test yourself if you're queuing up to invest in that one. Our friends at 404 Media wrote about an ad for Coca-Cola that used AI to scan books and, surprise surprise, got some basic facts wrong. Last month, the company released an ad campaign which featured passages from classic literature that mentioned Coca-Cola by name. The problem is, the AI came up with some examples that simply don't exist, including a sentence from a book by British author J. G. Ballard featuring Coca-Cola that he never wrote.
Staying on the topic of AI ads, beloved actress Jamie Lee Curtis asked Meta CEO Mark Zuckerberg to remove an AI-generated commercial on Instagram that she claimed stole her likeness, and it worked. According to the San Francisco Chronicle, Curtis posted on Instagram saying, quote, it's come to this, @Zuck, and then implored him to remove this quote totally fake commercial from the internet. I can hear her saying that. Curtis said she went through every proper channel and even tried to DM Zuckerberg.
Is that one of the proper channels that is?
I mean, if you're verified, he's verified, we're verified, let's get together. But she was unable to reach him, since he does not follow her on Instagram. So the ad was removed within two hours of Curtis's post, which, by the way, was set to the Aretha Franklin song Integrity. Now, can Jamie Lee Curtis get Zuckerberg to take down all the photos of himself wearing a gold chain?
Methinks not. And we're going to take a quick break now, but stick around, because cheating is on the rise and college is getting a whole lot easier. Stay with us. Welcome back to Tech Stuff. This week on Tech Support, we want to dive deeper on a headline that we touched on last week, in New York Magazine: everyone is cheating their way through college. The story stuck with me because it's one of those examples of a story which isn't just about a new technology changing the way we do old things, in this case college assignments, or cheating on college assignments. It's about tech posing a challenge to our entire system of learning in fascinating ways, and with consequences we can't begin to fathom.
I actually think we can fathom the consequences of this.
I think as we step into a future of entirely friction free existence, it's really interesting to see the ways people, especially younger, more digitally native generations, skirt around the hard parts of being a person. I read this article and was consistently asking myself the question, if given the opportunity to use something that made homework way easier, wouldn't I use it?
You know?
I literally this morning tried to get ChatGPT to summarize an article for me. It was the best of times, it was the worst of times.
Well, at least you've read your Tale of Two Cities by Charles Dickens. On SparkNotes. Without further ado, joining us to discuss how AI is roiling education is James D. Walsh, a features writer at New York Magazine's Intelligencer. James, welcome to Tech Stuff.
Thanks so much for having me.
When did you first start to get interested in how AI is changing college education?
Well, it actually started a few months ago, and I just started calling college students, talking to college students, and it quickly kind of dawned on me that everyone is cheating. Right. They may not be using the word cheating, but they are cheating according to their honor code.
And you open the piece with the story of the Columbia student Roy Lee. He gained notoriety for hacking coding tests big tech uses to assess internship applications. Why did you want to tell his story and what is the bigger takeaway from his story?
Sure?
I think Roy was fascinating to me for a number of reasons. First, in order to prepare for interviews with big tech companies like Google, or really any big tech company, he would work on LeetCode. It's this site that trains developers to do these kinds of puzzles or riddles that he doesn't really think are applicable to any kind of real-world work. So he figured that if he could develop something that would hide AI on his browser during a remote job interview, he could hack these interviews, and that it's not really cheating to him. If it's hackable and there's a tool that can be used to hack an assignment, he was thinking that, if not now, then in the near future it won't be considered cheating. And it's very much the same way he approaches his studies. It's transactional to him. He had no interest in kind of furthering himself, or learning new things about himself or about the world. He's there for the networking opportunity, and he told me he's there to find, you know, a co-founder and a wife. Roy was singular in this, but I think the idea of, if it's hackable, why am I learning this, was something that resonated throughout all of my interviews. And that's not just, you know, sort of a logic. It's also just kind of outside pressure to excel, to get really good grades. If they feel that pressure, they're going to use this tool.
Yeah, and that goes to, I guess, a larger question I have, which is, you know, after reporting this article, how common is it for college students to cheat using AI tools now?
Oh, I
think it's incredibly common. You know, our headline says everyone is cheating, and I don't think that's far off. One of the fascinating parts of this article to me was talking to students and not using the cheat word, and watching them kind of work through it. One of the students I talked to, Wendy, you know, started our conversation by saying, I am against cheating. There is a student handbook, and I am against cheating. I'm against plagiarism. I'm against copying and pasting from ChatGPT into a document. And then she proceeded to tell me exactly how she uses ChatGPT to write all of her papers.
Now, was she the one who was using ChatGPT to write the paper about how different modes of education affect students' cognitive development? And you asked whether she'd seen the irony that she was using ChatGPT to write this paper, and she basically hadn't. Exactly, yeah.
That's because ChatGPT hasn't taught her about irony, right?
Hasn't covered that yet.
Well, she hadn't had time to think about it herself, but she offloaded all of the work.
I mean, it's remarkable. Listen, I'm coming clean, I peeked at SparkNotes every once in a while when I was in college. Of course. But was SparkNotes sort of like something that I relied on every single time I got stuck? No, I think I had to work through assignments a lot more often than I could easily hack them.
The other thing that has changed a lot since, I guess, we were students, because you just mentioned SparkNotes, is, like, I was not contending with the allure of anything but a Facebook wall. And one of the women that you spoke to was talking about not so much how hard school is, but how hard it is to navigate all of the other digital distractions, like TikTok, Snapchat, Instagram. And I guess, just based on your reporting for this article, to what extent does the use of AI, whether we're calling it cheating or enhanced studying, to what extent does the pre-existing digital landscape co-opt the ability to actually participate in the thing that you or your parents or student loans are being used to send you to university for?
I don't know. Yeah, I mean, I think they're contending with this swirl, whether it's social media or just, you know, the attention economy. The fact that ChatGPT dropped, you know, at the end of twenty twenty-two is fascinating, because we'd just figured out social media in school, we were finally taking that seriously, and suddenly it's like, oh well, here, we're going to offer the greatest cheating tool that has ever been created to co-opt people's attention. I mean, I think one of the fascinating parts of this to me was the kind of introduction of these websites, Chegg and Course Hero, which I, you know, I didn't have when I was an undergrad, but in a way it was like priming students to think it was okay to cheat.
These are websites where you could pay, like, an outside service to do your work for you.
Right. And, you know, a website like Chegg was employing something like one hundred and fifty thousand experts, mostly in India, who would provide answers to questions in thirty minutes or less. And then ChatGPT comes along and you just see Chegg's stock price tank, because it was like one cheating tool replacing another.
Is there an awareness that, actually, as fun and as thrilling as hacking this is, there's a real long-term price being paid in terms of how your mind is developing?
Yeah, I mean, I think there's certainly awareness on the part of the professors and the people who are concerned about that. There is, I will say, an awareness among students. A lot of students were willing to engage in this, and I was surprised that for so many of the students this was the first time they were having this conversation, but they were eager to talk to me about it.
How forward-thinking are academic institutions and educators about this, like on the other side of things? Because it's like, is cheating a little bit in the eye of the cheater, or is it in the eye of the place that's being cheated? Yeah, I mean, that was such a thing when I was younger.
Which is, like, you're basically cheating yourself.
Yeah, well, you're only cheating yourself, but, like, it was also you could be kicked out of school, right? Right. So to what extent are academic institutions, like, trying to regulate this?
Yeah, the approach that most schools are taking is kind of ad hoc. It's leaving it up to professors to decide how they want to handle this. I'm kind of sympathetic to that, because it's such a difficult thing to regulate. How on earth do you tell students not to use something that can help them and is so difficult to catch, you know? And so professors say, either, you know, use it, don't use it, or, if you do use it, please cite it, please provide a receipt, you know, that shows the conversation you're having with ChatGPT, so I can watch kind of the gears turning. But again, it's really hard to catch AI cheaters. You know, they have these detection tools that really vary in their effectiveness, and even if you are able to catch somebody using it just by copying and pasting, you can't really catch somebody who's just using it to generate ideas or generate topic sentences and rewriting. And you can always launder AI text through other AIs so that an AI detector can't really catch it. So schools have quite the challenge in front of them. And the challenge, I think, is convincing young students as they come to their school why it's in their best interest not to use AI.
Well, it's a fascinating moment for elite higher education institutions in general, right? Obviously they're in the crosshairs of the Trump administration. There was that David Brooks piece in The Atlantic about three months ago about how kind of elites and elite universities had failed and were starting to pay the price, and people were wondering whether the price tag of going was even worth it. So there's now this kind of tremendous new accelerant to those issues. In terms of the neurological development side, did you speak to any cognitive psychologists or neuroscientists about what this is doing? I think I've heard the term cognitive offloading before, but, like, what's this doing to brains?
Sure. I mean, I tried to dig into the research that's out there on cognitive offloading, and there are a few studies here and there that kind of show that, you know, reliance on AI will reduce critical thinking. That's not necessarily surprising. But I didn't want to lean too heavily on that research or delve into it too much, because it's so early, and so I'd rather kind of let the students speak for themselves. I mean, what's fascinating to me is how quickly this is happening. Also, right, the sudden realization, I was like, oh my god, half of college students have never known college without access to this. I do think sooner rather than later we'll kind of have a better understanding of what's happening to people's brains.
I thought that was what was particularly brilliant about your piece. It was almost like a piece of anthropology in terms of you got to hear students in their own words, wrestling with this problem. There's something which is very tempting in front of them that they know is bad, but they don't know what else to do. And I thought that the drama of that really came across.
I felt a little bit of nostalgia when I was reading your piece for the forward-thinking, vigilante, TI-83-hacking Karah that existed when I was in twelfth grade. And, you know, I'm past the point of using ChatGPT for cognitive offloading, to a certain extent, because it doesn't feel native to me now. And I just wonder, I don't want to use the word worse, but whether it just gets more ubiquitous in terms of, you know, students using it as a method of, you know, skirting
the opportunity to develop their own critical faculties.
Yeah, and also just to the point of, like, people want to spend time doing other things. I think that's always been true, right? Where it's like, you went to college and you're like, well, I'd rather be hooking up or partying or doing something else. Now it's like, I'd rather be on TikTok, Snapchat and Instagram than do my homework, you know. So I think that's the difference. It also is taking out a socialization piece, which really was a part of going to school as well, which was like, I'm going to talk to my peers about what they're writing about. We're going to maybe sit in the library together and confer. Now it's like we're sitting around talking about how ChatGPT, you know, is helping us write a Chaucer essay.
Sure. I mean, something that comes up time and time again is, you know, the kind of one-on-one learning that students can do with ChatGPT. They have this brilliant TA at their fingertips at all times, and over and over, talking to students, they're like, I do it instead of office hours, it's instead of office hours. I talked to professors who said, like, our office hours have just tanked, people aren't showing up. And of course something is lost there, right? And, you know, I was kind of shy in college. Like, it took some amount of staring in the mirror, being like, all right, you're going to show up to office hours. And I think I got something out of that. So I think that the loss of those interactions is going to be measurable in some ways as well.
James, just to close. One of the most surprising moments in your piece is a quote from Sam Altman, who said before Congress, I worry that as models get better and better, users can have sort of less and less of their own discriminating process. I was surprised to hear that from him. I'm curious, did you reach out to OpenAI for comment, and how are the tech companies as a whole responding to this phenomenon?
Right. Sam Altman said that in, I think, twenty twenty-three. We of course reached out to OpenAI for comment, and they pointed us toward their education platform. You know, I think this is something that the platforms are very aware of, even in the context of education. I spoke to somebody from Anthropic about this, on their education team, and he said that they had expected students to be some of the, you know, earliest adopters, but they were still shocked by how true that is and how much adoption there is on college campuses. And they are also, you know, concerned about the implications of that. You know, OpenAI reportedly has its own watermark that would effectively cut down on plagiarism, but has chosen not to release it. So, you know, I'm really interested in what those conversations inside the company are like as well.
I mean, the fact that OpenAI could use a watermark but refuses to really shows me who these companies are targeting.
I mean, one of the most dystopian things that happened to me while we were closing this piece: I got the push alert about Google launching this AI chatbot for children under thirteen, and it was just more evidence that all these platforms are in a race to capture this kind of loyalty among younger users. And it just seems like a moment when we should kind of all be putting the brakes on.
James.
Thank you, James. Thank you very much for having me. That's it for this week for Tech Stuff. I'm Karah Preiss, and
I'm Oz Woloshyn. This episode was produced by Eliza Dennis and Victoria Dominguez. It was executive produced by me, Karah Preiss, and Kate Osborn of Kaleidoscope and Katrina Norvell for iHeart Podcasts. The engineer is Phite Fraser, and Kyle Murdock mixed this episode. He also wrote our theme song.
Join us next Wednesday for Tech Stuff: The Story, when we will share an in-depth conversation with Sir David Spiegelhalter, to discuss all things risk in life, love, and of course tech.
Please rate and review the show on Apple Podcasts or Spotify or wherever you listen to your podcasts, and you can also write to us at tech Stuff podcast at gmail dot com. We really like getting your feedback.