Robert and Garrison round up all the horrifying childcare products with AI jammed in them from CES 2025.
Cool Zone Media.
Oh, it's It Could Happen Here, a podcast from CES, the Consumer Electronics Show, 2025. I am here with my friend and work partner, Garrison Davis. We have been trotting the boards, the boards being the Las Vegas Convention Center, all day. Garrison, today you started earlier than I did because I was catastrophically hungover after getting very drunk with the priest last night. Yeah, we had a nice dinner, and then we set out to experience a fresh new hell. And in this case, that fresh new hell was what the AI bros have ready for your children.
No, it's funny how we both stumbled across AI products for kids on, like, the same day, during the exact same time.
Uh huh. Yeah, it really is remarkable. Like, I guess in part it's just because that is such a focus. I think it has something to do with what you saw some of yesterday, and what I had caught a little of the day before, where they're like, yeah, people don't really like this stuff, we're gonna have to get around it. Like, obviously this is inevitable, but people really also seem to not enjoy it very much. No one can explain why. But I think that this may be like, okay, well, if we get them when they're young enough, if we train these kids, we can force this on them and they'll have no choice but to like it.
And it's interesting you say that, because the first thing I did today was go to a panel at the Venetian titled Raising AI Kids Responsibly, which is maybe the best title for any single panel.
Yeah, that's fucked up.
The description was: a new generation of kids are being brought up with AI technologies as a part of their lives. How does this affect their learning, entertainment, and socialization? Which is a good question. Yeah, we should be asking that. More people should. There were four people on the panel: Karen Ruth Wong from IDEO Play Lab Partnerships, Nilo Lewick from Skyrocket Toys, Melissa Hunter from Family Video Network, and Joshua Garrett from ReadyLand. And I'll talk about all these different companies and people in a sec.
Yeah.
So the panel started with Karen Ruth Wong from IDEO, which is the company that first partnered with Sesame Workshop to start making online apps. So, you know, that was interesting to me, because Sesame Workshop generally puts a lot of care into, like, you know, making media for children, and this is a company that works with them. So I was interested in what she was gonna say. And basically she talked not about any products that her company's making, but instead about research into how AI is affecting Gen Z, how Gen Z wants to interact with AI, and a whole bunch of research that her company has been doing for the past few years on what people, you know, my age and younger, what their attitudes are towards this thing that has become an increasingly encroaching part of their lives.
I'm just going to play a series of clips.
Couldn't be more excited.
So I was sharing this morning a little bit about what they're learning.
The question is: what if the tech-savvy generation is abandoning AI? Moreover, we have a lot of really interesting opinions and assumptions in our heads, that these are the ones that are going to be the first users and the first adopters, and in many ways they are. But they're also the ones that come with the most informed opinions, not just about how badly the tech feels, how cringey some of it may be landing, but also how it's affecting their sense of humanity.
That's fascinating.
Yeah.
The very first thing, this is literally, like, a few minutes into the panel, after they do their introductions, the first thing they talk about is how Gen Z is both an early adopter of new tech, but also kind of the most AI-critical.
Yeah. Yeah, yeah, it's cringey.
Yeah, like how it feels cringey, and not just that, how it's affecting people's sense of humanity, and viewing this, you know, in some ways as an obstacle to get over. But also, I'm not sure how I feel about, like, you know, Karen and the company she's representing here, because in some ways I felt like she's probably actually good. She just had to frame all of the things she was saying as, like, shocking revelations to all these tech bros. Be like, yeah, actually, it turns out kids surprisingly don't want their lives run by AI.
Yeah, don't want to communicate only with AI.
I actually liked what she was saying. It's just her presentation of it felt kind of odd at times because of who the audience is.
Do you get the feeling that she was, like, a bad person trying to help other bad people sell poison to children, or somebody who was trying, in a way that these guys would listen to, to tell them that what they're doing isn't going to work?
Maybe like twenty-eighty. So, like, a little bit of, yeah, we have to sell some of this, but mostly it felt like trying to inform people about how this isn't really what people want, and, you know, it has a lot of actual drawbacks. Here's a clip of Karen talking about the sort of questions that they're asking kids to, you know, get data on how they feel about AI.
Here's a few provocative ones.
We really put out tangible expressions of what it would be like to interact with a potential AI tool. And so we asked questions like, okay, you've recently had a friend breakup. What kind of intervention do you want? Do you want someone to counsel you through that process, or do you want someone to kind of replace that friend for the time being, just so you can, you know, back yourself out from that relationship? So by asking really tangible questions, by putting prototypes in front of youth, we were able to co-design and derive insights. But this one always gets all audience members. We put out a provocative expression of: imagine you could have an AI, trained on your preferences, on your personality, live your life for you. Imagine they could swipe your Tinder for you. They would have the achy conversations, or they would go through the awkward introductions with a new person in school. And we heard some really interesting things: I want to go on a bad date for myself, and I want to have that bad vacation. There was a really interesting finding that being able to live life for yourselves, that being able to live life for yourself, is a badge of honor.
Amazing that human beings don't want a robot to replace them in such drudgery as the search for love and human connection. Incredible that teens aren't interested in letting a robot go on dates for them.
No, it's super interesting. And like, even the first thing she said, about, you know, you lost some friends, do you want an AI to, like, you know, counsel you, or, like, talk about your feelings, or do you want a friend replacement? And no, people don't want a friend replacement. And even this other question of, like, you know, AI swiping your Tinder for you, trying to figure out what your preferences are. No, Gen Z wants to live life for themselves. It's odd because...
Because that's what being a person is. That's what a person does, right?
But like, it's odd how that's framed as, like, a surprising revelation.
Wow, these kids want to live lives.
So yeah, it was kind of an odd panel to go to. She highlighted that the key areas of tension in AI for Gen Z are twofold: creative expression and human relationships. These are the two biggest things people are concerned about: how it will affect your ability to, you know, make art, be creative, and what it means for, you know, relationships as a human being, right? Especially if you're being asked questions about, you know, would you let an AI meet someone that you want to date first, have them go through, like, a first fake AI date, to get through icebreaker questions or something.
The amount of people I meet who feel that way about, like, their digital twins, or who take pride in having, like, an AI trained off of their social media posts, at events like these. It's shocking to me, because, like, do you feel good about saying that a chatbot, you feel, like, is you? That you have trained a chatbot to be a reasonable simulacrum of yourself? Do you feel good about thinking that? Does that make you happy about yourself?
Well?
And the data that person was talking about showed, no, like, yeah, people actually don't want these things. Like, no, that actually isn't what anyone wants out of life. This isn't what anyone wants out of this technology, right? Like, we use AI all the time, you know, like autocomplete. It has a whole bunch of, you know, pretty basic uses.
Yeah, it saves me from having to spell certain words too many times.
Yeah, but we don't want it to, like, go on dates for us. And a whole part of being human is having, you know, a degree of bad experiences, and that helps shape us as people. This isn't, like, a hurdle to get over. This is a part of what it means to be human. And she kind of talked about that a little bit more in this last clip that I'll play, the next one here.
I prefer to give opportunities to people over technology. I think these are the ones. And again, they've seen what it's like when people feel replaced. I'll definitely share a lot more, but starting off with a few key learnings: Gen Z values advice and perspective from lived experience. Something about designing for friction. I'm gonna dwell on, like, design for friction. In our age of optimization, in our age of assuming that everything should move as fast as possible to make life as smooth as possible, there's something about the challenge, and that comes back to play, right? Why would we spend so much time to hit a ball several hundred yards away? There's something about the joy of achieving, the joy of overcoming challenge, the joy of moving through your first friend breakup or your boyfriend or girlfriend breakup, that makes you into a person. And as many times as helicopter parents, or as people who design technologies, assume that the smoothest possible path is the best possible path, there's some pushback there.
Some pushback, some pushback to the idea that, like, you should live a life, that your one precious life should be lived.
No, but there's a whole bunch of interesting stuff there. Gen Z has great fears about being replaced, yeah, you know, like workforce replacement. Gen Z prefers to actually, like, make connections and network with other people. Our age actually, like, shares opportunities.
Yeah.
In previous panels, this was something that was also talked about, how millennials were way more selective about, like, sharing employment opportunities, because they were so focused on making sure that they make it, and now there's a lot more, like, open collaboration and sharing of opportunities.
It's harder, so you guys have to be better about that.
Yeah.
Yeah, no, talking about, you know, designing for friction. There's value in something being challenging.
That was very interesting, the surprise about that, because, I'm sure most of these people were born to wealth and privilege, and the first thing that people do with money, the primary reason to have money, is to reduce friction. The fact that that's surprising to anyone, that, like, no, friction's necessary, otherwise you're not a person. I mean, it's like the ghoul we saw the other night, right? Like, you know, they're just not really people, you know.
One thing she kind of closed on in this section is talking about how Gen Z does not trust AI to understand the nuance of their lives, especially in this age of, like, tech optimization. Like, that misses a part of what it means to, you know, feel proud of yourself and the work that you've done. Something she talked about at the very end of the panel was how they hadn't factored in that Gen Z, you know, and people in general, right, will feel proud about, you know, making a piece of art, and they don't have that same sense of pride for an AI-generated image, no matter whether that's, like, a screenplay, whether it's whatever. Someone gave an example of, like, you know, I have a kid who does creative stuff. They edit videos, right, and there are AI tools that make editing videos easier. But if the AI does all the work, they don't feel happy about that. Like, they don't feel proud, they don't feel like they've actually achieved something. And you have to feel proud about the work that you've done, so there's actually a sense of, like, ownership over the art that we create. An exact quote was, quote, you can't eliminate life's formative aspects, which is like, yes, yeah. I'm happy someone at CES is saying this. The fact that it needs to be said at all...
Very bleak, very sad.
It's really bleak.
Yeah, dating people, making friends, being social, doing whatever it is you do for a living as yourself is what life is like.
Yeah.
I think the last thing she talked about is, like, Gen Z aren't technophobes, but they do have strong boundaries.
Yeah, good.
And they have to reinforce their own sense of self, because we're constantly being bombarded with, you know, slop content, influencers, podcasts, live streams, like, everything, you know, TikTok, social media. So we have strong boundaries on how tech integrates into our lives, and a lot of the ways these tech bros want AI to, like, become more invasive, we are not super into right now.
Like, all they're offering people is: this machine will do everything that you actually want to do with your time, and also you won't have a job. Like, that's what big tech is promising Gen Z.
Yeah, so that's how I started my day.
Speaking of Gen Z, Z stands for zillions of dollars that we will get if you listen to these ads. And we're back.
So unfortunately, that panel wasn't just talking about how kids maybe don't want AI to run their lives. It also had two other people from AI products. The first one that I'll mention is called ReadyLand, which I think partnered with Amazon to some degree; it at least uses, like, Amazon Alexas. It's essentially a choose-your-own-adventure storybook, with, like, an actual physical copy, that Alexa will read to you, and you can talk to it, so you can talk to characters and choose different pathways. I was more skeptical of it at first, because I just don't like AIs reading books to kids. But this became more like an interactive story thing, and it actually seemed kind of good at what it was doing. And then the guy behind it clarified ReadyLand is not using AI to generate new content for kids. It's all, like, pre-programmed, human-written paths, you know, just with so many variables already built in. Based on, you know, like, if you're making food in one of these books, or, you know, a kid wants to go on, like, a weird side quest, the AI already has, like, stuff for how to handle that. It knows how to say these words and knows how to stitch together these things, but it's not actually generating new content itself. Everything is, like, pre-baked; it can just be assembled in many different ways. Okay, so every time you read a book to the kid, it'll be slightly different, because the kid will respond to certain plot elements. The kid can, like, talk to characters, ask questions. So this was actually pretty interesting. The fact that it's simply not even generating new content makes it miles better than any of these other
AI kids products. That it's actually just kind of using some of the tech that makes up AI, exactly, to make something humans wrote more reactive. Exactly.
Yeah. So, like, it's actually a pretty interesting piece of technology, and it's not just Alexa reading a storybook. It has, like, a large interactive element, which, you know, makes the Alexa part actually useful. And then there was this other product. What was this one called? It's from a company called Skyrocket Toys. Poe, the AI teddy
bear, or something like that.
Yeah, Poe the AI bear, which does generate live content, with guardrails, he did say. Oh good. But the AI content both comes from the input and the output. He talked about guardrails, you know. He said, you know, ChatGPT does have internal guardrails, but the reliability is suspect. Which it certainly is, considering just last week there was a piece of news about ChatGPT helping someone build a bomb.
Yeah yeah, which they used in just this magical city.
Yes.
So he did say that, like, guardrail reliability can be suspect, but there is a difference when you have, certainly, like, more child-friendly features turned on. But he admitted that, like, moderation is part of the challenge.
I don't know.
Basically, how this works is you have an app synced up with this AI teddy bear, which talks with a not very pleasing voice.
Oh I got to hear it.
Do you want me to pull this up?
Yes?
Absolutely. Okay. But basically, you put in a whole bunch of story inputs, being like, I want the story set in this place, I want it featuring these types of characters, I want this archetype to be the villain. It has, like, dozens if not hundreds of, like, archetypal things that you can click, and then the teddy bear will generate a new story. So it is generating new content, but with, like, pre-baked characters. Okay, so then it'll stitch together the story. The weirder you make the variables, the weirder the story is going to be. Well, let me play a clip for Robert here, featuring Nilo.
It's a bright and shiny January morning, the perfect time for another story. Did you know that in Las Vegas, where our story takes place, they have a gigantic Ferris wheel called the High Roller? It's taller than the Statue of Liberty.
So it's pulling in real-world events and places based on the setting that it chooses.
What did I tell you?
There's a mystery waiting to be unwound at the Consumer Electronics Show.
There's excitement here.
That guy, like, sitting there talking, almost rolling his eyes at his own product while it yaps in his lap, is perfect. Like, he clearly didn't think about how that would look, because it does not make an appealing ad for the product.
No, it doesn't sound good. So yeah, they generated a story set at CES in Las Vegas, and he would occasionally interrupt the bear to, like, explain what it was doing. So that was the other product. Not nearly as polished or really as thoughtful as, like, the AI storybook. But, you know, maybe if you are tired of having to, you know, talk to your kid, you can just get one of these teddy bears to
Yeah, throw it in front of them, have it raise them. I mean, it looks like it could probably handle all of the physical contact they need too, so you don't even need to ever touch your child. And in fact, you can just have ChatGPT route that through the bear and never even see your own flesh and blood. Like, I think ideally you would have them cut out of there, you know, really surgically remove that baby, you know, a month or two early, and that way you can kind of absolutely minimize the amount of time that you ever spend in contact with your spawn.
One other thing I will add is that the ReadyLand guy, the AI storybook specifically, when talking about, you know, the importance of guardrails, he said that there's multiple levels to safety. Right, an AI kids robot that swears, right? That's one thing that's pretty easy to avoid, actually. Like, that's pretty easy.
There's a limited number of swear words, right, and
you could just block out certain things from happening. Yeah, you can build that in. But another aspect that's really important to safety is, like, the accuracy of the things it's saying, right? Like, what if it's saying something that's supposed to be, you know, some factual statement about the world that just isn't true, or can actually lead to danger, right? What if it tells your kid to do something which is actually kind of dangerous, or, not even directly telling them, but, you know, it says something that, if the kid then tries to do it, is really dangerous? And this is why their storybook program, you know, does not generate new content. So everything it says is, like, already pre-approved; it already has, you know, verified safe sentences. Versus this AI teddy bear, because it is generating new content, you know, it could, if things go horribly wrong, you know, talk about drinking bleach. You know, theoretically, you know, things can go wrong. So it's not just about, you know, avoiding bad words or talking about sex or, you know, those types of inappropriate things. It's also making sure it's not, like, hallucinating or saying things that could lead to dangerous situations.
Right. Well, the good news is that I don't think these are going to be wildly successful products. I mean, I guess we'll see. But these are super expensive, and, like, did you get a price point for that bear?
I did not hear a price point for the bear.
I'm curious as to what they're going to be charging for it. I mean, we'll see if any of this stuff really does take off. I wouldn't consider it optimism to hope this stuff takes off, but, like, they don't seem like great products to me, so I guess we'll see. I read something very interesting that is related, exactly, and it probably was, he might have been talking about, like, that weird bear or something. I read something very interesting on the subject of, like, AI children's toys from a guy who was, like, an AI developer. This was from a post on Twitter by Alex Volkov: I got my six-year-old daughter an AI toy for her birthday that arrived for Christmas instead. She unpacked it all excited. I explained that this isn't like other toys, that this one has AI in it. She of course knows what AI is, has seen the things I've built and interacted with them, chatted with ChatGPT in Santa mode, knows that Daddy is doing AI, etc. So a very interesting experiment happened. After Magical Toys reached out and fixed the issue, referenced below, she started playing with this dino, chatted with it, and then learned to turn it off and doesn't want it to talk anymore. She still loves playing with it, dressed it up. It now has paper shoes and a top hat that we made together. But every time I ask her if she'd like to chat with it, she says no. The few times she turned it back on, she did speak with it for a bit, and then she just turned it off again, not wanting to engage. I gently asked why, and I wasn't really able to understand where the resistance is. It's not weird to her. In fact, at one point she was pretending that the dino was a baby while it was turned on. So I told her, let's ask it to pretend to be a baby, and it obliged and said okay, so we asked it to cry. Granted, they don't have an amazing advanced voice mode like OpenAI, so it did its best, but it sounded weird, which made her laugh really hard. It was basically making crying sounds, like, talking. And also there are still technical issues; the voice is sometimes choppy, so it could be that it's still uncanny for her. I'm honestly fascinated about why the AI aspect of this didn't connect with my six-year-old.
Because it's creepy, because it's people.
They don't like it. Nobody wants this. Yeah, ick. Yeah, ick. I know this is a sample size of one kid here, and I'm sure many, many things will change. I'm sure she'll grow and learn to interact with more AIs in different forms. But the first toy contact was, interestingly, almost a complete failure.
That is interesting.
Yeah, I find that fucking fascinating.
Yeah, no one wants this, even six-year-olds. They're like, hey, I would prefer just a regular toy I can play with.
I would prefer to pretend it's a robot, but I don't want it to be a robot that talks to me.
Poe the AI bear is fifty dollars on Amazon.
Oh, that's not bad, actually. No, that's good. Okay, good. All right, well, maybe we
can even order one and see what we can get out of it. Yeah. All right, we're gonna go on another break and return to talk once again about AI products for your children.
Okay, we're back.
So we went and saw something else today. While you were at a different chunk of the event, talking to yet another flying car company that promises to revolutionize the ease with which we can all do 9/11s, super excited for that future, by the way, I stumbled upon the booth for a company called TCL.
A pretty big company, fairly large.
Yeah, a large company, and they make a lot of TVs, stuff like that. They had a couple of things. They had an AI laundry machine.
So many AI laundry bots.
Yeah, this one was the worst, because it's, like, this little, almost a soft rounded pyramid shape. It hangs your laundry. They say they can't do folding yet, so it just sort of, like, picks up dry laundry and holds it, like,
it just suspends it in the air.
It suspends it in the air inside of itself. And also, it can only do a kilogram of laundry. The only thing they had in there was, like, handkerchiefs and scarves. So it's, like, probably a couple of thousand dollars, but AI can clean your handkerchiefs and scarves.
As opposed to my regular washing machine.
Yeah, and they had a washing machine that can identify and count exactly what clothes are in it, how many of them there are, and it'll tell you the soil level, and yada yada, yada yada. Like, I'm sure some people will want this shit, but it's like, yeah, the only people are those who have a lot of money and want to spend it on a laundry machine, because I don't see that it actually reduces the amount of work you need to do at this point. But the thing they had at the booth that caught my eye was a robot toy for kids. Ai, space, Me is the name of the robot. Baby Yoda was a partial inspiration, because, like a Furby, yeah, Furby, there's some porg in there. It's a two-part toy. The interior part is, like, a swaddled-up, almost porg-looking little thing with a cute face, and the eyes are reasonably good. Like, they did a decent job of the eyes not looking creepy, but, like, they blink and change color and contract and expand. And then it's got, like, two little flapper arms that can wiggle. And it's seated inside, almost like the aliens in Independence Day, it's seated inside, like, this large rolling body frame that allows it to move around on the ground. And so it's supposed to, like, be your child's friend. And that was upsetting to me, because they had this video ad that would play every so often, and it was very creepy. And, you know, I thought back to when we were doing the interview with the guy who had, like, the robot for old people. He was like, it's very important that it not tell them it loves them, that it, like, always reiterate that it's a false thing. This robot just keeps telling the kid, I love you, like, I care for you. When the lady did a demo, she was like, it's a toy that actually knows and cares about your child. And, like, no, it's not. No, it's not, don't say that. That shouldn't be legal. For you to say that, for you to sell this to children and tell them it's an intelligent being that loves them, is, like, deeply abusive in my opinion. Like, that is actually child abuse, because it's not alive. Anyway, so I had to bring Garrison over, because you needed to see it.
Oh and saw it, I did.
Yeah, And I'm gonna play a little clip from the ad, so I want you to hear the way this thing sounds.
Reminds us.
Oh my god. I found that profoundly upsetting and disturbing. Yeah, your kid can, like, pick it up and walk with it. It'll talk to them, it'll make up stories. It'll look at pictures your kid draws and then generate them into, like, live AI videos. You can put a pin on it, and it will record stuff that your kid does and play it back to you at night as a video. So again, absolutely minimizing the amount of time you have to spend with your child.
It's in the car.
Yeah, it takes over your car, so that, like, it's talking to you from the screens in your car.
The video shows, like, taking this thing everywhere the kid goes. It's like the kid's main interaction with the world, yeah, is with this little rolling plastic Furby. And yeah, like, talking about it expressing love, and how damaging this must be for, like, a four-year-old. Like, the first thing that constantly expresses love and affection for them is this little rolling robot that you're gonna throw in the garbage in, like, you know, four years, when you're too old for it. How traumatizing and deeply fucked up that's gonna be for, like, your sense of self and love and affection.
The mix of things that they're trying to have this do. Like, the other ones were billed as toys; this was billed as, like, a friend for your child.
As well as, like, a home assistant.
Yeah, it's supposed to also act as, like, you can hook it into your smart home so it can change the temperature. Like, they did a little in-person demo where, like, a woman pretending to be a mom talked with it about planning, planned a birthday party for her kid with it, yeah, and it, like, put food in her Amazon cart and changed the temperature inside because more people were coming over. One of the things they advertise is security mode, where it, like, travels around your house at night and acts as a sentry watching your home. Like, wild stuff.
No, it was, honestly, I've seen a few, like, disturbing things, you know. All of, like, the new drone tech, to have, like, solar-powered drones that can stay in the air to drop bombs, is, like, bad. But, like, this type of stuff is, like, really dehumanizing. Yeah, it really, like, viscerally upsets me.
Yeah, and I think probably very bad for children. Everything they showed us was incredibly curated. Like, when we watched this live thing where she was having a very fluid conversation with it, that was clearly scripted. Yes. And so I wonder how well this thing actually works in practice.
We never got an actual, like, live demo.
No, because they always show it perfectly recognizing the kid, perfectly recognizing, like, what's in their, you know, little kid drawings and stuff, what it's supposed to be, to make beautiful, creepily shiny AI moving versions and stuff. So, like, I wonder how much less good it's going to be in reality than the thing that they've showed us. But it's definitely some amount shittier than what they've displayed already. And part of why I think that is, like, we went to check out the booth of this other South Korean company, just called, I think, SK. They had, they called it a quantum security camera that was AI-enabled. And I'm thinking about how, like, in the ads, it always, like, recognized the kid and its parents and their drawings accurately. Well, this one, when I flipped off the camera with both middle fingers, recognized it and wrote up a description of a man giving the camera a thumbs up. Like, I'm really curious, for when these things hit the market and people start buying them, like, what sort of fucked-up stuff it'll do, and how kind of big the seams are. I don't expect a long life for this thing, which is going to be funnier, because, like, there was already a big eight-hundred-dollar, like, children's companion AI toy that failed last year, and the company shut off access to them, and so parents had to explain to their kids, who had bonded with this thing, that it was dying forever. And that's especially exciting too, because they've built a robot that talks to your kid and tells it it loves them, and eventually that robot is going to be taken away from the child by the company when it no longer becomes profitable. And I'm excited for that, like, new ground in how to fuck up kids. Anyway, that's what I got, Garrison.
What an uplifting CES adventure, once again.
Yeah?
No, that's all. Yeah, it's all great. All right, everybody. Well, this has been Behind the Bastards. No, it's not. Or, no it's not. What is this? This has been It Could Happen Here, a podcast by somebody who is slowly going insane.
Yeah, because we're, like, four days into Vegas now, and we still have one more day.
I'm out of my mind. I'm completely broken.
Uh, hopefully tomorrow we'll have the final of our, like, on-the-ground coverage, with our CES best-in-show. Yeah, so that's always going to be, maybe,
a high note. So see you there.
It Could Happen Here is a production of Cool Zone Media.
For more podcasts from Cool Zone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. You can now find sources for It Could Happen Here listed directly in episode descriptions. Thanks for listening.