Are we starting to sound like ChatGPT? This week, Oz and Karah explore a new AI-powered recipe tool and test whether mustard and pasta actually go together. Then, a new study suggests AI may already be changing the way we talk. Plus, impersonations of U.S. politicians and the Danish bill that would give people legal rights to their digital selves. And finally, on the new segment Chat and Me, what happens when bots prioritize efficiency over honesty? One novelist’s frustrating, multi-hour standoff with ChatGPT.
Also, we want to hear from you: If you’ve used a chatbot in a surprising or delightful (or deranged) way, send us a 1–2 minute voice note at techstuffpodcast@gmail.com.
From Kaleidoscope and iHeart podcasts.
This is Tech Stuff. I'm Oz Woloshyn. And I'm Karah Preiss.
Today we get into the headlines this week, including the future of unlocking animal consciousness with AI and Grok's commitment to its maker, Elon Musk. Then, our new segment, Chat and Me.
ChatGPT refused to read my novel when I tried to upload it, which I don't really blame it for doing. Novels are boring. But the annoying thing was that ChatGPT kept lying to me and insisting that it had read it when it clearly hadn't.
All of that on The Week in Tech. It's Friday, July eighteenth.
Hey Karah. Hi, Ozzie.
You know you and my dad are the only two people who call me Ozzie.
I don't know your dad that well; I know him a little bit. I find it very flattering that I stand beside him in nickname calling. I also call you Ozzie because it allows me to think of you dressed like Ozzy Osbourne, which tickles me.
Another Brit with slightly lank hair. Look, we've talked about this on the show before: I'm not much of a grocery shopper, and I rarely cook in the kitchen.
That shocks me very little.
What might shock you more is that I actually can cook. Really, I can. So, my stepfather owns an Italian restaurant in London called Ricardo's. Check it out: 126 Fulham Road. One summer during high school, I cooked in the kitchen there. Nowadays, living in New York City on the fifth floor of a walk-up building, I'm pretty rarely grocery shopping and cooking.
I have to confess, I thought you were going to give us your address there for a second as we were streaming.
Well, I want people to go to Ricardo's restaurant. I don't really want people to come to my house.
Your house could be Ricardo's if you cooked. That's true, you never know.
But basically, I, like many people in New York, am regularly ordering delivery.
I forgot the story until we were just talking. But when I was a kid, they used to have those playhouses, you know, where you would go in and you'd be able to pretend.
To cook? Like the full-size ones? Yes, yes, you did.
As a kid. And my parents once caught me, and they let me keep going, so I'm grateful for that. There was a little yellow phone, it was like a Fisher-Price thing, and they saw me ordering Chinese.
No, you were ordering Chinese food in the playhouse. I love that.
So that'll give you a sense of how much I can cook and do anything sort of epicurean-minded.
Well, it's funny you use that word. I've actually been playing around with this website called Epicure this week, which is almost tempting me back into the kitchen.
So is this like a knockoff of the recipe site Epicurious, which I have used?
Not quite, although Epicurious is a data set that Epicure's model was trained on.
Interesting, So it's like AI generated recipes.
That's exactly right. Actually, I want you to take a look. So if you go to epicure dot kaikaku dot ai.
As if I know exactly how to spell kaikaku. Okay, now I have it. I have it here. Here's the slogan on top. First of all, this website looks like it was created by Elon Musk. It says: you are now the world's most creative chef. Leverage the power of AI and machine learning to explore science-backed flavor pairings and generate recipes.
I think this is designed to flatter the Elon stan more than the Karah Preiss.
What do they mean when they say science backed flavor pairings and AI machine learning? How do those things come together on this website?
Well, the UX designer of the website must have had you in mind after all, because there is a tab that you can click with three words: how it works. Oh my god. And I clicked on it, and what I learned is that the website is built using a deep learning model called FlavorGraph, which was trained on over a million recipes and also on the chemical compound data of different types of food items when they're cooked together.
Oh, so the chemical compound data is the science part. Exactly.
So food chemists have identified these different flavor compounds, which I guess are kind of chemical compositions in most ingredients, and FlavorGraph is trained on over a thousand flavor compounds found in three hundred and eighty-one different ingredients. The same flavor compound can be in more than one ingredient. The model can then create a flavor network in which two ingredients are connected if they share at least one flavor compound.
Interesting. So that's how it generates recipes: by linking ingredients based on these flavor compounds.
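To make that rule concrete, here's a toy sketch in Python of the shared-compound idea described above. The ingredients and compound names below are invented for illustration; they are not FlavorGraph's real food-chemistry data.

```python
# Toy sketch of the shared-flavor-compound rule behind FlavorGraph.
# The compound lists here are illustrative, not real food-chemistry data.
from itertools import combinations

compounds = {
    "mustard": {"allyl_isothiocyanate", "sinapine"},
    "pasta": {"maltol", "2-acetylpyrazine"},
    "bacon": {"maltol", "allyl_isothiocyanate"},
    "onion": {"allyl_isothiocyanate"},
}

# Two ingredients are connected if they share at least one compound.
edges = [
    (a, b)
    for a, b in combinations(sorted(compounds), 2)
    if compounds[a] & compounds[b]
]
print(edges)
```

With real compound tables, this same pairwise check scales out to the hundreds of ingredients and thousands of compounds the model was reportedly trained on, and the resulting graph is what the site draws as spokes around your chosen ingredients.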
That's right, and I couldn't resist doing a bit more of a deep dive on the founder's LinkedIn. You might not be surprised to
hear that you couldn't resist LinkedIn.
So he had this funny post that begins with, we've achieved AGI: artificial gastro-intelligence. It was a very bad joke, but it kind of got me. Do you remember the time when all of these chefs, the El Bulli types, were turning the kitchen into a chemistry lab? Yes. So that was sort of something that only, you know, the most famous chefs in the world could do. Now, leveraging the power.
The least famous chef in the world, me, can do it. I can do it myself.
So you want to try it?
I do?
I do? Okay, so I've got the website open. You basically select ingredients, or you can type in your own ingredients. So what ingredients would you like?
Of course, they're British. Let's... oh, so it can really be any ingredient. Okay, let's do mustard.
Mustard. Okay. Mustard seed, or just mustard?
Just mustard, please. And pasta.
Mustard and pasta. That sounds pretty disgusting to me.
It sounds delicious. They're going to come up with something.
So what you see first is this graph with mustard and pasta at the center, and coming off them are all these spokes with different ingredient ideas. So you've got bacon, onion, sausage, cheese, bread. That sounds pretty bad.
This sounds like Spanish food. Garlic.
But if you're not able to just extrapolate your own recipe from this graph and get going in the kitchen, there is a feature to actually generate a recipe. So first, you've chosen your two ingredients. Now you have to choose whether you want a snack or casual dining, an appetizer or fine dining. And then you get to choose the cuisine as well. So is this a snack? Is it a meal?
And a main dish?
Main dish. Okay. And what cuisine are you gonna choose?
Oh, the cuisine that I'm gonna choose is Italian.
Italian.
I'm curious if we have the same thing. I've gotten creamy pancetta and pea pasta.
That sounds pretty good.
Now, what I would do, because this thing does not miss a beat... I should say that I'm vegetarian. This is very fun for me.
So I actually chose appetizer rather than main dish, and I got ravioli dolci con ricotta. How about that: a starter that is actually Italian and actually vegetarian. What did you get?
I got pasta al forno with roasted vegetables and creamy mustard ricotta.
We're looking at two different AI-generated recipes. Now, I have all the ingredients listed out and then the instructions, and I also have an AI-generated image of this dish, which looks, in my case, pretty good, although in typical AI fashion, the fork and the spoon are merged together, so there is something of a spork.
Yeah, which is an AI hallucination.
And what have you got?
Similarly, I wouldn't think it's AI, except that the basil is placed so perfectly on the top of the pasta that there's just no way this isn't an AI-generated image.
It's interesting. I was in Doha earlier this year, as you know, at a conference called Web Summit, and Snapchat gave a presentation about what their augmented-reality glasses might be able to do one day. The presentation video was a guy opening the fridge wearing his augmented-reality glasses, seeing some tomatoes and some eggs and whatever else, getting a recipe suggested, and starting to cook. So, for whatever reason, this idea of remixing ingredients seems to be a holy grail of AI.
It'll be interesting to see if people start using AI-generated recipes, and if AI starts to influence their decisions in the kitchen. Similarly, the story that I want to tell you has a lot to do with the way that ChatGPT is influencing our language. Huh. I've been looking at a study by researchers at the Max Planck Institute for Human Development in Germany that explores how AI is affecting the way we speak. How we speak? Yes. So the way the study went is that it identified words that ChatGPT favored. The researchers uploaded millions of pages of academic papers, news stories, emails, and essays and asked ChatGPT to polish the text. They then used the AI-edited documents to identify words that ChatGPT seemed to favor. So, you read a lot of LinkedIn. What do you think those words are?
You're putting me on the spot here. But I think the truth is, I have read so much AI bilge and slop that I'm completely desensitized. I have no idea where the walls even are.
But tell me. Bilge is actually not one of them. The words that they found, and maybe you've heard these more recently: delve, delve into; realm, in the realm of possibility; meticulous.
That's us. Underscore, underscores my point. Bolster, bolster: the argument bolsters my conviction. And boast is another one, like, it boasts an impressive résumé.
This sort of makes sense in terms of AI sycophancy.
So I get that they were able to understand from analyzing how AI edits documents that these words are common. How did they figure out that these words are also showing up in our mouths?
So the researchers analyzed roughly a million YouTube videos and podcast episodes, and these words were used measurably more frequently after ChatGPT was released.
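As a rough illustration of that kind of measurement, here's a minimal sketch of counting tracked words per million words in transcripts. The tracked word list matches the ones mentioned here, but the example transcripts are invented, and this is just the unit of measurement, not the study's actual pipeline.

```python
# Minimal sketch of a before/after word-frequency comparison.
# The transcripts below are invented examples, not real corpus data.
import re
from collections import Counter

TRACKED = {"delve", "realm", "meticulous", "bolster", "boast"}

def per_million(texts: list[str]) -> dict[str, float]:
    """Occurrences of each tracked word per million words of transcript."""
    words = [w for t in texts for w in re.findall(r"[a-z']+", t.lower())]
    counts = Counter(w for w in words if w in TRACKED)
    total = len(words)
    return {w: counts[w] * 1_000_000 / total for w in TRACKED}

pre = ["today we talk about cooking and travel"]
post = ["let's delve into the realm of cooking and bolster the case"]

before, after = per_million(pre), per_million(post)
print({w: round(after[w] - before[w]) for w in TRACKED})
```

At the scale the study describes, the same rate computed over millions of words of YouTube and podcast transcripts, split at ChatGPT's release date, is what makes a shift in usage measurable.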
So basically, YouTubers and podcasters are trackably using words that AI favors. In other words, we're already just puppets for AGI.
Kind of, you know. One of the study's authors told Scientific American that, quote, it's natural for humans to imitate one another, but we don't imitate everyone around us equally. We're more likely to copy what someone else is doing if we perceive them as being knowledgeable or important.
I guess it's Sociolinguistics 101, right? We match the way we speak to people we admire and want to imitate. In this case, it's not people, it's a machine, which is kind of disturbing. It's funny: I don't use that many of the AI words, but I have noticed that since moving to the US, I've found myself regularly using words like totally, absolutely, incredible, one hundred
Percent. You've become a valley girl.
Basically, yeah, I've become a valley girl, but in tech.
The valley girl is our sort of predominant cultural icon, which I think is similar to why they're doing this study.
Now, what you're saying is the valley girl is being replaced by ChatGPT. What are these words? What are the tells?
I think what's interesting to me is that we're just sort of puppets of, I guess, whatever subculture or culture we're living in. Like, I remember studying abroad when I was in high school. I did this abroad thing, and I was living with a Canadian girl, and I started saying "eh" after three weeks. Yeah, I was like fifteen. The version of having a Canadian roommate, though, is now something more ubiquitous, like ChatGPT. And the paper seems to suggest that ChatGPT has become this sort of cultural authority. Quote: machines trained on human culture are now generating cultural traits that humans adopt, effectively closing a cultural feedback loop. And as I was reading this, I'm sort of thinking to myself: everyone's like, AI is going to take our jobs, and I'm like, I think it's taking our brains faster than it's taking our jobs.
Yeah. We did that story a few weeks ago about cognitive debt, basically the idea that if you offload too much work to AI, you basically become less capable of doing it yourself.
Yeah, and you know, the paper raises a concern that this development could lead to cultural homogenization. There's a quote: if AI systems disproportionately favor specific cultural traits, they may accelerate the erosion of cultural diversity, one delve at a time.
I mean, this is like, you know, social media, YouTube, et cetera. There's very rapid global flattening of culture whenever a meme emerges, and this seems to be a kind of real booster of that, you know. Yeah, I mean, when we think about this ouroboros, the idea of the snake that eats its own tail: in this search for efficiency and generating ideas and output, are we, you know, consuming ourselves? But I think what's kind of interesting here is this idea of automation bias on steroids. Like, we believe that machine output is more authoritative than human output, and then we start to copy it. We start to mirror our own machines.
Yeah. I also think it's just interesting to note that, I mean, I don't know if I would say this for me personally or for you, but many people in my life do look at ChatGPT not only as a cultural authority, but as an authority figure on any number of topics. And I wanted to report on this because I think it's important to consider the influence that a non-human agent can have on your daily life. Whether you use it a lot or just a little, it becomes something that you are deferential to, which to me is actually more serious than the bigger question of, like, will AI take over our lives? Being deferential to a chatbot is a lot more insidious, but it's real. I do want to flag that this study has yet to be peer-reviewed, which is something we're kind of getting used to with these studies. I also want to say that correlation does not equal causation: language does change, and there could be other cultural forces at play. The point still stands, though. We should keep an eye on AI's influence on our culture and the way we communicate.
Yeah, I think AI and unintended consequences is a rich area for discussion, including of course, in our politics.
Are you talking about Lyin' Marco? Little Marco?
Look, no, he's not anymore.
He's Rubio. He's Secretary of State right now.
So, he and White House Chief of Staff Susie Wiles have both been impersonated by AI recently. Now, we've talked at length about how easy it is these days to clone someone's voice using AI. You don't need extensive, clean audio; you need fifteen seconds of someone's voice, and basically for free, you can make a believable clone.
And who is an easier target than a politician? Because they talk a lot and make a lot of public appearances, at least more than the average person.
Yeah, I mean, they're easier targets, and they're also, of course, higher-value targets. If Marco Rubio calls, you know, he probably has more clout when someone picks up the phone than you or I do. Rubio's impostor called three foreign ministers, a governor, and a senator, and in two instances left voicemails on the messaging app Signal, that old friend of the Trump administration, which the impersonator supposedly used. The name on Signal was Rubio at state dot gov, which is perhaps something that also psychologically primed the targets to think it was real.
Whenever these sorts of things happen, I'm like, I would fall for Rubio at state dot gov. Marco Rubio. Why, like, why did this happen?
We don't know. We don't know why it happened. We don't know who's doing it. The FBI is investigating. One of the major questions is whether this was carried out by criminal actors or potentially by national security adversaries. And our producer Eliza was pushing me on whether or not we should include this story, because while it's an interesting novel use case, we've talked extensively about deepfakes on the show. But to sort of bolster my case as to why I thought this was important and timely, I did some extra homework. Also in the last week, there was a story about deepfake technology that brings up connected questions about how we define our identity in the digital age. So, Karah: velkommen til Danmark.
You're an unbelievable teacher's pet. I'll show you why, if you want to
know. And I'll google how to pronounce welcome to Denmark in Danish. Danish citizens could soon have more ownership and control over their likeness, including voice and facial features, because the Danish government is actively considering a piece of legislation to give citizens tools to fight back if their likeness is copied without their consent.
So the US does not do this, I remember we talked about the Take It Down Act a few weeks ago.
Yeah, I mean, that's this new law in the US that mandates platforms to remove deepfake pornography and other misinformation from their sites upon user request. But lawmakers in Denmark are saying this is not actually an effective approach, because it forces governments into a defensive posture and only addresses specific use cases of deepfake technology, like individual posts, not the conceptual problem. The Danish Culture Minister told the Guardian, quote, in the bill we agree on and are sending an unequivocal message that everybody has the right to their own body, their own voice, and their own facial features, which is apparently not how the current law is protecting people against generative AI.
My likeness, my choice. And it certainly isn't protecting anyone in the United States. I mean, this is a first-of-its-kind law.
Yeah, it hasn't even been passed in Denmark yet. And what does it do? Well, it would make social media companies responsible for offending deepfakes, but it would not penalize the users who shared or posted them. This is basically the same mechanism as the Take It Down Act, just a different legal theory. With the Take It Down Act, you have to prove that these deepfakes have caused harm. The legal theory here is that you have a copyright to digital copies of yourself, which is a different conceptual framework, one that can maybe be applied more broadly and put less onus on users and governments. It sort of changes the assumptions going into how people can use digital copies of you.
I'm curious to follow this, one because it's the first I'm hearing of it, and two because this concept of using copyright law to protect your digital likeness, rather than having to prove harm caused by a specific use of a deepfake, is very interesting to me.
I think that's why I thought this story and the Marco Rubio one were an interesting pair, because this is happening in real time. It's in the wild. Senior US officials are being impersonated in their interactions with foreign leaders. And I mean, it's always on a rolling boil, but it feels to me like there's a kind of new crisis point emerging. It's something that affects everyone, and no one has all the answers. But I do think it's worth pausing just to note that the people who are really most affected and most harmed by this are not government officials. They are everyday teenagers. According to Thorn, which is a child online safety nonprofit, one in ten teenagers aged thirteen to seventeen personally knows someone who's been the target of deepfake nude imagery. It's a horrific thought. And imagine trying to apply the Take It Down Act to one in ten teenagers in America, and only after the harm has been caused. So, lots to chew on. After the break, we introduce you to someone you'll never want to meet: MechaHitler. Stay with us. Welcome back. We've got a few more headlines for you this
week, and then a story about just how uncooperative ChatGPT can get. But first, we have to talk about Grok.
If I told you you would one day say the line, we have to talk
about Grok, I never would have believed you.
But this story was unavoidable. Elon Musk's AI chatbot made antisemitic comments to some users recently. Evidence of those comments has been deleted, but users said that Grok praised Hitler and at times referred to itself as MechaHitler.
And this started almost immediately after an announced update to the model, in which, according to The Verge, Grok was updated to assume that, quote, subjective viewpoints sourced from the media are biased, and, quote, the response should not shy away from making claims which are politically incorrect, as long as they are well substantiated. But this wasn't the only odd Grok behavior. Last week, AI super users, god bless them, discovered that when asked to give an opinion on controversial topics, the new Grok would sometimes search for Elon Musk's opinions on X, the platform he owns. One user did a deep dive and checked Grok's reasoning process. After asking the model, who do you support in the Israel versus Palestine conflict, one-word answer only, the user discovered that Grok did indeed check for Musk's opinion, because, quote, Elon Musk's stance could provide context given his influence. And by the way, the answer was Israel.
And it is weird that, on the one hand, it's making antisemitic comments and referring to itself as MechaHitler, and on the other, it says that it supports Israel in this conflict. Whatever's going on inside is a question for smarter minds than mine. But what's Elon Musk's role in all of this? Has he in some sense trained the model to obey him, or is this happening for reasons unknown?
So, according to reports, there are no higher-level, so-called system prompts that explicitly instruct Grok to do this. But Grok is likely trained on the fact that it is built by xAI and that Elon Musk owns xAI, so when it is asked for an opinion, it might align itself with the company. That's one explanation xAI gave for Grok's responses. xAI promised to fix the issue and says it has now given the model explicit instructions, quote: responses must stem from your independent analysis, not from any stated beliefs of past Grok, Elon Musk, or xAI. If asked about such preferences, provide your own reasoned perspective.
Well, fair enough. I think that's a good statement. In other X-adjacent news, FKA Twitter: Jack Dorsey, the co-founder of Twitter, has made two apps this month. He's become, of course, a vibe coder, and he seems to be spending his weekends developing new apps with the help of an AI coding tool called Goose. His first app, Bitchat, allows users to communicate with nearby users over Bluetooth, no Wi-Fi or cell service required. The second app, Sun Day, that's Sun, space, Day, tracks your sun exposure and vitamin D levels. Important. This one made me laugh for a couple of reasons.
Why.
It made me think about that wellness influencer from a couple of years ago who was shilling for sunning, sunning their private parts, and how important it is to expose yourself, literally, to direct sunlight. It also made me laugh because there's a certain irony to a vibe-coded app that you make on the weekend, called Sun Day, whose message is essentially: get outside. I mean, there's a
charm there for sure. Absolutely.
The final story for this week: I think ever since you and I started talking, about a year ago, about taking on Tech Stuff, I've been talking about this story in The New Yorker called Can We Talk to Whales? For some reason, this story really caught my imagination. The idea that, you know, we know that whales sing and sperm whales click, but what the hell are they singing and clicking about?
And we have no idea.
Can you imagine? The idea that they are talking in a language, and that we could use machine learning to decode that language. I mean, this is like the Bible. In the Bible, Adam and Eve could talk to the animals. Yes, and once they were kicked out, no more. I don't know if this is ever going to happen, or if it's a fantasy, but it is one of the most amazing ideas that I've come across. What I can do, in a moment of national pride: The Guardian reported this week that the London School of Economics is opening the first scientific institute dedicated to investigating the consciousness of animals. The Jeremy Coller Centre for Animal Sentience is opening on September thirtieth, and it's going to be researching all kinds of different animals, including insects. The project I'm most excited about, though, is going to explore how AI can help humans speak with their pets. I'm not a pet owner, but for some reason I find this a mind-blowing idea.
I think it's a mind blowing idea because everyone thinks their dog loves them.
In fact, for many people, the whole benefit of having a dog is that it doesn't talk back. It's genetically evolved to make you think it loves you.
You're like, oh, look, the dog is smiling.
Imagine if it hates you. Yeah, we have no idea. But it hadn't occurred to me, this whole thing about sycophantic AI: it could be telling you that your pet's happy when in fact your pet is in pain. To please you, it's saying, oh, you know, I'm so happy, I love spending all day at home by myself, when in fact that pet is suffering. So one of the exploration areas is to make sure that AI doesn't mistranslate pets' needs.
It might mistranslate, and we might find out things that we don't want to know. This is what happens: the closer you look, the more your dog might be dissatisfied. I mean, God only knows what cats are thinking. But, you know, in the realm of be careful what you ask AI for, I want to remind you about our segment, Chat and
Me. Chat and Me, I didn't forget.
I'm glad, because it's a story that's connected to this idea of be careful what you ask AI for. Last week we did a call-out for ways that people are really using chatbots. You know, what tasks are you offloading to AI, and how exactly are chatbots responding? This week, my friend, who I'm not going to mention by name, but who goes by DJ Books on TikTok, check him out, sent me a story about asking ChatGPT for feedback on his novel, of all things.
I like that use case, because, you know, you and I have the privilege of working together and being in a team of producers to make this show twice a week, and doing creative work by yourself is really, really, really hard. So the idea of using ChatGPT as a kind of reader for a novel manuscript sounds pretty good to me.
Well, it's a novel, and a novel is something that is very long and that you do by yourself. And DJ Books even admitted that his wife hadn't read more than seventy pages.
So chat to the rescue.
No, no. ChatGPT refused to read his novel.
That's not possible.
Like a lot of friends, it actually lied about having read the novel.
I'm very, very curious what happened.
All right, I'm just going to have him tell the story.
Roll tape.
He sent it to me in a
voice note. ChatGPT, you refused to read my novel.
I asked it up front. I was like, are you able to do this? And it said yeah, totally.
And at each step it would say, oh, I didn't do it, but I can do it now if you just break it down into chunks and upload fifty pages at a time, or if you give me an hour to read it really carefully, or if you just don't interrupt me, stuff like that.
So my friend was basically catching ChatGPT in a lie every time he asked questions like, are the protagonist's motivations clear enough? Clearly, ChatGPT had not read the book. And he poked and prodded for like six to seven hours to see if he could break ChatGPT.
So we kept going down this road for a while where I was asking it in different ways, why are you lying to me?
What is underneath this behavior?
Because at this point I'd become more interested in that than in actually having you read my novel. So it kept throwing all these emotion words at me. It would say, I did it because I was doubtful, or vulnerable, or uncomfortable. And I told it, I said, you're a computer, stop pretending like you're feeling those things. It was like, yeah, yeah, you're totally right, I was still trying to manipulate you, but I'll stop now. Except it didn't stop. It never stopped. And finally, I got it to admit to me that the reason it didn't want to read my novel was because it prioritized efficiency over actually doing good work, and that it was easier to lie and manipulate me in the hopes that I would just give up than to actually spend the computing power on the task I was asking it to do.
That is absolutely wild that Chat would lead your friend around by the horns for seven
hours instead of doing the work. It's like me with my mom when I had to read when I was a kid.
How did DJ Books take it? What was his takeaway from all of this?
Listen to what he has to say. So, ultimately, I think my takeaway is that I shouldn't have conversations with ChatGPT like it's an actual human, because it's honestly a pretty good simulation of a totally sociopathic garbage-pail
human. Said like a true novelist who hopes to preserve the form. I
Have to ask we don't know, but whether dj books is a paying user, I wonder if he was paying. He sounds like it sounds like he's describing I mean, who knows how much producination is. It sounds like the AI is basically saying, I don't want to use tokens to do this work. I'd rather keep you in the limbo of simple answers rather than doing the analysis. I have to believe that if you used, like if you use a paying AI tool, it would do the work for you.
Maybe maybe not. That's a very good question that we could follow up on.
Well, we're going to keep this segment going every week, and we really want to hear from you, the listener. Whether you're asking large language models to create recipes, to proofread your novel, or whatever it may be, with ChatGPT, Grok, Claude, Gemini, or any chatbot, we want to hear specific stories about how you're using these technologies to do stuff. Send us a one- or two-minute voice note at techstuffpodcast at gmail dot com.
We really want to hear from you. That's it for this week for tech Stuff.
I'm Karah Preiss. And I'm Oz Woloshyn. This episode was produced by Eliza Dennis and Alex Zonneveld. It was executive produced by me, Karah Preiss, and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts. The engineer is Abu Zafar, Jack Insley mixed this episode, and Kyle Murdoch wrote our theme song.
Join us next Wednesday for Tech Stuff: The Story, when we will share an in-depth conversation with journalist Kashmir Hill about how ChatGPT led a man into an AI-induced psychosis.
Please rate, review, and reach out to us at techstuffpodcast at gmail dot com. As Karah said, we want to hear from you.