Week in Tech: You, Me & Ghibli

Published Apr 4, 2025, 9:00 AM

How do LLMs solve math problems? This week in the News Roundup, Oz and Karah explore what AI models could mean for the fashion industry, the humble-but-mighty device our modern world depends on, and what Anthropic’s researchers learned about the inner workings of their LLM. On TechSupport, The Washington Post’s technology reporter Gerrit De Vynck explains the state of the AI race and how some of tech’s biggest companies are vying for position.

Welcome to Tech Stuff, a production of iHeart Podcasts and Kaleidoscope. I'm Oz Woloshyn, and today Karah Preiss and I will bring you the headlines this week, including the fashion models getting AI twins. Then on Tech Support, we'll talk to Gerrit De Vynck, a reporter at The Washington Post, about all the news in AI, from OpenAI's huge new fundraise to xAI's acquisition of X, formerly Twitter. All of that on the Week in Tech.

It's Friday, April fourth, Karah Preiss.

Oz Woloshyn, you're looking like me today.

Yeah, we are matching.

We are very matchy-matchy. I was at McLaren F1's headquarters earlier this week, as you know, and I sent you a picture that I was very proud of, of me and Zak Brown, the CEO of McLaren Racing. And your only comment was, oh, black trainers.

Always black ASICS sneakers. ASICS, you can sponsor us. But yes, always black ASICS sneakers, even if he's wearing a tie. But, you know, for a tech podcast, it's very funny how much we talk about fashion. And it's not because I'm a woman.

I mean, it is also ironic that we both have our own uniforms. We're both wearing blue shirts, dark blue pants, and a cool hat.

Though I am wearing a cool hat. Well, I wouldn't call it a cool hat, but if you want to call it a cool hat, it's an appropriated Yankees hat that is completely illegal and was taken off the internet the minute it went up. So I'm very proud to be wearing it now.

The reason we're talking about this is not just because we're totally self-indulgent, but because the topic of tech folks and their uniforms is pretty fascinating. Earlier this year, we talked about Mark Zuckerberg, who had this famous, weirdly shaped T-shirt with the Latin phrase "Aut Zuck aut nihil" printed on it, which means "either Zuck or nothing" and is a reference to Julius Caesar, of course.

It's just insane to me, the level of... it's not narcissism. It's this obsession with being cool that, when chased, is the antithesis of anything cool and fashionable.

I thought that one of the genuinely coolest tech swipes of the year was when Jay Graber, who's the CEO of Bluesky, the kind of left-leaning social media platform, wore a T-shirt at South by Southwest that said, in Latin, "no more Caesars." It was a real sort of shots-fired at Zuck, I think.

Also just again in Latin. I mean, these are the nerdiest people in human history.

This was a huge bestseller for Bluesky. I saw that almost immediately.

I bet it was. You know, tech titans, as I was saying, just seem to want to have a signature look. And I don't know if part of it is pathology, psychological pathology. I mean, certainly mine is psychological pathology. I like to wear the same thing every day; I don't have to think about it. Like our friend Elizabeth Holmes, who also rocked the black turtleneck.

She flexed hard on the Steve Jobs.

She flexed and copped. She copped something from Steve Jobs. And I guess it sort of worked for her, because we're still talking about it.

But I think the most talked-about tech CEO of the moment is none other than Jensen Huang of Nvidia.

The man who made a lot of people a lot of money and now is making a lot of people a little less money.

And still, still a lot, but a little less, exactly. He has a signature look. Can you describe it?

It's sort of a Marlon Brando, On the Waterfront black leather jacket that is, I think, Tom Ford.

I think it is Tom Ford, which is like a ten-thousand-dollar jacket.

That's an extremely expensive jacket.

But there are a lot of Nvidia stans, it turns out, and if you build it, they will come to buy a fake black leather jacket.

If you build it, they will come and buy it a lot cheaper exactly.

So our friends at 404 Media reported that there are all these websites popping up with knockoff Jensen Huang Nvidia jackets, which have creative titles like "Jensen Huang Nvidia Jacket," totally optimized for search.

People are going to find my phone and be like, that was the last thing she was looking at.

But I'm not sure if you saw this. This week Jensen Huang went to visit a company called X1 Robots, and they had one of their humanoid robots come up to Jensen Huang and present him with a new black leather jacket. Not a Tom Ford one. This one was bedazzled, with the Nvidia stock ticker on the front left pocket and the logo in kind of Swarovski-style crystals on the back.

Imagine if the humanoid robot is also sentient, if the robot was like, I don't want to do this. Like, people programming me, please give me something better to do.

But there's a more serious story about fashion and AI here, and it comes from The Guardian. According to their story, the clothing retailer H&M announced that they're going to create so-called AI twins of thirty models, which they're going to use for social media marketing posts.

By the way, I wish I had this for the podcast. As much as I enjoy doing it with you, I wish.

You could just have a replica. I'd have a replica. Would you rather use your replica or my replica?

Well, if we wanted me to be on time, I would use your replica. But sorry, keep going, because I do love the story.

No. So these models, the real models, have given their permission to H&M to use their likeness with AI. You know, again, as you said, it sounds good. Why schlep yourself somewhere when you can send your digital twin?

Well, that, and also I think there's something very interesting here that reminds me of The Atlantic sort of selling its data, which is: these are people that, like in the film industry, are cannibalizing their own jobs.

Well, of course, just like with The Atlantic and OpenAI, where OpenAI actually compensates The Atlantic for using its archive, these models are also being compensated for their image. They own the rights to their twins, and their twins can work for other brands, not just H&M, and they'll get paid. One of the thirty models said, quote, "She's like me, without the jet lag."

So the other thing worth saying is, the images in the Business of Fashion write-up look amazingly similar.

Like it's not distinguishable.

Yeah, it's not like... who was that person we used to cover on Sleepwalkers? She was a model, Lil Miquela. Oh yeah, Lil Miquela looked like AI. I think, you know, after four years, we have now gotten to a place where it's just an AI replica, which is incredible.

Yeah, and there's no sixth finger, as you pointed out. It does raise concerns about the future of modeling in the fashion industry, because although the models who are participating in the partnership will get paid every time their digital twin is used, just like in the film industry it also raises questions about what happens to all the people who work on sets. I mean the hairstylists, the makeup artists, the lighting designers.

Yeah, you know, I was actually talking to our producer Eliza. Shout out to Eliza, I like to keep her in the mix. And she actually knows a hairstylist who works with a company that's testing out replacing many of their e commerce models with AI, and you know, work has slowed way down for her.

So this is happening.

I mean, think about it. You don't need to be doing a blowout on a digital avatar.

Yeah, and I mean, even for models themselves. I guess for these thirty who have gotten through the door first, it's one thing, but this may affect the future job landscape for models, as well as for the people who work around modeling.

But this is like, this is Zoolander 3, where Derek realizes we're in jeopardy. But in all seriousness, what does it look like for models who are no longer in demand? It might not seem important to a layperson, but I do think it's a harbinger of things to come. If you can replace something that has been so ubiquitous for, you know, a century, what does that look like?

Yeah. I mean, what I find most interesting about these AI digital twins, or AI actors, or chatbots of famous people from history, is that their effect is kind of to lock in the very few most famous people in the world as the only characters worth interacting with. I mean, if you think about an action movie, why would you not make it with Tom Cruise? Or if you're doing a fashion shoot, why would you not make it with Elle Macpherson? So I think there's a longer-term chilling effect on the pipeline of new talent in creative industries, which will be pretty interesting, and somewhat disturbing. To be fair to H&M, they're being very upfront about their use of AI. They're going to watermark the images of the AI twins in their ads so people will know they're the AI versions. And by doing this, the company will also be complying with the EU's AI Act, which is coming into effect in twenty twenty-six and will require all AI images to be labeled as AI images.

To which I say, who's looking for that? You know what I mean? The role of AI in creative industries, and how to regulate it, is something I'm obsessed with, and it's obviously going to keep coming up, so I'm curious to see where it goes next. I wonder to what extent the layperson cares whether, when they're buying a shirt, the person modeling that shirt is a fake twin or a real person.

Probably not, probably not. The next headline comes from the stuff of nightmares for anyone who's a frequent traveler, and it has to do with an airport being shut down for twenty-four hours.

I heard it was Heathrow.

It was London Heathrow Airport, where I flew out of just this week. It shut down, leading to over a thousand canceled flights, after a fire caused a power outage. Bloomberg reported that the outage could be traced back to a single point of failure: a burned transformer and twenty-five thousand liters of transformer cooling oil that was ablaze for several hours. It's a fascinating story, and Bloomberg had such a great headline, which was "The Device Throttling the World's Electrified Future." But Karah, do you know what a transformer is?

I know what the movie Transformers is. I know it's a car that turns into a man.

So most simply put, a transformer is something that changes voltage. When you create electricity in a power plant, you actually want to increase the voltage as much as you can, because that allows it to travel far and fast with less loss of electricity along the way. So you use a transformer on the way out of the power plant. But then, when it gets to your home or the local electric grid or whatever it may be, you actually want to use a transformer to turn the voltage back down, because otherwise it blows up your stuff.
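The physics behind Oz's point is simple: for a fixed power draw, current is power divided by voltage, and resistive line loss scales with the square of the current, so stepping the voltage up ten times cuts loss a hundredfold. A minimal sketch (the power, voltage, and resistance figures are invented for illustration, not from the episode):

```python
# Why grids step voltage up for transmission: I = P / V, and resistive
# loss on the line is P_loss = I^2 * R. Higher voltage -> lower current
# -> dramatically lower loss for the same delivered power.

def line_loss_watts(power_w: float, voltage_v: float, line_resistance_ohms: float) -> float:
    """Resistive loss on a transmission line at a given transmission voltage."""
    current_a = power_w / voltage_v               # I = P / V
    return current_a ** 2 * line_resistance_ohms  # P_loss = I^2 * R

# Delivering 1 MW over a line with 5 ohms of resistance (made-up numbers):
low_v_loss = line_loss_watts(1_000_000, 10_000, 5)    # at 10 kV
high_v_loss = line_loss_watts(1_000_000, 100_000, 5)  # at 100 kV
print(low_v_loss, high_v_loss)  # 50000.0 500.0 -> 10x the voltage, 1/100th the loss
```

That hundredfold difference is why a transformer steps voltage up at the plant and a second one steps it back down near your home.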

So ostensibly, it's sort of like when I'm staying at not the greatest hotel and I turn a blow dryer on and all of a sudden, the entire room short circuits and all of the electricity goes off because the voltage is too high.

Yeah, exactly. I mean, that is a short circuit, where there's likely been a problem with the transformer being bypassed or not functioning correctly.

Right. And I think that in a storm or a natural disaster, they can sometimes explode and it's very loud.

Yeah, it sounds like fireworks or a bomb going off. And during hurricanes and other natural disasters, these things come under a lot of pressure, and they do go out. So what happened at Heathrow Airport was this fire broke out in a substation, which houses transformers, and it took the firefighters seven hours to get it under control. The airport, in order to come back online, was able to accept power from other substations, but even in the time it took them to do that, many, many flights were canceled. It was chaos.

She made it, but it was not a pretty scene.

Yeah. Back in twenty thirteen, there was actually a sniper attack on a substation in California, which caused a fire that burned seventeen transformers and almost knocked out all of the power to Silicon Valley. This actually led to a lot more security around substations and even stockpiling of transformers, which of course sparked another problem: a transformer shortage. And with the supply chain issues of recent years, the lead time for delivering a single new large transformer is now about three to five years, and bear in mind these can be huge. A transformer at the scale of the ones at Heathrow takes a long time to replace, and they've gotten much more expensive. So now we're living in the era of EVs and the AI boom powered by data centers, and transformers are also required to bring renewable power onto the grid. One of the interesting implications of this story, as a recent guest on the show told me, is that the US's struggle to onboard new power onto the grid may be the reason why the US ultimately falls behind China in AI.

So it's ironic, actually, that the increase in transformer prices could be pushed further by President Trump's on-again, off-again relationship with imposing tariffs on Canada and Mexico, which is where we import a lot of our large transformers from.

And that's why I love this Bloomberg story. It's fun to think about all the sexy stuff, like new chips and data center construction, but there's still this hundred-year-old technology that hasn't changed, that has to be imported, and that is absolutely critical for infrastructure, digital and otherwise.

Speaking of the AI boom and things that use a lot, a lot, a lot of energy: this next story is for those people who've been sitting here thinking about LLMs and not really understanding how they work. I have news for you. Much like Trump's decisions on tariffs, nobody knows how they work. And Anthropic, the company that makes the AI model Claude, has been trying to figure out how the hell these things work.

This is to solve the so-called black box problem.

The black box problem. So Anthropic has been trying to figure out sort of what's under the hood, and they recently released two reports on how LLMs do things like complete sentences, solve math problems, and suppress hallucinations. They used a technique called circuit tracing, which let them track an LLM's decision-making process for ten different tasks by working back from the solution to the query.

Huh okay, that makes sense. But I just think more broadly, as more and more decisions are taken for us by AI or outcomes kind of determined by AI, it's kind of remarkable that this huge elephant is still in the corner of the room, which is we can't understand how they make their decisions.

But it also makes you think: I don't really know. I mean, I know how a car works, but I'm not an engineer, and yet increasingly, since whenever, the nineteen twenties, people have used cars more and more to get around, and we're just kind of like, well, this thing's going to work until it blows up. That's how I feel, whatever. But it's important to note that LLMs like Claude, which was the focus of this study, are trained, not programmed, on a bunch of data. I thought this was really interesting. They create their own rules based on the data they ingest, but up until now we haven't been able to see into the models to know what those rules actually are, let alone how the models generate them.

Yeah, and I think the work Anthropic is doing is all about understanding decision-making. They're not yet at the stage of being able to understand how the models generate their own rules, but I think this story is all about how it's starting to become possible to basically work backward to figure out how a model has made a decision.

Yes, and it's not that simple. The researchers in this particular case were inspired by brain-scan techniques that are used in neuroscience. They found that LLMs, and again, I'm so interested in how we anthropomorphize LLMs, store different constellations of knowledge in different parts of the model. So, for example, the concept of smallness, or the idea of a rabbit. Anthropic was actually able to identify certain parts of the model, like the idea of a rabbit, and turn them off or tune them down so that they couldn't be part of a query result.

And if you asked, like, what eats carrots? It would be like.

A person, right, a dog, right, exactly. And so the same query would have a different answer if the rabbit part of the model was dialed up or down.

Say said by contrast, you might you might say, you know, what's a mammal? It always asks rabbit if you dial it up, and if you if you dial it down, it would never own a rabbit.

Correct.
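The "dial a concept up or down" idea can be sketched as a toy score adjustment. To be clear, this is an invented illustration, not Anthropic's actual circuit-tracing code: the candidate words, base scores, and the single "rabbit" feature are all made up.

```python
# Toy model of concept dialing: pretend the model picks its answer from a few
# candidates, and one named feature ("rabbit") contributes a score we can scale.

CANDIDATES = {"rabbit": 0.9, "dog": 0.6, "horse": 0.4}  # hypothetical base scores
CONCEPT_BONUS = {"rabbit": 1.0}  # how strongly the "rabbit" feature boosts each word

def answer(query_scores: dict, rabbit_dial: float) -> str:
    """Return the top-scoring answer after scaling the 'rabbit' feature."""
    adjusted = {
        word: score + rabbit_dial * CONCEPT_BONUS.get(word, 0.0)
        for word, score in query_scores.items()
    }
    return max(adjusted, key=adjusted.get)

print(answer(CANDIDATES, 1.0))   # feature dialed up: prints "rabbit"
print(answer(CANDIDATES, -1.0))  # feature suppressed: prints "dog"
```

The same query flips its answer purely because one internal feature was scaled, which is the shape of the intervention the hosts are describing.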

So I mean, and it's similar. I mean it's similar in human beings, which is like I don't drink anymore, so I go to a bar and I'm like water. It's a similar kind of thing.

I mean, that actually clarifies it for me. We've done a bunch of coverage of AI and spoken to Geoffrey Hinton and others, but that idea of a neural network is really clarified for me by this study.

Well, how could it not be? It's something that we've created, you know what I mean? It's going to reflect a sort of human-centric way of working. This story actually came from the MIT Technology Review, with the headline "Anthropic can now track the bizarre inner workings of a large language model." What really blew my mind, though, is the way this thing solves math problems, because it's not the way humans do math.

Okay, tell me more.

So, researchers asked it to solve the equation thirty-six plus fifty-nine. And while Claude came up with the right answer, what's the correct answer? Well, Claude took a very circuitous route to get there. Anthropic found that Claude used multiple computation paths in parallel to get its final answer, unlike you, who just used your brain. So one path added a bunch of numbers, I love this, this is like nerd alert, close to thirty-six and fifty-nine to approximate the total, like thirty-five and sixty, while another path focused on determining the last digit of the sum. It actually added the last digits of thirty-six and fifty-nine, six and nine, to know that the answer had to end in five. So Claude used these two paths to come up with the correct answer, which is, as you said.

Ninety-five. You know what I find particularly fascinating about this? I always struggled with math. My dad, on the other hand, is a crazy math nerd. He was the under-thirteen chess champion of Britain, et cetera. And the most important thing he told me about doing math well is to approximate the answer before you work it out. He basically told me to do exactly what this model does: break it down to a much simpler calculation, and then when you do the actual work, you'll know whether or not you're in the range.

Right.

Well, Claude does math like your dad. When asked by a user how it got the answer ninety-five, it claims to do it by the book, for example, simple addition, carrying the one. And one of the explanations Anthropic posited.

Posited for the fact that, when asked how it got to the answer, it shared a response that was not, in fact, how it got to the answer.

Right, because Claude's written answer, and I quote, "may reflect the fact that the model learns to explain math by simulating explanations written by people." So when asked to do math without being taught how to do it, it may develop its own internal strategies to do so. Remember, it's trained, not programmed.
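The two parallel paths described above, a rough magnitude estimate plus an exact last digit, can be sketched as a toy reconstruction. This is our own illustration of the idea, not the model's real circuitry, and the band width chosen for the rough path is an assumption that happens to work for this example.

```python
# Toy version of the two-path trick Anthropic describes for 36 + 59:
# one path approximates the size of the answer, another nails the last
# digit, and combining them pins down 95.

def rough_path(a: int, b: int) -> range:
    """Approximate the sum by rounding each addend to the nearest ten."""
    approx = round(a, -1) + round(b, -1)  # 40 + 60 = 100 (an overestimate here)
    return range(approx - 9, approx + 1)  # a narrow band that contains the true sum

def last_digit_path(a: int, b: int) -> int:
    """Work out only the final digit: 6 + 9 = 15, so the sum ends in 5."""
    return (a % 10 + b % 10) % 10

def combine(a: int, b: int) -> int:
    band = rough_path(a, b)
    digit = last_digit_path(a, b)
    # The number in the band with the right last digit.
    return next(n for n in band if n % 10 == digit)

print(combine(36, 59))  # prints 95
```

Neither path alone gives the answer: the rough path only narrows it to the nineties, and the digit path only says it ends in five, but together they leave exactly one candidate.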

We're going to take a quick break, but stick around, and we'll be back with this week's Tech Support, all about AI competition among tech giants. For our next segment, we're going to be talking about all things AI. Surprise, surprise. But in all seriousness, there's just so much happening in AI all the time. Even though it's our job, I find it hard to keep up with.

I know. I literally work on a podcast about technology, and half the time other people are telling me what's going on in the world of technology, because I've become kind of like a sieve for technology news from my friends.

Absolutely. So this week we're going to talk to somebody who can help us navigate the whirlwind of headlines. You've got the big OpenAI fundraise, you've got Elon integrating X into xAI, and then you've got the struggles at Apple with.

Their AI products or lack thereof.

Here to help us decode the current AI landscape is Gerrit De Vynck, a technology reporter for The Washington Post. Gerrit, welcome to Tech Stuff.

Happy to be here.

Let's start with a big one. The volume of news around AI today feels quite overwhelming. It also comes from so many different companies: OpenAI, Google, xAI, Microsoft, Amazon, Apple. The list goes on and on. How do you keep up with all of this, and also figure out how to sort the signal from the noise?

Yeah, I mean, I think this is kind of a key question. I'm sure a lot of people are asking themselves, and I think first thing is, just take a deep breath, calm down. AI is not coming for your job. AI is not going to take over the world tomorrow, even if really smart people or powerful people or rich companies are saying that. Essentially, what we're seeing here is the tech industry is this huge conglomeration of a bunch of powerful people and billions and billions of dollars that are always looking for the next thing, always looking for the next way to make money. And they look back at tech trends over the years, the Internet, cloud computing, moving to mobile phones, and the tech industry has sort of collectively decided that AI is the next one of those stages, right, And so when the mobile phone came out, a bunch of new people were able to make money.

Right.

We didn't have Uber, we didn't have DoorDash, those kinds of mobile-first companies, before the mobile phone came out, and people made huge amounts of money during that tech transition. And so what's happening now is the tech industry believes, and is convincing themselves and trying to convince all of us, that AI is that next step. So they're pouring money into it, they're pouring marketing dollars into it, but at the same time, they're still trying to build the plane as it's taking off. And so that's why you might see a lot of products that maybe.

Don't work very well.

You don't really know how they fit into your life, and you're not sure whether you should be paying for them yet. So I think the first thing is to say, you know, don't worry, you're not missing the boat if you're not using a million AI apps right now. Yes, this is happening, there's a lot of hype and interest here, but it's not as if AI is just going to change everything immediately.

And one of the things that Karah and I sometimes talk about is, is it speeding up or is it slowing down? Because in November, December, all the headlines were about slowing down, ChatGPT five is not coming, and now it feels like we're in a big speeding-up moment again. Is it even a relevant question, and where do you fall on it?

Yeah?

I mean I think it's a great question, and I think that analysis is correct, right. I mean, everyone is trying to either build something up or tear it down.

That's how these things work.

And so the reason we had this huge boost in interest was because ChatGPT came out. It was definitely better than what most people had used before or been able to experience directly, and they were able to put it in a format that regular people could actually understand and have a conversation with. And so that's sort of what fired the starting gun. And then they said, okay, how do we make that better? The technique they were using, which was essentially to use way more data, to just shove more data into these AI models and hope that they get smarter, had been working up to a certain point. Then they kind of ran out of data and that method slowed down, and so they've now pivoted to different techniques, where they're actually spending a lot more time training the model to.

Do different things.

They're sort of doing more coding so that it can be a bit more efficient and strategic, and now they're seeing a boost in capability.

From that technique.

And if people remember the Chinese AI model called DeepSeek: they really had a huge breakthrough, where they were able to use a little bit less data and less computing power to come up with a model that was really quite capable. And so now everyone is saying, oh, we're speeding up again, because we found new ways of increasing the capabilities.

Which is, I mean, ultimately a more effective long-term way to get these things to work than having to rely on humongous data sets that might not be replenishable.

Yeah, yeah, absolutely.

And I mean, the other aspect here is that AI is very compute intensive, which is essentially just a way of saying they need a lot of computers and a lot of computer chips to do AI in the first place. It takes a lot of energy, and so there are a lot of potential environmental concerns. Even here in the United States, coal power plants that were slated to be shut down have actually been ramped back up in order to serve all those AI data centers. So there is a lot of interest and pressure in making AI more efficient, so that it's cheaper and more environmentally friendly.

One of the biggest names in the industry is OpenAI. And for anyone who is living under a rock and wasn't on social media this week: why is everybody talking so much about Studio Ghibli?

Yeah, so OpenAI released a new image generator, right.

So one of the things that people have been able.

To use AI for over the last couple of years is: you type a short description and it spits out an image. It's been getting better and better over the months, and essentially they released a big update to theirs, and people realized that they could upload photos and get the model to recreate that image in the same design styles as iconic animators, like you mentioned Studio Ghibli, or even like the movie Wallace and Gromit, or the Muppets.

I like the Lego family. The Lego family, yeah, exactly.

It was this moment where the technology allowed people to apply their creativity.

In a new way, and that's why it went viral.

And OpenAI, I think they did hope that this would happen. They themselves were using some of these Studio Ghibli examples early on. But they also can't really predict or control how this is going to go, and some of their releases have just kind of fallen flat. Everyone's been like, boring, we don't care. But.

This one a lot of people.

They found it really fun, they found it really interesting, and it definitely went viral, and it actually brought a lot more people who.

Hadn't been using chat GPT before.

But now, of course this raises all sorts of questions about art and copyright and big problems like that.

What's interesting, though, is while they're trying to grow very quickly, we have someone like Sam Altman come out and say, please slow down with this image generation.

He said that the GPUs are being melted by the demand, right.

Yeah, I mean it's possible some GPUs actually did melt a little bit. I mean, you know, when you're using your laptop, you've got two hundred Chrome tabs open and you're trying to listen to YouTube, it gets hot, right, and so that's exactly what happens.

My phone does get quite hot.

Yeah, you know, that same thing happens with AI, right? I mean, this is still a very physical thing. Every single time anyone in the world says, hey, make an image of this, hey, write me a resume, hey, answer my test questions for me, that needs to go to a data center. It needs to be computed on GPUs, which is the technical term for the computer chips, and that heats them up. So I don't know if they were actually melting, but essentially what Sam Altman was referring to is that so many people wanted to use this that it was becoming very expensive for OpenAI to run it. And this gets to a central problem for them, because the more people use it, the more it costs them in computer chip costs. So they want people to use it, but at the same time they need to figure out, you know, how can we convince people, force people, to pay for these things so that we can actually grow as a business. And this is a huge question mark around OpenAI and other AI companies.

As fun as it is making a Studio Ghibli portrait of yourself, it's hard to see it being a huge, huge business driver, with people buying tokens to do that.

That's sort of the question for OpenAI, right? I mean, they've been able to have these viral moments. ChatGPT itself was a viral moment, and I do think people are using these technologies. In some ways, OpenAI's ChatGPT is one of the fastest-growing, if not the fastest-growing, consumer internet products ever. But we're not quite at the point yet where I think regular people are saying, oh, this is so important to my life, I need it so badly to write emails or to have fun generating these images, that I'm willing to pay two hundred dollars a month for it. I do think the company is still in this world where they're trying to figure out: how do we convince people that they need this so badly that they're willing to spend hundreds of dollars a year on it?

So you may be somewhat skeptical, Gerrit, but the market is not, or at least SoftBank is not. Can you talk a bit about that?

Yeah, I mean, OpenAI raised forty billion dollars at a three-hundred-billion-dollar valuation, and I mean, it's.

The largest-ever private financing of a US company.

A company that's not a private company a little while ago.

Yeah. And the only private company that is actually worth more than three hundred billion dollars is SpaceX, Elon Musk's space company. But they build big physical rockets that cost hundreds of millions of dollars, so that company's worth three hundred fifty billion. OpenAI is now, according to its investors, worth three hundred billion dollars. And I think this mostly goes to that question about how expensive it is to do AI. They need that money in order to build data centers and buy computer chips, to keep helping people make these Studio Ghibli images and all the other things that OpenAI is working on.

And so it's just the Uber model, where you basically subsidize users, and then once you get them hooked, you start to charge them, but at huge scale.

Yeah.

I mean, that's a playbook tech companies have used for years now, right? Get people hooked on something that is fun, cheap, easy, free, work it into their lives so that they feel like they need it every single day, and then start to increase the costs. I mean, Google has done this. I'm paying for my Gmail storage. I don't know if you guys are.

I mean, yes, it's a question of when something goes from the gimmick phase to the business phase, at least in terms of ChatGPT.

Yeah. And I think the other thing to point out here is that OpenAI does have a business where they sell access to their AI to other businesses, right? So there's the consumer question, which is exactly what we're talking about, and then they are actually selling to businesses who want to put AI technology into their own apps and into their own technology, and that's also a big part of what OpenAI is trying to do here. But that's also, you know, a big question mark, because we have these open-source AI models as well. DeepSeek, the Chinese one we mentioned earlier, is an example of that. Facebook also provides these tools, where they essentially put the AI out there for free for other businesses to take and use in their own ways. And so OpenAI is a very strong business. They have incredible technology, they have some of the smartest people in the world on AI, and they have huge funding and backing.

From their investors.

But at the same time, that doesn't guarantee that they're going to continue to grow or even be around in five years.

When we come back, we'll hear about how other big tech companies like Amazon and Apple are faring in the scramble to develop AI products. Yeah, we want to ask you a little bit about what Amazon and Apple are doing in the realm of AI. But just before we get there, there's another huge deal this week, also with a potentially dubious price tag.

Yes, so I think you're referring to Elon Musk's merger of two of his companies, which is a little confusing.

They're both kind of called X.

I couldn't even get through the read at the beginning of the show. That's a tongue twister. Yeah, exactly.

I mean I think that sort of speaks to Elon Musk.

He has many companies at this point, and he's been known to sort of move assets around a little bit. And so when it comes to X, which is formerly Twitter, that's the social media platform that he bought a couple of years ago for forty four billion dollars, he has now sold that company to his AI company, which is called xAI. And so, formally, the social media company is now owned by the AI company. And I say formally, because these companies had kind of been working together in a lot of ways. User data from the social media company was already being used to train the AI at the AI company. The AI company's main product, which is called Grok, which is a ChatGPT competitor, was available through the social media company. And so in a lot of ways, these companies were already the same thing. And what he did here is he said, look, the AI company is able to raise a lot of money because everyone loves AI, everyone wants to boost AI. So I'm going to use that money that the AI company is able to raise to bail out, and kind of give me more time with, the social media company, which is very influential and maybe Elon Musk's most important company right now because of the political influence it gives him. But from a business perspective, the social media company has struggled and sort of been, you know, going through the wilderness a little bit since Elon Musk bought it, because most of its users left, a whole bunch of new users came in, advertisers left.

Maybe the advertisers are going.

To come back, and so it's a way for Elon Musk to sort of use the AI hype to kind of help shore up the finances of his social media company.

You know, another major tech company that isn't mentioned as much in the AI race, and that is Amazon. They recently unveiled Amazon Nova Act, their AI agent. So what does the layperson need to know about this?

So okay, a couple of things. I think AI agent is a term.

That people are probably already hearing, and I guarantee you they're going to be hearing more about it in the coming months and years. An AI agent, all that is, is just, you know, you can say, okay, well, if I can have a conversation with ChatGPT, I can ask it things. Can I ask it to then go and read the Internet for me? Can I ask it to go do things on the Internet for me?

Right?

If ChatGPT is able to read an e-commerce website, can't I just tell ChatGPT, hey, go buy me the cheapest sofa you can find for my new apartment that's green and seats three people?

Right.

And so that's what an AI agent is: essentially using AI to go and help people do things on the Internet for them. And so this is something a lot of AI companies are talking about. Obviously, there's a lot of problems. The technology is not quite there. The last thing you want is to say to your AI agent, go buy me a sofa for under one thousand dollars, and then suddenly eight sofas that cost ten thousand dollars each show up at your door a week later, right? I mean, we need to be really careful.

Is that the phase we're in right now?

I mean a colleague of mine ran an experiment where he asked some of these AI agents to go and find him the cheapest eggs.

This was sort of at the height of the egg panic, and eggs.

Were selling for twenty dollars a dozen, and the agent actually went and bought, I think it was like, thirty-dollar eggs and had them delivered to his home before he even had the chance to say, like, yes, make that purchase.

And so it's definitely in the experimental phase.

But Amazon they sort of see that this is maybe the next frontier of the technology.

Is that what the key interest is, to basically do your shopping for you?

I mean, I think that would make sense.

I mean, they are the shopping company, and we all know Amazon Alexa.

Was an early version of this.

I mean, you could ask Amazon Alexa to buy things for you on Amazon. You could ask it, obviously, to remind you about the weather, and people never really used it for more than that. And so people have been saying, why was Amazon not ahead of this trend? Why is Amazon Alexa not smarter? Why can ChatGPT do things that Amazon Alexa can't? So Amazon has been under a huge amount of pressure from its investors, from its own employees, from other people in the tech industry, to show that they are riding this AI wave just like these other companies. And I think this Nova Act product, which is very new, it's still in the experimental phase, is a sign that they're trying to do that.

All of this, of course, brings us to Siri. I mean, the irony being that Amazon Alexa and Apple's Siri were kind of ahead of the curve in terms of voice-driven assistants, essentially agentic-type features, and now Amazon feels like it's behind the curve a little bit, and certainly Apple does too when it comes to AI. What's going on there?

Yeah, I mean, I think there's this dynamic that you're putting your finger on, where these companies are really, really invested in their own way of doing things, and then there's a new way that kind of comes out of left field and they have to adjust, right? This is the classic innovator's dilemma. And so I would say for both companies, don't count them out. Amazon, Apple, they are both massive companies. They are bigger than pretty much anything we've seen in the history of business. They are in people's lives every minute, every day. And so I think, first of all, we should be careful not to just write them off and say, oh, they're behind, therefore they will fail. But I do think that this is really a five-alarm-fire moment for them.

So, I mean, they've been having these internal sort of all-hands meetings. What's the kind of internal drumbeat at Apple on this?

They are saying, wow, we need to get on board with this. And Google had the same thing when ChatGPT came out.

Right, Google is the AI company.

A lot of this technology we're talking about was actually developed by Google and then just shared with the world because they didn't quite know what to do with it. And so for Apple, people have said, okay, well, Siri's been around, are you going to make Siri smarter? This seems like an obvious application. They stayed quiet for about a year after ChatGPT came out, and then they said.

Yes, we're doing it. We're going all in.

We're an AI company like everyone else. And now they've delayed some of those releases, right? They've said, oh, actually, we might need a bit more time to get it to the level where we want it. And we can see some of Apple's AI experiments have already failed pretty spectacularly. They had a bot that summarized some of the messages that were coming into your phone. Oh, I remember that, yeah, last year, Apple Intelligence. And what it would do is it would see news alerts and say, oh, let me be helpful, let me summarize the three or four news alerts you have. And the summaries were incorrect, right? And so that is upsetting as journalists. It's upsetting as a user, because you want accurate information, and Apple is muddying the waters.

And so they had to pull that back.

And people are starting to seriously ask the question, is this the moment where a company like Apple kind of falls from grace and loses its steam, loses its power.

But here's the thing. Fifty percent of Apple's revenues come from selling the iPhone. And I can't imagine, why do they need to be a market leader in AI? Why can't they license other people's AI products?

Yeah, I mean Apple doesn't really like doing that. They like to do everything on their own. They now build their own computer chips, they build their own software.

And the whole point of Apple was.

That it wasn't a PC, right? They had their own operating system. And so that is a huge part of Apple's value, where they say, come into our walled garden. You're going to pay a lot of money, but you're going to be more secure, it's going to be cool, it's going to be more intuitive. We're doing things our way. And so that is their whole pitch to the consumer. And if they suddenly have to start saying, come into our walled garden and use Google's AI, come into our walled garden and use OpenAI's.

Which I do now, by the way. Yeah, I mean, I use my Apple products to use ChatGPT. But it's also like, can you get everything from one guy? I don't know.

Yeah, And I mean this is another big question because that's been the struggle over the last ten years between these giant tech companies to sort of box each other out and try to corral people within their own ecosystems. And maybe AI is the technology that kind of blows that up in a way.

Garrett, what a great place to end.

Thank you so much for joining us. Thank you so much, of course, it is my pleasure.

That's it for this week for TechStuff. I'm Oz Woloshyn.

And I'm Kara Price. This episode was produced by Eliza Dennis, Victoria Dominguez, and Adriana Tapia. It was executive produced by me, Oz Woloshyn, and Kate Osbourne for Kaleidoscope, and Katrina Norvel for iHeart Podcasts. Jess Crinchich is the engineer, and Jack Insley mixed the episode. Kyle Murdoch wrote our theme song.

Join us next Wednesday for TechStuff: The Story, when we'll share an in-depth conversation with Reid Hoffman, legendary founder of LinkedIn, venture capitalist at Greylock, and author of the new book Superagency: What Could Possibly Go Right with Our AI Future?

Please rate, review, and reach out to us at techstuffpodcast@gmail.com. We want to hear from you.
