
Tech News: You Got AI On My Google Search

Published May 11, 2023, 11:05 PM

Google has announced a ton of stuff at the 2023 I/O event, including how AI will show up in future Google searches if you opt into it. We also learn how Microsoft's deal to buy Activision is going, how YouTube is discouraging ad-blockers and how one influencer is using AI to make clones of herself for people to date. For a fee, of course.

Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? It's time for the tech news for May eleventh, twenty twenty three, a Thursday, if I'm doing my math correctly. Google is currently holding its I/O event. This is where Google invites app developers to attend various keynotes and workshops to learn about new Google features that the developers can integrate into their own work. So yesterday Google held the opening keynote, which is traditionally when the company makes the bulk of its announcements and sometimes includes impressive displays of technology to get folks excited about what's in development. So here's a quick rundown of some of the things Google announced yesterday. They unveiled several new devices. One of them is called the Pixel Fold, which, as you might imagine, is a foldable mobile device, essentially a smartphone that can unfold into kind of a tablet. It has an OLED screen. When it's folded, that screen measures five point eight inches on the diagonal, but if you unfold it, it becomes a seven point six inch display. That doesn't sound like it gets much bigger, but it's actually almost like a square tablet rather than the rectangular kind you're probably used to. It is more square in shape. I have to admit I've never been the target market for a device like this, a hybrid device that can be a smartphone or a tablet. But for those who do want that sort of thing, they will need to save up about one thousand, seven hundred and ninety nine bucks to get one when they become available. Google also showed off a new smartphone model, the Pixel seven A. The A series of phones, like the five A and the seven A, those tend to be a budget model, a slight update but a budget model of the previous year's newest phone.
So the seven A will cost five hundred bucks for the basic version. There's also a version that will support millimeter wave transmissions, but only certain carriers support that. But if your carrier does, then you can get that one for five hundred and fifty bucks when they go on sale. And then Google also introduced a tablet called the, wait for it, Pixel Tablet. It's an eleven inch tablet. That one will cost five hundred bucks when it hits shelves. These have Google's latest processor in them, and lots of other bells and whistles, but you know, I'm not going to dive into all of that because it would take up the whole episode. Beyond the hardware, Google also talked about some changes to Google products and services, and a huge one is Google Search. You know, it used to be that search was the only thing we really associated with Google. These days, I don't know what most people think of when they think of Google. I still think of search, but then I was around before Google Search really became a thing, back when I was using WebCrawler. So yeah, I don't know if everyone still thinks of Google Search. But that's where the big change is coming. And you know, Google Search has remained relatively unchanged for ages, although you could argue the incorporation of ads has switched things up a few times. But this new feature, which will be an opt-in experience, meaning it's not just the standard experience, you do have to elect to be part of it, is called Search Generative Experience. And as that name suggests, if you opt into it, your search query could return an AI-generated answer at the top of your search results, and that might mean you don't even need to go any further to find out more about whatever it is you're searching for. Like, if you're searching for a fairly simple or, you know, not too in-depth topic, then you'll get a little summary and you might not have to click through to a different web page.
Or maybe you use the answer that's provided by AI to ask follow-up questions and get more in-depth answers and still not click through. This is the kind of stuff that makes web page administrators really nervous, because if no one's clicking through, then there's no traffic, and if there's no traffic, then there's no revenue. Unless you're, you know, a subscription-based website and people just don't bother to cancel their subscriptions. But otherwise, yeah, it's a scary kind of thing. In fact, this is something that a lot of websites have been concerned about for a while, but it could potentially be helpful to users, as long as the answers are reliable. You know, as we have seen, that's not always the case. The company also has opened up Google Bard. Before, it was in a beta test and you had to be invited to be part of it, but now you can just use it if you want to and maybe see if it's improved since Google developers expressed concerns about its dependability and accuracy. I would recommend that if you do use Google Bard, don't, you know, rely on it to give you safe instructions on how to go scuba diving, because developers pointed out that an earlier build of Google Bard gave instructions that could be harmful or even fatal if you followed them. So yeah, I would not depend entirely upon AI responses right now. Google also announced that it is updating its operating system for wearable devices. This is fittingly called Wear, that's W-E-A-R, Wear OS. The fourth generation is what is currently in development. Just so you know, Wear OS three hasn't yet fully deployed, so it's interesting that they were talking about the next generation when the current one hasn't really fully rolled out yet. There's lots more too. Google also showed off an AI tool that Pixel users, the smartphone and maybe the tablet as well, will get to use to help them edit photos.
So let's say you wanted to change the brightness of just one part of the photo without affecting the rest. You could do that. Or, and this was really cool, they showed off an example of this. Let's say that you took a picture of your friend. Say your friend is standing near a cruise ship, and it's a good picture, but you just wish your friend had stepped like six inches to the right. Well, with this tool, you could shift your friend six inches to the right, and AI would effectively paint in the background so that it wouldn't look as if you had moved the person in the first place. Like, it should seem as if you could move your friend to wherever in the frame you want, and the AI will just take care of the background so that it doesn't look like you messed with it at all, which is really cool. Also kind of scary, because when you get into that photo manipulation stuff where AI is kind of covering for you, it can raise a lot of potential misuse scenarios further down the line. But still kind of neat if you're just using it casually for the purpose for which it was intended. There was a neat privacy announcement. Android devices will be able to detect Bluetooth devices that were intended to track stuff, so like Apple's AirTags, right? The purpose of AirTags is such that you can slip one into, say, your luggage, and then you can use the AirTag to find that piece of luggage if you get to your destination and your luggage doesn't, that kind of thing, right? Well, some people obviously have been using AirTags to tag other stuff, not just luggage or whatever, but sometimes, like, I don't know, a person that they wanted to follow. Like, a stalker would use it to slip the AirTag into, say, a car and end up stalking someone that way. Well, now Android will be able to detect these sorts of devices.
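The gist of that kind of detection is a simple heuristic: alert only when the same unknown device keeps showing up while your phone actually travels. Here's a toy sketch of the idea. Google hasn't published Android's actual algorithm, so every name and threshold below is an illustrative assumption, not the real implementation:

```python
import math

# Toy sketch of an unknown-tracker alert heuristic. Purely illustrative;
# the thresholds are made-up assumptions, not Android's real logic.
ALERT_AFTER_SIGHTINGS = 3     # device must be seen repeatedly...
MIN_TRAVEL_METERS = 500.0     # ...while the phone meaningfully moves

def _meters_between(a, b):
    # Equirectangular approximation of distance between two (lat, lon)
    # points; plenty accurate at these scales.
    lat1, lon1 = a
    lat2, lon2 = b
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000  # Earth's radius in meters

def should_alert(sightings):
    """sightings: list of (lat, lon) phone locations at which the same
    unknown Bluetooth device was detected. Alert only if the device keeps
    reappearing while the phone actually travels, so a gym full of
    stationary Bluetooth earbuds doesn't spam you with notifications."""
    if len(sightings) < ALERT_AFTER_SIGHTINGS:
        return False
    return _meters_between(sightings[0], sightings[-1]) >= MIN_TRAVEL_METERS
```

With something like this, five sightings at one fixed spot never alert, but a device that's still with you after a drive across town does.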
So if it sees that there's some Bluetooth device that is traveling with you, like it's detected, and it's still detected even as you move through your environment, like if you're driving down the street, it will give you an alert so that you can check to make sure that someone hasn't bugged your car or whatever. I assume that it's only going to do this when it does detect that stuff is moving around with you, so that you don't get crazy numbers of notifications when, say, you go to the gym and everyone's using Bluetooth earbuds or something. Google also made some other announcements, but I think that that's enough for right now, and we could probably dive into more in future episodes of tech stuff. Over in Russia, the Russian government has fined Google's parent company, Alphabet, the staggering sum of three million rubles, which is, let me just convert this into dollars. Oh, it's about forty thousand bucks. Okay, that's not a lot. But then you might say, okay, well, why did they fine Alphabet in the first place? What was the infraction? Well, the Russian government says Alphabet failed to remove videos on YouTube that the government has determined contain misinformation, primarily about Russia's war against Ukraine, which has not been going well for Putin, but the Russian government would prefer that not be publicized within Russia itself, and also any videos that contained what the government has referred to as LGBT propaganda. The government classifies any content that quote unquote promotes homosexuality as propaganda, and whoever is responsible for publishing it can face fines. Now, I would make some very judgmental comments about this, about how Russia has targeted these folks.
But I feel like before I can start casting stones at Russia, I gotta aim a whole lot closer to home, because currently we're seeing an increased movement against the LGBTQ plus community in various states here in the US, including some that border my state, and I imagine my state, which has perhaps some sympathetic politicians in charge, could follow suit. So I can't get too up in arms about Russia doing this. Although, obviously, Russia has gone to extremes to suppress anyone in the LGBTQ community there, we're seeing similar things happen here in the States, so I can't really just single out Russia. Anyway, I don't know if Google will bother paying this fine. I mean, maybe they will. It's only forty thousand dollars. I'm sure executives could just root around in their couches and come up with that. But Google, like a lot of tech companies, has already pulled out of Russia. They closed down their offices based in the country, and while Google services are still available in Russia, the company doesn't operate there on, like, a corporate scale. Speaking of YouTube, there's apparently a new anti ad-blocker feature on the service, at least for some users. According to Reddit, a few folks have encountered messages saying that ad blockers are not allowed on YouTube, and if you want to watch YouTube without ads, you need to become a YouTube Premium subscriber, or your other option is just to disable your ad blocker and then you get ad-supported YouTube access. Google has seen a lot more people subscribe to YouTube Premium over the past couple of years, so it is possible that this is really meant to push more folks toward that. In fact, the redditors said that the pop-up message included a button to make it super simple to subscribe to YouTube Premium, so you could just click there in the pop-up window and do it. So far, this message has been part of a limited rollout experiment. It's not going to everyone. It's just going to a subset of YouTube users.
It is possible that YouTube will not expand this to the general user base. But as I've said before, if a company sees that a tactic is effective and is generating revenue, then they're more likely than not going to adopt it wholesale. It's hard to back off of that. And as full disclosure, I've actually been a YouTube Premium subscriber for ages. Although, to be fair, I was actually a Google Music subscriber, but then Google sunset that service, and when they did, they ported my subscription over to YouTube Premium, because there's YouTube Music as well, and I just kept it because I like watching YouTube without ads. But I also understand that, you know, you can't get something for nothing, or else no one can make anything anymore. Speaking of that, this is perfect timing. We're going to take a quick break so that we can listen to a few messages from our sponsors, and then I'll be right back. Okay, here we go, back with the news. Sam Altman, the CEO of OpenAI, is about to appear before the US Senate Judiciary Subcommittee on Privacy, Technology and the Law. This happens next Tuesday. Senators want to have a conversation with Altman about his perspective on how governments should approach AI, you know, what sorts of laws might need to be updated, or even new laws potentially proposed, in order to allow for the safe and responsible development and deployment of AI while minimizing risks and threats and other types of problems. Now, in the past, Altman has been pretty candid about his opinions, and they aren't always what you would expect from a guy who runs a company that is known for perhaps the most high profile implementation of AI that's out there right now, at least among the general public. He has said more than once that ChatGPT is not as miraculous as some people made it out to be. And remember, he's the guy running the company that made ChatGPT. He has already met with political leaders about AI safeguards recently. So I think this is a really good step.
I don't think it's just going to become political theater where Altman's being grilled for all this stuff. I feel like there's the potential of actual collaboration here. Now, I have never met Altman, I don't know him personally, but I get the sense that he is trying to walk the line between running an AI company and also trying to manage expectations in a field where the hype cycle can take off without warning. We've seen that recently with reports about how investors are pouring billions of dollars into different companies that are centered around AI, even if those companies don't have, you know, a business plan or something. So I hope that this meeting next week is informative and productive, and that perhaps lawmakers can start looking into things that need tweaking, like, you know, right to publicity and right to personality laws, as an example. Meanwhile, over in the European Union, lawmakers there are also nearing the final step toward regulating AI. So the EU has a big set of rules about AI that's nearing the point of adoption. And these rules cover a lot of ground, including stuff like the rules around facial recognition, for example, that it should not be used in public spaces, that law enforcement should not be relying upon it, because, as we've seen here in the United States, facial recognition technology is far from perfect. There's a tendency for this tech to have a bias that makes it difficult for facial recognition systems to differentiate people of color, for example, and this leads to disproportionate harassment when law enforcement relies on such technology to identify and track suspects. So the EU doesn't want to perpetuate that, and these rules would forbid law enforcement from relying on facial recognition technology for those purposes.
The EU's approach has been to categorize AI not in terms of the AI's capability, but rather in terms of its perceived risk, which is interesting. Like, it's interesting to think about classifying AI by the perceived risk it poses to people. So if an implementation of AI is perceived to be dangerous, well, then it'll likely get the unwanted label of unacceptable, like, the level of risk is unacceptable, so we cannot allow it. It would be against the law to implement AI for that specific purpose. If the AI is meant to, I don't know, just automatically tally up calculations in a spreadsheet, it would probably get a much lower risk assessment assigned to it, although you could argue that even that application of technology can potentially have really nasty consequences. It all depends upon what that spreadsheet is all about, right? Now, I have yet to actually read the law. The law itself has been in the making for a couple of years. This has been a long process, with lots of different parties having input as the EU lawmakers have tried to structure this proposed bill. So I am not able to give my full opinion about this, because I haven't read it. I do think that it's encouraging to see lawmakers get to this stage. However, it took a long time to get there, but at least we got there. So often when I'm talking about AI and the law, I'm just talking about this nebulous era that we're in right now where very little appears to be happening. Well, things are happening in the EU. This is, by the way, very super complicated stuff. You want to make sure that you protect citizens and all that, but you also don't want to enter into a situation where you're outright preventing research and development on tools that could potentially create enormous public good. It is not an easy thing to do. I think we often get frustrated, myself included, at our leaders for taking so long to adopt rules and regulations meant to protect people.
Often we feel like whatever regulations are out there aren't there to protect people, but are there to protect other things, like corporations, which here in the US are treated as people. But you know, it's refreshing to see some progress on that front, at least in the EU. We also have a few stories about other places that are adopting rules that relate to AI. For example, lawmakers here in the United States, specifically in the state of Minnesota, are closing in on passing a bill that would make it a crime to create a deepfake image or video that shows someone appearing to have sex without that person's consent. So, in other words, it's a law that would make deepfake pornography illegal if the person being depicted in the video or image didn't agree to it in the first place. Likewise, it would be illegal to use deepfakes in an effort to spread election disinformation. Now, the state's House of Representatives had already voted on a version of that bill, and yesterday the state's Senate weighed in. Because the lawmakers are in two houses, the House of Representatives and the Senate, both at the state level, and then we have a federal level here in the US as well, but this is just for the state of Minnesota. So the Senate unanimously voted in favor of its version of the bill. However, the Senate's version is technically more strict than the House version. The House version had an exception built in with regard to things like free speech and satire and parody, but to get further into that would get far more complicated. Now, the Senate's version of the bill will go back to the House of Representatives. They will then discuss the changes made and vote on whether or not to adopt it. If they do vote to adopt it, which seems like a pretty strong bet, there's a lot of support for this bill, it would then move to the governor to be signed into law. Minnesota is not the first state to do this.
They would follow states like Texas and California, which have already put laws on the books that make certain uses of deepfake technology outright illegal. You might remember a story earlier this month about how IBM's CEO speculated that as many as seven thousand, eight hundred open positions, which is just a small group of a much larger section of open positions that are all under a hiring freeze right now, those seven thousand, eight hundred jobs could ultimately go to AI rather than being filled by human beings. Well, now IBM's Chief Commercial Officer, Rob Thomas, has said that managers who refuse to use AI, or who fail to learn how to use AI in the context of their jobs, will find themselves replaced by managers who do use AI. Now that, at first when you hear it, sounds a little sinister, but honestly, I think this is more in line with how most AI experts have framed the best use of this technology for the last several years: that AI is not meant to outright replace people, but rather to augment employees' abilities so that they can do their jobs more effectively, and also to let them focus on the parts of the job that are more rewarding and are less suitable for automation. Right, so someone who is willing to use AI in order to do that will be seen as a more valuable leader than someone who isn't. While the CEO's revelation about potentially automating thousands of jobs outright is, I think, a controversial one, I think mister Thomas's point is much less controversial. It makes sense as long as you're talking about using AI to augment your ability to do your job. If he in fact is talking about managers using AI to eliminate entire divisions of actual employees, then no, I can't sign on for that. I think that's a bad idea. Microsoft's quest to acquire Activision Blizzard continues to hit more roadblocks.
The competition regulator of the UK has now passed an interim order that forbids the two companies from purchasing an interest in one another, so, like, Microsoft would not be allowed to purchase any ownership of Activision, or vice versa. While the UK is standing firm against this acquisition, the European Union is expected to actually approve the deal sometime this month. So if you're keeping score, Japan has already approved this deal, the EU is on the way to approving the deal, the UK has voted against the deal, and the US is teetering toward opposing the deal but has not yet firmly come down. So it's really still kind of a coin flip situation. And I honestly don't know how things go if everyone other than the UK approves this deal. Like, I don't know how that ends up working, because global company structures and the rules regarding them confuse the heck out of me. So maybe some people out there know exactly how this would unfold. I am not one of them, so I'm not going to try and guess. Disney Plus saw a drop in subscribers for the second quarter in a row, with four million people deciding they would let it go. Let it go. However, at the same time, the company reduced the losses that it has experienced running the streaming service. So while more people have left, the actual financial losses have decreased. They're still operating at a loss, they're not profitable, but they're not losing as much money per quarter as they had been. Producing content for streaming is expensive, as you might imagine. Right now in the US, it's impossible, at least for US-based operations, because the Writers Guild of America is on strike. But on an earnings call, CEO (for the second time now) Bob Iger said that the company will unite the Disney Plus and Hulu streaming services into a single service. And for someone like me who subscribes to one of those, that being Disney Plus, but not the other, that being Hulu, that sounds pretty great.
One thing that's less great is that the plan is also to increase the monthly subscription fee for those who want an ad-free experience. So yes, you will have access to more content, but you'll also be paying more to get that content if you don't want ads. I don't know about the ad-supported tier; that may remain the same. The price hike could get here before the unified streaming platform does. That also raises the possibility that the price could go up a second time once Hulu and Disney Plus have tied the knot. I don't know. Iger said this unified platform should launch before the end of the year, and I may have to find someone who is an expert in streaming strategies to come on the show so we can really talk about the challenges that streaming companies face when they're running these services. We've seen it time and again, from Netflix to Warner Bros. Discovery to Paramount Plus and beyond. There have been a lot of examples of services trying new approaches to generate revenue and to reduce costs, and in some cases to merge with other services. You know, Warner Bros. Discovery and Paramount Plus have both gone through that, and now Disney is doing it too. So as someone who just can't bring himself to subscribe to everything that's out there, I'm actually in favor of a little consolidation in the field. I don't want it to get out of control, but to reduce the number of different streaming platforms so that I'm not having to, you know, manage eight different subscriptions or something, that sounds good to me. I just don't want to see streaming subscriptions reach the same heights as cable, right? Because then we're just back to the model that was disrupted. Now, it may be that it's necessary to do that in order to fund the platforms so that they can actually create the content that we want to see in the first place. But yeah, that's the complicated nature of streaming.
That's why I need to get an expert on the show so we can kind of talk it out and really understand all the different factors that go into the streaming platform business. All right, I've got three more news stories I want to talk about that are pretty interesting. But before we do that, let's take another quick break for our sponsors. Okay, we're back, and this is potentially really exciting news. So Helion Energy, which is aiming to bring fusion power not just into reality, but to make it a viable commercial service, has announced that it has landed its first customer, which is Microsoft. Microsoft has agreed to buy a certain amount of energy from Helion Energy once its fusion reactor goes into commercial service. So fusion power is what the sun runs on. Stars run on fusion. It is the opposite of fission, which is what traditional nuclear power plants rely on. So fission, nuclear fission, is where you take a heavy atom and you expend some energy that causes this atom to break apart. As the atom breaks apart, it releases way more energy, and you harness that to generate electricity. Typically you do it by superheating water into steam, which then turns turbines and generates electricity that way. That's your typical way of creating electricity through a nuclear power plant. It's not that different from using other types of energy to heat water up and turn it into steam so that you can turn a turbine. It's just that it's nuclear. Now, fission power plants have some real perception problems. Some would argue that that's all they have, that those are the only problems, that it's perception, and that everything else is totally, you know, cromulent. I'm not quite at that level, but I do think that the opposition to nuclear power is perhaps predicated more on outdated concerns. Not all of them are moot, but some of them are. Anyway, a lot of people associate nuclear fission power plants with nuclear waste, which is understandable.
It's scary stuff because it can remain dangerous for tens of thousands of years, and thus nuclear power plants are also often seen as a potential threat to safety, also understandable, because we've had some high-profile examples of that happening. Even though you could argue, and you can, that our reliance on things like fossil fuels has led to far more deaths and medical complications over the course of its history than all the nuclear mishaps and disasters added up together. Right. So nuclear fusion doesn't have these perception problems, because what it does is you take very light atoms, stuff like hydrogen, and then you use some energy to fuse them together. So classically, we would take hydrogen and then, applying a lot of energy, really forcing these atoms together so that you can overcome the nuclear forces that would otherwise resist fusion, you overcome that, they fuse together, and then you get helium, and that process also releases an enormous amount of energy. So there have been several experimental fusion reactors around the world that have achieved fusion. They have done it. But there are still hurdles in the way of making that a viable means of producing energy. For one thing, most of these experiments actually required more energy to fuse the atoms together than they got from the reaction. So if you're expending more energy than you get out of the process, that's a net loss. That doesn't work, right? Like, you might as well be using that energy directly to produce electricity, as opposed to using this middleman where you've got a net energy loss. Even in the cases where you could argue you got more energy out, and there are a couple, they very narrowly focus in on specific parts of the process and ignore everything else. But even if you get past that, you still have the challenge of making this a persistent reaction so that you can continue to produce energy and not just have, like, a spike of energy production.
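That break-even hurdle, by the way, is usually expressed as a ratio the field calls Q: fusion energy out divided by energy spent driving the reaction. A quick sketch, with numbers invented purely for illustration (these are not measurements from Helion or any real reactor):

```python
# The fusion break-even arithmetic in miniature. All figures here are
# made-up examples, not data from Helion or any real experiment.

def gain_factor(energy_out_mj, energy_in_mj):
    """Q = fusion energy released / energy spent driving the reaction.
    Q > 1 means net gain; Q < 1 is the 'middleman' net loss described
    above, where you'd be better off using the input energy directly."""
    return energy_out_mj / energy_in_mj

# A typical experimental shot: spend more than you get back.
q_shot = gain_factor(energy_out_mj=1.5, energy_in_mj=2.0)    # 0.75, net loss

# What a commercial plant would need: comfortably past break-even.
q_plant = gain_factor(energy_out_mj=10.0, energy_in_mj=2.0)  # 5.0, net gain
```

And the caveat in the segment above applies here too: some claimed over-unity results only count the energy reaching the fuel, not everything spent running the whole facility, which makes the effective Q much worse.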
Then you have to essentially refuel and do it all over again. So there's that hurdle as well. This is very, very challenging to do. It's hard to do. However, Helion says it's aiming for commercial power generation by twenty twenty eight. That's crazy soon, way earlier than what most folks have predicted. Now, I would love to see this happen. I have my reservations, I have my doubts, but if Helion can actually pull this off, well, it could serve as an incredible model to totally overhaul our energy infrastructure. Fusion could provide a source of clean energy without stuff like carbon emissions or nuclear waste. The actual fuel we'd be using would be, well, technically it's the most plentiful stuff on the planet, although you do have to spend a lot of energy to get it. But if we could do that without it taking a huge chunk out of the energy we're producing through the process, it's worth it. It would mean that we could rapidly transition off of fossil fuels, which would be like literally ancient technology in comparison. But we have to hope it all works out. We can't assume it's going to work out. Hoping, I think, is okay, but assuming is not. And I should add, there are experts in the nuclear fusion field who remain skeptical of Helion in general, who have said that it's a company that has shown more on paper than in reality. So whether this company can actually produce energy starting in twenty twenty eight on a commercial level remains to be seen. I hope it works out, because if it does, that would be transformational, absolutely transformational. But, you know, I don't know how sure a bet that is. Up in Canada, specifically in a suburb of Montreal called Brossard, that community is testing out a new traffic light, and it's a traffic light that's kind of like Santa Claus, except instead of knowing if you're sleeping or if you're awake, it knows if you're obeying the speed limit, and if you're not, no green light for you.
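That rule is about as simple as control logic gets. A minimal sketch, where the speed limit and units are my assumptions, not Brossard's actual configuration:

```python
# Minimal sketch of a speed-aware 'calming' traffic light. The default
# limit is an illustrative assumption, not Brossard's real setting.

def light_state(measured_speed_kmh, speed_limit_kmh=30.0):
    """Green for drivers at or under the limit; red for speeders,
    forcing them to stop instead of sailing through."""
    return "green" if measured_speed_kmh <= speed_limit_kmh else "red"
```

So a driver doing twenty eight in a thirty zone cruises through on green, while one doing forty five gets stopped at a red.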
So this traffic light is meant to calm traffic. It monitors a driver's speed as they go down the street approaching the light, and if they're going above the speed limit, then the light turns red and the driver has to slow down and come to a stop. But if the driver is within the speed limit, the light stays green and they can just pass right through. So those of you in Europe might be saying, we've had traffic lights like this forever, but this is the first time it's been used in Canada. And for those who are in, like, Canada or the United States and are having trouble imagining this, this is not a traffic light that's at an intersection, because that would be crazy, right? If it's a light that would just be green whenever people are going the speed limit and red if they were going too fast, and it's at an intersection, then you just end up with a lot of, you know, low-speed collisions. It would be the intersection everyone would avoid. No, this light is just right smack dab in the middle along the side of a street. So I would love to actually see these kinds of things rolled out in my neighborhood, because I sometimes see people zipping down our little side streets way too fast in an effort to bypass traffic that's on the main streets. And there are a lot of kids who live in my neighborhood, and I worry about them, because folks who are frustrated with traffic on a big road are aggressively driving down side streets, where you're not meant to go that fast, in order to just get around it. So I would like to see this incorporated in other places. Then again, I also don't drive, so maybe there are drivers out there screaming at me, saying no, I don't want a light telling me I can't speed. And finally, an enterprising young woman who has already created a successful career as an influencer is doing something a bit bold and perhaps controversial.
So her name is Caryn, or maybe it's pronounced Karen. It's spelled C-a-r-y-n, so I'll say Caryn. Caryn Marjorie. And my apologies if I just picked the wrong pronunciation. I am old and Caryn Marjorie is like twenty three, so we are worlds apart. I have never heard of her before, but she has like two million followers on Snapchat, so lots of people do know who she is. And she teamed up with some developers who used, you know, thousands of hours of her content to essentially build an AI version of herself. And you might say, well, why would she do that? She does it so that she can rent out artificial Caryns to folks who want to have her as a friend, or maybe a girlfriend, or maybe a casual fling. The AI version of Caryn has a computer-simulated version of her voice. It's Telegram based, so it's like a voice-chat-based AI version of her, and supposedly it has essentially a copy of her personality, or at the very least some version of her public personality. So interested customers can rent time with AI Caryn for about a dollar a minute. She actually launched the service like a week back as a beta launch, but this week it emerged from beta, and it's called CarynAI. Fortune reports that the beta test netted more than seventy one point five thousand dollars in one week. Yowza. So users can connect to the AI on Telegram and they can talk about whatever they like. It may not come as much of a surprise to most of you that the overwhelming majority of people who tested it out happen to be male. Personally, I think it's a clever business move. Caryn is leveraging her influence as well as leveraging technology's capabilities. She's kind of turning the tide on things like deepfakes, right? Because a lot of times when we look at deepfakes, they're being used to mimic someone without their consent. She's saying, well, I want to be in control of this.
So I am going to actually be the person creating this and marketing it and benefiting from it. So instead of being a victim, I have agency; I am in charge of it. I should add, however, there are experts in ethics who are really worried about this sort of thing, not Caryn specifically, but generally speaking, the trend toward these humanized AI agents. They worry that interacting with them, especially on a more frequent basis, could start to reshape how people interact with other actual human beings. And based upon what we've seen with the web and how that has influenced how people interact with one another, I think that's a realistic concern. It's something we should actually really think and talk about. So I feel conflicted about this. On the one hand, I think that Caryn is absolutely right to take charge of her own identity and public persona. On the other hand, I worry that people will become too dependent upon this, that it will become a kind of crutch that could warp the way that they interact with other people, and that in the long term that can be harmful to themselves and the people around them. So yeah, it's complicated, but it is interesting to see someone take charge like that. And she doesn't need me to say good luck to her, because she's doing just fine based upon making more than seventy grand in a week. She definitely doesn't need any well-wishes from me to see success. As for the rest of you, I hope you are all well, and that's it for the news for this week, so I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.