An organization in charge of a hotline to help people with eating disorders finds out that chatbots aren't a good substitute for a human operator. A judge in Texas explains that generative AI has no place in his courtroom. And Meta and Amazon both face some challenges around the world.
Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for some tech news for June first, twenty twenty three. How did we get to June already? Yikes. All right, it's another AI-heavy news episode, because, you know, that's what's going on out there.

You might recall that on Tuesday I talked about Steven Schwartz, the lawyer who submitted a filing in a legal case that contained false information courtesy of ChatGPT. He didn't do it on purpose. ChatGPT just gave him information that was not true. Schwartz had used ChatGPT as part of his legal research in a case, and the chatbot invented some cases that never actually existed as precedents. Now, Schwartz did actually think to ask ChatGPT if the cases were real. Like, there's an exchange where he said, hey, is that a real case? And ChatGPT is like, yeah, yeah, totes, it's real. So it turns out you just can't trust a chatbot. This, by the way, is why I'm really concerned about these chatbots being incorporated directly into things like web search, you know, with Microsoft and Google both rushing to do that. I think it's a mistake for multiple reasons, one of which is this tendency toward hallucinations. Now Schwartz is awaiting a hearing on June eighth that will determine what, if any, sanctions he will face as a result of this goof-up.

Meanwhile, in Texas, a judge named Brantley Starr has made it clear that he will not abide any AI chatbot shenanigans in his court. He said that attorneys in his courtroom must promise that quote no portion of the filing was drafted by generative artificial intelligence end quote, or, if any part of the filing did involve generative AI in any respect, a human being must have checked the information and verified it to be true and accurate. This covers pretty much anything lawyers would submit to the court, and I think it's an excellent idea. As we saw with Schwartz, AI just... it's not trustworthy. It can make stuff up in some cases, and honestly, this step is good for everyone in the legal system in the long run. I wouldn't be surprised to see other judges follow suit.

OpenAI, meanwhile, is trying to address this troubling problem of AI hallucinations. So yes, in case you forgot what hallucinations mean with regard to AI, it just means an incident in which AI invents information, such as the fake legal cases cited in Schwartz's situation. And it's not that AI is a pathological liar or has some sort of motivation to give us the wrong information. It's more like, when the AI gets into a situation where it does not have all the relevant information, sometimes it just makes stuff up in the absence of reliable info. To me, it's kind of like if you've ever had a friend who just seems incapable of using the words "I don't know the answer to that," then you probably feel like this is a very familiar situation, right? Just think of someone who, rather than say, that's interesting, I don't know, says, oh, it's probably because... Or maybe they don't even, you know, try to equivocate. They just outright say something they think is probably true, and they don't know one way or the other. That's kind of what OpenAI says is happening with ChatGPT, and actually what a lot of AI experts say is happening with generative AI in general.
And so now the company is saying it's going to revisit how the AI works toward creating an answer. Right now, the model apparently follows a process called outcome supervision, in which the goal is just to get to the final answer. It doesn't really matter what pathway you took to get there; the ends justify the means, in other words. So with outcome supervision, the AI gets a reward if the answer it provides at the end of the day is correct. The problem is that when the AI makes a mistake, say early on in the process, this can have a much larger effect further on in the process. Like, if you've ever put something together and you made a mistake early, you might realize by the time you're getting toward the end that the small mistake has created a huge problem much further along in the process. Well, it's the same with AI. And so OpenAI is saying they're looking at changing over to process supervision, in which rewards for the AI occur throughout the reasoning process. The thought is that the AI would be rewarded at every step along the way as it made the right choices, and thus the chance of making a mistake further down the line would be reduced. Critics argue that it may not make any difference at all to the amount of misinformation, or just plain fake information, that is generated by AI. It might not matter what process it uses; what's more important is that AI operates with a lack of transparency, so it can be really hard to pinpoint where a problem starts because you can't actually see what the process is. And if you can't see what the process is, it's very hard to diagnose where the problem is popping up. And so the critics worry that this change in method won't actually solve the problem of AI creating incorrect responses and misinformation.

Over in Italy, a senator named Marco Lombardo stood before Parliament and delivered a speech about Italy's agreements with Switzerland, and that does not sound particularly techie, except at the conclusion of the speech, Lombardo revealed that the speech he read out was not written by a human being. Instead, it was generated by AI. And further, Lombardo said he did this in order to prompt a larger conversation about AI, and to really consider what it can do and the potential consequences that can occur if people misuse it or if the AI does not perform as expected. Italy has been one of the more proactive countries when it comes to considering AI and being critical of AI. Previously, Italy banned ChatGPT, although only temporarily, and did so because of concerns that information shared with the chatbot would not be secure and thus would violate citizen privacy laws. And we've seen with ChatGPT in particular that chat histories ended up becoming exposed, right? People were suddenly able to see what other people had been talking about with ChatGPT. So I think it's an understandable concern. And along with privacy, it's good to see governments having these discussions, to really seriously talk about how to think about AI in order to make the best use of it and not have it create problems. Italy's approach has motivated the EU in general. Lawmakers there, as well as some in the United States, have indicated that they are now working on a code of conduct regarding AI and AI companies. Representatives from the EU and the United States met at a Trade and Technology Council meeting and started to talk over this code of conduct, which would be voluntary.
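One quick tangent before I get into my feelings about that voluntary code of conduct. If the outcome-versus-process supervision distinction sounds abstract, here's a tiny toy sketch in Python of what the two reward schemes might look like. To be clear, this is not OpenAI's actual training code; the little arithmetic task, the reference chain, and the reward values are all made up purely to show the difference between scoring only the final answer and scoring every step.

```python
# Toy illustration of outcome supervision versus process supervision.
# Not OpenAI's real setup -- just a made-up two-step arithmetic "reasoning chain."

def outcome_reward(final_answer, correct_answer):
    """Outcome supervision: one reward, based only on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_rewards(model_steps, reference_steps):
    """Process supervision: a reward for every intermediate step that checks out."""
    return [
        1.0 if produced == expected else 0.0
        for produced, expected in zip(model_steps, reference_steps)
    ]

# Pretend the model is working through 3 * (4 + 5) in two steps and slips early.
model_steps = [8, 24]      # it computed 4 + 5 = 8, then 3 * 8 = 24
reference_steps = [9, 27]  # the correct chain is 4 + 5 = 9, then 3 * 9 = 27

print(outcome_reward(model_steps[-1], reference_steps[-1]))  # 0.0
print(process_rewards(model_steps, reference_steps))         # [0.0, 0.0]
```

The single end-of-the-line reward only tells you the final answer was wrong, while the per-step rewards at least point back to the first step, which is exactly where the early mistake crept in. That, in a nutshell, is the argument for process supervision.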
To my mind, a voluntary code of conduct is a little bit on the weak-sauce side. I get that the optics can be bad if an AI company refuses to adopt the code of conduct, and that might mean the company would find it difficult to do business with customers if it refused to sign on. So there might be some social pressures and business pressures to do it, but it's voluntary. Considering the potential risks associated with AI, and no, I am not going so far as to claim there's an existential-threat-level risk out there, but there is risk, and that's bad enough. Considering that risk, I think we might need more than just a voluntary code of conduct in order to keep things in line. Also, it'll be interesting to see what role various AI companies, and OpenAI in particular, will take in helping draft this code of conduct. You know, Sam Altman, the CEO of OpenAI, has tried to get in front of this stuff, and it makes me worry, because if the people who are the subject of a code of conduct are actually allowed to write, or at least influence the writing of, that code of conduct, you can end up with rules that don't actually guard against anything at all, and then it's just optics, and that's useless.

And for a horrifying story of how AI can be harmful, Chloe Xiang of Motherboard has an article titled "Eating Disorder Helpline Disables Chatbot for Harmful Responses After Firing Human Staff." Yeah, that headline, that's a lot. So the story here is that the National Eating Disorders Association, or NEDA, made a plan to replace the human operators of a mental health hotline with a chatbot called Tessa. I guess the idea was that the chatbot would be more efficient and cheaper than keeping human beings who have expertise and experience and, you know, empathy on the payroll. But this is not a helpline you would call if your lawnmower stopped working. I can see a chatbot being used for something rather mundane like that. This is a helpline designed for people who are dealing with eating disorders. Union representatives have accused NEDA of using union-busting tactics and warned that relying on AI could lead to terrible situations. And earlier this week, a social media post about how this AI chatbot led to a terrible situation went viral.

So, first up, there was an activist named Sharon Maxwell who decided to test out Tessa, and she said that quote every single thing Tessa suggested were things that led to the development of my eating disorder end quote. So by that, what I think Maxwell is saying is that she had previously developed an eating disorder, and the thoughts that went into her head that led to her developing that eating disorder were the exact same things this chatbot was now suggesting as advice. In other words, Tessa was giving Maxwell advice along the lines of, hey, maybe you can lose a pound or two by doing this, this, and this. And when you're making that suggestion to someone who's dealing with an eating disorder, that is a very dangerous thing. Then a psychologist named Alexis Conason, and my apologies for the pronunciation of your last name, I'm sure I'm getting it wrong, conducted her own test and found similar results. So what was NEDA's response? Well, initially the organization accused Conason of fabricating the whole thing, so she sent screenshots of her conversation to NEDA, and then not too long after that, NEDA took Tessa offline in order to address some quote-unquote bugs in the program.
While Tessa has guardrails that are meant to keep the chatbot from doing stuff like this, we have seen again and again that AI can bypass guardrails, even if the person on the other end of the conversation isn't trying to force things that way. And the story really points out that for some jobs, you really probably should just depend upon human beings to do the work. Okay, we're going to take a quick break. When we come back, we've got some more news items to cover.

Okay, we're back. Meta says it's going to remove posts containing news content for users in the state of California if California passes a law that would require platforms like Facebook and Google to pay publishers when work from those publishers shows up on those platforms. We've actually seen this issue crop up around the world. Notably, it happened in Australia a couple of years ago. When Australia passed a similar law, Facebook temporarily blocked news content in Australia. Eventually the law took hold and things have kind of entered into an equilibrium. Whether or not the law actually addressed the issue that was of concern is another matter; that's actually a good subject for an episode, honestly. But the idea is that publishers want compensation. They're arguing that platforms like Google and Facebook are siphoning traffic away from the actual news websites, which is where the publishers monetize that traffic, and that these platforms are benefiting from the work being done by journalists without compensating the news outlets in the process. Meta's Andy Stone, whose name I see pop up pretty much whenever the company wants to dismiss regulations that would work against it, said the bill, if signed into law, would amount to nothing more than a slush fund that would benefit large media companies but not help smaller California-based publishers. Which, depending on how this bill is framed, might actually be true, because I've heard similar criticisms about the Australian law. Now, I don't know, I haven't read the bill, and if I had read the bill, I probably still wouldn't have a good grasp on its limitations, because law-speak be scary, yo. It's even more difficult to read than a really complex technical manual. But yes, this is another battle we're starting to see unfold. I don't see California backing down from this, and it will be interesting to see where it goes. Honestly, at the end of the day, I'm mostly concerned that the law actually does what the law aims to do, right? It gets very frustrating when you hear about laws that potentially could correct a situation, but because of how they are written and how they're enacted, they fail to do what their stated purpose, at least, claimed them to be for.

Meta also faces a fine levied by a court in Russia this week. The charge is that the company failed to remove prohibited content from WhatsApp, specifically about a drug called Lyrica, and so the fine is three million rubles, which equates to about thirty-seven thousand dollars. A Russian court also fined the Wikimedia Foundation a similar amount of money, also three million rubles, saying the Wikimedia Foundation failed to remove quote-unquote false information about Russia's war in Ukraine. Something tells me that neither Meta nor the Wikimedia Foundation will consider these moves particularly intimidating. Meta certainly won't; thirty-seven thousand dollars is, like, I don't even think they would notice if that money went away. So I don't think this is really a big move against the companies. Amazon also has a couple of bills to settle this week.
First up, the company agreed to pay a twenty-five-million-dollar settlement in relation to a lawsuit that accused the company of having illegally collected and stored information relating to children through the digital personal assistant whose name starts with A, ends with A, and has lex in the middle. The Federal Trade Commission in the United States brought the case against Amazon, and the settlement requires that the company change its data collection, storage, and deletion practices on top of paying the fine. The other bill Amazon has to pay is five point eight million dollars. This is another settlement, also with the FTC, and this one is with regard to Amazon's Ring products. Those are the security systems and doorbell camera systems. The FTC accused Amazon of having a system that allowed employees and contractors to access video feeds from customer cameras without any real safeguards to prevent that from happening. As you might imagine, that is a huge privacy and security violation. Amazon has agreed to create new processes, with regular checkups, to make sure the company has a tighter data security strategy, while also simultaneously saying we never broke the law, because that's what you can do when you make settlements. All right.

One trend happening with some tech platforms is to make changes to how these platforms give access to an API, which is an application programming interface. That's what lets app developers tap into larger platforms in order to do whatever it is they do. So a developer might create an app that ties into a larger platform like Twitter or, as we'll talk about in a second, Reddit, and these apps then send requests to the underlying platform, and that's what populates the app. Twitter now has a price tag associated with this: for every fifty million tweets requested, a developer has to cough up forty-two thousand dollars. And yes, fifty million tweets, that's a lot. But if your app is super popular and a lot of people are using it all day, you're going to hit that fifty-million mark over and over again. Reddit is now doing something similar. They've changed their API, and Christian Selig, the developer behind the popular Reddit app Apollo, reports that he might have to shut the app down entirely because of Reddit's new policy, which is to charge twelve thousand dollars per fifty million requests. Selig revealed that Apollo generates around seven billion requests every month. That's a hundred and forty blocks of fifty million requests, and at twelve thousand dollars a block, it works out to roughly one point seven million dollars a month, or around twenty million dollars a year, to keep the app running. Understandably, he is not in a position to pay that much. Generally speaking, the app developer community is not too keen on this approach, as it punishes you for being successful.

Finally, if you have a PC rig with a motherboard from Gigabyte, you should know that security researchers at Eclypsium discovered the company had created a backdoor system to deliver firmware updates to the motherboard, and that system lacks proper security, which means a hacker could potentially hijack that delivery system and use it to send executable code straight to target computers. Do not pass go, do not collect two hundred dollars. If you're curious whether your device has a Gigabyte motherboard, you can go to the Start menu in Windows and look at System Information. More than two hundred and fifty motherboard models are affected by this, so that's a big ouch.
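By the way, if you'd rather check from a script than click through menus, here's a rough sketch in Python of how you might pull up the motherboard manufacturer on a Windows machine. It just shells out to the built-in wmic tool, and the exact output format varies from system to system, so treat it as a starting point rather than gospel.

```python
# Rough sketch: ask Windows what motherboard it has, via the built-in wmic tool.
import subprocess

result = subprocess.run(
    ["wmic", "baseboard", "get", "manufacturer,product"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

# If the manufacturer comes back as Gigabyte, the next step would be checking
# whether your specific board model is on Eclypsium's list of affected boards.
if "gigabyte" in result.stdout.lower():
    print("Gigabyte board detected -- check Eclypsium's advisory for your model.")
```

System Information from the Start menu will tell you the same thing; this is just for folks who prefer a terminal.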
Supposedly, Gigabyte is working on this and intends to create a solution, but as of right now, I don't know of any fix that's actually out there, so be careful out there. All right, that's it for the tech news for today, June first, twenty twenty three. I hope you're all well, and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.