Artists, musicians and even stock photography companies are starting to fire shots at generative artificial intelligence systems. Will we see lawmakers create new legislation that creates a legal framework for generative AI? At what point can you say a machine plagiarized an artistic work? Plus, Meta sues an Israeli company for scraping user data and Google has plans for all those Stadia controllers once the service shuts off tomorrow.
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Tuesday, January twenty twenty-three, and let's start off with cryptocurrency. Now, I'm sure most of y'all know I'm pretty skeptical of crypto in general, and I've got a lot of bones to pick with cryptocurrency. But I also think it's only fair to report when the market gets some momentum in a positive direction. So Reuters reports that Bitcoin is up twenty six percent since the beginning of this year and has climbed above the twenty thousand dollars per coin mark for the first time in months. Other crypto coins are also performing well, many of them kind of, you know, following Bitcoin's footsteps, and some so-called meme coins, which others would argue are trash coins, are going absolutely bonkers. You're seeing like five thousand percent increases in value for some of these. Now, I should add that these meme coins are usually worth fractions of a penny, so even a small amount of growth ends up looking huge if you talk about percentages, right? If it went from 0.0000008 to 0.0000010, that's a big jump in percentage terms, but in actual money it's not much at all. Also, Reuters rightfully points out that these meme coins tend to be even more unstable than other types of crypto, and the value can drop just as quickly as it can go up. See also that one coin which was all the rage for about a month a couple of years ago before the bottom fell out. Reuters also cautions that this upturn may not sustain itself, that the recent gains in cryptocurrency are probably connected to an overall positive outlook on the global economy and people making a bet that inflation is, at least for the moment, kind of done.
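To put numbers on that point about percentages, here is a quick sketch. The first figure just restates the price move described above; the second shows the flip side, that even a five thousand percent gain on a coin priced at fractions of a penny is still worth almost nothing in absolute terms:

```python
def percent_change(old, new):
    """Percentage change from an old price to a new price."""
    return (new - old) / old * 100

# The move described above: two ten-millionths of a dollar,
# which still works out to a noticeable percentage gain.
print(percent_change(0.0000008, 0.0000010))  # 25.0-ish percent

# The flip side: a 5,000 percent gain on a coin priced at $0.0000008
# (a 51x multiple) still leaves it worth well under a hundredth of a cent.
new_price = 0.0000008 * (1 + 5000 / 100)
print(new_price)  # about $0.0000408, i.e. roughly four thousandths of a cent
```

Same arithmetic in both directions: tiny absolute moves produce eye-catching percentages, and eye-catching percentages can still be tiny absolute moves.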
Obviously, any big change could send values down again, and as we all learned with Russia's invasion of Ukraine, sometimes it's just an unpredictable global event that can really have a huge impact on the economy. So nothing is ever certain, and the changes are not always predictable. So I guess what I'm saying is crypto appears to be recovering, and I think I said last year that I didn't believe the setbacks we saw in late twenty twenty-two were the death knell for crypto in general. It was just a real reckoning for crypto and blockchain and related topics. So I suppose we're gonna have to keep an eye on stuff like how governments are going to treat the crypto exchange Binance, for example, to see where crypto goes in twenty twenty-three, because it's not out of the woods. There are plenty of governments around the world that are taking a more critical view of cryptocurrency and looking at imposing regulations. So while we're seeing some improvement right now, that could be just a short-term gain. It's too early to say, but I don't know, maybe we'll see something kind of akin to what we saw before, where the value really went up dramatically in the early part of the year. Now, we've got several stories about AI, and when I said I thought AI was going to be a big topic in twenty twenty-three, I didn't necessarily mean it would dominate headlines in mid January. I suspect we're gonna see AI coverage calm down a bit later this year. Like, I think it's always going to be a big part of the news in twenty twenty-three, but I don't think it's going to be quite as dominant once things initially calm down. Right at the moment, it's sort of the scary new thing that has media outlets jumping to cover, and I don't really think it's necessarily that scary. It's just that's how it's being treated. But first up, Getty Images, which maintains a huge library of stock photography and artwork that people can license to use for their works, is suing the company Stability AI.
Getty Images accuses Stability AI of scraping Getty's library of images in an effort to train an AI art tool called Stable Diffusion. So, quick explanation here. One discipline within artificial intelligence is machine learning, or you might even argue that it's adjacent to artificial intelligence. The Venn diagram has a lot of overlap, and machine learning, just like AI, can take lots of different forms. But one common practice in machine learning is to teach a computer model how to do whatever it is the model is supposed to do by feeding the model lots and lots of information. For example, let's say you wanted to teach an AI system to recognize images of cats. Well, you would want to feed the model tens of thousands of pictures of kitty cats. But you would also want to feed lots of pictures that have no cats in them at all to the model, so that it could tell the difference, and you would continuously tweak the model so that it would get better and better at distinguishing which images actually have cats in them. Well, if you want an AI image generation program that makes images that are actually, you know, recognizable as the kind of stuff you prompted the AI model to create, you have to feed it lots and lots of images. Otherwise you could just get these random shapes and colors that don't look like anything, or that look like really disturbing versions of whatever it was you wanted to create. So Getty Images has a truly enormous supply of photographs stretching across decades. Some of them are work-for-hire images, where photographers have set up, you know, scenes in a studio, and they just shoot tons and tons and tons of photographs and then sell those photos to Getty Images. You get some pretty wacky versions of that. If you've ever seen pictures of a woman wearing a silvery body suit and a metallic visor while she's holding an ear of corn, you know what I'm talking about.
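That feed-and-tweak loop can be sketched in miniature. To be clear, this is not how Stability AI or any production system actually works; real systems use deep neural networks and millions of real photos. The toy below just shows the shape of the idea with made-up data: labeled examples (cat versus no cat), a simple model, and repeated small adjustments that shrink its error:

```python
import math
import random

random.seed(42)

# Toy stand-in for labeled training data: each "image" is boiled down to two
# invented numeric features. Label 1 means "cat", label 0 means "no cat".
cats = [((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(200)]
no_cats = [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), 0) for _ in range(200)]
data = cats + no_cats

def predict(w, b, x):
    """The model's estimated probability that features x depict a cat."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# "Continuously tweak the model": gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(500):
    gw0 = gw1 = gb = 0.0
    for x, label in data:
        err = predict(w, b, x) - label  # positive if we over-predicted "cat"
        gw0 += err * x[0]
        gw1 += err * x[1]
        gb += err
    n = len(data)
    w[0] -= lr * gw0 / n
    w[1] -= lr * gw1 / n
    b -= lr * gb / n

accuracy = sum((predict(w, b, x) > 0.5) == (label == 1) for x, label in data) / len(data)
print(accuracy)  # the toy classes are well separated, so this lands near 1.0
```

The same loop scaled up by many orders of magnitude, with a far more expressive model, is roughly what "training on tens of thousands of cat pictures" means in practice.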
And yes, that's a real stock image. Most of you, I'm sure, have seen it somewhere at some point. Other images come from, like, paparazzi who take pictures of celebrities and notable figures and then license or sell their photographs to Getty. And what Getty is saying is that Stability AI used software to scrape Getty's images and copy them without paying for a license, and then used those images to train AI that will ultimately be used to create a competing product, right? A competing service. Like, instead of going to Getty Images to license a stock photograph that represents whatever it is you want, you would go to Stable Diffusion and have it generate an image based on what you want. But Getty says, well, first of all, you're using our images to train your computer to do this, and then you're going to introduce something that competes against our own business. That's an unfair business practice. Now we're starting to get into an area where the law is really lagging behind. The law is not designed to deal with this kind of intellectual property issue, so it does sound like this method is creeping in on a violation of intellectual property laws, or at least tiptoeing around them. But without actual law or court precedents around AI, it's a gray area at best. My guess is this year is going to be one where we start to see that gray turn into more black and white sooner rather than later, because it is becoming a pressing and relevant issue. On a similar note, a group of artists have joined to file a class action lawsuit against three companies: DeviantArt, Midjourney, and our buddy Stability AI that we just talked about. Like Getty, these artists argue that the companies have made illegal use of copyrighted works in order to train their respective AI models to generate images.
Ars Technica has a great article about this, and they cite an AI analyst named Alex Champandard, who has pointed out some potential problems with the lawsuit. For one, the lawsuit states that, quote, when used to produce images from prompts by its users, Stable Diffusion uses the training images to produce seemingly new images through a mathematical software process. These quote unquote new images are based entirely on the training images and are derivative works of the particular images Stable Diffusion draws from when assembling a given output. Ultimately, it is merely a complex collage tool, end quote. So they're really saying that there's nothing original in any image that Stable Diffusion produces, that everything has come from some element of one of the images in its massive mound of training images. That's not really how generative AI works. The language suggests the AI is just taking this massive amount of content as a starting point and then using that to create a new image, almost like, well, let me take the nose from this one, and the eyes from this one, and this smile and that hairstyle, and then putting them all together to make the quote unquote new image. But that's not actually how these models work. It's an oversimplification. It's more akin to how a human artist would study works made by other people before they start producing their own work. They would not necessarily be copying someone else's work directly. I mean, that could be an exercise to see if you can master the same techniques as some other artist. But if you're making your own work, you're not trying to copy someone else's work. Instead, you're using other people's work as sort of an inspirational launching ground on how to proceed. That could include things like a color palette, the brush technique, the perspective, all these sorts of things. So that's a little bit closer to what these AI models are doing.
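One way to see why the "complex collage tool" framing is shaky is simple arithmetic: the trained model is far too small to contain its training images. The figures below are rough, publicly reported estimates (roughly 860 million weights for the first version of Stable Diffusion, a training set on the order of two billion images, and an assumed average image size), so treat this as a back-of-the-envelope illustration, not an exact accounting:

```python
# Approximate, publicly reported figures -- for scale only, not exact.
weights = 860_000_000        # roughly 860M parameters in Stable Diffusion v1
bytes_per_weight = 4         # 32-bit floats
model_gb = weights * bytes_per_weight / 1e9

images = 2_000_000_000       # order of magnitude of the training set
avg_kb_per_image = 100       # assume a modestly compressed photo
dataset_tb = images * avg_kb_per_image / 1e9   # 1 TB = 1e9 KB

print(round(model_gb, 2))    # a few gigabytes of learned weights
print(round(dataset_tb))     # hundreds of terabytes of source images
```

A few gigabytes of weights simply cannot store hundreds of terabytes of pictures, which is why "it learned statistical patterns from the images" is a better description than "it keeps the images and pastes pieces of them together."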
And Champandard points out that the lawsuit could lead to nothing if the defense is able to argue that what they're being accused of doing is just not true. But then, as I mentioned earlier, the real issue is that there isn't law that outlines the parameters under which generative AI can rely upon existing copyrighted works as training material. I mean, presumably, if we're looking at the written word, most writers have read a lot before they start writing. So as long as a writer is not plagiarizing some other writer, would you argue that a writer owes money to all the authors they themselves have read before they produce their own work? Because surely that reading influences your own style; it influences your own sensibilities. So if we take this to an extreme, you would say, ah, you owe money to anyone who has influenced you in the production of your own work. That's clearly not realistic or, you know, reasonable. But when it comes to AI, it gets a lot more tricky, because you're literally using these materials to train up the AI to make its own. So, yeah, we're gonna have to keep a close eye to see how this develops throughout this year. We're gonna take a quick break. When we come back, we'll talk about some more tech news. We're back, and we're not done with AI yet. Several news outlets have reported that the ChatGPT three point five system successfully passed sections of the US Bar Exam. That's the exam that lawyers have to pass here in the United States before they can practice law. I understand that the Bar Exam is very challenging and that for lots of lawyers it may take more than one attempt to pass. Now, to be clear, ChatGPT did not pass the full exam, and it is certainly not currently allowed to practice law.
Rather, it passed sections on evidence and torts, and I was disappointed to find out torts are not pastries filled with fruit. Mmm, raspberry torte. The program fell short when it came to taking the full exam. It scored fifty point three percent on a test that sets a minimum passing grade of sixty eight percent. And I don't think anyone is expecting the US legal system to recognize AI lawyers in the near future, but some in the field have already expressed concern about this, and then others have said these tools could be really useful as a way to augment human lawyers as they go about their jobs. For example, this kind of chatbot might be able to generate a first draft of deposition questions, but a lawyer would then subsequently go through this draft and refine it. So in other words, the chatbot could do what we hear advocates say AI is good for all the time: it can make jobs easier and more efficient, but it doesn't actually replace the human behind the job. Of course, for it to do that, it has to work well enough to be useful rather than distracting or counterproductive. And as we've seen with ChatGPT, it's really impressive, but it's not there yet. AI-powered chatbots are not likely to disappear either. In fact, I'm sure we're gonna hear a lot more about them. Microsoft just announced that it is increasing access to ChatGPT, making it generally available to its customers. In an earlier news episode, I mentioned how Microsoft is apparently planning a massive investment in OpenAI, the startup behind ChatGPT, and effectively it would mean Microsoft would end up buying just a skosh under half ownership of the startup. While Microsoft has not yet confirmed those reports, it has indicated that it will let more Microsoft customers make use of the ChatGPT tool via Microsoft cloud services. What this means in the near and long term, I can't really say.
I sure do hope that we do not get an era of AI-generated news and entertainment, because that would be kind of the opposite of what we really want AI to do. Like, we want it to augment, but not replace. So I don't want to see it doing stuff like what we saw with CNET, where they just assigned AI these writing jobs and it churned out low-value articles. I don't really want to see that. I mean, obviously, what I want to see is high-value articles created by talented human writers. But yeah, it would be pretty darn hard to be a professional creative in a world where we accept the level of creative output generated by AI as being desirable. It's very hard already to make your living as a creative with all the competition that's out there. It would be even more difficult if we say, yeah, this AI stuff, it's not great, but it gets the job done, so let's just go with it. That would make it even harder. Musician Nick Cave knows what I'm talking about. A fan sent him an AI-generated song that mimicked the musician's style. And for those of you not familiar with Nick Cave, of Nick Cave and the Bad Seeds fame, he wrote the song Red Right Hand, which is featured heavily in the Scream franchise. It's also in a lot of other media. He's known for dark, gothic, and often melodramatic ballads and other such songs, and I recommend listening to him. Depending upon your mood and the song you've picked, you'll either find him really intriguing or very, very silly, or perhaps both. Anyway, Nick Cave very much did not like the AI-generated piece. He said, quote, songs arise out of suffering, by which I mean they are predicated upon the complex internal human struggle of creation, and, well, as far as I know, algorithms don't feel, end quote. He also said the song was quote a grotesque mockery of what it is to be human, and, well, I don't much like it, end quote. And you know, I find it difficult to disagree with him. I think it's pretty true.
The song had lyrics like, quote, in the depths of the night, I hear a call, a voice that echoes through the hall. It's a siren's song that pulls me in, takes me to a place where I can't begin, end quote. It's not exactly meaningful. I think I mentioned in an earlier news episode that I actually tried to have ChatGPT create a haiku, and the program produced something that superficially looked like a haiku, but it lacked any poetic value, and further, it didn't adhere to the structure of a haiku poem. And generally that's what I've found when I've used ChatGPT. Honestly, I find that a little surprising. Like, I would think the rules part would be something that a program would be better at handling. Clearly that's not what ChatGPT was intended for, so I mean, I gotta cut it some slack in that sense. But when it comes to works of art that have a specific form and structure that you're supposed to follow, I would think computer programs would be better at doing that. It wouldn't necessarily mean the stuff created would be any good, but it would at least adhere to the rules of the art form. Sonnets, for example: you would think it would adhere to the number of verses and the rhythm of them and the rhyme scheme. But it doesn't. So yeah, it's kind of surprising to me, except for the fact that I do know ChatGPT wasn't exactly designed to be a sonnet-producing machine. But yeah, we're not at the point where these programs are capable of generating artistic content that has real merit to it. That doesn't mean we won't get there, and it doesn't mean that, you know, it can't at least mimic the lower forms of pop art. And I don't mean to dismiss pop art, because I love a lot of pop art, but it's hard to argue that certain popular songs aren't trivial and simple when they very much are. But yeah, I'm with Nick Cave on this one. Okay, moving away from AI: Meta is in the news again, and for once, Meta is the one doing the suing.
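The point about form versus quality is easy to demonstrate: checking the five-seven-five syllable shape of a haiku is a mechanical rule that a program can verify, even if judging the poetry is not. Here's a toy checker; note that the syllable counter is a crude vowel-group heuristic of my own, not a real linguistic analysis, so it will miscount plenty of English words:

```python
import re

def rough_syllables(word):
    """Very crude syllable estimate: count runs of consecutive vowels.
    English syllable counting is genuinely hard; this is only a toy."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def looks_like_haiku(lines):
    """True if the three lines fit the 5-7-5 syllable shape.
    This checks only the structural rule, not whether the poem is any good."""
    if len(lines) != 3:
        return False
    counts = [sum(rough_syllables(w) for w in line.split()) for line in lines]
    return counts == [5, 7, 5]

print(looks_like_haiku([
    "a cold wind blows now",
    "the old dog sleeps in the sun",
    "dreams of warm spring days",
]))  # True (by this heuristic, at least)
```

A structurally valid but utterly bland poem sails through a check like this, which is exactly the host's point: the rules are the checkable part, and the merit is the part that matters.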
So in this case, Meta is suing a company called Voyager Labs, and Meta says that Voyager Labs created thirty eight thousand fake accounts as part of an effort to scrape Facebook and Instagram and other social networks of user data. So Meta says that the effort pulled data from more than half a million Facebook pages and Facebook groups, including stuff like the posts people were putting onto Facebook, their photos, their friends lists, and any other information that was set to be publicly viewable. And people who may have set their profiles so that only their friends can view them, you have to remember, could still end up popping up on other pages. So even if you are someone who isn't on Facebook at all, some of your information may be up there just because some friends of yours shared some stuff. So Voyager Labs is a company that's based in Israel, and it describes itself as an AI-powered investigation company, and it mostly works with agencies like law enforcement or military organizations, and one of those is apparently the Los Angeles Police Department. But it's not just here in the United States; this company does business all around the world. So Facebook has long maintained that scraping its sites for data violates Facebook's policies, that you are not allowed to do that. You cannot create tools that are meant to go across all of Facebook and just pull down as much data as you possibly can. Only Facebook is allowed to collect personal information on that kind of scale and then exploit it. I guess what I'm saying is I'm just grouchy. I hate all this data collection stuff. It doesn't matter to me who ends up using it. I just think it's a bad thing that we've allowed to happen in general. Now, admittedly, it is way more scary when the agency that's making use of this information is, say, law enforcement or the military, because we've got very long histories of those kinds of institutions disproportionately harming specific populations.
So I don't mean to diminish this. It is definitely scarier that this is being used in relation to things like law enforcement, but in general, I just think this massive amount of data collection and analysis is inherently harmful, whether it's a law enforcement agency that's relying upon it or it's the platform itself, like Facebook or Instagram. It's just, the older I get, the less comfortable I am that we allowed this to happen and that it has grown to the extent that it has. Okay, I don't mean to be a fearmonger. Let's take another quick break. When we come back, we'll have some other news stories to talk about. We're back. Over in the UK, Parliament is drafting an amendment to the Online Safety Bill. That's a bill that's intended mostly to protect children against harmful content that is proliferated upon the Internet, and this particular amendment is likely to have tech executives sweating. So a group of lawmakers proposed an amendment last year that would seek criminal charges against tech executives who failed to protect children from harmful content on their respective platforms. So, in other words, Mark Zuckerberg could be held criminally responsible for allowing harmful material to perpetuate on Facebook, and if the court could show that he didn't do enough to protect children, then he could face criminal charges for that in the UK. So this whole thing created kind of a lively debate within Parliament between the desire to protect children, which is obviously important, and protecting freedom of speech, which is also really important. In fact, that's why we have these concepts like safe harbor on the Internet, where a platform is not necessarily responsible for the content that its users post to that platform, because the platform didn't generate it. It's just a place where people gather.
So this caused kind of a struggle within Parliament, and ultimately the UK Prime Minister has indicated that what they're gonna do is create a similar amendment to the proposed one that aims for the same goal, that will hold tech executives responsible for a failure to protect children, and that will then be drafted and added to the Online Safety Bill. The original proposed amendment will be withdrawn. I can't pretend to fully understand the political process here, except that maybe the goal is to have a more focused amendment put in place. Anyway, this would be a really dramatic step if it does in fact go through, which it seems like is the case, and it should really worry anyone who is a leader in a social network, because it could mean that they could potentially be charged as a criminal in the UK in the future if they failed to live up to the Online Safety Bill's requirements to protect children. And again, I think protecting children is absolutely critical. I have worried about social networks' effects on children, even beyond the really obvious stuff. But yeah, this is a dramatic step, and we'll have to see how it plays out. It also points to the kind of precarious position that the UK Prime Minister is in, because his own party, the Conservative Party in this case, is not fully united. So there are going to be cases where they make some compromises that they probably wouldn't otherwise if they had a united party behind them. So yeah, that probably means we're gonna see some more political drama over in the UK this year. Next up, the Sony Walkman is coming back, sort of. So for those of y'all too young to remember, the original Walkman was a portable cassette player. It really helped promote music cassettes, as being able to take your music on the go was a new thing when the Walkman first debuted back in nineteen seventy-nine.
You know, remember, we didn't have streaming services or MP3s and such back then. We had cassettes that stored music on magnetic tape that would sometimes get tangled up, and then you'd have to untangle the tape and use a pencil to wind it back up again. And we liked it. Anyway, the new Walkman models aren't cassette players at all. Instead, they are digital media players, kind of like a fancy version of an iPod, and Sony is positioning them as being able to provide lossless audio quality, meaning you get, you know, CD-quality audio out of these babies, instead of the super compressed stuff that you would get if you were just listening to your basic streaming service. Of course, this will depend upon how the music was encoded in the first place. It can't just magically make all music sound better. And you might wonder, is this product going to do well? Will it take off? Now, I'm a little skeptical. I think most folks have proven that they value convenience more than they do quality. I include myself in this category. I mean, obviously we would all love our music to sound as good as it possibly could. That, I think, goes beyond question. But if we're making a choice between that and being able to have our music whenever and wherever we like, we're probably gonna go with the second choice, unless you're an audiophile, in which case you might just be like, I can't even stomach the thought of listening to highly compressed audio. But I don't know. Maybe we're headed back to an age where people rely on specific technologies to perform specific tasks. It has been years since I've carried a dedicated digital media player, because my phone can do all of that stuff. But maybe we're seeing that trend take a turn.
Maybe we're going to head back to where people start to go with simpler phones, and they go to dedicated digital media players, and they separate out their tech again. That would be weird to me, but I don't understand young people anyway, so maybe this is what young people are doing. I don't get out much. Finally, tomorrow officially marks the point where Google will end the Google Stadia service. Now, if you're not familiar, Stadia is, and pretty soon I'll have to say was, a cloud-based streaming gaming platform with a WiFi-connected controller. So subscribers could purchase this game controller and then, using a compatible system like a television that has Chromecast capabilities, they could access a selection of game titles. They could also purchase games to add to their virtual library, and then they could access those games at any time. But Stadia struggled and flailed around a bit, and Google decided to pull the plug. However, the company is doing some stuff to take a little bit of the sting out. For one thing, Google says it will release a quote self-serve tool to enable Bluetooth connections on your Stadia controller end quote. So at least that means Stadia owners will be able to pair their otherwise useless controllers with other systems, such as a PC, and use the Stadia controller as a standard wireless gamepad kind of device. This is a pretty big deal, because previously there were only two ways you could connect the Stadia controller to something else. You could use a physical USB cable, but you know a lot of folks hate using wired controllers, so that was a non-starter for a lot of people. Or you could use WiFi, but nothing else uses WiFi as a connectivity standard for game controllers, so unless you enable Bluetooth support, the controller just can't connect wirelessly to other devices. Anyway, it's good to hear that Google is at least making some efforts to keep Stadia controllers, that thing you never used because you forgot about the service, from transitioning into e-waste. It extends the useful life of the hardware, and that's important. It also means that once that Bluetooth connectivity is enabled, I can use my Google Stadia controller with my computer and play games on it without having to, you know, use an Xbox controller or something like that. So I'm all for it. And that's it. That's the tech news I have for you this Tuesday in January twenty twenty-three. If you have suggestions for topics I should cover in future episodes of TechStuff, please reach out to me. You can download the iHeartRadio app and navigate over to the TechStuff page using the search field. There's a little microphone icon; click on that, and you can leave a message up to thirty seconds in length. Let me know what you would like to hear. Or, if you prefer, head on over to Twitter and send me a message using the handle TechStuffHSW, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.