Is Elon Musk working on a kinder, gentler AI? Could Microsoft's partnership with OpenAI take a huge chunk out of Google? And how do artists protect themselves against deepfakes?
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Tuesday, April eighteenth, twenty twenty three. Yesterday, SpaceX had to scrub its planned orbital test of the Starship spacecraft. Starship is a two-stage launch vehicle. The second stage also serves as a spacecraft that's capable of carrying a crew or, you know, cargo. It is the most powerful launch vehicle ever built to date, capable of producing nearly twice as much thrust as NASA's Space Launch System, which is NASA's latest launch vehicle. But Starship has not yet left the ground. SpaceX has test-fired Starship's engines in the past, igniting thirty one of the thirty three engines in one test, but those tests had the launch vehicle bolted to the platform, so, you know, it didn't go anywhere. That was really just a test of the engines. Yesterday, the plan was to launch Starship into orbit. It would be an uncrewed mission, so no one aboard, and it was meant to complete a full orbit of the Earth. But that didn't happen. Also, and this is important because it's different from how the vehicle will normally operate, both stages of Starship were meant to crash into the ocean. In normal operation, these components will return to Earth with a controlled landing, so that you can reuse the same vehicle repeatedly and thus bring down the cost of launching things into space, just like SpaceX has been doing with the Falcon nine. But this is a much larger launch vehicle, and for the first test, the plan was just to let it crash into the ocean. Unfortunately, none of that happened, because a technical error in the form of a frozen pressurization valve meant that SpaceX had to scrub the launch and aim for later this week. 
SpaceX has posted on Twitter that it is now aiming to try again on Thursday, April twentieth, when the company hopes the engines will just blaze and shake free the shackles of gravity. Now onto Elon Musk, who appeared on Fox News in an interview with Tucker Carlson yesterday. Part two of that interview will air today, and they covered a lot of ground. Musk waved off Twitter's troubles and said they were mostly due to just bad timing. I think a lot of folks would disagree that bad timing is the only or even primary cause of Twitter's woes, which span everything from laying off around eighty percent of the workforce, to losing around fifty percent of ad revenue, to seeing high-profile accounts leave the service. We've seen a lot of things happen at Twitter that I don't think relate just to bad timing. He also made an unsubstantiated claim that the US government has had backdoor access to Twitter, that government agencies, though I don't believe he named any in particular, could even look at people's private direct messages as part of the access the government had to Twitter's back end. Now, he didn't produce any evidence for this claim, and if it is true, it's rather shocking that we didn't see any hint of it during the Trump administration, or that Trump himself would be banned from the platform. That seems odd if the government had that level of access to and influence over Twitter. And it's not like Joe Biden has been president forever. He became president in twenty twenty one, and it was less than a year later that Musk started to buy up Twitter stock with the intent of ultimately purchasing the platform. 
And I infer from Musk's comments that he believes tools like ChatGPT and Google Bard and Microsoft Bing represent dangerous implementations of AI, perhaps even representing an existential crisis, and that his own AI tool would somehow be different from these, peaceful and beneficial to humanity, by just recognizing that humans are pretty nifty. Musk himself recently showed support for a proposal to put a halt on AI development for six months, and some people have said that might not have been so much about trying to make AI development safe, but rather intended to give Musk's own efforts a chance to catch up to everyone else, who were way ahead of the game. Honestly, I found most of what Musk said to be speculative and difficult to believe. Now, I will say this: I do think ChatGPT and other AI tools are potentially dangerous inasmuch as they can be used to do stuff like spread misinformation, help craft malware, and perform other malicious acts. But companies like OpenAI are at least trying to put protections in place to prevent that from happening. So far, those protections haven't been very robust and people have found ways around them, but they're still trying. I do not see chatbots as being an existential threat. There's nothing inherent in ChatGPT that gives it incredible power. It seems really compelling and powerful and somewhat scary because it appears to communicate the way we do. But that is the extent of what ChatGPT does. It's not really able to take action, and it's ultimately generating responses probabilistically. In other words, it's predicting what the next word should be and then putting it there. It's not thinking. So I think Musk is conflating generative AI, stuff like ChatGPT and Google Bard, with AI as a whole. And you know, this is one of those things where you say all ducks are birds, but not all birds are ducks, right? 
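To make that "predicting the next word" idea concrete, here's a minimal sketch in Python. The probability table is completely invented for illustration; a real model like ChatGPT learns distributions over tens of thousands of tokens from training data, but the core loop is the same idea: sample a likely next word, append it, repeat.

```python
import random

# Toy next-word model. Each entry maps a word to a probability
# distribution over plausible next words. These numbers are made up
# for illustration; a real LLM learns them from its training data.
NEXT_WORD_PROBS = {
    "the": {"rocket": 0.5, "engine": 0.3, "launch": 0.2},
    "rocket": {"launched": 0.6, "exploded": 0.4},
    "engine": {"ignited": 0.7, "froze": 0.3},
}

def generate(start, max_words, rng=None):
    """Sample a likely next word, append it, repeat. No thinking involved."""
    rng = rng or random.Random()
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break  # no known continuation for this word
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 2, random.Random(0)))
```

The only "intelligence" here lives in the probability table; scale that table up to billions of learned parameters and you have the gist of what a large language model does at generation time.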
All generative AI is a type of artificial intelligence, but not all artificial intelligence is generative AI, and I think some of the more dangerous implementations of AI have nothing to do with large language models and chatbots. Anyway, we've got a lot more to talk about with AI, so let's just move on. Sundar Pichai, Alphabet's CEO, appeared on Sixty Minutes this week to essentially say that AI is going to be disruptive, that it will impact lots of jobs, including knowledge-based jobs, so jobs like my job, and that it's not up to the tech industry to figure out how to handle that responsibly. All right, so that last bit was probably a little interpretation on my own part. What he actually said was that it's up to society to figure out regulations and laws, to create the borders within which AI can operate, and to make sure that the rules quote align with human values, including morality end quote, and that quote it's not for a company to decide end quote. This is really interesting coming from a leader of a company that used to have the motto don't be evil. Of course, Google shed that motto years ago, so you could say it doesn't really apply anyway. One would think that creating AI that does not cause harm would, in the long run, be in the best interest of a company in that business. But I guess that's just crazy talk. Anyway, what Pichai is saying goes beyond chatbots and into broader implementations of AI. And while I disagree with him about the role companies should play vis-à-vis making sure AI doesn't cause harm, I do agree that AI is going to have an increasingly undeniable and disruptive impact on countless jobs and tasks. Now, this does not necessarily mean that the impact will always be bad, or that it will definitely eliminate jobs, although that is certainly a possibility. My hope is that we're going to push AI to augment rather than replace human employees. Otherwise, well, let's just take this to an absurd conclusion, right? 
Let's imagine we're in a world where AI is doing all the work. Humans have been replaced by AI, so humans are out of the equation because they are no longer needed for work to get done. So what is the AI doing work for? For what purpose? For whom is the AI doing work? If there are no more humans working, what is the AI doing? You don't have consumers anymore because you don't have income, right? People aren't doing jobs, so they're not making money, so there's no real economy, which means no one can buy anything because no one has income. The companies would cease to exist because there's no way for them to even make money. At this point, money is meaningless. There's no money; no one has a job. So it seems to me like that absurd conclusion would quickly fall in on itself without the implementation of something like, I don't know, universal basic income. With something like that, you could reach that Star Trek future, right, where nobody has to work, everybody gets the amount of money they need to be able to do all the basic things we need to do, and then we spend the rest of our time pursuing whatever it is we want. But we don't have universal basic income. That's a piece that's missing. And meanwhile, if everyone's pushing for this future where AI is replacing everything, where does that get us in the long run? Sure, in the short term, you could say we've cut way back on costs because we fired all the human employees, so we don't have those costs anymore. But that doesn't sustain itself for very long at all. I don't know. Maybe I'm just missing the big picture here. All right, well, while I'm spiraling in this weird future reality, let's take a quick break. Okay, we're back now. Before the break, I talked about Alphabet and Google CEO Sundar Pichai's comments on AI's impact. Of course, Google could be looking at a very specific situation in which AI could have a potentially negative impact. In fact, it already has had a negative impact on Google. 
So apparently last month, Google employees got word that Samsung is considering ditching Google Search for Microsoft Bing, which of course is augmented by ChatGPT. There's our AI angle. Now, if Samsung did do this, if Samsung chose to switch from Google Search to Microsoft Bing, that would be a huge blow to Google's dominant market share in the search space. For years, Google has enjoyed being the eight-hundred-pound gorilla in online search, and honestly, that's an understatement. According to Statista, and keep in mind that's just one source, Google Search took up nearly ninety-seven percent of the mobile market share in January twenty twenty three. Now, that's mobile, not search overall, but ninety-seven percent. So if that's true, if ninety-seven percent of mobile devices use Google Search as, like, the default search, there's really nowhere to go but down. You're not really going to creep up. You certainly are in the realm of being called a monopoly, and it'd be very difficult to argue against that, right? But if you're Google, you do not want to see those numbers go down, because that's bad for your business. So the word that Samsung, a massive, important player in the mobile space, could turn to Bing instead of Google sent a panic through Google, which The New York Times picked up on and then published an article about. And so that panic spilled out from Google internally to Google shareholders, and yesterday the company saw its stock price drop by around four percent. As for revenue, according to Gizmodo, a Samsung switch could mean Google misses out on around three billion dollars of revenue per year. Yowza. That's a huge amount of money. Of course, Google rakes in more than one hundred sixty billion dollars per year, so it's still an enormous company. 
But maybe Pichai was warning us about AI because Google's hoping to push out its own AI-augmented search tool in order to keep Samsung's business and keep that stranglehold, particularly on the mobile search market. Bloomberg reports that a couple of research papers show ChatGPT is pretty good at figuring out whether news will be good or bad for a company's stock price. One of the two papers analyzed how well ChatGPT could analyze statements that came out of the Federal Reserve to determine if they were quote unquote hawkish or dovish, and the other paper analyzed ChatGPT's ability to parse financial news headlines about companies and then figure out whether those headlines were a good indicator for the stock price or a bad indicator. And apparently the findings show that ChatGPT is pretty darn good at sussing that stuff out, you know, almost as good as a trained human analyst would be. So I wouldn't call ChatGPT superhuman. It's not like it's doing something that people cannot do. However, chatbots can analyze way more information at a much faster rate than a human can. And moreover, it's possible that individual investors could start to lean on tools like ChatGPT to figure out which investments could be safe bets, which ones could be long shots, and which ones might just be throwing your money away. So if you can get the same sort of guidance from ChatGPT that you would normally need a professional analyst to provide, well, that definitely can change the game. And it's possibly bad news for the analysts out there, because their jobs could become some of the ones potentially impacted by AI. Sony recently held its World Photography Awards and chose photographer Boris Eldagsen as the recipient of an award in the Creative Open category. Eldagsen has declined to accept this award because he says the image he submitted was not in fact a photograph he snapped, but rather a computer-generated image. 
Eldagsen says that his intent was to test a major photography competition to see if the field is ready to distinguish between photographs taken by human photographers and computer-generated imagery, and he described his state of mind as that of a cheeky monkey, his words, not mine. I can certainly appreciate that. And while I'm sure this matter will spawn a lot of criticism for Eldagsen and the competition, I think his intentions were good ones. His actions show that for things like competitions, we really need to take into consideration the possibility of AI-aided or AI-generated design being part of the competition, or figure out how we prevent it from being part of the competition if that's our desire. Eldagsen said that the photography world needs to have an open discussion about AI's place in photography. Does it have a place? Should AI-generated photography even be considered photography at all? If not, how do we detect it to prevent someone with access to a powerful image generator from dominating what is otherwise meant to be a fair competition between human photographers? These questions aren't just hypothetical now that image-generating technology has reached a sufficient level of sophistication that it can pass itself off as human-created work. A spokesperson from the World Photography Organisation appears to contradict at least some of Eldagsen's statements, claiming that he had made it clear that a generative tool at least played a part in constructing the image, and that they in turn thought that was interesting, and that he had quote unquote fulfilled the criteria for the category, and that the organization only withdrew from conversations with him after he said that he had purposefully attempted to mislead the competition and then declined the award. 
So whether the organization was fully aware of the AI involvement, or maybe the extent of the involvement was miscommunicated, I can't really tell. But it sounds like the organization is saying, no, we knew what was going on when we gave him the award, he just declined it, whereas he's saying, I submitted this as my own work, but in fact it was AI-generated. I don't know who's telling the truth or where things got lost along the way. It may be that it's a little more complex than that, and I just don't understand it all yet. Okay, one more generative AI story. Recently, a person using the handle Ghostwriter used generative AI to create a song featuring the generated voices of Drake and The Weeknd. The song is called Heart on My Sleeve, and it became a kind of viral sensation on platforms like Spotify and TikTok. So the voices sound like Drake and The Weeknd, but they are generated by AI. Some refer to this as deepfake audio. Neither artist was involved in the actual making of the song, and it raises a bunch of questions about image and personality rights. Here in the United States, we don't have any laws at the federal level that protect personality or image rights. So you have no right to your image or to the expression of your personality at the federal level, anyway. Some of the states do protect those rights, but not the federal government. So if you were to create deepfake audio for some song using someone else's voice, there's nothing illegal about that at the federal level. There have been companies that have made copyright strikes against deepfake audio, but you can't copyright a voice or a personality. So I guess they're just using copyright because it's the only weapon they have right now for those kinds of cases. But it really shows how the US needs to revisit concepts like image protection and personality protection laws. All right, that's it for this episode, before I run too long. I just want to say I hope you're all well, and I'll talk to you again really soon. 
TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.