WeWork declares bankruptcy, there's a ton of AI news in the tech space, and some attendees at an NFT conference received painful burns. Plus much more.
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? It is time for the tech news for Tuesday, November seventh, twenty twenty three. First up, last week I mentioned that the company WeWork was rumored to be headed for bankruptcy. Now the company has filed for bankruptcy. At its height, WeWork was worth somewhere in the neighborhood of forty seven billion, with a B, dollars. Currently it's worth less than fifty million dollars. That is an incredible fall from grace. WeWork is not actually a tech company, though it was treated like one, like it was a tech startup, but you know, its business is really just buying or leasing office space and then subleasing it to customers. WeWork was already in a rough spot before the pandemic even hit, and obviously in the wake of the pandemic, corporate America has changed its approach to work in quite a few places. Now, this doesn't actually mean that WeWork is totally going to go out of business. That's not what bankruptcy means. What it means is that the company is going to receive protection against, you know, its various creditors, because it's in deep, deep debt. And reps are saying that the plan is to reorganize the company and to have as little disruption in business as possible. While WeWork will likely have to get rid of certain properties, you know, sell them off or cancel leases or whatever, the company wants to keep as many properties in operation as possible. At least that's what the reps are saying. Personally, I think WeWork as a business plan has never been super strong. The whole concept isn't a new one, and it typically has pretty small profit margins. So I'm not saying it's impossible for WeWork to find stability, but I'm personally pretty pessimistic about it. OpenAI held its first developer conference yesterday, called DevDay, and early on OpenAI kicked things off by giving developers an incentive to make apps built on top of GPT technology through five hundred dollars in credits. The company showed off some new capabilities to developers, including a simplified way to create customized GPT agents using natural language, which is pretty incredible, really. OpenAI is also going to launch an online store where developers will be able to sell their custom GPT agents to customers. The company also announced it would offer protection to developers against copyright infringement claims, so OpenAI says it will cover costs for those kinds of legal claims against developers. And the company announced that it hit a huge milestone of one hundred million weekly users, and it sounds like things are just getting started, which I'm sure is a source of anxiety for a lot of creative types out there who are already concerned that models like GPT may be devouring their work and ultimately will set up technologies that compete directly against them. TechXplore's Rob Nicholls has an article titled "Do you trust AI to write the news? It already is, and not without issues." In this article, Nicholls tells the story of how Microsoft reposted an article that was originally published in The Guardian. In the repost, Microsoft also enabled a generative AI tool that automatically created a poll and attached it to the news story. Unfortunately, this news story was about the murder of a young woman in Australia
named Lilie James, and the poll asked readers to speculate about the nature of her death, which is absolutely horrifying, right? It's clearly negligent on the part of Microsoft to allow that through, and The Guardian had no involvement with the use of this AI tool. As Nicholls points out, media companies are experimenting with AI to generate stuff like polls because polls are proven to boost engagement, but they also take time away from staff. I'm reminded of when I was working for HowStuffWorks dot com and we would have things like quizzes and galleries, this kind of stuff, that took a lot of time to create. They did create a lot of engagement, which is why the company loved them so much, but it meant that we weren't actually spending time doing stuff like researching and writing articles, which is really what most of us wanted to do. So why not offload those kinds of somewhat mindless tasks to AI? Well, as this story indicates, the subject matter of the news article is incredibly important. Nicholls goes on to reference various media companies that are going a step further. They're leaning on AI to actually generate articles, not just supplemental material but full-on news articles. And I've talked in the past about how my former employer How Stuff Works did that for HowStuffWorks articles. I have not actually been back to the website for a few months now to see if that's still the case, but that's what it was like in the summer. Nicholls argues that AI's shortcomings can create unfortunate, tragic, and even dangerous consequences, and I think that's right on the money. With a very strong editorial staff, you could potentially weed out articles that are misleading or harmful. But at some point you're asking editors to act both as editor and as writer, to rewrite pieces, and then you start getting into unmanageable workloads. So I'm not entirely convinced that it even makes sense from a business perspective, and it certainly isn't going to help things like editorial morale. Meanwhile, the news division of CBS has launched a unit dedicated to investigating things like deepfakes and misinformation, particularly from generative AI. The unit has the name CBS News Confirmed, and it will have actual, real-life human beings at the helm, thankfully. Claudia Milne and Ross Dagan are going to oversee the department, and the company is looking to hire experts in journalism and AI. Again, this is really encouraging, y'all. I mean, not to get on a soapbox, but journalism in general has taken a real bad hit over the last couple of decades, and to see a company say, no, we want experts in journalism and in artificial intelligence so that we are taking a responsible and accountable approach toward reporting on this kind of stuff, I think that's a huge step in the right direction. Moreover, this is arguably already a necessity, because generative AI tools are pretty sophisticated, widely distributed, and largely unregulated, so it is something we need to put in place in order to prevent harm from being committed across entire populations. Microsoft announced a partnership with Inworld AI to create Xbox developer tools that, well, I mean, with Inworld AI in the name, I'm sure you've already guessed it: they're going to integrate AI in various ways into the game development cycle. The idea is that developers will be able to create AI-powered elements in their games, including stuff like AI-generated stories and quests, and even characters.
Tom Warren of The Verge wrote about this, and his piece actually surprised me, because originally I assumed that these AI tools would only cover the actual game development phase on the back end, that developers would be able to use these tools to flesh out content in a game while the human writers would focus on the most important parts of the game. So your human writers might be crafting a really satisfying and emotional story, right, and you might offload things like random NPC conversations to your AI so that you're not spending a ton of time generating lines that players may or may not ever encounter. But according to Warren, the tool will also allow, quote, an AI character engine that can be integrated into games and used to dynamically generate stories, quests, and dialogue, end quote. Now, maybe my interpretation is off, but by my reading, that sounds like it could mean that you could have these active within a game, not just in game development but in the game itself, so that as you're playing the game, you are encountering characters who are dynamically generating dialogue at that moment, as opposed to the dialogue having been generated during the game development phase, with humans then saying yes, let's include that in the game, or no, that doesn't really work, let's strike it. If it's something that's truly dynamic, then it may be live in the game itself. So you could have a conversation with a character and it could be totally different from that of someone else who's playing the game and having a conversation with that same character. That, to me, is really interesting. Now, that's assuming that my interpretation is correct, and I could be wrong. But if I am right, that means we could see an end to NPCs spouting off the same lines over and over, which would mean no more memes like "I used to be an adventurer like you, then I took an arrow in the knee." Microsoft said that developers will determine if and to what extent they'll use AI, so it's not like this is mandated. And obviously this is also a very sensitive topic. You can frame this as a way for developers to make better use of their time and to be more efficient, but you could also frame it as a way to take work away from people, right, whether it's a voice actor replaced with a simulated voice, or game developers or writers, et cetera. There's this deep concern that some game studios could choose to go with the cheaper AI option rather than pay, you know, those pesky human beings to do the work. And among gamers there's also a concern that AI-generated games will not measure up to the top tier of titles that human beings have made in past years. On Saturday, xAI, the artificial intelligence startup from X-obsessed Elon Musk, launched a chatbot called Grok. If you're curious what sets Grok apart from other chatbots, well, to the shock of absolutely no one, it's a bit of a potty mouth. It takes a more grouchy and vulgar approach to answering questions, almost as if the chatbot is insulted that it's being bothered to answer those questions in the first place. Elon Musk was very coy about, gosh, I wonder who decided that the chatbot should have an attitude. Anyway, xAI has indicated that the chatbot will have a couple of different modes. It'll have a fun mode, which presumably is the one that has all the attitude, and then I'm guessing it'll have an alternative that'll be a little more straightforward and standard, something more in line with the other chatbots that you can find out there.
Musk's plan is to release the chatbot to X premium subscribers once it emerges from beta, which is pretty darn funny, because for ages Musk has argued that one of the biggest problems with Twitter is the bots, and now he's releasing one to Twitter. But whatever. Okay, I'm gonna take a quick break to thank our sponsors. We'll be back with some more news in just a moment. We're back. So Lucas Ropek of Gizmodo has an article titled "Cruise robotaxis require remote human assistance every four to five miles." As that headline suggests, it has been a bumpy road for the autonomous taxi company, and you've got to remember, Cruise is owned by General Motors. Just recently, the state of California revoked Cruise's license to operate autonomous vehicles due to concerns that the company's vehicles were, quote, an unreasonable risk to public safety, end quote. This news story is that apparently staff at Cruise frequently have to intervene and provide what's called remote assistance to Cruise vehicles, due to their tendency to encounter situations that the vehicles aren't able to navigate. Tiffany Testo, a spokesperson for Cruise, said this happened every four to five miles of travel among the company's vehicles. So not four to five miles per vehicle, but rather, across the fleet, every four to five miles there was a need to provide remote assistance. Ultimately, the story seems to reinforce that we're still a pretty good ways from a future of truly autonomous vehicles and that human intervention is still a necessary component. I will add that it wasn't very clear what extent that assistance goes to, right, whether it's just providing a little bit of data and then the car takes care of everything else itself, or if it goes so far as to require remote operation. TikTok is ending its Creator Fund on December sixteenth. In case you're not aware, the Creator Fund is a pool of money, currently valued at around two billion dollars, that TikTok uses to issue payments to creators who generate a ton of views for their videos. The whole idea was that this would be a direct way for TikTok stars to monetize their work, because in the past they really had to hustle. Right, they could go viral, but there was no way to make money off of that unless they also landed a sponsorship deal with a third party. The Creator Fund was meant to be a more direct path to monetization, but it didn't get a very good reception. Lots of creators complained that when they did receive a payout, it was pennies on the dollar. They were barely making any money at all, and it wasn't worth the amount of work, nor did it reflect the tremendous number of views some of these folks were racking up. So that program is going away on December sixteenth. However, TikTok does have an alternative in place called the Creativity Program, and it sounds to me like it's pretty similar to the Creator Fund, except this one is specifically focused on longer-form videos, stuff that's longer than a minute. And I'm not sure if this also means that TikTok will be better when it comes to storing creator data. It came to light earlier this year that some creator financial data, like personally identifying and very private financial data of creators, was being stored on servers in China. This was despite the fact that TikTok representatives had been claiming that all that kind of information would only be on servers in the United States or in Singapore.
But Forbes investigated this and found that at least some of it was showing up on servers in China, which is concerning. Sony is following Microsoft's lead by discontinuing the PS four and PS five integrations with X, also known as Twitter. Microsoft ended integration for the Xbox way back in April. Sony has not commented on the reason for ending integration with X, but if I had to guess, I would say it has something to do with X's change to its API, or application programming interface. Back in April, Twitter, as it was known at that time, shut down most of the features that were found in the free tier of its API. Instead, it introduced paid tiers, and the enterprise-level tier had potentially really hefty price tags. Wired reported last May that some companies could be looking at paying as much as forty two thousand dollars a month in order to make use of this enterprise API. So, assuming Sony was incurring substantial fees to allow for X integration, it's no wonder that they've decided to shut it down now. It's more surprising that it actually stuck around half a year longer than the Xbox integration did, so that's something. Now, you might remember that Epic Games, the maker of the insanely popular title Fortnite, got into a massive legal battle with Apple regarding how Apple handles payments within iOS. That is still not fully resolved, because while a judge ruled mostly in Apple's favor, the judge did give Epic some concessions, and that whole case is now headed to the US Supreme Court, which has been asked to weigh in on it. Meanwhile, Epic is now pursuing a similar legal strategy against Google. Like Apple, Google's strategy in mobile is to funnel payment options through Google itself, and that allows Google to take a commission, sometimes as large as thirty percent per payment. Epic argues that Google's policies are anticompetitive, that they hurt consumers, and that they drive up prices. Whether this case will follow the path the Apple lawsuit took remains to be seen. Complicating matters is the fact that Google is currently in the hot seat with the US government in a much larger antitrust investigation. You know, I've often talked about how the way NFTs got rolled out was a total disaster, that while I don't necessarily think NFT technology has no place in the world, the way it was introduced was really, really dumb. NFTs didn't amount to much more than a digital receipt, despite promises that they were going to enable all sorts of interesting implementations. Instead, it all became a speculative circus that ultimately ended in disillusionment, with the tech's reputation suffering a massive and maybe even fatal blow. But despite all that, there are still true believers out there. Some of them are pretty badly burnt, and I don't mean that they were burnt by NFT values crashing. I mean they literally got burnt. You see, in Hong Kong there was this big event called ApeFest, which was for devotees of the Bored Ape Yacht Club NFTs, and while there, a bunch of attendees reported that they later suffered really bad pains. Some of them weren't even able to see. And while I haven't seen any definitive proof about the matter, the speculation is that the venue hosting this event was using powerful UV lights as part of its lighting rig, and those lights overexposed attendees to UV radiation, so they essentially got sunburnt inside, and some even suffered burns to their eyes.
According to Jess Weatherbed of The Verge, a totally different event in Hong Kong way back in twenty seventeen had a similar problem. Folks discovered that the venue was using UV lights that were meant to disinfect stuff, so they were emitting UV at a much higher intensity than, say, your average black light. Now, just to be real here, I'm wishing everyone affected a swift recovery, because it just sounds awful. And that's it for the tech news for today, November seventh, twenty twenty three. I hope you are all well, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.