
Tech News: May the Fourth be With You

Published May 4, 2023, 10:19 PM

Sadly, we don't have any Star Wars news today. But we do have tons of news about AI, Meta facing governmental scrutiny, Microsoft making a questionable decision with regard to Edge, and lots more!

Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? It's time for the tech news for Thursday, May the fourth be with you. Yeah, we're big Star Wars nerds over here at iHeart, as you might imagine, so we've got a whole email chain with podcasters you're probably familiar with, and a lot of producers that you may have heard of but, you know, aren't always on mic, sharing hilarious Star Wars related photographs. I wish I could share them with you, but instead we're gonna go through a whole bunch of tech news.

Now, on Tuesday, I talked about how the Writers Guild of America, or WGA, which represents writers for TV and film here in the United States, is now on strike. The negotiations between the WGA and various Hollywood studios could not reach a satisfactory agreement, so here we are. And I also talked about how one of the concerns that the WGA has, and one of the reasons that I'm covering the story on Tech Stuff at all, has to do with generative AI. Now I've got a little bit more information thanks to Vice, which has an article titled "GPT-4 Can't Replace Striking TV Writers, But Studios Are Going to Try." A sidebar: Vice is trying to avoid bankruptcy right now, and that company actually has some really great investigative journalism across all brackets of news. So those of y'all who don't use ad blockers on your browsers might want to head over to Vice and scan the articles. I have no connection to Vice whatsoever. I just hate to see a company that has really pushed for some great journalism, I mean, there's other stuff on there too, but there's some really great journalism on there, and I hate to see them in trouble.

Anyway, back to the story. Vice's Chloe Xiang explains that one of the WGA's demands was for studios to promise that AI would not become part of the creative experience for film and television, that AI would not be used to do stuff like punch up a script or to generate a draft of a story, or otherwise perform work that should be done by a union-represented writer. The WGA's concern is that studios would lean on AI to do some of the early work. It's not that AI would completely replace human writers, but rather that studios might use AI to do, like, a very rough draft or even just a treatment, and then would hand that over to a human writer to polish it up into something that's actually usable. But the human writer would be doing so at a lower rate, because the writer technically wouldn't be coming up with their own ideas. Instead, they'd be refining an already existing idea. So this is kind of like if you've got a script and you go out and you get a writer to come in and read the script and punch it up a little bit; that second writer isn't going to be making the same amount as the primary writer did. And so what the writers are saying is, you're just trying to push more work over to AI, and then we're left to pick up these terrible ideas that AI is creating, and we're expected to make something of them, and at a lower rate of pay. The studios have balked at the WGA's demands, saying that this decision should be made year by year, because they're framing AI as a sort of advancement in tech, and the industry shouldn't be ignoring advancements like that.
The WGA counters with the observation that studios have been pushing writers to move toward a more gig economy model, so kind of a work-for-hire situation where writers have to hustle gig to gig, with writers' rooms getting smaller in the process, which in turn means there are even fewer opportunities for writers to find work. And y'all, I'm a Gen X feller. I grew up before the gig economy was really a widespread thing, and I don't think I have seen a more destructive environment that, you know, doesn't directly tie into really terrible stuff like racism and misogyny. The gig economy is terrible. It clearly forces tons of people to work themselves to exhaustion while they constantly stress about whether the money they make this week will pay the rent next week. And meanwhile, the companies that are benefiting from those workers are always figuring out ways to keep more and more of the cash for themselves. Now, you could just say same as it ever was, but it's really more blatant with the gig economy. Anyway, this is the reason unions exist: to provide workers protection in aggregate, because on a one-on-one basis there is an undeniable power imbalance between the employers and the employed. So I'm glad the WGA exists to fight these fights and to lay this groundwork. I don't know how that's going to turn out. Obviously, the strike is still in progress; no agreement has been reached. I don't know if the studios will budge on this, but I think it is important to set those rules now, because even if you argue that generative AI at the moment isn't good enough to even do this, that wouldn't stop studios from trying, and it doesn't stop them from trying again as the AI gets better. So setting out rules to protect unionized workers makes sense.

Over in Washington, DC, the US government is going to play host to executives from companies like Microsoft and Alphabet, that's Google's parent company, in order to hold a discussion about AI. Theoretically, this meeting will explore the possible risks that AI could pose and the methods that companies should use to guard against those risks. But y'all, I've watched politicians talk to tech experts before, and it often feels like the two sides are speaking totally different languages. Hopefully the questions that these executives are going to get asked will be relevant, and hopefully the answers they give will be direct, but honestly, that's a long shot. The motivation behind the talks, however, is a very good one: how can we make best use of AI while avoiding as many serious problems as possible? Already with generative AI, we're seeing numerous issues, ranging from copyright concerns and plagiarism to answer accuracy, the tendency for AI to quote unquote hallucinate and generate wrong answers, and misinformation. And there's a lot more to really consider. And that's just generative AI; there are lots of other versions of AI we need to really consider. There are also concerns about how it could impact the job market, as we see with the WGA strike, as well as the IBM CEO's recent revelation that the company may consider using AI to fill as many as seven thousand eight hundred empty positions over the next few years rather than putting a human being in those jobs. There is a lot of ground to cover here, and I'm sure next week I'll have some sort of update to this story. The White House isn't the only US government entity looking into potential risks with AI.
The chief of the Federal Trade Commission, or FTC, has said that the agency is investigating whether AI could create more disparity between companies, saying that it could potentially enhance the power of dominant firms, according to Reuters. So, in other words, it could become an anti-competitive risk. The FTC chair said that the agency is looking into how AI could make some problems much worse, such as allowing companies to more effectively commit fraud. For example, in the wake of spectacular collapses like the cryptocurrency exchange FTX, there's increased pressure to uncover and address fraud before it can have a massive impact on investors and customers. Khan also pointed out that AI tools could potentially decrease competition by allowing for price-fixing strategies across industries. And it's not a far-fetched idea. We've actually seen something similar to that play out in a microcosm, because in Texas there's this ongoing issue where landlords have been using a specific company in order to set rental prices, and there are class action lawsuits saying that this amounts to price fixing, that it has a collusive effect across landlords who are then setting rental prices at exorbitant levels because there's no competition. There's nowhere else to go where you could pay less rent, because this centralized service has removed all competition, and you effectively have price fixing, even if the landlords themselves didn't intentionally set out to do that. That's the effect this approach has. Oh, brave new world, to have such artificial intelligence in it.

The Verge reports that Microsoft is unveiling some new bells and whistles with the Bing chatbot. Now, as you may recall, Microsoft is leveraging OpenAI's ChatGPT to power the Bing chatbot, which, unlike OpenAI's ChatGPT, actually has access to real-time information. ChatGPT's info cuts off at twenty twenty-one; it doesn't have information more recent than that. Bing is different. Now, apparently, users on Edge will be able to use Bing to do stuff like find a restaurant and book a reservation, all within Bing Chat, without having to go back and forth between multiple websites like review sites and the restaurant's own site, and then maybe something like OpenTable or whatever. So you might have a conversation like, where's a good place to take my partner for their birthday? They are allergic to shellfish. And then the chatbot would make a few recommendations, and then you might say, okay, well, what's the menu for place X? And it gives you some examples, and then you say, great, make a reservation for Friday night at nine p.m. for two, and it takes care of it. It's a lot like what Google showed off at a Google I/O event a few years ago with its Assistant feature. In that demonstration, the company showed how the Assistant could pose as an actual human being and make reservations on your behalf over the phone, which was, in my opinion, kind of creepy. Bing's approach is obviously a bit different; it's web based, it's not necessarily making phone calls on your behalf. The Verge also says that another feature lets you use Bing to find content to watch without having to search around to see what platform has what. So instead of saying, oh, I want to watch the latest episode of Barry, but I don't remember that it's on HBO Max, how do I find it? Do I just start searching? Do I go to different platforms? Do I look to see if it's on their pages?
With this, you would just tell Bing Chat and it would send you to the right location. You would just say, I want to watch the latest episode of Barry, and boom, it's into HBO Max. So it's useful in that regard as well. And if you want to learn more about the various features that are being unveiled, you can go to The Verge; they have an article titled "Microsoft's Bing chatbot gets smarter with restaurant bookings, image results, and more." Speaking of "and more," we'll have more after we take this quick break.

We're back, and now we switch from AI to some Meta stories. Not Twitter this time, Meta. So first up, France's antitrust agency has leveled a mandate against Meta. The company has to change how it gives access to ad verification partners. So essentially, what the agency is arguing is that Meta has this dominant position when it comes to online advertising and it has used that to dictate terms to partners, and moreover, that Meta needs to be more transparent with its analytics and to allow more ad verification companies access to those analytics. Ad verification companies, what they do is, I mean, it's in the name. They're essentially trying to verify that an ad campaign is doing what it's supposed to do. So an ad verification company monitors analytics to make sure that the reach is what you wanted, like you're hitting as many people as your ad campaign was supposed to, and that the results are positive ones, that you're getting more traffic and more commerce, whatever it may be, because of those ads. However, to be able to do that, you have to have access to the data, and according to the French regulators, the problem is that Meta would offer that kind of access to major partners, so big ad verification companies could get access to that data and actually do this, but smaller ones were denied that opportunity, and this ultimately puts those smaller companies at a greater disadvantage compared to big players in the space. It is anti-competitive. This is not just based on the agency's opinion. An ad verification company called Adloox actually sought access to Meta's analytics and received a denial. This is a smaller ad verification company located in France. So Adloox subsequently brought this case to the antitrust agency, and things progressed from there. The agency says that Meta has two months to change its approach and adjust the rules for ad verification access or else face, I don't know, some sort of repercussions, I imagine. I couldn't actually find out what the consequences would be for Meta if it failed to do that, but it would probably be some form of actual legal action against the company.

Back here in the States, Meta faces more government scrutiny. The FTC, you might remember them from earlier in the episode, says that Meta broke the rules and violated a policy order that was issued in twenty twenty with regard to how the company collects and uses personal data, specifically that belonging to kids. And so yesterday the FTC proposed barring Meta from monetizing data belonging to kids completely. Meta would not be allowed, it would be illegal for Meta, to exploit the personal information that came from kids. You might remember that a couple of years ago, Meta was in the hot seat when a whistleblower came forward with accusations that the company was regularly pursuing business strategies that could directly or indirectly harm people, including children. In fact, at that time, Meta was actively developing a version of Instagram targeting kids,
because currently Meta's platforms are intended for those who are at least thirteen years old. The FTC is essentially saying that Meta has proven repeatedly to be a poor steward of personal information and that there exists a need to update the earlier privacy agreements to place further restrictions on how Meta collects data and how it can then use that information, particularly with regard to children. Plus, the FTC says that Meta should not launch any new products or make substantive changes to existing ones until there is confirmation that those products comply with the terms of this agreement. Meta has thirty days to respond, and Meta spokesperson Andy Stone made no bones about it. He said that this is a political stunt and the company is going to push back against updating this agreement, which is not a big surprise. And I don't know if I agree that it's fully a political stunt. I think it partly is, but I also think it's something that is genuinely needed. So yes, I think in part this is about scoring political points, but I also think that it is beyond time for us to consider how to best protect private information from being exploited in ways that have, you know, truly negative consequences further down the line.

While the FTC is pushing Meta to back off of exploiting the personal data belonging to kids, some senators are taking an even tougher stance. Four senators, two of them Democrats and two of them Republicans, so it's bipartisan, have introduced legislation that would ban social media platforms from allowing anyone under the age of thirteen to have an account, and further, anyone under the age of eighteen would need permission from a legal guardian before being allowed to make an account. On top of that, for those kids who are between the ages of thirteen and eighteen, things would have to work differently on those platforms. Namely, services like Facebook or TikTok or whatever would not be allowed to use algorithms to recommend content to those users. That is a huge change. As I'm sure you all know, these platforms rely on algorithms to select and serve content that the platform determines is most likely to keep that specific person engaged and stuck to the site or service, whatever keeps the eyeballs glued to that screen as long as possible so that more ads can be served to them. That is the name of the game. And if this legislation were to be adopted into law, the social networks would not be able to treat kids the same way they treat adults. However, there are a lot of steps that have to happen before this proposal can turn into an actual law, and there are some debates within government as to how these restrictions might infringe upon civil liberties, like, you know, how could they potentially violate the First Amendment, for example. And if you can make an argument, a valid one, that such a measure would be in violation of the Constitution, then that's, you know, a non-starter, right? The legislation is not going to be able to go anywhere. Also, we have to keep in mind there are obviously tech lobbyists who spend a lot of time and a lot of money bending the ear of politicians in an effort to minimize regulations and to get as favorable a political environment as possible to do their business. So by no means is this new proposal absolutely bound to become law. It might, but there's nothing that is guaranteeing it.

Then we have the state of Montana here in the United States, which figuratively told the rest of the world, hold my beer.
So Montana's governor, Greg Gianforte, had previously supported a bill that would effectively ban TikTok in the state of Montana. But then folks pointed out that this bill, by singling out TikTok, was likely to be deemed unconstitutional, because here was a state government acting out against a specific company. It looks like it's a targeted attack, because it kind of is. So then Gianforte issued what's called an amendatory veto. Essentially, this proposes different language for the bill, and if the state legislature adopts that language and changes the bill so that it no longer singles out TikTok, the governor agrees to sign the bill into law. Otherwise, the governor will veto the legislation, since, you know, constitutional issues would ultimately mean that the law would eventually get overturned by a court anyway. But here's the problem. This new language, by removing the specific reference to TikTok but keeping some of the other vague passages, would technically ban all social networks in Montana for everyone, every single social network, because the legislation says that it would be illegal for any social media application to facilitate quote the personal information or data to be provided to a foreign adversary or a person or entity located within a country designated as a foreign adversary end quote. Okay, well, that's a big problem, because it applies even if you're not scraping data, even if you're not, you know, the recipient of a fire hose of information directly from the platform. Because obviously that's the fear with TikTok, right? The concern about TikTok, at least in the United States, is that TikTok is collecting enormous amounts of information and funneling it to a Chinese company, which in turn might be sharing that with the Chinese government, and that that is potentially a risk to national security. That's the concern about TikTok. But the way this language is formed, it means that if anyone in a country that's a quote unquote foreign adversary, so, for example, anyone in China, could log into, say, Facebook, and then check out, say, my account on Facebook, well, they would see my name; that's my personal information. If I hadn't, you know, hidden it, they would see things like my birth date; that's my personal information. Like, that's personal info I have on Facebook that anyone in China could see if they went to my page. Well, according to the rules, or the language, set down in this proposed legislation, that would be against the law in Montana, which means that social platforms would have to just stop operating in Montana or find some way to prevent anyone in a, quote unquote, you know, foreign adversarial country from being able to access data belonging to Montana citizens. It creates a spectrum-wide ban. So, yeah, this is a real issue. It actually really shows how hard it is to create legislation that does not include vague language that could have unintended consequences. Because, I mean, how do you defend against that? If you've passed a bill that has this measure in it, and then someone says, well, people in China can see my information, so you have to block Facebook, you know, what are they going to say? Like, no, we didn't mean it like that? I mean, it's just, it's a mess. Oh boy.

Speaking of TikTok, by the way, Eric Han, who was the head of TikTok's US trust and safety operations, is leaving the company.
This had been rumored for a while, and apparently he has said that his role was essentially a poisoned chalice, that he had the unenviable job of being the person responsible for leading efforts to work with the US government in order to avoid a nationwide ban on TikTok, and he felt he was essentially set up to fail and to be the scapegoat for that failure. It's kind of hard to disagree with that, because the US government recently has appeared to have kind of a laser focus on banning TikTok. For the record, I do think TikTok poses some risks. I think that TikTok is potentially a dangerous thing when it comes to data collection. But as we just talked about with the previous news item, all social networks are risky like that. It's not just TikTok. I mean, TikTok presents a more clear and present danger, at least as a perception. I don't know if that's the reality. I honestly don't know if people in China have that direct access to TikTok's user data, but even if they don't, without strict privacy laws in place within the United States, all of our user data is still being gathered and bought and sold. So it may mean that there's an extra step involved, but it doesn't really matter, because the Chinese government could get hold of all that kind of information across all different platforms just by spending the money. It doesn't have to have that implanted app in the US for this to happen. So I get Han's position here, and it really kind of is unfair when you think of it that way. And again, it's not that I think TikTok isn't dangerous. I think TikTok potentially is, but then I also think all social media potentially is. It's that we need to address privacy laws in the United States, and we've needed to do it for decades, and whether it ever happens or not, I don't know. My guess is what will happen is we will see some sort of massive ban on TikTok, which is not gonna solve the problem, because there are gonna be all these other sources of data, you know, data siphons, that are going to continue the issue. The problem will still exist. Younger people will be very upset because TikTok was destroyed, and, uh, the United States will continue to do nothing about protecting online privacy. So yeah, big old mess there. All right, I'm done ranting about that. I'm gonna drink some chamomile tea, and we'll be back after this quick break.

Okay, we're back, and now to talk about Microsoft again. So Microsoft continues to make some decisions that I think are unwise. I think ultimately some of the company's strategies are going to result in regulatory agencies around the world giving the company a serious rap on the knuckles with a ruler. I mean, we're already seeing that unfold in Europe with certain Microsoft policies. Well, that company seems to be doubling down. Like, I just don't understand the logic behind the decisions. So The Verge reported on this issue and cited Reddit users who are saying that Microsoft is pushing out a change to IT admins in various organizations that are Microsoft customers, and Microsoft is saying that it's going to change the behavior of at least certain links that are posted in stuff like Outlook, which is email if you're not familiar, and Microsoft Teams, which is Microsoft's video conferencing tool, similar to something like Zoom. Anyway, the company says it's going to make it so that links shared in these services will push users to Microsoft Edge if they click on them.
So, in other words, it won't matter what browser you have set as your default browser for your operating system. Instead, if you click on a link in Teams or in Outlook, it's going to open up an Edge browser window and go to that web page. Now, y'all, I don't feel I need to point out that this is falling right in line with the sort of anti-competitive accusations regulators have been leveling at Microsoft recently: that by tying your various products together so tightly and ignoring things like someone's setting of a default browser, it is inherently anti-competitive. Also, while I'm not sure what the scope is on these situations, like I don't know if it's going to affect every single link that's shared or if it's just certain ones, I do know there are tools that I rely upon in my job that are optimized for a different browser than Edge. So if you want to use that tool and you want it to go smoothly, you have to use a specific browser to do it. I'm sure you've all encountered this. Back in the old days, it used to be that web pages would be optimized for one browser versus another, and if you were to go to that web page using a different browser, it looked terrible. We've largely moved away from that, for the most part, but we still have certain web-based tools that are designed to work seamlessly with specific browsers and maybe not so well with others. So opening a link that would send me to an Edge-based browser, that's gonna send me over the edge. Of course, worst case scenario, I could just copy the link in my email and then go into my preferred browser and then paste the link into the URL bar. But that's madness. I should just click the link and then be able to go straight to the default browser and see what I want. So, as you might imagine, IT admins are not super thrilled about this change. Not only is it something that's frustrating to the admins themselves, but also you've got to remember, these are the folks who have to communicate those changes throughout their various organizations, and then they are the ones who get blamed for it, even though they're not the ones who made the policy change; they're just communicating it. But my gosh, I have seen cases where the person in charge of IT becomes the recipient of so much abuse because something stops working the way it's supposed to. But it's out of the IT admin's hands, because it's ultimately coming from the provider, in this case Microsoft, so it's not really their fault. If the IT admin does happen to be at an organization that is a Microsoft three sixty five Enterprise customer, because there are different levels of business customer for Microsoft three sixty five, if they're at the Enterprise level, well, good news: they can actually change that policy. They don't have to use the Edge version. They can turn that off and allow people to continue to open up links in their default browsers. But if they're working at a company that has a Microsoft three sixty five for Business account instead of the Microsoft three sixty five Enterprise account, well, the business customers are going to be subjected to this change. So I guess I'm saying good luck in those courtrooms, Microsoft.

Over at Google, some employees have expressed, let's say, a little bit of consternation regarding the company's extensive cutbacks,
while Sundar Pichai, the company's CEO, took home a cool two hundred twenty-six million dollars in compensation last year. As you might imagine, internal communication tools played host to numerous memes that aligned on a few basic messages, you know: Google morale is dropping, the company has laid off thousands of employees, the company has reduced perks and benefits across the organization, and then the CEO is one of the highest paid in the United States. The disparity is hard to just ignore here, and it definitely makes it more challenging for Google leadership to create messaging around things like sacrifice and cutbacks in the face of a tough economy, when the head honcho could pull a Scrooge McDuck if he wanted and go swimming around in a big old vault of money.

Finally, The Independent reports that the Climate Action Against Disinformation, or CAAD, commission has found multiple instances of Google running advertisements against YouTube videos that contain misinformation about the climate crisis. So back in twenty twenty-one, Google updated its policy and said it would no longer serve ads against content that contradicts the scientific consensus on climate change. And what the commission found was that Google has failed to enforce its own policy, and in fact the company has profited off of videos that actively spread climate misinformation. And of course the channels pushing the misinformation benefit too, because their channels are monetized, so they're getting money as well, and that means there's a financial incentive to continue creating misleading content on YouTube, because if you can monetize it and you can get popular, then you're gonna make money. Google reps say that while their policy does state that such videos should not have ads served against them, which is probably something that advertisers want as well, since brands are typically not super keen on being associated with messages of climate change denial, well, their systems are not perfect, and some stuff will slip through. And Google took the list that the commission submitted, which had like a hundred different videos that were in violation of the policy and were still monetized, and Google then subsequently demonetized those videos. But I think we can draw a couple of conclusions based on this incident. One, it is genuinely hard to enforce policies on a platform that receives such a huge amount of content every single minute. It's actually at a point where it's literally impossible for humans to review everything, so it is difficult to catch it all and some stuff might slip through. So on that part, you can kind of see Google's point of view. However, two, if no one calls on the company to be held accountable, and if there's money to be made, we should not be surprised when we find videos that violate its policies. So eternal vigilance is what is called for. Because, as the commission found by bringing this to Google's attention, Google then did go and demonetize those videos. But if the commission hadn't done that, those videos probably would have stayed up, monetized, for who knows how long. So yeah, we have to hold these companies accountable to their own frickin' policies. This isn't even about holding a company accountable to the law. It's holding a company accountable to the things it says it will do.

All right, and that's it for the tech news for today. I'm sad I didn't have any fun Star Wars news.
It was kind of hard to talk about Star Wars tech, because either I'm talking about special effects technology or I'm talking about fictional technology that we only wish existed but doesn't actually exist. Maybe next year. We can hope. I hope you are all well, and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
