Today's news covers a lot of important problems cropping up around Big Tech, from anticompetitive practices to deceptive lobbying strategies to the spread of misinformation on social networking platforms. Plus learn how researchers were able to hack into a Canadian broadcast satellite.
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? It's time for the tech news for Thursday, March 31, 2022. We should all be thankful that a news episode did not fall on April 1, because who knows what would have happened then. I do know what's going to happen now: we're going to complain a lot about Big Tech, because there are a lot of stories that just continue to put a lot of companies in a pretty bad light. I would love for that not to be the case, but here we are, so let's get started.

Now, a lot of the news, like I said, is going to be about Big Tech shenanigans in this episode, though not as many stories as I originally had lined up. In fact, I threw out a ton of them because the episode was starting to go too long, and also I was just starting to get beaten down by it. Anyway, one of the things I read about today that I found interesting is an organization called the Tech Oversight Project, which has created a wiki specifically to help folks read up on the various issues surrounding Big Tech over the last several years, like things that Big Tech companies have done that have been, you know, not good. The wiki is fittingly called the Big Tech Wiki, and you can read a great article about this resource and the organization behind it over on Gizmodo.com. The article is titled "Watchdog Group Publishes Encyclopedia of All the Nasty Things Big Tech Has Done," and it's by Mack DeGeurin. One caveat, because the article itself reveals this: it's not actually all the nasty things Big Tech has done, because even with more than ninety pages of documentation, it's just scratching the surface. But the wiki collects documentation about things like anticompetitive practices, the spread of misinformation, how these companies are funding political lobbying groups to shape legislation in their favor, and controversial partnerships with other entities, such as, for example, Google's work with the U.S. military, and stuff like facial recognition, which could potentially be weaponized. It's a good article, so I do recommend you check it out, and the resource itself, the actual wiki, can help you catch up on the exhausting amount of skullduggery that's been going on in the Big Tech sector for quite some time. I should also add that the project receives funds from the Omidyar Network, which is a politically left-leaning organization. I say that simply because I think it's always good to keep in mind the perspective that was used to cover certain issues. For that matter, and no surprise, I lean pretty hard left myself, so I'm not saying that my perspective is the correct one. I know how I feel, but it would be way too much hubris for me to say my way is the right way. Still, you should check out the article and the wiki if you want to get angry at big companies like Google, Meta, Apple, and Microsoft.

Now, along those lines, CNBC published an article titled "How Google and Amazon bankrolled a grassroots activist group of small business owners to lobby against Big Tech oversight." That's a heck of a long headline, and it describes just the kind of thing that the Big Tech Wiki would cover.
So in that article, CNBC reporters reveal that the lobbying group is called the Connected Commerce Council. It's supposed to represent small businesses, and Amazon and Google are simply referred to as partners in the group. But apparently at least some of the businesses listed as members of the group never actually joined it. Many had not even heard of the group, according to them, and yet their businesses were listed on the roster. Now, the implication is that this makes it an astroturf campaign. Astroturfing is a term used to describe a situation where companies create what is supposed to look like a grassroots political movement but is in actuality an attempt to push against legislation that would restrict or regulate those companies, or in some cases to push for legislation that would give those companies more advantages. Astroturfing is pretty insidious. It's also a pretty common tactic. We've also seen Big Tech companies try to leverage small businesses as a way to excuse certain policies and practices. Meta has done this a lot as well, claiming that certain restrictions to its advertising strategy would harm small businesses. They try to position it so it's not about hurting the bottom line of the big company, but rather the equivalent of "think of the children," which is really what it comes down to, and it can be pretty shady stuff. That being said, this article also mentions that there are legitimate small businesses that really are part of the organization, so it's not just a dummy group or anything like that. There are actual small businesses represented in the ranks that really do want to be there, and several are in favor of the group and the group's policies, saying the group keeps them up to date on proposed legislation that could impact their businesses. So you might ask: is this a real astroturf case, or does it only appear that way due to the involvement of big tech companies that are particularly politically active? I honestly don't know the answer to that.

In a similar story, The Washington Post published an article revealing that Meta has been paying a consulting firm called Targeted Victory, which primarily caters to US Republican candidates during election cycles. Specifically, Meta hired the consulting firm to create a kind of smear campaign against rival social media platform TikTok. According to the Post, the goal was to reach out to various news outlets, primarily regional or local news outlets, and convince them to run pieces criticizing TikTok and making various claims about TikTok's potential for harm, particularly for younger people. Now, some of those claims I think have some merit to them. I do think social platforms can facilitate harm; not that they're necessarily harmful just by themselves, but that they are very effective at transmitting harm. In fact, Meta thinks this too about its own platforms, according to those internal documents that Frances Haugen leaked last year. But some of the narratives pushed by Targeted Victory are at best insincere and at worst an outright form of misinformation. According to the Post, some of the stories Targeted Victory pushed were about harmful trends that had supposedly originated and propagated across TikTok when, in fact, in at least a couple of those cases, those trends actually got their start on Facebook.
So, in other words, the things Facebook was indirectly accusing TikTok of through Targeted Victory were actually examples of Facebook's own shortcomings. Now, to be fair, Meta slash Facebook does have some distance from these efforts, because it essentially paid Targeted Victory to do the dirty work, so Facebook's not directly involved. And the goals of that dirty work are twofold. One is to try to level the playing field a little bit between Meta and TikTok because, as we saw with Meta's financial report earlier this year, the company has been struggling to attract younger users and has cited TikTok as one of the big reasons for that. The other goal is to deflect attention away from Meta and onto someone else, because Meta has been at the center of a target for a while now, for good reason. But the company would really rather someone else take that place, and TikTok would sure be a nice substitute. And TikTok, being a company that's owned by a larger Chinese conglomerate, is a pretty good target if that's your goal. I mean, there are some legitimate concerns to have about TikTok, so it's not like all of these attacks have no substance to them. There are reasons to be concerned. My own point of view is that TikTok isn't that great and it does merit some scrutiny, but I say the same thing about Meta. Both of them need to be scrutinized, potentially regulated, and certainly held accountable when they do things that are harmful. Anyway, it's interesting to see Big Tech behaving more and more like the ugliest facets of the political process. And by interesting I mean discouraging, but not surprising.

Staying on this topic a bit longer: Global Witness, which has frequently been a thorn in Meta's side, released a report yesterday saying Facebook's algorithm appears to be amplifying climate denial posts rather than offering up links to more reliable sources on the subject of climate change. They actually ran a bit of an experiment. They created a couple of dummy accounts, Jane and John. The John account was set to follow legitimate scientific organizations, so it was liking pages that belonged to actual, credible scientific groups and institutions. The Jane account was directed to like a couple of pages related to climate change skepticism. Then they sat back and looked to see what kind of content was being recommended in the respective news feeds, and they saw that Jane's news feed began surfacing way more climate denial content. On top of that, two thirds of the pages that included climate change misinformation were not labeled as such, so there was no warning saying, you know, this is not necessarily reliable information and you should really look to such and such a place to get more reliable info. The feed for John didn't have this problem. John didn't get recommendations for posts that included climate denialism. So while John was seeing more information from legitimate sources, Jane saw progressively more extremist content on the subject. Once again, we see how Facebook's algorithm, coupled with Meta's insufficient flagging process, leads to the amplification of misinformation. We'll be coming back to that in just a moment, because we have another story that touches on this. But before we get to that, let's take a quick break.
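If you want a feel for how that kind of amplification loop can work, here's a tiny, purely illustrative Python sketch. To be clear, this is not Facebook's actual ranking system, and every number in it is made up; it just models the general feedback loop the Global Witness experiment was probing: a ranker that favors content similar to what a user already engages with, plus a bonus for provocative posts.

```python
# Toy model of an engagement-optimized feed. Illustrative only: this is
# NOT Facebook's actual ranking system, just the general feedback loop.
import random

random.seed(0)

def predicted_engagement(post_extremity, user_taste, outrage_bonus=0.8):
    # Hypothetical scoring rule: favor posts similar to what the user
    # already engages with, plus a bonus for provocative content
    # (outrage tends to drive clicks, shares, and comments).
    return (1.0 - abs(post_extremity - user_taste)) + outrage_bonus * post_extremity

def simulate_feed(name, user_taste, rounds=8, feed_size=5):
    # Post "extremity": 0.0 = mainstream science, 1.0 = hard climate denial.
    shown = user_taste
    for _ in range(rounds):
        posts = [random.random() for _ in range(50)]
        feed = sorted(posts,
                      key=lambda p: predicted_engagement(p, user_taste),
                      reverse=True)[:feed_size]
        shown = sum(feed) / feed_size
        # The user's taste drifts toward whatever the ranker showed them.
        user_taste = min(1.0, 0.6 * user_taste + 0.4 * shown)
    print(f"{name}: avg extremity of final feed = {shown:.2f}")

simulate_feed("John (liked science pages)", user_taste=0.1)
simulate_feed("Jane (liked skeptic pages)", user_taste=0.6)
```

Run this and the "John" account stays near the mainstream end while the "Jane" account drifts further toward denial content, simply because the ranker keeps feeding each account more of what it already responds to. That's the qualitative dynamic the report describes, stripped down to a few lines.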
So before the break, I talked about how Meta was failing to sufficiently label climate change misinformation as such. Well, it's not doing much better when it comes to preventing disinformation about the ongoing war in Ukraine, either. According to the Center for Countering Digital Hate, the CCDH, Facebook only manages to label about twenty percent of all posts pushing misinformation and conspiracy theories surrounding that war, which means eighty percent of those posts are just slipping by unlabeled. These include messages such as a claim that the United States has been supplying bioweaponry to Ukraine to use against Russian soldiers, a claim that has no basis in evidence or real support, and yet is a conspiracy theory that is propagating pretty quickly across social platforms like Facebook. So four out of five posts pushing this and similar misinformation are going through without Facebook labeling them, without so much as a "missing context" label, let alone an outright "false information" label. Imran Ahmed, the head of the CCDH, said, "If our researchers can identify false information about Ukraine openly circulating on its platform, it is within Meta's capability to do the same." That's a pretty sick burn, because it's hard to argue against that statement, right? If some outside group can come in and say, look, we're finding it all over your platform, and it's your platform, so clearly you should be able to find it too, how do you argue against that? Ahmed went on to reiterate that platforms like Facebook profit off of misinformation, which I kind of touched on before the break, and which we've said many times: misinformation quote unquote drives engagement, and engagement is a major metric that Facebook relies upon while executing its revenue strategy. So engagement from a revenue standpoint is good, and you just have to distance yourself from what type of engagement you're talking about. It's unfortunate, and again, it's not new. We've talked about this many times on this show.

And we are not yet done with Meta. The company, along with another one called Sama, S-A-M-A, which Meta contracts with to run content moderation operations for Facebook in Africa, has been named in a lawsuit in Kenya. The plaintiff, Daniel Motaung, says that Sama violated Kenya's laws around employee health, safety, and privacy. Content moderation on Facebook is a really tough gig, and in some regions it can be downright traumatizing, because it's your job to look through stuff that gets flagged on Facebook and figure out, okay, does this in fact violate Facebook's policies? In some cases it is incredibly evident that it does, but in the process of reviewing the content, you're exposed to some really dreadful stuff. Motaung says that the first video he remembers moderating was of a beheading. I know that if I were exposed to that kind of content, it would definitely have a massively negative psychological impact on me, to put it lightly. Motaung says that Sama deceived employees; it gave them kind of a bait-and-switch offer. According to Motaung, the employees were told they were going to work at a call center, and then once they signed on, they found out, no, it's not a call center, you're actually going to do content moderation on Facebook. Motaung also says that Sama fell far short of Kenya's requirements for employers to offer sufficient mental health resources to their employees.
And this is not the first time we've seen complaints relating to Meta and the mental health of people who are tasked with the job of content moderation. So we'll have to see what the outcome of this particular lawsuit is and whether it will precipitate any meaningful change in the process of content moderation and in how the companies tasked with doing it are held accountable for employee welfare.

Okay, now we'll add Apple into the mix of stories here, while still, by the way, also heaping criticism on Meta and Facebook, or at least pointing out something troubling. Bloomberg reports that both Meta and Apple have handed over user data to hackers who fraudulently submitted emergency data requests. Now, tech companies generally try to hold off on just handing user data over to authorities as a means of establishing trust with users. Right? If you find out that a particular platform is frequently sharing user information with any agency out there, it's probably going to give you the heebie-jeebies. When companies are compelled by law, though, they will do it; obviously they don't want to break the law, and an emergency data request represents an urgent need for information. It could be a case where a person has gone missing, for example. It could literally be life or death. So emergency data requests, unlike other types of data requests from authorities, do not require a court order. A hacker group called Recursion Team is thought to be at least partly responsible for these fraudulent requests. Now, on the one hand, it's easy to accuse the companies, Apple and Meta, of not doing due diligence to ensure that incoming requests are actually legitimate. On the other hand, the very nature of emergency requests means that a speedy response can be absolutely critical. I think in this particular case, the real problem lies in the process more than with Apple and Meta's actions, and the fact that it was possible for hackers to compromise that process is something we should really look at.

Over in the UK, a case involving a Twitter user reminded me of the limits of free speech and how they differ in different parts of the world. Joseph Kelly was found guilty of sending a quote unquote grossly offensive tweet, and a judge subsequently sentenced Kelly to one hundred fifty hours of community service. So you might wonder, what was this tweet that merited that kind of sentence? Well, it had to do with Sir Tom Moore, a man who, leading up to his one hundredth birthday, did one hundred laps around his garden as a way to raise money for the UK's National Health Service in the early days of the pandemic. Sir Tom passed away not too long afterward, and one day after his passing, Kelly posted his tweet, which read "the only good Brit soldier is a deed one, burn auld fella burn," because he's, you know, tweeting in a Scottish accent, which I will not attempt because I can't. Now, I think anyone could agree that that tweet was at the very least in very poor taste. You had a country in mourning, because Sir Tom had really inspired a lot of people, and then Kelly goes and says this very unsympathetic thing. The question is, did that break the law? Specifically, did it break Section 127 of the UK's Communications Act?
Now, when that Act was passed, that section was originally meant to create accountability for people who were doing things like making obscene telephone calls. That's what it was meant to refer to. But in the years since, it has expanded to cover social media posts as well. The "grossly offensive" part becomes tricky simply because you have to decide what criteria you're using to determine whether something is grossly offensive. Grossly offensive is a subjective thing, right? You might be offended by something I'm not offended by, and vice versa. Kelly, by the way, deleted his message just twenty minutes after he posted it, and his lawyer argued at trial that Kelly had made the tweet while he was intoxicated, that he wasn't, you know, sober when he did it. But none of that managed to get him off the hook, and so now he's sentenced to do one hundred fifty hours of community service. For people in the United States, that probably comes as a shock, because here we could tweet something like that and, yeah, we might be called out for being insensitive or having really bad taste or just being tacky or whatever it may be, but you wouldn't expect anyone to be held accountable and have to do community service in return. Now, in the UK there is a new bill, the Online Safety Bill, that will become law and take effect soon, and it will end up replacing the old Communications Act rules, so there will be a new set of rules. However, there are still measures in place with pretty vague language about how the nation could handle messages that are considered to be quote unquote harmful. Like, who determines what constitutes harm, and how do you determine accountability for those things? So if you're in the UK and you can't tweet something nice, don't tweet anything at all, I guess.

Okay, I have a couple more stories that are less, you know, vitriolic. But we're going to take another quick break, and we'll be right back.

All right, let's get to the last couple of news stories for this episode. One of those is that Canadian politicians have drafted an emissions reduction plan that will require all new cars sold in Canada to be zero-emission vehicles by the year 2035. That's passenger cars, I should add. This would put Canada on a growing list of countries that are setting similar deadlines for when car companies will no longer be allowed to sell new internal combustion engine vehicles in those countries. That list notably does not include the United States; there is no federal mandate that follows this trend, but a number of states have set their own state deadlines for that. And honestly, once you get to a certain tipping point, there becomes a movement within the automotive industry where you would expect everyone to switch over to electric, or some other zero-emission vehicle design, anyway, because it would just make more sense from a manufacturing standpoint to go that way rather than divide things up. So it may be that we never see the US create a similar national policy, but if enough states follow that trend, then the effect will be the same. And like I said, Canada's policy only applies to passenger cars. Industry vehicles, things being used for enterprise purposes, will have a longer timeline to convert over to zero emissions, which makes sense.
I mean, if you're talking about things like heavy-duty hauling vehicles or really strong construction vehicles, you're talking about stuff that has power needs that might not be met with current zero-emission systems. So that does make a little more sense. But yes, we are seeing another country say no more internal combustion engine vehicles here, at least no new ones after a certain date.

And finally, some security researchers demonstrated that it's possible to hack into a communications satellite and broadcast a video feed to a large region. Now, to be clear, the researchers did this with permission, so it's not like they were secretly hacking into a satellite feed, taking it over, and creating pirate satellite television. They were given the opportunity to attempt to access a Canadian satellite that was no longer going to be used. It had passed out of its useful life expectancy, but it had not yet been transferred to a graveyard orbit. I talked about this briefly earlier this week in an episode about orbits: a graveyard orbit is an orbit where you push stuff when it's no longer useful, to get it out of the way so that you can put more useful stuff in that orbit, and it's typically an orbit that just isn't well suited for any practical purposes here on Earth. So pushing a communications satellite out to a graveyard orbit means it would no longer really align properly to transmit back to Earth; if they had waited longer, this really wouldn't have been a possibility. But because that satellite was no longer in service yet still reachable, it also meant there were no competing signals being sent to it. So if you could send a signal to the satellite, it could then beam it back down to Earth. Now, accessing the satellite required using an uplink facility. This is essentially a place that has a powerful satellite dish antenna and a really powerful amplifier, capable of sending the right kind of signal, strong enough to reach the satellite in question. You couldn't just do this with a simple radio antenna or something like that; you have to have a very concentrated, powerful beam of signal to go up and reach the satellite. And the satellite did in fact beam that signal back down to Earth. So the researchers showed there are no actual security measures in place on the satellites themselves. There's no password or authentication or anything like that. If you are capable of sending the signal to the satellite, then it will just do its job and send it back down to Earth. Now, you can kind of understand why there aren't any real protective measures on the satellites themselves, because in order to even get this to work, you first have to have access to something like an uplink facility, and that is not an easy thing to do. It's not like you can go to your local electronics store and buy a consumer version of a massively powerful transmitter and amplification system. Plus, you'd have to have a way to identify where the satellite is, target it, and track it. But the researchers showed it was at least possible, and in fact, it's not even that difficult once they had access to the uplink center. So hackers could potentially get access to an uplink center and cause problems that way.
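To put some rough numbers on why the uplink facility is the real gatekeeper here, below is a back-of-the-envelope link budget in Python. All of the figures are assumptions for illustration (a C-band uplink frequency, guessed dish gains and transmit powers), not details from the actual research; the point is just how much raw power and antenna gain it takes to land a usable carrier on a geostationary satellite at all.

```python
# Back-of-the-envelope uplink budget. Illustrative numbers only, not
# the researchers' actual setup: a bent-pipe transponder just relays
# whatever carrier arrives strongest, with no authentication step.
import math

def free_space_path_loss_db(distance_m, freq_hz):
    # Standard free-space path loss: 20 * log10(4 * pi * d * f / c)
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

GEO_DISTANCE_M = 35_786_000  # rough geostationary altitude
FREQ_HZ = 6.0e9              # assumed C-band uplink frequency

def power_at_satellite_dbw(tx_power_w, antenna_gain_dbi):
    # EIRP (transmit power plus dish gain) minus the path loss to orbit.
    eirp_dbw = 10 * math.log10(tx_power_w) + antenna_gain_dbi
    return eirp_dbw - free_space_path_loss_db(GEO_DISTANCE_M, FREQ_HZ)

# A professional uplink facility: big dish, big amplifier (guessed values).
official = power_at_satellite_dbw(tx_power_w=200, antenna_gain_dbi=55)
# A hobbyist rig: small dish, modest amplifier (guessed values).
hobbyist = power_at_satellite_dbw(tx_power_w=10, antenna_gain_dbi=35)

print(f"path loss to GEO: {free_space_path_loss_db(GEO_DISTANCE_M, FREQ_HZ):.1f} dB")
print(f"official uplink:  {official:.1f} dBW at the satellite")
print(f"hobbyist rig:     {hobbyist:.1f} dBW at the satellite")
```

With these made-up numbers, the path loss alone is around 199 dB, and the professional uplink arrives at the satellite tens of decibels stronger than the hobbyist rig. That's why "strongest carrier wins" only becomes a practical security problem for someone who already controls serious uplink hardware.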
And they also pointed out that, just like in the past, it's still possible to hijack a working communications satellite, one that's still in service, as long as you send a signal that's stronger than the official one. In fact, this has happened before. In the mid nineteen-eighties, if you lived on the East Coast and were a subscriber to HBO, it's possible that you witnessed this yourself, because, and I think it was 1986, there was a disgruntled technician working at an uplink facility in Florida who used that facility to override the official New York facility's HBO signal to a particular communications satellite. So, in other words, you have this facility in New York beaming the HBO feed up to a communications satellite, which is beaming it back down to Earth on the East Coast of the United States. This person in Florida decides, I'm going to use the Florida uplink center to override that signal; I'll just send a stronger signal to that satellite, and then I'll have control. The technician, going by the name Captain Midnight, took over a few minutes of airtime on HBO and used it to, drumroll please, complain about how expensive it was to get HBO added on to consumer satellite services. Good use of time. Anyway, things haven't really changed that much since the nineteen-eighties. It's still possible to take over a satellite by sending a stronger signal to it, although you also run the risk of damaging the satellite in the process if the signals get to be too strong. This is not that different from how radio works; in fact, radio works exactly the same way, and so does television. We saw that in the nineteen-eighties as well, with the infamous Max Headroom incident. That was an over-the-air broadcast, not satellite, but it's the same sort of thing: if you're able to send out a stronger signal than the official one over a particular frequency, then that's what people are going to get. That's how pirate radio can be a thing, and it's illegal, anyway.

In an era of state-sponsored hacker groups and propaganda campaigns, this knowledge raises some troubling possibilities. You could easily imagine a scenario where a country uses its own uplink facilities to target a satellite serving a nearby region that happens to belong to an adversary of that country, then takes over that satellite to broadcast propaganda, or shuts it down. You can easily imagine that, and the fact that there aren't these security measures on the satellites themselves makes it a possibility. Or you could even have state-sponsored hackers trying to get access to uplink centers in other countries to achieve the same goal. So maybe this will lead to changes in security around satellites. I think making sure that the uplink centers are really secure is important, because again, if you don't have access to an uplink center, you're not going to send a signal strong enough in the first place to make it an issue. So protect those first. But I think it might also be time to start, you know, figuring out security for the satellites themselves.

Okay, those are the news stories I chose to cover on Thursday, March 31, 2022. Like I said, there were a ton more, but yeah, I was already getting pretty grouchy, as you can tell, and I figured this was a good mixture to share with all of you. If you have suggestions for topics I should cover in episodes of TechStuff, reach out to me on Twitter.
The handle for the show is TechStuff HSW, and I'll talk to you again really soon.

TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.