Meta PC's owners say "Not so fast, Facebook" to a proposed name change. Google is bidding on a defense contract. A new methodology relating to quantum computing could change everything. And astronauts make tacos in space with space chiles!
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio and love all things tech. It's time for the tech news for Thursday, November 4, 2021, and we're gonna start off with kind of a funny story. So on Tuesday, we talked about how Facebook, the company, has adopted the name Meta as a rebrand for the overall corporation. So Facebook, the social media platform, is just one part of that business, and it's not changing its name. But now it turns out that another company disputes Facebook's right to the name Meta. Meta PC, a company that's been around for about a year, applied for a trademark for the name Meta this past August, which was obviously a couple of months before rumors popped up that Facebook was going to rebrand at all. However, the company has not yet actually received a trademark from the trademark office. Assuming there are no other conflicts, you would think that the PC company would get the trademark due to being what we call in the biz "firsties." If they do get the trademark, well, then Facebook would not be allowed to just adopt the name without getting permission first. Now, see, Facebook wouldn't have necessarily known about this issue because the trademark had not yet been registered. So I don't think the company was bullishly trying to take over the Meta name from Meta PC. I just don't think they even realized Meta PC was a thing, and they probably assumed there were no impediments in the way of getting a new name. I mean, if there was no registered trademark, then it looks like there's no foul, right? But the owners of Meta PC, Zack Shutt and Joe Darger, say they're willing to part with the rights to the name Meta for a cool twenty million dollars. 
And they also tweeted a photoshopped picture of Mark Zuckerberg holding a Meta PC in his hands, and they said that if they do sell the name, they'll make sure to pick something new that suits their PC company, and maybe they'll just go with the name Facebook, since it seems like that one's opening up. Which is, you know, kind of a funny little tongue-in-cheek response. Whether this actually goes anywhere largely depends upon the trademark office and whether or not they allow Meta PC to have Meta registered as a trademark. If the trademark office denies that, well, then Meta PC really doesn't have any leverage. However, it would also presumably mean that Facebook slash Meta would have issues registering the trademark too; otherwise the office is really playing favorites. So we'll have to see how this all turns out. And now for less whimsical Facebook news, because of course there's gonna be some. First up, Facebook, the social network site, is giving up on facial recognition technologies, and that's a good thing. I mean, we've talked a lot about how facial recognition technology is often flawed in its implementation, and that frequently we see that the tech tends to work pretty well for certain ethnicities and not so great for others, misidentifying people, you know, you get a lot of false positives, that kind of thing. And we also know that this technology can lead to serious problems, such as certain populations being disproportionately harmed by it. You know, if police forces are reliant upon a technology that routinely makes mistakes when it comes to identifying people of certain ethnicities, that's a real problem. So Facebook saying no to facial recognition algorithms? That's a good thing. What's less good is that Meta, the parent company of Facebook, is totally not making that same commitment. 
Meta plans to continue to rely on facial recognition technology as it builds out its metaverse products, and I get the feeling that Zuckerberg is really pushing for the metaverse stuff to be the future of the company, and considering the fact that Facebook is having trouble attracting young users, all signs kind of point that way. So while I'm glad to see Facebook move away from facial recognition, I worry that we're really just putting that problem on hold. Then again, I have no idea how popular this metaverse concept is going to be. If the metaverse is dependent upon users having access to expensive hardware, it may be that only a subset of people will ever really have access to it. Anyway, I think it's clear that Facebook slash Meta doesn't really view facial recognition tech as being a problem, or at least not a deal-breaking one. A Facebook investor named Roger McNamee addressed an audience at the Web Summit event and recommended that governments conduct criminal investigations into Facebook, even calling for executives to face jail time should they be found to be responsible for crimes. McNamee invested in Facebook early on and held onto that stock through to 2019 before he began selling it off, and he has criticized Facebook multiple times in the past. He outlined four areas he believes deserve criminal investigation, and he also hinted that he has two others on top of those four, bringing the total up to six. And they range from business issues, like Facebook allegedly failing to disclose information to the SEC even though, as a publicly traded company, it is obligated to do that, up to charges that Facebook was potentially complicit in the political riots that happened on January 6, which, holy cow, that's January 6 of this year. I mean, my ability to remember dates in my own lifetime is notoriously bad. 
Like I might be like, well, that happened a year ago, and it really happened six years ago. But let me tell you, the pandemic has made it nearly impossible for me to keep things straight. Anyway, this is more of a sign that there's a growing opposition to Meta slash Facebook on multiple fronts, and we're not done yet. The Washington Post reports that in 2020, Mark Zuckerberg himself approved a request, which was more like a demand, from the government of Vietnam, which wanted Facebook to take down posts that were critical of the Communist Party in charge in Vietnam. The government said that if Facebook didn't comply, then the government would forbid Facebook in Vietnam, and Zuckerberg reportedly caved in to the demands and authorized the removal of thousands of posts. And the Washington Post says that Facebook's response, when asked about this, was essentially: we did it, but if we hadn't done it, then Facebook would have gone away in Vietnam, and that would have been even worse censorship. That would have meant people were cut off from a tool that they were depending upon. And that's a difficult argument for me. I mean, I get what they're saying, but at the same time, this is predicated on the idea that Facebook is providing a service. But arguably you could say that service is to be a PR arm for the Vietnamese government, at least as far as silencing any dissenting voices goes. So, I don't know, that argument doesn't work so well for me. The Washington Post also says that the Vietnamese government was effectively using Facebook to track down activists and critics of the government, claiming that even a post that levied just the tiniest amount of criticism toward the Communist Party could result in jail time for the user, which is a big yikes. Let's shift over to Google. The company is once again throwing its corporate hat in the ring for defense contracts. Now. 
A few years ago, Google was participating in the development of Project Maven, in fact it essentially won a bidding war for it, and Project Maven had a lot of controversial stuff involved in it, including things that could potentially be used in drone programs, and this in turn led Google employees to stage a walkout in protest at Google. They didn't really like the idea of their work being used to help war efforts. Now Google has bid on the Joint Warfighting Cloud Capability contract that the Defense Department has put up. Now, this project is a successor to a previously abandoned program, which was the Joint Enterprise Defense Infrastructure, or JEDI. Now you might remember that JEDI saw a bidding war, and it really came down to Microsoft and Amazon, and Microsoft ultimately won the bid and was selected by the Department of Defense. But then Amazon sued over this particular agreement. They challenged the contract, and they claimed that then-President Trump had interfered in the whole process, and because Trump had beef with Amazon and Jeff Bezos, Trump effectively scuttled Amazon's bid and forced the Defense Department to go with Microsoft. Anyway, as a result of all that, that particular project fell through, and now Google is competing to win a place at the table for the new project. So JEDI was a kind of winner-takes-all approach, with one company winning the full bid, but the Joint Warfighting project will actually see the Defense Department work with multiple companies, so it's not a winner-take-all solution. 
As for what Google would do should it become part of this project, it sounds like the main components for Google would be to provide things like cloud storage and cloud computing services, mostly to be used in noncombat-oriented applications, such as monitoring the developing situation with the pandemic and making analyses of that, perhaps predicting where the pandemic might develop in the future, as well as things like climate change. So stuff that's not directly tied to combat activities, at least based upon the initial descriptions. No word yet on whether Google employees are fine with this. Earlier this week, I talked about a crypto scam that referenced Squid Game. Well, now The Verge reports on a different scam involving cryptocurrency. This scam plays on some old phishing tactics, that's fishing with a P-H, and here's how it works. When you come into possession of cryptocurrency, you know, whether you earn it or you buy some or you're gifted it or whatever, that cryptocurrency needs to quote-unquote live somewhere. It has to be stored somewhere. It's code, but it has to have a storage space. So one way to store it is in a digital wallet that only the owner has access to. So effectively, this is just a piece of software, and that software will live on a computer or other device, and if you were to lose access to that device, you would also lose the digital wallet, because it's the software that's on that computer. It's on that physical device. This is why you will occasionally hear those rough stories about someone who's no longer able to access a hard drive that might have, like, a hundred bitcoin on it or whatever. Anyway, the digital wallet is a way for you to hold cryptocurrency so that you can use it in transactions. 
Well, the scam involves creating mockups of well-known digital wallet companies, so copying them, and these are companies that have a pretty decent reputation, or at least not a bad reputation, and the scam artists make a copy of those sites to make it look as realistic as possible, and then they buy out ads on Google so that they rank in Google search results. So when you search for digital wallet, the ad results pop up above everything else. I mean, you know how Google searches work. If you search for something, the ad-supported results are the first things to pop up. So on a casual glance, you might just say, oh, I'll just pick the first one. I need a digital wallet, I'm going to pick the first one, that's probably the best one. And you wouldn't even know that this was a scam that was using the ads in order to get this sort of placement. So then you go to the scam site, and the scammers might try to just get as much of your personal information as possible, gleaning stuff like your bank account or your credit card number, you know, your typical phishing attack strategy. Or they might be even more sneaky. They might have you go through the process of creating a new digital wallet. But instead of actually creating a new wallet, what the scammers do is they assign you an existing digital wallet that belongs to the scammers, so it looks like it's yours. They give you access to it, but they control the wallet. So when you spend some money, you purchase cryptocurrency, or you otherwise transfer cryptocurrency that you own into this wallet, you are effectively stuffing the scam artist's wallet full of your own cash. The scheme works because Google Ads acts like a shortcut. It's cutting in line. 
So by creating a convincing fake and then buying out ad space, the scam artists have their bogus site appear above the real one. So security experts recommend that you make sure you scroll down below the ad results on Google Search for that very reason, because you can't be sure that the advertised sites are legit, and Google seems unable or unwilling to put in the work to protect consumers, which is pretty ugly stuff. We've got more stories to cover after we take this quick break. We're back, all right. At the COP26 climate summit, more than forty countries have committed to transitioning away from coal-fired power plants. The countries that have large, developed economies plan on making that shift much earlier; they plan to get out of coal firing by the 2030s, which is fairly aggressive. Developing economies are looking more at the 2040s, which, you know, makes sense, because they're not in a position to transition away as easily as those that have really big economies. The countries that agreed included ones like Vietnam, Indonesia, Ukraine, Canada, Poland, and several more. But there were also some really notable absences from that list, like the United States, for example, or India or China, you know, really big countries that still depend heavily on coal-fired power plants. So there's still a lot of work that needs to be done, and unfortunately a lot of that work depends on countries that are responsible for a lot of coal consumption but haven't yet committed to changing that. On a related note, many countries and financial institutions agreed to end overseas financing for fossil fuel projects, and the United States was among those. So the US is saying, yeah, we won't help fund overseas fossil fuel projects, but back off of us on our own home turf, I guess. 
The Australian government has issued a demand to Clearview AI, famous for its facial recognition database services. The company is to destroy all images and facial templates related to Australia's citizens, because the government has determined that Clearview's business violates Australia's privacy laws. So for a refresher, one way that Clearview has built out its massive facial recognition databases is that it's scraped social networking sites, using programs to collect and analyze images that were publicly posted on platforms like Facebook, and they built out databases using those images, and that lets Clearview train machine learning systems to match new images against those databases. And Clearview markets this to governments and police forces around the world. Clearview plans to appeal the decision of the Australian court system, saying that the images it uses were published in the United States, since Facebook is an American company, and therefore Australia doesn't have jurisdiction. The company also claims that because people were posting to public profiles, they have no right to privacy, which is a big old oof. So we'll have to see how that goes. The startup company TuSimple, that's T-U-Simple, which is designing self-driving transportation trucks, plans to test its autonomous vehicles on the roads without a human safety operator before 2022, which is right around the corner, so any day now. The company plans to unleash driverless trucks, with no human safety operator in there, for the eighty-mile run between the cities of Phoenix and Tucson, Arizona. The trucks will travel down public roads to do this, so there will be people in regular old cars on the roads at the same time. The company has said it plans to conduct multiple runs over several weeks to test the technology, and it acknowledges that this is a challenging problem. 
You can design a system to handle known scenarios pretty well, but then preparing for the unknown is a totally different matter. I'm sure a lot of you out there have been in a car, you know, traveling in a car, when something unexpected happened, and that can be a really intense and scary thing for humans. But in many cases, at least, we are pretty good at assessing things quickly and making a decision. We don't always make the right decision, but you know, we can extrapolate a lot of stuff based upon our experience and make judgments about what to do. For computer systems that encounter something new, there's no experience to draw upon, and they're not very good at associative thinking. They can't really say, well, I've never seen this, but I've seen something like this, and I think this is the right way to handle it. They can't really do that very well. The machine still has to make a decision, and it may not have anything solid to guide that decision. So let me give you a very simple example. Let's say the vehicle is traveling at night, and there's a puddle across the road, and the headlights of the vehicle hit the puddle and reflect off of it. Now, a human would recognize that as a reflection. They might slow down so they can go through the puddle without, like, hydroplaning or something, but otherwise they know what they're looking at. But a machine could theoretically misinterpret that reflection and see it as an obstacle in the road, so the machine might try to swerve out of the way or slam on the brakes. Now, that scenario I just gave is one that I'm sure all autonomous vehicle companies have anticipated and worked on. It's not so out of the ordinary that no one would have thought of it. I'm sure that's factored in. But my point is that machines don't magically know real risk from something that isn't a risk at all. That being said, TuSimple is taking this process seriously. 
The company has been limiting the human-free tests so far to a dedicated track. So far, anytime they've run a test where there has been no human in the truck, they've only done it on a dedicated track that doesn't connect to public roads, and for the moment, on the public road tests, they still have operators riding the route between the two cities. And because TuSimple is using this established route, like, it's not open-ended, right? It's not saying, you're going from here to, let's pick a city, Atlanta, and just find the best route. They're not doing that. They're saying, go from here to here, and this is the path that you should take. That means that the company has been limiting the variables, right? They have this established route that creates a more knowable course. You can still have things that are unexpected happen, but you've cut way back on those variables, and it allows us to continue to build toward a future where autonomous vehicles are a viable solution. So while I have some reservations about autonomous trucks, I do think that the process TuSimple has laid out is one with the appropriate amount of caution and accountability. Okay, geeky news alert. The following news item is extremely geeky. A startup out of Australia called Q-CTRL, that's Q-C-T-R-L, has created an error suppression technique that improves quantum algorithms by an astonishing 2,000 percent. And yeah, I get it, that alone is effectively gobbledygook. So what the heck do I even mean by this? Let's start off with talking about quantum computers. When you boil down computer science with classical computers, you're talking about processing information in the form of bits, and a bit is a single unit of information. It can be either a zero or a one, and a zero is always a zero and a one is always a one. So you can think of it like a light switch with an off and an on. 
But quantum computers rely on qubits, and these, under certain conditions, can technically be both a zero and a one at the same time, plus all values in between. And when you take that and you combine it with a properly designed quantum algorithm, you can potentially solve a certain subset of computational problems much faster than you could if you were to use a classical computer. So, for example, let's say I give you a really, really big number, it's hundreds of digits long, and I tell you I created this number by multiplying two prime numbers together. Which two prime numbers did I use? Well, then you would need to start trying out different prime numbers to see if they divide evenly into the big number I gave you, and then to make sure that the other number that division produced was itself a prime number. And you'd be going, nope, it's not that one; nope, it's not that one; nope. I mean, it would take you ages, potentially centuries, to get to the right pair, depending on how big the number I gave you was. And that's how classical computers kind of tackle these problems. They sequentially go through all the possible answers to find the one that fits, and even a fast computer would take a very long time to get to that answer. But quantum computers can effectively make all the guesses at the same time, assuming, one, that the computer has enough qubits to do this, and, two, that the algorithm you've designed for the computer to follow works. So all the pieces need to be there. It's not just the power of the quantum computer; it's also the quality of the algorithm you're using to try and solve a problem. But when all the pieces are there, the quantum computer will give solutions to those problems. I guess I should say we typically get solutions with a certain percentage of confidence behind them, kind of like, I'm sure this is the right answer. So really we get answers in the form of probabilities rather than certainties when we're talking about quantum computers. 
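To make that classical, one-guess-at-a-time approach concrete, here's a minimal sketch in Python of trial division on a tiny toy semiprime. The function name and the numbers are made up for illustration; for the hundreds-of-digits case described above, this sequential loop would run for centuries, which is exactly the gap quantum algorithms aim to close.

```python
def factor_semiprime(n):
    """Try divisors one at a time, smallest first, until one splits n.

    This is the sequential, classical strategy: check every candidate
    in order. Returns the pair of factors, or None if n is prime.
    """
    d = 2
    while d * d <= n:          # no factor pair exists beyond sqrt(n)
        if n % d == 0:
            return d, n // d   # found the pair that multiplies to n
        d += 1
    return None                # n itself is prime; no pair to find


# Toy example: 221 was built by multiplying two primes, 13 and 17.
print(factor_semiprime(221))   # -> (13, 17)
```

Note how the loop has no shortcut: it must grind through every candidate divisor in order, which is why the running time explodes as the number gets longer.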
Anyway, the startup says it has created a means of suppressing errors with quantum algorithms, which theoretically should make it easier to design quantum algorithms that can take advantage of quantum computing. Now, I can't pretend to have even a partial understanding of how they achieve this. I mean, the bits that I've told you about quantum computers so far, that's pretty much right around my level of understanding of quantum computers. It goes a little deeper than that, but not a whole lot. Like, once you start really getting into things like entanglement and superposition, things get a little too wibbly-wobbly for me to be able to follow properly. But this is really cool. It means that quantum computers could potentially be used for a larger range of applications as we continue to build stronger quantum computers. And I first wrote How Quantum Computers Work back when we were talking about qubits on the order of, like, ten qubits per computer, and we're seeing that grow every single year, to a point where we could potentially do some really cool things with quantum computers and tackle some very difficult problems. It also, by the way, means that some of the principles behind modern-day encryption will have to be completely rethought, because a good quantum computer with a solid algorithm could potentially crack encryption in a fraction of the time it would take using classical methods, which means that essentially, at that stage, everyone who has access to a quantum computer and one of these algorithms effectively has a skeleton key to all encrypted information everywhere. Obviously, that will change things dramatically, but we're not there yet. But this is the sort of thing that kind of sets us on that pathway. 
While we're in the world of kind-of science fiction, I want to talk about a story published in Vice, and the headline says an ethical AI trained on Reddit posts said genocide is okay if it makes people happy. And I get it, that headline grabs your attention. But let's talk about some stuff. One method of machine learning involves feeding tons of samples to a computer model. We kind of talked about it with Clearview AI in this very episode. So the computer model's job is to sort through the data that's being fed to it and then to make some sort of decision based upon that data. Now, the example I always give is, imagine you've got, like, ten thousand photos, and some of those photos have coffee mugs in them and some of them don't. And you're trying to teach a computer what a coffee mug is, and you're feeding these images into it, and it gives you some results, and some of the things it says are right and some are wrong. It misses some coffee mugs and misidentifies other things as coffee mugs. So you tweak things, you repeat the training, and you do this over and over and over again. You might use a sample that has millions of data points in it, and you might run that test thousands of times in an effort to refine your computer model. Well, no computer magically knows the answer to these things. It's this training process that's important. And in this case, we're talking about an AI called Ask Delphi, as in the Oracle of Delphi, and you ask it ethical questions and it gives you answers. Well, again, it has to be trained to do this, and it's very easy for these kinds of models to be trained improperly. So I wouldn't be at all surprised by this. This isn't a shocking thing to me. It's actually entirely expected, really. But I do think that the people who wrote the Vice article make some good points. 
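That train-check-tweak loop can be sketched in a few lines of Python. This is a toy perceptron on made-up two-feature samples standing in for "photo features," labeled 1 for "coffee mug" and 0 for "not a mug." None of this reflects how Ask Delphi was actually built; it just illustrates the repeated-training idea from the paragraph above, where the model gets nudged a little after every wrong answer.

```python
# Made-up training data: (features, label). The numbers are invented
# stand-ins for whatever features a real image model would extract.
samples = [((1.0, 0.9), 1), ((0.9, 1.0), 1),
           ((0.1, 0.2), 0), ((0.0, 0.1), 0)]

weights = [0.0, 0.0]   # the model starts out knowing nothing
bias = 0.0


def predict(x):
    """Say 'mug' (1) if the weighted score clears zero, else 'not mug' (0)."""
    score = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if score > 0 else 0


# Repeat the training pass many times, tweaking the model after each miss.
for _ in range(100):
    for x, label in samples:
        error = label - predict(x)   # +1 = missed a mug, -1 = false positive
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in samples])  # after training, matches the labels
```

The key point mirrored from the episode: the model's answers are entirely a product of the samples and labels it was fed, so biased or sloppy training data produces a biased model, whether it's classifying mugs or making ethical calls.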
They do say maybe handing ethical judgments to AI is not a great idea, because an AI is always going to be reliant upon the biases that taught that AI in the first place. That also means that you probably shouldn't use AI to be in charge of any system that hinges on ethical judgments, which is a much larger scale, right? It's one thing to ask an AI, hey, is it cool if I do this? It's another thing if you're talking about a system that, at some point or another, needs to make a call about whether something is ethical or not. That starts to really bring in a lot of questions. I think the headline was just a bit sensational, but I think the piece was actually really valuable. And finally, Ridley Scott put it best: in space, no one can hear you toot. At least I think that's how that goes. Also, that's totally not true. If you happen to be in a spaceship that's got an atmosphere in it, and there are other people near you, and it's not too loud in the environment, they might hear you if you start cutting muffins. But I wanted to open the segment with that joke because it's about space tacos. Yeah, tacos in space. So our final story is that astronauts aboard the International Space Station grew a batch of Hatch chiles as part of their experiments aboard the ISS, and on Friday, astronaut Megan McArthur tweeted that she had made tacos using the space-grown chiles as one of the ingredients. Now, the other components all came up from Earth on various launches, so I don't have any exciting stories to tell about space beef here, but this is really cool. Growing the chiles was an experiment all by itself, and astronauts conducted scientific observations before the fruits of their labor could become taco ingredients. I just thought that was a neat story to end on. And that's it for the news for Thursday, November 4, 2021. If you have suggestions for topics I should cover in future episodes of TechStuff, reach out and let me know. 
The best way to do that is on Twitter. The handle for the show is TechStuffHSW, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.