Tech News: AI Getting Armed and More Dangerous

Published Aug 29, 2023, 6:31 PM

As US Senator Chuck Schumer prepares to hold a forum on the risks and benefits of artificial intelligence, the US Army and Air Force are each seeking to incorporate AI and robotics in combat operations. Plus, Elon Musk livestreams himself showing off the latest build of Tesla's FSD mode, with mixed results.

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland, an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Tuesday, August twenty ninth, twenty twenty three. We've got a lot of heavy talk about AI to get through today. First up, US Senator Chuck Schumer will host an AI Insight Forum on September thirteenth. His office has confirmed there will be several important folks in tech present at this event. One of those is Elon Musk, who co-founded OpenAI before he left that organization to go make his own AI development team with blackjack and... well, never mind. Another is Mark Zuckerberg, which makes me wonder if Schumer is prepared to keep Musk and Zuckerberg apart, since you never know when they'll revert back to being high school alpha male types and start scrapping in the hallway between classes. Eric Schmidt, the former CEO of Google, is also supposed to be there. Sundar Pichai, the current CEO of Alphabet, which is Google's parent company, is going to be there, and of course Sam Altman, the CEO of OpenAI itself, will take part as well. The conversation is going to center on the risks and benefits of AI and the development of a US policy on regulating artificial intelligence. Now, these folks are not going to be the only ones there, which is a good thing, because if the only folks at the table happen to be the ones who are eager to avoid as much regulation as possible, you probably wouldn't make a whole lot of progress. Schumer's office says that there will be representatives from civil rights groups, worker advocacy groups, creatives, and that sort of thing as well. The proceedings themselves will actually be done behind closed doors, so there won't be any reporters allowed inside while this is going on, but Schumer says his office will release essentially a summary of what went on during the discussions. I suspect the tech executives will do their best to reduce any impact of proposed regulations, because otherwise that kind of thing affects their bottom line. Now, I do think a serious discussion about artificial intelligence does need to happen as soon as possible, and I don't just mean generative AI. That gets a lot of headlines, but it is not the one and only application of artificial intelligence by far. For example, according to Jared Keller, writing for Military.com, the US Army may soon conduct tests in which they will mount the Army's new SIG Sauer XM7 squad rifle to a four-legged robot provided by Ghost Robotics. The Army already did a similar test with a robot from Ghost Robotics, and they used an M4A1 carbine in those tests. Representatives for the Army have said these tests are to explore human-machine interaction in Army operations, but they do not necessarily indicate that there is a plan for these robots to be quote unquote deployed downrange. That is, the Army might test the stuff out, but that doesn't necessarily mean that in the not-too-distant future, four-legged robots armed with machine guns will be blasting their way through combat zones. So let's all just chill out. 
Critics have repeatedly voiced concerns about arming semi-autonomous and remotely controlled devices, arguing that it can lead to conflict escalation and that the act of ending a human life should be entirely up to another human. Which, y'all, I understand that on one hand, but it's really messed up to think about either way, right? Like, no, this should be left to a person, who will then be left to try and deal with that trauma. But at the same time you think, well, sure, it should be up to a person and not just automated, because that is really super dark and grim. But then I am admittedly a hippie dippy type who isn't so big on the concept of ending human lives in the first place. Anyway, it's not just human rights advocates who have voiced concerns. Several robotics companies, including, famously, Boston Dynamics, have protested the move to weaponize the technologies that they work on. Last year, those companies released an open letter that in part reads, quote, we believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work raises new risks of harm and serious ethical issues. Weaponized applications of these newly capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society, end quote. Obviously, this has not stopped the US Army, and other nations are similarly experimenting with weaponized robotic platforms. So I guess you could say that unless everyone around the world agrees to back off on doing this, and then actually follows through on that promise, the only other option you have is to develop the stuff yourself, which again is pretty grim. But wait, it gets even scarier. Not to be outdone, the US Air Force is seeking a research budget to build at least a thousand unmanned aircraft that can operate autonomously, and I'm talking weaponized aircraft here. These vehicles would serve as wingmen to human pilots and would provide support and cover during combat operations. They could also be sent on suicide missions to achieve combat goals in scenarios where the possibility of survival is approaching zero. As such, the aircraft would need to be autonomous and armed. One candidate for the vehicle that the Air Force could potentially use in this program comes from a company called Kratos Defense. It actually makes me wonder if the company chose that name after the character from the God of War franchise. That game came out in two thousand and five, and the company that is now known as Kratos Defense actually chose that name in two thousand and seven, two years after the game came out. If that's the case, yikes. Anyway, the platform itself is called Valkyrie, which is really another yikes. Valkyries were Odin's war maidens who would escort fallen warriors to Valhalla. The Air Force has been using the Valkyrie aircraft as a support platform for connectivity purposes, essentially acting like a network bridge between other aircraft and other autonomous vehicles that are under Air Force control. But current plans involve using a Valkyrie in a simulation to identify, chase down, and then take down a target over the Gulf of Mexico in a test of its capabilities, which is a triple yikes, really, as you can imagine. 
Critics have protested this initiative as well, with the same sort of arguments that they make against the four-legged robots being armed by the Army, but the Air Force will be requesting a nearly six billion dollar budget to pursue this plan over the course of the next five years. Today is the first day of Google's Cloud Next conference, an event where AI will be one of many topics under discussion. Google will launch an interesting tool at this conference called SynthID. This tool applies a watermark to AI-generated images. The watermark is meant to go unnoticed by human eyes, so when you look at the picture, you don't see that there's a watermark there, but it's meant to be easily detectable with an AI detection tool, which is pretty clever. The watermark won't affect how we perceive the image, but it will reveal the image to be the product of AI generation, and Google says the watermark's design is such that you could edit the image, you could crop it, you could deform it, stretch it in various ways, and the watermark should be unaffected. Google engineers haven't gone into a lot of detail about how this works because they don't want to tip their hand too much, lest folks immediately find ways to game the system and get around the tool. But they have also said that this is really just the beginning for SynthID. This tool is going to face real world tests. People will find ways around it, that's just a fact, and the engineers at Google are saying this, and that's going to prompt changes and improvements on Google's side, and that's just how things work. It's essentially the exact same pattern we saw with CAPTCHAs. There are lots of reasons you would want to employ a tool like this, ranging from preventing the spread of misinformation with deepfakes to avoiding the problem of just mixing up images of actual real world things with AI-generated images of stuff that may or may not exist. So there are a lot of different practical applications for this technology. I'm sure we'll hear a lot more about it as the Cloud Next conference continues. General Motors will also be talking about how it is using conversational AI as the Cloud Next conference continues, specifically with the OnStar service. So OnStar is a connected feature built into some vehicles that lets the driver get support for all sorts of things, ranging from the quasi-trivial to the very serious. GM is using conversational AI to handle the more mundane, low-urgency requests. You know, like if you want to use OnStar to help guide you in navigating to your final destination, that doesn't have to be a human being actually managing the task. That could be an AI agent helping you. For stuff that's more important, like reporting a crash or asking that someone like an EMT be sent to your location, those calls get routed to human operators, which is totally understandable. And by offloading the low-urgency stuff to AI, GM says it has decreased the wait time to get in touch with human operators, and obviously that's a good thing if you really need to speak to someone in the event of an emergency. According to GM, the response to the AI assistant has been mostly positive among drivers, and this is the kind of implementation I can really get behind, using AI to offload less important tasks so that people with specialized knowledge and training can handle the more important ones, particularly ones that benefit from a human touch. 
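To make that triage idea a little more concrete, here's a minimal sketch of what a request router along those lines could look like. To be clear, this is not GM's actual OnStar code; the intent labels and the confidence threshold are hypothetical stand-ins I made up for illustration. The point is just the pattern: software handles low-urgency requests, and anything urgent, or anything the classifier isn't sure about, goes straight to a human operator.

```python
# Hypothetical sketch of an OnStar-style triage router (not GM's actual system).
from dataclasses import dataclass

URGENT_INTENTS = {"crash_report", "medical_emergency", "stolen_vehicle"}
LOW_URGENCY_INTENTS = {"navigation", "points_of_interest", "account_question"}

@dataclass
class ClassifiedRequest:
    intent: str        # label assumed to come from an upstream conversational AI model
    confidence: float  # model's confidence in that label, 0.0 to 1.0

def route(request: ClassifiedRequest) -> str:
    """Decide who handles the caller: the AI assistant or a human operator."""
    # Anything urgent, or anything the model is unsure about, goes to a person.
    if request.intent in URGENT_INTENTS or request.confidence < 0.8:
        return "human_operator"
    if request.intent in LOW_URGENCY_INTENTS:
        return "ai_assistant"
    # Unknown intent categories default to a human, erring on the side of safety.
    return "human_operator"

if __name__ == "__main__":
    print(route(ClassifiedRequest("navigation", 0.95)))        # ai_assistant
    print(route(ClassifiedRequest("crash_report", 0.99)))      # human_operator
    print(route(ClassifiedRequest("account_question", 0.42)))  # human_operator (low confidence)
```

The design choice worth noticing is that the fallback in every ambiguous case is a human, which is what makes the wait-time win acceptable for an emergency service.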
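And circling back to SynthID for a second: Google hasn't published how its watermark actually works, so don't read the following as their method. It's just a toy, spread-spectrum-style sketch of the general idea, a pattern too faint for your eyes to notice that a detector holding the right key can still pick out statistically. Unlike SynthID, this naive version would not survive cropping or heavy edits.

```python
# Toy illustration of an imperceptible image watermark (NOT how SynthID works).
import numpy as np

def embed(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a faint, key-derived +/-1 noise pattern to a grayscale image (values 0-255)."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect(image: np.ndarray, key: int, threshold: float = 1.0) -> bool:
    """Correlate the image with the key's pattern: a marked image scores near
    `strength`, while an unmarked image scores near zero."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > threshold

if __name__ == "__main__":
    original = np.random.randint(0, 256, size=(256, 256)).astype(float)
    marked = embed(original, key=1234)
    print(detect(marked, key=1234))    # True: the watermark is present
    print(detect(original, key=1234))  # False: no watermark in the original
```

The nudges of plus or minus two brightness levels are invisible in practice, but averaged over tens of thousands of pixels they stand out clearly against the near-zero correlation you get from an unmarked image, which is the basic trick any detect-but-don't-see watermark relies on.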
Okay, we've got a lot more news stories to go, but let's take a quick break. Okay, we're back, and next up, we've got another story with artificial intelligence, along with some arguably dumb real-world behavior. At least in my opinion, it's pretty dumb. So Elon Musk, owner of X, formerly known as Twitter, and the CEO of Tesla, live streamed a demonstration of Tesla's upcoming Full Self-Driving version twelve software with him sitting in the driver's seat of a Tesla. This version of Full Self-Driving has yet to be released to Tesla owners, and during the demo, Musk broke a California law which says you're not supposed to have a phone in your hand while you're operating a vehicle. Elon Musk definitely did do that. Following up on that, police are not going to pursue Musk for this because no police officer directly witnessed it happening. That's a prerequisite for charging someone: if it's just a video or whatever and cops did not see it at the time, they will not go after Musk. Plus, even if they did, the penalty for your first offense can be as low as a twenty dollar fine, so it's not like it would amount to much anyway. Also, Musk was technically violating Tesla's own policies, because the company says that Full Self-Driving is a hands-on feature and that drivers are supposed to keep their hands on the steering wheel at all times, and Elon Musk definitively did not do that, so he was defying his own company's policies. But anyway, let's put all that aside for now. During this demonstration, at one point, Musk actually had to take control of the Tesla to prevent it from running a red light. That's not a great moment when you're demonstrating the supposed full self-driving capability of your vehicle. Musk also used Google to look up Mark Zuckerberg's address and then showed it on camera, but he said that doesn't amount to doxing anyone because you could just google it the way he did. To be fair to Tesla, there were several segments of the drive in which the vehicle navigated through construction zones and roundabouts and didn't have any performance issues. Musk also pointed out that Teslas now rely solely on optical cameras rather than sensors like lidar. Another interesting note is that Musk did this demonstration while his company is preparing to defend itself in the first of a couple of upcoming court cases arguing that the company's driver assist features led to fatal accidents. The first court case should begin in California in mid-September. It's a civil lawsuit that stems from a twenty nineteen accident in which a Tesla owner named Micah Lee died when his Tesla, which was in Autopilot mode, veered off a highway, collided with a tree, and then burst into flames. Two passengers in Lee's car suffered serious injuries but survived the crash. The second court case is scheduled for October in Florida and centers on a different crash that happened in twenty nineteen. That's when Stephen Banner's Model three failed to detect a big rig truck that was crossing the road ahead of him, and his Tesla collided with the trailer, which killed Stephen Banner. I can't imagine the lawyers at Tesla are super thrilled about Elon Musk showing off Full Self-Driving on a live stream, especially in a demonstration that required him to take over to avoid running a red light, all while they are simultaneously preparing for these court cases. In other news, Microsoft will soon release version one seventeen of the Edge web browser and will actually be removing some features in the process. 
Microsoft said the decision to remove the tools was to quote improve end user experience and simplify the more tools menu end quote, as reported by The Verge. Truth be told, I hadn't even heard of these features at all, so it's quite possible that very few people are making use of them. Then again, according to at least some statistical analysis firms, Microsoft Edge commands just five percent of the web browser market in total, so you could argue very few people are making use of Edge, full stop. Anyway, the features affected are Picture Dictionary, Citations, Math Solver, Kids Mode, and Grammar Tools. If you are one of the few elite that this actually will affect, you have my condolences. Microsoft will push this update out of the beta phase in mid-September. Karl Bode of Techdirt wrote a piece explaining how e-bike companies, through a trade organization called People for Bikes, have lobbied lawmakers in the United States to make exceptions for e-bikes in various right to repair laws. Essentially, these companies are trying to make sure that they can maintain control of the entire ecosystem for their products, rather than open things up so that customers can either perform their own maintenance and repairs or seek those from an independent repair shop. The argument the group has been making is one we have heard before: that this is really for the customer's safety. The group argues that allowing people to do their own maintenance and repair could lead to an increased risk of stuff like fires, and while e-bikes have been one of those electronic products that have had problems with batteries catching on fire, that has had more to do with poor manufacturing processes than anything else. In fact, when pressed to cite figures about how many fires were the result of an e-bike owner trying to do their own repairs, a rep for the group said that the stories were quote unquote anecdotal, which is another way of saying, I don't have any evidence that this is actually a thing. And apparently these lobbying efforts have been pretty effective, with e-bikes getting exceptions in several right to repair laws around the United States, though not, as Techdirt reports, in Minnesota. However, the Minnesota law did make exceptions for game consoles, medical equipment, and cars. Back in two thousand and eight, California voters approved initial funding for a high speed rail system within the state. This is the same system that would later prompt Elon Musk to say the whole thing was a huge waste of money and that a hyperloop system would be faster and more effective. Of course, the hyperloop failed to materialize; despite several companies trying to make it a thing, it has never manifested, at least not in the way that Musk initially promoted it. In the meantime, over those years, the high speed rail project has moved forward, though very slowly, with the state engaged in construction across hundreds of miles in California while still working to receive environmental approval for some key stretches. And now that project is officially putting out an RFQ, or Request for Qualifications, to look for vendors who would provide the actual trains that will travel on those rails once they are finished. Interested companies will need to respond to the RFQ by November. The California High Speed Rail Authority will consider the candidates and then narrow the search in early twenty twenty four. 
To qualify, the companies will have to be able to build trains that can operate at speeds of two hundred and twenty miles per hour, with tests at speeds as high as two hundred and forty two miles per hour, at least according to Los Angeles news outlet KTLA 5. The company selected will have to build all the trains for the system and provide access to spare parts for thirty years, which is important. Here in Atlanta, we have a train system where the company that made the trains doesn't exist anymore, so getting replacement parts requires a lot more work. The authority's goal is to have the high speed rail service in action by twenty thirty. Personally, I have my doubts that it will be ready by then, simply because these projects are so huge and complicated, and made even more complex by local and state politics, which change with every election. But here's hoping California is able to see this project come to completion and perhaps serve as a model that other states could follow. The lack of high speed rail lines across the United States is pretty embarrassing. Our final story is about how some hackers say they have infiltrated a company called WebDetetive, which makes spyware, and they have subsequently deleted all the device information that the company had, which will make it impossible for WebDetetive to collect additional data from those compromised devices, according to Engadget. That means around seventy six thousand devices will no longer be spying on their owners. Not all heroes wear capes; some of them wear hoodies. All right, that's it for the tech news for Tuesday, August twenty ninth, twenty twenty three. I hope you are all well, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
