
TechStuff Classic: The Great Google Car Crash of 2016

Published Mar 31, 2023, 10:32 PM

CarStuff's Scott Benjamin comes on to talk about Google's self-driving car causing its first car accident. What happened? And how will Google prevent future accidents?

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are ya? It's time for another classic episode of TechStuff. This one is titled The Great Google Car Crash of Twenty Sixteen. Scott Benjamin joined the show to talk about this. This was one of those news stories that was a pretty big deal in twenty sixteen. The episode originally published on April twenty seventh, twenty sixteen. And you know, autonomous cars, well, really we don't have a truly autonomous car yet, but it was kind of a new concept, new in the sense that we hadn't really seen too many of them out on actual streets back in twenty sixteen. So Google getting into a little car accident was big news. Now, obviously, in the years since, we have had a lot more accidents involving autonomous vehicles or semi-autonomous vehicles, so things have definitely continued to be complicated and sometimes tragic, but this one was more of an interesting, lighthearted look at why an autonomous car might get into an accident. So sit back and enjoy this episode from twenty sixteen. We're talking today about a peculiar event, something that happened in February twenty sixteen, and it's just taken me this long to finally get around to addressing it. You might have heard in previous episodes of TechStuff about how I would champion the fact that Google, with their self-driving cars, had had enormous success. Flawless success, you might argue: something like one point four five million miles driven without a single accident caused by the autonomous system. There had been about fourteen or so accidents, but all of those were either the fault of a person driving the car in manual mode or another driver colliding with the autonomous car, but never the fault of the autonomous car itself. It was a perfect system until February twenty sixteen. Yeah, and that is the day of what I'll call, and I really don't mean to overdramatize this at all, so maybe I'm titling this episode, I think maybe we should call this the Saint Valentine's Day Google Self-Driving Car Massacre. Oh, that's an excellent title, and I'm all for it. Definitely not overly dramatic in any way. Well, I mean, it's also funny because we're recording this the week after I've gotten back from South by Southwest, and this was a topic that was discussed heavily at South by Southwest, because until this incident, it was a very easy sell to say autonomous cars are the way to go. And then this little accident happened, and it wasn't terrible, we'll get into the details of the accident, but this little accident happened and suddenly it sounded like Google's autonomous car had caused an enormous pileup on the highway. Everyone was much more cautious. So you're maybe not buying my alternate title, then, is that what you're saying? What's your alternate title? No, no, I'm totally, no, I think it is a massacre, but I think it's a massacre in the sense of the public perception of autonomous cars. I see. Okay, yeah, I'm thinking of it from a PR standpoint. I could take this opportunity to gloat and say, ah, they're not as infallible as you thought, they're not perfect. But I'm going to take a different stance here in this podcast, and I think that as we talk through this, we're going to realize that they've been held to a much higher standard than they probably need to be, right.
And I know that's tough to take, you know, when you just hear it that way, but listen to our argument back and forth about this, and understand that they're being held to perfection when they probably shouldn't be. When humans, I mean, we're not perfect, of course, right? There's a vast chasm between the standards that they're held to versus the standards that human test drivers are held to. Sure, yeah, if you look at the standard driving test that you have to pass before you get a license, I would argue most autonomous cars could likely pass such a test as close to flawlessly as you can get. But you don't have to be flawless when you take a driver's test. There's room for you to not completely do something perfectly. Like, if your parallel parking isn't exactly right, you're going to get points deducted from your total, but you may still score high enough that you could pass the full driver's test. Exactly. You knock over a cone, it's not really a big deal. But an autonomous car knocks over a cone, everybody points at it and says, oh, look at that thing, it's a pile of junk. Yeah, exactly. It's interesting that you point that out too, because that ties into a different discussion I saw at South by Southwest that wasn't specifically about autonomous cars. It was about robots. So this is a little bit of a tangent, but it goes to illustrate the point you just made. I love tangents. So this panel about robots: there was a woman, Leila Takayama, who used to work for Google X, but not in the autonomous car division. She talked about how she ran an experiment. She got a guy from Pixar to do a series of very simple animations to show people the interactions between a robot and a person and then to judge which robot is considered intelligent versus not intelligent. And the whole point of this was to show the differences between succeeding and failing but giving no indication that the robot understands it succeeded or failed, or building in expressions for the robot following a success or failure to indicate it quote unquote understands what happened. And it was fascinating, because they showed a very simple experiment with a robot trying to open a door. And again, this is an animation, so there were different scenarios. There's one where the robot opens the door, and the door opens, and then the robot just sits there, and it's done, because it's done what it was supposed to do. There's one where the robot opens the door and then kind of perks up, like, oh, I did what I wanted to do. There was one where the robot fails to open the door and then does nothing, and then there was one where the robot fails to open the door and then slumps down a little bit, as if to say, oh, I'm disappointed I didn't succeed. They then asked people to judge which robots they thought were the most intelligent, and everyone said the robot that failed but showed disappointment was more intelligent than the robot that succeeded but didn't show any expression at all. No kidding. And when you think about that, again, it's holding robots to a standard that doesn't necessarily apply to them because of the human element, this human-robot interaction. We're holding autonomous cars to a similar standard that perhaps is not fair. We're holding robots to a standard that's not fair. But that means that people who are designing autonomous cars and people who are designing robots have to take that into consideration, because that's the way humans are.
Well, this is interesting, because you're mentioning specifically imitating human behavior. Yes. And this comes up in an article that I read in, let's see, it was in The Verge. Yes, and The Verge article, you might have read the same thing. An excellent article. Yeah, it really is. And a person by the name of Jennifer Haroon, she's the head of business operations for Google's self-driving project. And by the way, let's come back to the details of the accident in just a moment. Sure. We'll describe what happened, but she says that, well, you know what, maybe I need to back this up just a second here. How about this: let's describe the accident and then we'll talk about what she said, because it plays perfectly into it, and it helps in understanding what happened. So here's how you can imagine it. All right, you've got an intersection in Mountain View, California, which is where Google's headquarters is located, and you've got the Google self-driving car. Correct me if I get any of this wrong, Scott, I'm going from memory. That's all right, I'm doing mostly the same. So the Google self-driving car is in the right lane, and it's planning on making a right turn at this intersection. Yes. Now, at the corner of the intersection, there were some sandbags that were a partial obstruction of the lane. Yeah, I think they were blocking a sewer entry, a grate maybe, or something. Right. So the Google car detected that there was an obstruction, and so it had to plan an alternate way to make its right turn. It still wanted to follow the route that it had planned, so the change would have been for it to kind of edge into the next lane, over into the next lane to the left, before making a right turn. Behind the Google car, approaching at a blistering speed of fifteen miles per hour, was a bus, and so the Google car recognized there was a bus coming. It was moving at a very slow speed, at two miles per hour. The Google car said, well, based upon my programming, what I should expect to happen is that the bus will slow down and allow me to move through. I'll clear the intersection, the bus will continue. What actually happened was the Google car made the move into the lane, the bus continued, and there was a low-speed collision, and there were no injuries. No one was hurt. There was, in fact, a driver behind the wheel of the Google car, it's just the driver wasn't in control. The autonomous system was in control. And some people might say, well, you know, the bus driver just didn't let the car in. But Google actually said, and this is important, Google came out and said, we accept responsibility for this. This is something that, it's valuable that this information has come to light, because it means that we need to revisit this particular part of the autonomous car programming. Now, I thought that was really interesting. First of all, I have never heard of a company accepting responsibility for something so fast in my life. Well, they did and they didn't. I mean, there's a couple of versions of this. Now, you got the details of the accident correct, although I did hear, and this is a bit confusing, I did hear that the lanes in this particular part of town are extremely wide, and so what happened was the car kind of edged itself over toward the curb. So it was, I guess, mimicking human behavior again, and I'll get to that in just a minute.
But, um, you know, once this accident happened and they said, you know, we do need to investigate this, they did that, to the tune of about thirty five hundred new tests that they've now implemented since this accident that said, we're going to watch for this. You know, we need to understand a little more deeply that some of these larger vehicles may have a more difficult time stopping in traffic than a smaller vehicle will, and that's the reason why some of these bigger vehicles like to continue on their path and think, well, maybe someone behind me will let them in. Google did say, we were relying on an element of human kindness to let us into that lane, and that's normally what happens. It really does. Usually there's a back and forth, or, you know, there's always going to be that offhand time where, you know, someone does cut through, and they're just like, no, I need to get through that intersection in this light cycle, and no one is going to stop me. Yeah, I mean, I'm gonna be ten feet ahead of you when all this happens, right? That's, you know, that's my goal. But usually what happens is it's an alternating pattern, and they expected that to happen, and it didn't happen in this case. And this is where it plays right into what Jennifer had mentioned. Now, Jennifer Haroon, who is the head of business operations for Google's self-driving project, explained, and I think she was at the South by Southwest conference as well, she said that the Lexus, it's a Lexus vehicle that was outfitted with this gear, that it struck the bus in part because it was imitating human behavior. And I found that interesting, that she kind of is deferring the fault here, to say, well, we were just imitating what we see on the streets, and that's kind of what happened. Well, it's a double deferral in a way, right? Because first they say, we were counting on an element of human kindness, which is already kind of a deferral in itself. Sure. Yeah, you're essentially saying, well, we were thinking that the bus driver would be a decent human being and not a Lexus-hating scumbag. That's, I'm paraphrasing what they said, obviously, I'm taking a little liberty. And then in this one, she's also saying, well, we designed the car to behave the way we see actual cars behaving on the road. So in both cases, it's a little bit of backing away from taking full responsibility. Exactly. And this imitating human behavior that she's talking about was that they had recently taught the vehicles to hug the right-hand side of the lane when they're making that right-hand turn, and that's when it encountered the sandbags that were unexpected. And so this is what I find interesting: if it's a wide lane, and she's saying that it was hugging the right-hand side of that lane trying to make the turn as most humans do, if it was in the center of the lane, she's saying, if it had just behaved as they normally would do, you know, if it had been in the exact dead center of that lane, the bus wouldn't have had the gap, I guess, to try to make it through, so it would have just been behind the car. It never would have happened. So she's saying, in effect, because we're trying to make it mimic human behavior and we were hugging that right side, that's why this accident happened. Maybe we shouldn't have done that.
But then again, they come back and say that it's absolutely necessary for them to mimic human behavior, because if they don't, that causes trouble as well. There's other issues there, right. If a vehicle, let's say that it's an autonomous car heading toward an intersection, the light goes from green to amber, and there's technically enough space for the car to brake safely and come to a complete stop as the light turns red. Knowing that most humans would just gun it, or at least just continue at the same speed to go through the intersection while it's still amber, you might want to think about that when you're designing your autonomous car so that you don't cause a pileup behind you. Like, if the person directly behind the autonomous car expects the car in front of them to continue through the intersection, you could potentially get rear-ended. Yeah, that happens a thousand times a day around here, I mean, more than that, but it happens all over the world, really, especially in Atlanta, where the rule is, if the light turns red, three cars get to go through. It is so true, isn't it? Yeah, it seems like once one goes through, two more follow. Yeah, it's crazy. Yeah, I've seen it happen in multiple places around the city. There's some neighborhoods where it's worse. I'm not going to name any names, Buckhead, but I'm just saying, yeah, never accelerate immediately on a green light anywhere in this area. You can expect there's going to be that oddball car that comes through every time, right, and there's gonna be, like, a car that's been waiting to turn left the whole time, and they say, I'm not waiting another light cycle, I'm just going now, even if they're behind the stop line. Yeah, I'm sure it happens everywhere. It's particularly bad in these congested areas. Yes. So to your point, it is important to take those things into consideration when designing the autonomous car. You don't want an autonomous car to drive like an inconsiderate jerk of a driver. But at the same time, you can't have it be so clinically precise that it is standing out from all the other drivers. The only way that works is if you get to a point where you're at a saturation point with autonomous cars on the road, where then you can affect the behavior on a mass scale across a fleet of cars and not have that issue of human drivers having awful interactions with robotic drivers. Exactly. Here's the way they stated it, the spokesman stated it: they say, it's vital for us to develop advanced skills that respect not just the letter of the traffic code, but the spirit of the road. I think that's a good way to put it, the spirit of the road. I understand that. I completely get that when I read it, is that, yeah, there's little rules here and there that we'd bend, but everybody bends them, and you understand how other drivers are going to behave in the same situation, and you expect that to happen, right, and you behave in that way and it all works. But when something comes in, a spoiler comes in, and it follows exactly to the letter of the law the way it's supposed to happen, that person, maybe, you know, is the standout, right. Another great example of that in Atlanta would be, we have a couple of different highways that run through the city and one that runs around the city, two eighty five, and two eighty five is often thought of as the type of highway that, if you get on it, you have to speed. You cannot go the speed limit on two eighty five.
It's just too dangerous, because everyone else is going above the speed limit. Massive truck traffic. Yeah, yeah, and there are enormous, enormous semis rushing down there, and you don't want to be poking along when they come up behind you. So again, an autonomous car would need to have that information and take that into account, unless you got to a point where you had so many autonomous vehicles on the road that it was no longer a concern. Yeah. And this is where, we discussed this yesterday, because we were talking off air about this just a little bit to prep for today, and the idea would be that's kind of like schooling, almost like fish schooling, in that the cars know where the other ones are at all times and they can communicate between them. The problem is when you throw the human driver element into that mix, or, you know, if you have just one autonomous car among all humans, that's the other problem. It's the other issue, and right now, that's the battle that they're fighting. Right, right. So once we get to a point where there's that tipping point one way or the other, then things will be very different. But there's going to be some growing pains. And this also leads into something that I talked about earlier in twenty sixteen, when I went to CES. Toyota had their big AI discussion, you know, they're investing millions of dollars in AI research for autonomous cars and beyond, and one of the things they talked about was how autonomous cars in general are really, really good at handling all the mundane stuff that you would typically encounter on a normal day driving from point A to point B. What they are not good at is dealing with stuff that's outside of that norm, and the sandbags that we talked about earlier would be a great example of that. It's some form of obstruction that's partially blocking off part of the road, and that ends up causing a different scenario, and sometimes the car behaves in a way that works out for everybody. In this case, it didn't. And it's not that the car couldn't handle the situation, it's just that the method that the car used turned out to not be reliable. Yeah, this was an extremely slow-speed crash, as we've said. Yes. The bus was traveling fifteen miles per hour in the other lane trying to get through that gap, but the Google car was traveling, I think they said, at two miles per hour. Yeah, very, very slow speed. So the thing is, like, with the compensation for this, you know, the thirty five hundred additional tests that they're now going to run to determine, or to find a way around, that situation so it never happens again. We're gonna do everything we can. But to think about it that way, to say, the thirty five hundred tests are going to allow this vehicle to think about that exact situation and never let it happen again, where it just kind of noses out into a lane that appears open. Yeah, that's remarkable. I mean, it just lets you know that there are tens of thousands of, um, programs, or thoughts, I don't know how to say it right, that are going through this thing at all times, you know, um, as calculations and parameters and just, you know, if-this-then-that. Those scenarios are being run all the time. It's just incredible, mind-boggling, it really is.
And I was looking into, you said one point four five million miles have been driven, uh, flawlessly, right? I mean, they hadn't had any problems, you know, at fault, I guess, for the autonomous vehicle. Do you know how much they test on a daily basis, actually, on a daily basis? No. No? Okay, well, let's see, I've got a note here. I should have looked for that. Let me read that. Okay, here we go. All right, so actually this is a per-week and then a per-day thing. All right. They drive ten thousand miles per week, and that's, like, you know, somebody in a vehicle on the road, ten thousand miles per week. Per day, though, this number is incredible: per day, they are driving three million miles of computer-simulation miles. Oh, so three million, the equivalent of three million miles. That's because they can quickly just go through that and have multiple systems running these things, you know. So the amount of testing that they do in a year is just unbelievable. I don't have yearly stats or anything, but you can extrapolate those numbers out. Well, and the other point in the Toyota press conference that was interesting to me, and this goes back to what you were saying at the beginning of the show about holding autonomous cars to a different standard than we hold human drivers: they talked about how a lot of the autonomous car industry talks about the one hundred million mile benchmark, saying that you want one hundred million miles traveled of proven safety. And they said, you know, that's not enough. You need to go much bigger than that, one hundred billion miles. And I thought, wow. I mean, I get it, you want that many miles so that you can encounter as many possible different situations as you might encounter on the road, because obviously, if you plan a system and it's great for handling ninety nine percent of the situations, that's fine until you run into that one percent. And when you figure out how many cars are on the road in the United States alone, traveling on any given day, you realize the odds are, eventually, I mean, statistics show, statistics prove, like, odds are, sooner rather than later, one of those autonomous cars will encounter a situation that would have been impossible to anticipate in the programming phase. Yeah. So, well, I get it on one hand. On the other hand, I get frustrated, because I really want to see this future get here as soon as possible. But I totally understand the need for that level of precision that's demanded, so that you can be sure that nothing catastrophic happens when a car encounters something that the programmers just did not anticipate. Now, Chris Urmson, again, he was the, what was his title, shoot, I think it was director of the self-driving car project. I can't remember if it was director or not, but he did say, and this is, I don't know, you can find this troubling, I guess, if you want to, but I understand what he's saying. He says that, you know, of course February fourteenth was a tough day for his team, obviously, but he says, and I thought this was interesting, he said, we've got to be prepared for more days just like that if we're ever going to succeed in creating this project, you know, making this work. And we're actually gonna have worse days than that. And when I hear that, you know, we're gonna have worse days than that, of course you think, you know, the worst.
Do you think that it's going to be involved in an accident that is fatal, or, you know, harms somebody, anybody in any way? And of course that would be an awful day. That'd be a worse day than what we've seen. But you kind of have to expect something like that is going to happen if you're traveling, if you're traveling three hundred billion miles like you said, or, you know, whatever the enormous number of miles on the road that they want to travel is. Um, I would guess that, you know, when you're talking about three hundred million or billion or whatever it was, those might be computer-simulated miles, because, you know, three million a day is an enormous number and that adds up quickly. But, you know, ten thousand miles per week of actual, you know, physical on-the-road testing, that's pretty impressive still. But how long would that take to get up to three hundred million? And, you know, I think somebody who laid it out pretty clearly here is the US Transportation Secretary. His name is Anthony Foxx. And, you know, he's the one who said, where I initially read it, I guess, don't compare these to perfection. You can't do that. And one of the quotes here in an article that I read from the BBC says, he says, it's not a surprise that at some point there would be a crash when they've got this brand-new technology on the road. But what I would challenge anyone to do is to look at the number of crashes that occurred on the same day that were the result of human behavior. And that gets right back to what you were saying, in that, you know, there's so many miles driven every day, just here in the US, around the world, all over the place, that you just know that bad stuff is happening all the time, every minute, literally. All right, but this is a great opportunity for me to transition from the Google story, which, you know, again, has huge implications for the autonomous car industry. Even though it was, in the grand scheme of things, a minor accident, it was something that, once you realize, oh, they're not perfect, then it starts raising some questions. These talks were a bit more subdued after that point. Yeah, and at South by Southwest, like, that was definitely happening, although I went to a couple of different panels about autonomous cars where they didn't even bring it up. They were gung ho. I mean, the general feeling at South by Southwest is that autonomous cars are a definitive future that is coming, and that most likely there will be some form of shared-services model for autonomous cars. I think most people agreed that personal ownership is going to slowly phase out, largely because younger generations don't necessarily see the necessity of owning a car. And there were some interesting statistics too. I saw a panel, it's called Robot Cars and Sharing: Road Rage or Smooth Sailing, and this one had three people on it, a moderator and two panelists. The moderator was Frederick Soo of a company called Nauto. Nauto creates an app and a camera setup where you can essentially upgrade your car into a smart car, not an autonomous car, but a smart car, where it's able to use information from the camera and run it through some algorithms that are on the back end of the data system that then transmits to your app to let you know things like how good of a driver is the driver, that kind of stuff. So it's also a use for, like, fleet management.
You can use it to figure out if the driver you've just hired to be one of your employees, if that was a good choice or not, or maybe you need to rethink that, that kind of stuff, based on driving. Yeah, and it pulls information from a lot of different sources, but the camera is the primary one. He was the moderator. And then you had Shad Laws from Renault, who was funny because he talked about how Renault is a brand that is famous around the world but not here in the US, but you might know our partner Nissan. And then, uh, yeah, we knew Renault back in, what, the mid eighties, I think. Yeah, that's about it. Yeah. And then there was Mark Platschin from BMW, who was actually a substitute. Originally it was supposed to be Marianne Wu of GE Ventures. We'll be back with more of the Great Google Car Crash of Twenty Sixteen after we take this quick break. So let me throw some statistics at you, or some of the facts at you, that Soo brought up. So one of the things he said was that the typical American car spends ninety six percent of its life parked. That's an enormous chunk of time. Yeah, so only four percent of your typical American car's life, knowing that there are cases outside on either end, four percent of it is actually used driving around. So when you hit someone with that, assuming that that is in fact correct, I don't know where his source was for ninety six percent, but assuming that is in fact correct, you can start to see an argument for a fleet of autonomous vehicles that can drive around on demand and pick someone up and drop them off, because that means you could free up the space that would be taken by a parked car and use it for something else, because, I mean, a lot of our spaces are reserved for parking. In fact, there are regulations for office buildings about how much square footage you have to set aside for parking in certain cities. It depends on the city. But imagine that you have a world where people are relying on autonomous cars to pick them up and drop them off. You don't need that space for parking anymore. You can actually dedicate that to something else and make beaucoup money, I think is the way they put it. But anyway. Plant a tree. Plant a tree, you could also do that, I think. Come on, you tree hugger. No, I also think that would be awesome. So one of the things that I thought was shocking, and I think the effect on me was not what the speaker was planning: Shad Laws of Renault was talking about the safety factor of autonomous cars, and his argument was that, with autonomous cars, first of all, we can't determine that they're more safe than human-driven cars yet, because we don't have enough information. We don't have enough autonomous cars on the road, we haven't had enough scenarios to really tell. But then he also said safety is really not as big a deal as you might think, because the safety benchmark is to try and have fewer than one fatality per one hundred million kilometers driven. Now, in the United States it is one point zero eight fatalities per one hundred million miles. But a mile is longer than a kilometer, right, one mile is about one point six kilometers, so it's still below that one fatality per one hundred million kilometers. And then he said for most countries that's the case. There are a few that are above it, but not many. So is this an unrealistic standard to be held to?
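As a quick sanity check on the numbers tossed around in this stretch of the conversation, here is a back-of-the-envelope sketch in Python. It uses only the figures quoted in the episode (1.08 US fatalities per 100 million miles, the one-fatality-per-100-million-kilometer benchmark, 10,000 real-world test miles per week, and 3 million simulated miles per day); the script itself and its variable names are illustrative, not anything from Google or the panelists.

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the episode.
# Illustrative sketch; the variable names are not from Google or the SXSW panel.

KM_PER_MILE = 1.609  # one mile is roughly 1.6 kilometers

# Fatality rate: the quoted US figure is 1.08 fatalities per 100 million miles,
# against a benchmark of fewer than 1 fatality per 100 million kilometers.
us_per_100m_miles = 1.08
us_per_100m_km = us_per_100m_miles / KM_PER_MILE
print(f"US rate: about {us_per_100m_km:.2f} fatalities per 100 million km")  # ~0.67, under the benchmark

# Test mileage: 10,000 real-world miles per week vs. 3 million simulated miles per day,
# measured against the 100-million-mile benchmark mentioned earlier.
real_miles_per_year = 10_000 * 52         # about 520,000 miles per year on actual roads
sim_miles_per_day = 3_000_000
years_on_roads = 100_000_000 / real_miles_per_year    # about 192 years of road testing
days_in_simulation = 100_000_000 / sim_miles_per_day  # about 33 days of simulation
print(f"Reaching 100 million miles: ~{years_on_roads:.0f} years on the road, "
      f"~{days_in_simulation:.0f} days in simulation")
```

The conversion backs up the point made just above (the US rate does come in under the one-per-100-million-kilometer benchmark), and the mileage math shows why simulated miles keep coming up: the benchmarks discussed here are effectively out of reach on physical roads alone.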
Well, I think what he was trying to say is that human drivers are pretty safe already, and therefore you can't sell autonomous cars on the promise of safety, because we're so safe already. I would counter that argument by saying more than thirty thousand people died last year as a result of car accidents. That's thirty thousand fewer people around today because of a car accident, and something on the order of ninety percent of car accidents are the fault of a human driver, at least one human driver. And so my counter to that argument is that it may be, statistically speaking, a safe thing, but when you get down to actual numbers, with real human lives attached to them, I would argue that the autonomous vehicles so far have proven to be a really good move in the right direction to reduce that number dramatically. This is dangerous territory you're wading into here, because on our show, on Car Stuff, we sometimes talk about, you know, the incredible rise in fatalities on Georgia highways last year, because there was a huge increase, like a twenty five percent increase or something, you know, year over year. Wow. And it was really big, and it was the first time in a long, long stretch of time where, you know, it had actually been on the rise. It was going down up until that point, and then suddenly this big spike, and we're trying to figure out why, and we're talking about distraction and all that stuff, you know, smartphones and things behind the wheel, and trying to just, you know, guess why it's happening that way. And of course somebody writes in and says, well, thirty eight thousand people is not that many people. And you say, well, that's a lot of people to die on US highways. And they say, well, back in the nineteen fifties the number was like forty four thousand, and that, you know, was with fewer drivers on the road. And they give you all these stats about population and number of miles driven and all that, and I gotta be honest, I get kind of confused with that, you know, with that angle. Sure. Like, trying to compare apples to apples, you know, back then, sixty years ago, to today, that's kind of tough to do. Well, especially, you know, there's so many other factors there, right? Yeah, you might have fewer drivers on the road, but your safety regulations weren't anything like they are today, years ago. Exactly. That was one thing. And we always argue that point too: there were no crumple zones, there were no air bags, there's none of that stuff going on, so maybe that accounts for it. But then they counter with another argument. So I'm just saying that, you know, I feel that somebody out there is going to have some kind of issue with, you know, mentioning that thirty eight thousand is a huge number. That was what it was last year in the US alone. That's a huge number no matter how you look at it, I mean, even if there are more people driving. I think my response to anyone who would argue, like, that this is less than in the past, I would say, that's good, but it could be lower, and a lower number of people who die as a result of car accidents, I think it's hard to argue that that's a bad thing. No, you certainly want that number to be as low, as close to zero, as you could possibly make it. Of course, automakers strive for that. They're trying everything they can to make essentially a deathproof car.
I mean, you can't, you know, you can't account for every situation, every single situation, but they're doing their best to make what is essentially a deathproof car. And there's several, you know, several makes that have gone years without a... Yeah, I've got, you know, the stats somewhere back on my desk, but there's a few that have gone, I'm going to guess here just based off my memory, it was like five or six years without a fatality, right, caused by a fault in a system in their vehicle. Right. Well, and that also leads me to a different panel that I saw. We'll come back to the road rage one, because we've got to get to the BMW one. Yeah. But the other panel I saw that was related to this was called Looking Forward to Rush Hour: The Future of Transit. Looking Forward to Rush Hour, yeah. This was from a couple of industrial designers with a design firm, talking about the future of transportation, and it wasn't just autonomous cars or even just the future of cars. That was one half of the panel, and the guy who led that part was Dan Dorley, but there was also Chip Walters, who did the other half, which was more about the Hyperloop, also fascinating, but we're not talking about the Hyperloop today. So, switching back over to Dorley, one of the things Dorley said that I thought was really interesting was that once you get to a level where you have a lot of autonomous cars on the road, like, let's say the majority of cars on the road are autonomous, and you have proof, I mean, obviously this only works if everything's working properly, but you have proof that because of the number of autonomous vehicles on the road, the number of crashes decreases dramatically, the number of deaths decreases dramatically, then you can start to play around with other stuff. Because if the autonomous cars are a proven technology that's safe, you can let up on some of the major safety considerations you've had to put into place over the last few years in order to minimize that number that we talked about, that thirty thousand or higher number. You could remove crumple zones. You can make cars smaller and lighter, which is especially important if your cars also are electric, because the battery will have less weight to have to move around. That will extend the driving range of your vehicle, because you've made your vehicle lighter. Not that the battery's gotten any better, but it doesn't have to push as much weight around. Sure. And again, this only works, though, if every vehicle out there is the same. Yeah, you have to have enough autonomous vehicles, at least the majority, if not all of them, out there, so that you can be confident that by eliminating those safety features that are important right now, it's not going to make any difference. And I think we're pretty far away from that. But I thought it was an interesting point. He also talked about more car manufacturers creating a sort of universal chassis, where lots of different bodies of vehicles could fit on top of the same basic chassis, leading to a future where, ultimately, you can, and you can do this now, actually, if you've got enough money, you can go to certain specialty companies and three-D print a car design. You could design a car if you wanted to and three-D print a car body that fits on top of a particular chassis and motor and drivetrain configuration, and so you could have your own. Like, people would say, well, what kind of car is that? Well, that's my car.
I call it a Strickland. Yeah, it's a Strickland. It doesn't drive anywhere. Um, yeah, that's a joke about me not driving. But yeah, I thought it was interesting that he was looking into implications of autonomous cars well beyond safety, well beyond the shared model. He was looking at autonomous cars like, well, what does that do to the design of the car itself? That's interesting, that, you know, you could eliminate the things that we find that we have to have now. Yeah, and that's an interesting way to think about it. Like, if it's just not necessary, what could you really pare that design down to? What smart things could you do with that to make it work better as an electric platform, as an autonomous platform? Right, you know, it all makes good sense. But again, you're counting on, you know, participation in this. Yeah, you would need to have enough buy-in so that there isn't a risk of having something like we saw with the previous Google tests. You know, we talked about there being more than a dozen accidents involving Google self-driving cars previously, the ones before February twenty sixteen, and they were all the fault of a human driver, either the person manually controlling the self-driving car or another driver. So the same thing is true: if you're in an autonomous vehicle and there are human drivers on the road, then there's a chance that one of them could make a terrible, like, there could just be an accident. It could be a failure, it could be a distracted driver, a drunk driver, it could be anything. And until you eliminate those possibilities, it is pretty dangerous to just say, let's eliminate crumple zones. It's not very dangerous, though, to do other stuff, like, imagine, you know, you have no need for controls, so you free up all that space in the front that would normally be dedicated to the steering wheel and pedals and that kind of stuff. You could have a workstation or an entertainment station. Because you're not driving, you don't even necessarily have to face forward. Yeah, you can face backward. I had a discussion about this on Forward Thinking, and Lauren immediately said, yeah, I could never do that, I'd be yakking all over the inside of that car. Yeah, I think a lot of people have that trouble on a train already, or a bus, you know, in certain situations. Right, but imagine if you could sit sideways. Yeah, the design of the vehicle could just be so radically different that none of that really matters. You could probably design, you know, those honeycomb systems where you could sleep in the car if you wanted. Yeah, it's kind of funny, because it actually opens up an enormous opportunity for designers. Yeah, right, an unprecedented opportunity, because you would be completely transforming the interior of a car. All the things we associate, well, not all, but a lot of the things we associate as being the definition of what the inside of a car would look like go out the window, I mean, figuratively speaking. And so you could then have all kinds of different configurations and designs, almost more like home design, really, or room design, some sort of interior design for vehicles. Yeah, it's a strange, strange thought. Hey, by the way, I want to clarify one thing really quick. Sure. Just something's been bugging me for the last ten minutes. All right. I do know that there were Renault cars on US roads prior to the nineteen eighties.
I was just mentioning their brief comeback, you know, with the Alliance lineup, and, uh, the kind of, I guess I'm going to mention the failure that that was. It was not all that well received. Yeah, but I think most of my listeners, most of them, let me clarify, most of my listeners in the US are probably unfamiliar with the brand Renault. Yeah, probably because I'm guessing many of them were born in the eighties. Yeah, it is a seldom-seen vehicle on the roads here in the United States, but in other parts of the world it is a very popular one. Yeah, I mean, a lot like Peugeot or something like that. You know, there's reasons, but that's not the right show. It's so funny when you start throwing around car manufacturer names and car brand names, and then you come to that weird realization that in other parts of the world there are totally different ones, and some of the ones that are prevalent in the United States are largely unknown in certain parts of the world. And it just reminds you, like, oh yeah, that's right, the whole world isn't the US. Yeah, well, it is strange. And once you travel outside and you see the same vehicle but it's named something different, or something like that, it's just unusual. It is eye-opening, really. Now, I've been teasing this for the whole episode, but let's get back to BMW and Mark Platschin. Is this intended to hurt me? No, it's not intended to hurt you. I just want to see what your reaction is, Scott. Scott and I started talking about this off microphone yesterday, and as I was talking, a little voice in my head said, shut up, Jonathan, save it for the show. So that's what we're gonna do. It's not really, you might just shrug and say, oh, all right. But in order to set this up, first, what is BMW's slogan? Now, they're known as, it's a driver's car, right? Yeah, it's, um, the ultimate driving machine. Yeah, I mean, the ultimate driving machine. The ultimate driving machine. So you would think that, you know, of course they're going to dabble in autonomous systems, like, you know, maybe adaptive cruise control, something like that. But I just, I've had a hard time all along seeing BMW going fully autonomous because of the way they market their company right now. It is the ultimate driving machine. It's a driver's vehicle. If you want something that's fun to drive, that's an experience, you get a BMW. You get, you know, something that's top of the line, it's expensive, it's plush, it's a well-handling car, it's powerful, it's everything you want, and again, the ultimate driving machine. So why are they messing around with autonomous vehicles? That's my thought. Platschin works specifically with the autonomous vehicle section in BMW, and his response to the first part would be, I imagine, I'm putting some words into his mouth, so take this with a grain of salt, but I imagine he would say, it's where the future of vehicles definitely happens to be. We completely understand that, and you cannot ignore it. If you do, you'll be left behind. Yes. He said that the company was at a real, they were in a quandary, he actually said that. I think it was last year he was brought in to talk, or maybe it was a few years ago, he was brought in to talk about the concepts they needed to talk about in an upcoming conversation. They were going to hit, like, some corporate milestone, and they wanted to talk about what are the next one hundred years of BMW going to look like?
Now, anyone who's listened to Forward Thinking, you know, predicting the future is hard. Predicting five years out is hard. Predicting one hundred years out is impossible. I can't. Yeah, the only thing I can predict about tomorrow is that if I don't wear some sunblock, I will be sunburnt. That's it, because I know I'm gonna be outside a lot. But he said it was his job to try and help coordinate this vision of BMW for the next one hundred years. And taking into account the fact that autonomous cars are, I mean, everyone at South by Southwest was talking about them as if it's a foregone conclusion, that's the future, that's where we're going. So taking that as part of it, they actually had serious internal discussions: what does this mean with our slogan, the ultimate driving machine? What do we do? Do we rebrand? How do we rebrand? This is something we pride ourselves upon. It is a corporate identity. It's kind of the central mantra of the company. It's the DNA of the company. They started playing with alternatives to the slogan, like, maybe we change it to something else, and they tried a few different things out, all internally, and no one liked them. No one liked them. And then finally someone said, well, technically it's a driving machine. It's a machine that's driving. It is the driving machine. We can keep the ultimate driving machine. So it's still the same slogan. The context is redefined. Oh boy. Yeah, boy, I don't know. Yeah, I don't know about this. All right. Well, so it's almost like you're putting the emphasis on the other part. I don't know, how do you even look at that? I guess it's how you would say it. I'd say the emphasis, the emphasis previously was on driving, because you think of driving as a verb that people indulge in. So now it's the ultimate driving machine. I see. Okay, well, boy, that's so subtle. So, and this was also an interesting discussion, because people asked the question, what happens to brand identity in a future of autonomous vehicles that are likely not going to be owned by individuals but will be in some form of shared economy? And they had a really good response for this. They said, well, you could argue that all autonomous cars would essentially be alike, that one, you know, robo Uber car would be the same as the next robo Uber car, except eventually someone would come along and say, you know what, we're going to make a different robo Uber car that has X features in it, which appeals to Y demographic. Yeah, somebody will pay a premium for that feature, right? Because if you're like, hey, we noticed that young people between the ages of such and such and such and such, they really care about these things and they don't care about these other things, let's make some cars that go straight to what they care about, and we'll be able to dominate that market. And then you get competition there, because other companies will follow, which means you still end up getting that differentiation, you still get the brand identity. The question is, how do they define themselves so that the experience of being in, say, a BMW autonomous car is different from being in a Lexus autonomous car? Well, now we know: all they do is put the emphasis on machine. That's it, right? But I thought, I guess it beats, like, BMW: we give up, or BMW: it was fun while it lasted. Yeah, yeah, you can come up with a bunch of funny slogans for it, I'm sure, but honestly, like, to stick with what they have.
Really, I think maybe, if that's what they're gonna do and they're gonna push it that way, that may be exactly what they do. You may hear that emphasis on machine over driving. So that's gonna be really weird, isn't it? I think so. I mean, I think it's going to be weird to be in a world where, assuming that the shared-car approach is what wins out, it'll be weird to live in that world for lots of different reasons, because a lot of us are very used to having our own personal vehicle for multiple reasons, not just for convenience's sake, but convenience outside of just, I have a car whenever I need to go someplace, assuming it's not broken down. Scott and I will conclude our discussion about the Great Google Car Crash of Twenty Sixteen, but first we need to take another break. What about all the stuff that's in a car? Like, a lot of people have stuff that they keep in their car, and it might be work equipment or, you know, things like that. Diapers. Yeah, stuff like, new parents might have a box of diapers in the car so that when they travel places they have their supply right there. Sure, if they need to run out to the car, they can. But in the future, if you have shared vehicles, obviously you can't just keep stuff in a car. You'd have to carry everything you need with you all the time, and you would either have to pare down the stuff, so that you're saying, well, I might not be prepared for certain situations. But yeah, it's a trade-off. Isn't it really nice to just kind of leave an umbrella in your car and have it when you need it, and you don't have to remember it every single time you go out the door? And it was funny, because Platschin actually said, well, maybe we'll have services where you could actually store your stuff in what would end up being, like, a mobile storage unit, and you can just call upon it to come to you whenever you needed it. And I thought, that puts more cars on the road. That's a terrible idea. Yeah, I'll just tell you right now, that's a bad idea. Joe and Lauren both agree with you, and I do too. I also thought, like, well, that doesn't sound like that's ideal. So yeah, there's obviously some huge trade-offs that would happen. Well, imagine a block of lockers, you know. Would it be a block of lockers driving down the road with your stuff and everybody else's stuff in it? Yeah. And what if someone went across town? I mean, I know that they would probably keep it in a central area. Central area, yeah. But people don't all, like, let's say that my next-door neighbor and I both use the same unit, because we both live next door to each other. We don't necessarily work anywhere close to each other, so he might work on the other side of town. He needs his umbrella, I need my umbrella. That car, I mean, easily you could see problems with that model. There was a similar model that I also, let me see what you think about this one, Scott. So, talking about shared cars now, in the examples I've been giving so far, it's essentially a fleet of service vehicles, something along the lines of an Uber or a Lyft, only with no human drivers, right? Yes. One of the alternatives I heard, Shad Laws actually mentioned this possibility, which I think is almost as bad as the traveling locker idea: what if, instead of it being a fleet car, it's a communal car among multiple households, and you own, like, a sixth of that car? Can you imagine that working out?
How would you guarantee that the car would be available for all the households? Well, I guess the, okay, this isn't as bad an idea as, I mean, I understand that it's not great. Yeah, and there's a lot of flaws to this one as well. But, um, isn't this kind of the idea behind, you know, the companies that allow you to have kind of a lease on three different types of vehicles at one time, and you can use the one that you need when you need it? So you lease, um, you know, a sedan, you lease a compact car that's very good, you know, with mileage, and you lease, you know, a pickup truck, and when you need a pickup truck on the weekend, you can rent that, you can have that brought to you, or you can go get it, um, use it for that amount of time. But what if somebody else is using that sedan when you need it? You know, they need it for the week and you also need it for the week. I mean, how does that all work? I don't know. Again, same set of problems, I think. Yeah, maybe a smaller scale than the one that I'm talking about. I mean, maybe, the only way I can see it working is that you again go back to the fleet of cars. So you've got a fleet of autonomous cars, you've got a group of people who have essentially collectively invested so that they quote unquote own one of those cars. They don't actually own a car, they just own a share. A time share. Yeah, it's like a timeshare for those vehicles that are on demand. So if I call for a car and my neighbor calls for a car, and we're both on this plan, two different cars come, because of the way we've agreed with this fleet, and it's the purchase price of the vehicle that ends up covering the cost of the individual trip, as opposed to doing a, you know, a fee per trip, like a typical rental car now. Yeah. So essentially it would be like, all right, well, collectively we all got together and we put in thirty five thousand dollars to quote unquote buy a car. What that really does is give us unlimited travel using the service within its range of service, you know, assuming that it isn't, you know, statewide or countrywide or whatever. Yeah. And I could see it working that way, maybe, but I can't see it working in such a way where you actually physically have one car to share between the multiple households. That would never work. Yeah, it just wouldn't work out. There's got to be a way around it. Like you said, there's gotta be a pool of vehicles to draw from. Yeah, it just wouldn't work. Yeah, but I wish you could have gone, because I wish you could have seen panels like these, and some of the other ones too. Like, I was only able to go to three panels total, but there were so many that were all about autonomous vehicles. It sounds fascinating. I really didn't know until some of the, you know, the reports that I've been reading just for this podcast that this show is so focused on that type of technology. I tend to think of CES as being more like that, yeah, rather than South by Southwest. Yeah, it's interesting, because South by Southwest Interactive for many years was focused on mobile apps, like, that was the big thing, mobile apps and some gaming, but usually you're talking about the next Twitter, you know. Meerkat and Periscope both came out in twenty fifteen. Meerkat went under in twenty sixteen. Periscope is still around because it's owned by Twitter.
Anyway, that's the kind of stuff you would expect. But they had different tracks of programming under Interactive, and one of the tracks was titled Intelligent Future, and that's where all the robotics and autonomous vehicles and AI, all those discussions, fell under that. And so a lot of it had to do with the future of cars, and again, not just autonomous cars, but the idea of what is it going to look like from multiple standpoints. I think autonomous played a huge role, because everyone just assumes that's going to be part of the future no matter how it turns out. Yeah. You know, one thing we should probably point out here is that we always talk about how it's happening. It's incrementally happening, is what's going on, and we're getting little bits and pieces of it now, and we see it, you know, in our everyday cars, but not the whole package yet. And the whole package, it seems like it's always ten years out, is what they say. But yeah, I'm seeing estimates now that range anywhere from three years to thirty years. Yes, and those are all, to be honest, those are all realistic. I mean, it could take thirty years, it could be faster than that. It could be, we could have this by twenty twenty, you never know. Yeah, I think three years is probably when we would start to see actual vehicles make their way onto the roads. Thirty years is where you get to the point where you're at saturation. Yeah, and, you know, I know on this podcast, and especially on Car Stuff, we've mentioned before, they're already out there, there are cars that can drive you home from work without you touching the wheel or doing anything. But they simply can't say it's an autonomous vehicle. You have to be sitting behind the wheel, and you can allow it to do it, but you have to be there, and you have to be ready to take control at any moment. And often you'll get a little beep asking you to make sure you make contact with the wheel to prove that you're still paying attention and everything. Exactly, you're not taking a nap on the way home. You don't want to, you know, those companies don't want to be liable for a terrible accident. So the three to thirty years we're talking about is where the companies are actually confident enough to say, this is an autonomous self-driving car, let it do it. Yeah, and I'm so glad you were able to join me on this episode and talk about this kind of stuff. I know that I come across as, I love to needle you with these, because you're the car guy and it's fun. It's all good fun. I should also mention that pretty much everyone agreed that personal car ownership is not ever gonna go away entirely in the United States. No one seemed to believe that that was the case. People said that it may be that fewer people own their own vehicles, but you'll still be allowed to own and operate your own vehicle. Yeah, I could see that happening, especially for things like rural areas. It doesn't make sense to have an autonomous car service out serving all the way out in rural areas. Like, you know, cattle ranchers aren't going to have any need for that. Now, this is a congested city situation. Yes, this is for urban, dense urban environments, and it's not ideal for other situations. But one guy did say that he could envision a future in which car ownership, like an actual car owner, will be about as rare as horse owners are today. Really?
Yeah, so there are plenty of people who still own horses, just not the general population. Yeah, there's a lot of wide open space out there, and I think, you know, that's where they'll still be used. Of course, yeah, maybe in cities. You know, I hate to say it, but there may be a point where you can't drive into the city in your own personal vehicle. Maybe you have a giant parking lot on the outskirts of the city, and you get out from there and then you take your city-approved transportation once you're inside. Wouldn't that be something. I mean, it'd be a dramatic change in the way that we do things now. It would significantly change, well, the entire cityscape. Really, everything would be different. So it's a fascinating topic. And again, thanks for inviting me in today to do this. I always have fun talking with you, and I know you like to rib me a little bit about car ownership and, you know, the way it's going, and, you know, I agree on a lot of this stuff. I mean, I think we can have a decent conversation back and forth about it. I understand that things are moving towards autonomous vehicles, but I also, and I'm glad that you said it too, think it's never gonna go completely away. Well, and to be fair, we're so in the baby stage of this, right, we're in the earliest stages of this autonomous era, that making any definitive statement, that autonomous cars will completely replace manual cars, or that car ownership will completely become a thing of the past, or even that manual cars will no longer be allowed within city limits, any of those, it's so premature to make any kind of statement like that. And honestly, it may turn out that the ideal mix is somewhere in the middle, with a mixture of autonomous cars and manually driven cars. We don't know. You know, the mathematical models suggest that if you went all autonomous you'd avoid a lot of problems, but that's not necessarily the way it will actually shake out in real life. Well, I'm with you. I try to avoid predictions, because it just ends up making you look like a fool later when it eventually happens. But that's pretty much status quo for me. Well, you kind of have to, though. You know, anyway, I really don't like to do that. I like to just kind of sit back and take it all in, because there are so many changes happening right now. It's actually pretty exciting. Yeah. Yeah, and you know, you've got to remember, if autonomous cars become a thing, like a real serious thing, like most people believe, there are implications well beyond the auto industry, things that could really be affected, like the airline industry. You know, if you're able to jump into an autonomously driven car and you can do work or you can go to sleep, and you don't need to get to whatever your destination is within a couple of hours, that could really impact a lot of airline travel. Well, yeah, okay, I know we've got to wrap up here, but you're making me think of, you know, the kind of pros and cons you weigh if you're making a short trip on a plane, you know, if you're flying from here to Orlando, sure, yeah, which from here is a little less than an hour and a half flight. Yeah, but then you have to take into account: you've gotta get up early, you've gotta pack the car, you've gotta get to the airport and park and all that stuff.
It ends up taking more than half the day. But you could just drive there too. And if you can do that in a way that's not taxing on you, right, it's actually comfortable and you're in your own car. It's a lot more comfortable than being crammed into an airplane. Sure, why would you not do that? You have the opportunity to stop at a specific place to have food rather than just buying whatever little snack box happens to be on the plane. Yeah, so you're right. It does change even, you know, that short-distance travel. Sure. Yeah. Now, for long distances, obviously, unless you're determined to do the great autonomous American road trip, I think the airlines will still be very much a strong player in that, but it will affect their bottom line, and that will affect how they route planes, how they design planes, how they price tickets. So there are some big, potentially disruptive things that could ripple out from the automotive industry into many other ones, so it's pretty interesting stuff. I hope you enjoyed that classic episode of tech Stuff. Like I said at the beginning, there have been a lot more accidents involving cars that are in autonomous or semi-autonomous modes. Obviously the most publicized ones involve Tesla, but it's not like Tesla has, you know, exclusive rights to accidents. It's just that they tend to make headlines across the world when it happens. And of course Tesla has the ongoing issue of calling its products things that make them sound like autonomous vehicle products, while also saying they are not autonomous vehicle products, so we get into that complication as well. If you have suggestions for topics I should cover on future episodes of tech Stuff, please reach out to me and let me know. You can download the iHeartRadio app and navigate over to tech Stuff by putting that into the little search engine. You will see that when you go to the tech Stuff page, there's a microphone icon. If you click on that, you can actually leave a voice message for me and let me know what you would like me to cover in the future. Or, if you prefer, you can head on over to Twitter and tweet me. The show's handle is tech Stuff HSW. Let me know what you would like me to cover in the future, and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
