Autonomous vehicles: Are we steering in the right direction?

Autonomous vehicles are a hot topic. Their incredible ability – and at times lack of it – is a source of controversy as much as a source of wonder, from avoidable crashes to drivers literally sleeping at the wheel. What's undeniable is that you can now theoretically sit in a car and let it take control as it guides you along the road. But is that actually a good idea? Is technology truly ready to take the wheel? In this episode, we'll be meeting some of the people and organisations aiming to educate us about the limitations of autonomous vehicles, and to build appropriate levels of trust in them.

Dr Claire Blackett - IET:
People hear 'autonomous' and it conjures up what we've been told in science fiction movies and so on: that this is really, really capable. And we're still really far from that, but we are being misled by how it is advertised to us. So, I think that the over-trust in the automation is far more dangerous.

Michael Bird - Host:
Autonomous vehicles have been in the headlines for a few years now. Sometimes for the right reasons, such as records being broken, new technologies and things like that. But they are more often in the headlines for the wrong reasons: crashes, cars getting confused by the moon, and videos emerging of people sleeping in the back seat with no one at the wheel. It seems like, as a society, we haven't quite got used to the idea of autonomous vehicles yet. We don't really know what they can currently do safely. More importantly, we, the public, generally don't really know what they can't do when something goes wrong. I get the feeling that we are living through an era people will look back on and laugh at, like quaint black and white films from the 1950s about the home of the year 2000. Now we laugh at how wrong they were with their robot maids and flying cars. But we might not be so immune from that treatment ourselves.

Michael Bird - Host:
In this episode, we are going to be taking a look at how close we are to fully autonomous vehicles, not just as a form of transport, but as an integral part of our society. We'll be looking at how we can integrate them onto our roads and how we can strike the right psychological balance of trusting them or not trusting them to make good decisions.

Michael Bird - Host:
You are listening to Technology Untangled, a show which looks at the rapid evolution of technology and unravels the way it's changing our world. I'm your host, Michael Bird.

Michael Bird - Host:
Self-driving cars have been around for a few years now, a trend kicked off with Tesla's release of Autopilot way back in 2014. And you'll note, I said self-driving rather than autonomous. And that's a really important distinction. Because whilst several current cars can park, steer and accelerate for themselves, that is a pretty long way off smart, fully autonomous decision making. In fact, there are many stages and levels of autonomous driving. Here to explain them is Matt Armstrong-Barnes, a long-time friend of the show and chief technologist at Hewlett Packard Enterprise, who specializes in issues around artificial intelligence.

Matt Armstrong-Barnes - HPE:
Level zero is where there is some form of assisted driving. So at this point, we're talking about cruise control, emergency braking, some quite primitive things that are very rules-based. As we start to move up the levels, we get to level one, which is adaptive cruise control. So I'm now going to speed up or slow down based on what's happening with the car in front of me. Level two is where we're starting to get into partial levels of automation. So we still have a human being in the driving seat, and they need to take over and take corrective action. Level three is where we have the next level of that, but it's much more conditional, because there are a number of failure scenarios that the autonomous vehicle can't handle.

Matt Armstrong-Barnes - HPE:
When we get into level four, we still have a human being who has the capability to take over, but now the vehicle is capable of handling pretty much all failure scenarios. So now we're starting to think about mechanical failure, so that there's complete depth and breadth in terms of how the car can operate. And then into level five, which is where we have completely unmanned vehicles driving around the streets, arriving at destinations, picking people up and then driving to the next destination on their own. Where we are today is probably level two, maybe 2.5. So we are not yet at the point where we have autonomous vehicles that can handle most scenarios without human interaction.
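
For reference, Matt's levels map onto the SAE J3016 scale. Here's a minimal sketch of that taxonomy as a Python structure, with the one-line descriptions paraphrased from his explanation; the constant names are our own shorthand, not an official API:

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    """SAE J3016-style levels, as Matt describes them in this episode."""
    L0_BASIC_AIDS = 0       # cruise control, basic emergency braking; rules-based
    L1_ADAPTIVE = 1         # adaptive cruise control reacting to the car in front
    L2_PARTIAL = 2          # partial automation; human must take corrective action
    L3_CONDITIONAL = 3      # conditional; many failure scenarios still unhandled
    L4_HIGH = 4             # handles pretty much all failures; human can take over
    L5_FULL = 5             # unmanned, door-to-door, no human involvement

# Where the episode places today's commercial systems:
current_state = 2.5  # "probably level two, maybe 2.5"
```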

Matt Armstrong-Barnes - HPE:
But then as we start to go up through the levels of autonomous vehicle, that's when we start to need to bring in artificial intelligence and artificial intelligence techniques. Specifically because artificial intelligence is a way of having a car that doesn't have a defined set of rules governing how it operates. AI uses more and more data to help it learn how to adapt to situations, as opposed to having a rigid set of rules. Emergency braking is a prime example. If I'm accelerating towards an object that is decelerating or stationary, that is a scenario where I could define a set of rules to ensure safety. When we start thinking about spotting a human being crossing a road so that I can take effective action, or understanding where road signs are changing, or spotting lanes, or making decisions about when to overtake, that's when we need to start bringing in artificial intelligence techniques, because they can adapt to those situations much more effectively.
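
The distinction Matt draws, fixed rules versus learned behaviour, is easy to see in code. Here's a minimal sketch of the rules-based side, his emergency-braking case; the time-to-collision threshold is an invented illustration, not a production value:

```python
def should_emergency_brake(distance_m: float, closing_speed_mps: float,
                           ttc_threshold_s: float = 2.0) -> bool:
    """Rule-based check: brake if time-to-collision drops below a threshold.

    This covers the 'accelerating towards a decelerating or stationary
    object' case with a fixed rule; no learning involved. Spotting a
    pedestrian in the first place is the part that needs a learned model.
    """
    if closing_speed_mps <= 0:          # we are not closing on the object
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

print(should_emergency_brake(30.0, 20.0))  # True: only 1.5 s to impact
```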

Michael Bird - Host:
So we are currently at level two or 2.5 out of five. And I'll be honest, that's surprising. Because, well, there's a lot of hype about the amazing abilities of certain cars with self-driving features. And it'd be hard to do an episode without naming them. In fact, we already have. So, it is Tesla. They are going to get a lot of mentions in this episode, and come in for some flak. But that's largely because they are the market leaders and don't have a huge amount of commercial competition. So, if the best commercially available self-driving cars are only able to do the very basics, why are we treating them like fully functioning, self-thinking machines? It's at least in part an issue of trust. Perhaps too much trust.

Dr Lionel Robert - Uni Michigan:
Okay. I'm Lionel Robert, a professor in the School of Information and a core faculty member in the Robotics Institute, both at the University of Michigan, Ann Arbor. I study human interactions with autonomous vehicles. I spend a lot of time focusing on trying to give humans an accurate estimate of the trust they should have in the autonomous vehicle.

Michael Bird - Host:
How did you get into this?

Dr Lionel Robert - Uni Michigan:
So I started off basically looking at humans and technology before autonomous vehicles ever came about. And I studied it in the context of cooperation: how can humans and robots cooperate to achieve a goal? And then when autonomous vehicles came out, at the very core of it, many of the questions were basically: how were humans and this autonomous vehicle going to cooperate to ensure really safe transportation? Because there was always discussion about handoffs. Even if the autonomous vehicle was going to be driving fully, what role was the human going to have? We started off believing that a big problem was trust. How can we get people to trust the vehicle? And so we were looking at things like communication between the human and the vehicle. I think now we're realizing the big problem is over-trusting.

Dr Lionel Robert - Uni Michigan:
How can we tamp down the degree of trust people have in autonomous vehicles, right? And so really, I would say the issue now is expectation and trust: trying to get an accurate understanding between the human and the autonomous vehicle of what each agent is capable of doing in a given circumstance. Do you remember a show called Knight Rider?

Michael Bird - Host:
Yep.

Dr Lionel Robert - Uni Michigan:
You know, him and K.I.T.T. were like a team. There were times when he would drive, there were times when K.I.T.T. would drive. There were times when K.I.T.T. would provide cover for him. And very much it's a team. It's a human and an autonomous vehicle engaging in collaborative activity to achieve a common goal. I think it's important to recognize that Tesla does not have an autonomous vehicle. So part of the problem with that relationship is that people believe it's an autonomous vehicle and they behave like it's an autonomous vehicle. And when the driving is stable, 90% of the time, that's fine. But when the driving becomes dynamic, people get killed. Right? This is the issue, right?

Michael Bird - Host:
That's not really what I was expecting to hear, although it does make total sense. There have been more than a few instances of supposedly autonomous cars doing things they shouldn't. And people, specifically drivers, haven't reacted to it in the right way. Here's Matt again.

Matt Armstrong-Barnes - HPE:
One of the challenges is whenever we start to think about man and machine operating together, which in most scenarios is more successful than man on his own or machine on its own. What we really need to think about is the ability for human beings to have the right level of attention when they're dealing with tasks that have a high degree of autonomy to them. And because of that, what we end up with is scenarios like the Uber incident in 2018, where unfortunately someone was killed in an accident. They did walk out in front of an autonomous vehicle. The lighting wasn't great. The person was pushing a bike at the time. So it became quite difficult for the autonomous vehicle to work out the scenario that it was encountering.

Matt Armstrong-Barnes - HPE:
As a result of that, it couldn't process the information as quickly as it needed to, and it didn't notify the human operator of the vehicle with enough time for them to take corrective action either. So the knock-on implications of that are quite significant. The human operator should have been alert to the situation, but because they believed that they were in a highly automated scenario, their attention span had reduced so significantly that, even with the very short amount of notice they were given, they were unable to take any corrective action to avoid someone being killed in an accident.

Michael Bird - Host:
It was a tragic case. In 2018, 49-year-old Elaine Herzberg was hit by a self-driving car as she wheeled a bicycle across the road in Arizona. The backup driver of the car had been watching TV at the time and was charged with negligent homicide as a result. We'll talk more about the implications of the crash later. But one of the issues it did highlight was that of human concentration, and the idea that, when left to our own devices, we will stop concentrating. Honestly, that happens all too often, even when we are driving manually. So what can we do to protect people when things go wrong? Well, one solution is to design the cars and technology to accept that we humans are flawed, we're lazy and we have no attention span. Dr. Claire Blackett is a senior research scientist at the Institute for Energy Technology in Oslo, Norway. She's also an expert in human-centered design with a PhD in accident investigation and organizational factors, which sounds particularly relevant.

Dr Claire Blackett - IET:
So human-centered design. It is what it says on the tin. It's putting the human at the center of your design process for whatever it is that you are working towards. Are they relying on getting some information, some cues back from the system in order to be able to carry on with their job? And if so, how can we make sure that they get the right information at the right time to make the best decisions, to minimize the chance of misuse or errors on their part? When the human, the end user, is kind of an afterthought after the technology has been designed and developed, then the human has to adapt to the technology, rather than the technology, from the beginning, being designed to maximize the human potential or human opportunities and minimize the human limitations.

Dr Claire Blackett - IET:
Unfortunately, it is my experience that with human-centered design, we're not quite where we would like to be. It is an age-old kind of joke within my discipline of human factors that we are always brought in at the end. And then the designers get really mad with us, because we tell them: your design is really bad and you need to fix it.

Michael Bird - Host:
What's your view on trust with regards to autonomous vehicles?

Dr Claire Blackett - IET:
It's a huge issue. There was a survey done around, I think, 2017 by Dacia, which is a Romanian car company. They surveyed, I think it was about 2,000 British adults, and asked them not about autonomous vehicles, but just about all of the gadgets that you have in your car. Do you know what they all do? Do you know what they all are? And the vast majority of people said, "No, it's too complicated. I don't like it. There's too many things. I don't know what a lot of the symbols mean. I don't know what they do. And so I don't use them." Older generations that have not grown up with digital technology may be more reluctant to trust the autonomous technology. But I think that the far bigger issue is the people who almost have this sort of worshipful...

Michael Bird - Host:
Yeah. I agree.

Dr Claire Blackett - IET:
... approach to it. You know, there's no questioning attitude at all. There's just this huge assumption that it works, that it does what it's supposed to do: "And I'm not really sure what that is, but I'm going to decide what I think it's supposed to do, and it will do that." I found out that Tesla are actually banned from advertising their Autopilot system in Germany, because the German courts decided that it gives people the wrong perception of what the capabilities of the technology are. Even the name Autopilot makes you think that I can just press the button and sit back and fall asleep, or read my book, or climb into the back seat, like some people have done, and make YouTube videos of it. And we're really quite far away from that end of the spectrum, which is why the over-trust in the automation is far more dangerous.

Michael Bird - Host:
So, under-trust and to an extent over-trust are clearly major issues. But how can clever human-centered design be used to either reassure drivers that everything is fine or, alternatively, and probably more importantly, remind them that it's not, and that they probably need to put their phone down and look at the road?

Dr Claire Blackett - IET:
This isn't even a problem unique to autonomous cars. I mean, I don't know how many... I used to live in Bristol and I worked in Gloucester, and I remember many, many times I would arrive in Gloucester and I would have no memory of the car trip at all, even though I was driving.

Michael Bird - Host:
Scary, isn't it?

Dr Claire Blackett - IET:
Yeah. When you're driving on really boring roads, we tend to switch off. But I think that this is getting to a really important question at the heart of human-centered design. Which is that it is human nature for people's minds to wander. If we are designing cars where we're trying to go against people's nature, it is my opinion that we're not going to be successful, because people will always revert to doing the easiest thing to do. So I actually think that we therefore need to design these technologies with that in mind, accepting that we know that people are going to get bored.

Dr Claire Blackett - IET:
We know people are going to pull out their phones. So let's not pretend that they're not going to do that. Instead, let's actually design the cars with that in mind. Therefore we need to have better warning systems, or better ways of keeping people aware of their surroundings and of what's going on. So my colleagues and I have been discussing the idea of, for example, could there be some other visual cues that they could see in their peripheral vision? Or could you do something with tactile or haptic feedback to indicate whether the car is going fast or slow, how much traffic is around, what the weather conditions are, and so on?

Dr Claire Blackett - IET:
So that the person, even if they're not actively engaged in the driving task, is still aware of the surroundings and has some understanding of the situation. So that if the car does beep at them to take over, they have some understanding of why, instead of just being pulled rudely out of their daydream or out of their Facebook scrolling, and then being asked to take over the system without any awareness.
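
As a thought experiment, the ambient cue Claire describes could be as simple as blending driving context into one haptic intensity. The signals and weights below are entirely hypothetical, just to make the idea concrete:

```python
def haptic_intensity(speed_kph: float, traffic_density: float,
                     weather_severity: float) -> float:
    """Blend driving context into one 0..1 seat-vibration level.

    The idea: keep a disengaged passenger peripherally aware of how
    'busy' the situation is, so a takeover request isn't a cold start.
    Inputs traffic_density and weather_severity are normalized 0..1;
    all weights here are arbitrary placeholders.
    """
    speed_term = min(speed_kph / 130.0, 1.0)
    blended = 0.5 * speed_term + 0.3 * traffic_density + 0.2 * weather_severity
    return max(0.0, min(blended, 1.0))

print(haptic_intensity(100.0, 0.7, 0.2))  # busy motorway, light rain
```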

Michael Bird - Host:
So some sort of innate feeling, whether that's vibrations or motors or something in the seat, pushing them in certain ways. To say: okay, even though I'm not looking, I know there's a car there because I can feel it in my seat, that sort of thing.

Dr Claire Blackett - IET:
Yeah.

Michael Bird - Host:
It's a fascinating idea, and one which, it could be argued, we need to keep in mind in our organizations and in our lives. After all, how often do we just assume a piece of software or a system is working simply because it hasn't beeped at us and we haven't checked it in a while? How many of our systems, software or otherwise, chime in to let us know that they're working, so we are more aware when they're not? And speaking of systems and software, there is a second solution. As well as designing better autonomous vehicles which keep us paying attention, we can design software technologies which make human driving safer, rather than promising to take it all off our hands. Essentially, using computers to reach that ideal state of human-machine interaction.

Erik Coelingh - Zenseact:
My name is Erik Coelingh and I lead product development at Zenseact. We are a company that develops software for active safety and self-driving car functionality, targeting consumer vehicles. We believe that this type of tech will bring us to a road transportation system without any collisions. That's really our North Star.

Michael Bird - Host:
So who are Zenseact? What's your story?

Erik Coelingh - Zenseact:
Yeah. We are a company of around 600 engineers. We are located in Sweden and in Shanghai. We are, to a large extent, owned by Volvo Cars, and many of us have our roots with Volvo Cars as well. We develop safety software, targeting active safety systems and self-driving car functionality.

Michael Bird - Host:
How does having more compute, more technology, in a car make it safer?

Erik Coelingh - Zenseact:
We usually start by looking at why cars are not super safe today. Many accidents have human error as a contributing factor. We see that it is because people are intoxicated or distracted. Sometimes it may be a situation in the infrastructure that contributes. I mean, a lot of different factors. But we also know that if you are a good, sober, attentive driver, then your accident risk is much lower. So by helping drivers to be these good, attentive drivers, we think we can reduce a lot of different accidents. And then if we extrapolate this: if we could automate traffic systems, with cars that are fully autonomous, that are always paying attention 360 degrees around the car, that plan ahead and make cautious decisions, then I think we can drastically reduce the number of accidents. And that is the journey that we're trying to understand: how to complement the driver, and how to, in some cases, replace the driver, to get to zero collisions.

Erik Coelingh - Zenseact:
And we do that through warning the driver sometimes, automatic emergency braking, automatic emergency steering when the driver gets into super critical situations. But we try to assist the driver much earlier too, making sure that you are well prepared, that you get a nudge to maybe change lane because the other lane has some debris further down the road. I mean, there's a lot of things that you could help the driver with to get the combination of the car and the driver closer to what an ideal attentive driver would do. I remember one system we launched more than 10 years ago: automatic emergency braking at low speed. After a couple of years, we saw that 20% of all collisions disappeared from the insurance databases. It actually prevented accidents in real life. So this type of technology works; that is what we know.

Michael Bird - Host:
It's not just the drivers who are over-trusting what autonomous vehicles can do. As humans, we interact with drivers and cars every time we cross the street or leave our house. And that does create a problem, because experience tells us that drivers will react and act in a certain way. Here in the UK, for example, we have crossings where cars are expected to stop every time a pedestrian is present. And 90% of the time they do. But again, when it comes to autonomous vehicles, there's an issue with trust and communication, as Lionel Robert explains.

Dr Lionel Robert - Uni Michigan:
Turns out that driving is a social activity, a tremendously social activity, right? So how do you teach a vehicle to be social? I'll give you a perfect example. We've all crossed the street in front of a vehicle, and we've almost certainly all been in a vehicle where someone has crossed the street in front of us, right? So we have this implicit understanding of what the driver is probably looking for to see if they should stop or go, and what the pedestrian is looking for to see if they should stop or go. But the vehicle has never been a pedestrian; it has no idea. An example I give: when I cross the street, even now, when I have the right of way, I pause and try to make eye contact with the driver, to see if the driver sees me, right?

Dr Lionel Robert - Uni Michigan:
And the driver will make eye contact with me to let me know: hey man, I see you. Right? It happens instantly. No one gets taught to do this. We're not trained. There's no class. There's not even a legal requirement to make eye contact, but we do it. Right? That's social communication. Well, with an autonomous vehicle, there is no eye contact, right? So even if the vehicle sees the pedestrian, how can it communicate to the pedestrian: hey, I see you, it's safe? These are some of the challenges that we're trying to work through.

Michael Bird - Host:
And it's not just pedestrians. Autonomous vehicles will be operating in a hybrid world with a lot of non-autonomous vehicles. And that poses a risk, because it pits humans against a machine that frankly doesn't behave like a human and doesn't use any intuition. If the driver of an autonomous vehicle is watching Shrek, they are essentially letting the machine take the wheel in a world where it's intuition that dictates behaviors, as Claire Blackett explains.

Dr Claire Blackett - IET:
I live in Norway. There are a lot of self-driving vehicles on the roads here, especially Teslas. I think for a while, Norway was the number one importer of Teslas in the world. The issue that I have with it is this: I know the risks when I buy it. I implicitly give my permission that whatever's going to happen is going to happen. And so if the technology doesn't function so great, then that's the risk that I take. But the person beside me in their Renault Clio or whatever has not signed up to make that same sacrifice. And so if the technology in my car goes wrong because it's not quite good enough, it can have a devastating effect on the person in the car next to me who never signed up to be part of this sort of mass experiment of self-driving technology.

Dr Claire Blackett - IET:
And I do truly believe that at the moment it is still experimental. Last summer, there was a spate of accidents in the US where autonomous vehicles kept crashing into the back of ambulances and fire engines that were stopped on the highway to deal with somebody else, because the car just wasn't expecting to see a stationary object, and the software tells it to ignore stationary objects. And so it just crashed into the back of these vehicles. There was a whole bunch of accidents last year like that. When you see this, you see this technology really isn't good enough yet; the software is not good enough yet to be on the roads. So I think that we need to have more openness about it, because we're all participating in the test. And I think that the only way to deal with this is to be more open. And also, as I said, understanding better what people will do in these different situations. It's not just what will the technology do, but what will people do?

Michael Bird - Host:
So how do we fix, or at least improve, the inherent limitations that make our AVs hard to trust? Well, one way is to start with the easy stuff and build up. And the easy stuff is on segregated roads, where everyone is doing roughly the same speed, as Erik explains.

Erik Coelingh - Zenseact:
Motorway driving is a very good place to start, because the infrastructure is so well defined, with lanes and barriers and a lot of other different things. The thing which is maybe most challenging there is that it's very high speed. So if something goes wrong, it can get pretty severe quite quickly. So we foresee, for the consumer vehicles that we are targeting, that you can do unsupervised autonomous driving starting in traffic jams on the motorway. So we start in the well-defined infrastructure I described, but we start at slower speed. We start in traffic jams, because from a technical feasibility perspective it's very attractive, but also, from a consumer perspective, it's not fun to drive in a traffic jam. People tend to pick up their phone anyway. So let's make that autonomous. And we will start to do that together with Volvo Cars in California, because the weather conditions there are very stable, and there's a lot of confidence in this kind of tech there.

Erik Coelingh - Zenseact:
So then we have a bridgehead in that area, and then we want to start growing it. We want to go from California to other geographical locations. We want to go from the traffic jam to being able to do that outside of the traffic jam. And we want to increase the speed. This is something that we call the operational design domain. So that will start relatively small, and then, through probing the data from the consumer fleet, we will learn, we will adapt, and we will increase the operational design domain: to go from traffic jam driving to maybe on-ramp to off-ramp automation. And of course, that in itself, the takeover procedures, I mean, there are tons of challenges in that.
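
Erik's operational design domain is, in effect, a gate that only permits unsupervised driving inside a narrow, well-understood envelope. Here's a hedged sketch of that idea, with conditions assumed from his description rather than Zenseact's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    road_type: str        # e.g. "motorway", "urban"
    speed_kph: float
    in_traffic_jam: bool
    region: str           # e.g. "california"

def odd_permits_unsupervised(ctx: DrivingContext) -> bool:
    """Gate unsupervised autonomy to a narrow, well-understood domain.

    Mirrors the rollout Erik sketches: motorway, low speed, traffic
    jam, stable-weather region first; the domain then grows as fleet
    data builds confidence. All thresholds here are illustrative.
    """
    return (ctx.road_type == "motorway"
            and ctx.in_traffic_jam
            and ctx.speed_kph <= 60.0
            and ctx.region == "california")

print(odd_permits_unsupervised(
    DrivingContext("motorway", 25.0, True, "california")))  # True
```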

Michael Bird - Host:
Yeah, you can just imagine somebody going onto the motorway, falling asleep, and then coming off the motorway and still being asleep. And all these cars just parked up by the junction to wake their drivers up so they can carry on driving.

Erik Coelingh - Zenseact:
That will happen, right? I mean, all these kinds of things will happen.

Michael Bird - Host:
Matt Armstrong-Barnes agrees that the technology isn't quite there yet for more complicated driving scenarios. But there are solutions, and improvements in AI will play a huge part in making higher levels of autonomy a safe reality.

Matt Armstrong-Barnes - HPE:
Some of the great things that are happening in the AI space is that we are learning more and more about the unknown consequences of AIs failing. It is allowing us to build guardrails. Those guardrails in an autonomous driving situation could very much be about alerting drivers to what's happening: scenarios where they need to take corrective action. And that's improving because of the significant amount of work that's been evolving in the space. We are getting to the point of alerting human beings to say: the AI has arrived at a scenario that it either hasn't encountered before, or where it's unclear what action it needs to take, and as a result it needs a human being to take over, as human beings have common sense. One of the areas of complexity, though, is that we're now seeing ensemble learning, which is where I can have multiple algorithms operating, where the overall output of those algorithms is significantly better than running one on its own.

Matt Armstrong-Barnes - HPE:
That creates a different set of problems, which is, A, the time that it takes for those algorithms to arrive at decisions between them, and secondly, it becomes much more complex to open up a black box. So when we start thinking about when we've hit an edge case, so something detrimental has happened, what we want to understand is why that's happened. And if we take the incident in 2018, where very unfortunately someone was killed by an autonomous vehicle, it took a very long period of time to understand how and why the algorithms that were running the autonomous vehicle made the decisions that they made. And that's one of the things that's hampering the evolution of autonomous vehicles: generating the data. If we start to think about... and this came out of the very sad Uber incident in 2018: do we have enough data on cars crashing into things?

Matt Armstrong-Barnes - HPE:
Well, no, we don't. But we actually don't want to gather that from real-life situations. We can simulate it, but that means we are limited by our imaginations as to what can happen in these situations, so that we can create simulations that we can then use to train autonomous vehicles. But again, there's this socially held view as to how we should react in situations; you and I would probably call it common sense. That is something that's quite difficult to, A, train an algorithmic driving model around, and B, actually very difficult to test and regulate, because of the diverse number of scenarios that exist when you are out on the open road.

Michael Bird - Host:
So we need more data for autonomous vehicles to make sure they make better decisions, so that we can trust them more and ensure that our trust in them isn't misplaced. But we can't get that data, because, quite rightly, nobody wants to see more autonomous cars crashing, because then we won't trust them. That's a bit of a conundrum, but it is one that academics and experts are looking into, because beyond simulators, the best way for autonomous vehicles to learn is from each other: by sharing their own experience and communicating with the infrastructure around them to improve everyone's driving. So I asked Erik whether he thinks car-to-car communication would be a viable way of improving the safety of autonomous vehicles.

Erik Coelingh - Zenseact:
Yes. But it depends very much on what you mean by car-to-car communication. I mean, you can communicate in different ways, right? In some sense, a brake light is a communication channel as well.

Michael Bird - Host:
That's right. Yeah.

Erik Coelingh - Zenseact:
But I think the key thing will be cellular connections. I'm not a strong believer in dedicated car-to-car communication, where cars communicate through a certain protocol directly without first communicating to a cell tower. The reason being that it's very difficult for different car brands to agree on what that standard should be. You really have to go through committees and standardization, and that's usually a super slow process. But probing data through cellular, doing something really good with the aggregated data in some back end, and then pushing it back to the fleet, that's a very powerful communication channel. And today, we use that for probing data to make maps richer.

Erik Coelingh - Zenseact:
I mean, you can store information in the map that maybe not even the map maker themselves can provide. A simple example would be road friction: it can be slippery out here, and cars can measure that it's slippery at a geolocation. If a lot of cars probe that information and you store it in a map, you can see which roads are slippery and which are not, or when the slipperiness is over. Cars are sharing information with each other through a cloud. But then, maybe even further in the future, there's technology on the horizon that I think is maybe even more interesting, and that is something called edge learning. So today, when we train our AIs, we train it all in our data factories. We probe data from the fleet, we collect a lot of data, we store the data, and then we train the network. When the network is good, we push it out to the car and you have a better capability. But it requires that you take a lot of data out of the vehicles and train separately.

Erik Coelingh - Zenseact:
In the future, I think, we will be able to train on the vehicle itself. So the car will detect something, it will learn something, it will train its network. And then, instead of taking out the data set, we take out the trained networks from the different vehicles, and we aggregate, we federate, that learned information, and then send out a new network to the cars. So in some sense, cars will be able to learn from each other through that process. And I think that's interesting, because it will reduce the data communication needs, and the privacy challenges will be largely avoided through that. And I also think that from a performance perspective it should be super interesting, because you can get a much higher bandwidth between sensing and actual training, because you do it on the edge instead of doing it in the cloud. So there's still tons to do here.
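
What Erik describes is essentially federated learning: cars train locally, and only the learned weights travel to the cloud for aggregation. Here's a minimal sketch of that aggregation step, using plain federated averaging as an assumed stand-in rather than Zenseact's real pipeline:

```python
import numpy as np

def federated_average(car_weights: list, samples_per_car: list) -> np.ndarray:
    """Aggregate locally trained networks into one fleet model.

    Each car uploads its trained weight vector, never the raw sensor
    data, which is the privacy and bandwidth win Erik describes. Cars
    that learned from more samples get proportionally more say.
    """
    total = sum(samples_per_car)
    stacked = np.stack(car_weights)                 # (n_cars, n_params)
    coeffs = np.array(samples_per_car) / total      # per-car weighting
    return (coeffs[:, None] * stacked).sum(axis=0)  # weighted mean model

# Fleet of three cars, each with a locally trained 4-parameter model:
fleet_model = federated_average(
    [np.random.rand(4) for _ in range(3)], samples_per_car=[100, 250, 50])
print(fleet_model)  # the new network to push back out to the cars
```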

Michael Bird - Host:
So you heard Erik mention privacy there, and that's an interesting point when it comes to trust, and one we'll be coming back to in a second. In the meantime, Matt agrees with Erik's assessment about the importance of communications across a network, though he sees it going further, with cars actually communicating with the road infrastructure itself to avoid collisions and keep traffic flowing.

Matt Armstrong-Barnes - HPE:
Let's say that you and I are both heading towards an intersection at a set of traffic lights. If your car were to speed up by one mile an hour, and my car were to slow down by one mile an hour, both imperceptible to us as human beings, we could pass through the intersection without stopping. So you need some command and control at the intersection, at the traffic lights. You can then provide the capability for all approaching vehicles to share some telemetric information, which doesn't mean each individual car needs to talk to every other car; you offload that capability to a grid section of where you're driving. If we start to think of this as smaller grids within a larger grid, then we can group them together and provide that command and control capability over lots of grids, which is then informing autonomous vehicles as to what's happening across all of the other vehicles that are in that grid space as well.
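
Matt's one-mile-an-hour example can be made concrete: the intersection controller only needs the two cars' arrival times to differ by a safe gap. Here's a toy sketch, with the gap and nudge sizes invented for illustration:

```python
def deconflict(dist_a_m: float, speed_a_mps: float,
               dist_b_m: float, speed_b_mps: float,
               gap_s: float = 2.0, nudge_mps: float = 0.45) -> tuple:
    """Nudge two approaching cars' speeds until their arrival times differ.

    0.45 m/s is roughly the 'one mile an hour' in Matt's example. The
    earlier-arriving car speeds up, the other slows down, until the cars
    are at least gap_s apart at the junction. A real controller would
    also bound the speeds; this toy version doesn't.
    """
    va, vb = speed_a_mps, speed_b_mps
    while abs(dist_a_m / va - dist_b_m / vb) < gap_s:
        if dist_a_m / va <= dist_b_m / vb:
            va, vb = va + nudge_mps, vb - nudge_mps
        else:
            va, vb = va - nudge_mps, vb + nudge_mps
    return va, vb

# Two cars ~200 m out at 13.4 m/s (about 30 mph) would otherwise arrive
# almost simultaneously; tiny nudges separate them without either stopping:
print(deconflict(200, 13.4, 205, 13.4))  # roughly (14.3, 12.5) after two nudges
```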

Michael Bird - Host:
Fascinating stuff. So let's go back to Erik and the privacy concerns of autonomous vehicles, because they're really, really important. One of the features of autonomous vehicles is that it's widely considered inevitable that they will be offered as a service. So you won't generally own a car. You'll ask for one, it'll turn up, you'll pay a fee, and that'll be it. Because why would a car that can drive itself not be driving itself while you're at work or doing something else? And your car will be collecting a ton of data about you, just like your phone does. The service provider will know where you go on a regular basis. It'll know where you go to work, where you like to stop, where you like to shop, and, by communicating with your phone, what you are likely to buy. It will be another step in helping to complete a whole picture of you as a user and consumer to be sold to. And that can have worrying, if not fascinating and potentially useful, implications.

Dr Lionel Robert - Uni Michigan:
I have this conversation with my students. People believe the benefit of autonomous vehicles is that you can check your email and drink coffee while you ride to work. Well, you can do that on the subway or the bus, right? It's not a big thing. The reason these car companies are investing in this technology is because, basically, imagine you have an autonomous vehicle, you're riding alone, and it knows that you love McDonald's. So you turn the corner, and it sends you an email saying, "Hey, we're around the corner from McDonald's. They have a special; we can stop by and pick you up a hamburger." That is the value of autonomous vehicles, right? It will be the platform by which you interact with the rest of the world. And unlike an iPhone, the one advantage that the autonomous vehicle has is that it has you at a time and place, right?

Dr Lionel Robert - Uni Michigan:
You are in the vehicle. That information can be leveraged in a way that can really make a lot of money. That is the benefit, the main reason, that's driving the development of autonomous vehicles. Or let's suppose your drugstore doesn't have a store. It has five autonomous vans and they circle the city. And when you order something, a van drives up to your door, an autonomous robot rolls out with a box, places all those items at your door, and leaves. You don't have to hire any employees. You don't have to rent any stores, right? And you can serve customers a lot quicker and a lot faster. So what's really driving autonomous vehicles is these transformational opportunities. That's what's going to occur in the next 10, 20 years. That's the major change that's going to come to our society.

Michael Bird - Host:
Well, like it or lump it, that's probably the reality we'll face in the next couple of decades. It's widely accepted that it'll take roughly 20 years to get fully autonomous cars. Now, that does seem like a long way away, but it is tantalizingly close. There's a lot of work to do to make it a reality though, including several issues we just haven't had time to discuss in this episode. Who is liable for accidents in a fully autonomous car? Who pays for the car insurance? What happens if you still want to drive yourself? How do you train autonomous vehicles to respect local customs and regional driving laws, like using the horn to signal intent, flashing your lights, and knowing when pedestrians have priority? There's a whole minefield of practical issues, many of which don't have solutions yet. But to finish this off, and to answer the question of when we'll trust autonomous vehicles, here's Matt.

Matt Armstrong-Barnes - HPE:
There was a study in the States that looked at the level of safety that people would need before they'd be willing to trust or have confidence in an autonomous vehicle. And I think 15% of respondents wanted a 100% safety record. I think the next level was a 50% safety record, which was about 37% of respondents. So as a result, I think there's a lack of trust. My personal metric is: if an autonomous vehicle can navigate the Magic Roundabout in Swindon, then I think I would be happy to use one for all modes of commuting.

Michael Bird - Host:
And if you've not yet heard of the Magic Roundabout in Swindon, do look it up. It will figuratively make your head explode.

Michael Bird - Host:
You've been listening to Technology Untangled. I'm your host, Michael Bird, and a huge thanks to Matt Armstrong-Barnes, Dr. Lionel Robert, Dr. Claire Blackett and Erik Coelingh for speaking to us. You can find more information on today's episode in the show notes. This is the sixth episode in the third series of Technology Untangled, so be sure to hit subscribe on your podcasting app of choice so you don't miss out, and to catch up on the last two series. We are going to be taking a short mid-series break because I've just become a dad, but we will be back. Today's episode was written and produced by Sam Data and me, Michael Bird. Sound design and editing was by Alex Bennett, with production support from Harry Morton and Sophie Cutler. Technology Untangled is a Lower Street production for Hewlett Packard Enterprise.
