Self-driving cars are becoming a reality. But before they can be properly integrated into the urban environment, some myths about them need to be dispelled.
In March 2018, a self-driving car operated by Uber, the private transport service that connects passengers and drivers through an app, caused a fatal accident in Arizona. Many analyses followed, often conflicting. “The accident was inevitable and Uber is not to blame”, commented the Tempe Police after the early investigations. “The accident was avoidable and it would not have occurred had there been a person driving”, reacted various experts. “Certainly something, from a technological point of view, failed”, others hypothesized.
Such clear-cut and conflicting opinions deepen the division between two factions. On the one hand, there are those who would like to block the development of autonomous cars. On the other, there are those who would like to see them on the streets immediately.
What is certain is that such a technology needs to be supported in the right ways and at the right times. And to take stock of its development, it is useful to dispel four myths about self-driving cars.
Myth #1: self-driving cars drive themselves
The Tempe accident caused the death of 49-year-old Elaine Herzberg, who was crossing the street at night with her bicycle. Clearly, something in the safety systems of Uber’s Volvo XC90 did not work. The video of the Tempe accident shows distinctly that the car neither brakes nor swerves, and hits the woman at a speed of 70 km/h. The police confirmed that they found no signs of braking, neither on the asphalt nor in the car’s “black box”. The fact that it was night and that the woman was not on a pedestrian crossing could be a mitigating factor for a human driver, but not for the radar and lidar systems that autonomous cars are equipped with.
Uber’s autonomous cars
An investigation by the New York Times shows that Uber’s self-driving vehicle project had been troubled for some time. Despite this, in 2017 Uber reduced the number of operators used in vehicle tests from two to one. Until then, the two operators had supervised the driving and monitored the on-board computer system together.
The many difficulties of Uber’s self-driving project emerge from the documents and testimonies gathered by the New York Times. Uber was far from reaching its target of 13,000 miles travelled without human intervention. On the contrary, in recent tests in California human intervention was necessary every 1,200 miles travelled: a rate roughly 10 times greater than predicted.
And the results achieved by the self-driving division of General Motors are no better. Its cars travelled 131,676 miles, requiring the intervention of a driver 105 times: once every 1,254 miles travelled.
Waymo’s autonomous cars
Waymo, Google’s competing project, performs much better, but it is still far from allowing autonomous cars to circulate without a driver. In 2017, Google’s cars travelled 352,545 miles in California, and automatic driving was disengaged 63 times, returning control of the vehicle to the driver: once every 5,596 miles travelled.
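These miles-per-intervention figures follow directly from the raw numbers the companies reported. As a quick check, the short sketch below (our own illustration, using only the figures quoted in this section) reproduces them:

```python
# Reproducing the disengagement rates quoted above from the raw figures.

def miles_per_intervention(miles_travelled: float, interventions: int) -> float:
    """Average number of miles driven between two human interventions."""
    return miles_travelled / interventions

# General Motors, California 2017: 131,676 miles, 105 interventions.
gm_rate = miles_per_intervention(131_676, 105)    # ~1,254 miles

# Waymo, California 2017: 352,545 miles, 63 disengagements.
waymo_rate = miles_per_intervention(352_545, 63)  # ~5,596 miles

# Uber's reported gap: an intervention roughly every 1,200 miles against a
# target of 13,000 miles without intervention, i.e. about 10 times more
# often than planned.
uber_gap = 13_000 / 1_200                         # ~10.8

print(f"GM:    one intervention every {gm_rate:,.0f} miles")
print(f"Waymo: one disengagement every {waymo_rate:,.0f} miles")
print(f"Uber:  interventions about {uber_gap:.0f} times more frequent than targeted")
```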
A matter of conditions?
In all these cases, companies have not communicated whether these miles were travelled in challenging environments for autonomous cars (such as bridges, tunnels, busy cities) or in adverse conditions (at night or when raining).
So how should we interpret these data? Are the numbers worrying or encouraging? It depends on the point of view. The figures are certainly reassuring with regard to the development of the technology: they show enormous progress compared to previous years. One must be more prudent, however, if the goal is to get driverless cars on the streets in the short term. Human intervention, however occasional, will remain necessary for a long time to come.
Myth #2: safety is an issue only concerning cars
The New York Times highlights another aspect that inevitably delays the moment we will see unmanned cars circulating: the sensors on Uber’s cars have huge problems identifying road signs and dangers near large buildings or heavy vehicles. Which is like saying: we can circulate autonomous cars on a highway, but in cities it is a whole different story!
This is certainly not a problem of Uber alone. All the systems based on deep neural networks, which allow autonomous vehicles to recognize road signs, distinguish pedestrians from cyclists or other cars, and so on, struggle with such tasks.
Deep neural networks are the models on which deep learning is based, and those who work with artificial intelligence are well aware of the limitations these models still present. In 2017, researchers from the universities of Berkeley, Michigan (Ann Arbor), Washington and Stony Brook published a study called Robust Physical-World Attacks on Deep Learning Models. Their study focuses precisely on the difficulties artificial intelligence has in recognizing road signs in “altered” situations compared to standard ones: for example, if the signs are smeared with paint or covered by stickers, or if another object of similar size is placed under the road sign.
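To give a concrete sense of the fragility these researchers describe, the sketch below shows the simplest, purely digital form of such an attack, the fast gradient sign method. It is not the physical sticker attack from the cited study, and the tiny, randomly initialised network is only a stand-in for a real traffic-sign classifier; the point is merely to illustrate how a small, targeted change to an input can shift a deep model’s decision.

```python
# Minimal illustration of an adversarial perturbation (fast gradient sign
# method, FGSM) against a toy image classifier. The network and the input
# are placeholders, not a real traffic-sign recognition system.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "classifier": 3 output classes, 32x32 RGB input.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 3),
)
model.eval()

image = torch.rand(1, 3, 32, 32)       # stand-in for a photo of a road sign
image.requires_grad_(True)

logits = model(image)
predicted = logits.argmax(dim=1)       # the class the model currently picks

# Nudge the image in the direction that increases the loss for that class.
loss = nn.functional.cross_entropy(logits, predicted)
loss.backward()

epsilon = 0.1                          # perturbation budget: barely visible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_prediction = model(adversarial).argmax(dim=1)

# For a fragile model, even this small change can be enough to alter the output.
print("original prediction: ", predicted.item())
print("perturbed prediction:", new_prediction.item())
```

The physical attacks in the study work on the same principle, except that the perturbation must survive being printed, stuck onto a real sign and photographed from different angles and distances, which is precisely why smeared paint and stickers can be so effective against these models.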
These are natural technological problems for a sector in continuous development, one which has in any case made exceptional progress in recent years. The core of the problem, rather, is to succeed in combining two different needs. First, the need for research on and experimentation with autonomous cars, a process that errors and partial setbacks will inevitably hinder. Second, the very important requirement of road safety. The two need to find a balance, to minimize the possibility of accidents such as the one in Tempe repeating themselves.
A matter of infrastructure
A first step in this direction should be not to leave car manufacturers on their own: for example, by investing in infrastructure that would allow autonomous cars to circulate in a less chaotic, and therefore safer, environment. There is already a clear need to adapt infrastructure to allow the safe integration of unmanned drones into civil airspace. The same considerations apply to road transport as well.
Self-driving cars have more difficulty on city streets because these are chaotic and disorganized environments. It is no coincidence that automated systems have been successfully introduced in railways, undergrounds, on internal roads in enclosed areas, or in the airspace. All of these are, in fact, “closed” systems: isolated, ordered, and therefore easier for an automated system to interpret.
This is also why putting completely autonomous vehicles on today’s streets is a risky idea. First, it is necessary to rethink the entire circulation system, including roads, sidewalks, protective barriers, dedicated and preferential lanes, and the related communication framework, as well as road signs.
In the future, an autonomous car may well be safer than a human-driven one. But artificial intelligence must be assisted with infrastructural changes in its growth, development and learning process. We are not talking about futuristic scenarios: studies already exist in this direction, such as the Blueprint for Autonomous Urbanism of the US National Association of City Transportation Officials, which suggests guidelines for the future development of cities in parallel with the introduction of automated vehicles. The objective is to create a transport system centred on people and not on vehicles, capable of fully exploiting the beneficial potential of automation.
Myth #3: morals come before technology
Rethinking the environment of autonomous cars means developing a more centralized infrastructure system, one where each element communicates with the elements around it. In such a system, an autonomous car will know in advance if it is approaching a traffic light, a pedestrian crossing or an accident. It will have to be able to choose the best and safest route on the basis of the traffic, the weather and the road conditions, all of which will be communicated by the infrastructure. Developing this kind of infrastructure also means creating a series of safety barriers. These will make it superfluous to equip the unmanned vehicle with morality, minimizing the possibility of danger both for the car and for pedestrians. A defence-in-depth system will, in fact, be able to make decisions that prevent a possible emergency.
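To make this idea a little less abstract, here is a deliberately simple, hypothetical sketch of what such infrastructure-to-vehicle messages and a precautionary decision rule might look like. The message fields, types and thresholds are invented for illustration and do not follow any existing standard.

```python
# Hypothetical sketch of infrastructure-to-vehicle messages and a simple
# precautionary rule. All names, fields and thresholds are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class InfrastructureMessage:
    source: str          # e.g. "traffic_light_42", "crossing_7"
    kind: str            # "traffic_light" | "pedestrian_crossing" | "accident"
    distance_m: float    # distance ahead on the planned route, in metres
    status: str          # e.g. "red", "occupied", "lane_closed"

def advisory_speed(messages: List[InfrastructureMessage], cruise_kmh: float) -> float:
    """Choose a safe speed given what the infrastructure has announced ahead.

    The rule is deliberately conservative: the car slows down before a
    hazard can turn into an emergency, instead of reacting at the last moment.
    """
    speed = cruise_kmh
    for msg in messages:
        if msg.kind == "accident" and msg.distance_m < 500:
            speed = min(speed, 30.0)
        elif msg.kind == "pedestrian_crossing" and msg.status == "occupied" and msg.distance_m < 150:
            speed = min(speed, 30.0)
        elif msg.kind == "traffic_light" and msg.status == "red" and msg.distance_m < 200:
            speed = min(speed, 20.0)
    return speed

incoming = [
    InfrastructureMessage("crossing_7", "pedestrian_crossing", 120.0, "occupied"),
    InfrastructureMessage("traffic_light_42", "traffic_light", 400.0, "red"),
]
print(advisory_speed(incoming, cruise_kmh=50.0))   # -> 30.0
```

The point is the one made above: because the hazard is announced well in advance, the vehicle adjusts long before a dilemma can arise, which is what makes the “moral” scenarios discussed below far less likely.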
The Moral Machine
This would render experiments such as the Moral Machine of the Massachusetts Institute of Technology superfluous, as would studies like The social dilemma of autonomous vehicles, published in 2016. That study aimed to establish which decisions made by an autonomous car in dangerous situations are moral. For example, what should the on-board computer do: run over a single person crossing the road, or crash the car carrying five passengers into a wall? What if there are five people crossing the road, instead of one? And what if the only person crossing is a child?
All these scenarios, realistic today, would become highly unlikely (with a probability close to zero) in the future, if autonomous cars were introduced into a system equipped with technological, structural and infrastructural defences such as the one described above. In short, autonomous cars would have the task of preventing an accident, not of facing its moral consequences. A car would not have to decide who the right person to run over is. Rather, it would focus on choosing the best route for a more comfortable and relaxing journey, one that would be safe for everyone.
Myth #4: autonomous cars will never be safer than manned cars
Will autonomous vehicles one day increase road safety? We believe the answer is yes. However, we agree with Don Norman, the cognitive psychologist and father of a user-centred vision of design: their introduction must be cautious and gradual and undergo appropriate tests, a bit like what happens with a new drug before it is allowed on the market. The current debate on autonomous cars, polarized between those who oppose this technology and those who, on the contrary, defend it at all costs, is not new. Automation is a sector that has fuelled conflicting opinions ever since it was born, and not only in the automotive field.
Transitioning to the future
We are indeed facing a historic transition. In the streets, in the skies, in workplaces, even in our homes, we are witnessing a revolution that will forever change the way society looks. And as always in the face of great changes, what dominates at first is fear of novelty, of the unknown: the insecurity generated by a future that presents itself in a disruptive way, but in which no one is yet immersed, and which is neither understood nor controlled.
Who remembers the debate when the internet was born? Some spoke of it as a great democratic revolution, which would give everyone free access to information, new job opportunities and much more. Others, on the other hand, saw it as a terrible instrument of alienation and control over our lives, one that would reduce our freedom. In the end, the internet turned out to be both. It brought enormous social benefits, albeit with contradictions and controversial aspects linked to the security of our digital identities.
All major innovations bring about profound changes and need time to be assimilated. But above all, they need effective rules that allow them to be introduced gradually, combining the old with the new, and reducing the associated risks as much as possible, even if these can never be avoided entirely.
In short, automation yes, but without rushing.