Driverless cars still have a long way to go

April 12, 2018 | Opinions, Technology

Last week, police in Tempe, Arizona, released dash cam footage showing the final seconds before an Uber self-driving vehicle crashed into 49-year-old pedestrian Elaine Herzberg, who died at the hospital shortly afterward.

While Herzberg is visible for less than two seconds in the camera footage, she might have become visible earlier to a human driver, since human eyes are better at picking out details in low-light situations. The police also released internal dash cam footage showing the car’s driver, Rafaela Vasquez, in the seconds before the crash. Vasquez can be seen looking down toward her lap for almost five seconds before glancing up again and realizing what is about to happen.

This is not the first collision involving a self-driving vehicle. Previous crashes have involved Uber vehicles in Arizona, a Tesla in 'Autopilot' mode and several autonomous test cars in California. While it is still unclear who is to blame for the latest incident, most of the earlier crashes were attributed to human error.

Image source: ReadWrite

Why do autonomous vehicles crash? The first cause is the sensors failing to detect what is happening around the vehicle. Herzberg was walking a bicycle across a poorly lit roadway, and only about 1.4 seconds elapse between the moment she starts to become visible and the video's final frame. Each car and each sensor works differently: GPS may need a clear view of the sky, and a camera may function properly only with ample light. Researchers have not settled on an 'ideal' sensor suite for an autonomous vehicle, and manufacturers cannot simply keep adding sensors to the fleet, because of cost.

The second cause is the vehicle encountering a situation its software was not designed for. Much like human drivers, self-driving cars make many decisions every second. When a car sees something it is not programmed to handle, it cannot predict what will happen next. The challenge for programmers, then, is to combine the information from all of the sensors into an accurate representation of the surroundings; without one, the vehicle cannot make a decision.
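To make that idea concrete, here is a minimal Python sketch of one way sensor fusion might work. It is illustrative only, not Uber's actual software: the `Detection` type, the independence assumption behind the combined confidence and the 0.7 threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's report of an object: a label and a confidence in [0, 1]."""
    label: str        # e.g. "pedestrian", "vehicle", "unknown"
    confidence: float

def fuse_detections(camera: Detection, lidar: Detection,
                    threshold: float = 0.7) -> str:
    """Combine two sensors' views of the same object into one decision.

    If the sensors agree on the label, their evidence reinforces each
    other; if they disagree, keep the more confident label but at
    reduced confidence. Below the threshold the planner gets 'unknown'
    and should fall back to cautious behaviour (e.g. slow down).
    """
    if camera.label == lidar.label:
        # Agreeing, independent evidence: combined confidence is
        # 1 - P(both sensors are wrong).
        combined = 1 - (1 - camera.confidence) * (1 - lidar.confidence)
        label = camera.label
    else:
        best = max(camera, lidar, key=lambda d: d.confidence)
        combined = best.confidence * 0.5  # disagreement halves our trust
        label = best.label
    return label if combined >= threshold else "unknown"

# A dim scene: the camera is unsure, the lidar still sees a shape.
print(fuse_detections(Detection("pedestrian", 0.4),
                      Detection("pedestrian", 0.8)))   # -> "pedestrian"
print(fuse_detections(Detection("unknown", 0.3),
                      Detection("bicycle", 0.5)))      # -> "unknown"
```

The point of the sketch is the failure mode: when the sensors disagree and combined confidence collapses, the vehicle is left with "unknown", which is exactly the state in which it cannot decide what to do next.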

Before autonomous vehicles can be released onto the road unsupervised, they need to be programmed with instructions for handling the out-of-the-ordinary and for planning around extreme situations. A human driver, for instance, might break a few traffic rules to save a life. An autonomous car needs to be taught to do the same instead of waiting for the situation to pass.

Additionally, self-driving cars must be taught to understand their surroundings and their context. A car like the one involved in the Herzberg crash should have been tested and rated on how well it performs difficult tasks, including assessing danger in a fraction of a second. This is much like a human driving test, and that is how it should be if the two are to co-exist safely on the road.

Another approach is to state the rules explicitly in formal logic, as nuTonomy, which is running an autonomous taxi pilot in cooperation with authorities in Singapore, is doing. nuTonomy's approach to controlling autonomous vehicles is based on a hierarchy of rules. Top priority goes to rules such as "don't hit pedestrians", followed by "don't hit other vehicles" and "don't hit objects". Rules such as "maintain speed when safe" and "don't cross the centreline" get lower priority.
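A toy version of such a hierarchy can be sketched in a few lines of Python. This is an illustration of the idea, not nuTonomy's actual system: here a manoeuvre is judged by the highest-priority rule it breaks, and the planner picks the manoeuvre whose worst violation matters least.

```python
RULES = [  # highest priority first
    "don't hit pedestrians",
    "don't hit other vehicles",
    "don't hit objects",
    "maintain speed when safe",
    "don't cross the centreline",
]

def worst_violation(violated: set[str]) -> int:
    """Index of the highest-priority rule broken.

    Returns len(RULES) if nothing is broken, i.e. the best score."""
    return min((RULES.index(r) for r in violated), default=len(RULES))

def choose(manoeuvres: dict[str, set[str]]) -> str:
    """Pick the manoeuvre whose worst violation is lowest-priority."""
    return max(manoeuvres, key=lambda m: worst_violation(manoeuvres[m]))

# A pedestrian steps out. Continuing breaks the top rule; braking breaks
# "maintain speed when safe"; swerving breaks only the lowest-priority
# rule, "don't cross the centreline", so under this toy ordering it wins.
options = {
    "continue":      {"don't hit pedestrians"},
    "brake in lane": {"maintain speed when safe"},
    "swerve left":   {"don't cross the centreline"},
}
print(choose(options))  # -> "swerve left"
```

The appeal of this style is that the car's behaviour in an emergency follows directly from an ordering that humans wrote down and can audit, rather than from opaque learned weights.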

Image source: The Verge

Learning how to drive is an ongoing process for both humans and their autonomous cars, as both adapt to new situations, road rules and technology. Time will tell which approach works better, but for now, the research still has a long way to go.
