We are years away, probably decades, from self-driving cars making up the majority of the vehicles on our roadways.  Until then, we will have to put up with a mix of human-piloted vehicles and autonomous ones, in varying stages of testing and refinement.

As it is, nearly 40,000 people die every year in automobile crashes on U.S. roads.  Vastly more are injured.  And given the small number of autonomous vehicles on the road today, it’s safe to say that nearly every one of these crashes is caused by a person crashing into other people or objects.  The promise of bringing these numbers down is the primary argument of advocates of robotic cars.

And it is a compelling argument.  Once the cars are all communicating with each other, using microchips and wireless tech instead of simple turn signals, they will bump into each other less, right?  They will operate more like cars on a train than like thousands of individual trains, right?

Yes, it makes sense that once all of our cars, or at least the vast majority of them, are replaced with self-driving ones, the roads will get much safer.  But what happens in the meantime?

A Different Kind of Driver

There is a concern that, as our highways become testing grounds for robotics technology firms, injury and fatality numbers may increase.  And the basis for that concern is not just that the robot cars will malfunction.  Though they have, and they will.

No, the curious twist is that autonomous cars are getting into crashes at a significant rate even when they are fully functioning and obeying all traffic rules.  The implication is that the crashes are happening precisely because the cars obey every rule.  And humans don’t expect that.

Sure, it’s better to program these cars on the safe side, leaning more toward stopping for no good reason than accelerating through a crowd.  But, right or wrong, human drivers expect a more subtle approach to the rules:  Accelerating through a yellow light; rolling through a stop sign; speeding up to pass.  Humans certainly do not expect a car at full speed to stop in the middle of the road, for no reason, “just in case.”  But that is exactly what some self-driving cars are doing.  Data from the California DMV supports the conclusion that autonomous cars are making too many unexpected stops:  When self-driving cars crash, they are most frequently rear-ended.

These and other reports on autonomous vehicle crashes are collected by the State of California, where much of the testing has taken place.  Kyle Vogt, co-founder and CEO of Cruise (GM’s self-driving project), is quoted in this month’s Wired as saying that the crash reports make clear that humans expect other humans to bend or break traffic rules.  But his robots won’t follow suit:

“We’re not going to make vehicles that break laws just to do things like a human would.  If drivers are aware of the fact that AVs [autonomous vehicles] are being lawful, and that’s fundamentally a good thing because it’s going to lead to safer roads, then I think there may be a better interaction between humans and AVs.”


Labeling the Experiment

If we know robots drive differently from humans, and we aren’t going to change the robots, then a good start to helping humans know what to expect would be regulations requiring clear marking of autonomous vehicles.  On all sides.  We know that the big rig in front of us makes wide right turns because it tells us so, with a sign.  We know the postal jeep in front of us may stop at any given house because it is painted like a post office vehicle.  By contrast, many self-driving cars are indistinguishable from human-driven ones.  If the public has been involuntarily drafted as test subjects for a decades-long robotics experiment on our roads, the least we deserve is notice when one of these vehicles is driving in our midst.
