So Google posts this thing on Medium about how its self-driving cars have been involved in 11 accidents over 1.7 million miles of driving. It’s a fascinating look into what Google is learning about what it takes for computers to drive cars. What’s most interesting is, of course, why these accidents occurred: stupid humans. “Not once was the self-driving car the cause of the accident,” writes Google’s Chris Urmson.
This is how the San Francisco Chronicle editorial board covered the story (paywall):
Self-driving cars, hailed as the wave of the future, might need to tap the brakes. According to the state’s Department of Motor Vehicles, four of the 48 self-driving cars currently operating in California have gotten into accidents since September, the first month that the state issued permits for companies to test the cars on public roads…
[Self-driving technology] must also be carefully monitored, like any new technology with the potential to cause harm. State regulators have been fairly hands-off with self-driving technology, but they might want to consider how to make more safety information available to the public.
I’m far from being one of those champions of Sergey and Larry’s private island utopia where technology can advance without interference from pesky laws, but it’s fascinating to contrast Google’s calm disclosure—we’re learning a lot, all our accidents have been minor, and it’s basically always been the fault of a human in another car—with a newspaper editorialist’s concern that we need to “tap the brakes” on self-driving cars.