This month in San Francisco, one of the hundreds of self-driving cars on the city’s roads ran over a pedestrian who had been thrown into its path by a human-driven car, severely injuring her. San Francisco Fire Chief Jeanine Nicholson recently testified that as of August, self-driving cars had impeded firefighting efforts 55 times this year. And Tesla’s driver-assistance software, Autopilot, has been involved in 736 crashes and 17 fatalities nationwide since 2019.
For all the fuss about how artificial intelligence might one day threaten humanity, there’s been surprisingly little discussion of how it threatens us right now. When it comes to self-driving cars, we’re essentially driving blind.
The reason is simple: there are no federal safety testing standards for self-driving software, a loophole big enough for Elon Musk, General Motors, and Waymo to drive thousands of cars through. The National Highway Traffic Safety Administration regulates the hardware of cars sold in the United States (windshield wipers, airbags, mirrors, etc.), and states issue licenses for human drivers. Most people must pass a vision test, a written exam, and a road test to earn the right to drive a car.
AI undergoes no such government scrutiny before getting behind the wheel. In California, companies can get permission to operate driverless cars by declaring that their vehicles have been tested and that “the manufacturer has reasonably determined that the vehicle is safe to operate.”
“There’s a weird disconnect about who’s in charge of licensing computer drivers: NHTSA or the states?” asks Missy Cummings, a professor at George Mason University and director of its Mason Autonomy and Robotics Center.
There’s an irony here: many headlines focus on fears that computers will become so smart that they wrest control of the world from humans, but the reality is that computers are often too dumb to avoid harming us.
Despite these publicized glitches, self-driving car makers insist their software is safer than human drivers. That may be true; after all, self-driving cars don’t get tired, drive drunk, or text behind the wheel. But we don’t yet have the data to tell the tale. And self-driving cars make other kinds of mistakes, like stopping in a way that blocks an ambulance or striking a crash victim lying in the road.
Last month, Reps. Nancy Pelosi and Kevin Mullin wrote to the NHTSA requesting more data on self-driving car accidents, especially those involving stopped vehicles that get in the way of emergency workers. More comparative data on human-driven accidents would also be helpful, since the NHTSA only provides crash estimates based on a sample.
But why can’t we do more than just collect data?
After all, AI often makes surprising mistakes. This year, a Cruise vehicle mispredicted the movements of an articulated bus and crashed into it; the company updated its software after the incident. Last year, a self-driving car braked abruptly while making a left turn, apparently predicting that an oncoming car would turn right across its path. Instead, the oncoming car crashed into the stopped self-driving car, injuring passengers in both vehicles.
“The computer vision systems in these cars are highly fragile. They will fail in ways we simply cannot understand,” Dr. Cummings said. She has written that AI should face licensing requirements equivalent to the vision and performance tests that human pilots and drivers must pass.
Of course, the problem isn’t limited to cars. Every day we learn about AI chatbots failing in all sorts of ways, from fabricating legal cases to sexually harassing users. And we’ve long grappled with the failures of AI recommendation systems, from Amazon recommending gun parts and drug paraphernalia despite restrictions on those items to YouTube pushing ideologically biased content.
Despite these real-world harms, many regulators are distracted by far-off and, to some, far-fetched doomsday scenarios promoted by AI pessimists: leading tech researchers and executives who say the risk of future human extinction is a major concern. The UK government is hosting an AI safety summit in November, and Politico reports that its AI task force will be led by one such pessimist.
In the United States, a wide range of AI bills have been proposed in Congress, most of them focused on these far-off worries, such as barring AI from deciding to launch nuclear weapons and requiring licensing and registration for some high-risk AI models.
Such pessimism is a “diversion tactic to drive people to chase infinite risk,” says Heidy Khlaaf, a software safety engineer and engineering director at the security firm Trail of Bits. In a recent paper, Dr. Khlaaf argued for focusing AI safety testing on the specific domains in which AI systems are deployed, such as lawyers using ChatGPT.
In other words, we need to start recognizing that AI safety is a solvable problem that can and should be solved now with the tools we have.
Experts in each field need to evaluate the AI being used in their domains and decide whether it is too risky, starting by putting self-driving cars through vision and driving tests.
It may sound boring, but that’s what safety is all about: experts coming together to run tests and create checklists. And we need to start doing that now.
This article originally appeared in The New York Times.