Surviving as an AI in the US – the most litigious nation in the world
In the past few years, AI advancements have made self-driving cars more of a reality than the remote possibility they once were. Thanks to GPU advances, algorithm improvements, and lots of venture capital thrown at the problem, Waymo was the first company to release its fleet (albeit in a very limited fashion) in Phoenix, AZ.
I was talking to my real estate agent there about this amazing feat when he raised a very good point – how can companies that develop self-driving cars protect themselves from class-action lawsuits brought about by ethical dilemmas, like an AI having to choose between killing its own passengers or an oncoming driver, or between a grandma and a baby?
This is a delicate problem even for people: recent studies, such as MIT's Moral Machine experiment, have shown that the preferred outcome varies from culture to culture.
We can't settle the question among ourselves, so it's unlikely we can hand it to an AI with a single blanket procedure and expect the answer to be acceptable to every culture.
Here’s an example:
A self-driving car encounters a speeding human driver swerving out of control on the other side of the road. It evaluates the situation and sees that it has two options:
- It hits the other vehicle and throws it off a cliff, killing the other driver and any passengers, OR
- It swerves off the road and into the canyon, killing its own passengers.
Whatever it does, the AI will kill someone. In the US and other jurisdictions, this will likely get the company operating the fleet sued and held liable across its entire operation for an isolated incident it couldn't control anyway, even while hundreds of thousands of its other self-driving cars operate normally.
This kind of blanket liability will likely cripple adoption and innovation at scale, given how risky driving already is. Yes, you are far more likely to die in a car crash than on most other common modes of transportation.
So what could be a solution here?
Treat AIs like franchises once trained. Companies like Google, Uber, Lyft, Tesla, and others could treat each trained AI as an independent entity under a micro LLC, where that AI's liability is limited to a fixed amount and to specific recourse. Whoever owns or operates it carries part of the liability (for maintenance, configuration, and potential fallback mechanisms), so that if an accident were to happen in Astoria, NY, it wouldn't affect cars in Sacramento, CA. A rough sketch of this structure follows.
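Purely as an illustration, here is a minimal Python sketch of how a fleet operator might model each deployed AI as its own capped-liability entity. Every name, ID scheme, and dollar figure here is a made-up assumption, not anything Waymo, Uber, Lyft, or Tesla actually does.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each trained driving model is wrapped in its own
# limited-liability entity, so a claim against one deployment is capped
# and does not reach the rest of the fleet.

@dataclass
class MicroLLC:
    """One deployed AI treated as an independent legal entity (illustrative)."""
    entity_id: str            # made-up ID scheme, e.g. "astoria-ny-0042"
    model_version: str        # the frozen, trained model this entity operates
    liability_cap_usd: float  # maximum exposure for this single entity
    operator: str             # who maintains and configures this specific car

@dataclass
class FleetParent:
    """The parent company only aggregates entities; claims stay local."""
    name: str
    entities: list[MicroLLC] = field(default_factory=list)

    def exposure_for_incident(self, entity_id: str) -> float:
        # A claim is bounded by the single entity's cap, not the fleet's worth.
        for entity in self.entities:
            if entity.entity_id == entity_id:
                return entity.liability_cap_usd
        raise KeyError(entity_id)

# Usage: an incident in Astoria, NY touches only that entity's cap;
# cars in Sacramento, CA belong to different entities and are unaffected.
fleet = FleetParent("ExampleCars Inc.", [
    MicroLLC("astoria-ny-0042", "drive-v3.1", 1_000_000.0, operator="NYC franchisee"),
    MicroLLC("sacramento-ca-0007", "drive-v3.1", 1_000_000.0, operator="CA franchisee"),
])
print(fleet.exposure_for_incident("astoria-ny-0042"))  # 1000000.0
```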
Right now, car companies don't carry all the liability for their products, as long as those products operate within expected parameters, because the decisions are always made by a human at the wheel.
What if you could configure your AI when you buy the car to kill the grandma over the baby, or to protect the passengers over the other car? Then you would effectively release the AI and its parent company from that liability and allow innovation to be adopted faster. A hypothetical sketch of what such a configuration could look like follows.
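To make the idea concrete, here is a small hypothetical sketch of an owner-set ethics profile with a recorded liability release. The option names, the profile fields, and the release flag are assumptions for illustration only, not any carmaker's actual interface.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of an owner-configurable ethics setting. Choosing a
# priority requires acknowledging a liability release, shifting that slice
# of responsibility from the manufacturer to the owner.

class CollisionPriority(Enum):
    PROTECT_PASSENGERS = "protect_passengers"    # favor the car's own occupants
    PROTECT_OTHERS = "protect_others"            # favor people outside the car
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"  # purely utilitarian tie-breaker

@dataclass(frozen=True)
class OwnerEthicsProfile:
    owner_id: str
    priority: CollisionPriority
    acknowledged_release: bool  # owner explicitly accepted liability for this choice

def register_profile(profile: OwnerEthicsProfile) -> None:
    # In this sketch the choice only takes effect if the owner signed the release.
    if not profile.acknowledged_release:
        raise ValueError("Owner must acknowledge the liability release first.")
    print(f"Profile for {profile.owner_id} set to {profile.priority.value}.")

register_profile(
    OwnerEthicsProfile("owner-123", CollisionPriority.PROTECT_PASSENGERS, True)
)
```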