A self-driving Volvo SUV owned and operated by Uber Technologies Inc. lies flipped on its side after a collision in Tempe, Arizona, U.S., on March 24, 2017. (Mark Beach/Fresco News/Reuters)

If an autonomous vehicle has an accident, who is legally responsible?

Opinion: In the year ahead, lawyers, judges and politicians will have to start considering non-human decisions within the framework of human law
By Adam Goldenberg

Adam Goldenberg is a commercial and public law litigator at McCarthy Tétrault LLP and an adjunct professor of law at the University of Toronto.

She might be in a hurry—late for an appointment or running a quick errand. Perhaps she’ll cross between intersections, emerging from between parked cars into the path of an oncoming car. The collision will leave her wheelchair-bound. Litigation will ensue. This happens every day, around the world.

No doubt she contributed to her own injuries; it was negligent to jaywalk as she did. But that alone won’t absolve others—most obviously, the driver—of liability. The question will be: who else should be required to compensate her?

But what if there is no driver? What if the vehicle is autonomous, powered by artificial intelligence? This is the sort of science non-fiction that will change the law forever. The law’s AI revolution will begin in the courtroom—and soon.

Our laws are designed to regulate human acts and omissions. They aim to hold humans to account. When a product malfunctions, we hold its manufacturer responsible. When a promise goes unfulfilled, we seek redress from the promisor. When a corporation defrauds investors, we demand answers from its leaders.

In holding wrongdoers to account, we rarely demand perfection; to do so would ask too much of our fellow human beings. The law reflects this. The standard, across a swath of legal doctrines, is “reasonableness.” Yes, there was a patch of ice on the road that caused the fatal collision, but if the municipal authorities did all that could reasonably be required of them to make the street safe, a court won’t order them to pay compensation in tort. No, the official who denied your uncle’s visa application didn’t apply the law exactly the way a judge would, but so long as his interpretation was reasonable, a judge likely won’t intervene.

But what if the alleged wrongdoer is an artificially intelligent machine? How will our justice system—staffed by humans, hampered by human limitations—assess the “reasonableness” of conduct by AI-powered computers, whose capacity to “reason” by weighing alternatives and making optimal, split-second decisions already far exceeds our own?

Artificial intelligence, the technology by which machines can learn and make decisions, already has at least a toehold in practically every facet of our economy and society. It shapes the internet advertisements we see, the customer service we receive over the telephone and the behind-the-scenes processes that put gas in our vehicles, food on our tables and consumer goods in our mailboxes. It will soon power the federal government’s processing of tax disputes, immigration applications and benefits claims—and allow law enforcement to track our movements, and those of millions of others, on public streets everywhere, all the time, and all at once. It is also why our smartphones can turn our voices into text, recognize faces in our photos and find us the fastest route to work in the morning.

While the law has always found a way to adapt to technological change, technology has never been able to replace humans entirely in the doctrinal calculus—until now. Once AI surpasses the natural frailties of human reason, its advances will put legal concepts like “reasonableness” and “intent” in doubt. If an autonomous vehicle is programmed to behave more reasonably than a human driver ever could, and if that vehicle is involved in an accident without ever malfunctioning, then who is a human judge to hold it liable, and on what basis? If a bureaucratic process is automated such that a government decision is necessarily aligned with the law pursuant to which it is made, then how can a court reverse it as unreasonable, particularly once machine learning makes it impossible to explain how exactly the system arrived at the conclusion it did?

To date, disputes involving AI systems have largely been resolved out of court, and the questions that judges have answered in jurisdictions like ours have been relatively straightforward. Statements by internet “chatbots,” designed to emulate human communication, can support civil liability for fraud. Plaintiffs have been permitted to pursue claims related to alleged software defects despite struggling to explain how exactly, or even if at all, the software malfunctioned. Accidents involving autopilot technology have been litigated for decades.

But as AI becomes more ubiquitous and sophisticated, courts will face unprecedented challenges in regulating more complicated non-human conduct, and in adjudicating the consequences of superhuman decision-making. The first major fights will be debates about analogies, the tools that judges use to apply established legal principles to novel fact scenarios. Is an AI system more like a product or a service? Is autonomous technology sufficiently similar to a human employee to hold its “employer” responsible for injuries to third parties, or copyright infringement, or collusion, or price-fixing?

The liability rules that will apply in a particular AI context will depend, in significant measure, on distinctions like these, dull as they may seem. Sifting through them will be lawyers and judges with limited technical knowledge, engaged in the imperfect rituals of our adversarial litigation process. We will do our best to fit paradigm-altering innovations into existing legal vocabularies. As courts do so, one case at a time, the resolution of these early AI disputes will have lasting implications for how our society handles technology that, in many contexts, will make humans obsolete.

The first AI controversies to be litigated won’t arise in an orderly fashion, either. They’ll come from all directions, and in every area of law.

Intellectual property is one example. The U.S. Copyright Office announced in 2014 that it would not register works created “without any creative input or intervention from a human author.” AI-generated creations are already calling that rule into question, and it is increasingly ripe for testing in the courts.

Professional liability is another. Lawyers and other professionals have used various kinds of AI in their practices for years, and they can be sued if they exercise their judgment unreasonably. When things go awry, we will be called upon to answer for our reliance on AI, or for our failure to rely on it.

Public law is a third. Governments that expand their use of AI in administrative decision-making, as all governments will, can expect to face challenges to those decisions on human rights grounds, as a University of Toronto study has recently warned.

For lawyers with an eye on history, these kinds of claims will prove irresistible. And why wait? Though the blockbuster rulings that will reshape how the law governs our machine-enabled world are still years away, the litigation that will generate those decisions—and the facts and arguments that will underpin them—won’t be long in coming.

All it might take, in 2019, is a bit of careless jaywalking.