Research Spotlight: When AI takes the wheel, who’s responsible?

Larissa Puro • January 15, 2026

On a gray Southern California afternoon, Professor Gregory Keating slides into the backseat of a self-driving car. It’s his first time in an autonomous vehicle, yet the longtime USC Gould School of Law tort scholar has spent years thinking about the legal world it will upend — or not.

As the car eases onto Exposition Boulevard, Keating marvels at the empty driver’s seat, a symbol of disruption for more than a century of liability law. That framework was built around humans — and while we now share the road with AVs, we do “not share with them membership in a moral community,” Keating writes in his 2024 paper Pouring New Wine into Old Skins: The Case of Self-Driving Cars.

Early scholarship about autonomous vehicles assumed the technology would require an entirely new liability regime, Keating recalls. But no such system has emerged. Federal law still treats AVs as human drivers, and so Keating is focused on how the current tort framework might rise to meet a challenge it was not designed for.

Video by Sydney Salinger and Larissa Puro

A tale of two expectations

Proposing new legal approaches is complicated by contradictory expectations of the technology, Keating says.

“On the one hand, when accidents happen, we appear to demand more of the humans involved than of the autonomous vehicles,” he writes. “On the other hand, when we think generally about the acceptability of autonomous vehicles on our roads, we think that driverless cars must be much safer than cars with drivers.”

Gregory Keating is William T. Dalessi Professor of Law and Philosophy at USC Gould School of Law. Photo: Sydney Salinger

The 2018 death of Elaine Herzberg in Arizona, perhaps the most widely discussed AV incident to date, illustrates this tension.

A self-driving Uber prototype struck and killed Herzberg as she crossed the street with her bicycle. Despite the vehicle’s failure to detect her, the human safety driver, Rafaela Vasquez, was charged with negligent homicide because she had been looking down in the moments before the impact.

From what we can tell, Uber itself faced minimal legal exposure, Keating says. A human driver who failed to see Herzberg would likely have been judged far more harshly, he believes.

“When a human being doesn’t see somebody crossing the street walking a bicycle, we call it a lapse,” he says. “We think you have the competency, but human beings are error prone.”

But in the Herzberg case, the AV lacked competency. “When you lack the competency, there’s a reason for people to lose confidence.” A flaw encoded in software erodes trust in a way that a human error does not: Once triggered, it reliably repeats the same failure whenever the same conditions arise.

Old laws, new tech

In Keating’s research, he looks beyond the obvious challenge of determining fault, to understand how centuries-old doctrines try — and sometimes fail — to stretch around technologies they were never meant to govern.

“What you do in the law is … wedge the new thing into the existing boxes,” Keating says. He sees AV liability drifting toward three possible frameworks:

1. Assigning Human Responsibility

Keating describes the centuries-old doctrine of animal trespass: “If it’s your cow that goes on to somebody else’s property and eats their grass, you’re responsible.” The animal “trespasses”; the owner pays.

A self-driving car, similarly non-moral and non-human, could be treated as an agent whose actions are automatically imputed to someone: the manufacturer, the owner or another designated responsible party.

While this framework has precedent, “it is anything but clear whether, and how, legal liability should run through the ‘agency’ of autonomous vehicles,” Keating writes. Do we run it through to the owner? To the manufacturer of the car? To the developer of the self-driving software?

2. Creating Insurance-Based Liability

Another model, borrowed from English law proposals, would require “all self-driving cars [to] carry their own insurance… and the responsibility is just imputed to the insurer.” Under this arrangement, AVs would be legally capable of being “liable,” but insurance would handle payouts, sidestepping the thorny question of whether an algorithm can be at fault.

“I’ve yet to discover what the insurance industry thinks about that,” Keating jokes.

Lately, he’s noticed insurers retreating from coverage related to artificial intelligence, concerned that system-wide issues in the software could spark widespread, correlated losses — much like how a faulty power line can ignite a wildfire that consumes entire neighborhoods.

3. Layering Negligence With Product Liability

In practice, Keating suspects courts may default to negligence — treating the AV as a human driver and then layering product liability analysis on top. “The vehicle’s only liable if it was defectively designed,” he explains, meaning plaintiffs must show a better, feasible design existed. But design-defect litigation for rapidly evolving AI systems, he warns, is “very complicated,” especially if or when vehicles begin communicating with one another as a connected network.

As the Waymo makes its way down Jefferson Boulevard, Keating reflects on another unresolved issue: What is an AV platform, legally speaking? “There’s products, which are different from services,” he notes, but ride-hail automation “confounds that distinction.” Courts have already deemed certain apps — such as a Snapchat speed-recording feature — to be products. Yet ride-hail companies also provide services akin to taxis.

“So there’s a whole question as to how to figure out where to put this… in terms of legal boxes,” he says. Whether AV companies are product manufacturers, service providers, platform operators or some hybrid of the three will shape future lawsuits and regulatory frameworks.

Navigating the future

The steering wheel rotates and the car pulls itself over to the USC Hotel curb, chiming brightly as it notifies Keating to remember to take his belongings. In the background, more camera- and sensor-studded AVs are making their way down Figueroa Street.

Despite their rapid proliferation in big cities and the legal challenges they have already posed, AVs remain at an early stage, says Keating. “Waymo is very impressive, but it’s still a bit of a curiosity.” Whether these vehicles “jump from there to being a major transportation technology” remains to be seen.

Even so, he is helping shape the emerging legal landscape. As a member of the American Law Institute’s project on Principles of Civil Liability for Artificial Intelligence, he is already thinking beyond vehicles, noting that “autonomous vehicles look less scary than chatbots” in light of cases where chatbot interactions have caused real-world harm.

The legal system will continue to confront new and unexpected issues raised by autonomous vehicles, and Keating says he is mindful of the unknowns: “The real world is an open universe, and it may turn out that human beings are actually better in an open universe than anything we can invent.”

For legal scholars, policymakers and the public, the coming years will test whether society can build — or adjust — a legal framework sturdy enough for the new technology driving us forward, Keating says. “Lots of things are possible.”
