The Challenging Legal Environment of AI

Rapid advances in AI will continue to add to legal risks and regulatory needs, an industry expert said at WCX 2023.

Jennifer Dukarski, who has degrees in mechanical engineering and law, is in a unique position to observe the legal developments of rapidly advancing vehicular artificial intelligence. (Chris Clonts)

“Holy cats. What happens when this stuff goes wrong?”

That’s how mechanical engineer and attorney Jennifer Dukarski framed her Tuesday talk on developments in vehicular artificial intelligence (AI) and machine learning at the 2023 SAE WCX conference in Detroit. She linked the discussion to General Motors’ March announcement that it was exploring using ChatGPT as the driver interface in its vehicles. (GM VP Scott Miller told CNBC at the time that, among other things, it could give drivers quick access to information they would normally have to look up in an owner’s manual.)

“I remember about seven years ago, at CES, Toyota had a demo of a super smart vehicle that would have a conversation with you and ask you how you’re feeling,” Dukarski said. “So, fast-forwarding that to today, maybe using ChatGPT will give a good dialogue or conversation. But when you look at it, GM really is doubling down on their partnership with Microsoft, which is putting about $10 billion into the company that created ChatGPT. So where will that go?” The implication: GM is likely looking for more than a friendly interface from such a large investment.

ChatGPT has already had its share of big problems, some of them in earlier versions of the rapidly improving system: it has failed basic math problems and compliantly written morally reprehensible essays about sexual assault. Back on the subject of vehicles, though, Dukarski looked to history to illustrate the kinds of liability and other legal issues that AI can create. She cited:

  • A 2017 lawsuit in which a tourist bus using Garmin GPS plowed into a bridge it was too tall to clear, injuring passengers. The suit accused Garmin of negligence and faulty software design. Like many similar cases, it was settled, which Dukarski said meant “we’ll never really know” the complete circumstances.
  • A 2018 case in which a lane-hopping motorcyclist was hit by a partially autonomous car. The case was settled but left industry eyebrows arched when GM said in a filing that it had a duty to design a system that acts responsibly.
  • The Tesla cases filed since 2018, some of which have centered on the assertion that Tesla should have known that drivers would misuse the system it calls Autopilot. She cited one case that was set to go to trial last week but may have been settled.

Liability cases generally depend on two legal tests, Dukarski said:

  • Consumer expectation: This is, more or less, the “reasonable person” test. It imposes liability if a product is defective in a way that does not match the expectations of a reasonable consumer.
  • Risk utility: This standard is a cost-benefit analysis that weighs the probability and severity of injury against the utility lost (and cost added) under an alternative design or engineering choice; a rough illustration of the balancing follows this list.
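
To make that balancing concrete, here is a minimal, purely illustrative sketch. Every function name, input, and number is invented for this example; real risk-utility analysis weighs many more factors, and none of this carries legal weight.

```python
# Hypothetical sketch of risk-utility balancing -- not a legal formula.
# All names, inputs, and numbers are invented for illustration only.

def expected_harm(p_injury: float, severity_cost: float) -> float:
    """Expected injury cost of a design: probability times severity."""
    return p_injury * severity_cost

def alternative_design_wins(
    p_injury_current: float,    # chance of injury with the current design
    p_injury_alt: float,        # chance of injury with the safer alternative
    severity_cost: float,       # average cost of an injury when one occurs
    utility_lost_by_alt: float  # utility/cost given up by switching designs
) -> bool:
    """True if the reduction in expected harm from adopting the
    alternative design outweighs the utility it sacrifices."""
    harm_reduction = (expected_harm(p_injury_current, severity_cost)
                      - expected_harm(p_injury_alt, severity_cost))
    return harm_reduction > utility_lost_by_alt

# Example: a change cuts injury odds from 1-in-10,000 to 1-in-100,000
# per vehicle-year, an injury costs ~$500,000 on average, and the change
# gives up ~$20 of utility per vehicle-year: 50 - 5 = 45 > 20.
print(alternative_design_wins(1e-4, 1e-5, 5e5, 20.0))  # True
```

The sketch only shows the structure of the comparison: expected harm avoided on one side, utility sacrificed on the other.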

The liability doesn’t just concern OEMs, either. “If you’re a consumer, and you have an issue, you have the right to go after the manufacturer, you have a right to go after the suppliers,” she said. “And as a trend, we’re seeing more suppliers get caught up in direct litigation, they’re being named more and more. So the supply chain is being brought to the table in class actions more frequently.” She noted that even though most supplier contracts carry clear, strong indemnification language putting liability on the supplier, OEMs must perform an interesting dance: it’s not in their best interest to shut down their own supply chain.

Dukarski said hope is slim that either legislation or regulation will head off the situations that end in lawsuits. She essentially wrote off legislation, citing a toxic political atmosphere that keeps the parties from working together: even when they agree laws need modernizing, they can’t agree on an approach, as with data privacy. “I think with AI, as we’ve seen elsewhere, California will create a baseline, then everybody else will rebel against it. They’ve already done that with regulation of the Internet of Things.”

She said much of this sits squarely in the bailiwick of the National Highway Traffic Safety Administration (NHTSA). But despite moving to order a recall of Tesla’s programmed “rolling stops” and investigating problematic driver-assistance systems, the agency has to be careful because of the Supreme Court’s willingness to “smack down regulatory bodies,” as it did in West Virginia v. EPA, which curtailed the EPA’s authority under the Clean Air Act.

She briefly touched on the ethics of AI, asking rhetorically whether “we should put bumpers on our AI systems to keep them from doing something illegal.” Bias, she said, is already a recurring issue in AI systems. Echoing earlier failures in the photo-recognition industry, a Georgia Tech study found that a certain vehicle training dataset (used to spur the learning of AI systems) led to safety software that did not recognize Black people as pedestrians. Dukarski said diversity must be a consideration throughout any AI development process (a simple sketch of such an audit appears below).

The lawyer and mechanical engineer, who works with Detroit-based Butzel Long, encouraged SAE members to get involved in solving these complex challenges. “We’re going to continue to see SAE and other standards bodies weigh in on it, which I think is really going to be our guideposts,” she said. “So to the degree that you do see an SAE working group and you like to do working-group work, please get involved.”
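
As a closing illustration of what “diversity as a consideration” can mean in engineering terms, here is a minimal, hypothetical sketch of the kind of per-group audit that can surface the disparity the Georgia Tech study described. The data fields, group labels, and gap threshold are all invented for illustration; real audits use labeled benchmark datasets and standard metrics.

```python
# Hypothetical per-group audit of a pedestrian-detection model.
# All inputs and thresholds are illustrative, not from the cited study.

from collections import defaultdict

def recall_by_group(examples, detector):
    """Compute detection recall separately for each demographic group.

    `examples` is an iterable of (image, group_label) pairs, each known
    to contain a pedestrian; `detector(image)` returns True when the
    model finds a pedestrian in the image.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for image, group in examples:
        totals[group] += 1
        if detector(image):
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

def flag_disparities(recalls, max_gap=0.02):
    """Flag any group whose recall trails the best-performing group by
    more than `max_gap` -- the kind of gap the study above surfaced."""
    best = max(recalls.values())
    return {group: r for group, r in recalls.items() if best - r > max_gap}
```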