Of Moose and Artificial Intelligence

A recent run of wacky accidents by Cruise robotaxis isn’t doing much for the artificial-intelligence revolution.

(Cruise)

I know nothing more about artificial intelligence (AI) than what I read and what learned people tell me. I know it’s supposed to bring new sophistication to all manner of processes and technologies, including automated driving. So, when a driverless robotaxi operated by GM’s Cruise plowed into a section of freshly poured cement on a San Francisco street, it raised yet more questions about the beleaguered company. My mind wandered to AI, which many AV compute “stacks” are touted to leverage in abundance. Driving into wet cement isn’t intelligent.

Did somebody need to train the vehicle’s AV stack specifically to recognize wet cement? If that’s how it works, I’d prefer not to bet my life on whether some fairly oddball happenstance – is the term ‘edge case’ not cool anymore? – had been accounted for in the particular version of the automated-driving system’s software running that particular day.

The question of whether AVs can be made intelligent enough to account for any situation caused me to recall a wet-cement analogue from a quarter century ago. In 1997, Mercedes-Benz had just launched the A-Class, the company’s first run at the compact-car segment. It was an almost unthinkable stretch for the brand, and a key challenge was convincing consumers that the tiny A-Class had the same level of safety as any larger Mercedes-Benz. But in an obscure-to-non-Scandinavians crash-avoidance maneuver called the “moose test,” conducted by a Swedish auto magazine, the A-Class flipped. The failure made global headlines.

Mercedes-Benz acted swiftly to correct the A-Class’ stability, chiefly with the addition of the then-gee-whiz technology of electronic stability control, or ESP. The controversy – and the dent to the brand’s image – subsided. But I remembered an interview Juergen Hubbert, the executive who ran Daimler’s Mercedes-Benz business during the moose-test crisis, gave two decades later to Europe’s Automobilwoche, which appeared in Automotive News Europe.

The reporter asked Hubbert if the engineers developing this crucial model were familiar with the moose test. “We really did every test you can imagine,” he responded. Except they didn’t. They hadn’t figured on a moose.

Back in the present day, it’s easy to pick on the misadventures of driverless taxis. But irrespective of the general safety implications of these failures, I’m struggling to understand why AI isn’t improving simpler aspects of ACES (automated, connected, electrified, shared) vehicle development. One example: I’ve long bored friends and colleagues with my ongoing war against what I see as the non-‘naturalistic’ and juvenile fashion in which almost all adaptive cruise-control systems execute certain common high-speed maneuvers. Yes, I include current ‘hands-off’ systems, too.

How about voice recognition in today’s supposedly high-function cabin user-experience (UX) environments? This technology, which has been commonplace for over a decade (Ford’s first Sync debuted in 2007!), is about the same hit-or-miss proposition it was in its early days. One writer’s recent review of a 2023 model said giving voice commands via the vehicle’s infotainment UX was essentially “useless.” Many of the cloud-based voice-response collaborations, such as those built on Amazon’s Alexa, don’t seem to be substantively improving anything, either. Shouldn’t fixing voice recognition’s comparatively modest problems be child’s play for AI?

Artificial intelligence seems destined to be useful for optimizing productivity in chaotic systems with lots of variables and patterns to be ordered and analyzed. It’s said to already be transformative in some manufacturing and research-and-development environments. I don’t doubt AI is going to have its triumphs. But wet cement and the moose test make me think there always will be situations that go beyond what programmers can conceive.