Mobileye on the Road

With 15 million ADAS-equipped vehicles worldwide carrying Mobileye EyeQ vision units, Shashua's company has been called “the benchmark for applied image processing in the [mobility] community.”

Amnon Shashua, co-founder and Chief Technology Officer of Israel-based Mobileye, tells a story about when, in the year 2000, he first began approaching global carmakers with his vision—that cameras, processor chips and software-smarts could lead to affordable advanced driver-assistance systems (ADAS) and eventually to self-driving cars.

By recording only landmarks along roadways and using them to locate vehicles, Mobileye creates compact yet detailed route maps.

“I would go around to meet OEM customers to try to push the idea that a monocular camera and chip could deliver what would be needed in a front-facing sensor—time-to-contact, warning against collision and so forth,” the soft-spoken computer scientist from Hebrew University of Jerusalem told Automotive Engineering during a recent interview. “But the industry thought that this was not possible.”

The professor was politely heard, but initially disregarded: “They would say, 'Our radar can measure range out to a target 100 meters away with an accuracy of 20 centimeters. Can your camera do that?'

“And I would say: ‘No, I cannot do that. But when you drive with your two eyes, can you tell that the target is 100 meters away or 99.8 meters away? No, you can’t. That is because such accuracy is not necessary.’”

In fact, Shashua and his engineers contended that a relatively simple and cheap monocular camera and an image-processing 'system-on-a-chip' would be enough to reliably accomplish the true sensing task at hand, thank you very much. And, it would do so more easily and inexpensively than the favored alternative to radar ranging: stereo cameras that find depth using visual parallax.

zFAS and furious

Mobileye's latest autonomous driving control units provide 360° awareness of road conditions and the locations of other road users.

Seventeen years later, some 15 million ADAS-equipped vehicles worldwide carry Mobileye EyeQ vision units that use a monocular camera. The company is now as much a part of the ADAS lexicon as are Tier-1 heavyweights Delphi and Continental. At CES 2017, Shashua found himself standing on multiple stages, in one case celebrated by Audi's automated-driving chief Alejandro Vukotich as “the benchmark for applied image processing in the community.”

Vukotich was introducing Audi’s new zFAS centralized control computer, which incorporates Mobileye’s latest EyeQ3 product and its most advanced driving features, as well as partner Nvidia’s powerful image processors. The zFAS box conducts 360° sensor fusion and makes driving decisions based on camera, radar and lidar input.

Mobileye's fifth-generation EyeQ5 system-on-a-chip is designed to perform sensor fusion for self-driving cars that will appear in 2020.

Shashua called Audi’s zFAS “the most sophisticated and ambitious ADAS to date.” That’s because when it arrives in the 2017 A8, it will debut a 10-s “take-over” request, or “grace period,” during which the driver can retake control should the vehicle encounter sudden trouble. It delivers industry-first SAE Level 3 driving autonomy (see sidebar).

When Shashua recounts the industry's early reactions to his vision, he tells the tales without any hint of triumph or self-justification. He’s just making a point about developing autonomous control: focusing on a single component of the system, such as sensing, can lead to costly miscalculations.

Shashua believes that a safe, self-driving car needs three capabilities: it must sense the road, it must find its place on the map of the road and it must successfully negotiate its place among the users of the road. These sensing, mapping and driving policy, or planning, elements—what he calls the three pillars of autonomy—are in fact intertwined.

"They’re not separate technology areas, but have to be developed together,” he explained. “If not, you can place unreasonable demands on each of the elements.”

Somewhere versus everywhere

The first pillar, sensing, is already pretty well defined, he said. “Sensing is relatively mature because of our many years of experience with driving assist,” which is primarily about interpreting sensor data to prevent collisions. Cameras provide around three orders of magnitude greater resolution than radars or laser scanners. Resolution matters when driving, he added, because scene details are vital, especially in the city, where density is higher.

The other distinguishing feature is that cameras capture texture and surface information. This “helps identify objects and capture semantic meaning, whereas other sensors see only silhouettes, the shapes of objects.”

At the wheel: Shashua and business partner Ziv Aviram established Mobileye NV in 1999 after licensing from the Hebrew University of Jerusalem the basic image-processing technology they'd created there.

Mapping, the second 'pillar,' is more complicated and less well-defined, Shashua noted. This task requires the development of an extremely detailed mapping system that provides the car with information on its location and the surrounding roads.

“The big difficulty with high-definition (HD) maps is finding how you can do it efficiently at a low cost,” he explained. “In mapping right now there is the ‘somewhere’ camp and the ‘everywhere’ camp,” he said. The ‘somewhere’ camp follows Google’s strategy: start with an autonomous car and map until you have full [autonomous driving] capability somewhere. Send out a vehicle that records a high-resolution ‘cloud-of-points’ laser scan along routes to map out an area until full capability exists there, then move on to the next area. And so on.

But one of the things that makes HD mapping problematic is coping with the huge amount of roadway data needed to capture enough scene details to support a safe interpretation. In the case of the ‘somewhere’ approach, road data rapidly grows to gigabytes per kilometer—a logistical nightmare, Shashua noted.

Even so, “Once you have finished recording the cloud-of-points map, you can then subtract all the stationary objects, which leaves you with only the moving objects,” which is quite enough to navigate by, he asserted. Another plus is that “only a small number of sensing points are needed to localize any moving object.”

But at some juncture, the ‘somewhere’ camp has to face this fact: All safety-critical maps must be updated in near-real time. “How to do that, I don’t know," he said.

In contrast, the 'everywhere' camp—Mobileye’s and the auto industry’s approach—aims to develop “partial self-driving capabilities that can be activated everywhere.”

Judging that automated-driving controls would need near-human-level perception capabilities, and to some extent even cognition, the ‘everywhere’ camp has pinned its hopes on “strong artificial intelligence and machine learning algorithms,” he said. Such a strategy is risky “because you’re not sure exactly how long it might take, and there are no guarantees of success.”

Despite the many recent successes of AI-based agents in tasks such as image recognition, strong AI is still hard to come by. So the industry is currently settling for limited AI capabilities and compensating “with very detailed maps.”

Crowd-source the routes

On the critical question of how to get sufficiently detailed, up-to-date maps at low cost, the professor offers a novel idea. Rather than wrestling with detailed cloud-of-points-type HD maps, Mobileye leverages driving-assist technology for crowd-sourcing. “We harvest the collective road experiences of many connected vehicles fitted with forward-looking cameras and other sensors, which send the collected data wirelessly via the cloud to update the HD map,” he explained.

Each time a vehicle equipped with a Mobileye EyeQ vision unit passes along a route, it collects critical road data that precisely define it, especially the positions of landmarks such as lane markings, road signs, lights, curbs, barriers and so forth. The data are then stored in a “road book.” Though these landmarks are comparatively thin on the ground, these “path delimiters” and “semantic signals” nonetheless enable the control system to continuously localize the vehicle’s position on the road to within 10 cm (less than 4 in), using multiple ‘triangulations’ in successive road scenes.
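To make that localization idea concrete, here is a minimal, purely illustrative Python sketch, not Mobileye's REM algorithm: it estimates a vehicle's 2-D position by least-squares fitting against ranges to a few mapped landmarks, the same basic 'triangulation' principle described above. The landmark coordinates, the noise level and the SciPy-based solver are all assumptions made for the example.

```python
# Illustrative only: localizing against a handful of mapped landmarks,
# the general idea behind positioning a car against a sparse "road book".
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical landmark coordinates taken from the map (meters, local frame).
landmarks = np.array([[0.0, 3.5],     # lane-marking corner
                      [25.0, 4.0],    # road sign
                      [40.0, -3.0]])  # curb reflector

true_pos = np.array([12.0, 0.4])      # ground truth, used only to simulate data
ranges = np.linalg.norm(landmarks - true_pos, axis=1)
ranges += rng.normal(0.0, 0.05, ranges.shape)   # ~5 cm measurement noise

def residuals(pos):
    """Difference between predicted and measured ranges to each landmark."""
    return np.linalg.norm(landmarks - pos, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([0.0, 0.0])).x
print(f"estimated position: {estimate}, "
      f"error: {np.linalg.norm(estimate - true_pos):.3f} m")
```

With three or more well-placed landmarks and centimeter-level measurements, the fit lands within a few centimeters of the true position, which is why a handful of signs and lane marks per scene can be enough.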

It currently takes nine ADAS passes for Mobileye’s Road Experience Management (REM) system to map out a safe path through any roadway in HD for the road book. And because REM, a collaboration with Delphi, needs to record comparatively few landmarks on the final detailed map, a rather ‘sparse’ data set of only 10 kilobytes per km is enough to reliably localize each car along its safe routes.

“The density of the data source—millions of connected vehicles on the road—is what makes this detailed, yet sparse HD map highly scalable and updatable at almost zero cost,” Shashua said.
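The scale difference Shashua describes can be checked with a quick back-of-the-envelope calculation. The sketch below is illustrative only: the roughly 6.6 million km figure for the U.S. public road network is approximate, and the 1 GB/km point-cloud density is an assumed stand-in for the "gigabytes per kilometer" cited earlier.

```python
# Back-of-the-envelope map-size comparison (illustrative figures only).
ROAD_KM = 6_600_000            # rough length of the U.S. public road network, km
dense_gb_per_km = 1.0          # assumed cloud-of-points map density
sparse_kb_per_km = 10.0        # sparse 'road book' density quoted in the article

dense_total_tb = ROAD_KM * dense_gb_per_km / 1_000
sparse_total_gb = ROAD_KM * sparse_kb_per_km / 1_000_000

print(f"dense point-cloud map: ~{dense_total_tb:,.0f} TB")   # thousands of TB
print(f"sparse road book:      ~{sparse_total_gb:,.0f} GB")  # tens of GB
```

At those assumed rates, a sparse road book covering an entire national network fits in tens of gigabytes, while the dense point-cloud equivalent runs to thousands of terabytes.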

Negotiate the road

The third and possibly most problematic 'pillar' of autonomous-car tech is what Shashua calls driving policy: emulating how human drivers negotiate not only the road but also with the other independent road users, so that traffic still flows smoothly and safely. In other words, how humans know what moves to make in any driving circumstance.

“This is the reason we take driving lessons,” he opined. “We do not take lessons in order to train our senses; we know how to see. We take lessons to understand how to merge in chaotic traffic and other such maneuvers. What we learn to do is negotiate with all the other independent agents on the road.”

The task is to help driverless cars learn, even understand, the unspoken rules that govern road behavior. “Our motions signal to the other road users our intentions and some of them are very, very complicated,” he noted. Further, traffic rules and driving norms change from place to place: In Boston, people drive differently than they drive in California, for example.

Mobileye is teaching ever-more-powerful ADAS processors to better negotiate the road, step by step, using AI neural networks to optimize performance and machine-learning algorithms that “learn by observing data instead of by programming.” Such technology “actually teaches the car to behave in a human way,” according to Shashua, by repetitively viewing various realistic simulations that his company's engineers film and then feed into the vehicle’s computer.
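As a toy illustration of "learning by observing data instead of by programming," the sketch below fits a tiny logistic-regression policy that imitates example merge decisions. Everything here, the features, the synthetic 'demonstrations' and the model itself, is invented for the example and is far simpler than any production driving-policy system.

```python
# Toy "learn from observed decisions" example: no hand-coded merge rule is
# given to the model; it only sees example (situation, decision) pairs.
import numpy as np

rng = np.random.default_rng(0)
gaps = rng.uniform(2, 40, 500)                # gap to the next car, meters
closing = rng.uniform(-5, 5, 500)             # closing speed, m/s
# Synthetic demonstrations to imitate: merge when the gap is generous
# relative to the closing speed.
labels = (gaps - 4.0 * closing > 15).astype(float)

feats = np.column_stack([gaps, closing])
mu, sigma = feats.mean(axis=0), feats.std(axis=0)
X = np.column_stack([(feats - mu) / sigma, np.ones(len(feats))])  # standardize + bias

w = np.zeros(3)
for _ in range(5000):                          # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted merge probability
    w -= 0.1 * X.T @ (p - labels) / len(labels)

query = (np.array([20.0, 1.0]) - mu) / sigma   # a 20 m gap, closing at 1 m/s
print("merge probability:", 1.0 / (1.0 + np.exp(-np.append(query, 1.0) @ w)))
```

The point of the toy is the workflow, not the model: the behavior comes out of the observed examples rather than an explicit if-then rule written by an engineer.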

“For the most part, the ingredients for autonomy exist,” he asserted. “At this point it is mostly a matter of engineering.”

By the end of 2017, around 40 modified BMW 7 Series sedans will be roaming U.S. and European roads as part of a global trial conducted by development partners Mobileye, BMW and Intel. This is the start of a five-year plan in which, in 2021, "we are going to launch thousands of vehicles that are autonomously driven—tens of thousands of vehicles that will be autonomously driven on highways and a few thousands of vehicles that will be autonomously driven inside cities,” Shashua said.