Processing Capability TOPs-Out to Expand Vehicle Automation

As vehicle sensor counts proliferate, determining how and where to process all the data is an emerging engineering issue.

Lidar sensors will include some processing capabilities. (Velodyne)

Sensors are the vehicle’s eyes and ears, but their inputs must be analyzed, processed and acted upon. Fusing complex data from several sensors and determining how those inputs should be used demands a hefty amount of computing power. Many, though not all, sensor modules possess little onboard intelligence, feeding information to zonal processing modules that perform some preprocessing before the data goes on to a centralized controller.

Andy Whydell, ZF’s VP of systems product planning. (ZF)

These powerful central controllers fuse data from cameras, radar and lidar, then decide how to assist the driver – or automatically brake, steer or accelerate. Meanwhile, Tier 1 suppliers are taking advantage of rapid increases in processing power. “We’ve gone from 30 TOPS (trillion operations per second) to 1,000 TOPS in four to five years,” said Andy Whydell, VP of systems product planning for ZF. “The industry is ramping up processing to handle all the inputs, but that needs to be balanced with power consumption.”
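
The data flow described above, with sensors feeding zonal modules that in turn feed a central fusion controller, can be sketched in simplified form. The class names and the trivial fusion rule below are hypothetical illustrations, not any supplier’s actual software:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorFrame:
    sensor_id: str   # e.g. "front_radar", "rear_camera" (illustrative)
    modality: str    # "camera", "radar" or "lidar"
    payload: list    # raw measurements (placeholder)

class ZonalModule:
    """A zone controller that lightly preprocesses local sensor data."""
    def __init__(self, zone: str):
        self.zone = zone

    def preprocess(self, frame: SensorFrame) -> SensorFrame:
        # Placeholder for timestamp alignment, filtering, compression --
        # the modest processing done before data heads to the center.
        return frame

class CentralController:
    """Fuses preprocessed frames and chooses a driving action."""
    def fuse_and_decide(self, frames: List[SensorFrame]) -> str:
        # Trivial stand-in for sensor fusion: brake if any modality
        # reports an obstacle in its payload.
        if any("obstacle" in f.payload for f in frames):
            return "brake"
        return "maintain"

# Wiring: sensors feed zones, zones feed the central controller.
zones = {"front": ZonalModule("front"), "rear": ZonalModule("rear")}
raw = [SensorFrame("front_radar", "radar", ["obstacle"]),
       SensorFrame("rear_camera", "camera", [])]
frames = [zones["front" if "front" in f.sensor_id else "rear"].preprocess(f)
          for f in raw]
print(CentralController().fuse_and_decide(frames))  # -> "brake"
```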

Processor performance gains can be combined with sensors that are themselves leveraging advances in semiconductor production. Design teams use multiple sensor types to blend the strengths of different technologies, then exploit powerful controllers to run multiple analysis tools. It’s a potent combination.

“By putting cameras and radar together to check each other, it’s amazing what the processors can do with the extra information, especially if you have enough processing power to run different algorithms in parallel,” said Martin Duncan, ADAS division general manager at STMicroelectronics.
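
One way to picture cameras and radar “checking each other” is two detection algorithms running in parallel, with agreement between modalities raising confidence. The sketch below is illustrative only; the stub detectors and the set-based comparison stand in for real perception pipelines:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in detectors; real systems would run neural-network and
# signal-processing pipelines on camera and radar data respectively.
def camera_detect(frame):
    return {"pedestrian"} if "ped_pixels" in frame else set()

def radar_detect(frame):
    return {"pedestrian"} if "ped_echo" in frame else set()

def cross_checked_detections(camera_frame, radar_frame):
    """Run both algorithms in parallel, then compare their outputs."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        cam_future = pool.submit(camera_detect, camera_frame)
        rad_future = pool.submit(radar_detect, radar_frame)
        cam, rad = cam_future.result(), rad_future.result()
    confirmed = cam & rad    # both modalities agree: high confidence
    unconfirmed = cam ^ rad  # only one modality saw it: flag for review
    return confirmed, unconfirmed

print(cross_checked_detections(["ped_pixels"], ["ped_echo"]))
# -> ({'pedestrian'}, set())
```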

Processing capabilities have grown more than thirtyfold in recent years. (ZF)

Stripping intelligence from sensors brings significant benefits. Sensor counts are rising, with plans calling for 30 or so on a highly automated vehicle. Cutting power demand and shrinking size are important factors for hiding sensors. “Removing some processing power means sensors are smaller and power consumption is less,” Whydell said. “Smaller packages make it easier to integrate sensors into lighting structure or the A- and B-pillars.”

However, some sensor modules will include an integrated microprocessor. This “distributed processing” model reduces bandwidth requirements while also reducing the workload of the vehicle’s central controller. Some lidar packages include processors to analyze light that bounces back to the sensor, sending the main controller more pertinent data.

“We look to subsume more functions, doing some first-level computing,” said Anand Gopalan, Velodyne’s CEO. “The camera guys talk about using GPUs (graphics processing units), lidar does more at the ‘edge,’ so systems can get away with using low-cost FPGAs (field programmable gate arrays) or low-cost CPUs.”
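
In rough terms, such edge processing might amount to a range-and-intensity gate applied on the lidar module itself, so only pertinent returns cross the link to the central controller. The thresholds and field names here are invented for illustration, not drawn from any actual lidar design:

```python
# Hypothetical edge filter running on a lidar module's own processor.
def edge_filter(returns, max_range_m=120.0, min_intensity=0.15):
    """Keep only returns strong and near enough to matter, so the
    link to the central controller carries pertinent data only."""
    return [r for r in returns
            if r["range_m"] <= max_range_m and r["intensity"] >= min_intensity]

raw_returns = [
    {"range_m": 35.0,  "intensity": 0.80},  # nearby, strong: keep
    {"range_m": 250.0, "intensity": 0.60},  # beyond range gate: drop
    {"range_m": 40.0,  "intensity": 0.05},  # likely noise: drop
]
to_controller = edge_filter(raw_returns)
print(f"{len(to_controller)} of {len(raw_returns)} returns forwarded")
# -> "1 of 3 returns forwarded"
```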

Artificial intelligence will play a role when processors analyze sensor inputs; AI takes a fair amount of computing power, but its benefits far outweigh the processing demand. Determining what’s being seen by an array of sensors can be a confusing task for machines that only know what’s been written into software. “AI can help cameras deal with something they’re not trained to recognize,” Whydell said. “A European OEM [during on-road testing] came across a kangaroo that was sometimes on the ground, sometimes in the air, so the system couldn’t tell whether it was a bird or not.”
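
The kangaroo anecdote points to a common safeguard in such systems: when classification confidence is low, fall back to treating the track as a generic obstacle rather than committing to a wrong label. A minimal sketch, with a stub classifier and an invented confidence threshold:

```python
# Illustrative handling of objects a perception network wasn't trained on.
def classify(detection):
    # A real network would output (label, confidence); a kangaroo-like
    # object might score low on every trained class.
    return detection.get("best_label", "unknown"), detection.get("confidence", 0.0)

def plan_response(detection, min_confidence=0.7):
    label, conf = classify(detection)
    if conf < min_confidence:
        # Don't force a wrong class (bird vs. ground animal); treat the
        # track as a generic dynamic obstacle and keep clear of it.
        return "treat_as_generic_obstacle"
    return f"handle_as_{label}"

print(plan_response({"best_label": "bird", "confidence": 0.35}))
# -> "treat_as_generic_obstacle"
```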