The Building Blocks of Autonomous Tech

Sensors, processors, architecture and communications trends for the self-driving future.

In combination with multiple cameras, radars and other sensors, high-resolution 3D flash lidar will be a key element in sensor suites that provide a detailed 360° field of vision around the vehicle. (Image: Continental)

Mimicking the many things humans do while driving requires a complex blend of technologies. An array of several sensors per vehicle is needed to monitor a 360° field of view around the car. Fast networks are required to carry that data to the electronic controls that analyze the inputs and make decisions about steering, braking and speed.

OEMs, Tier 1s and other suppliers are vying with, and in some cases partnering with or acquiring, a relentless wave of start-ups and industry disruptors, including Apple and Google, as they race to develop tomorrow’s mobility solutions. The keys to winning reside in the following technology areas:

Processing power

The processors that analyze sensor data and make steering, braking and speed decisions will undergo major changes. Today’s safety systems use a range of conventional multicore processors from companies like NXP, Infineon, Renesas, STMicroelectronics and Intel. But the extreme challenges associated with autonomy will require a range of processing technologies.

GM purchased Strobe Inc. for its super-compact frequency-modulated (FM) lidar technology (prototype shown with a Sharpie for scale) that enables faster data processing than time-of-flight lidars. GM’s Cruise Automation subsidiary is aiming to reduce solid-state lidar costs to under $100 per unit. (Image: GM Cruise)

Nvidia’s highly parallel graphics processing units, each with thousands of small processing cores, burst onto the automotive scene in recent years. GPUs excel at processing many tasks simultaneously, such as analyzing the streams of pixels coming in from sensors. Nvidia’s latest multi-chip platform for SAE Level 3 through 5 driving, code-named Pegasus, is the size of a car license plate and delivers data-center-class processing power of up to 320 trillion operations per second.
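To see why that parallelism matters, consider a per-pixel operation applied across an entire frame. A minimal Python/NumPy sketch (the frame size and threshold value are hypothetical) contrasts the sequential, one-pixel-at-a-time style of a CPU core with the all-pixels-at-once style that GPUs accelerate:

```python
import numpy as np

# A simulated 8-bit grayscale frame from a 2-Mpixel camera.
frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)

# Sequential style: visit each pixel in turn (what a single CPU core does).
def threshold_loop(img, level=128):
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = 255 if img[r, c] > level else 0
    return out

# Data-parallel style: one operation over all ~2 million pixels at once,
# the pattern GPUs speed up by spreading pixels across thousands of cores.
def threshold_parallel(img, level=128):
    return np.where(img > level, 255, 0).astype(np.uint8)

# Both produce identical results; only the execution pattern differs.
assert np.array_equal(threshold_loop(frame), threshold_parallel(frame))
```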

Mobileye, now owned by Intel, has also developed a dedicated image processor. Specialized parallel devices can be built using field-programmable gate arrays (FPGAs) from Xilinx, Intel (formerly Altera), Lattice Semiconductor and Microsemi. FPGAs let designers create chips that are optimized for a given task. Mainstream suppliers like NXP and Renesas have licensed programmable technology from Tensilica that is being expanded from infotainment to safety systems.

Conventional CPUs won’t disappear. They are best at the sequential processing used to make decisions, and they’ll help fuse sensor data after it has been processed by dedicated processors housed on sensors or in the ECU.

Most systems today dedicate a processor to each function, such as lane-departure warning or adaptive cruise control. It’s likely that a large central control unit will eventually collect all relevant data and decide how to navigate. That will push demand for more cores, faster clock rates and lower power budgets, especially in EVs.

Exponential growth in automotive on-board cameras has driven production efficiencies and greater scale. Shown are a quartet of image sensor boards at Magna’s Holly, Michigan, camera plant. (Image: Lindsay Brooke)

Radar and cameras

Humans need only two eyes and some mirrors to drive safely, but autonomous vehicles will need as many as 30 sensors to match the performance of an attentive person. Many of them will face forward, working together to identify objects and monitor roadways. Additional sensors will provide a 360-degree view of the vehicle’s surroundings.

The radar component market is now dominated by established chipmakers like NXP and STMicroelectronics, which are pairing microcontrollers with radar devices. A similar integration path is being pursued by partners Analog Devices and Renesas, as well as Imec and Infineon. For example, an IHS Markit teardown of the 77-GHz radar sensor supplied by Delphi for the 2015 Volvo XC90 revealed Infineon’s receiver and transmitter and a TI applications processor on the unit’s bill of materials. Infineon’s SiGe HBT-based receiver and transmitter are also used in Delphi’s pioneering RaCam radar + vision sensing technology.

A few start-ups, including Oculii, Omniradar and Artsys360, are attempting to gain a foothold in radar. Most Tier 1s, such as Bosch, ZF and Delphi, already employ radar in their safety systems.

The industry is moving steadily toward modular and scalable multi-domain controllers to manage increasingly complex sensor inputs and processing. (Image: Continental)

Many of these companies also develop camera technologies, a field in which Intel’s Mobileye has made a major impact; its EyeQ3 video processor is also part of the Delphi RaCam. Magna International, Bosch, Valeo, Continental and Denso are among the suppliers that focus on vision systems. Chinese suppliers like Stonkam are making a major push into cameras, as are Panasonic Automotive Systems and airbag supplier Autoliv.

Another growing camera application is driver-alertness monitoring and cabin sensing, vital to safe SAE L2 through L4 vehicle operation, as demonstrated by GM’s Cadillac Super Cruise system. Tech collaborations are core to this emerging supply area; Denso has partnered with Xperi Corp., a Silicon Valley-based tech company whose FotoNation group specializes in image-recognition technologies. Since 2014, Denso has supplied a Driver Status Monitor for over-the-road trucks and large tour buses. The system uses a cabin camera to capture images of the driver and computer-vision software that tracks the driver’s face angle to gauge drowsiness.

Cameras and radar currently work together to identify objects, and lidar will likely be added once solid-state devices meet automotive requirements. Multiple overlapping sensors can identify objects reliably without triggering false alerts. Multiple radars are being used to provide 3D capabilities, and the need for precise 3D imaging is behind the shift to 77-GHz radar, which offers more bandwidth than 24-GHz devices. Advanced techniques for sending and receiving signals are helping radar identify objects rather than simply report range.
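One way to picture that sensor overlap is a simple agreement check: an object must be confirmed by more than one independent sensor before an alert fires. The sketch below is purely illustrative; the function name, confidence scores and thresholds are hypothetical, not any supplier’s actual fusion logic.

```python
# Hypothetical fused-detection check: an object must be confirmed by at
# least two independent sensor types before the system raises an alert.
def confirmed_detection(camera_conf, radar_conf, lidar_conf=None,
                        threshold=0.7, min_agreeing=2):
    """Return True if enough sensors independently exceed the
    confidence threshold for the same tracked object."""
    scores = [camera_conf, radar_conf]
    if lidar_conf is not None:
        scores.append(lidar_conf)
    agreeing = sum(1 for s in scores if s >= threshold)
    return agreeing >= min_agreeing

# A camera ghost (high camera score, no radar return) is suppressed:
print(confirmed_detection(camera_conf=0.9, radar_conf=0.1))   # False
# Camera and radar agree, so the alert fires:
print(confirmed_detection(camera_conf=0.9, radar_conf=0.8))   # True
```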

Delphi V2V controller as used by Cadillac. (Image: Lindsay Brooke)

Increasing resolution is a key factor for cameras. Higher resolution adds clarity and extends range, letting systems identify far-away objects. Resolutions of 8-10 Mpixels will become common, displacing today’s 2-4 Mpixel cameras.
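Back-of-envelope arithmetic shows why more pixels translate into longer identification range. In this Python sketch, the field of view, object size and range are all assumed values chosen for illustration:

```python
import math

def pixels_on_target(h_pixels, hfov_deg, obj_width_m, range_m):
    """Approximate horizontal pixels landing on an object of the given
    width at the given range, for a camera with the given horizontal
    resolution and field of view (small-angle approximation)."""
    deg_per_pixel = hfov_deg / h_pixels
    obj_angle_deg = math.degrees(obj_width_m / range_m)
    return obj_angle_deg / deg_per_pixel

# Hypothetical 50-degree-FOV forward cameras viewing a 0.5-m-wide object
# at 150 m. Doubling linear resolution doubles the pixels on target.
for h_res, label in [(1920, "2-Mpixel"), (3840, "8-Mpixel")]:
    px = pixels_on_target(h_res, 50.0, 0.5, 150.0)
    print(f"{label}: ~{px:.1f} pixels across the object")
# Prints ~7.3 pixels for the 2-Mpixel camera, ~14.7 for the 8-Mpixel.
```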

As more sensors are added, the volume of data they send to controllers is rising sharply. Integrating processors into the sensors themselves is one solution, but not everyone agrees it’s the right one.

“Adding a processor in the sensor induces latency and adds to the bill of materials,” said Glenn Perry, General Manager of the Mentor Graphics Embedded Systems division. “When you have all the lidar, radar, cameras needed for SAE Level 5, I’m not sure this works. It will be expensive and consume an extraordinary amount of compute power.”

Lidar

Many automotive engineering managers consider Light Detection and Ranging (lidar) sensing, which pairs a laser with a camera-like detector, a necessity for SAE L5 driving. Operating much like radar, a lidar sends laser light out to objects and captures the reflections that bounce back. Distance is measured from the round-trip time of each return, augmenting the camera data.
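The underlying arithmetic is straightforward: the pulse travels to the target and back, so range is the speed of light multiplied by half the round-trip time. A minimal sketch (the example timing value is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """Distance from a time-of-flight return: the pulse travels out
    and back, so divide the round-trip time by two."""
    return C * round_trip_s / 2.0

# A return arriving 1.33 microseconds after the pulse left corresponds
# to a target roughly 200 m away.
print(f"{tof_distance_m(1.33e-6):.1f} m")  # ~199.4 m
```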

Various smaller companies now make electromechanical lidar systems, but nearly all are developing solid-state lidar, which is needed to meet automotive reliability requirements. The auto industry’s interest is backed with hefty funding. Delphi, Ford and ZF invested in Innoviz Technologies, Velodyne and Ibeo, respectively. Quanergy’s investors include Delphi and Daimler. Continental acquired Advanced Scientific Concepts; Analog Devices Inc. acquired Vescent Photonics. And in October 2017, General Motors’ autonomous-tech subsidiary Cruise Automation purchased Strobe, a tiny firm that had been quietly developing next-generation lidar sensors.

AI- and camera-based driver monitoring systems are vital for SAE Levels 2 through 4 operation. (Image: Continental)

Cruise boss Kyle Vogt wrote in a blog post that collapsing the entire sensor down to a single chip will enable his engineers to reduce the cost of each vehicle lidar “by 99%.”

Some lidar chips are ready, but they’re mostly for short-range applications like lane departure. Most research focuses on automotive-grade solid-state devices that combine 200- to 300-m (656 to 984 ft) range with high resolution. That combination will let systems identify objects before they get dangerously close to the car.

“Resolution is key,” said Anand Gopalan, CTO at Velodyne Lidar. “If you’re trying to see something small like tire debris that is far away, you want enough laser shots to hit it so you can recognize it and take evasive action.”
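Gopalan’s point can be put in rough numbers: for a given angular resolution, the count of laser returns striking a small object falls off linearly with range. The resolution and debris size in this sketch are hypothetical:

```python
import math

def returns_on_object(obj_width_m, range_m, h_res_deg):
    """Approximate horizontal laser returns striking an object of the
    given width at the given range, for a lidar with the given
    horizontal angular resolution (small-angle approximation)."""
    obj_angle_deg = math.degrees(obj_width_m / range_m)
    return obj_angle_deg / h_res_deg

# A hypothetical 0.1-degree-resolution lidar viewing 0.3-m-wide debris:
for rng in (50, 100, 200):
    print(f"{rng} m: ~{returns_on_object(0.3, rng, 0.1):.1f} returns")
# ~3.4 returns at 50 m, ~1.7 at 100 m, under 1 at 200 m, which is why
# finer resolution is needed to recognize small debris at long range.
```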

Architecture

The architecture is the foundation that supports all the pieces that make up driverless cars. It encompasses many elements: how sensor data is collected and fused into a single view of the surroundings, how data is shared throughout the vehicle, and how decisions are made and cross-checked, to name a few.

Software will play a huge role as electronic controls determine actions in response to what’s happening around the car. Hardware must be powerful enough to complete computations in time to avoid accidents. Today, hardware and software are often provided by the same supplier, though the Automotive Open System Architecture (AUTOSAR) standard has enabled some separation between the two. That trend may accelerate as carmakers search for steadily better autonomous software.

“Most people now understand that hardware and software should be abstracted from each other,” said Karl-Heinz Glander, Chief Engineering Manager for ZF’s Automated Driving Team. “This makes it easier to bring third-party software in. OEMs can benefit from working with pure software companies like Apple or Google or with companies like Nvidia or Mobileye that put algorithms on their chips.”

One critical architectural question is whether processing power is centralized in a powerful ECU or distributed. Many ADAS systems place processors in smart sensors to process raw data before sending it to the controller. This pre-processing trims the central controller’s workload while also reducing the amount of data sent over vehicle networks. Some system architects think it’s more efficient to send raw sensor data to powerful centralized ECUs, eliminating the processors in sensors. Some OEMs may opt for a mixture of “smart” and “simple” sensors.
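The network side of that trade-off is easy to estimate. In this back-of-envelope comparison, all figures (camera resolution, bit depth, frame rate, object-list size) are assumptions chosen for illustration:

```python
# Raw stream: an uncompressed 8-Mpixel camera, 12 bits/pixel, 30 frames/s.
raw_bits_per_s = 8_000_000 * 12 * 30            # ~2.9 Gbit/s per camera

# Pre-processed stream: a smart sensor sends an object list instead,
# say 64 tracked objects x 256 bits each, 30 times per second.
object_list_bits_per_s = 64 * 256 * 30          # ~0.5 Mbit/s

print(f"raw video:   {raw_bits_per_s / 1e9:.1f} Gbit/s")
print(f"object list: {object_list_bits_per_s / 1e6:.2f} Mbit/s")
print(f"reduction:   ~{raw_bits_per_s / object_list_bits_per_s:,.0f}x")
```

Under these assumptions, pre-processing cuts per-sensor network traffic by more than three orders of magnitude, which is exactly the load a centralized-ECU architecture must absorb in raw form.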

It’s nearly impossible to hand-write software that responds correctly to the unlimited range of situations autonomous vehicles will encounter on roadways. That’s sparking huge investment in artificial-intelligence programs that ‘learn’ as vehicles are driven. AI is already driving big advances in voice recognition and image analysis.

Communications

If vehicles can share information, they can learn about events their on-board sensors can’t detect, such as emergency braking by a car hidden behind a tractor-trailer rig. Vehicle-to-infrastructure communication with roadside beacons can also aid safety and traffic flow. This information can be a treasure trove for autonomous vehicles. However, deployment of the two technologies, often jointly called V2X, is still in question.
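To make the idea concrete, here is a minimal, hypothetical vehicle-to-vehicle message in Python. Production systems use the standardized SAE J2735 Basic Safety Message, which carries many more fields plus security credentials; every field name below is an assumption for illustration only.

```python
from dataclasses import dataclass

# A minimal, hypothetical V2V safety message (not the SAE J2735 format).
@dataclass
class SafetyMessage:
    vehicle_id: str        # temporary, rotating identifier
    latitude: float        # degrees
    longitude: float       # degrees
    speed_mps: float       # meters per second
    heading_deg: float     # degrees clockwise from north
    hard_braking: bool     # event flag other vehicles can act on

# A following vehicle can react to a braking event it cannot yet see.
msg = SafetyMessage("tmp-4f2a", 42.3314, -83.0458, 27.0, 95.0, True)
if msg.hard_braking:
    print(f"Hard-braking alert from vehicle {msg.vehicle_id} ahead")
```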

NHTSA and automakers have worked for years to create standard dedicated short-range communication (DSRC) technologies to facilitate this communication, but there’s no implementation mandate yet. Many are skeptical that DSRC will be deployed unless it’s required. Cadillac has deployed V2V, but no other OEM has followed suit.

While regulators mull factors like security, automotive and cellular suppliers are devising V2X communications that use 5G cellular technology. 5G’s arrival date and real-world performance are still in question, but next-generation cellular modems will probably appear on most cars once 5G pricing comes down. Cellular bandwidth may let these modems handle many aspects of V2X, such as black-ice notifications that don’t require real-time performance.

For warnings that might cause autonomous cars to steer or brake, DSRC’s low latency is critical. On-highway tests have proven DSRC’s performance with large numbers of vehicles, so its rollout could be quick once NHTSA moves or a couple of major proponents agree to start using it on passenger cars or commercial trucks. While some 5G proponents believe cellular can displace DSRC, many expect the two technologies to share roles.
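Why latency is the deciding factor for steer-or-brake warnings comes down to simple arithmetic: the distance a vehicle covers while a message is in flight. The speed and latency values in this sketch are illustrative assumptions, not measured figures for either technology:

```python
def distance_during_latency_m(speed_kph, latency_ms):
    """Meters a vehicle travels while a V2X message is in transit."""
    return (speed_kph / 3.6) * (latency_ms / 1000.0)

# At highway speed, added milliseconds of delay translate directly
# into meters of lost reaction distance.
for latency in (20, 100, 500):
    d = distance_during_latency_m(110, latency)
    print(f"{latency:>3} ms at 110 km/h -> {d:.1f} m traveled")
# 20 ms costs ~0.6 m; 500 ms costs ~15 m of reaction distance.
```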

“The industry should call for collaboration and how the two technologies should co-exist without sacrificing safety scenarios,” said Raed Shatara, Senior Principal Engineer, Business Development, at STMicroelectronics. “FM did not replace AM. HD did not replace AM or FM. Satellite radio did not replace AM, FM or HD. They all co-exist in vehicles today.”

Whichever technology is used, these benefits may be slow to come to fruition. V2X communications rely on being able to exchange messages with many vehicles, and it will take years for V2X-equipped vehicles to displace older cars. Aftermarket systems can’t offer many safety features, since it’s difficult to confirm that messages come from a legitimate vehicle rather than a hacker.