Imaging Radar Is the Next Big Thing

The technology is designed to bridge the gap between current-gen radar and lidar – while retaining the traditional advantages of RF-based sensors.

Imaging radar is expected to be a key element in improving the capabilities of driver-assist features. (Oculii)

Radar is nothing new to the automotive industry. Vehicles have been equipped with radar for adaptive cruise control (ACC) since the 1990s, and many current models carry as many as five radar sensors. While radar is a great way to measure the distance and closing speed to other vehicles, current-generation sensors offer woefully low resolution. That’s why imaging radar is set to come on strong over the next several years.

Imaging radar, also referred to as high-definition or high-resolution radar, is designed to bridge some of the gap between current-generation radar and lidar – while retaining the traditional advantages of radio-frequency-based sensors. Like lidar, radar is an active sensor that emits a signal and listens for reflections. Because the sensor is the source of the signal and the propagation speed is known, the distance to the reflecting object can be measured precisely. Radar signals also are largely unaffected by fog, rain, snow or adverse lighting conditions.
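
As a rough illustration of the physics involved – not any particular sensor's processing chain – the sketch below shows the time-of-flight and Doppler relationships that give radar its range and closing-speed measurements. The numbers are illustrative only.

```python
# Minimal sketch of the time-of-flight and Doppler relationships behind radar
# range and speed measurement. Values below are illustrative, not from any
# specific sensor.

C = 299_792_458.0  # speed of light, m/s

def range_from_delay(round_trip_s: float) -> float:
    """Range = c * t / 2, since the signal travels out and back."""
    return C * round_trip_s / 2.0

def radial_speed_from_doppler(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Closing speed = c * f_d / (2 * f_c) for a monostatic radar."""
    return C * doppler_shift_hz / (2.0 * carrier_hz)

if __name__ == "__main__":
    # A 1-microsecond round trip corresponds to roughly 150 m of range.
    print(f"range: {range_from_delay(1e-6):.1f} m")
    # A 5 kHz Doppler shift at a 77 GHz carrier is roughly 9.7 m/s (~35 km/h).
    print(f"closing speed: {radial_speed_from_doppler(5e3, 77e9):.1f} m/s")
```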

Passive cameras collect only ambient light and, in the single-camera configuration common in vehicles, have no reference point for measuring distance and speed. Multi-camera systems, however, can use geometry to calculate precise range.
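
To make the multi-camera point concrete, here is a brief sketch of the textbook stereo-triangulation relationship such a system can use. The focal length, baseline and disparity values are illustrative, not taken from any production system.

```python
# Classic stereo triangulation: depth = focal_length * baseline / disparity.
# Numbers are illustrative only.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth to a point seen by two rectified cameras, in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    # e.g. 1,200-pixel focal length, 30 cm baseline, 6-pixel disparity -> 60 m
    print(f"depth: {stereo_depth(1200.0, 0.30, 6.0):.1f} m")
```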

Today’s typical automotive radars have three transmitters and four receivers, yielding 12 virtual channels, usually in a single plane. That’s why they have trouble distinguishing between a vehicle stopped on the shoulder and a sign or overpass. Thus, most current ACC systems ignore stationary objects when traveling faster than about 40 mph (64 km/h) and look only for movement in the same or adjacent lane. They are fine for ACC and blind-spot monitoring, but inadequate for higher levels of driving automation.
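
The channel counts quoted here and in the next paragraph follow the basic MIMO radar relationship, in which every transmitter-receiver pairing contributes one virtual channel. A minimal sketch of that arithmetic:

```python
# In a MIMO radar, each transmitter/receiver pairing acts as one virtual
# channel, so the virtual array size is simply Tx * Rx.

def virtual_channels(tx: int, rx: int) -> int:
    return tx * rx

if __name__ == "__main__":
    print(virtual_channels(3, 4))    # today's typical sensor: 12 channels
    print(virtual_channels(12, 16))  # first imaging radars: 192 channels
```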

The first imaging radars coming to market over the next two years from companies including Continental, Magna/Uhnder and Oculii use the equivalent of 12 transmitters and 16 receivers to produce 192 virtual channels. The result looks approximately like a lidar point cloud, with returns across a broader field of view. That allows the software to distinguish among the overpass, the road below and a vehicle in the gap between them. Other imaging-radar developers, including Arbe and Mobileye, are promising sensors with roughly 2,000 virtual channels.
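
For a sense of what "looks like a lidar point cloud" means in practice, a single imaging-radar return is typically described by range, azimuth, elevation and Doppler velocity, often with a reflectivity value. The record below is a sketch of that idea; the field names and numbers are hypothetical, not any vendor's data format.

```python
# Illustrative record for one imaging-radar detection; an imaging radar reports
# hundreds to thousands of these per frame, forming a sparse point cloud.
# Field names are hypothetical, not a vendor API.
import math
from dataclasses import dataclass

@dataclass
class RadarDetection:
    range_m: float        # distance to the reflector
    azimuth_deg: float    # horizontal angle
    elevation_deg: float  # vertical angle (what separates an overpass from a car)
    doppler_mps: float    # radial velocity; negative = closing
    rcs_dbsm: float       # radar cross-section, a rough reflectivity measure

    def to_xyz(self) -> tuple[float, float, float]:
        """Convert the polar measurement to sensor-frame Cartesian coordinates."""
        az, el = math.radians(self.azimuth_deg), math.radians(self.elevation_deg)
        x = self.range_m * math.cos(el) * math.cos(az)  # forward
        y = self.range_m * math.cos(el) * math.sin(az)  # left/right
        z = self.range_m * math.sin(el)                 # up/down
        return x, y, z

if __name__ == "__main__":
    overpass = RadarDetection(120.0, 0.0, 3.0, 0.0, 15.0)
    stopped_car = RadarDetection(118.0, 0.5, 0.0, -30.0, 10.0)
    print(overpass.to_xyz())     # high z: structure above the road
    print(stopped_car.to_xyz())  # near-zero z: object on the road itself
```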

Just as with lidar, there are multiple flavors of imaging radar. Continental’s ARS540 is an analog sensor like other current units, but with an increased number of transmitters and receivers. The Magna Icon sensor developed with Uhnder is a fully digital sensor on a chip with programmable scan patterns. Like lidar sensors from AEye and Luminar, it is not limited to emitting a fixed grid of pulses.

Instead, a denser pattern can be formed in areas of interest – such as near the horizon – with a sparser pattern toward the periphery, whether off to the sides, close in or higher up. Uhnder claims its sensor can detect a pedestrian at 300 m (984 ft.) and a tire lying on the road at 200 m (656 ft.), with significantly lower power consumption than analog sensors – and at a cost comparable to current low-resolution sensors. Oculii takes a software-based approach, applying adaptive phase modulation to current-generation hardware. This enables its imaging radar to produce 192 virtual channels.
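
As a rough illustration of what a programmable scan pattern can amount to on the software side – the structure and numbers below are hypothetical, not Uhnder’s or Oculii’s actual interface – the idea is simply to budget more beam density to regions of interest and less to the periphery.

```python
# Hypothetical configuration for an adaptive scan pattern: denser sampling near
# the horizon straight ahead, sparser toward the periphery. Not a real vendor API.
from dataclasses import dataclass

@dataclass
class ScanRegion:
    name: str
    azimuth_deg: tuple[float, float]    # horizontal extent of the region
    elevation_deg: tuple[float, float]  # vertical extent of the region
    beam_spacing_deg: float             # smaller spacing = denser pattern

def beams_in(region: ScanRegion) -> int:
    """Rough count of beams needed to cover the region at its spacing."""
    az_span = region.azimuth_deg[1] - region.azimuth_deg[0]
    el_span = region.elevation_deg[1] - region.elevation_deg[0]
    return (max(1, round(az_span / region.beam_spacing_deg))
            * max(1, round(el_span / region.beam_spacing_deg)))

PATTERN = [
    ScanRegion("horizon, straight ahead", (-10, 10), (-2, 2), 0.5),    # dense
    ScanRegion("periphery, left/right",   (-60, 60), (-10, 10), 2.0),  # sparse
]

if __name__ == "__main__":
    for region in PATTERN:
        print(region.name, beams_in(region))
```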

Guidehouse Insights projects that the annual market for imaging radar will grow to more than 128 million units by 2030. These sensors are expected to be a key element in improving the capabilities of driver-assist features as the industry and regulators globally strive to reduce the crashes and fatalities that have spiked in recent years.