Sensing Enters a New Era
NPS’s advanced information theory enables AVs to ‘see’ everything sooner, clearer and farther ahead.
More than 1.3 million people die on the world’s roadways each year and another 50 million are injured. While roadway safety has improved over time, all countries face formidable challenges in reducing the toll further. To accelerate progress, Vision Zero and the U.S. National Roadway Safety Strategy have made the objective not just improved safety, but zero roadway deaths.
This article answers three questions:
How well must vehicles ‘see’ to eliminate preventable roadway deaths?
What is required to see this well?
How can these requirements be met?
We conclude that to eliminate preventable roadway deaths under the worst conditions, vehicles must sense (sample) and process information about the environment roughly a billion times faster than humans can. We also conclude that an advanced sensor system can see this well and that such a system can be commercialized within the next two years.
How well must vehicles see?
Accidents can be prevented when vehicles can avoid hazards by safely stopping or swerving. A vehicle suddenly crossing into oncoming traffic or a boulder suddenly falling in front of a car usually can’t be avoided. But whenever a vehicle obeying traffic laws has enough time and distance to safely stop or swerve, the accident is preventable and can be eliminated.
Our premise is that seeing sooner, clearer and farther under all roadway conditions will accelerate progress toward zero roadway deaths. Seeing sooner reduces sensing time so perception can start earlier; seeing clearer improves perception reliability so braking or swerving can start earlier; and seeing farther allows sensing to start earlier.
To eliminate preventable roadway deaths, vehicles must sense things that humans can’t. Advances in radar, lidar, processors and analytics are allowing vehicles to sense up ahead and all around far better than humans can see. This helps eliminate preventable accidents in the worst driving conditions and allows smoother braking in less severe conditions. Worst conditions include:
rain, snow, ice, fog and darkness
maximum allowable speeds for a given road design
curves, slopes, buildings and canyons that can obstruct vision
worst-case braking performance relative to vehicle loads, and worst-case tire performance relative to temperature, pressure and surface friction.
We used principles from information theory and physics to determine the data rate (bits/sec) required to reconstruct worst-case scenes with sufficient precision (fidelity) and frequency (frames/sec) such that preventable accidents do not occur.1
To illustrate, consider the scene faced by a heavy-duty truck operating on a highway in snow at night (Fig. 1). Reconstructing this scene with sufficient precision requires segmenting the short-range space around the truck and the long-range space up ahead into small “cube-like” building blocks called voxels. Greater precision can be attained by interrogating larger numbers of smaller voxels.
To safely stop/swerve under the worst conditions, trucking company operators and autonomous truck developers believe they need to see about 250 meters (820 ft.) around trucks and 1,000 meters (3,281 ft.) up ahead with a long-range field of view (FoV) of 30 degrees. Using these requirements, about 3 billion voxels need to be probed by sensors to reconstruct the Fig. 1 scene. Voxels at 1,000 meters are about 0.6 x 0.6 x 0.6 m (2 x 2 x 2 ft.) and require orders of magnitude more sampling than voxels at 100 meters (328 ft.). For zero preventable roadway deaths, high precision is required, which means very detailed scenes must be updated rapidly.
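To make the scale concrete, the back-of-envelope sketch below estimates the long-range voxel count. Only the 1,000 m range, 30° FoV and ~0.6 m voxel size at maximum range come from this article; the fixed-angular-resolution voxel model, the square FoV and the constant range-bin size are our simplifying assumptions, and the short-range coverage around the truck accounts for the remainder of the ~3 billion total.

```python
import math

# Rough long-range voxel count, assuming voxels are defined at a fixed
# angular resolution so their linear size grows with range (our model).
MAX_RANGE_M = 1000.0   # long-range sensing distance (from the article)
FOV_DEG = 30.0         # long-range FoV (article); assumed square
VOXEL_AT_MAX_M = 0.6   # ~2 ft voxel edge at 1,000 m (article)

ang_res_rad = VOXEL_AT_MAX_M / MAX_RANGE_M      # ~0.6 milliradian
fov_rad = math.radians(FOV_DEG)
angular_cells = (fov_rad / ang_res_rad) ** 2    # azimuth x elevation cells
range_bins = MAX_RANGE_M / VOXEL_AT_MAX_M       # assumed constant bin size
total_voxels = angular_cells * range_bins

print(f"long-range voxels: {total_voxels:.1e}")  # ~1.3e9; short-range
# coverage supplies the rest of the ~3 billion voxels cited above.
```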
Scene data are obtained by converting continuous analog signals (physical measurements) into time-separated digital streams of 0s and 1s (information). When radar and lidar sensors interrogate the raw data in voxels, they must reliably detect targets while avoiding false alarms. Because environments are noisy, sensors must probe each voxel numerous times to attain over 90% detection reliability with fewer than one-in-a-million false alarms. To eliminate preventable accidents, this performance must be attained at the weakest signal-to-noise ratio (worst conditions) at 1,000 meters.
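The sketch below illustrates why repeated probing is needed, using a deliberately simplified model of our own (not the article’s analysis): each probe is an independent look combined with an OR rule, so detection probability compounds across looks while the false-alarm budget must be split among them. The single-look detection probability is an assumed weak-signal value.

```python
import math

# OR-rule combining of n independent looks at one voxel:
#   P_D  = 1 - (1 - p_det)**n           (overall detection)
#   P_FA = 1 - (1 - p_fa)**n ~ n*p_fa   (overall false alarm, small p_fa)
P_D_REQUIRED = 0.90    # >90% detection reliability (article)
P_FA_BUDGET = 1e-6     # under one-in-a-million false alarms (article)
p_det_single = 0.20    # assumed single-look detection prob. at worst SNR

# Looks needed to drive the miss probability below 10%:
n_looks = math.ceil(math.log(1 - P_D_REQUIRED) / math.log(1 - p_det_single))
# Per-look false-alarm rate that keeps the combined rate within budget:
p_fa_single = 1 - (1 - P_FA_BUDGET) ** (1 / n_looks)

print(f"looks per voxel: {n_looks}")                      # 11 for these values
print(f"per-look false-alarm budget: {p_fa_single:.1e}")  # ~9e-8
```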
Based on representative values of key variables and sensitivity analyses, our calculations indicate the data rate required to enable zero preventable roadway deaths under the worst conditions approaches a staggering 7 × 10¹⁵ bits/sec (7 Pb/sec)!2 To put this in perspective, the sensory input rate from our eyes to our brain is about 10 × 10⁶ bits/sec3, roughly one-billionth of the required rate. Thus, humans cannot sense and process information fast enough to eliminate preventable roadway deaths.
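The arithmetic below shows how a number of this magnitude can arise. Only the ~3 billion voxel count and the ~7 Pb/sec result come from the article; the probes per voxel, bits per sample and frame rate are assumed values we chose for illustration.

```python
# Hypothetical reconstruction of the headline data rate:
voxels = 3e9            # voxels in the Fig. 1 scene (article)
probes_per_voxel = 11   # assumed repeated looks for >90% reliability
bits_per_probe = 10     # assumed ADC bits per sample
frames_per_sec = 21     # assumed scene update rate

rate_bps = voxels * probes_per_voxel * bits_per_probe * frames_per_sec
print(f"{rate_bps:.1e} bits/s")   # ~6.9e15, i.e., roughly 7 Pb/s
```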
Like humans, cameras alone also cannot sense well enough for zero preventable roadway deaths. Cameras are similar to our eyes in that they work well when there is sufficient ambient light and it is not necessary to see through objects and around corners. But, to get to zero roadway deaths, we need to quickly detect objects that are hidden from view in worst-case lighting conditions. Cameras can’t find these objects no matter how fast they can sample.
Enter the Atomic Norm
Any sensor system that physically scans the entire coverage volume will require an incredibly large data rate to reconstruct scenes soon enough, clear enough and far enough to eliminate preventable roadway deaths under worst conditions. No current system comes close.
Fortunately, a mathematical framework exists that, when combined with advanced sensors and system-on-a-chip (SoC) technology, allows the massive data-rate requirement for scene reconstruction to be met, enabling human-driven and autonomous vehicles to sense well enough to eradicate preventable roadway deaths. Called the Atomic Norm (AN), this method is based on compressed sensing (CS), which reduces the number of measurements required to maintain a given level of performance and was originally developed to improve magnetic resonance imaging (MRI)4. AN uses much wider beams and more computation, allowing each voxel to be interrogated individually, and it exploits the fact that over 99% of voxels are simply free space. This reduces the data rate to between one-hundredth and one-tenth of the rate without AN while continuously monitoring the entire space.
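The toy sketch below shows the core compressed-sensing idea that AN builds on: when most voxels are free space, a sparse scene can be recovered from far fewer measurements than voxels. We use orthogonal matching pursuit (OMP), a standard sparse-recovery routine, as a simple stand-in; NPS’s Atomic Norm solver is a far more sophisticated method.

```python
import numpy as np

# A 1,000-voxel "scene" with only 5 occupied voxels is recovered from
# 100 random linear measurements -- a 10x reduction, echoing the
# one-tenth to one-hundredth data-rate reduction cited above.
rng = np.random.default_rng(0)
n_voxels, n_meas, n_targets = 1000, 100, 5

scene = np.zeros(n_voxels)
scene[rng.choice(n_voxels, n_targets, replace=False)] = rng.uniform(1, 3, n_targets)

A = rng.standard_normal((n_meas, n_voxels)) / np.sqrt(n_meas)  # sensing matrix
y = A @ scene                                                  # measurements

# OMP: greedily pick the voxel most correlated with the residual,
# then re-fit the selected voxels by least squares.
support, residual = [], y.copy()
for _ in range(n_targets):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

print("recovered occupied voxels:", sorted(support))
print("true occupied voxels:     ", sorted(np.flatnonzero(scene).tolist()))
```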
Based on sensitivity analyses, we conclude that with AN, human-driven and autonomous vehicles must be able to sense the environment at a peak rate on the order of 100 × 10¹² bits/s (100 Tb/sec) to handle a surge of sensor data in the worst conditions. This is at least 98% less than the 7 Pb/sec rate without AN.
In addition to reducing the required data rate, AN can match the performance of conventional radar algorithms using just 1/50th of the transmit power, yielding much higher resolution and many more hits on targets. Conventional automotive radars can barely detect a pedestrian 100 meters away, but with AN and multiband radar it is possible to see a pedestrian in rain, snow, fog and darkness at more than 500 meters (1,640 ft.)5.
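The textbook radar range equation gives a feel for what these numbers imply (the scaling law is standard; the interpretation is ours): received SNR falls as 1/R⁴, so quintupling pedestrian detection range requires roughly 625x more link budget, which must come from processing and integration gain rather than transmit power if power is simultaneously cut 50x.

```python
import math

# Radar range-equation scaling: SNR ~ P / R**4.
R_conventional = 100.0   # conventional pedestrian detection range, m (article)
R_an = 500.0             # AN + multiband pedestrian range, m (article)
power_ratio = 1 / 50     # AN transmit power vs. conventional (article)

link_budget = (R_an / R_conventional) ** 4    # 625x for 5x the range
processing_gain = link_budget / power_ratio   # gain AN must supply
print(f"link budget for 5x range: {link_budget:.0f}x")
print(f"implied processing gain: {processing_gain:.0f}x "
      f"(~{10 * math.log10(processing_gain):.0f} dB)")
```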
What’s needed to sense 100 Tb/s?
Fig. 2 shows how the requirement of zero roadway deaths flows down to specify sensing requirements and how meeting the sensing requirements flows up to enable other system requirements. Our premise is that 100% awareness of the environment at all times is required to enable zero preventable roadway deaths, and that by sensing and processing around 100 Tb/s in the worst conditions, higher-level system requirements become easier to meet. To do this, the Atomic Norm must be combined with multi-band radar, solid-state lidar, SoC processors and digital maps.
Different radar bands complement each other through their different propagation and reflection properties: higher bands have better resolution, while lower bands have longer range (especially in adverse weather conditions) and can pass through and bend around objects. Because objects reflect differently in different frequency bands, the differing radar responses can be used to determine the constituent material of a target (e.g., metal vs. soft tissue). This means the radar must use causal inference and reasoning to interpret the multi-spectral response from the environment.
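As a purely hypothetical illustration of the idea, the sketch below classifies a target by comparing its per-band response against reference signatures. The band labels and reflectivity values are invented for illustration; real signatures come from measurement campaigns, and as noted above the actual interpretation relies on causal inference and reasoning rather than a lookup table.

```python
import numpy as np

# Made-up relative reflectivity signatures per band (illustration only).
BANDS = ["low band", "mid band", "high band"]
SIGNATURES = {
    "metal (vehicle)": np.array([0.90, 0.90, 0.95]),
    "soft tissue (pedestrian)": np.array([0.20, 0.40, 0.50]),
    "foliage": np.array([0.10, 0.30, 0.20]),
}

def classify(response: np.ndarray) -> str:
    """Return the material whose signature is nearest the observed response."""
    return min(SIGNATURES, key=lambda m: float(np.linalg.norm(SIGNATURES[m] - response)))

print(classify(np.array([0.25, 0.45, 0.55])))  # -> soft tissue (pedestrian)
```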
Solid-state lidar offers precise scanning repeatability and avoids moving parts. Along with radar, lidar is used for the long-range FoV. It relies on multiple lasers and detectors looking at the same portion of the FoV to increase the signal-to-noise ratio and extend range while adding photon path diversity for higher reliability. Carefully designed laser pulse sequences can boost signal levels and detect and position multiple targets within a single voxel.
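A minimal simulation, with our own illustrative numbers, shows why multiple detectors on the same portion of the FoV help: averaging N independent looks leaves the signal unchanged while shrinking the noise standard deviation by √N.

```python
import numpy as np

rng = np.random.default_rng(1)
signal, noise_sigma, n_looks = 1.0, 2.0, 64   # assumed values

# 10,000 trials of n_looks noisy observations of the same weak return.
looks = signal + noise_sigma * rng.standard_normal((10_000, n_looks))
single = looks[:, 0]            # one detector, one pulse
averaged = looks.mean(axis=1)   # n_looks detectors/pulses averaged

print(f"single-look SNR: {signal / single.std():.2f}")
print(f"averaged SNR:    {signal / averaged.std():.2f}")  # ~sqrt(64) = 8x better
```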
Highly advanced AI edge processors are needed to track, analyze and interpret the blistering data flow from each sensor modality. SoC processors are high-end, very low-power, dedicated silicon chips with semi-flexible architectures. They are the computational workhorses required to process the 100 Tb/s, and several tens of them must work in parallel and in concert.
Finally, digital maps tell us where roads bend and slopes occur and provide clues that help focus sensing on the voxels of greatest interest. Knowing where hills crest also allows speeds to be safely adjusted to mitigate the risk of not being able to see over them.
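A small sketch shows how crest knowledge from a map might cap speed: the stopping distance (reaction plus braking) must not exceed the sight distance the map reports. The friction and reaction-time values are typical textbook numbers, not figures from this article.

```python
import math

def max_safe_speed(sight_distance_m: float, mu: float = 0.3,
                   t_react_s: float = 1.0, g: float = 9.81) -> float:
    """Largest v (m/s) with v*t_react + v**2 / (2*mu*g) <= sight distance."""
    a, b, c = 1 / (2 * mu * g), t_react_s, -sight_distance_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root

for d in (50, 100, 200):       # sight distance over a crest, meters
    v = max_safe_speed(d)      # mu = 0.3 ~ packed snow (assumed)
    print(f"sight {d:3d} m -> max safe speed {3.6 * v:5.1f} km/h")
```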
Can vehicles sense 100 Tb/s?
NPS has integrated three innovations to sense 100 Tb/s:
A new class of solid-state, long-range lidars that can detect a bicyclist next to a car at over 1,000 meters
A new class of multi-band radars that can detect pedestrians at more than 500 meters in rain, snow and fog with a very low false-alarm rate
The Atomic Norm mathematical framework, which achieves near-optimal detection/estimation performance and can be commercialized as custom chips and AI software.
Pilot-scale experiments verified that the range, angular resolution and precision of the core sensor element6 of our novel sensor platform are close to theoretical performance limits. (Test results can be accessed here.) This proof of concept leads us to conclude that the technology exists to sense 100 Tb/sec and thus see well enough to enable zero preventable roadway deaths.
We are now developing a commercial sensor system platform, called AtomicSense (a name NPS has trademarked), that can sense 100 Tb/s. Over-the-road trucking is targeted as the first commercial application (Fig. 3), followed by robotaxis and then personal vehicles.
Our prototype field-programmable gate array (FPGA) based sensor-fused system is currently being road-tested on a Chrysler Pacifica minivan on California highways and will be deployed on Class 8 trucks on wintery U.S. midwestern highways beginning in December 2022.
During 2022 and 2023, we will continue to enhance our Atomic Norm algorithms and implement our custom chip to improve performance and reduce power, size and cost. By 2024, AtomicSense is intended to achieve all key objectives required to see soon enough, clear enough and far enough to help enable zero preventable trucking accidents.
AtomicSense will allow companies developing advanced driver-assistance systems (ADAS) and fully autonomous driving systems to accelerate progress toward zero roadway deaths. The key question for these companies is: “What must be true to get to zero preventable roadway deaths?” We have concluded that seeing and processing about 100 Tb/sec is one of these necessary requirements, and that it is indeed possible by combining breakthrough analytics, advanced multi-band radar, solid-state lidar, sensor fusion and SoC technology.
Notes:
1 The technical paper underlying this article can be accessed via NPS’s website here.
2 Light-duty vehicles in metropolitan areas require shorter long-range sensing than long-haul trucks (due to lower vehicle speeds and masses) and higher data sampling rates (to see around corners and through occluding objects). Light-duty vehicles need to see about 250 m around and 500 m up ahead with a 60° FoV. Using the same values for the remaining key variables as with long-haul trucks, the resulting data rate is about the same, 7 Pb/s.
3 https://www.britannica.com/science/information-theory/Physiology
4 " Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information ". EJ Candès, J Romberg, and T Tao. IEEE Transactions on Information Theory 52 (2), 489-509 (2006)
4 “Compressed Sensing”. David L. Donoho. IEEE Transactions on Information Theory 52 (4): 1289-1306 (2006) “The Convex Geometry of Linear Inverse Problems”.
4 Venkat Chandrasekaran , Benjamin Recht , Pablo A. Parrilo and Alan S. Willsky . Foundations of Computational Mathematics. Volume 12, pages 805–849 (2012)
5 B. Rezvani, B. Hassibi, F. Brannstrom and M. Manteghi, “Letting Robocars See Around Corners,” IEEE Spectrum, January 23, 2022.
6 A sensor element is an observation device that can measure a signal of interest with certain accuracy. Because signals are often buried deeply in noise, observations measured in bits must be repeated thousands of times to recover a signal of interest.