Navigating Deep Learning to Improve ADAS

How edge computing drives real-time AI decision-making for smarter, safer automated vehicles.

SAE J3016 is the global standard establishing the six levels (0–5) of driving automation.

Automotive ‘big data’ is here. The immense scope of data generated by automated or ADAS-integrated vehicles spans the six SAE levels of driving automation, with vehicles relying on high-resolution cameras, radar, LiDAR, ultrasonic sensors, GPS and other sensors to see or perceive their surroundings. Ultimately, this sensory information – massive amounts of data – is used to navigate, avoid obstacles and read the road markers necessary for safe driving. Artificial intelligence (AI) is at the heart of these operations, grounded in software algorithms and fueled by deep-learning training and deep-learning inference models that are essential to faultless performance.

Teaching or training a deep neural network (DNN) is accomplished by feeding it data and allowing it to predict what that data represents.

Enabling these vital and instantaneous processes requires AI algorithms to be trained and then deployed on-vehicle. It’s a process that has developers tapping into both sophisticated software design and smart hardware strategies to safeguard vehicle performance in situations that can be a matter of life or death.

While deep-learning training and deep-learning inference may sound like interchangeable terms, each has a very different role to play in systems that keep drivers safe and distinguish OEMs with increasingly intelligent auto features. Deep-learning training employs datasets to teach a deep neural network to complete an AI task, like image or voice recognition. Deep-learning inference is the process of feeding that same network new, previously unseen data so it can predict what the data means based on its training. These data-intensive compute operations require specialized solutions. Systems must feature large amounts of high-speed, solid-state data storage, and they must be hardened for deployment in vehicles that are constantly moving and subject to violent shock, vibration and other harsh environmental factors. Ideal design pairs the software-based functions of deep learning with ruggedized hardware strategies optimized for both edge and cloud processing.

Deep-learning training, explained

Although the most challenging and time-consuming method of creating AI, deep-learning training gives a deep neural network (DNN) its ability to accomplish a task. DNNs, composed of many layers of interconnected artificial neurons, must learn to perform a particular AI task, such as translating speech to text, classifying images, cataloging video, or generating recommendations. This is achieved by feeding data to the DNN, which it then uses to predict what the data signifies.
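As a rough illustration, such a stack of layers can be written down directly in code. The sketch below uses PyTorch purely as an example framework (the article names none); the layer sizes and the three-class output are placeholder assumptions:

```python
# Minimal sketch of a DNN as stacked layers of artificial neurons.
# Framework (PyTorch) and all sizes are illustrative assumptions.
import torch.nn as nn

# Each Linear layer is a bank of artificial neurons; ReLU supplies the
# nonlinearity between layers. Sizes here are arbitrary placeholders.
model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 128),  # input: a flattened 32x32 RGB image
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 3),             # output: scores for 3 classes
)
print(model)
```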

Deep-learning inference is performed by feeding new data, such as unseen images, to the trained network, giving the DNN a chance to classify each image.

For instance, a DNN might be taught to differentiate three objects – a dog, a car, and a bicycle. The first step is to assemble a dataset of thousands of images that include dogs, cars, and bicycles. The second step feeds those images to the DNN, which attempts to identify what each image represents. When it makes an inaccurate prediction, the artificial neurons’ weights are adjusted to correct the error so future inferences are more accurate. Through this process, the network becomes more likely to predict each image’s true identity every successive time it is presented.

The training process continues until the DNN’s predictions reach the desired level of accuracy. At this point, the trained model is ready to make predictions on new images.
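A minimal sketch of that loop, again assuming PyTorch, follows. Random tensors stand in for the labeled dog/car/bicycle images, and the 95% accuracy threshold, learning rate, and tiny model are arbitrary assumptions:

```python
# Hedged sketch of the training loop described above: feed data, predict,
# revise the neurons' weights, repeat until accuracy is acceptable.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Stand-in for thousands of labeled dog/car/bicycle images.
images = torch.randn(256, 3 * 32 * 32)
labels = torch.randint(0, 3, (256,))

for epoch in range(100):
    logits = model(images)            # step 2: the DNN predicts
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()                   # errors flow backward ...
    optimizer.step()                  # ... and the neurons are revised

    accuracy = (logits.argmax(dim=1) == labels).float().mean().item()
    if accuracy >= 0.95:              # the desired level of accuracy
        break

print(f"stopped at epoch {epoch} with accuracy {accuracy:.2f}")
```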

Deep-learning training can be extremely compute-intensive, with billions upon billions of calculations often necessary to train a DNN, and it relies on robust computing power to run those calculations quickly. Typically performed in data centers, DNN training leverages multi-core processors, GPUs, VPUs and other performance accelerators to advance AI workloads with enormous speed and accuracy.

Deep-learning inference

An extension of deep-learning training, deep-learning inference uses a fully trained DNN to make predictions on new, never-before-seen data, closer to where that data is generated. Feeding new data, such as images, to the network enables the DNN to classify it. Extending the ‘dog, car, bicycle’ example, new images of these and other objects can be loaded into the trained DNN, which now can accurately predict each image’s identity.
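A hedged sketch of that inference step, under the same PyTorch assumption, might look like this; the model architecture, the stand-in “image,” and the weights file named in the comment are all hypothetical:

```python
# Hedged sketch of deep-learning inference: a trained model classifies a
# never-before-seen image. Architecture and file names are assumptions.
import torch
import torch.nn as nn

CLASSES = ["dog", "car", "bicycle"]

model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 3))
# In practice the trained weights would be loaded first, e.g.:
# model.load_state_dict(torch.load("trained_dnn.pt"))  # hypothetical file
model.eval()                               # inference mode, not training

new_image = torch.randn(1, 3 * 32 * 32)    # stand-in for a fresh camera frame
with torch.no_grad():                      # no gradients needed at inference
    logits = model(new_image)
    prediction = CLASSES[logits.argmax(dim=1).item()]

print(f"predicted class: {prediction}")
```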

Once a DNN is fully trained, it can be copied to other devices. But DNNs can be extremely large, containing hundreds of layers of artificial neurons connected by billions of weights, so before such a network can be deployed, it must be modified to require less computing power, energy and memory. The result is a slightly less accurate model, but the simplification benefits offset the loss.

Two methods can be used to modify the DNN: pruning and quantization. In pruning, a data scientist feeds data to the DNN and observes which neurons never or rarely fire; those neurons are removed without significantly reducing prediction accuracy. Quantization reduces weight precision; for example, reducing 32-bit floating-point weights to 8-bit integers yields a smaller model that consumes fewer compute resources. Both methods have negligible impact on accuracy while making the model much smaller and faster, resulting in less energy use and lower consumption of compute resources.
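Both techniques are available as library utilities. The sketch below, assuming PyTorch, prunes 30% of the smallest-magnitude weights (a rough stand-in for removing rarely firing neurons) and dynamically quantizes the linear layers to 8-bit integers; the toy model and the pruning amount are illustrative choices:

```python
# Hedged sketch of the two compression methods named above, using
# PyTorch's built-in pruning and dynamic-quantization utilities.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 3))

# Pruning: zero out the 30% of weights with the smallest magnitude,
# approximating the removal of neurons that rarely fire.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# Quantization: shrink 32-bit floating-point weights to 8-bit integers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```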

Making the edge work in ADAS

Deep-learning inference ‘at the edge’ has commonly used a hybrid model in which an edge computer harvests information from a sensor or camera and transmits that information to the cloud. However, this introduces latency: data often takes a few seconds to be delivered to the cloud, analyzed, and returned – unacceptable for applications requiring real-time inference analysis or detection. An AV moving at 60 mph (96 km/h) covers about 88 feet (27 m) every second, so even a few seconds without guidance means hundreds of feet traveled blind.
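The arithmetic is easy to verify. In the sketch below, the two-second cloud round trip and the 50-ms edge-inference latency are assumed example figures, not measurements:

```python
# Worked version of the latency arithmetic above. The cloud round-trip
# and edge-inference times are assumed example values, not measurements.
MPH_TO_FPS = 5280 / 3600           # 1 mph = 1.4667 ft/s

speed_fps = 60 * MPH_TO_FPS        # 60 mph = 88 ft/s (about 27 m/s)

cloud_round_trip_s = 2.0           # assumed: send, analyze, and return
edge_inference_s = 0.05            # assumed: on-vehicle inference latency

print(f"blind distance via cloud: {speed_fps * cloud_round_trip_s:.0f} ft")
print(f"blind distance at edge:   {speed_fps * edge_inference_s:.1f} ft")
# cloud: 176 ft; edge: 4.4 ft -- why split-second on-vehicle inference matters
```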

In contrast, purpose-built edge computing devices perform inference analysis in real time for split-second autonomous decision-making. These industrial-grade AI inference computers are designed to endure challenging in-vehicle deployments: tolerant of a variety of power-input scenarios, including running directly from a vehicle battery, they are ruggedized for expected exposure to impact, vibration, extreme temperatures, dust and other environmental challenges.

These characteristics, coupled with high performance, alleviate many of the issues associated with processing deep-learning inference via the cloud. For example, GPUs and TPUs accelerate the myriad linear-algebra computations inference requires by parallelizing them. Rather than the CPU performing AI inference computations, the GPU or TPU – better suited to math-heavy workloads – tackles them, significantly accelerating inference analysis while the CPU focuses on running the rest of the applications and the operating system.
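In practice, moving a tensor computation onto an accelerator is a one-line device placement. A minimal PyTorch sketch, with arbitrary matrix sizes, might look like this:

```python
# Hedged sketch of offloading linear algebra to a GPU with PyTorch;
# falls back to the CPU when no CUDA device is present.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# The accelerator parallelizes this matrix multiply across thousands of
# cores, freeing the CPU to run the rest of the application stack.
c = a @ b
print(f"computed {tuple(c.shape)} product on {device}")
```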

With configurable processing power, edge inference computers (a Premio unit is shown) are capable of running machine-learning and deep-learning inference analysis at the edge. Performance accelerators, including multi-core processors, GPUs, VPUs, TPUs, FPGAs, and NVMe computational-storage devices, increase capability, especially in modern computer architectures. Equipped with rich I/O – ample USB Type-A ports (Gen 3.2, 10 Gbps), RJ45 and M12 Gigabit Ethernet ports, RJ45 and M12 PoE+ ports, serial COM ports, and others – autonomous-vehicle data recorders connect cameras and sensors directly to the data computers.

Local inference processing also eliminates latency problems and solves internet-bandwidth issues related to raw-data transmission, particularly large video feeds. Multiple wired and wireless connectivity technologies, such as Gigabit Ethernet, 10 Gigabit Ethernet, Wi-Fi 6, and 4G LTE cellular, allow the system to maintain an internet connection in a range of situations. Up-and-coming 5G wireless connectivity expands options even further with its significantly faster data rates, much lower latency, and improved bandwidth. These rich connectivity options enable the offload of mission-critical data to the cloud and accommodate over-the-air updates. In addition, CAN bus support empowers the solution to log data from vehicle buses and networks: vehicle speed, wheel speed, engine rpm, steering angle and other rich data can be assessed for real-time insight and important information about the vehicle.
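For illustration, raw CAN frames can be read with the open-source python-can library. In the sketch below, the channel name “can0” and the SocketCAN interface are deployment-specific assumptions, and decoding signals such as wheel speed or steering angle would additionally require the vehicle’s DBC definitions:

```python
# Hedged sketch of logging vehicle-bus traffic with python-can.
# Channel name, interface type, and frame IDs are deployment-specific
# assumptions; signal decoding would require the vehicle's DBC file.
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")

# Read and print ten raw frames; a real recorder would timestamp and
# persist these to NVMe storage for later analysis.
for _ in range(10):
    msg = bus.recv(timeout=1.0)            # None if the bus is quiet
    if msg is not None:
        print(f"id=0x{msg.arbitration_id:03X} data={msg.data.hex()}")

bus.shutdown()
```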

Big data, big opportunity

To bring an ever-growing list of automated-driving capabilities to market, ADAS developers have focused on improving the algorithms behind features and performance. But specialized hardware is necessary: AI edge-inference computers are the hardened computing solutions developed for this process, built to withstand exposure to dust, debris, shock, vibration and extreme temperatures and designed to collect, process, and store vast amounts of data from multiple sources.

Amassing data is only step one in fueling ADAS; software development and specialized hardware strategies must work together to deliver smarter, safer, and more highly automated vehicles.

Dustin Seetoo is director of product marketing at Premio Inc., a global solutions provider specializing in computing technology for enterprises with complex, highly specialized requirements.