Artificial Intelligence and Autonomous Vehicles

The use of artificial intelligence (AI)-based machine learning technologies in autonomous vehicles is on the rise. Helping to drive this trend is the availability of a new class of embedded AI processors. A good example is NVIDIA’s Jetson family, which includes small form factor system-on-modules (SoMs) that provide GPU-accelerated parallel processing. These high-performance, low-power devices are designed to support the deep learning and computer vision capabilities needed to build software-defined autonomous machines. They derive massive computing capability from a many-core parallel processing GPU, enabling next-generation computing devices to take on many tasks that were historically handled by humans or by multiple traditional computers.

How AI provides navigation and obstacle avoidance for autonomous vehicles on land, in the air, and at sea gets a lot of attention, but machine learning is also being used in other ways on unmanned vehicles. These machine learning applications are usually tied to the types of sensors carried by a particular platform. For example, a number of our customers are using AI engines in airborne situational awareness applications on unmanned platforms, taking and processing data from different types of sensors used for surveillance and detection, such as cameras, radars, and lidars, and then reporting that data back to operators.

The ability of unmanned aircraft to monitor areas of interest while staying aloft for long periods of time makes them a compelling alternative to human crews, especially in battlefield situations. In one application, a homeland security agency is protecting critical national energy infrastructure by using drones to monitor oil pipelines. In addition to surveillance applications, AI engines are now being used on unmanned air vehicles for in-vehicle health monitoring. Sensor data from across the platform is fed into a predictive maintenance data hub that uses AI to track the health of the various onboard subsystems, helping to schedule maintenance and other updates.
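As a rough illustration of the predictive maintenance idea (not the actual data hub described above, whose models and sensors are not detailed here), the sketch below flags anomalous readings from a hypothetical vibration sensor using a simple rolling statistical check; the sensor, thresholds, and simulated data are assumptions for illustration only.

```python
import numpy as np

def flag_anomalies(readings, window=50, threshold=3.0):
    """Flag samples that deviate strongly from the recent rolling mean.

    A simple statistical stand-in for AI-based health monitoring; a real
    system would use trained models tuned to each onboard subsystem.
    """
    readings = np.asarray(readings, dtype=float)
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)  # index of a suspicious sample
    return flagged

# Example: simulated gearbox-vibration telemetry with an injected drift
rng = np.random.default_rng(0)
telemetry = rng.normal(1.0, 0.05, 1000)
telemetry[700:] += 0.5  # step change that maintenance planning should catch
print(flag_anomalies(telemetry))
```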

The available market for military use of AI engines is relatively small compared to the industrial applications for which most of these devices were originally intended. The larger opportunities for vendors of GPU-based AI engines are likely found in places like factory floors and production lines. That said, these devices are increasingly used in the commercial vehicle market on board autonomous test cars. For military applications, using these devices on unmanned platforms deployed in harsh environmental conditions requires expertise in packaging and ruggedization to ensure that devices built for industrial use can perform optimally on the battlefield.

NVIDIA’s Jetson TX2i small form factor system on module (SoM)

The NVIDIA Jetson TX2i module provides a great example of how an industrial AI engine can be adapted for use on board military platforms, most of which are especially sensitive to any additional size, weight, and power (SWaP) burden. Another key issue for defense applications is long lifecycle support, since commercial graphics processors are notorious for short product lifecycles. The good news for military system designers is that SoMs like the Jetson offer some of the best ratios of computing performance (FLOPS) per watt of power consumed. The TX2i, specifically, combines a high-performance six-core Arm processor with a powerful 256-core NVIDIA GPU that is compatible with NVIDIA’s CUDA development libraries and software applications. Optimized to reduce SWaP, the device is packaged in a small form factor with processor, graphics, RAM, flash storage, and I/O pre-integrated. Because all of its components are rated to the full industrial temperature range of -40°C to +85°C or beyond, the TX2i can be effectively cooled using straightforward conduction cooling, eliminating the need for complex cooling alternatives or fans with undesirable moving parts.
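As a quick sanity check of the GPU resources described above, a short Python snippet can report the module’s compute resources. This is a minimal sketch assuming an NVIDIA-provided PyTorch build is installed on the Jetson (commonly available alongside JetPack); exact CUDA core counts are derived from the streaming multiprocessor (SM) count and the cores per SM of the GPU architecture.

```python
import torch  # assumes an NVIDIA PyTorch wheel for Jetson is installed

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # The TX2i's integrated Pascal GPU reports 2 SMs; at 128 CUDA cores
    # per Pascal SM, this corresponds to the 256 cores cited above.
    print(f"GPU: {props.name}")
    print(f"Streaming multiprocessors: {props.multi_processor_count}")
    print(f"Total memory: {props.total_memory / 2**30:.1f} GiB")
else:
    print("No CUDA-capable GPU detected")
```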

For enhanced reliability, the module uses RAM with Error-Correcting Code (ECC) support, which is particularly useful for mitigating potential single event upsets (SEUs) caused by radiation at high altitude. Even better, NVIDIA supports the industrial version of this Jetson device with a ten-year lifecycle, which matches the long program requirements typical of military systems. Compare that to the five-year lifecycle of other embedded Jetson devices and the even shorter two-year lifecycle of consumer-grade graphics devices.

To ease and speed application software development, the TX2i is supported by NVIDIA’s JetPack Software Development Kit (SDK), which includes a large number of free CUDA developer tools and comprehensive libraries for building AI applications. TensorRT, a deep learning inference runtime, and the CUDA development environment are just two of the free tools bundled in JetPack. System integrators already familiar with the NVIDIA Jetson development environment will be comfortable with this technology, and because the software development environment is portable across the entire Jetson family ecosystem, applications can be built once and deployed many times, reducing costs.
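To give a flavor of the JetPack workflow, the minimal sketch below deserializes a previously built TensorRT engine and runs it on one placeholder input, using TensorRT’s Python bindings and PyCUDA (both typically installed by JetPack). The engine filename is an assumption, the code uses the TensorRT 8.x binding-index API (newer releases use named I/O tensors instead), and a real application would copy a preprocessed camera frame into the input buffer.

```python
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the Jetson GPU
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Load a serialized engine produced offline (see the build sketch later).
with open("detector.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding.
bindings, host_bufs = [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(trt.volume(shape), dtype=dtype)
    device = cuda.mem_alloc(host.nbytes)
    bindings.append(int(device))
    host_bufs.append((host, device, engine.binding_is_input(i)))

# Copy input data in, execute synchronously, and copy results back out.
for host, device, is_input in host_bufs:
    if is_input:
        host[:] = np.random.rand(host.size).astype(host.dtype)  # placeholder frame
        cuda.memcpy_htod(device, host)
context.execute_v2(bindings)
for host, device, is_input in host_bufs:
    if not is_input:
        cuda.memcpy_dtoh(host, device)
        print("output tensor size:", host.size)
```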

NVIDIA also offers lab-grade developer kits at an extremely low cost. Designed to jumpstart application development, these boards use standard commercial connectors and enable customers to get their program started with very little investment in the lab before porting it over to a rugged platform when they are ready to take the next step towards deployment.

An example of a rugged mission computer ideal for use in deployed machine learning applications is Curtiss-Wright’s DuraCOR 312, a miniature system (5.2" × 5.4" × 2.0") based on the industrial Jetson TX2i SoM. The DuraCOR 312 can be used in a wide range of AI-based applications and is especially useful for computer vision. For example, the mission computer can be deployed on a small drone and flown around a platform of interest to inspect for damage or visible anomalies. At sea, the system can be used to locate and identify surface vessels. On the battlefield, a DuraCOR 312 can be launched on a small unmanned aircraft to perform visual object detection, providing real-time, actionable intelligence on enemy troop strength and deployment without the need to request and wait for air support, such as a large helicopter, to provide surveillance.

In the case of visual detection, the video captured by the unmanned aircraft’s camera is sent to the GPU, which runs machine learning inference, scanning the images to identify any objects of interest it has been trained to recognize, such as a soldier or a tank, and tagging each detected object with time and location data. The inference code, which is stored on the DuraCOR 312, was developed earlier on a larger system using an algorithm of choice.
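A simplified version of that on-board loop might look like the sketch below, which reads frames with OpenCV (included in JetPack images), runs a detector, and tags each detection with a timestamp and position. The `run_detector` stub (which in practice would wrap the TensorRT inference shown earlier), the `read_gps_fix` helper, the label names, and the confidence threshold are hypothetical placeholders, not the vendor's implementation.

```python
import time
import cv2  # OpenCV is bundled with JetPack

def run_detector(frame):
    """Placeholder for the trained inference model.
    Returns a list of (label, confidence, bbox) tuples."""
    return []

def read_gps_fix():
    """Hypothetical helper that would query the aircraft's GPS receiver."""
    return {"lat": 0.0, "lon": 0.0, "alt_m": 0.0}

cap = cv2.VideoCapture(0)  # camera index/pipeline is platform-specific
tagged_detections = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for label, confidence, bbox in run_detector(frame):
        if label in {"soldier", "tank"} and confidence > 0.5:
            tagged_detections.append({
                "label": label,
                "confidence": confidence,
                "bbox": bbox,
                "utc": time.time(),          # time tag
                "position": read_gps_fix(),  # location tag
            })
cap.release()
```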

Images associated with the objects to be detected are formatted appropriately and then used to train the system to recognize them. After the system is trained to the desired level of accuracy, the machine learning model, written using a deep learning framework such as Caffe or TensorFlow, is run through TensorRT, which optimizes and compresses it for use on small systems deployed at the edge of the battlefield network. The inference code, once downloaded onto the DuraCOR 312, can then be deployed on a drone and begin processing incoming images to identify the objects it was trained to recognize. The results can be downlinked to the warfighter in real time or stored on the drone for offline post-mission analysis after it returns. How much storage is required on the aircraft is determined by the distance and duration of the mission and the amount of imagery captured.
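As a concrete example of that offline “train, then compress for the edge” step, the sketch below builds a TensorRT engine from a trained model that has first been exported to ONNX. The filenames and the FP16 precision choice are assumptions for illustration; the details vary by TensorRT version.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="detector.onnx", engine_path="detector.engine"):
    """Parse an ONNX model exported from TensorFlow/PyTorch and serialize
    an optimized TensorRT engine suitable for the Jetson-based target."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # half precision shrinks the model
                                           # and speeds up inference on the GPU
    engine_bytes = builder.build_serialized_network(network, config)

    with open(engine_path, "wb") as f:
        f.write(engine_bytes)

build_engine()
```

The same conversion can also be done from the command line with JetPack’s bundled `trtexec` tool (typically found at `/usr/src/tensorrt/bin/trtexec`), for example `trtexec --onnx=detector.onnx --saveEngine=detector.engine --fp16`.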

Curtiss-Wright’s DuraCOR 312 miniature system based on NVIDIA’s industrial Jetson TX2i SoM.

When deployed in battlefield environments, the TX2i’s power-efficient ARMv8 processor cores and CUDA GPU are cooled with the help of an aluminum heat spreader on top of the module that makes contact with all of the hot components. The heat spreader attaches directly to the DuraCOR 312’s system chassis, using phase-change material and the metal of the chassis as an extension of the heatsink. To further adapt the TX2i for use in rugged military conditions, Curtiss-Wright subjects all system components to additional ruggedization. The DuraCOR 312 also uses miniature versions of traditional rugged circular military connectors that, when mated, provide full sealing against dust and water; in fact, the chassis can be fully immersed without any water ingress. What’s more, all the circuit boards are covered with a conformal coating to protect against humidity and corrosion. Designed for SWaP optimization, this compact AI-engine-based mission computer weighs less than 2.0 lb. and requires less than 25 W of power.

The DuraCOR 312 is fully compliant with the extremely demanding MIL-STD-810G, MIL-STD-461F, MIL-STD-1275D, MIL-STD-704F, and RTCA/DO-160G environmental, power, and EMI requirements, covering high altitude, wide temperature ranges, humidity, extreme shock and vibration, and noisy electrical environments. The unit also provides an aerospace-grade power supply in a fanless, IP67-rated mechanical package that handles harsh shock and vibration and operates over extended temperatures without requiring a cold plate or airflow.

Through our experience collaborating with companies like Cisco Systems to deliver rugged network switch solutions for challenging embedded applications, we’ve learned the benefits, both for the customer and for the COTS vendor, of partnering with a technology pacesetter. Now, by leveraging NVIDIA’s best-in-class industrial AI engines and packaging them for the military COTS user, we can bring proven deep learning technologies, supported by a rich and familiar development environment, to the unique requirements of unmanned military platforms. This approach reduces program risk and gives system designers an easy learning curve and a faster path to deployment.

This article was written by Mike Southworth, Senior Product Manager, Curtiss-Wright Defense Solutions (Salt Lake City, UT). For more information, visit here.