Autonomous “Wingman” Vehicles
The Future of Military Unmanned Vehicle Technology
The US Army’s Futures Command represents the most important administrative reorganization of the modern Army. Responding to the world’s changing priorities, especially the “near peer” threat of an ascendant Russia and China, the Army is not merely modernizing its ground vehicle fleet but re-inventing it against new realities. Just as the U.S. Air Force moved beyond ever-better jets and pilot aids to unmanned aerial vehicles (UAVs) for “dull, dirty and dangerous” missions, the Army envisions multiple autonomous vehicle concepts. Instead of a heavier Abrams main battle tank or a replacement for the aging M113 APC, autonomous “wingman” vehicles may take over some of the human-heavy tasks on the future battlefield.
The questions are many: What will the deployment doctrine and sensor capabilities be? Will these new vehicles be armed, or used merely for C4ISR? And are they actually “autonomous,” or controlled, like USAF UAVs, by remote operators from mobile ground stations? The good news is that regardless of how the doctrine or vehicle takes shape, the Army’s own MVD (Multi-Function Video Display) system can be “appliquéd” from the Type II MRAP Mine Clearing Vehicle and used to remotely control all aspects of tomorrow’s “wingman” autonomous vehicles.
In this article, we briefly examine what the Army’s Futures Command has published on the future vehicle fleet, autonomy, and the evolving use cases. The article then examines the types of sensors, control, and telemetry needed between an autonomous vehicle and its “chase” Mobile Ground Station, and describes the capabilities of the MVD on MRAP vehicles and how it applies directly to the battlefield of the future. Brief references to Next Generation Combat Vehicles will be made, although at the time of writing, this program is still evolving and far from settled.
National Defense Strategy Stays Ahead of Russia and China
In less than 20 years, the US has changed its defense doctrine multiple times in response to world events: 9/11, urban warfare in Iraq, the fight against ISIS, and now a realization that Russia and China are legitimate battlefield foes. While the first part of the Department of Defense’s mission statement says America still intends to fight effectively in two battle theaters simultaneously (a carry-over from World War II), the second half, guaranteeing to “prevail using overwhelming force,” has DoD planners worried.
In early 2018, the Pentagon’s National Defense Strategy (NDS) recognized that on land, at sea and in the air, both Russia and China pose serious threats. Military planners no longer have 100 percent confidence in America’s technical and weapons superiority. In Syria, DoD planners watched Russia “test bed” new offensive and defensive strategies and technology, and concluded that America needed to re-evaluate many of its deployed platforms.
The Army brass knew something had to change. While weapons like the M1A2 Abrams main battle tank were designed for head-to-head European combat against Russian tanks, the reality shown in Crimea is that Russian electronic warfare (EW) and cyber capabilities will affect Army infrastructure sooner than Russian armored vehicles will. With apparently little effort, Russia was able to “see” and “kill” the opposing forces’ command and control structure. Currently relying on a fixed and slow-to-erect Forward Operating Base (FOB) concept dating to the World Wars, the Army has realized that speed, coupled with a “shoot and scoot” model, is now essential for modern warfare.
New Army Cross Functional Teams (CFTs) and NGCV
In late 2017, Army Chief of Staff Mark Milley outlined a set of eight cross-functional teams (CFTs) designed to address new battlefield realities, and more importantly, to bring essential capabilities right to the warfighter as quickly as possible. The CFTs are listed in Figure 1.
The CFTs bring together multiple Army organizations, labs, vendors and industry to openly discuss out-of-the-box ways to address the (primarily) Russian threat. From a ground vehicle perspective, platforms will have increased lethality and mobility, will communicate effectively in network- and GPS-denied environments, and will bring back the “overwhelming force” part of the DoD’s mission statement. In the past, bigger was better. In the near future, by 2025 in fact, smaller, more nimble, networked, and autonomous vehicles will enter the ground vehicle fleet. Sensors will be an essential enabler of that autonomous fleet.
At the Future Ground Combat Vehicles Summit near TACOM in Detroit, Michigan, in December, Col. Warren Sponsler, Deputy Director, NGCV CFT, Army Futures Command, contrasted today’s one-on-one, vehicle-to-vehicle approach with the future multi-domain approach (Figure 2, upper left frame).
In the lower frame, each ground vehicle is networked to all other joint battlefield assets. This is essentially no different from today’s digital battlefield concept, except that it’s still not a reality. Joint assets are rarely directly inter-connected due to interoperability constraints, relying instead on intermediaries like FOBs, AWACS, E-2C, and other latency-prone human “switch” points. In the upper right frame, however, the NGCV CFT envisions new platforms supplementing ground vehicles by providing reconnaissance and offensive strike. Shown here, autonomous drone swarms complement the indicated autonomous vehicles. Both are inter-linked to human-operated ground vehicles and dismounted soldiers.
Enter NGCV: Not One, but Five
While NGCV is one of the Army’s top eight CFTs, NGCV itself is not a single vehicle but a set of efforts creating five different new ground vehicle concepts, all relying on sensors like cameras, LIDAR, multiple battlefield networks, and massive on-board processing and digital storage. On platforms designed to supplement or replace current weapons like the M113 and Bradley Fighting Vehicle (BFV, or M2), the Manned-Unmanned Teaming (MUM-T) concept currently used on the Army’s Apache helicopter will become a ground reality, provided the technology enables the platform.
Each of the five NGCV CFT concepts includes some form of autonomy. The Robotic Combat Vehicles (RCV) shown in Figure 3 are light, medium and heavy RCVs designed to act as a “wingman” to other ground vehicles. Used ahead of a strike force, the RCVs might enable reconnaissance, offensive weaponry, mobile FOB/command post capability, and even “feint” operations to draw an enemy away from the main Army fighting force.
Technology Required for Autonomy
Despite the positive press in the commercial world about self-driving cars from Google, Uber, Tesla and others, the DoD believes it will be the first to actually deploy self-driving vehicles. Michael Griffin, undersecretary of defense for research and engineering, told Congress in April 2018: “We’re going to have self-driving vehicles in theater for the Army before we’ll have self-driving cars on the streets.” To realize that vision, plus the NDS top-down goals, the Army’s top eight CFTs, and the NGCV CFT’s five-vehicle portfolio, dramatic improvements in embedded, deployed technology are needed.
That’s because the Army’s use case is different from the commercial one. Civilian self-driving cars rely on both a sensor infrastructure built into roadways and intersections and predefined inter-vehicle protocols that allow vehicles to talk to each other. On the battlefield, there are no pre-defined roadways with embedded sensors, and always-on inter-vehicle communication would be either a homing beacon for an enemy missile or another network to be hacked or denied.
Instead, on-board sensors in Robotic Combat Vehicles (RCVs) need to see, understand, and navigate the terrain around them. High-resolution closed-circuit TV (CCTV) cameras mounted on all sides of the vehicle will provide a 360-degree view, and embedded computers will fuse the scene into a digital “cylinder,” as if an operator were looking all around from inside the cylindrical scene.
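As an illustration of the 360-degree “cylinder” idea, the sketch below maps a requested view azimuth to the cameras whose fields of view cover it. The eight-camera ring, 90-degree fields of view, and 45-degree boresight spacing are assumptions for illustration only, not the actual RCV sensor layout:

```python
# Hypothetical sketch: which fixed cameras cover a requested view azimuth,
# for an assumed ring of 8 cameras (90-degree FOV each, boresights every
# 45 degrees, so adjacent fields of view overlap for fusion/stitching).

CAMERA_COUNT = 8
CAMERA_FOV_DEG = 90.0
CAMERA_SPACING_DEG = 360.0 / CAMERA_COUNT  # 45 degrees between boresights

def cameras_covering(azimuth_deg: float) -> list[int]:
    """Return indices of cameras whose field of view contains the azimuth."""
    covering = []
    for cam in range(CAMERA_COUNT):
        boresight = cam * CAMERA_SPACING_DEG
        # Smallest signed angular distance from boresight to the azimuth.
        delta = abs((azimuth_deg - boresight + 180.0) % 360.0 - 180.0)
        if delta <= CAMERA_FOV_DEG / 2.0:
            covering.append(cam)
    return covering
```

The overlap is the point: any azimuth an operator chooses is seen by at least two cameras, which is what lets the fusion software blend feeds into a seamless cylinder.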
In some cases, where the RCV is in a leader-follower role, a remote operator might actually be viewing the scene in real time while, onboard the RCV, machine learning compresses the video and reduces it to predictable quadrants to save RF bandwidth. In this role, passive sensors like CCTV, LIDAR (Light Detection and Ranging), GPS and position/navigation (pos/nav) using EMP-resistant fiber-optic gyros will be used.
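One way to picture that quadrant-based bandwidth reduction is as a bitrate budget split by predicted operator interest. Everything in this sketch (the quadrant names, interest scores, and 2 Mbits/s budget) is invented for illustration and is not the Army’s or GMS’s actual scheme:

```python
# Illustrative only: divide a fixed RF bitrate budget across four frame
# quadrants in proportion to a predicted "interest" score, so quadrants
# the remote operator is unlikely to need are compressed harder.

def allocate_bitrate(total_kbps: float,
                     interest: dict[str, float]) -> dict[str, float]:
    """Split total_kbps across quadrants proportionally to interest scores."""
    total_interest = sum(interest.values())
    return {quadrant: total_kbps * score / total_interest
            for quadrant, score in interest.items()}

# Example: in a leader-follower role, the terrain ahead matters most.
budget = allocate_bitrate(2000.0, {
    "front-left": 0.4, "front-right": 0.4,
    "rear-left": 0.1, "rear-right": 0.1,
})
```

Here the forward quadrants each get four times the bitrate of the rear ones, mirroring the article’s point that machine learning steers scarce RF bandwidth toward the parts of the scene that matter.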
Artificial intelligence (AI), distinct from machine learning in that it can predict outcomes never previously encountered or reasonably linearly extrapolated, will go many steps further by merging myriad battlefield sensors, metadata, onboard databases, and occasional use of active sonar and radar sensors. By consuming as much data as possible from any available battlefield sensor feed, whether on the vehicle or not, AI can autonomously drive the vehicle without any operator input beyond a designated destination or objective.
The more sensors and data added to the vehicle, the faster the embedded computing challenge grows. For example, while a quad-core, 8th Generation Intel Core i7 (Coffee Lake) small form factor computer can decode/encode a streaming HD H.264 video in real time at 60 frames per second, it will struggle with two simultaneous HD streams. If each vehicle requires two HD cameras per corner (90-degree field-of-view per camera, with overlap), these eight HD video feeds would require server-class processing just for the CODECs. Image processing requires even more processing power, lower latency, and faster networks.
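The arithmetic behind that claim can be sketched quickly. The back-of-the-envelope calculation below assumes 1080p feeds at 60 frames per second and 12 bits per pixel (YUV 4:2:0 sampling); these figures are illustrative assumptions, not specifications from the program:

```python
# Back-of-the-envelope estimate of the raw, uncompressed data rate that
# eight corner-mounted HD cameras would generate before H.264 encoding.

WIDTH, HEIGHT = 1920, 1080   # assumed 1080p HD
FPS = 60                     # frames per second
BITS_PER_PIXEL = 12          # assumed YUV 4:2:0 sampling
CAMERAS = 8                  # two per corner, per the article

bits_per_second = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL * CAMERAS
gbits_per_second = bits_per_second / 1e9
print(f"Raw sensor data: {gbits_per_second:.1f} Gbits/s")
```

Under these assumptions, nearly 12 Gbits/s of raw video is generated before a single pixel is processed, which is why hardware-assisted CODECs and server-class processing must sit between the cameras and any gigabit-class vehicle network.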
Image processing (sensor fusion, enhancement, anti-aliasing, target tracking, compression, and identifying areas of interest) now becomes a digital signal processing challenge requiring a dozen or more Intel Xeon-class server cores and multiple high-performance computing (HPC) chassis, all small enough to be mounted inside the vehicle. Onboard networks, from cameras and sensors to the servers and operator displays, need at least gigabit throughput and likely 10 Gbits/s to allow for sensor growth. Finally, since these forward-deployed RCVs will likely sustain battle damage, fault-tolerant processors and redundant in-vehicle networks are needed, requiring multi-gigabit Ethernet switches and mass storage.
MVD: Right, and Right Now
Figure 4 shows the DoD’s “Unmanned Systems Integrated Roadmap: 2017-2042,” the latest version of which identifies numerous key technologies needed for future autonomous vehicles (Table 1).
In 2017, General Micro Systems announced a contract win with the Army for the Multi-Function Video Display (MVD) system, designed to bring video and sensor data to Type II Mine Clearing Vehicles on the MRAP chassis. Accepting input from multiple high-resolution video cameras, FLIR and the Vehicle Optics Sensor System (VOSS), all called “vehicle enablers,” the system converts real-time data to ultra-low-latency video-over-IP and transmits it over an industry-standard 1 Gbits/s Ethernet network, first to a rugged, embedded Intel Xeon-class server and mass storage system, then outward to multiple operator smart-screen consoles, each with embedded Intel Core i7 processors.
Except for the sensors, GMS builds 100 percent of this intelligent, machine-learning, open-standards network/server/display system. The software that brings the system to life comes from the Army’s Night Vision Labs and RDECOM and provides situational awareness, operator hand-off, and a common user interface (UI) across disparate operator stations and workloads.
MVD makes available today, not in the future, many of the key technologies identified in Table 1 that are essential for RCVs under the Army’s NGCV CFT. The Type II MRAP on which the system is installed is not robotic, but MVD helps control the platform’s robotic arm. And thanks to MVD’s under-one-frame latency, the vehicle’s operators can drive completely “head down,” looking only at the display; because outside views are presented practically in real time, no nausea is induced. This capability is absolutely essential for NGCV leader-follower vehicles, where operators look through sensors on the RCV but are located separately and at a distance from it.
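The “under-one-frame latency” figure is easy to quantify. Assuming the 60 Hz rate cited earlier (the only assumption here), one frame period works out as follows:

```python
# One frame period at 60 Hz: the end-to-end sensor-to-display time budget
# implied by "under one frame of latency."
FPS = 60
frame_time_ms = 1000.0 / FPS
print(f"One frame at {FPS} Hz = {frame_time_ms:.1f} ms")
```

In other words, the entire capture, encode, transport, decode, and display chain must complete in under roughly 16.7 milliseconds for the operator to perceive the outside view as real time.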
GMS is working on enhancements to MVD that will tick even more of the technologies in Table 1 off the list. For example, the GMS X422 “Lightning,” announced last October, brings AI to ground vehicles by adding two Nvidia Tesla V100 GPGPU “data center accelerators” in a conduction-cooled vetronics chassis. An upgraded embedded server and open-standard network switch raise the in-vehicle network speed to 10 Gbits/s, providing bandwidth for myriad more and faster sensors. Yet except for the addition of the AI chassis (about the size of a shoebox), the system remains exceptionally small. It’s designed to fit into ground vehicles wherever there’s room: under a seat, mounted vertically on a bulkhead, or tucked behind a piece of existing equipment.
NGCV and Autonomous Vehicle Enablers
As this article has outlined, autonomous “wingman” vehicles are poised to replace many of the human-heavy tasks on the future battlefield. As the Army works to integrate the most advanced technologies into MVD on MRAPs and other Next Generation Combat Vehicles, the US will be more prepared to respond to the “near peer” threats we are seeing.
This article was written by Chris A. Ciufo, CTO, General Micro Systems (Rancho Cucamonga, CA).