AI Research Strengthens Certainty in Battlefield Decision-Making
![](https://res.cloudinary.com/tbmg/c_scale,w_auto,f_auto,q_auto/v1629231455/sites/adt/articles/2021/insider/20210818_Defense_Story2.jpg)
A new framework for neural network processing enables artificial intelligence to better judge objects and potential threats in hostile environments. Researchers from the U.S. Army Combat Capabilities Development Command (known as DEVCOM) Army Research Laboratory and university partners in the Internet of Battlefield Things Collaborative Research Alliance (IoBT CRA) developed a method that makes neural networks more confident in their understanding of battlefield environments.
To achieve this, the researchers reviewed frameworks for representing uncertainty, categorized sources of uncertainty in the common operating environment of military information networks, and, most importantly, created solutions for managing uncertainty within these systems. They distilled these uncertainty-management approaches into a workflow that maximizes mission effectiveness despite uncertainty in data inputs. Through this process, they teach neural networks when to say, “I am sure,” and to be right about it.
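The core idea of a network that reports calibrated confidence, and defers to a human when it is not sure, can be illustrated with a minimal sketch. The example below uses Monte Carlo dropout, a generic uncertainty-estimation technique, purely as an illustration; it is not the researchers' actual method, and the toy model, weights, class labels, and entropy threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, W, b, n_passes=50, drop_p=0.5, rng=rng):
    """Run several stochastic forward passes with dropout active at
    inference time, then average the softmax outputs. Disagreement
    across passes shows up as high predictive entropy (low confidence)."""
    probs = []
    for _ in range(n_passes):
        mask = rng.random(W.shape[0]) >= drop_p   # randomly drop input features
        h = (x * mask) / (1.0 - drop_p)           # inverted-dropout scaling
        probs.append(softmax(h @ W + b))
    mean_p = np.mean(probs, axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))
    return mean_p, entropy

def decide(mean_p, entropy, labels, max_entropy=0.5):
    """Abstain ('I am not sure') when predictive entropy is high."""
    if entropy > max_entropy:
        return "uncertain -- defer to operator"
    return labels[int(np.argmax(mean_p))]

# Toy 2-class linear "fusion" model with hypothetical weights.
labels = ["vehicle", "clutter"]
W = np.array([[ 2.0, -2.0],
              [-1.5,  1.5],
              [ 0.5, -0.5]])
b = np.zeros(2)

clear_signal = np.array([3.0, -2.0, 1.0])   # strongly indicates class 0
noisy_signal = np.array([0.1,  0.1, 0.0])   # ambiguous

for x in (clear_signal, noisy_signal):
    p, H = mc_dropout_predict(x, W, b)
    print(decide(p, H, labels), round(float(H), 3))
```

On the clear signal the stochastic passes agree, entropy stays low, and the model commits to an answer; on the ambiguous signal the passes disagree and the model defers rather than guessing, which is the "good judgment" behavior the researchers describe.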
This improved confidence in neural networks has significant implications for the battlefield, as certainty in AI conclusions and behaviors is paramount to ensure ethical and effective decision-making autonomy in combat.
“Modern defense applications, like Aided Target Recognition, increasingly leverage advances in AI to enhance automation of various battlefield functions,” said Dr. Maggie Wigness, Army researcher and deputy collaborative alliance manager of the IoBT CRA. “A key component of improving automation is to improve machine confidence in understanding its environment, so that the machine can exercise ‘good judgment.’”
Older intelligent-system technologies often relied on approaches that were well understood by engineers to deliver answers, but the rise of AI in general, and of neural networks in particular, changes that.
“Older data fusion technologies like a green circular radar screen, resembling those often shown in older movies, would show targets as dots bleeping on the screen,” said Dr. Tarek Abdelzaher, a professor at the University of Illinois and the academic lead of the lab’s IoBT CRA. “Operators knew something was approaching because they could see the dots and knew what a dot meant.”
Tomorrow’s operating environment will be filled with smart autonomous devices and platforms that create diverse and complex information signatures. “AI can pick up the data from these complex information signatures, but the logic that connects those signals to a conclusion such as, ‘this is a target,’ is a lot more complicated and difficult for the machine to indicate to the operator,” he said.
Because these signals are more subtle than a dot on a screen, it is no longer always clear why a data fusion system thinks an item is, for example, a tank versus a civilian vehicle, nor is it always clear how confident the system is in its assessment. The researchers address this in their paper, “On Uncertainty and Robustness in Large-Scale Intelligent Data Fusion Systems,” presented at the 2nd IEEE International Conference on Cognitive Machine Intelligence, and through solutions developed in the IoBT CRA, which are helping to enable unconstrained command and control of complex, intelligent, pervasive systems-of-systems in modern battlespaces.
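The question of how much a system's reported confidence can be trusted is commonly handled with temperature scaling, a standard post-hoc calibration technique, sketched below as a general illustration (this is not the paper's method, and the validation data and ~70% accuracy figure are synthetic). The idea: an overconfident model's softmax scores are softened by a single learned temperature so that reported confidence tracks actual accuracy.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the true labels at temperature T."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.25, 8.0, 128)):
    """Pick the temperature minimizing validation NLL.
    T > 1 softens overconfident predictions; T < 1 sharpens."""
    return grid[np.argmin([nll(logits, labels, T) for T in grid])]

# Synthetic validation set for an overconfident 3-class model:
# logits are far too peaked relative to its ~70% actual accuracy.
rng = np.random.default_rng(1)
n, k = 500, 3
labels = rng.integers(0, k, n)
logits = rng.normal(0, 1, (n, k))
correct = rng.random(n) < 0.7          # model is right ~70% of the time
logits[np.arange(n), np.where(correct, labels, (labels + 1) % k)] += 8.0

T = fit_temperature(logits, labels)
before = softmax(logits).max(axis=1).mean()
after = softmax(logits / T).max(axis=1).mean()
print(f"T={T:.2f}  mean confidence: {before:.2f} -> {after:.2f}")
```

Before calibration the model reports near-total certainty on every input; after fitting the temperature, its average reported confidence drops to roughly its true accuracy, so an operator reading "70% confident" can take that number at face value.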
This work, along with related work in the IoBT CRA, was designed specifically for the battlefield setting, unlike most mainstream AI research, focusing on mitigating uncertainty in hostile environments under significant resource constraints and communication bottlenecks. Hostile environments create unique issues for the Army: platforms are destroyed, communication links are disrupted, and sensors are infiltrated to feed bad data, yet the Army relies on its AI to continue working correctly. The researchers said an AI-enabled common operating environment is expected to withstand failures, work around communication outages, and still reach accurate conclusions.