Update on Machine-Learned Correctness Properties

A novel method that could improve the U.S. Navy’s ability to perform continuous assurance of autonomous and other cyber-physical systems.

A novel method has the potential to move formal guarantees about the dependability of autonomous systems from the realm of doctoral-level experts into the domain of system developers and engineers. (Image: Kristian/Adobe Stock)

Autonomous systems are poised to provide transformative benefits to society. Autonomous vehicles (AVs) have the potential to reduce the frequency and severity of collisions, enhance mobility for people unable to drive, including the blind, disabled, and underage, lower energy consumption and environmentally harmful emissions, and reduce population density in metropolitan regions. In civilian aviation, increasingly autonomous systems could mitigate two of the most costly aspects of relying on human pilots: the expense of training and paying highly skilled operators, and the reduced efficiency imposed by flight time limitations and crew rest requirements.

Additionally, autonomous air traffic management systems could reduce the cognitive burden on air traffic controllers by automating the monitoring and analysis of high volumes of data, alerting a human operator only when complex decisions must be made to mitigate risk. Within the power distribution industry, innovations in “micro-grid” technology can allow better utilization of alternate energy sources while decreasing vulnerability to failure compared to current centralized power distribution, but such decentralization necessitates highly adaptive autonomous systems to carefully synchronize energy production and consumption. Medical devices are currently designed to function for a large group of patients with similar conditions, but adaptive patient-specific algorithms could respond more effectively to individual patient needs, increasing lifespan and quality of life.

In military aviation, the Automatic Ground Collision Avoidance System (AGCAS) has saved the lives of eleven pilots by fusing terrain mapping and sensor data to autonomously maneuver the aircraft away from ground contact when necessary. Elsewhere in the military, autonomous weapons systems are increasingly critical to enhancing warfighting capability. Unlike unmanned vehicles, which require uninterrupted interactive remote control, autonomous vehicles are becoming increasingly necessary in today’s warfighting environment for operation in denied, degraded, or otherwise unsafe environments.

In addition, the vast number of sensors streaming data from these systems is overwhelming the ability of human analysts to conduct processing, exploitation, and dissemination (PED) for intelligence purposes. Algorithms supporting autonomous operation, combined with advances in edge computing, can lower the cognitive burden on human analysts by performing autonomous data triage and alerting humans to data patterns of interest. Despite these potential advantages, evidence-based assurance of dependability is key to obtaining users’ trust in autonomous systems.

Such assurance of dependability is often lacking, and trust suffers accordingly. The optimistic forecasts made within the last five years predicting the rapid proliferation of AVs clearly did not survive contact with reality. For these reasons, AVs have become emblematic of the issues associated with trust in autonomy, and we therefore use them as the system of interest for our research and experimentation. For AVs, the primary barrier to garnering trust in their dependability is answering the question “Are they safe enough?”

This work was performed by James Michael, Doron Drusinsky, and Matthew Litton for the Naval Postgraduate School. For more information, download the Technical Support Package (free white paper) below. NPS-08235



This Brief includes a Technical Support Package (TSP). “Update on Machine-Learned Correctness Properties” (reference NPS-08235) is currently available for download from the TSP library.




Aerospace & Defense Technology Magazine

This article first appeared in the August 2023 issue of Aerospace & Defense Technology Magazine (Vol. 8 No. 5).



Overview

The document presents a technical report focused on enhancing the U.S. Navy's ability to ensure the reliability and dependability of autonomous and cyber-physical systems. It introduces a novel method for simulation-driven data generation of explainable machine-learned correctness properties, referred to as ML-assertions. This approach aims to bridge the gap between complex formal verification processes, typically reserved for experts, and the practical needs of system developers and engineers.

The report highlights the transformative potential of autonomous systems across various sectors, including transportation, aviation, energy distribution, healthcare, and military applications. Autonomous vehicles (AVs) are noted for their ability to reduce accidents, improve mobility for underserved populations, and lower environmental impacts. In aviation, autonomous systems can alleviate the burdens of human pilots, such as training costs and operational limitations, while enhancing air traffic management efficiency. The report also discusses the role of autonomous systems in military operations, where they are increasingly vital for functioning in challenging environments and managing vast amounts of sensor data.

A key focus of the report is the development of ML-assertions, which serve as a state-of-the-art method for verifying and validating the behavior of complex multi-agent systems. These assertions provide a framework for predicting system behaviors and ensuring that autonomous systems operate within defined correctness properties. The preliminary experimentation detailed in the report demonstrates the effectiveness of ML-assertions in behavior prediction, showcasing their potential to improve runtime verification processes.
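The workflow sketched above, generating labeled data from simulation and learning an explainable correctness property that can then be checked at runtime, can be illustrated with a toy example. Everything below (the braking model, the feature choice, the threshold form of the learned assertion, and all names) is invented for illustration and is not the report's actual method or implementation; it only shows the general shape of simulation-driven data generation followed by fitting a human-readable runtime assertion.

```python
import random

random.seed(0)

def simulate(gap, closing_speed, decel=6.0):
    """Crude braking physics (illustrative only): the outcome is safe
    if the vehicle's stopping distance is less than the current gap."""
    stopping_dist = closing_speed ** 2 / (2 * decel)
    return stopping_dist < gap  # True = no collision in simulation

# 1. Simulation-driven data generation: random scenarios, labeled by outcome.
data = []
for _ in range(2000):
    gap = random.uniform(1, 60)        # meters to obstacle
    v = random.uniform(1, 30)          # closing speed, m/s
    data.append((gap, v, simulate(gap, v)))

# 2. Fit an explainable property of the form "safe iff gap > a * speed^2"
#    by a simple grid search over the coefficient a.
def fit_threshold(samples):
    best_a, best_err = None, float("inf")
    for a in (i / 1000 for i in range(50, 200)):
        err = sum((gap > a * v * v) != safe for gap, v, safe in samples)
        if err < best_err:
            best_a, best_err = a, err
    return best_a

a = fit_threshold(data)

# 3. The learned, human-readable assertion, usable as a runtime monitor.
def ml_assertion(gap, closing_speed):
    return gap > a * closing_speed ** 2

# Runtime checks on fresh states: a large gap at modest speed passes,
# a small gap at high speed violates the learned property.
print(ml_assertion(gap=50.0, closing_speed=10.0))
print(ml_assertion(gap=2.0, closing_speed=25.0))
```

Because the learned property is a single inequality rather than an opaque model, an engineer can read it, relate it to the underlying physics (here, a stopping-distance bound), and deploy it as an inexpensive runtime check, which is the kind of explainability the report emphasizes.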

Overall, the report emphasizes the importance of making formal verification accessible to a broader audience, thereby enhancing the development and deployment of reliable autonomous systems. By equipping engineers with tools to generate and utilize ML-assertions, the Navy aims to improve the assurance of its autonomous technologies, ultimately leading to safer and more efficient operations in various domains. The findings suggest that this innovative approach could significantly advance the field of autonomous systems, making them more dependable and capable of meeting the demands of modern applications.