New Cyber Algorithm Shuts Down Malicious Robotic Attacks

Safe and secure operation of robotic systems is of paramount importance. Aiming to achieve trusted operation of a military robotic vehicle in contested environments, researchers have introduced a new cyber-physical system based on deep learning convolutional neural networks (CNNs).

The GVR-BOT used in the experiment by UniSA and Charles Sturt AI researchers. (Image: UniSA.)

Australian researchers have designed an algorithm that can intercept a man-in-the-middle (MitM) cyberattack on an unmanned military robot and shut it down in seconds.

In an experiment using deep learning neural networks, which mimic the way the human brain processes information, artificial intelligence experts from Charles Sturt University and the University of South Australia (UniSA) trained the robot’s operating system to learn the signature of a MitM eavesdropping cyberattack, in which attackers interrupt an existing conversation or data transfer.
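The article does not describe the network’s architecture, so the following is only a minimal sketch of the general technique: a small one-dimensional convolutional classifier (PyTorch) trained on fixed-length windows of traffic features to label each window as benign or attack. The feature count, window length, layer sizes, and learning rate are illustrative assumptions, not the authors’ published design.

    # Minimal sketch, assuming two traffic features (e.g., packet size and
    # inter-arrival time) sampled over 128-step windows. All sizes are
    # illustrative assumptions; the published model may differ entirely.
    import torch
    import torch.nn as nn

    class MitMDetector(nn.Module):
        def __init__(self, n_features=2, window=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # collapse the time axis
                nn.Flatten(),
                nn.Linear(64, 2),          # logits: [benign, attack]
            )

        def forward(self, x):              # x: (batch, n_features, window)
            return self.net(x)

    # Training skeleton: cross-entropy over labeled benign/attack windows.
    model = MitMDetector()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(8, 2, 128)             # dummy batch of feature windows
    y = torch.randint(0, 2, (8,))          # dummy labels: 0 = benign, 1 = attack
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()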

The algorithm, tested in real time on a replica of a U.S. Army combat ground vehicle, was 99 percent successful in preventing malicious attacks, with false positive rates of less than 2 percent, demonstrating the system’s effectiveness.

The results have been published in IEEE Transactions on Dependable and Secure Computing.

UniSA autonomous systems researcher, Professor Anthony Finn, says the proposed algorithm performs better than other recognition techniques used around the world to detect cyberattacks.

Professor Finn and Dr. Fendy Santoso from Charles Sturt Artificial Intelligence and Cyber Futures Institute collaborated with the U.S. Army Futures Command to replicate a man-in-the-middle cyberattack on a GVR-BOT ground vehicle and trained its operating system to recognize an attack.

“The robot operating system (ROS) is extremely susceptible to data breaches and electronic hijacking because it is so highly networked,” Professor Finn says.

“The advent of Industry 4.0, marked by the evolution in robotics, automation, and the Internet of Things, has demanded that robots work collaboratively, where sensors, actuators and controllers need to communicate and exchange information with one another via cloud services.

“The downside of this is that it makes them highly vulnerable to cyberattacks.

“The good news, however, is that the speed of computing doubles every couple of years, and it is now possible to develop and implement sophisticated AI algorithms to guard systems against digital attacks.”

Dr. Santoso says that despite its tremendous benefits and widespread usage, the robot operating system largely ignores security in its coding scheme: its network traffic is unencrypted, and its integrity-checking capability is limited.
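ROS 1 exchanges messages over plain, unauthenticated TCP connections, so even coarse traffic statistics can reveal tampering. As a hedged illustration of how such features might be collected, the sketch below uses rospy’s AnyMsg type to subscribe to a topic without knowing its message type and records message size and inter-arrival time; the topic name /cmd_vel and the choice of features are assumptions for illustration only.

    #!/usr/bin/env python
    # Sketch of a ROS 1 node that records per-message traffic features an
    # intrusion detector could consume. AnyMsg exposes the raw serialized
    # bytes of each message without requiring the message type.
    import time
    import rospy
    from rospy.msg import AnyMsg

    class TrafficMonitor:
        def __init__(self, topic="/cmd_vel"):     # illustrative topic name
            self.last_stamp = None
            self.features = []                    # (size_bytes, gap_seconds)
            rospy.Subscriber(topic, AnyMsg, self.callback)

        def callback(self, msg):
            now = time.time()
            size = len(msg._buff)                 # raw serialized payload size
            gap = 0.0 if self.last_stamp is None else now - self.last_stamp
            self.last_stamp = now
            self.features.append((size, gap))

    if __name__ == "__main__":
        rospy.init_node("traffic_monitor")
        TrafficMonitor()
        rospy.spin()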

“Owing to the benefits of deep learning, our intrusion detection framework is robust and highly accurate,” Dr. Santoso says. “The system can handle the large datasets needed to safeguard large-scale, real-time data-driven systems such as ROS.”
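As a rough picture of what real-time operation might look like, the sketch below slides a window across a streaming feature buffer and flags any window whose attack probability crosses a threshold, reusing the hypothetical MitMDetector defined above; the hop size and 0.5 threshold are assumptions, not values from the paper.

    # Sliding-window detection sketch: scan the feature stream and report
    # windows the (hypothetical) CNN scores as attacks.
    import torch

    def detect(model, feature_stream, window=128, hop=16, threshold=0.5):
        """Return (start_index, attack_probability) for each flagged window."""
        model.eval()
        alarms = []
        with torch.no_grad():
            for start in range(0, feature_stream.shape[-1] - window + 1, hop):
                chunk = feature_stream[..., start:start + window].unsqueeze(0)
                prob_attack = torch.softmax(model(chunk), dim=-1)[0, 1].item()
                if prob_attack >= threshold:
                    alarms.append((start, prob_attack))  # caller could halt the robot here
        return alarms

    # Requires the MitMDetector sketch above.
    stream = torch.randn(2, 1024)   # dummy stream: 2 features x 1024 time steps
    print(detect(MitMDetector(), stream))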

Professor Finn and Dr. Santoso plan to test their intrusion detection algorithm on other robotic platforms, such as drones, whose dynamics are faster and more complex than those of a ground robot.

This work was performed by Anthony Finn and Fendy Santoso for the Defence and Systems Institute (DASI), University of South Australia (Adelaide, Australia). For more information, download the Technical Support Package (free white paper) below. TSP-04244



This Brief includes a Technical Support Package (TSP).
Trusted Operations of a Military Ground Robot in the Face of Man-in-the-Middle Cyberattacks Using Deep Learning Convolutional Neural Networks: Real-Time Experimental Outcomes (reference TSP-04244) is currently available for download from the TSP library.
