360-Degree Visual Detection and Target Tracking on an Autonomous Surface Vehicle

Multiple architectures provide situational awareness for safe operation of ASVs.

Operation of autonomous surface vehicles (ASVs) poses a number of challenges: ensuring vehicle survivability during long-duration missions in hazardous and possibly hostile environments; coping with loss of communication and/or localization due to environmental or tactical situations; reacting intelligently and quickly to highly dynamic conditions; re-planning to recover from faults while continuing operations; and extracting the maximum amount of information from onboard and offboard sensors for situational awareness. Coupled with these issues is the need to conduct missions in areas where adversarial vessels may be present, including the protection of high-value fixed assets such as oil platforms, anchored ships, and port facilities.

Figure: Block diagram of the CARACaS autonomy architecture. The network in the behavior engine is built from primitive (dark gray) and composite (light gray) behaviors. The dynamic planning engine interacts with the network at both the primitive and composite behavior levels.
An autonomy system for an ASV detects and tracks vessels of a defined class while patrolling near fixed assets. The ASV’s sensor suite includes a wide-baseline stereo system for close-up perception and navigation (less than 200 m) and a 360-degree camera head for longer-range contact detection, identification, and tracking. Situational awareness for these patrol missions is determined primarily by processing images from the 360-degree camera head in the perception system called Surface Autonomous Visual Analysis and Tracking (SAVAnT). The SAVAnT system is integrated into the CARACaS (Control Architecture for Robotic Agent Command and Sensing) autonomy architecture, enabling the ASV to reason about the appropriate response to the vessels it has identified and then to execute a particular motion plan.
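To make this division of labor concrete, the sketch below shows, in Python with hypothetical field and function names not taken from the source, the kind of contact report a perception system like SAVAnT might hand to the autonomy layer, together with the rough range-based split between the stereo system and the 360-degree camera head described above.

```python
# A minimal, hypothetical sketch -- not JPL's implementation -- of a contact
# report message and the near-field/far-field sensor split described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContactReport:
    contact_id: int
    bearing_deg: float          # absolute bearing, clockwise from north
    range_m: Optional[float]    # None when only a bearing is available
    source: str                 # "stereo" or "camera_head"

def preferred_source(range_estimate_m: float) -> str:
    """Pick the sensor expected to cover a contact at the given range."""
    return "stereo" if range_estimate_m < 200.0 else "camera_head"

print(preferred_source(75.0), preferred_source(1500.0))  # stereo camera_head
```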

CARACaS is composed of a dynamic planning engine, a behavior engine, and a perception engine. The SAVAnT system is part of the perception engine, which also includes a stereo-vision system for navigation. The dynamic planning engine leverages the CASPER (Continuous Activity Scheduling Planning Execution and Replanning) continuous planner. Given an input set of mission goals and the autonomous vehicle’s current state, CASPER generates a plan of activities that satisfies as many goals as possible while obeying relevant resource constraints and operation rules. CARACaS uses finite state machines to compose the behavior network for any given mission scenario; these finite state machines make it possible to produce formally correct behavior kernels that guarantee predictable performance.
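As an illustration of this finite-state-machine composition, the following minimal Python sketch (with assumed states and events, not the actual CARACaS behavior kernels) shows how mapping every (state, event) pair to exactly one next state keeps the composite behavior in a well-defined configuration.

```python
# A minimal sketch of mission-level behavior composition with a finite state
# machine. States, events, and transitions are illustrative assumptions.
PATROL, INVESTIGATE, TRACK, RETURN = "PATROL", "INVESTIGATE", "TRACK", "RETURN"

TRANSITIONS = {
    (PATROL,      "new_contact"):      INVESTIGATE,
    (INVESTIGATE, "target_confirmed"): TRACK,
    (INVESTIGATE, "false_positive"):   PATROL,
    (TRACK,       "target_lost"):      PATROL,
    (PATROL,      "low_fuel"):         RETURN,
    (TRACK,       "low_fuel"):         RETURN,
}

class BehaviorFSM:
    def __init__(self, start=PATROL):
        self.state = start

    def dispatch(self, event):
        # Unknown (state, event) pairs are ignored, so the machine always
        # remains in a well-defined state regardless of what is reported.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = BehaviorFSM()
for event in ("new_contact", "target_confirmed", "target_lost", "low_fuel"):
    print(event, "->", fsm.dispatch(event))
```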

For the behavior coordination mechanism, CARACaS uses a method based on multiobjective decision theory (MODT) that combines recommendations from multiple behaviors into a set of control actions representing their consensus. Because the size of the solution space grows exponentially with the number of actions, CARACaS couples the MODT framework with the interval criterion weights method to systematically narrow the set of possible solutions, producing an output orders of magnitude faster than a brute-force search of the action space.
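The toy Python sketch below illustrates the general idea of multiobjective behavior coordination using assumed behavior names, weights, and a small discretized action set. Unlike the real coordinator, which narrows the candidate set via the interval criterion weights method, this example simply enumerates a tiny action space and takes the highest weighted score.

```python
# A toy sketch of consensus behavior coordination: each behavior scores every
# candidate (heading, speed) action, and the weighted sum picks the winner.
import itertools

HEADINGS = range(0, 360, 15)          # candidate headings, deg
SPEEDS = (0.0, 1.0, 2.0, 4.0)         # candidate speeds, m/s

def goto_waypoint(desired_heading):
    """Behavior favoring headings near the bearing to the next waypoint."""
    def score(heading, speed):
        err = min(abs(heading - desired_heading), 360 - abs(heading - desired_heading))
        return (1.0 - err / 180.0) * (speed / max(SPEEDS))
    return score

def avoid_contact(contact_bearing):
    """Behavior penalizing headings that point toward a tracked contact."""
    def score(heading, speed):
        err = min(abs(heading - contact_bearing), 360 - abs(heading - contact_bearing))
        return err / 180.0
    return score

def coordinate(behaviors_and_weights):
    """Return the action with the highest weighted consensus score."""
    best_action, best_score = None, float("-inf")
    for heading, speed in itertools.product(HEADINGS, SPEEDS):
        total = sum(w * b(heading, speed) for b, w in behaviors_and_weights)
        if total > best_score:
            best_action, best_score = (heading, speed), total
    return best_action

# Waypoint roughly east (090) with a contact at 075; with avoidance weighted
# heavily, the consensus steers well clear of the contact: (255, 4.0).
print(coordinate([(goto_waypoint(90.0), 1.0), (avoid_contact(75.0), 2.0)]))
```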

SAVAnT receives sensory input from an inertial navigation system (INS) and six cameras mounted in weather-resistant casings, pointed 60 degrees apart to provide 360-degree coverage with 5 degrees of overlap between each adjacent camera pair. The core components of the system software are as follows. The image server captures raw camera images and INS pose data and “stabilizes” the images so that the horizon is horizontal and centered in each image. The contact server detects objects of interest (contacts) in the stabilized images and calculates an absolute bearing for each contact. The OTCD (object-level tracking and change detection) server interprets series of contact bearings as originating from true targets or false positives, localizes target positions (latitude/longitude) by implicit triangulation, maintains a database of hypothesized true targets, and sends downstream alerts when a new target appears or a known target disappears.
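The sketch below works through, in Python with assumed camera parameters and a local east/north frame, the two geometric steps underlying the contact and OTCD servers: converting a camera index and pixel column into an absolute bearing, and localizing a target by intersecting two bearing rays taken from different vehicle positions, which is the essence of implicit triangulation.

```python
# A minimal sketch (not the flight software) of two SAVAnT-style computations,
# assuming a local east/north frame in meters and bearings measured clockwise
# from north. Camera FOV and image width are illustrative assumptions.
import math

CAM_FOV_DEG = 65.0        # assumed horizontal FOV: 60-degree spacing + 5-degree overlap
IMAGE_WIDTH_PX = 1024     # assumed sensor width

def absolute_bearing(vehicle_heading_deg, camera_index, pixel_col):
    """Absolute bearing (deg, clockwise from north) of a contact.

    camera_index 0..5 selects one of six heads mounted 60 degrees apart;
    pixel_col is the contact's column in that camera's stabilized image.
    """
    mount_azimuth = 60.0 * camera_index   # camera boresight relative to the bow
    # offset of the pixel from the image center, scaled to an angle
    pixel_offset = (pixel_col - IMAGE_WIDTH_PX / 2) / IMAGE_WIDTH_PX * CAM_FOV_DEG
    return (vehicle_heading_deg + mount_azimuth + pixel_offset) % 360.0

def triangulate(p1, brg1_deg, p2, brg2_deg):
    """Intersect two bearing rays taken from positions p1 and p2 (east, north).

    Returns the (east, north) intersection, or None if the bearings are
    nearly parallel and the target cannot be localized yet.
    """
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-6:
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two sightings of the same contact from points 100 m apart localize it
# at roughly (50, 50) in the local frame.
print(triangulate((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))
```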

This work was done by Michael T. Wolf, Christopher Assad, Yoshiaki Kuwata, Andrew Howard, Hrand Aghazarian, David Zhu, Thomas Lu, Ashitey Trebi-Ollennu, and Terry Huntsberger of NASA’s Jet Propulsion Laboratory, California Institute of Technology, for the Office of Naval Research. ONR-0024



This Brief includes a Technical Support Package (TSP). The document “360-Degree Visual Detection and Target Tracking on an Autonomous Surface Vehicle” (reference ONR-0024) is currently available for download from the TSP library.

Defense Tech Briefs Magazine

This article first appeared in the June 2011 issue of Defense Tech Briefs Magazine (Vol. 5 No. 3).



Overview

The document discusses advancements in autonomous surface vehicles (ASVs), focusing on a system called CARACaS (Control Architecture for Robotic Agent Command and Sensing). Developed by the Jet Propulsion Laboratory, CARACaS integrates various components to enhance the autonomy and operational capabilities of ASVs, particularly in maritime environments.

The paper outlines the architecture of the CARACaS system, which includes a dynamic planning engine, a behavior engine, and a perception engine. This architecture is designed to handle the uncertainties of dynamic sea operations, ensuring effective hazard detection, situational awareness, and compliance with maritime navigation rules. The system also facilitates cooperation among different vehicles, whether they are on the surface, underwater, or in the air.

A significant focus of the research is on the SAVAnT (Surface Autonomous Visual Analysis and Tracking) system, which is responsible for contact detection, target tracking, and change detection. The document details on-water experimental setups conducted in Virginia, where ASVs equipped with SAVAnT were tested in various scenarios. These tests aimed to evaluate the system's ability to recognize and track targets, such as a white boat used as a reference, under different conditions.

The results from these experiments demonstrated the effectiveness of the SAVAnT system in identifying and tracking targets over considerable distances, contributing to the development of an omnidirectional maritime perception system. This system is expected to improve the reliability and efficiency of ASV patrol operations, enabling them to operate autonomously for extended periods while ensuring safety and operational effectiveness.

The paper also acknowledges support and funding from the Office of Naval Research and Spatial Integrated Systems, Inc., highlighting the collaborative nature of the research. Overall, the document presents a comprehensive overview of technological advances in ASV autonomy, emphasizing the potential for enhanced maritime surveillance and operational capability through innovative detection and tracking systems. The findings contribute to the broader field of robotics and autonomous systems, demonstrating the value of integrating advanced perception and autonomy technologies for real-world applications in maritime security and asset protection.