Making Machines Curious

An uncrewed underwater vehicle (UUV).

Remote and autonomous vehicles are now helping us make discoveries in places where humans cannot easily go. A recent mission to explore marine life at the bottom of the Clarion-Clipperton Zone between Hawaii and Mexico, over 5,000 meters below the surface at its deepest point, discovered over 30 potential new species.

These uncrewed underwater vehicles (UUVs) are running missions that allow humans to make better decisions (the Clarion-Clipperton survey was conducted to assess the impact of seabed mining). They’re also mapping the unknown deep sea, following a trajectory and mission instructions designed before the dive, then “flying” just a few meters off the seafloor for hours while relaying data to a ship above them. These are just two examples; vehicles can be programmed to find old sea mines buried in the ocean floor, inspect pipelines for leaks, or search for lost shipping containers.

AutoTRap Onboard™ automated target recognition and sonar image processing is being tested on Teledyne Gavia platforms for mine detection. (Image: Charles River Analytics)

The problem is, right now these autonomous vehicles lack an important trait: curiosity. They execute a preset pattern (like a lawnmower search), or they collect data and pass it to humans, who decide which objects are interesting and direct the vehicle back to those targets.

Soon, autonomous vehicles will execute these missions as true teammates: given a general mission directive and a search area, a machine will be released to explore and decide for itself what needs further inspection, rather than being remote-controlled or kept in constant contact for instructions.

Curious machines can modify their mission to get more information. They don’t just identify targets; they can realize they’ve found a new kind of target. They know enough to say, “That looks like a starfish, kind of, but it’s not what I was expecting. I need to know more.” Of course, machines don’t really speak like this… yet. However, the emerging field of explainable AI (XAI) is giving deep learning systems a voice with which they can convey the reasons behind their decisions and actions in a way that is natural and easy for humans to understand.

NEMO combines information from the Awarion™ autonomous lookout system with deep-learning YOLO neural networks and probabilistic data fusion methods to accurately detect and classify marine mammals in visual and acoustic sensor feeds. (Image: Charles River Analytics)

The future of autonomous vehicles lies in very long-term missions in unexplored locations and environments, from the deep sea to outer space. When a vehicle comes across an anomaly, it needs to recognize that what it’s observing is new and interesting, then decide whether to get closer or to follow it and learn more.

But creating a curious machine is not easy. To be curious, a machine must discern what’s out of the ordinary; in other words, it must perform anomaly detection. Anomaly detection is complicated. It requires the machine to perceive something (detection) and know what it is or isn’t (classification). This applies to stationary objects, like undersea mines, to moving objects, and even to groups of objects moving together, like marine life. That last case can require observation over days or weeks to gather enough data to determine what normal behavior is (that is, a baseline for pattern-of-life analysis).
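As a rough sketch of that idea (illustrative only, with made-up feature names and thresholds, not any particular vehicle’s software), the snippet below accumulates a running baseline of observed features over time and flags a new observation that falls far outside it:

```python
import numpy as np

class PatternOfLifeBaseline:
    """Toy baseline model: learn the 'normal' range of a feature vector
    (e.g., object size, echo strength, swim speed) over time, then flag
    observations that fall far outside that baseline."""

    def __init__(self, n_features, z_threshold=4.0):
        self.n = 0
        self.mean = np.zeros(n_features)
        self.m2 = np.zeros(n_features)   # running sum of squared deviations
        self.z_threshold = z_threshold

    def update(self, x):
        """Fold a new observation into the running mean/variance (Welford's method)."""
        x = np.asarray(x, dtype=float)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x):
        """True if any feature deviates from the baseline by more than z_threshold sigmas."""
        if self.n < 30:                  # not enough history yet to judge
            return False
        std = np.sqrt(self.m2 / (self.n - 1)) + 1e-9
        z = np.abs((np.asarray(x, dtype=float) - self.mean) / std)
        return bool(np.any(z > self.z_threshold))

# Hypothetical usage: features = [object_length_m, echo_strength_dB, speed_m_s]
baseline = PatternOfLifeBaseline(n_features=3)
rng = np.random.default_rng(0)
for _ in range(200):                     # "days" of ordinary seafloor contacts
    baseline.update([0.3, -42.0, 0.1] + rng.normal(0.0, [0.05, 1.5, 0.03]))
print(baseline.is_anomalous([2.5, -20.0, 1.8]))   # large, loud, fast -> anomalous
```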

How Do We Construct Curiosity?

Making at-sea vehicles curious members of a human-machine team requires more advanced R&D in many fields: perception, AI algorithms, world modeling, navigation, anomaly detection, human-understandable communication, and more. Let’s examine a few.

The ALPACA machine learning agent not only learns how to navigate in outdoor environments, it also learns about its own competence at the task. (Image: Charles River Analytics)

Perception: Signal Processing and Data Interpretation. Visual sensors (EO/IR) become less effective underwater, so their data must be fused with data from acoustic sensors (sonar, Doppler, vibrometers). Advances in onboard image processing are needed to remove artifacts like “marine snow” (bright spots created when light reflects off particles suspended in the water) so the images are clear enough to support decision-making. All sensors must become smaller and require less energy to support longer missions that fully explore an area or track species’ patterns of life.
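As one illustrative approach (a toy filter, not the onboard pipeline described here), marine snow can be treated as impulsive noise: compare each pixel to its local median and replace only the unusually bright specks:

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_marine_snow(image, kernel_size=5, brightness_z=3.0):
    """Replace small, unusually bright specks (a rough stand-in for marine snow)
    with the local median; larger structures are left mostly intact."""
    image = image.astype(float)
    smoothed = median_filter(image, size=kernel_size)
    residual = image - smoothed                       # bright specks stand out here
    threshold = residual.mean() + brightness_z * residual.std()
    snow_mask = residual > threshold
    cleaned = np.where(snow_mask, smoothed, image)
    return cleaned, snow_mask

# Hypothetical usage on a grayscale seafloor frame:
frame = np.random.rand(480, 640) * 0.2
frame[100, 200] = 1.0                                 # a simulated snow particle
cleaned, mask = suppress_marine_snow(frame)
print(mask.sum(), "pixels flagged as marine snow")
```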

Onboard Intelligence: Autonomy, Planning, Situational and Self-Awareness. We need a big shift in AI and machine learning to incorporate true situational awareness; that is, awareness of both the external environment and the machine’s own internal state. Artificial “intelligence” doesn’t guarantee mission success; how an AI performs depends on its knowledge of the environment and everything in it, including itself.

To accurately identify whether something is worth investigating in an image of the seafloor, an AI must also know the seafloor type, the state of the water column, and what is usual for that environment. It also needs to be aware of its own performance; if something is wrong with a sensor, it must discount that sensor’s observations, the same way we would if we were wearing glasses with a smudge on the lens. At a higher level, it needs to know if it has enough experience in a certain environment, or if it has previously performed well or poorly in that environment.
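A minimal sketch of that kind of self-aware discounting, with hypothetical sensor names and health scores (not Charles River’s software), might weight each sensor’s estimate by both its reported variance and its self-assessed health:

```python
def fuse_estimates(estimates):
    """Combine (value, variance, health) tuples from several sensors.
    A sensor's effective variance is inflated as its self-assessed health
    (0.0 = failed, 1.0 = nominal) drops, so unhealthy sensors count for less."""
    num, den = 0.0, 0.0
    for value, variance, health in estimates:
        if health <= 0.0:
            continue                      # sensor declared unusable: ignore it
        eff_var = variance / health       # low health -> large effective variance
        weight = 1.0 / eff_var
        num += weight * value
        den += weight
    if den == 0.0:
        raise ValueError("no usable sensors")
    return num / den, 1.0 / den           # fused value and fused variance

# Hypothetical altitude-above-seafloor estimates (meters): (value, variance, health)
readings = [
    (5.2, 0.04, 1.0),   # acoustic altimeter, nominal
    (6.9, 0.04, 0.2),   # camera-based estimate, lens partly fouled
]
altitude, variance = fuse_estimates(readings)
print(round(altitude, 2), round(variance, 4))   # pulled toward the healthy sensor
```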

All this information must feed into a decision-making process that supports curious behaviors. Decision-making software (the combination of AI algorithms and knowledge models that makes a machine exhibit curiosity) must be able to run on small, lightweight, low-power processors onboard the vehicle.

Anomaly Detection: Target Detection, Classification, and World Modeling. Advances in target detection and classification are necessary, but on their own they can’t determine whether an object is unusual. That judgment requires a model of the environment and an understanding of what is normal within it.

Currently, baseline parameters are mostly provided by humans. By replacing these hardcoded rules with richer models, a machine can know, for example, how water and sediment tend to mix in a certain underwater environment, and how the information returned by the sonar sensor should be interpreted differently as a result.
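A toy version of that shift, using invented numbers: rather than one global detection threshold, a sonar return is judged against a learned background distribution for the current seafloor type:

```python
# Learned (here: hand-written, purely illustrative) background statistics of
# sonar backscatter for different seafloor types, in dB: (mean, std).
BACKGROUND_MODEL = {
    "fine_sediment": (-38.0, 2.5),   # soft mud scatters weakly and predictably
    "rocky_outcrop": (-24.0, 6.0),   # rock is loud and highly variable
}

def is_return_unusual(return_db, seafloor_type, sigmas=3.0):
    """Flag a sonar return only if it is unusual *for this environment*,
    rather than comparing it against a single hardcoded threshold."""
    mean, std = BACKGROUND_MODEL[seafloor_type]
    return abs(return_db - mean) > sigmas * std

# The same -20 dB return is an anomaly over mud but unremarkable over rock:
print(is_return_unusual(-20.0, "fine_sediment"))   # True
print(is_return_unusual(-20.0, "rocky_outcrop"))   # False
```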

Development of a world model of seafloor topography is already in progress. Companies like Terradepth are using fleets of autonomous submersibles to generate detailed bathymetric maps, working toward a Google Earth-style view both above and below the water’s surface.

Human-Machine Collaboration and Communication. Current methods of docking and transferring data miss opportunities for discovery. Because raw data is handed off (or even live-transmitted) to humans for analysis, autonomous vehicles must wait to be redeployed or directed toward an interesting target.

Consider the alternative: a machine determines a finding is interesting enough to share, so it modifies its behavior—navigating toward the surface where it can communicate what it’s found, then diving back down to find the next new thing. To make this real, humans must trust their machine teammate. To earn that trust, a machine must know when it may need human help to perform well, and it must be able to explain its decisions and actions in a language humans can understand.
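Sketched as a simple mission loop (hypothetical modes and thresholds, not an actual vehicle controller), that surface-and-report behavior might look like this:

```python
from enum import Enum, auto

class Mode(Enum):
    SURVEY = auto()
    INVESTIGATE = auto()
    SURFACE_AND_REPORT = auto()

def next_mode(mode, finding_interest, report_worthy=0.8, look_closer=0.5):
    """Pick the next behavior from the current mode and how 'interesting'
    the latest finding is (0.0-1.0, e.g., a score from an anomaly detector)."""
    if mode == Mode.SURVEY:
        if finding_interest >= report_worthy:
            return Mode.SURFACE_AND_REPORT   # worth telling the humans now
        if finding_interest >= look_closer:
            return Mode.INVESTIGATE          # curious: take a closer look first
        return Mode.SURVEY
    if mode == Mode.INVESTIGATE:
        return Mode.SURFACE_AND_REPORT if finding_interest >= report_worthy else Mode.SURVEY
    return Mode.SURVEY                       # after reporting, dive back down and resume

# Hypothetical run: routine survey, then an increasingly interesting contact.
mode = Mode.SURVEY
for interest in [0.1, 0.6, 0.9, 0.2]:
    mode = next_mode(mode, interest)
    print(interest, "->", mode.name)
```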

Curiosity-Enabling Technologies

Scientists at Charles River Analytics are conducting leading-edge R&D in many of the areas required to encode curiosity into autonomous underwater and surface vessels. AutoTRap Onboard™ is a target detection and classification system that delivers object type, confidence of detection, and position to a UUV’s navigation system, letting it act on sonar data in real time. Awarion™ is an AI and computer vision system whose camera sweeps the horizon and autonomously finds and tracks whales, ships, and other objects at sea for follow-up observations. Laser Doppler vibrometry is also being explored for aerial, surface, and subsurface sensing.

In projects sponsored by NOAA and DARPA, sensors and detection and classification software are being developed to support surveys of marine mammals. The company has also delivered learning agents that are aware of their own competence, along with software that can explain how an AI performs classification (such as detecting pedestrians in images) or how it makes decisions in game environments.

Research on decision making under uncertainty is ongoing with Figaro, a probabilistic programming language. World modeling is supported by Scruff, a newly released framework that combines different modeling paradigms coherently so they can be used for AI reasoning. In one application, these languages made it possible to convert “normal” software into adaptive software, allowing navigation and path-planning software onboard a UUV to quickly acclimate to new conditions.
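Figaro is embedded in Scala and Scruff is its own framework, so the following is only a generic illustration of decision making under uncertainty (written in Python, with invented probabilities), not either tool’s API: a Bayesian update of a world-model hypothesis followed by an expected-value decision to investigate:

```python
def posterior_new_kind(prior, p_obs_given_new, p_obs_given_known):
    """Bayes' rule for a two-hypothesis world model:
    is this contact a new kind of object, or something already known?"""
    evidence = p_obs_given_new * prior + p_obs_given_known * (1.0 - prior)
    return (p_obs_given_new * prior) / evidence

def should_investigate(p_new, value_of_discovery=10.0, cost_of_detour=1.0):
    """Investigate when the expected value of a closer look exceeds its cost."""
    return p_new * value_of_discovery > cost_of_detour

# Hypothetical numbers for an odd-looking sonar contact:
p_new = posterior_new_kind(prior=0.02, p_obs_given_new=0.7, p_obs_given_known=0.05)
print(round(p_new, 3), should_investigate(p_new))   # ~0.222, True
```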

For uncrewed systems to be truly autonomous and trusted human-machine teammates, they must be curious. Curiosity is a specific form of intelligence, built on knowing what’s normal and what’s not, having the autonomy and ability to make a decision, and then executing behaviors to investigate further. To be trustworthy, machines must be aware of their own performance and communicate understandably about their decisions and actions. The pace of discovery will increase dramatically once fleets of curious machines can be deployed in service of exploration and conservation.

This article was written by Arjuna Balasuriya, Sr. Robotics Scientist, and Karen Pfautz, Principal Science Writer, Charles River Analytics (Cambridge, MA).