Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers

Developing a more effective means to communicate with robotic devices.

A future vision of the use of autonomous and intelligent robots in dismounted military operations is for soldiers to interact with robots as teammates, much as soldiers interact with one another. Soldiers will no longer be operators in full control of every movement, as the autonomous intelligent systems will have the capability to act without continual human input. However, soldiers will need to use the information available from, or provided by, the robot. One of the critical needs for achieving this vision is the ability of soldiers and robots to communicate with each other. One way to do that is to use human gestures to instruct and command robots.

The use of gestures as a natural means of interacting with devices is a broad concept that encompasses a range of body movements, including movements of the hands, arms, and legs, facial expressions, eye movements, head movements, and 2-dimensional (2-D) swiping gestures against flat surfaces such as touch screens. Gesture-based technology is already in place and commonly used, with no special instruction required for effective use. A common example of a well-designed gestural command is waving a hand to activate a device (e.g., a public bathroom faucet). The concept is also common in gaming interfaces and is now extending to other private and public domains such as automobile consoles.

Soldier using instrumented glove for robot control (left) and communications (right).

Gestures using hand motion are the most common and can be classified by purpose: a) conversational, b) communicative, c) manipulative, and d) controlling. Conversational gestures are those used to enhance verbal communication, while communicative gestures, such as sign language, comprise the language itself. Manipulative gestures can be used for remote manipulation of devices or in virtual reality settings to interact with virtual objects. Controlling gestures can be used in both virtual and real settings, and are distinguished in that they direct objects through gestures such as static arm/hand postures and/or dynamic movements.

Of particular interest are applications and advancements with regard to controlling gestures for human-robot interaction. Gestures can be as simple as a static hand posture or may involve coordinated movements of the entire body. Gesture-based commands to robots have been used in a variety of settings, such as assisting users with special needs, assisting in grocery stores, and home assistance. Examples of gestural commands in these settings include “follow me”, “go there”, and “hand me that”. There are also advancements in various industrial settings to control robotic assembly and maneuvering tasks.
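At its core, a controlling-gesture interface maps each recognized gesture label to a robot command. The following minimal sketch illustrates that dispatch step; the gesture labels and the RobotClient methods are hypothetical placeholders for illustration, not interfaces from the research described here.

```python
# Minimal sketch of a gesture-to-command dispatch layer.
# Gesture labels and RobotClient methods are hypothetical placeholders.

class RobotClient:
    """Stand-in for a real robot control interface."""
    def follow(self, target_id: str) -> None:
        print(f"Following {target_id}")

    def go_to(self, x: float, y: float) -> None:
        print(f"Navigating to ({x}, {y})")

    def halt(self) -> None:
        print("Halting")


def dispatch(robot: RobotClient, gesture: str, context: dict) -> None:
    """Map a recognized gesture label to a robot command."""
    if gesture == "follow_me":
        robot.follow(context["user_id"])
    elif gesture == "go_there":
        # A pointing gesture supplies the target coordinates.
        x, y = context["pointed_location"]
        robot.go_to(x, y)
    elif gesture == "halt":
        robot.halt()
    else:
        raise ValueError(f"Unrecognized gesture: {gesture}")


dispatch(RobotClient(), "go_there", {"pointed_location": (4.0, 2.5)})
```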

One type of gesture control not included in this research is stroke gestures made upon a screen (e.g., tablet, smartphone). Rather, this research focuses on free-form gestures made by the hand and arm, technological approaches to recognizing them, and how they may affect effectiveness within a military human-robot application.
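One widely used approach to recognizing free-form hand and arm gestures from wearable sensors (such as an instrumented glove's accelerometer) is template matching with dynamic time warping (DTW), which tolerates differences in how quickly a gesture is performed. The sketch below illustrates that generic technique with synthetic 1-D traces; it is not the specific recognition method used in this research.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sensor traces."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # trace a advances
                                 d[i][j - 1],      # trace b advances
                                 d[i - 1][j - 1])  # both advance
    return d[n][m]


def classify(trace, templates):
    """Return the template label with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(trace, templates[label]))


# Synthetic accelerometer traces standing in for recorded gesture templates.
templates = {
    "wave":  [0, 1, 0, -1, 0, 1, 0, -1, 0],
    "raise": [0, 0.5, 1, 1.5, 2, 2, 2],
}
observed = [0, 1.1, 0.1, -0.9, 0, 0.9, 0, -1.0, 0.1]  # a slightly noisy wave
print(classify(observed, templates))  # -> "wave"
```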

This research focuses on five types of tasks that can impact the choice of technological approach. They are: simple commands, complex commands, pointing commands, remote manipulation, and robot-user dialogue.

  • Simple commands consist of a small set of specific commands/alerts, usually involving movements of the arms and fingers, that are easily distinguishable from one another.
  • Complex commands are characterized by higher demands for deliberative cognitive processing, often through use of a larger gesture set and/or combinations of gestural units to communicate multiple concepts.
  • Pointing commands can be used to direct the movement of ground robots, either to convey direction information or to clarify ambiguous speech-based commands (see the sketch after this list).
  • Remote manipulation tasks arise because ground-based mobile robots are often used to manipulate objects remotely, as in bomb disposal, necessitating gestures that communicate manipulation commands.
  • Finally, as robots become more autonomous, command of the robot transitions from direct, detailed teleoperation to higher-level commands, necessitating robot-user dialogue.
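
To make the pointing case concrete, a common approach estimates a pointing ray from tracked body joints (for example, shoulder to wrist) and intersects it with the ground plane to obtain a target location for a “go there” command. The sketch below assumes a flat ground plane at z = 0 and uses illustrative joint positions; it is not drawn from the research described here.

```python
def pointed_ground_target(shoulder, wrist, ground_z=0.0):
    """Intersect the shoulder->wrist pointing ray with a flat ground plane.

    shoulder, wrist: (x, y, z) joint positions in a common world frame.
    Returns the (x, y) ground point the user is pointing at, or None if
    the ray points level with or above the horizon.
    """
    sx, sy, sz = shoulder
    wx, wy, wz = wrist
    dx, dy, dz = wx - sx, wy - sy, wz - sz  # ray direction
    if dz >= 0:  # arm level or raised: ray never meets the ground
        return None
    t = (ground_z - sz) / dz  # ray parameter where the ray hits z = ground_z
    return (sx + t * dx, sy + t * dy)


# Example: shoulder at 1.5 m, wrist slightly lower and forward.
print(pointed_ground_target((0.0, 0.0, 1.5), (0.4, 0.0, 1.3)))
# -> (3.0, 0.0): a candidate destination for a "go there" command
```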

This work was done by Linda R. Elliott, Susan G. Hill, and Michael Barnes of the Army Research Laboratory. ARL-0198



This Brief includes a Technical Support Package (TSP). “Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers” (reference ARL-0198) is currently available for download from the TSP library.
