Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers

Developing a more effective means to communicate with robotic devices.

A future vision for the use of autonomous and intelligent robots in dismounted military operations is for soldiers to interact with robots as teammates, much like soldiers interact with other soldiers. Soldiers will no longer be operators in full control of every movement, as autonomous intelligent systems will have the capability to act without continual human input. However, soldiers will need to use the information available from, or provided by, the robot. One critical need for achieving this vision is the ability of soldiers and robots to communicate with each other. One way to do that is to use human gestures to instruct and command robots.

The use of gestures as a natural means of interacting with devices is a very broad concept that encompasses a range of body movements, including movements of the hands, arms, and legs, facial expressions, eye movements, head movements, and/or 2-dimensional (2-D) swiping gestures against flat surfaces such as touch screens. Gesture-based technology is already in place and in common use, requiring no special instruction for effective operation. A common example of a well-designed gestural command is the use of a hand "wave" to activate devices (e.g., a public bathroom faucet). This concept is also common in gaming interfaces and is now extending to other private and public domains such as automobile consoles.

Soldier using instrumented glove for robot control (left) and communications (right).

Gestures using hand motion are the most common and can be classified by purpose as: a) conversational, b) communicative, c) manipulative, and d) controlling. Conversational gestures are those used to enhance verbal communications, while communicative gestures, such as sign language, comprise the language itself. Manipulative gestures can be used for remote manipulation of devices or in virtual reality settings to interact with virtual objects. Controlling gestures can be used in both virtual and real settings, and are distinguished in that they direct objects through gestures such as static arm/hand postures and/or dynamic movements.
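
This four-way taxonomy maps naturally onto a simple data type for tagging recognized gestures in software. The sketch below is hypothetical and not drawn from the report; only the four category names come from the text.

```python
# Hypothetical encoding of the gesture taxonomy above; only the four
# category names come from the article, the rest is illustrative.
from enum import Enum, auto

class GesturePurpose(Enum):
    CONVERSATIONAL = auto()  # accompanies and enhances verbal communication
    COMMUNICATIVE = auto()   # constitutes the language itself (e.g., sign language)
    MANIPULATIVE = auto()    # remotely manipulates real or virtual objects
    CONTROLLING = auto()     # directs objects via static postures or dynamic movements
```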

Of particular interest are applications and advancements in controlling gestures for human-robot interaction. Gestures can be as simple as a static hand posture or may involve coordinated movements of the entire body. Gesture-based commands to robots have been used in a variety of settings, such as assisting users with special needs, helping shoppers in grocery stores, and providing home assistance. Examples of gestural commands in these settings include "follow me", "go there", or "hand me that". Gesture control is also advancing in industrial settings, where it directs robotic assembly and maneuvering tasks.

One type of gesture control not included in this research is stroke gestures made on a screen (e.g., tablet, smartphone). Rather, this research focuses on free-form gestures made by the hand and arm, technological approaches to recognizing them, and how they may affect effectiveness within a military human-robot application.

This research focuses on five types of tasks that can impact the choice of technological approach: simple commands, complex commands, pointing commands, remote manipulation, and robot-user dialogue. (An illustrative sketch of the first three follows the list.)

  • Simple commands consist of a small set of specific commands/alerts, usually involving movements of the arms and fingers, that are easily distinguishable from one another.
  • Complex commands place higher demands on deliberative cognitive processing, often through use of a larger gesture set and/or combinations of gestural units to communicate multiple concepts.
  • Pointing commands can be used to direct the movement of ground robots, either to convey direction information or to clarify ambiguous speech-based commands.
  • Remote manipulation arises because ground-based mobile robots are often used to handle objects remotely, as in bomb disposal, requiring gestures that communicate manipulation needs.
  • Robot-user dialogue becomes necessary as robots grow more autonomous and command transitions from direct, detailed teleoperation to higher-level instructions.
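
To make the first three task types concrete, the following minimal Python sketch pairs each with a common software pattern: a fixed lexicon for simple commands, sequence parsing for complex commands, and ray-to-heading geometry for pointing. All names here (the gesture labels, the robot methods, parse_complex) are hypothetical illustrations, not elements of the ARL work.

```python
import math

# Simple commands: a small, easily distinguishable set mapped directly
# to robot actions. Gesture labels and robot methods are hypothetical.
SIMPLE_COMMANDS = {
    "halt":      lambda robot: robot.stop(),
    "follow_me": lambda robot: robot.follow_operator(),
    "rally":     lambda robot: robot.return_to_base(),
}

# Complex commands: several gestural units combined into one instruction,
# e.g. ["advance", "left", "slow"] -> a verb plus modifiers.
def parse_complex(units):
    verb, *modifiers = units
    return {"verb": verb, "modifiers": modifiers}

# Pointing commands: project the shoulder-to-wrist ray onto the ground
# plane and convert it to a goal heading (radians) for the robot.
def pointing_heading(shoulder_xy, wrist_xy):
    dx = wrist_xy[0] - shoulder_xy[0]
    dy = wrist_xy[1] - shoulder_xy[1]
    return math.atan2(dy, dx)

if __name__ == "__main__":
    print(parse_complex(["advance", "left", "slow"]))
    print(round(pointing_heading((0.0, 0.0), (1.0, 1.0)), 3))  # ~0.785 rad
```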

This work was done by Linda R. Elliott, Susan G. Hill, and Michael Barnes of the Army Research Laboratory. ARL-0198



This Brief includes a Technical Support Package (TSP). "Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers" (reference ARL-0198) is currently available for download from the TSP library.





This article first appeared in the May 2017 issue of Aerospace & Defense Technology Magazine (Vol. 2 No. 3).



Overview

The document titled "Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers," authored by Linda R. Elliott, Susan G. Hill, and Michael Barnes, was published by the US Army Research Laboratory in July 2016. It presents a comprehensive examination of the integration of gesture-based control systems in robotic platforms, particularly focusing on their application in military settings.

The report begins by outlining the evolution of robotic systems and the increasing need for intuitive control mechanisms that can enhance human-robot interaction. Gesture-based controls are highlighted as a promising solution, allowing soldiers to operate robots using natural movements, which can improve situational awareness and reduce cognitive load during operations.

The authors discuss various types of gesture recognition technologies, including vision-based systems that utilize cameras and sensors to interpret human gestures. The report emphasizes the importance of accuracy and reliability in these systems, as they must function effectively in diverse and dynamic environments typical of military operations.
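
As one concrete example of the vision-based approach the report surveys, the sketch below recognizes a single static posture (an open palm, read as "halt") from webcam frames. It assumes the open-source MediaPipe Hands model and OpenCV; the report does not prescribe any particular library, so treat this as an illustrative stand-in rather than the system the authors evaluated.

```python
# Illustrative only: naive recognition of one static gesture ("halt" = open
# palm) from MediaPipe Hands landmarks; not the system described in the report.
import cv2
import mediapipe as mp

TIPS, PIPS = (8, 12, 16, 20), (6, 10, 14, 18)  # index..pinky tip/PIP landmarks

def is_open_palm(landmarks):
    # A finger counts as extended when its tip sits above its PIP joint
    # (image y grows downward) -- a crude test that assumes an upright hand.
    return all(landmarks[t].y < landmarks[p].y for t, p in zip(TIPS, PIPS))

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        if is_open_palm(lm):
            print("gesture: HALT")  # a real system would dispatch a robot command
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break
cap.release()
```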

Furthermore, the document explores the implications of adopting gesture-based controls for soldiers. It addresses potential benefits such as increased operational efficiency, enhanced communication between soldiers and robots, and the ability to perform complex tasks without the need for extensive training. The authors also consider the challenges associated with implementing these technologies, including the need for robust training programs, the integration of gesture controls with existing systems, and the potential for user fatigue.

The report includes a review of existing research and case studies that demonstrate the effectiveness of gesture-based controls in various scenarios. It highlights the importance of user-centered design in developing these systems to ensure they meet the needs of soldiers in the field.

In conclusion, the document advocates for further research and development in gesture-based control technologies, suggesting that they hold significant potential to transform the way soldiers interact with robotic systems. By enhancing the intuitive nature of these interactions, gesture-based controls could lead to improved mission outcomes and greater operational success in complex military environments. The report is approved for public release, making it available to inform broader discussions on military technology and human-robot collaboration.