3D Data Acquisition Platform for Human Activity Understanding
Integrating motion-capture devices, 3D vision sensors, and EMG sensors to cross-validate multi-modality data acquisition and to address fundamental research problems: the representation and invariant description of 3D data, human motion modeling and its applications to activity analysis, and computational optimization for large-scale 3D data.
Reliable online recognition and prediction of human actions and activities in temporal sequences has potential applications across a wide range of Army-relevant fields: video surveillance, warfighter assistance, human-computer interfaces, intelligent humanoid robots, unmanned and autonomous vehicles, and the diagnosis, assessment, and treatment of musculoskeletal disorders. A computational approach to action prediction can extend these capabilities to machines and promote further research in human prediction and intention sensing.
A practical prediction system must respond rapidly to partial observations, which poses a new challenge for computational models and motivates further progress in machine learning. Moreover, because action prediction must model temporal structure, it may also yield important advances in action recognition. The underlying goal is to enhance the DoD's visual-intelligence capabilities by leveraging automatic human activity understanding built on 3D data acquisition platforms.
Using 2D visual information captured by single or multiple cameras for human activity recognition has been extensively studied and applied to real-world systems over the past decade. However, an open problem remains: how to generalize existing models and frameworks to robust, viewpoint-independent recognition, and even prediction, of diverse human actions and activities in real environments. Recent advances in 3D motion-capture technology, 3D depth cameras using structured-light or time-of-flight sensors, and 3D information recovery from 2D images and videos have provided commercially viable hardware platforms for capturing 3D data in real time, opening a potential breakthrough path to these problems.
The 3D human data acquisition platform used for this research consists of a set of 3D motion-capture sensors (e.g., Vicon) and a set of 3D cameras (e.g., Kinect) that are synchronized and integrated to cross-validate data acquisition, as shown in the accompanying figure. As illustrated in the computing (right) module, new methodologies of 3D motion reconstruction and 3D visual modeling will be developed to bridge the gap between vision and motion data and form the computational component that drives interactions. The gap between the middle-level and low-level data flows is filled by parametric and composable low-dimensional manifold representations. Such integrated data acquisition and methodologies will link the visual representations to quantitative biomechanical assessment of human movements in the form of immersive activities, aiding the development of human models and the progressive parametric refinement of modeling.
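The brief does not give implementation details for the synchronization step, but a minimal sketch of the idea is nearest-timestamp alignment: pairing each frame from a slower sensor (e.g., a 30 Hz depth camera) with the closest sample from a faster one (e.g., 120 Hz motion capture) so the two modalities can cross-validate one another. All function names, rates, and the skew tolerance below are illustrative assumptions, not part of the published work.

```python
# Hypothetical sketch of nearest-timestamp alignment between two sensor
# streams (e.g., motion capture at ~120 Hz, depth camera at ~30 Hz).
# Rates, names, and the max_skew tolerance are illustrative assumptions.
from bisect import bisect_left

def nearest_index(timestamps, t):
    """Index of the timestamp closest to t (timestamps sorted ascending)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Compare the neighbors on either side of t and keep the closer one.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def align_streams(mocap_ts, camera_ts, max_skew=0.02):
    """Pair each camera frame with the nearest mocap sample.

    Returns (camera_index, mocap_index) pairs whose timestamps differ
    by at most max_skew seconds; frames with no close match are dropped.
    """
    pairs = []
    for ci, t in enumerate(camera_ts):
        mi = nearest_index(mocap_ts, t)
        if abs(mocap_ts[mi] - t) <= max_skew:
            pairs.append((ci, mi))
    return pairs

# Example: 120 Hz mocap vs. 30 Hz camera over 0.1 s; every camera frame
# lands exactly on every fourth mocap sample.
mocap_ts = [i / 120.0 for i in range(13)]
camera_ts = [i / 30.0 for i in range(4)]
print(align_streams(mocap_ts, camera_ts))  # → [(0, 0), (1, 4), (2, 8), (3, 12)]
```

In a real deployment the two devices run on unsynchronized clocks, so a shared trigger or a per-stream clock-offset estimate would be needed before this kind of matching is meaningful.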
This work was done by Yun Fu of Northeastern University for the Army Research Office. For more information, download the Technical Support Package (free white paper) below. ARL-0243
This Brief includes a Technical Support Package (TSP).
3D Data Acquisition Platform for Human Activity Understanding (reference ARL-0243) is currently available for download from the TSP library.