Leveraging Hierarchical Parametric Networks for Skeletal Joints Based Action Segmentation and Recognition

Wu, Di and Shao, Ling (2014) Leveraging Hierarchical Parametric Networks for Skeletal Joints Based Action Segmentation and Recognition. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014-06-23 - 2014-06-28, Columbus, OH, USA.

Full text not available from this repository.

Abstract

Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data. A number of approaches have been proposed to extract representative features from 3D skeletal data, most commonly hard-wired geometric or bio-inspired shape-context features. We propose a hierarchical dynamic framework that first extracts high-level skeletal-joint features and then uses the learned representation to estimate the emission probabilities needed to infer action sequences. Currently, Gaussian mixture models are the dominant technique for modeling the emission distribution of hidden Markov models. We show that better action recognition using skeletal features can be achieved by replacing Gaussian mixture models with deep neural networks that contain many layers of features to predict probability distributions over the states of hidden Markov models. The framework can easily be extended to include an ergodic state to segment and recognize actions simultaneously.
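The hybrid approach the abstract describes can be sketched as follows. In a standard DNN-HMM hybrid, the network outputs frame-level state posteriors p(s|x), which are divided by the state priors p(s) to give scaled emission likelihoods, and the action sequence is then inferred by Viterbi decoding. This is a minimal illustration of that generic recipe, not the paper's implementation; the posteriors, priors, and transition matrix below are made-up toy values.

```python
import numpy as np

def viterbi(log_emis, log_trans, log_init):
    """Viterbi decoding: most likely HMM state path given per-frame
    log emission scores, log transition matrix, and log initial probs."""
    T, S = log_emis.shape
    delta = log_init + log_emis[0]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans           # (from_state, to_state)
        back[t] = np.argmax(scores, axis=0)           # best predecessor
        delta = scores[back[t], np.arange(S)] + log_emis[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):           # trace the backpointers
        path[t - 1] = back[t, path[t]]
    return path

# Hypothetical DNN posteriors p(s | x_t) for 4 frames over 2 hidden states.
posteriors = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.1, 0.9]])
prior = np.array([0.5, 0.5])                # state priors p(s)
# Scaled-likelihood trick: log p(x|s) = log p(s|x) - log p(s) + const.
log_emis = np.log(posteriors) - np.log(prior)
log_trans = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
path = viterbi(log_emis, np.log(prior), log_init=np.log(prior)) if False else \
       viterbi(log_emis, log_trans, np.log(prior))
print(path.tolist())  # → [0, 0, 1, 1]
```

Replacing the GMM emission model amounts to swapping in the network's (prior-scaled) posteriors for `log_emis`; the decoding machinery is unchanged, which is what makes the GMM-to-DNN substitution straightforward.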

Item Type: Conference or Workshop Item (Other)
Faculty \ School: Faculty of Science > School of Computing Sciences
Related URLs:
Depositing User: Pure Connector
Date Deposited: 10 Feb 2017 02:29
Last Modified: 08 Jul 2020 23:28
URI: https://ueaeprints.uea.ac.uk/id/eprint/62420
DOI: 10.1109/CVPR.2014.98
