Spatio-temporal steerable pyramid for human action recognition

Zhen, Xiantong and Shao, Ling (2013) Spatio-temporal steerable pyramid for human action recognition. In: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE Press. ISBN 978-1-4673-5545-2

Full text not available from this repository.

Abstract

In this paper, we propose a novel holistic representation based on the spatio-temporal steerable pyramid (STSP) for human action recognition. The spatio-temporal Laplacian pyramid provides an effective technique for multi-scale analysis of video sequences: by decomposing spatio-temporal volumes into band-passed sub-volumes, patterns residing at different scales are well localized. Three-dimensional separable steerable filters are then applied to each sub-volume to capture spatio-temporal orientation information efficiently. The outputs of each quadrature pair of steerable filters are squared and summed to yield a more robust measure of motion energy. To make the representation invariant to shifts and applicable to coarsely extracted bounding boxes around the performed actions, max pooling is employed both between filter responses at adjacent scales and over local spatio-temporal neighborhoods. By combining multi-scale and multi-orientation analysis with feature pooling, STSP produces a compact yet informative and invariant representation of human actions. We conduct extensive experiments on the KTH, IXMAS and HMDB51 datasets, where the proposed STSP achieves results comparable to state-of-the-art methods.
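The pipeline the abstract outlines — band-pass decomposition, quadrature filtering, squared-and-summed motion energy, then max pooling — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the filter taps, sigma, pyramid depth, and pooling size are assumptions, and a single 1-D temporal Gabor quadrature pair stands in for the paper's full bank of 3-D separable steerable filters.

```python
import numpy as np
from scipy import ndimage

def laplacian_pyramid_3d(volume, levels=3):
    """Spatio-temporal Laplacian pyramid via differences of Gaussians.
    Each band-passed sub-volume localizes structure at one scale."""
    bands = []
    current = volume.astype(float)
    for _ in range(levels):
        blurred = ndimage.gaussian_filter(current, sigma=1.0)
        bands.append(current - blurred)       # band-passed sub-volume
        current = blurred[::2, ::2, ::2]      # downsample for the next scale
    bands.append(current)                     # low-pass residual
    return bands

def motion_energy(band, t_freq=0.25):
    """Motion energy from a quadrature pair of temporal filters
    (even/odd Gabor along the time axis; a stand-in for the paper's
    3-D separable steerable filters). Squaring and summing the pair
    gives a phase-invariant energy measure."""
    taps = np.arange(-4, 5)
    env = np.exp(-taps**2 / 8.0)
    even = env * np.cos(2 * np.pi * t_freq * taps)   # even-phase filter
    odd = env * np.sin(2 * np.pi * t_freq * taps)    # odd-phase partner
    r_even = ndimage.convolve1d(band, even, axis=0)
    r_odd = ndimage.convolve1d(band, odd, axis=0)
    return r_even**2 + r_odd**2

def stsp_descriptor(volume, levels=3, pool=4):
    """Compact descriptor: max pooling over local spatio-temporal
    neighborhoods of the motion-energy responses at each pyramid level."""
    feats = []
    for band in laplacian_pyramid_3d(volume, levels)[:-1]:
        energy = motion_energy(band)
        pooled = ndimage.maximum_filter(energy, size=pool)[::pool, ::pool, ::pool]
        feats.append(pooled.ravel())
    return np.concatenate(feats)

# Toy video volume: (time, height, width).
video = np.random.rand(16, 32, 32)
desc = stsp_descriptor(video)
print(desc.shape)
```

In practice the descriptor would pool over several orientations and be fed to a classifier; the point of the sketch is only the structure of the representation: multi-scale band-pass analysis, quadrature motion energy, and max pooling for shift tolerance.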

Item Type: Book Section
Faculty \ School: Faculty of Science > School of Computing Sciences
Depositing User: Pure Connector
Date Deposited: 16 Feb 2017 02:26
Last Modified: 22 Apr 2020 11:07
URI: https://ueaeprints.uea.ac.uk/id/eprint/62634
DOI: 10.1109/FG.2013.6553732
