Action recognition from arbitrary views using transferable dictionary learning

Zhang, Jingtian, Shum, Hubert P. H., Han, Jungong and Shao, Ling (2018) Action recognition from arbitrary views using transferable dictionary learning. IEEE Transactions on Image Processing, 27 (10). pp. 4709-4723. ISSN 1057-7149

PDF (Published manuscript) - Published Version, available under a Creative Commons Attribution License.


Abstract

Human action recognition is crucial to many practical applications, ranging from human-computer interaction to video surveillance. Most approaches either recognize actions from a fixed view or require knowledge of the view angle, which is usually unavailable in practice. In this paper, we propose a novel end-to-end framework that jointly learns a view-invariant transfer dictionary and a view-invariant classifier. The result is a dictionary that projects real-world 2D video into a view-invariant sparse representation, together with a classifier that recognizes actions from an arbitrary view. The main feature of our algorithm is the use of synthetic data to extract view invariance between 3D and 2D videos during the pre-training phase. This guarantees the availability of training data and removes the hassle of obtaining real-world videos at specific viewing angles. Additionally, to better describe the actions in 3D videos, we introduce a new feature set, called 3D dense trajectories, which effectively encodes trajectory information extracted from 3D videos. Experimental results on the IXMAS, N-UCLA, i3DPost and UWA3DII data sets show improvements over existing algorithms.
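The core operation the abstract describes is projecting a video feature vector into a sparse representation over a learned dictionary. The paper's actual transfer-dictionary learning is not reproduced here; the following is a minimal sketch of the generic sparse-coding step (solving a lasso problem via ISTA) with a random dictionary standing in for a learned, view-invariant one. All names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_code_ista(D, x, lam=0.1, n_iter=200):
    """Approximate argmin_a 0.5*||x - D @ a||^2 + lam*||a||_1 via ISTA.

    D: (feature_dim, n_atoms) dictionary; x: (feature_dim,) feature vector.
    Returns the sparse code a (n_atoms,)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = a - grad / L                   # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# Illustrative stand-in for a learned dictionary: 128 unit-norm random atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)

# A synthetic "video feature" dominated by atom 5, plus small noise.
x = 2.0 * D[:, 5] + 0.01 * rng.standard_normal(64)
a = sparse_code_ista(D, x)
```

In the paper's setting, a classifier would then be applied to codes like `a`; because the dictionary is trained so that different views of the same action map to similar codes, the classifier need not know the viewing angle.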

Item Type: Article
Faculty \ School: Faculty of Science > School of Computing Sciences
Related URLs:
Depositing User: LivePure Connector
Date Deposited: 20 Jul 2018 10:53
Last Modified: 22 Oct 2022 03:58
URI: https://ueaeprints.uea.ac.uk/id/eprint/67682
DOI: 10.1109/TIP.2018.2836323
