Long, Yang, Liu, Li and Shao, Ling (2016) Attribute Embedding with Visual-Semantic Ambiguity Removal for Zero-shot Learning. In: Proceedings of the British Machine Vision Conference (BMVC 2016).
Full text: PDF (Published manuscript), Published Version (7MB)
Abstract
Conventional zero-shot learning (ZSL) methods recognise an unseen instance by projecting its visual features to a semantic space that is shared by both seen and unseen categories. However, we observe that such a one-way paradigm suffers from the visual-semantic ambiguity problem. Namely, the semantic concepts (e.g. attributes) do not correspond explicitly to visual patterns, and vice versa. This problem can lead to large variance in the visual features associated with each attribute. In this paper, we investigate how to remove such semantic ambiguity based on the observed visual appearances. In particular, we propose (1) a novel latent attribute space to bridge the gap between visual appearances and semantic expressions; (2) a dual-graph regularised embedding algorithm called Visual-Semantic Ambiguity Removal (VSAR) that can simultaneously extract the shared components between visual and semantic information and mutually align the data distributions based on the intrinsic local structures of both spaces; (3) a new zero-shot recognition framework that can deal with both instance-level and category-level ZSL tasks. We validate our method on two popular zero-shot learning datasets, AwA and aPY. Extensive experiments demonstrate that our proposed approach significantly outperforms the state-of-the-art methods.
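For context, the abstract contrasts VSAR with the conventional one-way projection paradigm. Below is a minimal sketch of that baseline, not of the paper's VSAR algorithm: a ridge-regression mapping from visual features to the attribute space, followed by nearest-prototype matching against unseen-class attribute signatures. All variable names, shapes, the regulariser, and the synthetic data are assumptions made for illustration.

```python
import numpy as np

# Synthetic stand-ins (assumed shapes): X_seen holds n visual feature vectors
# of dimension d, S_seen their k-dimensional attribute signatures, and
# S_unseen the attribute prototypes of c unseen classes.
rng = np.random.default_rng(0)
X_seen = rng.normal(size=(500, 64))    # seen-class visual features (n x d)
S_seen = rng.normal(size=(500, 10))    # seen-class attribute signatures (n x k)
X_test = rng.normal(size=(20, 64))     # unseen-class test instances
S_unseen = rng.normal(size=(5, 10))    # unseen-class attribute prototypes (c x k)

# One-way projection: ridge regression from visual to semantic space,
# W = (X^T X + lam * I)^{-1} X^T S.
lam = 1.0
d = X_seen.shape[1]
W = np.linalg.solve(X_seen.T @ X_seen + lam * np.eye(d), X_seen.T @ S_seen)

# Recognise an unseen instance by projecting it into the attribute space and
# taking the nearest unseen-class prototype under cosine similarity.
proj = X_test @ W
proj /= np.linalg.norm(proj, axis=1, keepdims=True)
protos = S_unseen / np.linalg.norm(S_unseen, axis=1, keepdims=True)
pred = np.argmax(proj @ protos.T, axis=1)
print(pred)  # predicted unseen-class index per test instance
```

The ambiguity problem the abstract describes arises here because W forces visually dissimilar instances that share an attribute onto the same semantic coordinates; VSAR's latent attribute space and dual-graph regularisation are proposed to relax exactly this one-way constraint.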
Item Type: | Book Section |
---|---|
Faculty \ School: | Faculty of Science > School of Computing Sciences |
Depositing User: | Pure Connector |
Date Deposited: | 07 Feb 2017 02:42 |
Last Modified: | 22 Oct 2022 00:00 |
URI: | https://ueaeprints.uea.ac.uk/id/eprint/62338 |