Decoding visemes: improving machine lip-reading

Bear, Helen and Harvey, Richard ORCID: https://orcid.org/0000-0001-9925-8316 (2016) Decoding visemes: improving machine lip-reading. In: International Conference on Acoustics, Speech, and Signal Processing, 2016-03-21 - 2016-03-25.

Full text: PDF (Accepted Version), 469 kB

Abstract

In machine lip-reading, speech is recognised from a visual signal alone. Current work often uses viseme classification supported by language models, with varying degrees of success. A few recent works suggest that phoneme classification, in the right circumstances, can outperform viseme classification. In this work we present a novel two-pass method for training phoneme classifiers which uses previously trained viseme classifiers in the first pass. With our new training algorithm, we show classification performance that significantly improves on previous lip-reading results.
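The abstract describes the two-pass method only at a high level. The sketch below is an illustrative reading of that idea in Python, under stated assumptions: it uses toy diagonal-Gaussian class models and a hypothetical phoneme-to-viseme map (P2V), whereas the paper itself trains HMM classifiers over tracked lip features. It is a sketch of the seeding idea, not the authors' implementation.

import numpy as np

# Hypothetical phoneme-to-viseme grouping (illustrative only; the paper uses
# published phoneme-to-viseme maps, not this toy one).
P2V = {"p": "V_bilabial", "b": "V_bilabial", "m": "V_bilabial",
       "f": "V_labiodental", "v": "V_labiodental",
       "t": "V_alveolar", "d": "V_alveolar"}

def fit_gaussian(X):
    # Diagonal-covariance Gaussian: per-feature mean and variance.
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, model):
    mean, var = model
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def two_pass_train(features, labels, shrink=0.5):
    # Pass 1: train viseme-level models on data pooled over each viseme class.
    # Pass 2: seed each phoneme model from its parent viseme model, then
    #         re-estimate on the phoneme's own data.
    # `features` is a list of 1-D feature vectors, `labels` the matching
    # phoneme labels (every label must appear in P2V).
    feats = {p: np.vstack([f for f, l in zip(features, labels) if l == p])
             for p in set(labels)}
    viseme_models = {}
    for v in {P2V[p] for p in feats}:
        pooled = np.vstack([feats[p] for p in feats if P2V[p] == v])
        viseme_models[v] = fit_gaussian(pooled)
    phoneme_models = {}
    for p, X in feats.items():
        v_mean, v_var = viseme_models[P2V[p]]
        p_mean, p_var = fit_gaussian(X)
        # Interpolate between the viseme seed and the phoneme-specific estimate.
        phoneme_models[p] = (shrink * v_mean + (1 - shrink) * p_mean,
                             shrink * v_var + (1 - shrink) * p_var)
    return phoneme_models

def classify(x, models):
    # Assign the phoneme whose model gives the highest log-likelihood.
    return max(models, key=lambda p: log_likelihood(x, models[p]))

The seeding step in pass 2 reflects the intuition behind the paper's approach: phonemes within one viseme class produce visually similar data, so a model trained on the pooled viseme data gives a more robust starting point for each phoneme classifier than training it from scratch on sparse per-phoneme data.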

Item Type: Conference or Workshop Item (Poster)
Uncontrolled Keywords: visemes, weak learning, visual speech, lip-reading, recognition, classification
Faculty \ School: Faculty of Science
Faculty of Science > School of Computing Sciences
UEA Research Groups: Faculty of Science > Research Groups > Interactive Graphics and Audio
Faculty of Science > Research Groups > Smart Emerging Technologies
Depositing User: Pure Connector
Date Deposited: 22 Mar 2016 09:51
Last Modified: 20 Apr 2023 01:15
URI: https://ueaeprints.uea.ac.uk/id/eprint/57978
