Speaker-independent machine lip-reading with speaker-dependent viseme classifiers

Bear, Helen L., Cox, Stephen and Harvey, Richard ORCID: https://orcid.org/0000-0001-9925-8316 (2015) Speaker-independent machine lip-reading with speaker-dependent viseme classifiers. In: FAAVSP - The 1st Joint Conference on Facial Analysis, Animation and Auditory-Visual Speech Processing, 2015-09-11 - 2015-09-13, Austria.

Full text not available from this repository.

Abstract

In machine lip-reading, which is the identification of speech from visual-only information, there is evidence that visual speech is highly dependent upon the speaker (Cox et al., 2008). Here, we use a phoneme-clustering method to form new phoneme-to-viseme maps for both individual and multiple speakers. We use these maps to examine how similarly speakers talk visually. We conclude that, broadly speaking, speakers share the same repertoire of mouth gestures; where they differ is in how they use those gestures.
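
The abstract does not describe the clustering procedure itself, so the following is only a minimal illustrative sketch of how a speaker-dependent phoneme-to-viseme map could be built by clustering phonemes. It assumes the input is a phoneme confusion matrix from a visual recogniser and uses a simple greedy agglomerative merging rule; the confusion data, phoneme set, and merging criterion here are hypothetical stand-ins, not the authors' actual method.

    import numpy as np

    def cluster_phonemes_to_visemes(confusion, phonemes, n_visemes):
        """Greedily merge the two clusters whose phonemes are most often
        confused with one another, until n_visemes clusters remain.
        Returns a phoneme-to-viseme map such as {'p': 'V0', 'b': 'V0', ...}."""
        clusters = [[i] for i in range(len(phonemes))]
        while len(clusters) > n_visemes:
            best_pair, best_score = None, -1.0
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    # Average symmetric confusion between the two candidate clusters.
                    score = np.mean([confusion[a, b] + confusion[b, a]
                                     for a in clusters[i] for b in clusters[j]])
                    if score > best_score:
                        best_pair, best_score = (i, j), score
            i, j = best_pair
            clusters[i] += clusters.pop(j)
        return {phonemes[p]: f"V{k}" for k, c in enumerate(clusters) for p in c}

    if __name__ == "__main__":
        phonemes = ["p", "b", "m", "f", "v", "t"]          # toy phoneme set
        rng = np.random.default_rng(0)
        confusion = rng.random((6, 6))                      # stand-in for real confusions
        print(cluster_phonemes_to_visemes(confusion, phonemes, n_visemes=3))

Running the same sketch on confusion matrices from different speakers would yield per-speaker maps that can then be compared, which is the kind of analysis the abstract describes.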

Item Type: Conference or Workshop Item (Paper)
Faculty \ School: Faculty of Science
Faculty of Science > School of Computing Sciences
UEA Research Groups: Faculty of Science > Research Groups > Interactive Graphics and Audio
Faculty of Science > Research Groups > Smart Emerging Technologies
Depositing User: Pure Connector
Date Deposited: 25 Jul 2015 06:50
Last Modified: 20 Jun 2023 14:36
URI: https://ueaeprints.uea.ac.uk/id/eprint/53966
