Automatic visual-only language identification: A preliminary study

Newman, Jacob and Cox, Stephen (2009) Automatic visual-only language identification: A preliminary study. In: IEEE International Conference on Acoustics, Speech and Signal Processing, 2009-04-19 - 2009-04-24.

Full text not available from this repository.

Abstract

We describe experiments in visual-only language identification, in which only lip-shape and lip-motion are used to determine the language of a spoken utterance. We focus on the task of discriminating between two or three languages spoken by the same speaker, and we have recorded a suitable database for these experiments. We use a standard audio language identification approach in which the feature vectors are tokenized and a language model for each language is then estimated over the resulting stream of tokens. Although speaking rate appeared to affect the results, different languages spoken at rather similar speeds were discriminated as well as a single language spoken at three extreme speeds, indicating that a genuine language effect is present in our results.
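
The tokenize-then-language-model pipeline mentioned in the abstract follows the general pattern of phone-recognition-followed-by-language-modelling (PRLM) systems used in audio language identification. The sketch below illustrates that general idea only, not the paper's actual system: visual feature vectors are quantised into discrete tokens (here with k-means, an assumption) and a bigram model per language scores the token stream of an utterance. The class name, token inventory size, and smoothing scheme are all illustrative choices.

    # Minimal sketch of token-stream language identification, assuming
    # k-means vector quantisation and bigram language models with
    # add-one smoothing. Not the authors' implementation.
    import numpy as np
    from sklearn.cluster import KMeans

    class TokenLanguageID:
        def __init__(self, n_tokens=64):
            self.n_tokens = n_tokens
            self.tokenizer = KMeans(n_clusters=n_tokens, n_init=10, random_state=0)
            self.bigram_logp = {}  # language -> (n_tokens x n_tokens) log-prob matrix

        def fit(self, features_by_language):
            """features_by_language: dict mapping language name -> list of
            (frames x dims) arrays of visual (lip-shape/motion) feature vectors."""
            all_frames = np.vstack([f for seqs in features_by_language.values() for f in seqs])
            self.tokenizer.fit(all_frames)  # learn the shared token inventory
            for lang, seqs in features_by_language.items():
                counts = np.ones((self.n_tokens, self.n_tokens))  # add-one smoothing
                for f in seqs:
                    toks = self.tokenizer.predict(f)
                    for a, b in zip(toks[:-1], toks[1:]):
                        counts[a, b] += 1
                self.bigram_logp[lang] = np.log(counts / counts.sum(axis=1, keepdims=True))

        def classify(self, features):
            """Return the language whose bigram model assigns the utterance's
            token stream the highest log-likelihood."""
            toks = self.tokenizer.predict(features)
            scores = {lang: sum(lp[a, b] for a, b in zip(toks[:-1], toks[1:]))
                      for lang, lp in self.bigram_logp.items()}
            return max(scores, key=scores.get)

The key design point this sketch reflects is that discrimination comes from the statistics of token sequences rather than from the raw features themselves, which is why speaking rate can confound the result and motivates the paper's comparison of similar-speed languages against a single language spoken at extreme speeds.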

Item Type: Conference or Workshop Item (Paper)
Faculty \ School: Faculty of Science > School of Computing Sciences
UEA Research Groups: Faculty of Science > Research Groups > Interactive Graphics and Audio
Faculty of Science > Research Groups > Smart Emerging Technologies
Faculty of Science > Research Groups > Data Science and AI
Depositing User: Nicola Talbot
Date Deposited: 14 Mar 2011 09:18
Last Modified: 10 Dec 2024 01:14
URI: https://ueaeprints.uea.ac.uk/id/eprint/26023
DOI: 10.1109/ICASSP.2009.4960591
