Towards video realistic synthetic visual speech

Theobald, B, Bangham, JA, Matthews, I and Cawley, GC (2002) Towards video realistic synthetic visual speech. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2002), 2002-05-13 - 2002-05-17.

Full text not available from this repository.

Abstract

In this paper we present initial work towards a video-realistic visual speech synthesiser based on statistical models of shape and appearance. A synthesised image sequence corresponding to an utterance is formed by concatenating synthesis units (in this case phonemes) drawn from a pre-recorded corpus of training data. A smoothing spline is applied to the concatenated parameters to ensure smooth transitions between frames, and the resultant parameters are applied to the model. Early results look promising.
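The smoothing step described in the abstract can be illustrated with a minimal sketch: per-phoneme sequences of model parameters are concatenated and each parameter trajectory is smoothed across the unit boundaries. This is not the authors' implementation; the segment shapes, the two-dimensional parameter space, and the use of scipy's UnivariateSpline as the smoothing spline are illustrative assumptions.

```python
"""Hedged sketch of concatenating phoneme units and smoothing the
resulting parameter trajectories, as outlined in the abstract."""
import numpy as np
from scipy.interpolate import UnivariateSpline


def synthesise_parameters(segments, smoothing=1.0):
    """Concatenate per-phoneme parameter sequences and smooth each dimension.

    segments  : list of (n_frames_i, n_params) arrays, one per phoneme unit
                selected from the pre-recorded corpus (hypothetical layout).
    smoothing : spline smoothing factor; larger values give smoother but
                less exact transitions at unit joins.
    Returns an (n_frames_total, n_params) array of smoothed parameters.
    """
    # Naive concatenation leaves discontinuities at the unit boundaries.
    params = np.vstack(segments)
    frames = np.arange(len(params))

    # Fit a smoothing spline independently to each parameter trajectory
    # and re-evaluate it at every frame.
    smoothed = np.column_stack([
        UnivariateSpline(frames, params[:, j], s=smoothing)(frames)
        for j in range(params.shape[1])
    ])
    return smoothed


if __name__ == "__main__":
    # Two hypothetical phoneme units with 2-D shape/appearance parameters;
    # the offset between them creates a jump at the join.
    rng = np.random.default_rng(0)
    unit_a = rng.normal(0.0, 0.1, size=(10, 2))
    unit_b = rng.normal(1.0, 0.1, size=(12, 2))
    traj = synthesise_parameters([unit_a, unit_b], smoothing=2.0)
    print(traj.shape)  # (22, 2)
```

The smoothed parameter vectors would then be fed back through the shape and appearance model to render the synthesised image sequence; that rendering step is outside the scope of this sketch.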

Item Type: Conference or Workshop Item (Paper)
Faculty \ School: Faculty of Science > School of Computing Sciences
University of East Anglia > Faculty of Science > Research Groups > Computational Biology > Machine learning in computational biology
Depositing User: Vishal Gautam
Date Deposited: 04 Jul 2011 09:49
Last Modified: 25 Jul 2018 01:58
URI: https://ueaeprints.uea.ac.uk/id/eprint/21917
DOI: 10.1109/ICASSP.2002.5745507
