Towards video realistic synthetic visual speech

Theobald, B., Bangham, J. A., Matthews, I. and Cawley, G. C. (2002) Towards video realistic synthetic visual speech. In: IEEE International Conference on Acoustics, Speech and Signal Processing, 2002-05-13 - 2002-05-17.



In this paper we present initial work towards a video-realistic visual speech synthesiser based on statistical models of shape and appearance. A synthesised image sequence corresponding to an utterance is formed by concatenating synthesis units (in this case phonemes) drawn from a pre-recorded corpus of training data. A smoothing spline is applied to the concatenated parameters to ensure smooth transitions between frames, and the resulting parameters are applied to the model. Early results look promising.
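The abstract's pipeline, concatenating per-phoneme model parameter trajectories and then smoothing across the unit boundaries, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `synthesise_trajectory`, the use of SciPy's `UnivariateSpline`, and the smoothing factor `s` are all assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline


def synthesise_trajectory(units, s=0.5):
    """Concatenate per-phoneme parameter trajectories and smooth the joins.

    units : list of (n_frames_i, n_params) arrays, one per phoneme unit
            taken from a pre-recorded corpus (hypothetical data layout).
    s     : smoothing factor; scaled by sequence length below (assumption).
    Returns a (total_frames, n_params) smoothed trajectory whose rows can
    be fed to a statistical shape-and-appearance model to render frames.
    """
    concat = np.vstack(units)            # raw concatenation: jumps at unit joins
    t = np.arange(concat.shape[0])       # frame index as the spline abscissa
    smooth = np.empty_like(concat, dtype=float)
    for j in range(concat.shape[1]):     # smooth each model parameter independently
        spline = UnivariateSpline(t, concat[:, j], s=s * len(t))
        smooth[:, j] = spline(t)
    return smooth
```

Smoothing each parameter dimension independently keeps the example simple; the discontinuity at each concatenation boundary is traded for a small fitting error everywhere, which is what produces smooth inter-frame transitions.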

Item Type: Conference or Workshop Item (Paper)
Faculty \ School: Faculty of Science > School of Computing Sciences

University of East Anglia > Faculty of Science > Research Groups > Computational Biology > Machine learning in computational biology
Depositing User: Vishal Gautam
Date Deposited: 04 Jul 2011 08:49
Last Modified: 02 Oct 2022 23:48
DOI: 10.1109/ICASSP.2002.5745507
