Theobald, BJ, Cawley, GC ORCID: https://orcid.org/0000-0002-4118-9095, Matthews, I and Bangham, JA (2003) Near-videorealistic synthetic visual speech using non-rigid appearance models. In: IEEE International Conference on Acoustics, Speech and Signal Processing, 2003-04-06 - 2003-04-10.
Full text not available from this repository.

Abstract
We present work towards videorealistic synthetic visual speech using non-rigid appearance models. These models are used to track a talking face enunciating a set of training sentences. The resultant parameter trajectories are used in a concatenative synthesis scheme, in which samples of the original data are extracted from a corpus and concatenated to form new, unseen sequences. Here we explore the effect on the synthesiser output of blending several synthesis units considered similar to the desired unit, as sketched below. We present preliminary subjective and objective results used to judge the realism of the system.
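For illustration, here is a minimal sketch of one plausible way to blend several candidate synthesis units into a single output trajectory, as the abstract describes. This is not the authors' implementation: the function name `blend_units`, the softmax-style distance weighting, and the assumption that candidate units are already time-aligned to a common length are all assumptions made for this sketch.

```python
# Illustrative sketch (not the paper's code): blending several candidate
# synthesis units, each a fixed-length sequence of appearance-model
# parameter vectors, into one output trajectory.
import numpy as np

def blend_units(candidates, distances, temperature=1.0):
    """Weighted average of candidate parameter trajectories.

    candidates: array of shape (k, T, d) -- k units, T frames, d parameters.
    distances:  array of shape (k,) -- dissimilarity of each candidate to
                the desired unit; smaller means more similar.
    Returns a blended trajectory of shape (T, d).
    """
    candidates = np.asarray(candidates, dtype=float)
    distances = np.asarray(distances, dtype=float)
    # Turn distances into similarity weights (softmax over negative distance);
    # this weighting scheme is an assumption, not taken from the paper.
    weights = np.exp(-distances / temperature)
    weights /= weights.sum()
    # Per-frame, per-parameter weighted average over the k candidate units.
    return np.tensordot(weights, candidates, axes=(0, 0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, T, d = 3, 10, 5                 # 3 candidates, 10 frames, 5 parameters
    units = rng.normal(size=(k, T, d))
    dists = np.array([0.2, 0.5, 1.0])  # candidate 0 is closest to the target
    out = blend_units(units, dists)
    print(out.shape)                   # (10, 5)
```

In practice the candidate units would likely need time-warping to a common length before blending; the sketch sidesteps that by assuming aligned inputs.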
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Faculty \ School: | Faculty of Science > School of Computing Sciences |
| UEA Research Groups: | Faculty of Science > Research Groups > Computational Biology; Faculty of Science > Research Groups > Interactive Graphics and Audio; Faculty of Science > Research Groups > Data Science and Statistics; Faculty of Science > Research Groups > Centre for Ocean and Atmospheric Sciences |
| Depositing User: | Vishal Gautam |
| Date Deposited: | 04 Jul 2011 08:28 |
| Last Modified: | 22 Apr 2023 02:47 |
| URI: | https://ueaeprints.uea.ac.uk/id/eprint/22081 |
| DOI: | 10.1109/ICASSP.2003.1200092 |