Theobald, B., Cawley, G. C. (ORCID: https://orcid.org/0000-0002-4118-9095), Glauert, J. R. W., Abider, J. A. and Matthews, I. (2003) 2.5D Visual Speech Synthesis Using Appearance Models. In: British Machine Vision Conference, 2005-09-05 - 2005-09-08, Oxford Brookes University.
Full text not available from this repository.

Abstract
Two-dimensional (2D) shape and appearance models are applied to the problem of creating a near-videorealistic talking head. A speech corpus of a talker uttering a set of phonetically balanced training sentences is analysed using a generative model of the human face. Segments of original parameter trajectories, corresponding to the synthesis unit (e.g. triphone), are extracted from a codebook, then normalised, blended, concatenated and smoothed before being applied to the model to give natural, realistic animations of novel utterances. The system provides a 2D image sequence corresponding to the face of a talker. It is also used to animate the face of a 3D avatar by displacing the mesh according to movements of points in the shape model and dynamically texturing the face polygons using the appearance model.
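The concatenative pipeline the abstract describes (extract unit trajectories, blend across joins, concatenate, smooth) can be sketched roughly as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the linear crossfade, the overlap length, and the moving-average smoother are all assumptions standing in for whatever blending and smoothing the paper actually uses.

```python
import numpy as np

def crossfade_concatenate(segments, overlap=3):
    """Join parameter trajectories with a linear crossfade at each boundary.

    segments: list of (frames x params) arrays of appearance-model parameters.
    overlap:  number of frames blended across each join (an assumed value).
    """
    out = segments[0].astype(float)
    for seg in segments[1:]:
        seg = seg.astype(float)
        n = min(overlap, len(out), len(seg))
        w = np.linspace(0.0, 1.0, n)[:, None]          # blend weights 0 -> 1
        blended = (1.0 - w) * out[-n:] + w * seg[:n]   # crossfade the overlap
        out = np.concatenate([out[:-n], blended, seg[n:]])
    return out

def smooth(traj, k=3):
    """Moving-average smoothing of each parameter track along the time axis."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, traj)
```

A smoothed trajectory produced this way would then drive the appearance model frame by frame to render the 2D image sequence, or displace the 3D avatar mesh via the shape-model points.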
Item Type: | Conference or Workshop Item (Paper)
---|---
Faculty \ School: | Faculty of Science > School of Computing Sciences; Faculty of Arts and Humanities > School of Political, Social and International Studies (former - to 2014)
UEA Research Groups: | Faculty of Science > Research Groups > Computer Graphics (former - to 2018); Faculty of Science > Research Groups > Computational Biology; Faculty of Science > Research Groups > Interactive Graphics and Audio; Faculty of Science > Research Groups > Data Science and Statistics; Faculty of Science > Research Groups > Centre for Ocean and Atmospheric Sciences
Depositing User: | Vishal Gautam
Date Deposited: | 23 Jul 2011 19:45
Last Modified: | 20 Jun 2023 14:34
URI: | https://ueaeprints.uea.ac.uk/id/eprint/21914
DOI: |