Theobald, Barry-John, Matthews, Iain A., Cohn, Jeffrey F. and Boker, Steven M. (2007) Real-time expression cloning using appearance models. In: 9th International Conference on Multimodal Interfaces, 2007-11-12 - 2007-11-15.
Full text not available from this repository.

Abstract
Active Appearance Models (AAMs) are generative parametric models commonly used to track, recognise and synthesise faces in images and video sequences. In this paper we describe a method for transferring dynamic facial gestures between subjects in real-time. The main advantages of our approach are that: 1) the mapping is computed automatically and does not require high-level semantic information describing facial expressions or visual speech gestures; 2) the mapping is simple and intuitive, allowing expressions to be transferred and rendered in real-time; 3) the mapped expression can be constrained to have the appearance of the target producing the expression, rather than the source expression imposed onto the target face; and 4) near-videorealistic talking faces for new subjects can be created without the cost of recording and processing a complete training corpus for each. Our system enables face-to-face interaction with an avatar driven by an AAM of an actual person in real-time, and we show examples of arbitrary expressive speech frames cloned across different subjects.
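The abstract describes cloning an expression by mapping AAM parameters from a source subject onto a target subject. As a rough illustration of this kind of parametric transfer (not the paper's actual implementation), the sketch below adds the source's displacement from its own neutral frame to the target's neutral parameters, so the rendered result keeps the target's appearance. All function and variable names here are hypothetical, and the assumption that both subjects' AAM parameter spaces are directly comparable is a simplification.

```python
# Illustrative sketch only: parameter-space expression transfer between two AAMs.
# Assumes both AAMs were built with compatible mode orderings, which the paper's
# learned mapping would normally handle; names and dimensions are made up.
import numpy as np

def clone_expression(p_source, p_source_neutral, p_target_neutral, scale=1.0):
    """Map a source AAM parameter vector onto the target subject.

    The source frame's displacement from its own neutral pose is added to the
    target's neutral parameters, so the output reproduces the gesture while
    retaining the target's identity and appearance.
    """
    delta = p_source - p_source_neutral          # expression displacement
    return p_target_neutral + scale * delta      # re-express on the target

# Example with stand-in parameter vectors (20 AAM modes assumed):
rng = np.random.default_rng(0)
p_src_neutral = rng.normal(size=20)
p_src_frame = p_src_neutral + rng.normal(scale=0.3, size=20)  # an expressive frame
p_tgt_neutral = rng.normal(size=20)

p_tgt_frame = clone_expression(p_src_frame, p_src_neutral, p_tgt_neutral)
# p_tgt_frame would then be fed to the target AAM's synthesis step to render
# the cloned expression on the target face.
```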
Item Type: Conference or Workshop Item (Paper)
Faculty \ School: Faculty of Science > School of Computing Sciences
UEA Research Groups: Faculty of Science > Research Groups > Interactive Graphics and Audio
Depositing User: Vishal Gautam
Date Deposited: 19 May 2011 07:45
Last Modified: 22 Apr 2023 02:44
URI: https://ueaeprints.uea.ac.uk/id/eprint/22059
DOI: 10.1145/1322192.1322217