Greenwood, David (2018) Predicting Head Pose From Speech. Doctoral thesis, University of East Anglia.
PDF (9MB)
Abstract
Speech animation, the process of animating a human-like model to give the impression it is talking, most commonly relies on the work of skilled animators, or performance capture. These approaches are time consuming, expensive, and lack the ability to scale. This thesis develops algorithms for content driven speech animation; models that learn visual actions from data without semantic labelling, to predict realistic speech animation from recorded audio.
We achieve these goals by first forming a multi-modal corpus that represents the style of speech we want to model: speech that is natural, expressive and prosodic. This allows us to train deep recurrent neural networks to predict compelling animation.
We first develop methods to predict the rigid head pose of a speaker. Predicting the head pose of a speaker from speech is not wholly deterministic, so our methods provide a large variety of plausible head pose trajectories from a single utterance. We then apply our methods to learn how to predict the head pose of the listener while in conversation, using only the voice of the speaker. Finally, we show how to predict the lip sync, facial expression, and rigid head pose of the speaker, simultaneously, solely from speech.
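The core mapping described in the abstract — a recurrent network consuming audio features frame by frame and emitting a head pose trajectory — can be sketched as follows. This is a minimal illustration, not the thesis model: the feature dimension, hidden size, and randomly initialised weights are all placeholders standing in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AUDIO = 13   # assumed audio feature size per frame (e.g. 13 MFCCs)
N_HIDDEN = 32  # recurrent state size (illustrative)
N_POSE = 3     # rigid head pose: pitch, yaw, roll

# Random weights stand in for parameters learned from a multi-modal corpus.
W_in = rng.normal(0.0, 0.1, (N_HIDDEN, N_AUDIO))
W_rec = rng.normal(0.0, 0.1, (N_HIDDEN, N_HIDDEN))
W_out = rng.normal(0.0, 0.1, (N_POSE, N_HIDDEN))

def predict_head_pose(audio_frames):
    """Run a simple tanh RNN over audio frames; return one pose per frame."""
    h = np.zeros(N_HIDDEN)
    poses = []
    for x in audio_frames:
        h = np.tanh(W_in @ x + W_rec @ h)  # update recurrent state
        poses.append(W_out @ h)            # project state to a pose
    return np.stack(poses)                 # shape: (n_frames, N_POSE)

utterance = rng.normal(size=(100, N_AUDIO))  # 100 frames of fake features
pose_track = predict_head_pose(utterance)
```

Because the speech-to-pose mapping is not wholly deterministic, a practical system would produce multiple plausible trajectories per utterance (for example by sampling); the single deterministic pass above only shows the sequence-to-sequence shape of the problem.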
| Item Type: | Thesis (Doctoral) |
|---|---|
| Faculty \ School: | Faculty of Science > School of Computing Sciences |
| Depositing User: | Megan Ruddock |
| Date Deposited: | 19 Feb 2019 14:59 |
| Last Modified: | 19 Feb 2019 14:59 |
| URI: | https://ueaeprints.uea.ac.uk/id/eprint/69976 |
| DOI: | |