Theobald, Barry-John (2012) Relating objective and subjective performance measures for AAM-based visual speech synthesis. IEEE Transactions on Audio, Speech and Language Processing, 20 (8). pp. 2378-2387. ISSN 1558-7916
Full text not available from this repository.

Abstract
We compare two approaches for synthesizing visual speech using Active Appearance Models (AAMs): one that uses acoustic features as input, and one that uses a phonetic transcription as input. Both synthesizers are trained on the same data, and performance is measured using both objective and subjective testing. We investigate the impact of likely sources of error in the synthesized visual speech by introducing typical errors into real visual speech sequences and subjectively measuring the perceived degradation. When only a small region (e.g. a single syllable) of ground-truth visual speech is incorrect, we find that the subjective score for the entire sequence is lower than for sequences generated by our synthesizers. This observation motivates further consideration of an often-ignored issue: to what extent are subjective measures correlated with objective measures of performance? Significantly, we find that the most commonly used objective measures of performance are not necessarily the best indicators of viewer perception of quality. We empirically evaluate alternatives and show that the cost of a dynamic time warp of synthesized visual speech parameters to the respective ground-truth parameters is a better indicator of subjective quality.
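The dynamic-time-warp cost mentioned in the abstract can be sketched as follows. This is a minimal illustration of a standard DTW alignment cost between two parameter sequences, assuming a Euclidean frame-to-frame distance; the function name `dtw_cost` and the choice of distance and normalisation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dtw_cost(x, y):
    """Cumulative cost of a dynamic time warp aligning x to y.

    x, y: arrays of shape (T, D) holding frames of (e.g.) AAM
    parameters. Uses Euclidean frame distances (an assumption for
    this sketch; the paper's exact distance may differ).
    """
    n, m = len(x), len(y)
    # Pairwise Euclidean distances between all frames of x and y.
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    # Accumulated-cost matrix with the usual step pattern
    # (match, insertion, deletion).
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = d[i - 1, j - 1] + min(
                D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]
            )
    return D[n, m]
```

A sequence warped onto itself has zero cost, and the cost grows with the mismatch between synthesized and ground-truth trajectories, which is what makes it usable as an objective quality score.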
| Item Type: | Article |
|---|---|
| Faculty \ School: | Faculty of Science > School of Computing Sciences |
| UEA Research Groups: | Faculty of Science > Research Groups > Interactive Graphics and Audio |
| Depositing User: | Barry-John Theobald |
| Date Deposited: | 27 Jan 2013 21:37 |
| Last Modified: | 20 Jun 2023 14:39 |
| URI: | https://ueaeprints.uea.ac.uk/id/eprint/38902 |
| DOI: | 10.1109/TASL.2012.2202651 |