Visually-derived Wiener filters for speech enhancement

Almajai, I., Milner, B. P., Darch, J. and Vaseghi, S. V. (2007) Visually-derived Wiener filters for speech enhancement. In: IEEE International Conference on Acoustics, Speech and Signal Processing, 2007-04-15 - 2007-04-20.


Abstract

This work begins by examining the correlation between audio and visual speech features, and shows that correlation is higher within individual phoneme sounds than globally across all speech. Utilising this correlation, a visually-derived Wiener filter is proposed in which clean power spectrum estimates are obtained from visual speech features. Two methods of extracting clean power spectrum estimates are compared: first, a global estimate using a single Gaussian mixture model (GMM); second, phoneme-specific estimates using a hidden Markov model (HMM)-GMM structure. Measurement of estimation accuracy reveals that the phoneme-specific (HMM-GMM) system gives lower estimation errors than the global (GMM) system. Finally, the effectiveness of visually-derived Wiener filtering is examined.
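To make the filtering step concrete, the following is a minimal sketch of the Wiener gain computation described in the abstract. It assumes the clean speech power spectrum estimate (here called `clean_psd`) has already been produced from the visual features (by either the GMM or HMM-GMM estimator) and that a noise power spectrum estimate `noise_psd` is available; the function names and the spectral floor are illustrative, not from the paper.

```python
import numpy as np

def wiener_gain(clean_psd, noise_psd, floor=1e-3):
    """Standard Wiener filter frequency response H(f) = S_ss / (S_ss + S_nn).

    clean_psd : estimated clean speech power spectrum (e.g. visually derived)
    noise_psd : estimated noise power spectrum
    floor     : illustrative lower bound on the gain to limit over-suppression
    """
    gain = clean_psd / (clean_psd + noise_psd)
    return np.maximum(gain, floor)

def enhance_frame(noisy_spectrum, clean_psd, noise_psd):
    # Apply the per-frequency Wiener gain to one frame's complex noisy spectrum;
    # the enhanced time-domain frame would follow from an inverse FFT.
    return wiener_gain(clean_psd, noise_psd) * noisy_spectrum
```

For example, at a frequency bin where the estimated clean power is 4.0 and the noise power is 1.0, the gain is 4.0 / 5.0 = 0.8, attenuating bins dominated by noise more heavily than bins dominated by speech.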

Item Type: Conference or Workshop Item (Paper)
Faculty \ School: Faculty of Science > School of Computing Sciences
Depositing User: Vishal Gautam
Date Deposited: 04 Apr 2011 12:09
Last Modified: 24 Jul 2019 12:20
URI: https://ueaeprints.uea.ac.uk/id/eprint/22606
DOI: 10.1109/ICASSP.2007.366980
