Almajai, I. and Milner, Ben (2011) Visually derived Wiener filters for speech enhancement. IEEE Transactions on Audio, Speech, and Language Processing, 19 (6). pp. 1642-1651. ISSN 1558-7916
Full text not available from this repository.

Abstract
The aim of this work is to examine whether visual speech information can be used to enhance audio speech that has been contaminated by noise. First, audio and visual speech features are analyzed to identify the feature pair with the highest audio-visual correlation. The analysis also reveals that audio-visual correlation is higher within individual phoneme sounds than globally across all speech. This correlation is exploited to propose a visually derived Wiener filter that obtains clean speech and noise power spectrum statistics from visual speech features. Clean speech statistics are estimated from visual features using a maximum a posteriori framework integrated within the states of a network of hidden Markov models to provide phoneme localization. Noise statistics are obtained through a novel audio-visual voice activity detector that uses visual speech features to make robust speech/nonspeech classifications. The effectiveness of the visually derived Wiener filter is evaluated subjectively and objectively, and is compared with three audio-only enhancement methods over a range of signal-to-noise ratios.
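The enhancement step itself is a standard Wiener filter; the paper's contribution is where the statistics come from. As a rough illustrative sketch only, the code below applies a frame-independent Wiener gain given externally supplied clean-speech and noise power spectra. The function name `wiener_enhance` and its parameters are hypothetical, and the paper's visually derived MAP/HMM estimator and audio-visual VAD are not reproduced here.

```python
import numpy as np
from scipy.signal import stft, istft

def wiener_enhance(noisy, fs, clean_psd, noise_psd, nfft=512, gain_floor=0.05):
    """Enhance a noisy signal with a Wiener filter.

    `clean_psd` and `noise_psd` are per-bin power spectrum estimates of
    shape (nfft // 2 + 1,). In the paper these statistics are derived
    from visual speech features (clean speech via a MAP estimate within
    phoneme HMM states, noise via an audio-visual voice activity
    detector); here they are simply supplied by the caller.
    """
    f, t, X = stft(noisy, fs=fs, nperseg=nfft)
    # Classical Wiener gain H(w) = P_s(w) / (P_s(w) + P_n(w)),
    # floored to limit musical-noise artifacts.
    gain = clean_psd / (clean_psd + noise_psd + 1e-12)
    gain = np.maximum(gain, gain_floor)
    X_hat = gain[:, None] * X  # same per-bin gain applied to every frame
    _, enhanced = istft(X_hat, fs=fs, nperseg=nfft)
    return enhanced
```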
| Item Type: | Article |
|---|---|
| Faculty \ School: | Faculty of Science > School of Computing Sciences |
| UEA Research Groups: | Faculty of Science > Research Groups > Interactive Graphics and Audio; Faculty of Science > Research Groups > Smart Emerging Technologies; Faculty of Science > Research Groups > Data Science and AI |
| Depositing User: | Rhiannon Harvey |
| Date Deposited: | 20 Mar 2012 09:46 |
| Last Modified: | 10 Dec 2024 01:18 |
| URI: | https://ueaeprints.uea.ac.uk/id/eprint/38352 |
| DOI: | 10.1109/TASL.2010.2096212 |