Article ID: 4977362 · Journal: Signal Processing · Published Year: 2018 · Pages: 6 · File Type: PDF
Abstract

• A deep architecture for audio-visual voice activity detection is proposed.
• Specifically designed auto-encoders fuse audio and video while reducing interferences.
• Incorporated into an RNN, the deep architecture outperforms recent detectors.

We address the problem of voice activity detection in difficult acoustic environments, including high levels of noise and transients, which are common in real-life scenarios. We consider a multimodal setting, in which the speech signal is captured by a microphone and a video camera is pointed at the face of the desired speaker. Accordingly, speech detection translates to the question of how to properly fuse the audio and video signals, which we address within the framework of deep learning. Specifically, we present a neural network architecture based on a variant of auto-encoders, which combines the two modalities and provides a new representation of the signal in which the effect of interferences is reduced. To further encode differences between the dynamics of speech and interfering transients, the signal in this new representation is fed into a recurrent neural network, which is trained in a supervised manner for speech detection. Experimental results demonstrate improved performance of the proposed deep architecture compared to competing multimodal detectors.
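To make the pipeline described in the abstract concrete, the following is a minimal NumPy sketch of the overall data flow, not the authors' actual model: per-frame audio and video features are concatenated and passed through an encoder layer (standing in for the fusion auto-encoder), and the resulting joint representation drives a simple Elman-style RNN that emits a per-frame speech probability. All dimensions, feature types, and the randomly initialized weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: 20-dim audio features, 32-dim video (mouth-region) features.
d_audio, d_video, d_code, d_hidden = 20, 32, 16, 24
T = 50  # number of frames in the sequence

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(0.0, 0.1, (d_audio + d_video, d_code))
b_enc = np.zeros(d_code)
W_xh = rng.normal(0.0, 0.1, (d_code, d_hidden))
W_hh = rng.normal(0.0, 0.1, (d_hidden, d_hidden))
b_h = np.zeros(d_hidden)
w_out = rng.normal(0.0, 0.1, d_hidden)

def fused_code(audio, video):
    # Concatenate the two modalities and encode them into a joint
    # representation (the role played by the fusion auto-encoder).
    x = np.concatenate([audio, video], axis=-1)
    return np.tanh(x @ W_enc + b_enc)

def vad_scores(audio_seq, video_seq):
    # Feed the fused representation frame-by-frame into a simple RNN
    # and output a speech probability per frame.
    h = np.zeros(d_hidden)
    scores = []
    for a, v in zip(audio_seq, video_seq):
        z = fused_code(a, v)
        h = np.tanh(z @ W_xh + h @ W_hh + b_h)      # recurrent state update
        scores.append(1.0 / (1.0 + np.exp(-h @ w_out)))  # sigmoid readout
    return np.array(scores)

audio_seq = rng.normal(size=(T, d_audio))
video_seq = rng.normal(size=(T, d_video))
probs = vad_scores(audio_seq, video_seq)
print(probs.shape)  # one speech-probability score per frame
```

In practice the encoder would be trained as an auto-encoder so that the joint code suppresses transient interferences before the RNN, which is then trained with supervised speech/non-speech labels, as the abstract describes.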

Related Topics
Physical Sciences and Engineering Computer Science Signal Processing