| Article ID | Journal | Published Year | Pages | File Type |
| --- | --- | --- | --- | --- |
| 10368598 | Computer Speech & Language | 2015 | 18 Pages | |
Abstract
A fine-grained segmentation of radio or TV broadcasts is an essential step for most multimedia processing tasks. Applying segmentation algorithms to the speech transcripts seems straightforward, yet most of these algorithms are not suited to short segments or noisy data. In this paper, we present a new segmentation technique inspired by the image analysis field and relying on a new way to compute similarities between candidate segments, called vectorization. Vectorization makes it possible to match text segments that do not share common words; this property is shown to be particularly useful when dealing with transcripts in which transcription errors and short segments make segmentation difficult. This new topic segmentation technique is evaluated on two corpora of transcripts from French TV broadcasts, on which it largely outperforms existing state-of-the-art approaches.
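The key property the abstract attributes to vectorization — matching segments with no words in common — can be illustrated with a toy sketch. The paper's actual vectorization is not detailed here; the snippet below merely mimics the idea by expanding each segment's bag-of-words with hypothetical related terms before computing cosine similarity, so that lexically disjoint but topically close segments obtain a non-zero score.

```python
from collections import Counter
from math import sqrt

# Hypothetical relatedness table for illustration only; the paper derives
# its vector representations from corpora, not from a hand-built lexicon.
RELATED = {
    "rain": ["weather", "storm"],
    "storm": ["weather", "rain"],
    "forecast": ["weather"],
    "goal": ["football", "match"],
    "striker": ["football", "match"],
}

def vectorize(words):
    """Bag-of-words expanded with related terms (weaker weight 0.5)."""
    vec = Counter(words)
    for w in words:
        for r in RELATED.get(w, []):
            vec[r] += 0.5
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

seg_a = ["rain", "forecast"]
seg_b = ["storm"]             # shares no word with seg_a
seg_c = ["goal", "striker"]   # different topic

plain = cosine(Counter(seg_a), Counter(seg_b))       # 0.0: no overlap
expanded_ab = cosine(vectorize(seg_a), vectorize(seg_b))
expanded_ac = cosine(vectorize(seg_a), vectorize(seg_c))
```

With plain bag-of-words, `seg_a` and `seg_b` are orthogonal; after expansion they overlap on `weather`, `rain`, and `storm`, while the off-topic `seg_c` stays dissimilar — the behavior the abstract claims makes segmentation of short, error-prone transcripts tractable.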
Related Topics
Physical Sciences and Engineering
Computer Science
Signal Processing
Authors
Vincent Claveau, Sébastien Lefèvre