Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6939521 | Pattern Recognition | 2018 | 40 | |
Abstract
Dynamic Textures (DTs) are sequences of images of moving scenes, such as smoke, vegetation and fire, that exhibit certain stationarity properties in time. The analysis of DTs is important for recognition, segmentation, synthesis and retrieval in a range of applications including surveillance, medical imaging and remote sensing. Convolutional Neural Networks (CNNs) have recently proven to be well suited for texture analysis, with a design similar to that of filter banks. We develop a new DT analysis method based on CNNs applied on three orthogonal planes. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their outputs to obtain a competitive DT classifier trained end-to-end. Our results on a wide range of commonly used DT classification benchmark datasets demonstrate the robustness of our approach. A significant improvement over the state of the art is shown on the larger datasets.
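To illustrate the three-orthogonal-plane idea described in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes a grayscale video volume of shape (T, H, W), takes one xy (spatial), one xt and one yt (temporal) slice, feeds each to its own small 2D CNN, and averages the per-plane class scores. The `SmallCNN` backbone, the middle-slice selection and the score averaging are illustrative assumptions.

```python
# Sketch of dynamic-texture classification on three orthogonal planes
# (xy, xt, yt). Backbone, slice choice and fusion are assumptions made
# for illustration, not the paper's exact architecture.
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Tiny 2D CNN used as the per-plane stream (placeholder backbone)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling, so input size can vary
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


class ThreePlaneDT(nn.Module):
    """One CNN per orthogonal plane; softmax scores are averaged."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net_xy = SmallCNN(num_classes)   # spatial frames
        self.net_xt = SmallCNN(num_classes)   # temporal slices at a fixed y
        self.net_yt = SmallCNN(num_classes)   # temporal slices at a fixed x

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, H, W) grayscale dynamic-texture sequence
        b, t, h, w = video.shape
        xy = video[:, t // 2].unsqueeze(1)        # (B, 1, H, W) middle frame
        xt = video[:, :, h // 2].unsqueeze(1)     # (B, 1, T, W) middle-row slice
        yt = video[:, :, :, w // 2].unsqueeze(1)  # (B, 1, T, H) middle-column slice
        scores = (
            self.net_xy(xy).softmax(dim=1)
            + self.net_xt(xt).softmax(dim=1)
            + self.net_yt(yt).softmax(dim=1)
        ) / 3.0
        return scores


if __name__ == "__main__":
    model = ThreePlaneDT(num_classes=10)
    clip = torch.randn(2, 48, 64, 64)   # batch of 2 synthetic clips
    print(model(clip).shape)            # torch.Size([2, 10])
```

In practice, and as the abstract states, each plane's network would be trained on many frames and slices per sequence rather than a single middle slice, and the combination of the three streams can itself be learned end-to-end.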
Keywords
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors
Vincent Andrearczyk, Paul F. Whelan