Article ID Journal Published Year Pages File Type
6940180 Pattern Recognition Letters 2018 11 Pages PDF
Abstract
The success of deep neural networks in computer vision tasks relies on large numbers of annotated samples, which are unavailable for many applications. In the absence of annotated data, domain adaptation offers a way to train deep neural networks effectively by utilizing labeled data from a different but related domain. In this paper, we propose a new Deep Domain Similarity Adaptation Network (DDSAN) architecture that can exploit labeled data from the source domain and unlabeled data from the target domain simultaneously. The DDSAN assumes that the parameters of the source- and target-domain networks should be close to each other. It therefore transfers the deep network parameters across domains explicitly, instead of matching the deep hidden representations implicitly. By plugging a subnet into a typical deep neural network, the DDSAN projects the high-dimensional parameters into a lower-dimensional subspace and reduces their domain discrepancy there. Comparative experiments demonstrate that the proposed network outperforms previous methods on standard domain adaptation benchmarks.
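The core idea in the abstract — penalizing the discrepancy between source and target network parameters after projecting them into a shared low-dimensional subspace, rather than matching hidden representations — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the projection here is a fixed random map rather than the learned subnet the paper describes, and all names and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def flatten_params(params):
    """Concatenate a list of parameter arrays into one long vector."""
    return np.concatenate([p.ravel() for p in params])

def subspace_discrepancy(src_params, tgt_params, proj):
    """Squared L2 distance between projected parameter vectors.

    proj: a (k, d) matrix mapping the d-dimensional parameter vector
    into a k-dimensional subspace (k << d). In DDSAN this projection
    is realized by a plug-in subnet; a fixed random map stands in
    for it here purely for illustration.
    """
    s = proj @ flatten_params(src_params)
    t = proj @ flatten_params(tgt_params)
    return float(np.sum((s - t) ** 2))

# Toy example: a "network" with one weight matrix and one bias vector.
d = 20 * 5 + 5                 # total parameter dimension
k = 8                          # subspace dimension (k << d)
proj = rng.standard_normal((k, d)) / np.sqrt(d)

src = [rng.standard_normal((20, 5)), rng.standard_normal(5)]
# Target parameters are assumed close to the source parameters.
tgt = [p + 0.01 * rng.standard_normal(p.shape) for p in src]

loss = subspace_discrepancy(src, tgt, proj)  # small for nearby parameters
```

In training, a term like `loss` would be added to the target network's objective, encouraging its parameters to stay near the source network's in the learned subspace.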
Related Topics
Physical Sciences and Engineering Computer Science Computer Vision and Pattern Recognition