Article ID Journal Published Year Pages File Type
6864087 Neurocomputing 2018 25 Pages PDF
Abstract
Recently, thanks to state-of-the-art techniques in Generative Adversarial Networks (GANs), much work has achieved remarkable performance in learning the mapping between an input image and an output image without any paired supervision. However, traditional image-to-image translation methods consider only visual appearance properties; they fail to preserve the true semantics of an image during transfer from the source to the target domain. We propose a new approach that uses a GAN to translate unpaired images between domains while keeping their high-level semantic abstractions aligned. Our model controls the hierarchical semantics of images by processing semantic information at the label level and the spatial level, through label-consistency and attention-consistency losses respectively. Experimental results on several benchmark datasets show that the generated samples are both visually similar to the target images and semantically consistent with their source counterparts. The experiments also suggest that our method effectively improves classification performance on the unsupervised domain adaptation problem.
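The abstract describes two semantic alignment terms: a label-level consistency loss and a spatial (attention-level) consistency loss between a source image and its translation. The paper's exact formulation is not given here, so the sketch below is only an illustration of the general idea, assuming the label-level term is a KL divergence between classifier label distributions and the spatial term is a mean absolute difference between attention maps; both function names and choices are hypothetical.

```python
import numpy as np

def label_consistency_loss(p_src, p_trans, eps=1e-8):
    """Illustrative label-level term: KL divergence between the label
    distribution predicted for the source image and the one predicted
    for its translated counterpart (exact form in the paper may differ)."""
    p_src = np.asarray(p_src, dtype=float)
    p_trans = np.asarray(p_trans, dtype=float)
    return float(np.sum(p_src * (np.log(p_src + eps) - np.log(p_trans + eps))))

def attention_consistency_loss(a_src, a_trans):
    """Illustrative spatial-level term: mean absolute difference between
    the attention maps of the source image and its translation."""
    a_src = np.asarray(a_src, dtype=float)
    a_trans = np.asarray(a_trans, dtype=float)
    return float(np.mean(np.abs(a_src - a_trans)))

# When the translation preserves semantics exactly, both terms vanish.
p = [0.7, 0.2, 0.1]
a = np.ones((4, 4)) * 0.5
print(label_consistency_loss(p, p))      # ~0.0
print(attention_consistency_loss(a, a))  # 0.0
```

In practice such terms would be added to the usual adversarial (and, for unpaired translation, cycle-consistency) objectives, weighted by hyperparameters.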
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors