In IEEE Transactions on Pattern Analysis and Machine Intelligence
Deep-learning-based approaches, which have repeatedly been shown to benefit visual recognition tasks, usually make the strong assumption that training and test data are drawn from similar feature spaces and distributions. However, this assumption may not hold in many practical visual recognition scenarios. Inspired by the hierarchical organization of deep feature representations, which progressively yields more abstract features at higher layers, we propose to tackle this problem with a novel feature learning framework, called GMFAD, that achieves better generalization capability in a multi-layer perceptron manner. We first learn feature representations at the shallow layer, where shareable underlying factors among domains (a subset of which may be relevant to each particular domain) can be explored. In particular, we propose to align the divergence between domain pair(s) by considering both inter-dimension and inter-sample correlations, which have been largely ignored by many cross-domain visual recognition methods. Subsequently, to learn more abstract information that could further benefit transferability, we conduct feature disentanglement at the deep feature layer. Extensive experiments on different visual recognition tasks demonstrate that our framework learns more transferable feature representations than state-of-the-art baselines.
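The two kinds of correlation mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' actual objective (the abstract does not give their formulation); it shows one common way to measure domain divergence along each axis: inter-dimension correlation via feature covariance matrices (CORAL-style) and inter-sample correlation via sample-similarity (Gram) matrices. All function and variable names here are hypothetical.

```python
import numpy as np

def interdim_divergence(Xs, Xt):
    """Inter-dimension term: squared Frobenius distance between the
    feature covariance matrices of source features Xs and target
    features Xt (each of shape [n_samples, n_features])."""
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return np.linalg.norm(Cs - Ct, 'fro') ** 2

def intersample_divergence(Xs, Xt):
    """Inter-sample term: squared Frobenius distance between the
    Gram (sample-similarity) matrices of the two batches.
    Assumes equal batch sizes for the two domains."""
    Gs = Xs @ Xs.T
    Gt = Xt @ Xt.T
    return np.linalg.norm(Gs - Gt, 'fro') ** 2

def alignment_loss(Xs, Xt, lam=1.0):
    """A weighted sum of both divergences; lam is a hypothetical
    trade-off hyperparameter, not taken from the paper."""
    return interdim_divergence(Xs, Xt) + lam * intersample_divergence(Xs, Xt)
```

In a training loop such a loss would be added to the task loss and minimized over the shallow-layer parameters, pulling the two domains' feature statistics together; both terms are zero exactly when the corresponding statistics match.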
Li Haoliang, Wang Shiqi, Wan Renjie, Kot Chichung Alex