In ISA Transactions

Domain adaptation has become an effective solution for training neural networks with insufficient training data. In this paper, we investigate the vulnerability of domain adaptation to attacks that potentially breach sensitive information about the training dataset. We propose a new membership inference attack against domain adaptation models that infers the membership of samples from the target domain. By leveraging background knowledge about the additional source domain used in domain adaptation tasks, our attack exploits the similar distributions of the target and source domain data to determine, with high efficiency and accuracy, whether a specific data sample belongs to the training set. In particular, the proposed attack can be deployed in a practical scenario where the attacker cannot obtain any details of the model. We conduct extensive evaluations on object and digit recognition tasks. Experimental results show that our method attacks domain adaptation models with a high success rate.
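To make the threat model concrete, the sketch below shows the classic confidence-thresholding membership inference baseline, not the paper's domain-adaptation-specific attack. The confidence distributions are simulated (members of a training set typically receive higher model confidence than non-members); all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated softmax confidences from a hypothetical target model.
# Training-set members tend to receive higher confidence than
# non-members; we model this gap rather than train a real network.
member_conf = rng.beta(8, 2, size=500)      # skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=500)   # centered near 0.5

def infer_membership(confidences, threshold=0.7):
    """Predict 'member' when model confidence exceeds a threshold.

    A simple black-box baseline: the attacker only needs the model's
    output confidence, not its weights or architecture.
    """
    return confidences > threshold

preds_m = infer_membership(member_conf)      # should be mostly True
preds_n = infer_membership(nonmember_conf)   # should be mostly False
accuracy = (preds_m.sum() + (~preds_n).sum()) / (len(preds_m) + len(preds_n))
print(f"attack accuracy: {accuracy:.2f}")
```

The paper's contribution goes beyond this baseline by using source-domain knowledge to calibrate the attack when the target model was trained via domain adaptation.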

Zhang Yuanjie, Zhao Lingchen, Wang Qian

2023-Jan-20

Deep learning, Domain adaptation, Membership inference attack, Privacy