arXiv Preprint
Machine learning models are increasingly used in high-stakes decision-making
systems. In such applications, a major concern is that these models sometimes
discriminate against certain demographic groups, such as individuals of a
particular race, gender, or age. Another major concern in these applications
is the violation of user privacy. While fair learning algorithms have been
developed to mitigate discrimination, these algorithms can still leak
sensitive information, such as individuals' health or financial records.
Utilizing the notion of differential privacy (DP), prior works have aimed to
develop learning algorithms that are both private and fair. However, existing
algorithms for DP fair learning are either not guaranteed to converge or
require a full batch of data at each iteration in order to converge.
In this paper, we provide the first stochastic differentially private algorithm
for fair learning that is guaranteed to converge. Here, the term "stochastic"
refers to the fact that our proposed algorithm converges even when minibatches
of data are used at each iteration (i.e., stochastic optimization). Our
framework is flexible enough to permit different fairness notions, including
demographic parity and equalized odds.
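For concreteness, these two notions can be stated as the standard
conditional-independence conditions on a model prediction \hat{Y}, a sensitive
attribute S, and a true label Y; this notation is illustrative and not taken
from the paper itself:

\[ \text{(demographic parity)} \qquad \Pr(\hat{Y}=\hat{y} \mid S=s) = \Pr(\hat{Y}=\hat{y}) \quad \text{for all } \hat{y}, s, \]
\[ \text{(equalized odds)} \qquad \Pr(\hat{Y}=\hat{y} \mid S=s, Y=y) = \Pr(\hat{Y}=\hat{y} \mid Y=y) \quad \text{for all } \hat{y}, s, y. \]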
In addition, our algorithm can be applied to non-binary classification tasks
with multiple (non-binary) sensitive attributes. As a byproduct of our
convergence analysis, we provide the first utility guarantee for a DP algorithm
for solving nonconvex-strongly concave min-max problems.
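To make the min-max structure concrete, below is a minimal sketch of noisy
stochastic gradient descent-ascent, the generic template behind DP stochastic
min-max methods. It is a sketch under illustrative assumptions, not the
authors' exact algorithm (whose clipping, noise calibration, and step sizes
differ); grad_theta, grad_lam, and sample_batch are hypothetical user-supplied
callables.

import numpy as np

def private_grad(g, clip_norm, noise_std, rng):
    # Clip to bound sensitivity, then add Gaussian noise for (eps, delta)-DP.
    # (A full DP-SGD treatment clips per-example gradients; clipping the
    # averaged minibatch gradient here is a simplification.)
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    return g + rng.normal(0.0, noise_std, size=g.shape)

def dp_sgda(grad_theta, grad_lam, theta, lam, sample_batch,
            n_iters=1000, lr_theta=0.01, lr_lam=0.1,
            clip_norm=1.0, noise_std=0.5, seed=0):
    # Noisy stochastic gradient descent-ascent: minimize over theta
    # (e.g., model weights), maximize over lam (e.g., fairness multipliers).
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        batch = sample_batch()  # fresh minibatch each iteration (stochastic)
        g_th = private_grad(grad_theta(theta, lam, batch), clip_norm, noise_std, rng)
        g_la = private_grad(grad_lam(theta, lam, batch), clip_norm, noise_std, rng)
        theta = theta - lr_theta * g_th  # descent step on the min variable
        lam = lam + lr_lam * g_la        # ascent step on the max variable
    return theta, lam

Strong concavity in lam is the structural assumption behind the utility
guarantee mentioned above; it keeps the inner maximization well-behaved even
with noisy minibatch gradients.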
Our numerical experiments show that the proposed algorithm consistently offers
significant performance gains over state-of-the-art baselines, and can be
applied to larger-scale problems with non-binary target and sensitive
attributes.
Andrew Lowy, Devansh Gupta, Meisam Razaviyayn
2022-10-17