ArXiv Preprint
Algorithmic fairness in the application of artificial intelligence (AI) is
essential for a better society. As a foundational axiom of social mechanisms,
fairness has multiple facets. Although the machine learning (ML) community has
focused on intersectionality as a matter of statistical parity, especially in
issues of discrimination, an emerging body of literature addresses another
facet: monotonicity. Based on domain expertise, monotonicity plays a
vital role in numerous fairness-related areas, where violations could misguide
human decisions and lead to disastrous consequences. In this paper, we first
systematically evaluate the significance of applying monotonic neural additive
models (MNAMs), a fairness-aware ML algorithm that enforces both individual
and pairwise monotonicity principles, for fairness in AI ethics
and society. We have found, through a hybrid method of theoretical reasoning,
simulation, and extensive empirical analysis, that considering monotonicity
axioms is essential in all areas of fairness, including criminology, education,
health care, and finance. Our research contributes to the interdisciplinary
research at the interface of AI ethics, explainable AI (XAI), and
human-computer interaction (HCI). By evidencing the catastrophic consequences
of violating monotonicity, we underscore the significance of monotonicity
requirements in AI applications. Furthermore, we demonstrate that, by imposing
monotonicity restrictions that integrate human intelligence, MNAMs are an
effective fairness-aware ML approach.
Dangxing Chen, Luyao Zhang
2023-01-17
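As an informal illustration of the two monotonicity principles named in the abstract, the sketch below checks them by finite differences on a toy additive scoring model. The definitions used here (individual: the output is non-decreasing in a feature; pairwise: one feature's marginal effect dominates another's) are a plausible reading for illustration, not code or notation taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): finite-difference
# checks of individual and pairwise monotonicity at a single point.

def is_individually_monotone(f, x, i, eps=1e-4):
    """Check that f is non-decreasing in feature i at point x."""
    x_up = list(x)
    x_up[i] += eps
    return f(x_up) >= f(x)

def is_pairwise_monotone(f, x, i, j, eps=1e-4):
    """Check that feature i's marginal effect dominates feature j's at x."""
    xi, xj = list(x), list(x)
    xi[i] += eps
    xj[j] += eps
    return f(xi) - f(x) >= f(xj) - f(x)

# Toy additive model: score = 2*income + 1*credit_history (made-up weights).
model = lambda x: 2.0 * x[0] + 1.0 * x[1]

print(is_individually_monotone(model, [1.0, 1.0], 0))  # True
print(is_pairwise_monotone(model, [1.0, 1.0], 0, 1))   # True
```

In practice such pointwise checks only diagnose violations; an MNAM-style approach instead builds the monotonicity constraints into the model architecture so they hold everywhere by construction.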