AI & Society
Uncovering the world's ethnic inequalities is hampered by a lack of ethnicity-annotated datasets. Name-ethnicity classifiers (NECs) can help, as they infer people's ethnicities from their names. However, since the latest generation of NECs relies on machine learning and artificial intelligence (AI), they may suffer from the same racist and sexist biases found in many AI systems. This paper therefore offers an algorithmic fairness audit of three NECs. It finds that the UK-Census-trained EthnicityEstimator displays large accuracy biases across ethnicities, but comparatively smaller ones across gender and age groups. In contrast, the Twitter-trained NamePrism and the Wikipedia-trained Ethnicolr are more balanced across ethnicities, but less so across gender and age. We relate these biases to global power structures manifested in naming conventions and to the NECs' input distribution of names. To mitigate the uncovered biases, we develop a novel NEC, N2E, using fairness-aware AI techniques. We make N2E freely available at www.name-to-ethnicity.com.
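To illustrate the kind of measurement such a fairness audit rests on, the following minimal Python sketch computes a classifier's accuracy separately for each demographic group. It is a hypothetical illustration, not the paper's actual audit code; the function name, the toy records, and the placeholder classifier are all invented for this example.

    from collections import defaultdict

    def group_accuracies(records, predict):
        # records: (name, true_ethnicity, group) tuples;
        # predict: function mapping a name to a predicted ethnicity.
        correct, total = defaultdict(int), defaultdict(int)
        for name, true_label, group in records:
            total[group] += 1
            correct[group] += predict(name) == true_label
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical toy data and classifier, for illustration only.
    records = [("Amelia Clarke", "British", "female"),
               ("Wei Zhang", "Chinese", "male"),
               ("Priya Sharma", "Indian", "female")]
    print(group_accuracies(records, lambda name: "British"))
    # A large spread between the per-group accuracies signals an accuracy bias.

An audit in this spirit compares such per-group accuracies across ethnicity, gender, and age partitions; a balanced classifier shows a small spread between groups.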
Supplementary Information: The online version contains supplementary material available at https://doi.org/10.1007/s00146-022-01619-4.
Lena Hafner, Theodor Peter Peifer, Franziska Sofia Hafner
9 February 2023
Keywords: AI fairness audit, algorithmic bias, artificial intelligence, critical tech, ethnic inequalities, machine learning, name-ethnicity classification