Frontiers in Sociology
Ageism has not been centered in scholarship on AI or algorithmic harms, despite the ways in which older adults are both digitally marginalized and positioned as targets for surveillance technology and risk mitigation. In this translation paper, we put gerontology into conversation with scholarship on information and data technologies within critical disability, race, and feminist studies, and explore the algorithmic harms of surveillance technologies for older adults and care workers within nursing homes in the United States and Canada. We start by identifying the limitations of emerging scholarship and public discourse on "digital ageism," which is preoccupied with the inclusion and representation of older adults in AI or machine learning at the expense of more pressing questions. Focusing on the investment in these technologies in nursing homes in the context of COVID-19, we draw from critical scholarship on information and data technologies to understand how ageism is implicated in the systemic harms experienced by residents and workers when surveillance technologies are positioned as solutions. We then suggest generative pathways and point to possible research agendas that could illuminate emergent algorithmic harms and their animating forces within nursing homes. In the tradition of critical gerontology, ours is a project of bringing insights from gerontology and age studies to bear on broader work on automation and algorithmic decision-making systems for marginalized groups, and of bringing that work to bear on gerontology. This paper illustrates specific ways in which insights from critical race, disability, and feminist studies help us draw out the power of ageism as a rhetorical and analytical tool. We demonstrate why such engagement is necessary to realize gerontology's capacity to contribute to timely discourse on algorithmic harms and to elevate the issue of ageism for serious engagement across fields concerned with social and economic justice.
We begin with nursing homes because they are an understudied yet socially significant and timely setting in which to understand algorithmic harms. We hope this will contribute to broader efforts to understand and redress harms across sectors and marginalized collectives.
Clara Berridge, Alisa Grigorovich
Keywords: artificial intelligence, big data, dementia, long-term care, machine learning, older adults, privacy, technology