In AI and Ethics

The ethics of technology systems has become an area of interest in academic research and international policy in recent years. In line with this trend, several organisations have published principles of ethical artificial intelligence (AI). These documents identify principles, values, and other abstract requirements for AI development and deployment. Critics question whether such documents are in fact constructive or whether they amount to little more than virtue signalling. A theme emerging in the academic literature on these documents is the lack of effective, practical methods and processes for producing ethical AI. This article presents a critical analysis drawing on ethical AI documents from a range of contexts, including company, organisational, governmental, and academic perspectives. We explore and analyse both the theoretical and practical components of AI guidelines, highlighting the need to introduce a measurable component into such documents so that AI systems deployed on the basis of ethical principles lead to positive outcomes. We propose a minimal framework for stakeholders to develop AI in an ethical and human-centred manner.

Connor Rees, Berndt Müller

2022-Nov-16

AI ethics, AI regulation, AI security, Design guidelines, Human–AI interaction, Policy