In Frontiers in Artificial Intelligence

Machine learning applications have become ubiquitous. They range from embedded control in production machines, through process optimization in diverse areas (e.g., traffic, finance, the sciences), to direct user interactions such as advertising and recommendations. This has led to increased effort to make machine learning trustworthy. Explainable and fair AI have already matured; they address the knowledgeable user and the application engineer. However, there are users who want to deploy a learned model much as they use their washing machine. These stakeholders do not want to spend time understanding the model; they want to rely on guaranteed properties. What are the relevant properties? How can they be expressed to the stakeholder without presupposing machine learning knowledge? How can they be guaranteed for a certain implementation of a machine learning model? These questions move far beyond the current state of the art, and we want to address them here. We propose a unified framework that certifies learning methods via care labels. These labels are easy to understand and draw inspiration from well-known certificates like textile labels or property cards of electronic devices. Our framework considers both the machine learning theory and a given implementation. We test the implementation's compliance with theoretical properties and bounds.
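To make the idea concrete, the following is a minimal sketch of what a care-label check might look like in code. Everything here is hypothetical and illustrative, not the authors' framework: the `CareLabel` class, the property names, and the toy `predict` function are all invented for this example. The point is the shape of the mechanism described in the abstract: a set of named property tests is run against a concrete implementation, and each property receives a pass/fail badge on the label.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class CareLabel:
    """Hypothetical care label: named property checks for one implementation."""
    name: str
    checks: Dict[str, Callable[[], bool]] = field(default_factory=dict)

    def certify(self) -> Dict[str, str]:
        # Run every check against the implementation and badge each property.
        return {prop: ("PASS" if test() else "FAIL")
                for prop, test in self.checks.items()}


def predict(x: float) -> int:
    # Stand-in for a learned model's inference routine.
    return 1 if x >= 0 else 0


def deterministic() -> bool:
    # Property: repeated calls on the same input give the same output.
    return all(predict(3.0) == predict(3.0) for _ in range(10))


def fast_enough() -> bool:
    # Property: a single inference stays under an (arbitrary) 10 ms bound.
    t0 = time.perf_counter()
    predict(3.0)
    return time.perf_counter() - t0 < 0.01


label = CareLabel(
    name="ToyClassifier",
    checks={
        "deterministic output": deterministic,
        "inference under 10 ms": fast_enough,
    },
)
print(label.certify())
```

A real certification along these lines would of course test far subtler properties (e.g., theoretical error bounds or resource guarantees), but the outward interface stays washing-machine simple: the stakeholder reads the badges, not the model.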

Katharina J. Morik, Helena Kotthaus, Raphael Fischer, Sascha Mücke, Matthias Jakobs, Nico Piatkowski, Andreas Pauly, Lukas Heppe, Danny Heinrich

2022

care labels, certification, probabilistic graphical models, testing machine learning, trustworthy AI