In Frontiers in Genetics; h5-index 62.0

Despite its promising future, the application of artificial intelligence (AI) and automated decision-making in healthcare services and medical research faces several legal and ethical hurdles. The European Union (EU) is tackling these issues with its existing legal framework and by drafting new regulations, such as the proposed AI Act. The EU General Data Protection Regulation (GDPR) partly regulates AI systems, with rules on processing personal data and on protecting data subjects against solely automated decision-making. In healthcare services, (automated) decisions about individuals are made more frequently and rapidly, whereas medical research focuses on innovation and efficacy, with fewer direct decisions about individuals. Therefore, the GDPR's restrictions on solely automated decision-making apply mainly to healthcare services, and the rights of patients and research participants may differ significantly. The proposed AI Act introduces a risk-based approach to AI systems grounded in the principles of ethical AI. We analysed the complex connection between the GDPR and the AI Act, highlighting the main issues and identifying ways to harmonise the principles of data protection and ethical AI. The proposed AI Act may complement the GDPR in healthcare services and medical research. Although several years may pass before the AI Act comes into force, many of its goals will be realised before then.

Meszaros Janos, Minari Jusaku, Huys Isabelle
AI Act, European Union, GDPR–General Data Protection Regulation, artificial intelligence, automated decision-making, data protection, healthcare, medical research