The EMA and FDA publish 10 Key Guiding Principles for Good AI practice

EMA and FDA published 10 AI principles for medicinal product development, focusing on ethics, risk, standards, and transparency; further guidance to follow.

15 January 2026

Publication


On 14 January 2026, the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) jointly published 10 Guiding Principles for the use of Artificial Intelligence (AI) in medicinal product development. These Principles address the full medicinal product lifecycle.

The use of AI can bring unparalleled benefits, but it also brings risks. Most applications of AI in the medicinal product lifecycle will not be subject to the High Risk AI System (HRAIS) requirements of the EU AI Regulation (AIA). As we report here, (in vitro) medical devices, including companion diagnostics, may also be excluded from the AIA HRAIS requirements.

To guide medicinal product developers, marketing authorization holders and other stakeholders, the EMA and the FDA have published 10 Principles on the use of AI, which reflect the principles of the AIA.

A. The EMA will develop further guidance in the future, building on these Principles

The related risks would, however, remain present. The EMA points out that, to realise the benefits of AI, the risks that its use poses need to be expertly managed.

The EMA will supplement these 10 Principles with additional guidance in the future. This guidance will take existing legal requirements and upcoming new EU legislation into account, including the EU Proposal for a BioTech Act and the Pharmaceutical Package.

While these Principles are not legally binding on any stakeholder, they will inform the development of future AI guidance in the respective jurisdictions. The Principles also build on the EMA AI reflection paper published in 2024.

B. The use of AI in the medicinal product lifecycle may be subject to multiple requirements

Other legislation and standards, including Good Manufacturing Practice (GMP) and Good Clinical Practice (GCP),1 will continue to apply and govern aspects of the use of AI, depending on the use setting. The requirements of the Cyber Resilience Act (CRA), for example, may also apply to cybersecurity aspects of an AI system.

C. The 10 Principles reflect the provisions of the AIA

At their core, these Principles emphasize the importance of ethical and human-centric values, ensuring that AI technologies are designed to serve the interests of patients and society. The Principles also emphasize a risk-based approach, requiring proportionate validation and oversight tailored to the specific context and risk profile of each AI application. Stakeholders can only comply with applicable requirements once they have correctly identified and delimited the context of use.

The 10 Principles are:

1. Human-centric by Design

AI technologies must be developed and used in alignment with ethical and human-centric values.

2. Risk-based Approach

AI development and use should follow a risk-based approach, with proportionate validation, risk mitigation, and oversight tailored to the context and model risk.

3. Adherence to Standards

AI technologies must comply with relevant legal, ethical, technical, scientific, and cybersecurity standards (such as the CRA or the (IV)MDR, where applicable), as well as GMP and GCP.

4. Clear Context of Use

The context of use for AI technologies must be well-defined, specifying the role and scope of their application.

5. Multidisciplinary Expertise

Multidisciplinary expertise, covering both AI technology and its context of use, should be integrated throughout the technology's life cycle.

6. Data Governance and Documentation

Data provenance, processing steps, and analytical decisions must be documented in a detailed, traceable, and verifiable manner, in line with GMP & GCP requirements. Appropriate governance, including privacy and protection for sensitive data, must be maintained throughout the technology's life cycle.

7. Model Design and Development Practices

AI development should follow best practices in model and system design and software engineering, using fit-for-use data and considering interpretability, explainability, and predictive performance. Good development promotes transparency, reliability, generalisability, and robustness, contributing to patient safety.

8. Risk-based Performance Assessment

Performance assessments should be risk-based, evaluating the complete system, including human-AI interactions, using appropriate data and metrics for the intended context, and supported by robust validation methods.

9. Life Cycle Management

Risk-based quality management systems must be implemented throughout the AI technology life cycle, supporting issue capture, assessment, and resolution. Scheduled monitoring and periodic re-evaluation are required to ensure ongoing performance, such as addressing data drift.

10. Clear, Essential Information

Information about the AI technology's context of use, performance, limitations, underlying data, updates, and interpretability or explainability must be presented in plain language, accessible and relevant to the intended audience, including users and patients.

1 See also the European Commission Stakeholders' Consultation on EudraLex Volume 4 - Good Manufacturing Practice Guidelines: revision of Chapter 4 (Documentation) and Annex 11 (Computerised Systems), and new Annex 22 (Artificial Intelligence).

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.