EU and US medicines regulators have jointly set out ten principles for good artificial-intelligence (AI) practice in the medicines lifecycle.
The principles give broad guidance on AI use in evidence generation and oversight across all phases of a medicine's lifecycle, from early research and clinical trials to manufacturing and safety monitoring.
The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) say that the principles are relevant for those developing medicines, as well as for marketing-authorisation applicants and holders.
"The guiding principles of good AI practice in drug development are a first step of a renewed EU-US co-operation in the field of novel medical technologies,” said EU Commissioner for Health and Animal Welfare Olivér Várhelyi.
Among the principles is that the use of AI technologies should align with “ethical and human-centric values”.
The regulators also say that oversight of AI should be proportionate to the risks involved, and that it should be clear why AI is being used.
Another principle says that data-source provenance, processing steps, and analytical decisions should be documented in a “detailed, traceable, and verifiable manner”.
The joint statement comes after negotiators from the European Parliament and EU Council reached agreement late last year on new pharmaceutical legislation.
The legislation accommodates broader use of AI across the medicines lifecycle and in regulatory decision-making, and creates additional possibilities for testing AI-driven methods for medicines in a controlled environment.