Using AI in Clinical Trials: Key Considerations and Challenges

The use of artificial intelligence (AI) in clinical trials involving medical devices has the potential to optimize study design and documentation, patient recruitment, data analysis and safety monitoring, among other things. At the same time, this technological integration poses ethical, regulatory, technical and organizational challenges that must be carefully considered.


Ethical Aspects and Fairness

Avoiding bias and ensuring equal opportunity

A key requirement when working with AI systems is to actively avoid biases that can disadvantage individual groups. AI models clearly reflect the inequalities of their training data, as anyone who already uses such systems in everyday life will recognize. In healthcare in particular, this can have serious and unacceptable consequences: analyses show that documented historical disadvantages of large patient groups persist in algorithms. A broad and balanced data basis that takes age, gender, origin and other characteristics into account is therefore essential, as are regular bias audits.
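What such a bias audit can look like in its simplest form is sketched below: the model's sensitivity is compared across demographic subgroups, and large gaps are flagged for review. The column names, groups and the tolerance of ten percentage points are illustrative assumptions, not a fixed standard.

```python
# Minimal bias-audit sketch: compare model sensitivity across subgroups.
# Column names ("sex", "y_true", "y_pred") and the 10-percentage-point
# tolerance are illustrative assumptions, not a regulatory requirement.
import pandas as pd

def subgroup_sensitivity(df: pd.DataFrame, group_col: str) -> pd.Series:
    """True-positive rate (sensitivity) per subgroup."""
    positives = df[df["y_true"] == 1]
    return positives.groupby(group_col)["y_pred"].mean()

def audit(df: pd.DataFrame, group_cols: list[str], max_gap: float = 0.10) -> None:
    for col in group_cols:
        sens = subgroup_sensitivity(df, col)
        gap = sens.max() - sens.min()
        status = "OK" if gap <= max_gap else "REVIEW"
        print(f"{col}: gap between best and worst group = {gap:.2f} -> {status}")

# Example with a small synthetic data set:
df = pd.DataFrame({
    "sex":    ["f", "f", "m", "m", "f", "m"],
    "y_true": [1,   1,   1,   1,   0,   0],
    "y_pred": [1,   0,   1,   1,   0,   1],
})
audit(df, ["sex"])  # gap = 0.50 -> REVIEW
```

In practice, such checks would run on every retraining and be documented as part of the audit trail.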

Transparency and explainability

A major problem with many AI methods is their lack of transparency. Medical staff and study participants must be able to understand how a decision was reached in order to develop trust in the technology. Methods that explain how algorithms work are crucial here. Explanations must be adapted to the respective target groups and be easy to understand in order to enable responsible decision-making.
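As an illustration, one widely used family of explanation methods is feature attribution. The sketch below uses permutation importance from scikit-learn on a deliberately simple toy model; the feature names and the data are invented for the example and do not represent any real clinical model.

```python
# Minimal explainability sketch: permutation feature importance shows how
# much the model's accuracy drops when one feature is shuffled.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # invented features, see names below
y = (X[:, 0] + 0.2 * rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["age", "lab_value", "risk_score"], result.importances_mean):
    print(f"{name}: mean accuracy drop = {imp:.3f}")
```

The output makes visible which inputs drive the model, which can then be translated into target-group-appropriate explanations.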

Self-determination and informed consent

Participants must be fully informed about whether and in what form AI is being used. It must be ensured that they understand the role of AI in the study process and can give conscious, independent consent. Informational materials should be adapted so that language and cultural differences are bridged.

Legal framework and data protection

European and national regulations

The legal regulation of artificial intelligence in clinical trials rests on several overlapping European and international legal texts, which together form a comprehensive protective framework. The foundation is Regulation (EU) 2024/1689, better known as the Artificial Intelligence Act (AI Act), which introduces a risk-based classification system for AI applications. This four-tier model assigns AI systems to categories ranging from unacceptable risk to minimal risk.

AI systems in clinical trials are usually classified as high-risk systems, as they can make decisions that directly affect the health and safety of study participants. This applies in particular to AI applications in the following areas:

  • Patient recruitment and selection
  • Allocation of study participants to treatment groups
  • Performing diagnostic evaluations
  • Monitoring and detection of safety signals
  • Generation of synthetic control groups
  • General decision support

Regulation (EU) 2024/1689 imposes strict compliance requirements on high-risk AI systems before they may be used.

Data quality and data management requirements: The training, validation and test data sets used must be of high technical and substantive quality. This includes ensuring that the data adequately reflects the diversity of the target population and contains no systematic biases. According to the MDCG documents AIB 2025-1/MDCG 2025-6, data must meet special quality standards when AI systems are used in medical applications.
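A minimal sketch of what automated checks for completeness and representativeness could look like before training is shown below; the column names, reference shares and tolerances are illustrative assumptions, not values prescribed by the guidance documents.

```python
# Minimal data-quality sketch: completeness and representativeness checks
# before training. Column names, limits and reference shares are invented.
import pandas as pd

def check_completeness(df: pd.DataFrame, max_missing: float = 0.05) -> None:
    for col, share in df.isna().mean().items():
        if share > max_missing:
            print(f"WARNING: {col} is {share:.0%} missing (limit {max_missing:.0%})")

def check_representation(df: pd.DataFrame, col: str,
                         reference: dict, tol: float = 0.10) -> None:
    observed = df[col].value_counts(normalize=True)
    for group, expected in reference.items():
        if abs(observed.get(group, 0.0) - expected) > tol:
            print(f"WARNING: {col}={group}: {observed.get(group, 0.0):.0%} "
                  f"observed vs. {expected:.0%} expected")

data = pd.DataFrame({"sex": ["f", "m", "m", "m"], "age": [70, 65, None, 58]})
check_completeness(data)                                 # flags missing age
check_representation(data, "sex", {"f": 0.5, "m": 0.5})  # flags imbalance
```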

Regulation (EU) No 536/2014 (Clinical Trials Regulation, CTR) is the primary European legal text for conducting studies with medicinal products for human use. It sets out the requirements for study design, protection of trial subjects, approval procedures and quality control. AI systems used in clinical trials must meet all CTR requirements.

The European Medicines Agency (EMA) has published several specialized guidelines to govern the use of modern technologies in clinical trials. The guideline EMA/INS/GCP/112288/2023 deals specifically with computerized systems and electronic data in clinical trials. This guideline addresses, among other things:

  • Reliability and security requirements for electronic data systems
  • Measures to ensure the integrity of study documents
  • Control mechanisms to prevent accidental or unauthorized changes
  • System validation, training, and ongoing review requirements
  • Traceability and logging of all system processes (see the sketch below)
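Requirements such as traceability and protection against unauthorized changes are often met with append-only, tamper-evident audit trails. The following hash-chain sketch illustrates the principle only; it is not a validated GCP system.

```python
# Minimal audit-trail sketch: each log entry is chained to its predecessor
# via a SHA-256 hash, so any later modification becomes detectable.
import datetime
import hashlib
import json

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, user: str, action: str) -> None:
        entry = {
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "prev": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("dr_smith", "corrected visit date of subject 012")
print(log.verify())  # True; editing any stored entry later makes this False
```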

In addition, the EMA has published a reflection paper on the use of artificial intelligence across the medicinal product lifecycle, which provides basic guidance for the responsible use of AI from early drug development through approval and beyond. This paper underlines that AI systems must comply with basic ethical principles and that their areas of application must be clearly defined and documented.

The European Commission's documents AIB 2025-1/MDCG 2025-6 provide practical guidance on applying the AI Act to medical devices and medical software, including AI systems in clinical applications. They explain how the various European regulations (AI Act, Medical Device Regulation, In Vitro Diagnostic Medical Devices Regulation) interact and which specific requirements apply to AI-based medical devices.

Regulation (EU) 2024/1689 does not replace existing regulations but supplements them. This means that AI systems in clinical trials are subject to several sets of regulations in parallel.

Regulation (EU) 2016/679 (General Data Protection Regulation, GDPR) forms the overarching protection framework for personal data in Europe and must not be breached even when AI is used.

All requirements for the design, planning, conduct and evaluation of clinical studies and trials must additionally comply with international standards and good clinical practice, in particular the Declaration of Helsinki, whose basic principles include:

  • Respect for trial subjects
  • Scientific integrity
  • Patient safety
  • Transparency and traceability

These principles have long been established and must be complied with.


Regulatory compliance and oversight

International regulatory authorities have also developed principles on how AI should be used safely and responsibly in the area of drug trials. These include requirements for data quality, traceability, continuous monitoring and clear protocols for assigning responsibility in the event of errors or incidents.

Technical and methodological requirements

Data quality and reliability

The reliability of AI models in research depends on the quality of the underlying data. Carefully defined criteria are required:

  • Measurement accuracy and timeliness of the clinical data used
  • Assurance that the data reflect the entire target population and the diversity of real-world care settings
  • Protection against errors and inconsistencies in data collection (see the sketch below)
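The following sketch shows a simple automated plausibility check that flags out-of-range values for manual review; the value ranges are illustrative assumptions, not clinical reference ranges.

```python
# Minimal plausibility-check sketch: flag rows with out-of-range values.
# The ranges below are invented for the example.
import pandas as pd

RANGES = {"age": (18, 100), "systolic_bp": (70, 250), "weight_kg": (30, 250)}

def range_check(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows with at least one implausible value for manual review."""
    mask = pd.Series(False, index=df.index)
    for col, (lo, hi) in RANGES.items():
        if col in df:
            mask |= ~df[col].between(lo, hi)  # NaN also counts as suspicious
    return df[mask]

data = pd.DataFrame({"age": [54, 154], "systolic_bp": [120, 118]})
print(range_check(data))  # flags the implausible age of 154
```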

Evidence of suitability for practice

AI models must prove their reliability not only in the laboratory but also under the diverse conditions of everyday clinical practice. This requires:

  • Validation on data from different medical centers or institutions (as sketched below)
  • Demonstration of consistent performance under changing data distributions
  • Continuous monitoring and adjustment of the model where necessary
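One established way to test transferability across institutions is leave-one-site-out validation: the model is trained on all sites but one and evaluated on the held-out site. The sketch below illustrates this with synthetic data and scikit-learn; the data, the model and the number of sites are invented for the example.

```python
# Minimal cross-site validation sketch: leave-one-site-out evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                 # synthetic clinical features
y = (X[:, 0] > 0).astype(int)                 # synthetic outcome
site = rng.integers(0, 3, size=300)           # 3 hypothetical study sites

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=site):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    auc = roc_auc_score(y[test_idx], model.predict_proba(X[test_idx])[:, 1])
    print(f"held-out site {site[test_idx][0]}: AUC = {auc:.2f}")
```

A model whose performance collapses on a held-out site is a candidate for retraining or restricted deployment.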

Particular attention should be paid to the fact that simple AI models are often easier to explain, while complex models sometimes achieve higher accuracy. Model selection must always take the clinical context and the associated risks into account.

Integration into clinical trials and human oversight

Forms of oversight and decision-making

The responsible introduction of AI into clinical trials always requires a sensible balance between automation and control by medical professionals. Depending on the risk, one or more humans must be integrated into the process:

  • Decisions of major importance should always be reviewed by a physician ("human in the loop" principle; see the sketch below)
  • Minor support functions may require less direct control but must always allow for intervention
  • The level of oversight is flexible and must be adapted to the specific requirements of the study
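A minimal sketch of such risk-tiered gating follows: suggestions above an assumed risk threshold, or below an assumed confidence threshold, are routed to a physician instead of being applied automatically. The fields and thresholds are illustrative assumptions.

```python
# Minimal human-in-the-loop gating sketch: route high-risk or low-confidence
# AI suggestions to a physician. Thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class AiSuggestion:
    subject_id: str
    action: str
    risk: float        # assumed clinical impact of the action, 0..1
    confidence: float  # model confidence, 0..1

def route(s: AiSuggestion,
          max_auto_risk: float = 0.3,
          min_confidence: float = 0.9) -> str:
    if s.risk > max_auto_risk or s.confidence < min_confidence:
        return "QUEUE_FOR_PHYSICIAN_REVIEW"
    return "AUTO_APPLY_WITH_OVERRIDE_OPTION"

s = AiSuggestion("012", "flag ECG as abnormal", risk=0.8, confidence=0.95)
print(route(s))  # QUEUE_FOR_PHYSICIAN_REVIEW
```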

Integration into existing processes

AI should support work processes in clinical research so that everyday work becomes more productive rather than additionally burdened. Barriers such as technical incompatibilities, lack of training or resistance within teams must be addressed in a targeted manner. Ideally, the introduction takes place in small steps and in close coordination with practitioners.

Education and training

Targeted continuing education is essential so that those responsible fully understand the possibilities as well as the limits and risks of artificial intelligence. This includes:

  • Basic knowledge of how various AI applications work
  • Training in identifying and preventing bias
  • Learning to handle recommendations from automated systems critically

Ensuring safety and recording adverse events

Artificial intelligence can help identify unusual or safety-relevant patterns in large amounts of data, for example through automated screening procedures that extract indications of adverse events from free text or files. Yet humans remain irreplaceable: misclassifications by the AI or ambiguous results must always be reviewed in order to protect patients in the long term.
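As a simple illustration, the sketch below flags possible adverse-event mentions in free-text visit notes for mandatory human review. The term list is an invented placeholder; a real system would use a curated safety dictionary (such as MedDRA) and considerably more robust text analysis.

```python
# Minimal free-text screening sketch: flag possible adverse-event mentions.
# The term list is a tiny invented placeholder, not a safety dictionary.
import re

AE_TERMS = ["nausea", "dizziness", "rash", "syncope", "chest pain"]
PATTERN = re.compile(r"\b(" + "|".join(AE_TERMS) + r")\b", re.IGNORECASE)

def screen_note(note: str) -> list[str]:
    """Return matched terms; every hit must still be assessed by a human."""
    return [m.group(0).lower() for m in PATTERN.finditer(note)]

note = "Patient reports mild dizziness after dose increase; no chest pain."
print(screen_note(note))  # ['dizziness', 'chest pain']
```

The second hit shows why human review stays mandatory: the note explicitly negates chest pain, which this naive screen cannot detect.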

Taking diversity and participation into account

For the results of clinical research to benefit as many people as possible, attention must be paid to involving different population groups as broadly as possible. Artificial intelligence can help include previously underrepresented groups in a more targeted manner and make allocation to study groups more equitable. However, it is important that algorithms do not themselves become new barriers by amplifying systematic biases in the data. Such risks can only be limited through targeted review and external control.
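One possible form of such review is statistical monitoring of enrollment against a reference population, sketched below with a chi-square goodness-of-fit test; the counts and reference shares are illustrative assumptions.

```python
# Minimal enrollment-monitoring sketch: test whether enrolled subjects
# deviate from a reference population distribution. Numbers are invented.
from scipy.stats import chisquare

enrolled = [62, 38]                  # observed counts, e.g. male / female
reference_share = [0.5, 0.5]         # assumed shares in the target population
expected = [sum(enrolled) * s for s in reference_share]

stat, p = chisquare(enrolled, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Enrollment deviates from the reference; review recruitment.")
```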

Implementation challenges and outlook

The introduction of AI systems in clinical trials requires a holistic approach based on ethical principles, technical robustness, comprehensible processes and fair participation of all parties involved. A balance must always be struck between technological progress, the preservation of human control, the protection of sensitive data and the fair treatment of all patient groups. Clear rules, ongoing training and transparent, responsible use of new methods are the prerequisites for harnessing the benefits of artificial intelligence responsibly and for the benefit of society as a whole.

Artificial intelligence offers clinical trials considerable efficiency potential in patient recruitment, data validation and safety monitoring, but its responsible use requires simultaneous compliance with several complex European and international regulations, an additional effort that leads to more robust and credible evidence in the long term. The central task is not simplification through AI, but its well-considered integration as a professionalization tool that complements, rather than replaces, human responsibility and critical judgment.

(Status: fall 2025)

Sources: available on request from MEDIACC

Are you interested in whether and how MEDIACC uses AI systems as a CRO in digital clinical trials for medical devices? Stay tuned or feel free to contact us for an initial consultation!

