MDCG guideline clarifying the requirements for AI products in accordance with the AI Regulation, MDR and IVDR
A comprehensive guide on the interplay of the Medical Devices Regulation (MDR), the In Vitro Diagnostics Regulation (IVDR) and the Artificial Intelligence Regulation (AIA) has recently been published. It supports manufacturers, notified bodies and authorities in meeting the regulatory requirements for AI-based medical devices.

What is covered in document AIB 2025-1 MDCG 2025-6?
The MDCG guidance on requirements for AI products in the medical device sector can be found in the document "MDCG 2025-6" (FAQ on the interplay between MDR, IVDR and the AI Act), published in June 2025 by the Medical Device Coordination Group (MDCG). The document is a compilation of frequently asked questions (FAQ) on regulatory issues arising from the interaction of the Medical Devices Regulation (MDR), the In Vitro Diagnostics Regulation (IVDR) and Regulation (EU) 2024/1689 on Artificial Intelligence (also known as the AI Regulation; German: KI-VO; referred to below as the AI Act). It provides detailed explanations of regulatory requirements, risk classification, clinical evaluation and conformity assessment of medical devices with AI components.
The document was prepared by representatives of all member states of the European Union (EU) and was led jointly by the Medical Device Coordination Group (MDCG) of the European Commission and a member state, within the joint Artificial Intelligence Board (AIB). The group consists of expert committees appointed on the basis of their expertise in medical devices and in vitro diagnostics.
It explains the regulatory principles, classification, data management requirements, transparency, human oversight, clinical evaluation, conformity assessment and post-market surveillance, including post-market clinical follow-up (PMCF), with a focus on the safe and legally compliant use of AI in the medical technology sector.
AI systems that are used for medical purposes are referred to as artificial intelligence in medical devices, or medical device artificial intelligence (MDAI). All references to MDAI also include products in accordance with MDR Annex XVI, accessories for medical devices, in vitro diagnostics and accessories for in vitro diagnostics.
What are the first important aspects of AI systems for medical purposes?
Briefly summarized below (this does not replace reading the 27 pages of MDCG 2025-6 or continuing to follow the topic; see the source link at the end), the following topics are covered in seven sections on the interplay between MDR, IVDR and the AI Act:
1. What does MDCG 2025-6 regulate?
It provides answers to frequently asked questions about the interaction of the Medical Devices Regulation (MDR), the In Vitro Diagnostics Regulation (IVDR) and the AI Act, in particular for AI-based medical devices in accordance with Article 2(1) MDR or Article 2(2) IVDR. MDAI also covers products in accordance with MDR Annex XVI. For software, reference is made to MDCG 2019-11.
2. When is an MDAI considered a high-risk AI system within the meaning of the AI Act?
An MDAI is considered a high-risk AI system if the following two conditions are met: first, the MDAI is either a safety component of a device or is itself a medical device; second, the MDAI is subject to a conformity assessment by a notified body in accordance with the requirements of the MDR/IVDR.
3. When is an MDAI classified as a high-risk AI system under the AI Act?
When the system performs safety-related functions and is subject to conformity assessment by a notified body.
4. Does the AI Act influence the risk classification under MDR/IVDR?
No. The MDR/IVDR risk class determines the status as a high-risk AI system under the AI Act; the AI Act does not change this classification.
5. How is risk management addressed over the life cycle?
Both frameworks call for ongoing risk management, monitoring and quality management for MDAI.
6. What are the requirements for quality management systems?
Manufacturers must document and implement comprehensive, risk-based systems that cover the requirements of both regulations.
7. What are the risk management requirements?
Continuous identification, assessment and mitigation of reasonably foreseeable risks that high-risk MDAI may pose to health, safety and fundamental rights, including risks from data bias and threats to system robustness. This comprises the identification, analysis and mitigation of risks associated with system design, development and deployment across the entire life cycle of the product, and may include training measures.
8. What are the data and data governance requirements?
Data must be representative, as error-free as possible and transparent; manufacturers are advised to identify and minimize data risks and discrimination through well-designed studies. To ensure that sufficient clinical evidence is generated, the clinical evaluation or performance evaluation must be based on clinical data representative of the intended use and target population of the device.
9. What are the requirements for monitoring and mitigating unwanted bias in MDAI?
Manufacturers of high-risk MDAI are required to implement appropriate data governance and management practices that ensure the data quality meets the intended purpose and that potential biases that could affect health, safety or fundamental rights are identified, prevented and mitigated. MDAI must also have technical capabilities for automatic event recording (logging) to ensure traceability, in particular to identify and document risks arising from data set bias or from changes in the system over its lifetime; these requirements are supplemented by the MDR and IVDR, which require robust clinical data to ensure consistent performance across the target population. A logging sketch is shown under question 21 below.
10. Which types of data are defined to demonstrate the compliance of an AI system? (A split sketch follows the list below.)
- Training data: data used to train an AI system by fitting its learnable parameters;
- Validation data: data used to evaluate the trained AI system and to tune its non-learnable parameters and its learning process, among other things in order to prevent underfitting or overfitting;
- Validation data set: a separate data set or a part of the training data set, either as a fixed or variable split;
- Test data: data used for an independent evaluation of the AI system in order to confirm the expected performance of that system before it is placed on the market or put into service.
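To make the three data-set roles concrete, the following minimal Python sketch shows one common way to separate them. It assumes a scikit-learn environment and a hypothetical classification dataset (X, y); the split ratios and names are illustrative assumptions and are not prescribed by MDCG 2025-6 or the AI Act.

```python
# A minimal sketch of separating training, validation and test data,
# assuming scikit-learn and a hypothetical tabular dataset (X, y).
from sklearn.model_selection import train_test_split

def split_dataset(X, y, seed=42):
    # Hold out 20% as an independent test set for the final,
    # pre-market evaluation (cf. "test data" above).
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=0.20, stratify=y, random_state=seed
    )
    # Split the remainder into training data (fitting learnable parameters)
    # and validation data (tuning non-learnable parameters, detecting
    # underfitting or overfitting).
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=seed
    )
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```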
11. How are training, validation and test data for high-risk MDAI handled?
Using appropriate data is critical for accurate and clinically relevant results from MDAI. The training data must be representative of the target population.
In addition, clinical data must be robust and derived from well-designed studies. Data collection protocols should ensure that relevant characteristics of the target group, such as age, sex, ethnic origin and disease status, are sufficiently represented in the data sets. This allows appropriate generalization and helps to avoid bias.
Manufacturers must apply strict data governance practices and validate the training data. According to the AI Act, data sets for high-risk MDAI must be of high quality, sufficiently representative and free from biases that could affect health or fundamental rights. The AI Act also requires measures for data protection and transparency in data processing.
The European Commission is developing horizontal guidance for the practical implementation of these requirements, and the CEN/CENELEC Joint Technical Committee 21 is working on harmonised standards on data and bias.
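As an illustration of such data-collection checks, here is a hedged Python sketch that compares the distribution of one demographic attribute in a data set against assumed target-population proportions. The attribute names, proportions and tolerance are hypothetical assumptions, not values taken from the guideline.

```python
# A sketch of a representativeness check: compare the observed distribution
# of a demographic attribute against assumed target-population shares.
from collections import Counter

def check_representativeness(records, attribute, target_shares, tolerance=0.05):
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    findings = []
    for group, expected in target_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            findings.append(
                f"{attribute}={group}: observed {observed:.2%}, "
                f"expected {expected:.2%}"
            )
    return findings  # an empty list means no deviation beyond tolerance

# Example with hypothetical data:
patients = [{"sex": "female"}, {"sex": "male"}, {"sex": "female"}]
print(check_representativeness(patients, "sex",
                               {"female": 0.5, "male": 0.5}, tolerance=0.2))
```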
12. Which technical documents are required for MDAI?
The existing requirements of the MDR, IVDR and AI Act call for comprehensive technical documentation for MDAI. The MDR and IVDR require detailed descriptions of software, architecture, data processing and risk management. The AI Act additionally requires information on transparency and accountability, including risk assessments and performance tests.
The documentation must describe design, development, functionality, system architecture, computing resources and intended purpose. Manufacturers must provide evidence of compliance, including the training, validation and test data and the quality management processes. The AI Act also requires that a single, uniform set of technical documentation be created for high-risk MDAI.
13. What applies to the assessment of the technical documentation by a notified body?
According to Annex VII of the AI Act, devices in risk classes IIa/IIb (MDR) and B/C (IVDR) are subject to assessment of the technical documentation on a representative (sampling) basis, as defined in MDCG 2019-13, within the applicable conformity assessment procedure (in accordance with Sections 2.3 and 3.4 of Annex IX of the MDR and IVDR, as well as Section 10 of Annex XI of the MDR).
14. What are the transparency requirements for MDAI?
The AI Act and the MDR/IVDR establish complementary obligations for manufacturers and users to ensure the transparency of MDAI.
The AI Act requires high-risk MDAI to be developed in such a way that users can correctly interpret the system's outputs and use the system properly. Where there is direct human-machine interaction, users must be informed that they are interacting with an AI system.
The MDR/IVDR anchor transparency requirements in the general safety and performance requirements. Manufacturers must provide clear information about the intended purpose, operation and limitations of the device and develop the software according to the state of the art.
Both frameworks form a coherent whole that requires MDAI to be developed and used transparently. Transparency requirements are essential obligations within the risk and quality management systems and must be demonstrated in the conformity assessment.
15. What are the transparency, explainability, and data processing requirements for high-risk MDAI?
The AI Act requires transparency and understandable instructions for use so that users can make informed decisions and use the system properly.
There are strict requirements for data processing: the data used for training, validation and testing must be relevant, free from errors and bias, and sufficiently comprehensive to ensure robustness and performance.
The MDR/IVDR require users to receive comprehensive and understandable information about the medical device, its performance, and risks — including a description of how software components contribute to performance.
Together, the AI Act and the MDR/IVDR form a binding framework in which transparency and explainability are not optional: manufacturers must ensure that users can understand the logic, limitations and behavior of AI components, from development through documentation to post-market surveillance.
16. How is the interpretability of MDAI addressed?
The documentation must describe how inputs are processed and outputs are generated. High-risk MDAI must be developed in such a way that users can understand how the system works and arrives at its results, with documentation of its features, capabilities and limitations.
This should make it possible to verify and communicate AI decisions and to ensure safe and trustworthy operating conditions across the entire product life cycle.
17. How is usability defined for MDAI?
Manufacturers must apply usability principles when designing and developing AI medical devices to ensure safe and effective use. Manufacturers must eliminate or reduce risks due to operating errors as much as possible, taking into account user knowledge and potential training requirements.
AI systems should be developed according to user-centred design principles to enable safe and effective interaction. The usability processes and their results must be documented.
18. What human oversight requirements do the MDR, IVDR and AI Act contain for high-risk MDAI?
The AI Act requires manufacturers to equip high-risk MDAI with mechanisms for human oversight. These must include built-in operational constraints that the system cannot override and must allow human intervention in critical decision-making processes.
Oversight mechanisms must be commensurate with the risks and the system's level of autonomy. Documented oversight mechanisms and instructions for use should ensure safe use and oversight by healthcare professionals.
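As an illustration of such oversight mechanisms, the following hypothetical Python sketch routes low-confidence outputs to human review and treats all outputs as proposals that a clinician can override. The threshold and all names are illustrative assumptions, not requirements taken from the guideline.

```python
# A minimal, hypothetical human-oversight gate: outputs below a confidence
# threshold are routed to a clinician instead of being acted on automatically.
CONFIDENCE_THRESHOLD = 0.90  # hypothetical value, set via risk management

def triage_output(prediction, confidence):
    if confidence < CONFIDENCE_THRESHOLD:
        # Built-in operational constraint: the system must not act
        # autonomously; a healthcare professional reviews the case.
        return {"action": "refer_to_human_review", "prediction": prediction}
    # Even above the threshold, the output remains a proposal that a
    # clinician can override (human intervention in critical decisions).
    return {"action": "propose_to_clinician", "prediction": prediction}

print(triage_output("lesion_suspicious", confidence=0.72))
```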
19. Can human oversight of medical devices be understood as part of existing risk management measures?
Human oversight is a risk mitigation measure to prevent or minimize risks associated with the intended use or reasonably foreseeable misuse. Manufacturers must eliminate or reduce risks through safe design and take appropriate protective measures.
For MDAI, the manufacturer must consider as part of the risk assessment which level of human oversight is required, for example in robot-assisted surgical procedures, without jeopardizing the highly autonomous functionality in critical phases.
20. How are consent forms viewed in the context of AI medical devices?
The MDR/IVDR require informed consent forms for clinical investigations and performance studies. Participants must be informed about risks, benefits and objectives.
The AI Regulation supplements this with additional transparency obligations in the general use of high-risk AI systems in order to provide users and affected persons with sufficient information on capabilities, limits and risks.
21. How is the traceability of high-risk AI medical devices considered?
Traceability is a key element of both frameworks. The MDR/IVDR require that medical devices, including those with AI components, be traceable across the entire supply chain and product life cycle, including unique device identification, registration and post-market surveillance.
The AI Act requires functional traceability: high-risk AI systems must keep records of their performance and behavior during the life cycle. Traceability is therefore used in two senses, traceability of device movement and traceability of system function, so that both the hardware and the software dimension are adequately monitored.
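To illustrate functional traceability, here is a minimal Python sketch of automatic event recording (logging) in which each inference is stored with a timestamp, model version and an input fingerprint. The field names and the idea of hashing the input to limit personal data in logs are illustrative assumptions.

```python
# A hedged sketch of automatic event recording (logging) for functional
# traceability: each inference is recorded with timestamp, model version
# and an input fingerprint. Field names are illustrative assumptions.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="mdai_events.log", level=logging.INFO)
logger = logging.getLogger("mdai")

MODEL_VERSION = "1.4.2"  # hypothetical version identifier

def log_inference(input_bytes: bytes, output: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        # Fingerprint instead of raw data, to limit personal data in logs.
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
    }
    logger.info(json.dumps(record))

log_inference(b"example-image-bytes", "finding: no abnormality detected")
```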
22. What cybersecurity measures are required?
All three regulations emphasize robust cybersecurity measures in both phases, before and after placing on the market. Manufacturers must implement measures to prevent unauthorized access, cyber attacks and data manipulation.
The AI Act requires technical solutions that address AI-specific vulnerabilities. Manufacturers must secure data transmission and storage, prevent unauthorized access, and detect and respond to cybersecurity incidents. Cybersecurity is part of the essential requirements and must be considered in the risk and quality management systems.
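As one example of an anti-manipulation control, the following Python sketch verifies a model artifact's checksum before it is loaded. The file path and pinned digest are hypothetical; a real control would tie the expected digest to a secure release process.

```python
# A minimal sketch of one anti-manipulation control: verifying a model
# artifact's checksum before loading it. The pinned digest is hypothetical.
import hashlib

EXPECTED_SHA256 = "0f1e2d..."  # hypothetical digest from a secure release

def verify_model_artifact(path: str, expected: str = EXPECTED_SHA256) -> bool:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files do not exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError(f"Model artifact {path} failed integrity check")
    return True
```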
23. What criteria are set for evaluating the performance of AI medical devices?
The regulations specify criteria to ensure the safety, reliability and effectiveness of high-risk MDAI. They call for requirements such as accuracy, robustness and cybersecurity, as well as testing against predefined metrics and thresholds, to ensure that MDAI consistently fulfil their intended purpose.
They also require validation of the AI training process to ensure reliability and accuracy, including validation of the design, data collection, model training and the quality management system under various conditions, with continuous monitoring.
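To illustrate testing against predefined metrics and thresholds, here is a minimal Python sketch that computes sensitivity and specificity for a binary classifier and checks them against acceptance criteria. The criteria values are purely illustrative; real values would come from the manufacturer's own verification and validation plan.

```python
# A hedged sketch of testing against predefined metrics and thresholds.
# The acceptance criteria below are illustrative assumptions only.
ACCEPTANCE = {"sensitivity": 0.95, "specificity": 0.90}  # hypothetical

def evaluate(y_true, y_pred):
    # Tally the confusion-matrix cells for a binary classifier (1 = positive).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    metrics = {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
    # The device "passes" only if every predefined threshold is met.
    passed = all(metrics[k] >= v for k, v in ACCEPTANCE.items())
    return metrics, passed
```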
24. What are the specific clinical (MDR) or performance evaluation (IVDR) requirements for high-risk AI medical devices?
The MDR/IVDR require manufacturers to validate AI outputs through rigorous testing and to demonstrate that MDAI operate safely and deliver accurate, reliable and clinically relevant results. The AI Act additionally requires validation with regard to transparency, human oversight, accuracy, robustness and cybersecurity, as well as verification that MDAI do not infringe fundamental rights.
All three regulations emphasize testing under various conditions, documentation of the evaluation processes and continuous monitoring. The regulations require clinical evidence and clinical benefit for the intended patient population. For MDAI that are able to continue learning after being placed on the market, the AI Act additionally requires control of predefined changes.
25. How can manufacturers conduct clinical investigations and performance studies in accordance with the MDR/IVDR and the AI Act?
High-risk MDAI must be supported by clinical evidence to demonstrate safety, performance and, where appropriate, clinical benefit. This can be done through clinical investigations (MDR) or performance studies (IVDR).
Where high-risk MDAI undergo clinical investigations or performance studies, this constitutes testing in a real operational situation. The AI Act allows testing of high-risk AI systems before they are placed on the market under certain conditions, without affecting MDR and IVDR requirements.
26. What processes are in place to generate clinical evidence to support the safety and performance of AI medical devices?
MDR/IVDR require the generation of clinical evidence through clinical or performance evaluations, including clinical trials and performance studies. This includes designing and carrying out studies to assess performance, reliability, and clinical impact with specific study design, data collection, and statistical analysis requirements.
All three regulations emphasize the importance of robust evidence to demonstrate the safety, performance, and effectiveness of AI medical devices.
27. What are the conformity assessment procedures for AI systems within the scope of the MDR/IVDR and the AI Act?
The relevant conformity assessment procedure for AI systems classified as high-risk MDAI is determined by the MDR/IVDR. For high-risk MDAI to which both frameworks apply, the procedure under the harmonisation legislation listed in Annex I of the AI Act, i.e. the MDR/IVDR, takes precedence.
AI systems that are classified as high-risk based on Article 6(2) of the AI Act and fall within the areas of Annex III, points 2 to 8, follow the conformity assessment procedure set out in Annex VI of the AI Act (internal control without the involvement of a notified body). AI systems under Annex III, point 1, follow one of the conformity assessment procedures provided for in Article 43 of the AI Act.
28. What is the process for proving compliance?
High-risk MDAI are subject to the relevant conformity assessment procedure based on their risk classification. Most MDAI are classified in class IIa (MDR) or class B (IVDR) or higher and require a notified body to audit the quality management system and review the technical documentation.
According to Article 43(3) of the AI Act, the requirements of Articles 8 to 15 of the AI Act and the specific provisions on quality management systems and technical documentation must be taken into account as part of the MDR/IVDR conformity assessment procedure.
29. How are substantial modifications under the AI Act coordinated with changes that could require a new conformity assessment under the MDR/IVDR?
The concept of substantial modification is an independent concept under the AI Act. According to Article 43(4) of the AI Act, high-risk MDAI that have undergone a conformity assessment must undergo a new conformity assessment whenever they are substantially modified.
The European Commission will develop guidelines for practical implementation.
30. When are post-market changes to a high-risk MDAI not a substantial modification?
High-risk MDAI that continue to learn after being placed on the market can have pre-determined change plans reviewed during the conformity assessment. Changes pre-determined by the manufacturer and assessed during the initial conformity assessment do not constitute a substantial modification.
Such pre-determined changes must be clearly specified and documented, and should then not be treated as a change to the certified medical device under the MDR/IVDR. This must be anchored in the change management system and in the technical documentation.
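As a purely hypothetical illustration, the following Python sketch encodes the bounds of such a pre-determined change plan and checks a candidate model update against them. The keys and limits are invented for illustration; the actual plan and its bounds are defined by the manufacturer and assessed in the conformity assessment.

```python
# A hypothetical sketch of encoding a pre-determined change plan's bounds
# and checking a candidate update against them. All values are illustrative.
PREDEFINED_CHANGE_PLAN = {
    "allowed_change_types": {"retraining_on_new_data"},
    "min_sensitivity": 0.95,        # performance must stay within the bounds
    "max_input_schema_version": 1,  # no change to the intended inputs
}

def within_predefined_plan(change):
    plan = PREDEFINED_CHANGE_PLAN
    return (
        change["type"] in plan["allowed_change_types"]
        and change["sensitivity"] >= plan["min_sensitivity"]
        and change["input_schema_version"] <= plan["max_input_schema_version"]
    )

update = {"type": "retraining_on_new_data", "sensitivity": 0.96,
          "input_schema_version": 1}
print(within_predefined_plan(update))  # True: within the plan in this sketch
```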
31. Do high-risk MDAI on the market that undergo a significant design change before August 2, 2027 have to undergo a new conformity assessment in accordance with the AI Act?
No. The term "placing on the market" refers to each individual device. The date of application for high-risk AI systems in accordance with Article 6(1) of the AI Act is August 2, 2027.
For devices placed on the market before this date, significant design changes made from August 2, 2027 onwards are subject to the AI Act, but changes made before that date are not. For devices placed on the market from August 2, 2027, the AI Act requirements apply in full.
32. What are the post-market surveillance requirements for MDAI?
Manufacturers must establish and implement surveillance systems to monitor performance and safety after the devices have been placed on the market. This includes systematic data collection and analysis, risk analysis, evaluation of adverse events and, where necessary, corrective and preventive actions. Manufacturers must implement a vigilance system and report adverse events to the authorities.
33. What are the requirements for continuous performance monitoring mechanisms for AI medical devices?
The MDR/IVDR require post-market surveillance systems to monitor performance and safety. The AI Act requires a monitoring plan to systematically collect and analyze relevant performance data over the system's lifetime and to ensure continuous compliance with the requirements of Articles 8 to 15 of the AI Act.
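To illustrate such continuous monitoring, here is a minimal Python sketch that keeps a rolling window of confirmed post-market outcomes and flags when the observed accuracy falls below a claimed level. The claimed accuracy, window size and alerting logic are illustrative assumptions.

```python
# A minimal sketch of continuous post-market performance monitoring: keep a
# rolling window of confirmed outcomes and flag when observed accuracy drops
# below the level claimed in the technical documentation. Values are
# illustrative assumptions.
from collections import deque

class PerformanceMonitor:
    def __init__(self, claimed_accuracy=0.93, window=500):
        self.claimed = claimed_accuracy      # hypothetical claimed value
        self.results = deque(maxlen=window)  # rolling window of outcomes

    def record(self, prediction_correct: bool):
        self.results.append(prediction_correct)

    def check(self):
        if len(self.results) < self.results.maxlen:
            return None  # not enough post-market data yet
        observed = sum(self.results) / len(self.results)
        if observed < self.claimed:
            return f"ALERT: observed accuracy {observed:.2%} below claim"
        return "OK"
```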
34. What new requirements does the AI Act add to the existing post-market surveillance obligations?
The AI Act maintains the manufacturer's monitoring obligation but additionally requires monitoring of the interaction with other AI systems, devices and software. Users must monitor operation and, where necessary, inform the manufacturer.
The European Commission will adopt implementing acts by February 2, 2026 with a template for the post-market monitoring plan. This plan can be integrated with existing MDR/IVDR surveillance plans if equivalent protection is ensured.
35. Should “in-house” AI medical devices that are only manufactured and used within healthcare facilities be classified as high-risk AI systems?
One condition for classification as a high-risk AI system is that the MDAI is subject to a conformity assessment by a notified body. Devices developed and used exclusively within health institutions in the Union ("in-house" devices) are not subject to a notified body conformity assessment.
Such MDAI are therefore not classified as high-risk AI systems. However, other AI Act obligations still apply, including the prohibition of certain practices.
36. What should training for healthcare professionals look like?
The MDR/IVDR and the AI Act require manufacturers to provide training for users of MDAI where appropriate, to ensure proper use and risk reduction. The AI Act requires transparency and information for users so that they can understand the system and make appropriate decisions.
Where human oversight requirements are identified, the persons entrusted with oversight must be able to understand and adequately monitor the capabilities and limitations of the MDAI. Manufacturers should recommend training that provides a sufficient understanding of the interpretability of the system. The AI Act also obliges providers and deployers to ensure that their personnel have a sufficient level of AI literacy.
Source: https://health.ec.europa.eu/document/download/b78a17d7-e3cd-4943-851d-e02a2f22bbb4_en?filename=mdcg_2025-6_en.pdf
The regulatory requirements for AI medical devices in Europe are complex and demand a high level of responsibility from manufacturers for patient protection, safety and ethics. Many open questions remain, such as the practical implementation of transparency, explainability and the monitoring of learning systems, which calls for intensive scientific discourse. Undercutting by products from countries without comparable standards threatens this high level of protection and requires consistent market surveillance and international harmonization. MEDIACC is committed to scientifically excellent, ethically responsible solutions and supports manufacturers in implementing these demanding requirements.
Would you like a competent partner to generate evidence for your AI medical device? Contact us for an initial consultation!
Show the medical benefits of your product
With our many years of experience and expertise, we offer effective solutions to demonstrate the medical benefits of your product.
From the conception to the execution of preclinical and clinical investigations, we support you with customized services.
Find out how MEDIACC can help you achieve reimbursability for your products.