DiGAs - what needs to be considered from 2026?

The second amendment to the Digital Health Applications Ordinance comes into force on January 29, 2026. It once again tightens the requirements for manufacturers of digital health applications (DiGAs).
Evidence for DiGA

What does this mean for the evidence that must be provided for DiGAs?

The requirements are set out in version 3.6 of the guide, dated 10.12.2025. It is striking how much the content has been expanded and clarified. Insiders know that the actual requirements go well beyond the written text and that the document alone offers no certainty about how the various actors within the authority will interpret it. To avoid confusion, we deliberately compare the current version directly with the first version and thus summarize all changes to date without tracing the individual revision cycles.

After six years, the main differences between version 3.6 and the 2020 version with regard to evidence generation (chapter 4) are:

1. Definition of positive care effects (chapter 4.1)

  • 2020: defines positive care effects (PvE) rather descriptively; sections 4.1.1 “Medical benefit (mN)” and 4.1.2 “Patient-relevant structural and procedural improvements (pSVV)” are explained separately, with little discussion of how they are later operationalized in studies.
  • Version 3.6: mN and pSVV are clearly positioned as sub-categories of PvE and placed in the context of the subsequent verification requirements.
  • While the 2020 definition remains largely at the level of conceptual understanding (e.g. improvement of symptoms, functioning, quality of life) without formulating specific requirements for measurement instruments and endpoint structure, version 3.6 underlines that PvE must be demonstrated via clearly defined, patient-relevant endpoints and must be consistent with the medical purpose and the care path. Chapter 4.1 thus forms the normative anchor for the detailed study requirements in 4.3 and 4.6, for example with regard to external validity, study design, endpoint selection and the design of the evaluation concept.

2. Disclosure of positive care effects in the application (chapter 4.2)

In 2020, the focus is on a coherent description of the target population and the PvE, without detailed guidance on deriving the endpoints. Version 3.6 requires manufacturers to show clearly for which subgroups (e.g. indication, severity, care context) the PvE are claimed, how these groups correspond to the study populations and where exactly the PvE category stated in the application is reflected in the study protocol. What is new is the significantly closer link between the application text and study planning.

3. General requirements for studies demonstrating PvE (chapter 4.3)

This chapter has been expanded and specified from 3 to 10 sub-items and is by far the most substantial scientific extension. It is no longer enough to present “just any” good study: the planning must meet GCP-like requirements (randomization, ITT definition, imputation, registry entry, external validity, stringent reporting). This chapter has received the most additions and applies centrally to all forms of evidence generation.

  • Method selection (4.3.1): 2020 states rather generally that the method must be suitable and scientifically recognized. Version 3.6 clarifies under which conditions RCTs, controlled observational studies or alternative designs are acceptable and how risks of bias should be minimized.
  • Analysis and evaluation (4.3.2): completely new. Manufacturers must explain the principles they use to analyze endpoints, how they handle multiple testing, sensitivity analyses and subgroup analyses, and how they avoid bias in favor of the intervention group (a minimal multiplicity example follows after this list).
  • Care path (4.3.3): new. Manufacturers must explicitly describe in which care path the DiGA is used, which standard care it is compared against and why this comparison is suitable for demonstrating the PvE.
  • Randomization (4.3.4): new and very specific. Requirements for randomization lists, block lengths, allocation concealment, documentation of access rights and auditability of the process are defined. For manufacturers, this means that “simple” or opaque randomization procedures are no longer sufficient (see the randomization sketch after this list).
  • Study design and methodological planning (4.3.5): new and significantly deepened. Robust primary comparisons, a definition of the ITT population and conservative imputation methods (e.g. reference-based multiple imputation, J2R) are required, which raises the bar for statistical planning and documentation (see the imputation sketch after this list).
  • Application documents (4.3.6) and study reports (4.3.7): new. These define the minimum content of the study protocol, the statistical analysis plan (SAP), the registry entry and the study report so that the BfArM can verify the PvE.
  • External validity and consistency (4.3.8): new. Manufacturers must show that the study population represents the target population; inconsistencies in subgroups can lead to restrictions of the indication.
  • Conduct in Germany (4.3.9) and prespecification/study registry (4.3.10): version 3.6 expands the concise 2020 requirements (4.3.2 and 4.3.3), demands comprehensible justification when parts of the study are conducted abroad, and requires a clear specification of hypotheses and endpoints in the study registry.
  • Overall, a sound scientific understanding and consistent planning are required.
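
To make the multiple-testing requirement in 4.3.2 more tangible, here is a minimal sketch of a Holm adjustment across a hypothetical family of endpoints. The endpoint names and p-values are invented, and the guide does not prescribe this particular procedure; it is simply one recognized way to control the family-wise error rate.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values for one primary and two key secondary endpoints.
endpoints = ["symptom score", "quality of life", "adherence"]
p_raw = [0.012, 0.034, 0.21]

# Holm's step-down procedure controls the family-wise error rate at alpha.
reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="holm")

for name, p, p_a, r in zip(endpoints, p_raw, p_adj, reject):
    print(f"{name}: raw p = {p:.3f}, Holm-adjusted p = {p_a:.3f}, reject H0: {r}")
```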
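
As an illustration of what an auditable randomization list in the sense of 4.3.4 can look like, the following is a minimal sketch of a permuted-block allocation for a two-arm study. Block length, seed, number of blocks and arm labels are assumptions chosen for the example, not values from the guide; the documented seed and parameters stand in for the reproducibility and auditability that is demanded.

```python
import numpy as np

# Illustrative permuted-block randomization list for a two-arm study.
rng = np.random.default_rng(seed=20260129)      # documented seed
block = ["DiGA", "DiGA", "control", "control"]  # block length 4, 1:1 allocation
n_blocks = 25                                   # 100 participants in total

allocation = []
for _ in range(n_blocks):
    allocation.extend(rng.permutation(block))   # permute within each block

for i, arm in enumerate(allocation, start=1):
    print(f"Participant {i:03d}: {arm}")
```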
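
The reference-based imputation mentioned in 4.3.5 can be illustrated in a radically simplified form: missing outcomes are replaced by values from the reference (control) arm, so dropouts are assumed to lose the treatment effect. This single-imputation sketch with invented data only conveys the conservative direction of jump-to-reference (J2R); an actual submission would use validated reference-based multiple imputation with variance estimation via Rubin's rules.

```python
import numpy as np
import pandas as pd

# Hypothetical trial data: one post-baseline endpoint, NaN marks a dropout.
df = pd.DataFrame({
    "arm": ["DiGA"] * 6 + ["control"] * 6,
    "outcome": [5.1, 4.2, np.nan, 3.8, np.nan, 4.9,
                2.0, 2.5, 1.8, np.nan, 2.2, 2.7],
})

# Jump-to-reference, reduced to a single imputation: missing values are
# replaced by the control-arm mean, i.e. the treatment effect is assumed
# to be lost after dropout (conservative for the intervention arm).
reference_mean = df.loc[df["arm"] == "control", "outcome"].mean()
df["outcome_j2r"] = df["outcome"].fillna(reference_mean)

# ITT comparison: all randomized participants are analyzed in their arm.
print(df.groupby("arm")["outcome_j2r"].mean())
```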

4. Publication of complete study results (chapter 4.4)

In addition to the requirement to comply with international standards for study reports, the “presentation of results and discussion” already expected in practice is now also written down. The usual international standards for the scientific presentation of results are required, comparable to established journals with impact factors in the upper two quartiles (internal medicine) and a peer-review process. In line with scientific practice, the discussion should also address the existing literature and in particular existing results on DiGAs.

5. Application for provisional inclusion for testing (chapter 4.5)

In addition to justifying the claimed improvement in care, version 3.6 now also specifies the systematic data evaluation with reference to chapters 4.3.1 to 4.3.9 (e.g. clearly defined endpoints, analysis methods, planned sample size, handling of dropouts) and ties the evaluation concept closely to the separate appendix chapter “Evaluation concept and systematic data evaluation”. It is now explained much more clearly what a systematic data evaluation should look like methodologically and that, as before, it is not enough to simply analyze whatever data happen to be available.
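
As an illustration of the “planned sample size” and “handling of dropouts” mentioned above, here is a minimal planning sketch for a two-arm comparison on a continuous endpoint. The effect size, power and dropout rate are assumptions chosen for the example, not values from the guide.

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

# Assumed planning parameters (illustrative only).
effect_size = 0.35   # standardized mean difference (Cohen's d)
alpha, power = 0.05, 0.80
dropout = 0.20       # anticipated dropout proportion

# Evaluable participants needed per arm for a two-sided t-test.
n_per_arm = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)

# Inflate recruitment to compensate for expected dropout.
n_recruit = ceil(n_per_arm / (1 - dropout))
print(f"Evaluable per arm: {ceil(n_per_arm)}, to randomize per arm: {n_recruit}")
```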

6. Specific requirements for study types and study designs (chapter 4.6)

  • In essence, 2020 requires a comparative design with an appropriate control, a sufficient number of cases and suitable endpoints, without giving detailed guidance on statistical procedures or the handling of bias. Version 3.6 integrates the methodological requirements specified in chapter 4.3 (care path, ITT, conservative imputation, randomization) and makes it clear that observational studies are only accepted under strict conditions. This also includes collecting characteristics “completely and correctly in type and scope”. All of this places high demands on study quality and requires stringent design, planning and conduct. Examples are given to illustrate this.
  • While the 2020 “diagnostic quality studies” were covered relatively briefly, with general requirements for sensitivity, specificity and comparison against a reference standard and a focus on classic diagnostics, version 3.6 expands these requirements for “DiGA with a diagnostic component”, including how the diagnostic outputs of the DiGA are embedded in the care path, how misclassifications (false positives/false negatives) are assessed and how the diagnostic performance must be linked to the PvE (a minimal numerical illustration follows below). For manufacturers of diagnosis-related DiGAs, this makes it clear that pure test quality is not enough; it must be shown that the diagnostic component leads to measurable PvE in care.
  • Sources: BfArM, “The Fast-Track Process for Digital Health Applications (DiGA) in accordance with § 139e SGB V. A guide for manufacturers, service providers and users”, as of 17.04.2020 (DiGA-Leitfaden_2020.pdf), and version 3.6 of 10.12.2025 (diga_leitfaden-1.pdf).
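
To put concrete numbers on the diagnostic quality metrics named in point 6, here is a minimal, purely illustrative calculation of sensitivity, specificity and predictive values against a reference standard; all counts are invented and do not come from the guide.

```python
# Hypothetical 2x2 results of a DiGA's diagnostic component versus a
# reference standard (counts are invented for illustration only).
tp, fp, fn, tn = 85, 30, 15, 370

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"Sensitivity: {sensitivity:.2%}")
print(f"Specificity: {specificity:.2%}")
print(f"PPV: {ppv:.2%}  NPV: {npv:.2%}")
```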

The changes in chapter 4 of version 3.6 of the BfArM DiGA Guide significantly raise the requirements for planning, conducting and analyzing studies by linking PvE more closely to clearly defined endpoints, care paths, randomization, ITT analyses, imputation and external validity. For manufacturers, this means that proof of positive care effects will in future have to rest on GCP-like methodological standards and that exploratory, less robust study designs will be far less likely to be accepted. The BfArM explicitly requires the significantly increased methodological and regulatory requirements to be translated into a formally valid study design, clean conduct and verifiable documentation. Doing so reduces the risk for manufacturers that studies are rejected for methodological weaknesses and that inclusion in, or retention on, the DiGA directory is delayed or fails.

Does this sound relevant to you? Contact us. We would be happy to arrange a non-binding initial consultation so that we can generate the evidence in a way that the BfArM can accept.

