EU AI Act & Medical Devices: Dual-Framework Compliance Guide for SaMD Manufacturers

The EU AI Act applies to your AI medical device. There is no exemption. This guide covers the obligations under Articles 9–15, maps the overlap with EU MDR, and sets out the practical compliance pathway manufacturers should follow before the August 2027 deadline.
Executive Overview
The European Union's Artificial Intelligence Act (Regulation (EU) 2024/1689) introduces a horizontal regulatory framework for AI systems across all sectors. For manufacturers of AI-powered medical devices already regulated under the Medical Device Regulation (Regulation (EU) 2017/745), this creates a dual compliance obligation: devices must satisfy both MDR and AI Act requirements simultaneously.
⚠️ THIS IS NOT A FUTURE PROBLEM The AI Act entered into force in August 2024. Penalties for prohibited practices apply from February 2025. Full obligations for high-risk AI systems — which include virtually all AI medical devices — apply from August 2027, with transitional provisions for systems already on the market requiring compliance from August 2026.
For SaMD manufacturers with existing CE certification, the critical question is not whether the AI Act applies — it does — but how to integrate AI Act compliance into existing MDR quality and regulatory systems without duplicating effort or creating parallel compliance workstreams.
This paper sets out the regulatory framework, identifies the key obligations, maps the overlap between MDR and AI Act requirements, and provides a practical compliance pathway for medical device manufacturers.
1. Why AI Medical Devices Are Classified High-Risk
The AI Act classifies AI systems by risk tier: unacceptable, high-risk, limited, and minimal. AI-powered medical devices fall into the high-risk category through two independent routes:

| Route | Description |
| :--- | :--- |
| Route 1: Annex I (Sector Legislation) | AI systems that are safety components of products covered by EU harmonisation legislation — including MDR (2017/745) and IVDR (2017/746) — and that require third-party conformity assessment are automatically classified as high-risk AI. |
| Route 2: Annex III (Use Case) | Annex III explicitly lists AI systems intended to be used in healthcare and medical diagnosis as high-risk, regardless of whether they are also covered by sector-specific legislation. |
For any SaMD manufacturer whose devices require Notified Body conformity assessment under MDR, both routes apply. There is no exemption, no opt-out, and no ambiguity: the AI Act applies.
KEY CLARIFICATION (MDCG/AIB FAQ, June 2025): The MDCG and the AI Board jointly confirmed that a system qualifying as both an AI system under the AI Act AND a medical device under the MDR must comply with both frameworks. The AI Act conformity assessment for medical devices can be integrated into the MDR conformity assessment conducted by the same Notified Body — provided the NB is designated for AI Act assessments.
2. The Five Pillars of AI Act Compliance for Medical Devices
The AI Act imposes specific obligations on providers of high-risk AI systems through Articles 9–15. For medical device manufacturers, these articles create requirements that are additional to — but in many cases overlap with — existing MDR obligations.

Article 9: Risk Management System
The AI Act requires a continuous, iterative risk management system throughout the AI system's lifecycle. This extends beyond ISO 14971 device risk management to include AI-specific risks:
| AI-Specific Risk Category | Description |
| :--- | :--- |
| Model Drift | Performance degradation as real-world data distributions shift from training data |
| Algorithmic Bias | Differential performance across demographic subgroups, imaging equipment, or clinical settings |
| Edge Case Failures | Unpredictable behaviour on inputs outside the training distribution |
| Feedback Loop Risks | Where AI outputs influence future training data, creating self-reinforcing errors |
| Automation Bias | The risk that clinicians over-rely on AI output and reduce independent clinical judgement |
MDR OVERLAP: ISO 14971 risk management covers device hazards but typically does not address algorithmic risks in sufficient depth. Manufacturers will need to extend their existing risk management process to incorporate AI-specific risk categories — not replace it. For manufacturers already maintaining robust risk management files for Class IIb or Class III devices, this is an extension, not a rebuild.
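The "extension, not rebuild" point can be sketched as a data-structure exercise: AI-specific entries are appended to the existing ISO 14971-style risk register rather than maintained in a separate parallel file. The field names, numbering scheme, and example hazards below are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str   # e.g. "AI-001" -- numbering scheme is illustrative
    category: str  # AI-specific category from the table above
    hazard: str    # foreseeable hazardous situation
    control: str   # planned risk control measure

# AI-specific entries extend the existing device risk register; they do not replace it.
ai_risk_extension = [
    RiskEntry("AI-001", "Model Drift",
              "Sensitivity degrades as the installed scanner fleet is upgraded",
              "Quarterly performance monitoring against a fixed reference set"),
    RiskEntry("AI-002", "Automation Bias",
              "Clinician accepts a false-negative triage output without review",
              "UI requires explicit confirmation; IFU states the limitation"),
]

def merge_registers(device_register, ai_extension):
    """Combine existing ISO 14971 device entries with AI-specific entries
    into a single risk management file."""
    return list(device_register) + list(ai_extension)

combined = merge_registers([], ai_risk_extension)  # empty device register for brevity
print(len(combined))  # 2
```

In practice the device register would already hold the conventional hazards; the point of the sketch is that one merged file feeds one risk management process.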
Article 10: Data and Data Governance
This is the most technically demanding requirement. Manufacturers must document and demonstrate that training, validation, and testing datasets are:
- Relevant and representative of the intended patient populations and clinical settings
- Free from errors and sufficiently complete for the intended purpose
- Subject to appropriate data governance including provenance tracking, annotation quality assurance, and bias assessment
- Validated across demographic subgroups (age, sex, ethnicity), imaging equipment types, and geographic regions

For companies deploying AI across multiple countries with models trained on millions of annotated images, this means demonstrating that performance is equitable and representative — not just accurate in aggregate. An overall AUC of 0.95 can mask significant underperformance in specific subpopulations.
MDR OVERLAP: MDR Annex I requires clinical evidence of safety and performance, but does not explicitly require subgroup bias analysis or training data governance documentation. The AI Act fills this gap. Manufacturers will need new documentation artefacts that do not currently exist in standard MDR technical files.
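The claim that aggregate accuracy can mask subgroup failure is easy to demonstrate numerically. The sketch below computes AUC per subgroup using the Mann-Whitney rank formulation; the site names, scores, and labels are toy data, not drawn from any real device.

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: the model works at site_A but is inverted at site_B,
# yet the pooled figure still looks respectable.
subgroups = {
    "site_A": ([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]),
    "site_B": ([0.4, 0.3, 0.6, 0.7], [1, 1, 0, 0]),
}
pooled_scores = subgroups["site_A"][0] + subgroups["site_B"][0]
pooled_labels = subgroups["site_A"][1] + subgroups["site_B"][1]
print("pooled:", auc(pooled_scores, pooled_labels))  # 0.75
for name, (s, y) in subgroups.items():
    print(name, auc(s, y))  # site_A 1.0, site_B 0.0
```

A pooled AUC of 0.75 gives no hint that one site's performance is worse than chance, which is exactly why Article 10 pushes subgroup-level validation.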
Article 13: Transparency and Information to Deployers
High-risk AI systems must be designed to ensure transparency. For medical AI, this means clear documentation of:
- What the AI system does, its capabilities, and its limitations
- Intended purpose, including specific patient populations and clinical contexts
- Performance metrics and known failure modes
- How to interpret AI outputs correctly
- Circumstances under which the AI system may produce unreliable results
MDR OVERLAP: MDR labelling requirements (Annex I, Chapter III) cover intended purpose and basic safety information, but the AI Act requires deeper technical transparency specifically about algorithmic behaviour, limitations, and interpretation guidance. Instructions for Use will likely need updating.
Article 14: Human Oversight
High-risk AI systems must be designed to allow effective human oversight. For medical AI, the critical question is whether the system operates as:
| Deployment Model | Characteristics |
| :--- | :--- |
| Decision Support | AI provides information or recommendations. A clinician independently reviews and acts. Human oversight is inherent in the workflow. |
| Autonomous / Semi-Autonomous | AI auto-prioritises worklists, triggers alerts, or escalates cases without per-case clinician review before action. Higher human oversight documentation burden. |

Products that auto-prioritise PACS worklists or trigger automated notifications (e.g., critical finding alerts in emergency triage) have autonomous elements that require robust documentation demonstrating that clinicians can override, understand, and meaningfully supervise the AI output.
MDR OVERLAP: MDR Annex I Section 22 addresses software-specific requirements and requires information about the degree of clinical decision support. The AI Act goes further by requiring documented human oversight mechanisms per deployment model, not just per device.
Article 15: Accuracy, Robustness and Cybersecurity
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle. For medical AI, this includes:
- Documented accuracy metrics with confidence intervals, per intended use
- Robustness against adversarial inputs, data corruption, and environmental variation
- Resilience to attempts to alter system behaviour through manipulation of training data or inputs
- Cybersecurity measures appropriate to the risk level
MDR OVERLAP: MDR Annex I Section 17 addresses cybersecurity for medical devices. The AI Act adds requirements specifically for adversarial robustness and training data integrity that are unique to AI systems.
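For "documented accuracy metrics with confidence intervals", a minimal sketch is the Wilson score interval for a binomial proportion such as sensitivity. The sample counts below are made up for illustration, and the AI Act does not mandate this particular interval; it is simply a common, well-behaved choice for proportions.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# Hypothetical study: 180 of 200 true-positive cases detected.
lo, hi = wilson_ci(180, 200)
print(f"sensitivity 0.90, 95% CI [{lo:.3f}, {hi:.3f}]")  # [0.851, 0.934]
```

Reporting the interval, not just the point estimate, is what turns a headline metric into evidence a Notified Body can assess.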
3. Practical Compliance Pathway
The key principle confirmed by the MDCG/AIB FAQ is regulatory coherence: manufacturers should integrate AI Act compliance into their existing MDR quality system rather than building parallel compliance structures. The practical pathway is:
- Step 1 — Gap Analysis: Assess current MDR documentation and QMS processes against AI Act Articles 9–15. Identify what is already covered, what needs extending, and what is entirely new.
- Step 2 — Notified Body Engagement: Confirm whether your Notified Body is designated (or plans to seek designation) for AI Act conformity assessments. If not, identify alternative pathways.
- Step 3 — Documentation Development: Develop AI Act-specific documentation artefacts: AI risk management extension, data governance framework, transparency documentation, human oversight assessment per product, and accuracy/robustness evidence.
- Step 4 — QMS Integration: Embed AI Act requirements into existing QMS processes — design controls, risk management, PMS, document control — rather than creating separate AI Act procedures.
- Step 5 — Post-Market Monitoring Extension: Extend PMS system to include AI-specific monitoring: model performance tracking, drift detection, bias monitoring, and integration into PSUR reporting.
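One way to operationalise the drift-detection element of Step 5 is the Population Stability Index (PSI), comparing the live post-market score distribution against a training-era reference. PSI is a common monitoring technique, not one mandated by the AI Act, and the thresholds in the comment are rule-of-thumb assumptions to be tuned per device.

```python
import math

def psi(reference, live, bins=10):
    """Population Stability Index between a training-era reference score
    distribution and a post-market (live) one. Rule of thumb (assumption,
    tune per device): < 0.1 stable, 0.1-0.25 investigate, > 0.25 drift."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins

    def fractions(data):
        counts = [0] * bins
        for x in data:
            idx = max(0, min(int((x - lo) / width), bins - 1))  # clamp outliers
            counts[idx] += 1
        # small epsilon avoids log(0) for empty bins
        return [(c + 1e-6) / (len(data) + 1e-6 * bins) for c in counts]

    ref, act = fractions(reference), fractions(live)
    return sum((a - r) * math.log(a / r) for r, a in zip(ref, act))

# Identical distributions score 0; a clearly shifted one trips the threshold.
baseline = [i / 1000 for i in range(1000)]       # uniform scores on [0, 1)
drifted = [0.5 + i / 2000 for i in range(1000)]  # shifted toward high scores
print(round(psi(baseline, baseline), 3))  # 0.0
print(psi(baseline, drifted) > 0.25)      # True
```

A scheduled job computing this against each deployment site's scores, with threshold breaches feeding the PSUR, is one concrete shape the "PMS extension" can take.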
4. Timeline and Urgency

| Date | Obligation |
| :--- | :--- |
| 1 August 2024 | AI Act enters into force |
| 2 February 2025 | Prohibited AI practices enforcement begins. AI literacy training requirements apply. |
| 2 August 2025 | General-purpose AI model obligations, governance, and penalties apply. |
| 2 August 2026 | High-risk AI systems already on the market must demonstrate compliance. Transitional conformity assessment provisions apply. |
| 2 August 2027 | Full AI Act obligations apply to all high-risk AI systems, including medical devices. NB conformity assessment under AI Act required for new and modified devices. |
⚠️ DON'T WAIT FOR HARMONISED STANDARDS Waiting for harmonised standards to be published is a common but risky strategy — the obligations are defined in the regulation text, and harmonised standards provide presumption of conformity, not the requirements themselves. Manufacturers with products already on the EU market should begin gap analysis and documentation development now.
Frequently Asked Questions
Does the EU AI Act apply to my medical device?
If your device uses AI and requires Notified Body conformity assessment under MDR, the AI Act classifies it as high-risk through two independent routes (Annex I and Annex III). There is no exemption. You must comply with both EU MDR and the AI Act.
When do I need to be compliant?
The AI Act entered into force in August 2024. If your AI device is already on the EU market, transitional compliance applies from August 2026. Full obligations for all high-risk AI systems apply from 2 August 2027. Gap analysis should begin now, not closer to the deadline.
What are the main new requirements beyond EU MDR?
The AI Act adds five key obligations through Articles 9–15: AI-specific risk management (model drift, algorithmic bias, automation bias), training data governance and bias assessment, transparency documentation about AI limitations and failure modes, human oversight per deployment model, and adversarial robustness testing. Many overlap with MDR but go deeper into algorithmic territory.
Can my Notified Body handle AI Act conformity assessment?
The MDCG/AIB FAQ confirmed that AI Act conformity assessment can be integrated into MDR conformity assessment by the same NB — but only if the NB is designated for AI Act. Check with your NB now. If they are not seeking designation, you may need to identify alternative pathways.
Do I need to build a separate AI Act QMS?
No. The recommended approach is to integrate AI Act requirements into your existing MDR quality system — extending risk management, PMS, design controls, and document control processes rather than creating parallel structures. This is more efficient and more sustainable.

Start Your AI Act Gap Analysis
VigilaMed delivers multi-framework gap assessments covering ISO 13485, EU MDR, EU AI Act, IEC 62304, and ISO 14971 — for SaMD manufacturers from Class I through Class III.
Download the Full White Paper PDF
About the Author
Michelle Hilling is Managing Director of VigilaMed Ltd, a Glasgow-based QARA consultancy specialising in EU MDR, UKCA, FDA QMSR, and EU AI Act compliance for medical device manufacturers across Class I, Class II, and Class III classifications.
With over 10 years embedded in medical device manufacturing environments — including Class III cardiovascular implants, active surgical devices, and diagnostic equipment — Michelle maintains a 100% audit pass rate with zero major findings across 20+ Notified Body and regulatory authority audits.
ISO 13485:2016 Lead Auditor · FDA 21 CFR Part 820 · Six Sigma Yellow Belt
Michelle.Hilling@VigilaMed.com · www.VigilaMed.com
Need help with Regulatory Intelligence?
Our team of experts can help you navigate these regulatory requirements seamlessly. Book a discovery call today.
Book a Discovery Call