Artificial intelligence (AI) is rapidly becoming integrated into hospital workflows, yet regulatory oversight is struggling to keep pace. The current framework, built for static medical devices like pacemakers, was not designed for AI software that continuously evolves through updates. This poses a critical question: who ensures patient safety once AI is already in use?
The FDA’s Shift Towards Post-Market Oversight
On December 29th, the U.S. Food and Drug Administration (FDA) opened public comment on a citizen petition proposing a change in how certain radiology AI tools are regulated. The proposal suggests shifting away from repeated pre-market FDA reviews toward ongoing oversight after AI is deployed. This means safety evaluations would happen in real-world clinical settings, rather than relying solely on upfront approvals.
This shift is not without risk. The core issue is accountability. While regulators can set requirements and manufacturers can monitor performance, the ultimate consequences fall on patients, physicians, and healthcare systems. As AI expands its role in patient care, the lines of responsibility blur.
Speed Versus Safety: The Challenge of Adaptive AI
Modern clinical AI is designed to change. Updates refine performance, expand capabilities, and incorporate new medical knowledge. This adaptability is essential for relevance, but it clashes with the traditional regulatory model of fixed products.
Dr. Kei Nakagawa, director at UC San Diego Health, emphasizes that reduced pre-market friction requires robust post-market surveillance: “If we reduce pre-market friction, post-market surveillance isn’t optional… We’re dealing with a new species of clinical AI, and the framework has to work beyond the ivory towers.”
Chris Wood, CEO of RevealDx, warns that the petition's implications extend beyond simple detection tools. Because the proposal covers not only CADe (computer-aided detection) but also CADx (computer-aided diagnosis), he argues, unchecked changes could have "dire" consequences.
Uneven Oversight Across Healthcare Systems
Some institutions have dedicated governance committees, validation protocols, and informatics infrastructure to monitor AI performance. However, many others do not. The FDA’s decision to solicit public comment underscores the uncertainty surrounding adaptive AI regulation.
Andrew Menard, executive director at Johns Hopkins Health System, notes the divide among experts: some argue existing pathways are too burdensome, while others fear that shifting oversight could introduce new risks. The core question remains: who is accountable? Academic centers may have the resources to monitor AI safety, but smaller hospitals often lack the same capacity.
Brenton Hill, head of operations at the Coalition for Health AI (CHAI), stresses that effective lifecycle oversight must extend beyond major academic centers. “If we’re going to let innovation move faster, we have to build safety nets that work for small hospitals, not just the biggest systems.”
Physicians as the Final Line of Defense?
Some experts believe physicians may become the ultimate safeguard as AI integrates more deeply into clinical workflows. Dr. Scott Mahanty, a practicing radiologist, supports the petition but insists on “teeth” in the enforcement mechanisms: “Minimum elements for post-market plans, explicit expectations around handling performance drift, attention to under-represented populations, and clear requirements for how key real-world findings are communicated back to users.”
However, not all AI carries the same risk. Dr. Lauren Nicola, CEO of Triad Radiology, points out that some tools allow physicians to independently verify results, while others do not. Radiologists also lack the time or resources to act as regulators themselves.
The Future of Clinical AI Regulation
The FDA’s invitation for public comment signals that existing regulatory tools may be insufficient for software that evolves after deployment. Some form of lifecycle oversight appears likely, shifting more responsibility onto manufacturers and clinicians.
The critical issue is not just how medical AI is regulated, but who owns safety once it becomes part of patient care. Clear rules are needed for performance checks and handling problems, ensuring that health systems of all sizes can maintain patient safety over time. This petition highlights the urgent need for collaboration between manufacturers, healthcare systems, clinicians, and regulators to ensure AI’s safe integration into everyday care.