news

FDA relaxes AI medical oversight: 1,000 unreviewed diagnostic apps

David BrooksDavid Brooks-February 10, 2026-7 min read
Share:
Conceptual representation of AI-powered medical devices without FDA regulatory oversight

Photo by Unsplash on Unsplash

Key takeaways

The FDA just made it easier for AI diagnostic tools to reach the market without review—but 95% of already-approved devices never report safety issues. Meanwhile, Epic and Cerner face unfair competition from startups bypassing regulation, and ChatGPT diagnoses differently based on patient race.

On January 6, 2026, the FDA published guidance that reduces oversight of clinical decision support software and AI-powered wearables. Over 1,000 diagnostic applications can now enter the market without premarket review if they meet minimal criteria for non-medical software.

This isn't a response to new evidence showing AI medical devices are safer than we thought. It's a policy decision aligned with Commissioner Martin Makary's pledge to move at "Silicon Valley speed" and make the U.S. "the best place for AI capital investment."

The elephant in the room: of the 950 AI-enabled medical devices the FDA approved through August 2024, only 5% ever reported a safety issue post-market. Not because they're flawless—because nobody checks their real-world performance once they're deployed in hospitals.

Epic and Cerner lose ground while startups bypass FDA review

Here's what this actually means for the enterprise healthcare market: regulatory arbitrage at scale.

In November 2025, Epic Systems announced Azure OpenAI integration for its EHR platform, used by hundreds of U.S. hospitals. Oracle Health (formerly Cerner) launched its Clinical Digital Assistant (CDA) with generative AI for medical documentation in October.

Both companies are integrating AI defensively. They know their enterprise clients expect these capabilities, but they also know any algorithmic error exposes them to multimillion-dollar liability because their systems are deeply embedded in clinical workflow. They carry the compliance burden of existing FDA processes—premarket submissions, clinical validation data, post-market surveillance infrastructure.

Startups like Abridge (ambient clinical documentation), PathAI (diagnostic pathology), and Hippocratic AI (voice agents for non-diagnostic tasks) can now operate under the new guidance with minimal oversight if they structure their software as "clinical decision support" offering a single recommendation.

This creates regulatory arbitrage:

Aspect Epic/Cerner AI startups without FDA
EHR integration Deep, years of development Surface-level, API or standalone
Legal liability High, part of clinical system Diffuse, "just suggestions"
Regulatory cost Millions in compliance Practically zero
Competitive advantage Installed infrastructure Speed to market

PathAI has FDA 510(k) clearance and CE mark for its AISight Dx system, giving it regulatory credibility. It now competes with generative AI tools analyzing pathology images without any FDA review. Epic and Oracle don't publish accuracy data for their AI systems, so this analysis relies on public documentation and interviews with hospital CTOs.

The FDA's 510(k) pathway—designed for "substantially equivalent" devices—has been controversial for decades. The January 2026 guidance doesn't reform 510(k); it bypasses it entirely for software that meets "non-device" criteria. That means AI diagnostic tools offering single recommendations can skip premarket review if the logic is "transparent" and clinicians can "independently review the basis" of recommendations.

What does "independently review" mean when the algorithm is a large language model with 175 billion parameters? Or a convolutional neural network trained on millions of radiological images? Algorithmic transparency is an aspiration, not a technical reality in modern AI.

The 95% data gap nobody's talking about

According to an academic report published in December 2025 in NCBI PMC, 95% of FDA-approved AI medical devices never submitted post-market safety data. The surveillance infrastructure hasn't scaled with approval volume—it still relies on voluntary manufacturer reporting.

The FDA approved 221 new AI-enabled devices in 2023 alone, representing 49% year-over-year growth since 2016. The healthcare AI market is valued at $56 billion in 2026 according to Fortune Business Insights. Post-market surveillance capacity hasn't grown proportionally.

Makary announced the agency is developing a "new regulatory framework for AI" designed to move at "Silicon Valley speed." He also plans to eliminate half of existing digital health guidances, though the timeline is unclear.

Let's be real: we're loosening oversight of a product category where 95% already operates in a safety data vacuum, and calling it regulatory innovation when it's actually abdicating responsibility.

The FDA removed the exclusion for "time-critical decision-making" software that existed in the 2022 guidance. Tools that suggest diagnoses in emergency departments can now operate without premarket review. There's no mechanism to detect "AI drift" (when algorithms change behavior after deployment) or mandatory demographic equity audits.

Wearable devices estimating blood pressure, oxygen saturation, or glucose using non-invasive sensors can now operate as "general wellness" products without FDA oversight—even if users make medical decisions based on those readings. The distinction between "wellness" and "diagnosis" is legal, not physiological. If a diabetic adjusts insulin based on a glucose estimate from an unregulated wearable that turns out to be 20% inaccurate, the consequences are medical, not wellness.

ChatGPT's diagnostic bias: racial discrimination by algorithm

In November 2025, researchers documented that ChatGPT, when evaluating college students with sore throat, placed HIV and syphilis "much higher" in the differential diagnosis if the patient's race was specified as Black, compared to white patients with identical symptoms.

Tools like ChatGPT Health (launched by OpenAI in January 2026, just as the FDA relaxed oversight) aren't subject to FDA review if positioned as "general information" rather than medical devices. The 2026 guidance allows clinical decision support software offering a single recommendation to operate without oversight if it meets "non-device" criteria.

That distinction is technical, not clinical. For the physician consulting the tool, it's information influencing diagnosis.

Clinical scenario White patient Black patient Difference
College student, sore throat, fever Strep throat, mononucleosis HIV, syphilis "much higher" Documented racial bias
Usage context Same prompt, only race varies Same prompt, only race varies Algorithm without FDA oversight

AI algorithms trained on historical data inherit the biases of that data. In medicine, that means patients from racial minorities, women, and populations underrepresented in clinical trials receive distorted differential diagnoses. The question isn't whether these biases exist—we know they do. The question is who audits them when the FDA just reduced oversight.

Between 2015 and 2018, IBM deployed Watson for Oncology in hospitals across the U.S., India, Thailand, South Korea, and China. The system recommended cancer treatments based on analysis of medical records and oncology literature. Watson for Oncology never underwent formal FDA review. No clinical trials demonstrated safety or accuracy. IBM marketed it as a decision support tool, not a medical device.

In 2018, leaked internal documents revealed Watson recommended "unsafe and incorrect" treatments in multiple cases, including suggesting bevacizumab (which can cause severe bleeding) to a patient with cerebral hemorrhage. IBM shut down Watson Health in 2022 after years of losses.

The precedent is established: large-scale AI systems can deploy in hospitals without FDA review if structured as support software. The 2026 guidance institutionalizes this loophole.

Silicon Valley speed meets healthcare: Makary's deregulation agenda

In June 2025, before being confirmed as FDA Commissioner, Martin Makary told FierceBiotech the agency should move at "Silicon Valley speed" and make the U.S. "the best place for AI capital investment."

That philosophy explains the January 2026 guidance. It's not a response to scientific evidence. It's a political decision aligned with the Trump administration's pro-AI agenda.

After years covering enterprise healthcare, this is the first time I've seen a regulator openly admit prioritizing industry speed over safety verification. When only 5% of approved devices report adverse events, relaxing oversight doesn't accelerate innovation—it accelerates venture capital return while patients assume the risk.

AI has real potential in medicine: radiology algorithms have demonstrated accuracy comparable to radiologists in breast cancer detection, and documentation tools can reduce physician burnout. But that potential isn't realized by eliminating oversight. It's realized through rigorous clinical trials, equity audits, effective post-market surveillance, and genuine algorithmic transparency.

What the FDA did on January 6 wasn't deregulation to accelerate innovation. It was dismantling the only mechanism we had to know if these tools work before they fail on the wrong patient.

Was this helpful?

Frequently Asked Questions

What exactly changed in FDA regulation in January 2026?

The FDA published guidance on January 6, 2026 reducing oversight of clinical decision support software and wearable devices. Now, tools offering a single diagnostic recommendation can enter the market without premarket review if they meet non-medical software criteria. The 'general wellness' category was also expanded to include wearables estimating physiological parameters like blood pressure or glucose.

Why is it problematic that 95% of AI devices don't report adverse events?

It means there's no effective post-market surveillance. The FDA approves devices based on premarket data but relies on subsequent reports to detect real-world problems. If 95% of devices never report failures, it's impossible to know if they're working correctly in hospitals or causing silent harm.

How does this affect Epic Systems and Oracle Health (Cerner)?

Epic and Cerner already invested millions in FDA compliance processes for their EHR systems. They now compete against startups that can launch AI tools without those costs or scrutiny. This creates regulatory arbitrage: incumbents carry legal liability that their competitors avoid.

What is racial bias in medical AI algorithms?

AI algorithms trained on historical data inherit the biases in that data. In the documented ChatGPT case, Black patients with identical symptoms to white patients received HIV and syphilis higher in the differential diagnosis. This reflects historical healthcare disparities, but without FDA oversight, these biases aren't audited or corrected.

Can I trust wearables estimating glucose or blood pressure without FDA approval?

It depends on usage. If you use them as a general wellness reference, risk is low. But if you make medical decisions (adjusting insulin, changing blood pressure medication) based on readings from a device not clinically validated, risk increases significantly. The FDA no longer requires these devices to demonstrate clinical accuracy if sold as 'general wellness.'

Sources & References (6)

The sources used to write this article

  1. 1

    FDA announces sweeping changes to oversight of wearables, AI-enabled devices

    STAT News•Jan 6, 2026
  2. 2

    The illusion of safety: A report to the FDA on AI healthcare product approvals

    NCBI PMC•Dec 1, 2025
  3. 3

    After FDA's pivot on clinical AI, we need AI safety research more than ever

    STAT News Opinion•Jan 15, 2026

All sources were verified at the time of article publication.

David Brooks
Written by

David Brooks

Veteran tech journalist covering the enterprise sector. Tells it like it is.

#fda#medical ai#regulation#medical devices#epic systems#cerner#chatgpt health#algorithmic bias#watson oncology#martin makary

Related Articles