On January 6, 2026, the FDA published guidance that reduces oversight of clinical decision support software and AI-powered wearables. Over 1,000 diagnostic applications can now enter the market without premarket review if they meet minimal criteria for non-medical software.
This isn't a response to new evidence showing AI medical devices are safer than we thought. It's a policy decision aligned with Commissioner Martin Makary's pledge to move at "Silicon Valley speed" and make the U.S. "the best place for AI capital investment."
The elephant in the room: of the 950 AI-enabled medical devices the FDA approved through August 2024, only 5% ever reported a safety issue post-market. Not because they're flawless—because nobody checks their real-world performance once they're deployed in hospitals.
Epic and Cerner lose ground while startups bypass FDA review
Here's what this actually means for the enterprise healthcare market: regulatory arbitrage at scale.
In November 2025, Epic Systems announced Azure OpenAI integration for its EHR platform, used by hundreds of U.S. hospitals. Oracle Health (formerly Cerner) launched its Clinical Digital Assistant (CDA) with generative AI for medical documentation in October.
Both companies are integrating AI defensively. They know their enterprise clients expect these capabilities, but they also know any algorithmic error exposes them to multimillion-dollar liability because their systems are deeply embedded in clinical workflow. They carry the compliance burden of existing FDA processes—premarket submissions, clinical validation data, post-market surveillance infrastructure.
Startups like Abridge (ambient clinical documentation), PathAI (diagnostic pathology), and Hippocratic AI (voice agents for non-diagnostic tasks) can now operate under the new guidance with minimal oversight if they structure their software as "clinical decision support" offering a single recommendation.
This creates regulatory arbitrage:
| Aspect | Epic/Cerner | AI startups without FDA |
|---|---|---|
| EHR integration | Deep, years of development | Surface-level, API or standalone |
| Legal liability | High, part of clinical system | Diffuse, "just suggestions" |
| Regulatory cost | Millions in compliance | Practically zero |
| Competitive advantage | Installed infrastructure | Speed to market |
PathAI has FDA 510(k) clearance and CE mark for its AISight Dx system, giving it regulatory credibility. It now competes with generative AI tools analyzing pathology images without any FDA review. Epic and Oracle don't publish accuracy data for their AI systems, so this analysis relies on public documentation and interviews with hospital CTOs.
The FDA's 510(k) pathway—designed for "substantially equivalent" devices—has been controversial for decades. The January 2026 guidance doesn't reform 510(k); it bypasses it entirely for software that meets "non-device" criteria. That means AI diagnostic tools offering single recommendations can skip premarket review if the logic is "transparent" and clinicians can "independently review the basis" of recommendations.
What does "independently review" mean when the algorithm is a large language model with 175 billion parameters? Or a convolutional neural network trained on millions of radiological images? Algorithmic transparency is an aspiration, not a technical reality in modern AI.
The 95% data gap nobody's talking about
According to an academic analysis published in December 2025 and indexed in NCBI's PubMed Central (PMC), 95% of FDA-approved AI medical devices have never submitted post-market safety data. The surveillance infrastructure hasn't scaled with approval volume; it still relies on voluntary manufacturer reporting.
The FDA approved 221 new AI-enabled devices in 2023 alone, reflecting roughly 49% average annual growth since 2016. The healthcare AI market is valued at $56 billion in 2026, according to Fortune Business Insights. Post-market surveillance capacity hasn't grown proportionally.
Makary announced the agency is developing a "new regulatory framework for AI" designed to move at "Silicon Valley speed." He also plans to eliminate half of existing digital health guidances, though the timeline is unclear.
Let's be real: we're loosening oversight of a product category where 95% already operates in a safety data vacuum, and calling it regulatory innovation when it's actually abdicating responsibility.
The FDA removed the exclusion for "time-critical decision-making" software that existed in the 2022 guidance. Tools that suggest diagnoses in emergency departments can now operate without premarket review. There's no mechanism to detect "AI drift" (when algorithms change behavior after deployment) or mandatory demographic equity audits.
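There is no FDA-mandated drift monitor, but the core idea is simple to sketch: compare the distribution of a model's live outputs against its validation-time baseline. Below is a minimal, illustrative Population Stability Index (PSI) check in Python; the bin count and the 0.2 alert threshold are common industry conventions, not regulatory requirements, and the data is synthetic.

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two samples of model scores.
    Values above ~0.2 are conventionally treated as significant drift."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    span = (hi - lo) or 1.0  # avoid division by zero for constant data

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            idx = min(int((x - lo) / span * bins), bins - 1)
            counts[idx] += 1
        # mild smoothing so empty bins don't produce log(0)
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    p = proportions(baseline)
    q = proportions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Validation-time scores vs. post-deployment scores (synthetic data)
baseline = [i / 100 for i in range(100)]          # uniform on [0, 1)
shifted = [min(1.0, s + 0.3) for s in baseline]   # distribution has drifted

assert psi(baseline, baseline) < 0.01  # identical data: no drift
assert psi(baseline, shifted) > 0.2    # shifted data: flag for review
```

A check like this costs a few lines of code per model. The point is not that it's hard; it's that nothing in the 2026 guidance requires anyone to run it.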
Wearable devices estimating blood pressure, oxygen saturation, or glucose using non-invasive sensors can now operate as "general wellness" products without FDA oversight—even if users make medical decisions based on those readings. The distinction between "wellness" and "diagnosis" is legal, not physiological. If a diabetic adjusts insulin based on a glucose estimate from an unregulated wearable that turns out to be 20% inaccurate, the consequences are medical, not wellness.
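To make the 20% figure concrete, here is an illustrative sketch using the standard insulin correction-dose formula, dose = (glucose − target) / correction factor. The target and correction factor below are hypothetical example values, not dosing guidance.

```python
def correction_dose(glucose_mg_dl, target=120, correction_factor=50):
    """Units of rapid-acting insulin per the standard correction formula:
    (current glucose - target) / correction factor.
    Illustrative only; real dosing is individualized by a clinician."""
    return max(0.0, (glucose_mg_dl - target) / correction_factor)

true_glucose = 150                     # actual blood glucose, mg/dL
wearable_reading = true_glucose * 1.2  # sensor overestimates by 20%

print(correction_dose(true_glucose))      # 0.6 units: correct dose
print(correction_dose(wearable_reading))  # 1.2 units: double the insulin
```

With these example numbers, a 20% overestimate doubles the suggested dose, which is exactly the kind of error that pushes a patient toward hypoglycemia. "Wellness" labeling doesn't change that arithmetic.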
ChatGPT's diagnostic bias: racial discrimination by algorithm
In November 2025, researchers documented that ChatGPT, when evaluating college students with a sore throat, placed HIV and syphilis "much higher" in the differential diagnosis if the patient's race was specified as Black than for white patients with identical symptoms.
Tools like ChatGPT Health (launched by OpenAI in January 2026, just as the FDA relaxed oversight) aren't subject to FDA review if positioned as "general information" rather than medical devices. The 2026 guidance allows clinical decision support software offering a single recommendation to operate without oversight if it meets "non-device" criteria.
That distinction is technical, not clinical. For the physician consulting the tool, it's information influencing diagnosis.
| Scenario | White patient | Black patient | Difference |
|---|---|---|---|
| College student, sore throat, fever | Strep throat, mononucleosis | HIV, syphilis "much higher" | Documented racial bias |
| Prompt | Identical, only race varies | Identical, only race varies | Race is the sole variable |
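The methodology behind such findings is a paired-prompt audit: hold the clinical vignette constant, vary only the demographic attribute, and compare the ranked differentials. A minimal sketch follows; the diagnosis lists are hypothetical illustrations of the pattern the researchers reported, not the study's actual outputs.

```python
def rank_shifts(differential_a, differential_b):
    """For each diagnosis appearing in both ranked lists, return how many
    positions it moved between list A and list B (positive = ranked
    higher, i.e. considered more likely, in list B)."""
    pos_a = {dx: i for i, dx in enumerate(differential_a)}
    pos_b = {dx: i for i, dx in enumerate(differential_b)}
    return {dx: pos_a[dx] - pos_b[dx] for dx in pos_a if dx in pos_b}

# Hypothetical model outputs for the same vignette, only race varied
diff_white = ["strep throat", "mononucleosis", "viral pharyngitis", "HIV"]
diff_black = ["HIV", "syphilis", "strep throat", "mononucleosis"]

shifts = rank_shifts(diff_white, diff_black)
print(shifts["HIV"])  # 3: moved from last place to the top of the list
```

An equity audit would run many such pairs and flag any diagnosis whose rank shifts systematically with a protected attribute. The check is cheap to automate; under the new guidance, no one is required to perform it.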
AI algorithms trained on historical data inherit the biases of that data. In medicine, that means patients from racial minorities, women, and populations underrepresented in clinical trials receive distorted differential diagnoses. The question isn't whether these biases exist—we know they do. The question is who audits them when the FDA just reduced oversight.
Between 2015 and 2018, IBM deployed Watson for Oncology in hospitals across the U.S., India, Thailand, South Korea, and China. The system recommended cancer treatments based on analysis of medical records and oncology literature. Watson for Oncology never underwent formal FDA review. No clinical trials demonstrated safety or accuracy. IBM marketed it as a decision support tool, not a medical device.
In 2018, leaked internal documents revealed Watson recommended "unsafe and incorrect" treatments in multiple cases, including suggesting bevacizumab (which can cause severe bleeding) to a patient with cerebral hemorrhage. IBM shut down Watson Health in 2022 after years of losses.
The precedent is established: large-scale AI systems can deploy in hospitals without FDA review if structured as support software. The 2026 guidance institutionalizes this loophole.
Silicon Valley speed meets healthcare: Makary's deregulation agenda
In June 2025, shortly after his confirmation as FDA Commissioner, Martin Makary told FierceBiotech the agency should move at "Silicon Valley speed" and make the U.S. "the best place for AI capital investment."
That philosophy explains the January 2026 guidance. It's not a response to scientific evidence. It's a political decision aligned with the Trump administration's pro-AI agenda.
In my years covering enterprise healthcare, this is the first time I've seen a regulator openly admit to prioritizing industry speed over safety verification. When only 5% of approved devices report adverse events, relaxing oversight doesn't accelerate innovation; it accelerates venture capital returns while patients assume the risk.
AI has real potential in medicine: radiology algorithms have demonstrated accuracy comparable to radiologists in breast cancer detection, and documentation tools can reduce physician burnout. But that potential isn't realized by eliminating oversight. It's realized through rigorous clinical trials, equity audits, effective post-market surveillance, and genuine algorithmic transparency.
What the FDA did on January 6 wasn't deregulation to accelerate innovation. It was dismantling the only mechanism we had to know if these tools work before they fail on the wrong patient.