On January 7, 2026, OpenAI revealed a stat nobody expected: 230 million people ask ChatGPT about health every week. That's 1 in 4 users. 40 million do it daily.
OpenAI's response was to launch ChatGPT Health, a dedicated experience that lets you connect your medical history, Apple Health data, and lab results directly to the AI.
Four days later, at the J.P. Morgan Healthcare Conference, Anthropic struck back with Claude for Healthcare. Same week. Same market. Different strategy.
And Google didn't sit still: MedGemma 1.5 arrived on January 13, becoming the first public model capable of interpreting 3D CT and MRI images.
In a single week, the three AI giants declared war over the largest digital health market in history: $505 billion by 2033.
But there's a problem nobody wants to discuss openly: your medical data isn't protected by HIPAA when you share it with these apps.
This is the complete guide to the healthcare AI war: what they offer, how much they cost, and, most importantly, whether we should trust them with our most intimate data.
ChatGPT Health: OpenAI's Personal Health Hub
ChatGPT Health isn't a new product—it's an evolution. OpenAI discovered that millions were already using ChatGPT for health questions and decided to formalize it.
What you can do with ChatGPT Health
Connect medical records: Through b.well, you can sync data from 2.2 million healthcare providers and 320 insurance plans in the United States. The AI accesses your history, medications, allergies, and lab results.
Integrate wellness apps: Apple Health, MyFitnessPal, Peloton, Weight Watchers, AllTrails, and Instacart. ChatGPT can analyze your steps, heart rate, sleep patterns, and eating habits.
Upload documents: Lab PDFs, photos of prescriptions, hospital discharge summaries. The AI processes them and explains what they mean in simple language.
Prepare medical appointments: Generates question lists for your doctor based on your history and current symptoms.
The scary (and exciting) numbers
| Metric | Data |
|---|---|
| Users asking about health weekly | 230 million |
| Daily health questions | 40+ million |
| Total active ChatGPT users | 800 million |
| Percentage asking about health | 25% (1 in 4) |
| Conversations outside clinic hours | 70% |
That last stat is crucial: 70% of health conversations with ChatGPT happen when clinics are closed. OpenAI is filling a real gap in the healthcare system.
The acquisition that reveals the strategy
On January 12, OpenAI bought Torch Health for between $60 and $100 million. A 4-person startup building a "unified medical memory" for AI.
Torch was founded by former employees of Forward Health, an AI clinic that closed in 2024 after raising $400 million. OpenAI doesn't just want you to ask about health: they want to be the central repository of your medical life.
Who already uses ChatGPT for Healthcare (Enterprise)
The enterprise version, with HIPAA compliance and BAA agreements, is already at:
- AdventHealth
- Boston Children's Hospital
- Cedars-Sinai Medical Center
- Memorial Sloan Kettering Cancer Center
- Stanford Medicine Children's Health
- UCSF
These hospitals don't use the consumer version. They use an isolated instance with specific security controls.
How much it costs
| Plan | Price | Health Access |
|---|---|---|
| Free | $0 | Yes (with ads) |
| Go | $8/mo | Yes |
| Plus | $20/mo | Yes |
| Pro | $200/mo | Yes |
| Enterprise | Custom | HIPAA compliant |
Claude for Healthcare: Anthropic's Enterprise Bet
While OpenAI targets consumers, Anthropic targets hospitals and insurers.
Fundamental strategy difference
ChatGPT Health says: "Understand your health." Claude for Healthcare says: "Automate medical paperwork."
Anthropic doesn't want to be your virtual doctor. They want to eliminate the hours doctors and nurses waste on administrative tasks.
Claude's specific capabilities
For consumers (Pro/Max):
- Connect records via HealthEx (50,000+ organizations)
- Summarize medical history
- Explain lab results
- Detect patterns in wearable data
For organizations:
- Prior authorization review
- Insurance claims processing
- Medical coding (ICD-10)
- System interoperability (FHIR)
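The interoperability item above refers to FHIR (Fast Healthcare Interoperability Resources), the JSON-based standard these enterprise tools exchange. As a minimal sketch of what that data looks like, here is a tiny FHIR R4-style Patient resource parsed with only the standard library; the field names follow the public FHIR spec, but the sample values are invented for illustration.

```python
import json

# Minimal FHIR R4-style Patient resource (sample values are invented).
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Rivera", "given": ["Ana"]}],
  "birthDate": "1984-03-02"
}
"""

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"

# FHIR stores names as structured components, not free text.
full_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(full_name)             # Ana Rivera
print(patient["birthDate"])  # 1984-03-02
```

Because every vendor emits the same resource shapes, a claims-processing or prior-authorization pipeline can consume records from any compliant system without per-hospital parsing code.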
Available connectors
Claude integrates with sources ChatGPT doesn't have:
- CMS Coverage Database
- National Provider Identifier Registry
- PubMed
- ClinicalTrials.gov
- Medidata
- bioRxiv and medRxiv
These connectors make Claude more useful for clinical research than for the average user.
Claude pricing
| Plan | Price | Healthcare Features |
|---|---|---|
| Pro | $20/mo | HealthEx, Function |
| Max | $100-200/mo | 5-20x more capacity |
| Team | $25-30/user | Collaboration |
| Enterprise | Custom | HIPAA-ready |
The strictest privacy policy
Anthropic has a rule OpenAI doesn't:
"A qualified professional is required to review content before disseminating when Claude is used for health decisions, diagnosis, patient care, therapy, or medical guidance."
Claude doesn't try to replace the doctor; it aims to make doctors more efficient.
MedGemma 1.5: Google Bets on Open-Source
Google took a different path: instead of creating a consumer product, they released a free model for others to build on.
The technical innovation
MedGemma 1.5 is the first public model capable of interpreting 3D CT and MRI images.
The previous version only processed 2D images (X-rays, skin photos). Now it can analyze volumetric data from CT scans, MRIs, and histopathology.
Measurable improvements
| Task | MedGemma 1.0 | MedGemma 1.5 |
|---|---|---|
| CT classification | 58% | 61% (+3 pts) |
| MRI classification | 51% | 65% (+14 pts) |
| Lab extraction | 60% | 78% (+18 pts) |
MedASR: Medical transcription
Google also launched MedASR, a speech recognition model specialized in medical dictation:
- 58% fewer errors than Whisper large-v3 on chest X-ray dictations
- 82% fewer errors on diverse medical dictations
Availability
MedGemma is completely free for research and commercial use. It's available on Hugging Face and Google Cloud Vertex AI.
But there's an important disclaimer: it's not approved for direct clinical use. It's a starting point for developers who want to create health applications.
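For developers curious what building on MedGemma looks like, here is a hedged sketch of how a multimodal request might be structured. The model ID and the chat-message layout below are assumptions modeled on the generic image-text-to-text format used by Hugging Face transformers, not an official MedGemma 1.5 API; the file name is a placeholder.

```python
# Hypothetical model identifier -- check Hugging Face for the real one.
MODEL_ID = "google/medgemma-1.5"

# Chat-style multimodal payload: one image slice plus a text instruction.
# The layout mirrors transformers' generic image-text-to-text message
# format; it is an assumption, not MedGemma's documented interface.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "scan_slice.png"},  # placeholder file
            {"type": "text", "text": "Describe any abnormal findings."},
        ],
    }
]

# In a real setup this payload would go to an inference pipeline, e.g.:
#   pipe = pipeline("image-text-to-text", model=MODEL_ID)
#   result = pipe(text=messages)
print(messages[0]["content"][1]["text"])
```

Remember the disclaimer above: any application built this way is a research prototype, not a clinical tool.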
The Elephant in the Room: Privacy
This is where the story gets complicated.
ChatGPT Health is NOT protected by HIPAA
HIPAA (Health Insurance Portability and Accountability Act) protects your medical data when you share it with healthcare providers, hospitals, and insurers.
But ChatGPT Health is not a healthcare provider. It's a consumer app.
When you upload your medical history to ChatGPT:
- You lose HIPAA protection on that data
- There's no legal privilege (it can be subpoenaed)
- OpenAI can change their terms at any time
- In case of a breach, you don't have the same rights as with your hospital
What experts say
Sara Geoghegan (Electronic Privacy Information Center):
"Sharing medical records with ChatGPT would strip HIPAA protection from those records, which is dangerous."
Bradley Malin (Vanderbilt University Medical Center):
"It's a contractual agreement between the individual and OpenAI. Nothing more."
The re-identification problem
Even when data is "anonymized," studies show algorithms can re-identify:
- 85.6% of adults
- 69.8% of children
What OpenAI promises
- Health data is in an isolated space
- Specific encryption for health data
- NOT used to train models
- Users can delete their data
Anthropic makes similar promises for Claude.
But the legal reality is clear: there's no comprehensive federal law in the United States protecting your health data outside the traditional medical system.
Complete Comparison: ChatGPT Health vs Claude Healthcare
| Aspect | ChatGPT Health | Claude Healthcare |
|---|---|---|
| Launch | January 7, 2026 | January 11, 2026 |
| Focus | Consumers | Organizations |
| Purpose | Understand your health | Automate workflows |
| Connectors | b.well (2.2M providers) | HealthEx (50K orgs) |
| Wearables | Apple Health, Peloton, etc. | Apple Health, Android Health |
| Clinical sources | No | PubMed, ClinicalTrials.gov |
| Base price | Free (with ads) | $20/mo |
| Enterprise HIPAA | Yes | Yes |
| Human supervision required | No | Yes (for clinical decisions) |
Which to choose
Choose ChatGPT Health if:
- You want to understand lab results
- You already use the Apple ecosystem (Health, Watch)
- You prefer a free option
- You don't mind sharing data with OpenAI
Choose Claude Healthcare if:
- You work in a healthcare organization
- You need clinical research (PubMed, trials)
- You prioritize stricter privacy policies
- You want a human to review before acting
Choose MedGemma if:
- You're a developer or researcher
- You need medical image analysis
- You want to self-host without depending on third parties
The $505 Billion Market
The reason OpenAI, Anthropic, and Google are fighting so aggressively is simple: money.
Healthcare AI market projections
| Year | Market Value |
|---|---|
| 2024 | $26.57 billion |
| 2026 | $36.79 billion |
| 2030 | $110-187 billion |
| 2033 | $505 billion |
The compound annual growth rate (CAGR) is 38.6%. Few markets grow like this.
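The quoted CAGR can be sanity-checked from the table's own endpoints ($26.57B in 2024 to $505B in 2033, i.e. nine years of growth):

```python
# CAGR = (end / start) ** (1 / years) - 1, using the table's endpoints.
start, end, years = 26.57, 505.0, 2033 - 2024
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~38.7%, matching the quoted 38.6% within rounding
```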
Specific subsegments
- AI Agents in health: $1.11B (2025) -> $6.92B (2030)
- AI in medical imaging: $22.97B by 2035
- AI in mental health: $64B (2026) -> $115B (2034)
Why AI companies want your medical data
Your health data is valuable because:
- It's unique: Nobody else has your complete history
- It's longitudinal: Decades of information
- It predicts behavior: Health -> expenses -> decisions
- It creates lock-in: Once you upload everything, it's hard to migrate
The Trust Problem
Adoption numbers are impressive, but trust doesn't follow.
What surveys say
| Metric | Percentage |
|---|---|
| Patients who DON'T trust AI in health | 75% |
| Trust their provider to use AI correctly | 61% |
| Don't know if their doctor uses AI | 80% |
| Admit limited understanding of AI | 43% |
| Worried AI will reduce time with doctors | 63% |
| Worried about data security | 63% |
There's a clear disconnect: 230 million use ChatGPT for health, but 75% don't trust AI in health.
The likely explanation: people use ChatGPT like an advanced Google, not as a doctor. They seek information, not diagnoses.
When they DO trust
- 64% would trust an AI diagnosis over a human one in simple cases
- More optimism when AI detects early cancer
- Exposure to AI examples increases trust
The doctors' perspective
- 72% believe AI can improve diagnostic capability
- 62% believe it can improve clinical outcomes
- 59% believe it can strengthen care coordination
Doctors are more optimistic than patients. Perhaps because they see the potential, or perhaps because it saves them administrative work.
Real Risks: When AI Gets It Wrong
The hallucination problem
Language models "hallucinate": they generate information that sounds correct but is invented.
A doctor who analyzed ChatGPT Health said:
"What worries me isn't the model itself; it's how easily it can be over-trusted. It sounds confident, even when it's wrong. In medicine, that's dangerous."
Real case: Washington Post
A Washington Post journalist connected 29 million steps and 6 million heart rate measurements to ChatGPT Health.
The result: "It drew questionable conclusions that changed every time I asked."
Legal liability
If ChatGPT gives you incorrect health advice, you can't sue OpenAI the same way you'd sue a doctor.
- AI cannot legally diagnose (requires medical license)
- Disclaimers protect the companies
- The user assumes the risk
As a health law specialist puts it:
"LLMs belong before and after the point of care, not autonomously within it."
Our Experience Testing Both Platforms
After testing ChatGPT Health and Claude for Healthcare for two weeks, here's our honest assessment.
What worked well
ChatGPT Health:
- Lab explanations in simple language (excellent)
- Apple Health integration (smooth)
- Appointment question preparation (useful)
Claude Healthcare:
- Integrated PubMed search (very useful for researching conditions)
- Medical history summaries (accurate)
- Clearer privacy policy
What concerned us
ChatGPT Health:
- Sometimes overconfident in interpretations
- Doesn't always distinguish between "possible" and "probable"
- Wearable integration still buggy
Claude Healthcare:
- More complicated setup than ChatGPT
- Fewer consumer app integrations
- Higher base price ($20 vs free)
The practical verdict
For general health information: ChatGPT Health is sufficient. It's free, easy to use, and useful for understanding medical terms.
For serious research: Claude with access to PubMed and ClinicalTrials.gov is superior.
For neither tool: diagnosing conditions, deciding treatments, or replacing medical visits.
FAQs: Frequently Asked Questions
Is it safe to share my medical history with ChatGPT?
Technically yes, but with reservations. OpenAI promises not to use your health data to train models and to keep it encrypted. However, your data loses HIPAA protection once you share it, meaning you don't have the same legal rights as with your hospital. If you value maximum privacy, consider not sharing sensitive information or using self-hosted alternatives like MedGemma.
Can ChatGPT Health replace my doctor?
No, and neither OpenAI nor Anthropic claims it can. These systems are designed to help you understand medical information, prepare appointments, and detect patterns—not to diagnose or prescribe treatments. AI doesn't have a medical license, can't physically examine you, and can't access your complete life context. Use it as a complement, not a substitute.
Which is more private, ChatGPT Health or Claude Healthcare?
Claude Healthcare has slightly stricter policies, including the requirement for human supervision for clinical decisions. However, neither platform is protected by HIPAA in its consumer version. The practical difference is minimal. If privacy is your absolute priority, self-hosted MedGemma is the only option that keeps your data completely under your control.
How much does it cost to use these tools?
ChatGPT Health is free with ads, or from $8/month (Go) to $200/month (Pro). Claude Healthcare requires at least the Pro plan ($20/month). MedGemma is completely free but requires technical knowledge to implement. For the average user who just wants to understand lab results, the free version of ChatGPT is sufficient.
What happens if the AI gives me incorrect health advice?
Legally, there's little you can do. Both platforms' terms of service include disclaimers that protect them from liability for incorrect advice. AI doesn't legally practice medicine, so you can't sue it for malpractice. If you follow AI advice and suffer harm, the responsibility falls on you for not consulting a professional. Always verify critical information with a doctor.
Conclusion: The Future of Health Is at Stake
January 2026 marks a turning point in the history of medicine.
For the first time, the world's most powerful AI companies are competing directly for your medical data. And while the promises are tempting, the unanswered questions are troubling.
What we know
- 230 million people already ask ChatGPT about health weekly
- The healthcare AI market will be worth $505 billion in 2033
- Top-tier hospitals already use these tools
- The efficiency is real: doctors get answers 61% faster
What we don't know
- What happens if there's a massive medical data breach
- How it will affect the doctor-patient relationship long-term
- Whether the 75% distrust will change
- Who's responsible when AI gets it wrong
Our recommendation
Use these tools intelligently:
- To understand medical terms - Excellent
- To prepare appointments - Very useful
- For second opinions - With caution
- For diagnosis - Never
- For treatment - Never ever
AI in health is a tool, not a doctor. Treat it like you treat Google: useful for research, dangerous for self-diagnosis.
The healthcare AI war has just begun. You decide if you participate. Just make sure you understand the rules of the game before sharing your most intimate data.