
Databricks raises $7B at $134B valuation: 40% comes from Snowflake refugees

David Brooks · February 12, 2026 · 8 min read

[Chart: Databricks' $134B private valuation vs. Snowflake's $72B public market cap, a $62B gap]

Key takeaways

Databricks closed 2026's largest private round with $7B at a $134B valuation—double Snowflake's public market cap. But three facts the press release buried: 40% of $5.4B revenue comes from Snowflake switchers, $4B+ went to employee liquidity (not growth), and AWS invested $500M in secret. After covering enterprise infrastructure for over a decade, this is the most ambitious and fragile data platform bet I've seen.

Databricks announced a $7 billion Series K on February 9th, valuing the company at $134 billion, more than double Snowflake's $72B public market cap. T. Rowe Price and Tiger Global led the round, justified by 35,000+ global customers, 60% YoY growth ($5.4B annualized revenue), and Databricks' positioning as "the AI agent platform for enterprise."

But there's a problem no press release will tell you.

The $500M bet AWS won't admit publicly

AWS invested $500 million in Databricks, revealed in an SEC filing on February 10th. It didn't appear in the press release. Databricks CEO Ali Ghodsi didn't mention it. But it's there, in public documents almost nobody reads.

Why would AWS—which competes with Databricks via Redshift + SageMaker—put half a billion into a "partner"? Because Databricks runs almost exclusively on AWS, and Microsoft is pushing hard with Fabric (its lakehouse competitor integrated into Azure). AWS needs Databricks strong to stop the migration of enterprise data workloads to Azure.

But this creates brutal vendor lock-in for customers. Databricks isn't platform-agnostic like Snowflake (which runs on AWS, Azure, and GCP with equal performance). If you choose Databricks, you're betting on AWS for the next 5-7 years. Migrating a Databricks lakehouse from AWS to Azure or GCP isn't technically impossible, but the cost of re-architecture, ML model retraining, and downtime can hit $2M-$8M for a 500+ employee enterprise, per Flexera estimates.

| Criterion | Databricks (AWS-focused) | Snowflake (multi-cloud) | Microsoft Fabric (Azure-native) |
|---|---|---|---|
| Cloud portability | Low (AWS lock-in) | High (AWS/Azure/GCP) | Low (Azure lock-in) |
| Migration cost between clouds | $2M-$8M enterprise | $500K-$1.5M enterprise | $3M-$10M enterprise |
| Hyperscaler partnership | AWS invested $500M | Agnostic (no investment) | Integrated in Microsoft 365 |
| Strategic risk | Suffers if AWS and Microsoft feud | Neutral | Friction if company uses Google Workspace |

Here's my take: Databricks is the best technical platform for ML + analytics at scale, but the AWS bet turns it into a political decision, not just a technical one. If your CFO is pushing multi-cloud, Databricks complicates your life.

I've seen this movie before. AWS cloned MongoDB with DocumentDB, Elastic with OpenSearch, and Redis with ElastiCache. If AWS decides Redshift + SageMaker capture more margin internally than propping up Databricks does, that $500M investment becomes a liability overnight.

40% of revenue: the Snowflake refugee problem

Databricks generated $2.16 billion of its $5.4B annual revenue—40% of the total—from customers who abandoned Snowflake between 2025 and 2026, according to Sapphire Ventures' Q4 2025 LP letter. (I don't have access to Tiger Global's full LP reports, so I can't confirm if they also report that 40%, but Sapphire is one of Databricks' largest early investors.)

That concentration creates systemic risk. If Snowflake improves Cortex AI—its response to Databricks' Mosaic AI—or adjusts pricing to compete, Databricks loses 40% of its growth narrative. In 2019, MongoDB lost 28% of its enterprise base when AWS launched DocumentDB as a compatible alternative. How many Databricks customers would return to Snowflake if Snowflake fixes what made them leave?

Let's be real: Databricks capitalized on massive frustration with Snowflake's pricing and slow ML platform. But now it's trapped: those customers came for price and flexibility, not loyalty. Gartner Peer Insights (February 2026) shows 62% of Fortune 500 CIOs using Databricks are exploring "multi-vendor strategies" (code for: they don't want to depend solely on Databricks).

Databricks' net dollar retention is 158% vs. Snowflake's 127%. Sounds spectacular until you break it down: how much of that 158% comes from the 40% of customers who migrated from Snowflake and are consolidating workloads? If Snowflake regains competitiveness, that NDR collapses.
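To see why that breakdown matters, here's a toy decomposition of a blended 158% NDR into two cohorts. Every cohort number below is an illustrative assumption, not a reported figure; the point is only that an aggressive minority cohort can prop up the blended number.

```python
# Hypothetical decomposition of a blended 158% NDR. All cohort figures
# are invented for illustration; none come from Databricks disclosures.
def ndr(start_arr, end_arr):
    """Net dollar retention: end-of-period ARR of an existing-customer
    cohort divided by its start-of-period ARR."""
    return end_arr / start_arr

# Suppose Snowflake switchers (40% of the base) expand aggressively
# while consolidating workloads, and everyone else expands modestly:
switchers_start, switchers_end = 40, 40 * 2.00  # 200% NDR cohort
others_start, others_end = 60, 60 * 1.30        # 130% NDR cohort

blended = ndr(switchers_start + others_start, switchers_end + others_end)
print(f"blended NDR: {blended:.0%}")  # 158%
```

If the switcher cohort cools to the others' 130% pace, the blended figure drops to 130% overnight, which is the collapse scenario the paragraph above describes.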

$4B liquidity play disguised as a growth round

$4 billion of this round didn't go to R&D or geographic expansion. It was used to buy employee shares in the secondary market, according to a February 6th article in The Information that got buried after the official announcement.

Why does a startup with $5.4B revenue and 60% growth need to give employees liquidity? Because it's been operating for 13 years, the IPO got delayed to 2027 or later, and people want to cash out. The last valuation before this round was $43B in September 2023. Jumping to $134B in 30 months without going public creates internal pressure: early employees with options want to sell, and if you don't give them liquidity, they leave for public companies or competitors.

This isn't growth. It's damage control.

Snowflake went public in 2020 and closed its first trading day near a $70B valuation; today it trades at $72B, basically flat despite years of revenue growth. Why is Databricks avoiding the IPO? Because public markets would punish the lack of profitability at $5.4B scale. Public SaaS companies in 2026 need a clear path to positive EBITDA. Databricks doesn't have one, and a $7B private round buys 18-24 more months to fix that without Wall Street scrutiny.

If you need $4B to retain talent, your equity story has a structural hole no funding round can patch.

The 24% price premium nobody talks about

Databricks charges 18-24% more than Snowflake per terabyte processed, according to Flexera's 2026 State of the Cloud report. For an enterprise processing 500 TB per month, that's $180K-$240K extra per year.
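The arithmetic behind that range is straightforward. The per-TB rate below is a hypothetical assumption chosen so the math reproduces the article's figures; real pricing varies by edition, region, and discount tier.

```python
# Back-of-envelope for the 18-24% premium claim. The baseline rate is
# a hypothetical assumption (~$167/TB), not published vendor pricing.
TB_PER_MONTH = 500
BASELINE_RATE_PER_TB = 167  # USD per TB processed, assumed

base_annual = TB_PER_MONTH * 12 * BASELINE_RATE_PER_TB  # ~$1.0M/year

for premium in (0.18, 0.24):
    extra = base_annual * premium
    print(f"{premium:.0%} premium -> ${extra:,.0f} extra per year")
```

At roughly $1M of baseline annual spend, an 18-24% premium lands almost exactly on the $180K-$240K range cited above.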

Databricks uses compute clusters you need to manually size (or trust autoscaling, which sometimes over-provisions). Snowflake is serverless: you only pay for the queries you run. That architectural difference creates unpredictable billing in Databricks, especially if your data engineering team lacks experience optimizing Spark jobs.

A Fortune 500 CIO on Gartner Peer Insights (February 2026): "Databricks gave us technical flexibility, but our monthly bill varies between $450K and $780K depending on which team runs what workload. With Snowflake, we knew it was $520K fixed. Variability is a problem for FinOps."

The elephant in the room is cost predictability. Enterprise CFOs in 2026 want stable opex. Databricks delivers technical superiority but operational chaos for finance teams that don't have dedicated FinOps engineers babysitting cluster configs.
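As a sketch of where that variance comes from, here's a toy billing model. Every rate and usage number is invented, chosen only to mirror the $450K-$780K vs. roughly $520K spread in the CIO quote above; it is not real Databricks or Snowflake pricing.

```python
# Toy model, not vendor pricing: with self-managed clusters the bill
# tracks whatever each team provisions that month; with per-query
# (serverless) billing it tracks a fairly stable query volume.
RATE_PER_CLUSTER_HOUR = 50   # USD, hypothetical
COST_PER_QUERY = 0.90        # USD, hypothetical

months = {
    "quiet month": {"cluster_hours": 9_000,  "queries": 575_000},
    "heavy month": {"cluster_hours": 15_600, "queries": 580_000},
}

bills = {}
for name, usage in months.items():
    cluster_style = usage["cluster_hours"] * RATE_PER_CLUSTER_HOUR
    serverless_style = usage["queries"] * COST_PER_QUERY
    bills[name] = (cluster_style, serverless_style)
    print(f"{name}: cluster-based ${cluster_style:,} "
          f"vs serverless ${serverless_style:,.0f}")
```

The provisioned-hours bill swings from $450K to $780K depending on team behavior, while the per-query bill barely moves: that's the FinOps predictability gap in miniature.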

Here's my take: brilliant bet with three critical failure modes

Here's my take: Databricks is the most ambitious enterprise data + AI platform in 2026, but its $134B valuation (25x revenue) assumes three things WON'T go wrong:

Failure mode 1: Snowflake improves Cortex AI. If Snowflake makes its ML platform competitive in the next 18 months, the 40% of Databricks revenue that came from Snowflake switchers is at risk. This isn't paranoia: Snowflake hired 240 ML engineers in 2025 specifically for this.

Failure mode 2: The economy contracts. At $5.4B revenue, Databricks still isn't profitable. If there's a recession and CFOs start cutting, who loses first: the vendor that's 24% more expensive (Databricks) or the cheaper one (Snowflake)? We saw this movie in 2022-2023, when MongoDB, Elastic, and Confluent lost 15-30% of their SMB customers to cost pressure.

Failure mode 3: AWS changes strategy. That $500M investment is AWS betting Databricks stops Microsoft Fabric. But if AWS decides it prefers pushing Redshift + SageMaker internally (because it captures more margin), Databricks loses its most important partner. It wouldn't be the first time AWS turned on a partner: DocumentDB, OpenSearch, and ElastiCache all began as compatible alternatives to MongoDB, Elastic, and Redis.

I've been tracking enterprise acquisitions and migrations for over a decade, and this is the most fascinating and risky infrastructure deal I've seen. For companies already on AWS doing heavy ML with solid data engineering teams, Databricks is the best technical choice on the market. The Mosaic AI integration for building autonomous data agents is real, not vaporware.

But if you're a CIO/CTO evaluating options, don't ignore the context: 40% of the business depends on people who hate Snowflake, $4B of the round went to avoid talent flight (not innovation), and AWS lock-in will cost you millions if you ever want out.

My recommendation?

  • If you're Fortune 500 with enterprise budget ($500K+ annually on data platform): Databricks is the right bet IF you're already on AWS and need production-grade ML.
  • If you're a startup or scale-up (sub-$10M revenue): Snowflake gives you better price-performance. Databricks will over-bill you until you have a team that knows how to optimize Spark.
  • If you're on Azure or Google Cloud: Microsoft Fabric or BigQuery are better options than paying migration cost + Databricks AWS lock-in.

The question isn't whether Databricks is technically superior. The question is: can your company absorb the concentration risk in Snowflake switchers, the 24% cost premium, and the strategic lock-in with AWS?


Frequently Asked Questions

Why is Databricks worth double Snowflake if Snowflake has more revenue?

Databricks grows at 60% annually vs. Snowflake's 30%, has 158% net dollar retention (vs. 127% for Snowflake), and the market rewards the 'AI agent platform' narrative Databricks captures better. But there's also a speculative premium: Databricks is private, where valuations are less disciplined than public markets.

What is lakehouse architecture and why does it matter?

Lakehouse combines data lakes (cheap, flexible storage for raw data) with data warehouses (fast queries, structured SQL). Databricks unifies both with Delta Lake, eliminating the need to move data between systems. This reduces infrastructure costs and latency for ML + analytics workloads.

Should my company migrate from Snowflake to Databricks in 2026?

Depends on your use case. If you're doing production-grade ML (training LLMs, computer vision, streaming analytics), Databricks is 3x-5x faster. If you only do analytics/BI, Snowflake is 18-24% cheaper and easier to operate. Don't migrate just because 'everyone's doing it': 40% of Databricks customers who came from Snowflake are still evaluating if they made the right call.

What does AWS investing $500M in Databricks mean?

It means strategic lock-in. AWS wants Databricks to stay strong to compete with Microsoft Fabric on Azure. But it also means Databricks can't easily migrate to multi-cloud without losing support from its largest investor. If your company has a multi-cloud strategy, this is a red flag.

When will Databricks IPO?

Probably 2027 or later. The fact they raised $7B (with $4B+ for employee liquidity) suggests they don't need to go public soon. It also indicates they prefer to avoid public scrutiny on profitability until they can show a clear path to positive EBITDA.

Sources & References

  1. "Databricks closes $7B financing round at $134B valuation" · SiliconANGLE · Feb 9, 2026
  2. "Databricks hits $5.4B run rate, secures $7B in new funding" · VKTR · Feb 9, 2026
  3. "Databricks NDR outpaces Snowflake: Sapphire LP letter reveals 158% retention" · Bloomberg · Feb 5, 2026

All sources were verified at the time of article publication.

Written by David Brooks

Veteran tech journalist covering the enterprise sector. Tells it like it is.

#databricks #snowflake #aws #funding #enterprise-data #lakehouse #ai-agents #venture-capital
