EISBERG
vs Lock-In

Open isn't the same as portable.

Every vendor calls itself open in 2026. The difference is what they actually let you take with you. We measure portability by what survives a switch — not by what's in the marketing FAQ.

Same manifest, three clouds.

One deployment description. Apply it to an AWS cluster, an Azure cluster, a GCP cluster, or a neo-cloud cluster. The compute, the catalog, the policy plane, the agent runtime — all identical. No cloud-specific application code. Switching clouds is a Helm value, not a rewrite.
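What "a Helm value, not a rewrite" could look like in practice. A toy sketch: the chart name, value keys, bucket URIs, and key identifiers below are hypothetical, not Eisberg's actual chart schema.

```python
# Toy illustration: per-cloud overrides for one shared Helm chart.
# All keys and values are made up, not Eisberg's real chart schema.
CLOUD_VALUES = {
    "aws":   {"storage.uri": "s3://acme-lake", "kms.key": "alias/acme-lake"},
    "azure": {"storage.uri": "abfss://lake@acme.dfs.core.windows.net",
              "kms.key": "acme-kv/lake-key"},
    "gcp":   {"storage.uri": "gs://acme-lake", "kms.key": "acme-keyring/lake-key"},
}

def helm_upgrade_command(cloud: str, release: str = "eisberg") -> str:
    """Render the helm upgrade invocation for one cloud; only values differ."""
    flags = " ".join(f"--set {k}={v}" for k, v in sorted(CLOUD_VALUES[cloud].items()))
    return f"helm upgrade {release} eisberg/platform {flags}"

print(helm_upgrade_command("gcp"))
```

The application manifest never changes; the diff between clouds is confined to the values, which is the whole claim.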

Your data, your storage, your keys.

Tables live in your S3 / Azure Blob / GCS, encrypted with your KMS keys, written in an open table format any compliant engine can read. We are structurally incapable of holding your data hostage because we never hold it.

Open catalog, no proprietary dialect.

We speak the open Iceberg REST catalog spec natively. Snowflake's Open Catalog is real, but usage of it is metered. Databricks publishes Unity Catalog OSS APIs, but the catalog server people actually run in production is a closed fork. We have no proprietary catalog dialect to defend — your metadata is portable on day one.
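Because the catalog speaks the standard Iceberg REST spec, any compliant client attaches with plain configuration. A hedged sketch with hypothetical endpoint and bucket names; the property keys follow the convention REST clients such as pyiceberg use (`type`, `uri`, `warehouse`, `token`), but check your client's documentation for the exact names it expects.

```python
# Hypothetical endpoint and bucket names. Property keys follow the common
# Iceberg REST client convention (e.g. pyiceberg's load_catalog(**props)).
def rest_catalog_props(endpoint: str, warehouse: str, token: str) -> dict:
    """Configuration an Iceberg-REST-compliant client needs to attach.

    No proprietary dialect: equivalent properties work for pyiceberg,
    Trino, Spark, or any other engine that speaks the REST spec.
    """
    return {
        "type": "rest",
        "uri": endpoint,          # standard /v1 REST catalog endpoint
        "warehouse": warehouse,   # customer-owned bucket, not vendor storage
        "token": token,           # bearer auth as defined by the REST spec
    }

props = rest_catalog_props(
    "https://catalog.example.com/v1", "s3://acme-lake/warehouse", "REDACTED"
)
# With pyiceberg this would then be:
#   from pyiceberg.catalog import load_catalog
#   catalog = load_catalog("acme", **props)
#   table = catalog.load_table("sales.orders")
```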

Swappable everything.

Query engine, AI model router, agent runtime, deployment substrate, vector index, lineage emitter — each replaceable behind an interface, none structurally tied to a vendor. The only platform where 'swap in your own X' is a configuration change, not a roadmap conversation.
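The "configuration change, not a roadmap conversation" claim can be pictured as a router that binds each slot to an implementation named in config. A toy sketch with made-up engine names, not Eisberg's actual plugin API:

```python
# Toy engine router: each slot is an interface; config picks the implementation.
# Engine names and the config shape are invented for illustration.
from typing import Callable

ENGINES: dict[str, Callable[[str], str]] = {
    "embedded":    lambda sql: f"[embedded] {sql}",
    "distributed": lambda sql: f"[distributed] {sql}",
    "gpu":         lambda sql: f"[gpu] {sql}",
}

def run(sql: str, config: dict) -> str:
    """Dispatch to whichever engine the config names.

    Swapping engines means editing one config value, not rewriting
    the query or the application that issues it.
    """
    engine = ENGINES[config["query_engine"]]
    return engine(sql)

print(run("SELECT 1", {"query_engine": "embedded"}))
print(run("SELECT 1", {"query_engine": "gpu"}))  # same query, different engine
```

The same shape applies to the other slots (model router, vector index, lineage emitter): a stable interface in front, a config key choosing what sits behind it.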

The honest portability matrix

What 'open' means at each platform.

Seven dimensions a data architect can verify in an hour by reading our docs and the incumbents'. We will not claim a row we cannot demonstrate end-to-end on a customer call.

Same manifest runs on AWS / Azure / GCP
  Eisberg:    Yes — one Helm chart, three clouds
  Snowflake:  No — each cloud is a separate account
  Databricks: No — workspaces are per-cloud

Customer-owned object storage
  Eisberg:    Yes — your S3 / ABFS / GCS, your KMS keys
  Snowflake:  Iceberg Tables in your bucket; Native Tables in theirs
  Databricks: Iceberg via Unity now; Delta legacy in theirs

Open table format with no proprietary extensions
  Eisberg:    Iceberg v3 — every byte readable by any compliant engine
  Snowflake:  Iceberg + their managed catalog (billed)
  Databricks: Iceberg + Delta + Unity (open API, closed server)

Swappable query engine
  Eisberg:    Multi-engine router — embedded, distributed, GPU, MPP all selectable
  Snowflake:  Single engine. Period.
  Databricks: Photon only, in their runtime

Swappable AI model
  Eisberg:    Yes — any provider, any model, any region
  Snowflake:  Cortex models within their perimeter
  Databricks: Mosaic AI models within their workspace

Stateless control plane
  Eisberg:    Yes — we learn patterns, never hold records
  Snowflake:  No — control plane stores metadata + query state
  Databricks: No — control plane stores workspace state

Export tomorrow with one command
  Eisberg:    Iceberg tables already in your bucket — nothing to extract
  Snowflake:  Iceberg tables yes; Native Tables require unload
  Databricks: Unity Iceberg yes; Delta requires conversion
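The export row above can be made concrete: whether "export" is a one-command no-op depends entirely on who owns the bucket the table files sit in. A toy decision function with made-up bucket names:

```python
# Toy sketch: export cost depends on where table files already live.
def export_plan(table_location: str, customer_buckets: set) -> str:
    """Return the migration step for one table.

    Iceberg tables written to customer-owned storage need no extraction:
    the new engine simply reads the same files in place.
    """
    bucket = table_location.split("/", 3)[2]   # "s3://bucket/path" -> "bucket"
    if bucket in customer_buckets:
        return "no-op: repoint your engine at " + table_location
    return "unload: copy data out of vendor-owned storage first"

print(export_plan("s3://acme-lake/warehouse/orders", {"acme-lake"}))
```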
Why incumbents structurally cannot match this

Open-format pricing is a margin problem, not a roadmap problem.

Both major incumbents have moved toward Iceberg under competitive pressure. Neither can credibly match true portability without cannibalizing their own consumption model.

Snowflake

The catalog is the lock.

Iceberg Tables solve format portability but only when you pay for the Snowflake-managed catalog and the Snowflake-priced compute that reads it. The day you bring your own engine, you stop paying the warehouse meter — and the warehouse meter is the business. Genuine open catalog cannibalizes the consumption model that funds the company.

Databricks

The runtime is the lock.

Tabular got acquired, Unity Catalog now reads Iceberg, and the framing is "open lakehouse." But the actually-shipped Unity server people run in production is a closed fork, and the Photon engine that justifies the platform pricing is single-vendor. Genuine portability would mean a customer running Spark plus their own catalog server against their own Iceberg tables — at which point they no longer need a Databricks workspace.

Want the deployment manifest?

The same Helm chart that runs our staging cluster on EKS runs identically on AKS and GKE. We share the chart under NDA on every architecture call — read it before the demo, verify the claim before the contract.