GPU-native warehouses are not a feature. They are an architecture.
2026-04-22 · Eisberg Engineering
Every legacy data warehouse vendor has a GPU strategy now. None of them are GPU-native. The distinction matters more than the marketing lets on.
A CPU-era query planner makes assumptions about parallelism, memory hierarchy, and shuffle cost that are baked into the cost model. You can layer GPU execution on top of that planner — and many vendors have. What you cannot do is fix the planner's assumptions about which queries should run on a GPU at all. That is what produces the benchmark slides where GPU helps on three queries and hurts on the other twenty.
GPU-native means the planner picks the engine. Per query. With a cost model that treats GPU memory bandwidth, kernel launch latency, and host-to-device transfer as first-class signals — not exceptions to a CPU baseline.
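Stripped to a toy, the routing decision looks something like the sketch below. This is a minimal illustration, not any vendor's planner: every name and constant (the QueryProfile fields, the bandwidth and launch-latency figures) is an assumption chosen to show the shape of the decision, namely that GPU cost is modeled from its own signals rather than as a discount on a CPU estimate.

```python
# Minimal sketch of per-query engine selection. All names and constants are
# illustrative assumptions, not a real planner's cost model.
from dataclasses import dataclass

@dataclass
class QueryProfile:
    bytes_scanned: int     # input bytes the query must read
    bytes_to_device: int   # bytes that must cross the host-to-device link
    kernel_launches: int   # estimated number of GPU kernel launches
    cpu_cost_s: float      # the planner's existing CPU-baseline estimate, in seconds

# Hardware signals treated as first-class inputs (assumed figures).
GPU_MEM_BW_BYTES_S = 2.0e12   # ~2 TB/s device memory bandwidth
H2D_BW_BYTES_S     = 5.0e10   # ~50 GB/s host-to-device transfer
KERNEL_LAUNCH_S    = 10e-6    # ~10 microseconds per kernel launch

def gpu_cost_s(q: QueryProfile) -> float:
    """Estimate GPU execution time from bandwidth, transfer, and launch overhead."""
    scan   = q.bytes_scanned / GPU_MEM_BW_BYTES_S
    xfer   = q.bytes_to_device / H2D_BW_BYTES_S
    launch = q.kernel_launches * KERNEL_LAUNCH_S
    return scan + xfer + launch

def pick_engine(q: QueryProfile) -> str:
    """Route each query to whichever engine the cost model expects to be cheaper."""
    return "gpu" if gpu_cost_s(q) < q.cpu_cost_s else "cpu"
```

The point of the toy is the structure: host-to-device transfer and launch overhead are costs in their own right, so a query that scans little but ships a lot stays on the CPU without any hand-tuned allowlist.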
The practical difference shows up on a customer's monthly bill. Bolted-on GPU saves money on hand-picked workloads. GPU-native compounds — the planner gets better at choosing, the customer pays less every month, and the gap to the bolted-on competitors widens with every learned routing decision.
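One way that compounding can work, again as a hedged sketch rather than a description of any shipping system: the planner records the estimate-versus-actual ratio for each engine and query shape, and blends it back into future cost estimates. The class name, the notion of a query "shape" key, and the smoothing factor are all hypothetical.

```python
# Minimal sketch of learned routing: estimates drift toward observed runtimes.
# Names and parameters are illustrative assumptions.
from collections import defaultdict

class RoutingFeedback:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha  # smoothing factor for new observations
        # correction[engine][shape] scales the model's estimate toward reality
        self.correction = defaultdict(lambda: defaultdict(lambda: 1.0))

    def corrected_cost(self, engine: str, shape: str, estimate_s: float) -> float:
        """Apply the learned correction factor to a raw cost-model estimate."""
        return estimate_s * self.correction[engine][shape]

    def observe(self, engine: str, shape: str, estimate_s: float, actual_s: float) -> None:
        """Blend the observed actual/estimate ratio into the correction factor."""
        ratio = actual_s / max(estimate_s, 1e-9)
        old = self.correction[engine][shape]
        self.correction[engine][shape] = (1 - self.alpha) * old + self.alpha * ratio
```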
Within a few cycles, every analytical workload over a hundred gigabytes will run on a GPU somewhere. The question is whether your platform's planner was built knowing that — or whether it is going to keep guessing wrong on twenty out of twenty-three queries.