
The Case for an Iceberg-Native Database: Why Spark Jobs and Zero-Copy Kafka Won’t Cut It

Richard Artoul
Co-Founder
September 17, 2025
HN Disclosure: WarpStream sells a drop-in replacement for Apache Kafka built directly on-top of object storage.

TLDR;

We launched a new product called WarpStream Tableflow that is the easiest, cheapest, and most flexible way to convert Kafka topic data into Iceberg tables with low latency, and keep them compacted. If you’re already familiar with the challenges of converting Kafka topics into Iceberg tables, feel free to skip ahead to our solution in the “What if we had a magic box?” section.


Apache Iceberg and Delta Lake are table formats that provide the illusion of a traditional database table on top of object storage, including schema evolution, concurrency control, and partitioning that is transparent to the user. These table formats allow many open-source and proprietary query engines and data warehouse systems to operate on the same underlying data, which prevents vendor lock-in and lets you use best-of-breed tools for different workloads without making additional copies of the data, copies that are expensive and hard to govern.

Table formats are really cool, but they're just that, formats. Something or someone has to actually build and maintain them. As a result, one of the most debated topics in the data infrastructure space right now is the best way to build Iceberg and Delta Lake tables from real-time data stored in Kafka.

The Problem With Spark

The canonical solution to this problem is to use Spark batch jobs.

This is how things have been done historically, and it’s not a terrible solution, but there are a few problems with it:

  1. You have to write a lot of finicky code to do the transformation, handle schema migrations, etc.
  2. Latency between data landing in Kafka and the Iceberg table being updated is very high. If compaction is not enabled (more on that shortly), the batch job can only run infrequently, so the table usually lags by hours or even days. This is annoying if we’ve already gone through all the effort of setting up real-time infrastructure like Kafka.
  3. Apache Spark is an incredibly powerful, but complex piece of technology. For companies that are already heavy users of Spark, this is not a problem, but for companies that just want to land some events into a data lake, learning to scale, tune, and manage Spark is a huge undertaking.
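To make problem 1 a bit more concrete, here is a rough sketch of what the canonical batch job looks like in PySpark. Everything here is hypothetical and simplified: the "lake" catalog, topic, table, and schema are made-up names, and a real pipeline would also need offset tracking, schema registry integration, error handling, and more.

# Minimal sketch of a scheduled Kafka -> Iceberg batch job (hypothetical names throughout).
# Assumes the Kafka and Iceberg Spark connector jars are on the classpath and that the
# target Iceberg table already exists.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = (
    SparkSession.builder.appName("kafka-to-iceberg-batch")
    # Assumes an Iceberg catalog named "lake" is configured for this Spark session.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .getOrCreate()
)

schema = StructType([
    StructField("environment", StringType()),
    StructField("service", StringType()),
    StructField("status", StringType()),
    StructField("message", StringType()),
])

# Batch read of a fixed offset range from Kafka. Tracking which offsets each run has
# already ingested (so you don't skip or duplicate data) is left entirely to you.
raw = (
    spark.read.format("kafka")
    .option("kafka.bootstrap.servers", "your-kafka-brokers.example.com:9092")
    .option("subscribe", "example_json_logs_topic")
    .option("startingOffsets", "earliest")
    .option("endingOffsets", "latest")
    .load()
)

events = (
    raw.select(from_json(col("value").cast("string"), schema).alias("event"))
    .select("event.*")
)

# Each run appends new parquet files and creates a new Iceberg snapshot.
events.writeTo("lake.analytics.example_logs").append()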

Problems 1 and 3 can’t be solved with Spark, but we might be able to solve problem 2 (table update delay) by using Spark Streaming and micro-batch processing:
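Here is a minimal sketch of that idea, reusing the hypothetical names and the spark session and schema variables from the batch sketch above:

# Same pipeline, but as a Structured Streaming micro-batch job that commits to the
# Iceberg table once per trigger interval. The checkpoint path is a made-up placeholder.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "your-kafka-brokers.example.com:9092")
    .option("subscribe", "example_json_logs_topic")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("event"))
    .select("event.*")
)

query = (
    stream.writeStream.format("iceberg")
    .outputMode("append")
    .trigger(processingTime="1 minute")  # one Iceberg commit (and a new batch of files) per minute
    .option("checkpointLocation", "s3://your-bucket/checkpoints/example_logs")
    .toTable("lake.analytics.example_logs")
)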

Well, not quite. It’s true that if you use Spark Streaming to run smaller micro-batch jobs, your Iceberg table will be updated much more frequently. However, now you have two new problems in addition to the ones you already had:

  1. Small file problem
  2. Single writer problem

Anyone who has ever built a data lake is familiar with the small files problem: the more often you write to the data lake, the faster it will accumulate files, and the longer your queries will take until eventually they become so expensive and slow that they stop working altogether.

That’s ok though, because there is a well known solution: more Spark!

We can create a new Spark batch job that periodically runs compactions that take all of the small files that were created by the Spark Streaming job and merge them together into bigger files:
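Iceberg’s Spark runtime exposes compaction as a stored procedure, so the compaction job is, roughly, a scheduled call like this (same hypothetical catalog, table, and spark session as before; the target file size is just an illustrative value):

# Rewrites many small data files into fewer, larger ones.
spark.sql("""
    CALL lake.system.rewrite_data_files(
        table => 'analytics.example_logs',
        options => map('target-file-size-bytes', '536870912')  -- aim for ~512 MiB files
    )
""")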

The compaction job solves the small file problem, but it introduces a new one. Iceberg tables suffer from an issue known as the “single writer problem” which is that only one process can mutate the table concurrently. If two processes try to mutate the table at the same time, one of them will fail and have to redo a bunch of work [1].

This means that your ingestion process and compaction processes are racing with each other, and if either of them runs too frequently relative to the other, the conflict rate will spike and the overall throughput of the system will come crashing down.
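To see why, here is a purely conceptual sketch (this is not Iceberg’s actual API) of the optimistic commit loop that both the ingestion job and the compaction job are effectively running. Every commit is a compare-and-swap on the table’s current metadata pointer:

# Conceptual pseudocode only: table, current_metadata, swap_metadata, and build_changes
# are illustrative stand-ins, not real Iceberg classes or methods.
def commit(table, build_changes):
    while True:
        base = table.current_metadata()       # the snapshot we believe is current
        proposed = build_changes(base)        # plan new data files / manifests against it
        if table.swap_metadata(expected=base, new=proposed):
            return                            # compare-and-swap succeeded; we are the new snapshot
        # Someone else committed first. All of the planning work done against `base` is
        # wasted and must be redone against the new table state. The more often ingestion
        # and compaction both commit, the more often this branch is taken.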

Of course, there is a solution to this problem: run compaction infrequently (say once a day), and with coarse granularity. That works, but it introduces two new problems: 

  1. If compaction only runs once every 24 hours, the query latency at hour 23 will be significantly worse than at hour 1.
  2. The compaction job needs to process all of the data that was ingested in the last 24 hours in a short period of time. For example, if you want to bound your compaction job’s run time at 1 hour, then it will require ~24x as much compute for that one hour period as your entire ingestion workload [2]. Provisioning 24x as much compute once a day is feasible in modern cloud environments, but it’s also extremely difficult and annoying.
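The back-of-the-envelope math behind that ~24x figure, with purely illustrative numbers:

# Illustrative arithmetic only; real workloads vary (see footnote 2).
ingest_rate_gib_per_s = 1.0                           # steady-state ingestion throughput
data_per_day_gib = ingest_rate_gib_per_s * 86_400     # one day's worth of ingested data
compaction_window_s = 3_600                           # the daily compaction must finish in 1 hour

required_rate = data_per_day_gib / compaction_window_s
print(required_rate / ingest_rate_gib_per_s)          # -> 24.0x the ingestion throughput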

Exhausted yet? Well, we’re still not done. Every Iceberg table modification results in a new snapshot being created. Over time, these snapshots will accumulate (costing you money) and eventually the metadata JSON file will get so large that the table becomes unqueryable. So in addition to compaction, you need another periodic background job to prune old snapshots.
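Iceberg ships a Spark procedure for that job too; a sketch with the same hypothetical table and made-up retention values:

# Expires old snapshots and deletes the data files that only they reference.
spark.sql("""
    CALL lake.system.expire_snapshots(
        table => 'analytics.example_logs',
        older_than => TIMESTAMP '2025-09-10 00:00:00',  -- made-up cutoff
        retain_last => 10
    )
""")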

Also, sometimes your ingestion or compaction jobs will fail, and you’ll have orphan parquet files stuck in your object storage bucket that don’t belong to any snapshot. So you’ll need yet another periodic background job to scan the bucket for orphan files and delete them.
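That one is, again, a Spark procedure, sketched here with a hypothetical table and cutoff:

# Deletes files under the table's location that no snapshot references. The older_than
# guard avoids deleting files from writes that are still in flight rather than failed.
spark.sql("""
    CALL lake.system.remove_orphan_files(
        table => 'analytics.example_logs',
        older_than => TIMESTAMP '2025-09-14 00:00:00'
    )
""")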

It feels like we’re playing a never-ending game of whack-a-mole where every time we try to solve one problem, we end up introducing two more. Well, there’s a reason for that: the Iceberg and Delta Lake specifications are just that, specifications. They are not implementations. 

Imagine I gave you the specification for how PostgreSQL lays out its B-trees on disk and some libraries that could manipulate those B-trees. Would you feel confident building and deploying a PostgreSQL-compatible database to power your company’s most critical applications? Probably not, because you’d still have to figure out: concurrency control, connection pool management, transactions, isolation levels, locking, MVCC, schema modifications, and the million other things that a modern transactional database does besides just arranging bits on disk.

The same analogy applies to data lakes. Spark provides a small toolkit for manipulating parquet and Iceberg manifest files, but what users actually want is 50% of the functionality of a modern data warehouse. The gap between what Spark actually provides out of the box, and what users need to be successful, is a chasm.

When we look at things through this lens, it’s no longer surprising that all of this is so hard. Saying: “I’m going to use Spark to create a modern data lake for my company” is practically equivalent to announcing: “I’m going to create a bespoke database for every single one of my company’s data pipelines”. No one would ever expect that to be easy. Databases are hard.

Most people want nothing to do with managing any of this infrastructure. They just want to be able to emit events from one application and have those events show up in their Iceberg tables within a reasonable amount of time. That’s it.

It’s a simple enough problem statement, but the unfortunate reality is that solving it to a satisfactory degree requires building and running half of the functionality of a modern database.

It’s no small undertaking! I would know. My co-founder and I (along with some other folks at WarpStream) have done all of this before.

Can I Just Use Kafka Please?

TLDR; If you don’t care about buzzwords like “Iceberg topics”, “Zero Copy Iceberg”, and “Diskless Kafka” feel free to skip this section and jump directly to the “What if we had a magic box?” section.

Hopefully by now you can see why people have been looking for a better solution to this problem. Many different approaches have been tried, but one that has been gaining traction recently is to have Kafka itself (and its various different protocol-compatible implementations) build the Iceberg tables for you.

The thought process goes like this: Kafka (and many other Kafka-compatible implementations) already have tiered storage for historical topic data. Once records / log segments are old enough, Kafka can tier them off to object storage to reduce disk usage and costs for data that is infrequently consumed.

Why not “just” have the tiered log segments be parquet files instead, then add a little metadata magic on top and voila, we now have a “zero-copy” streaming data lake where we only have to maintain one copy of the data to serve both Kafka consumers and Iceberg queries, and we didn’t even have to learn anything about Spark!

Problem solved, we can all just switch to a Kafka implementation that supports this feature, modify a few topic configs, and rest easy that our colleagues will be able to derive insights from our real time Iceberg tables using the query engine of their choice.

Of course, that’s not actually true in practice. This is the WarpStream blog after all, so dedicated readers will know that the last 4 paragraphs were just an elaborate axe sharpening exercise for my real point which is this: none of this works, and it will never work.

I know what you’re thinking: “Richie, you say everything doesn’t work. Didn’t you write like a 10 page rant about how tiered storage in Kafka doesn’t work?”. Yes, I did.

I will admit, I am extremely biased against tiered storage in Kafka. It’s an idea that sounds great on paper, but falls flat on its face in most practical implementations. Maybe I am a little jaded because a non-trivial percentage of all migrations to WarpStream get (temporarily) stalled at some point when the customer tries to actually copy the historical data out of their Kafka cluster into WarpStream and loading the historical data from tiered storage degrades their Kafka cluster.

But that’s exactly my point: I have seen tiered storage fail at serving historical reads in the real world, time and time again.

I won’t repeat the (numerous) problems associated with tiered storage in OSS Kafka and most vendor implementations in this blog post, but I will (predictably) point out that changing the tiered storage format fixes none of those problems, makes some of them worse, and results in a sub-par Iceberg experience to boot.

Iceberg Makes Existing (already bad) Tiered Storage Implementations Worse

Let’s start with how the Iceberg format makes existing tiered storage implementations, which already perform poorly, perform even worse. First off, generating parquet files is expensive. Like really expensive. Compared to copying a log segment from the local disk to object storage, it uses at least an order of magnitude more CPU cycles and significant amounts of memory.

That would be fine if this operation were running on a random stateless compute node, but it’s not, it’s running on one of the incredibly important Kafka brokers that is the leader for some of the topic-partitions in your cluster. This is the worst possible place to perform computationally expensive operations like generating parquet files.

To make matters worse, loading the tiered data from object storage to serve historical Kafka consumers (the primary performance issue with tiered storage) becomes even more operationally difficult and expensive because now the Parquet files have to be decoded and converted back into the Kafka record batch format, once again, in the worst possible place to perform computationally expensive operations: the Kafka broker responsible for serving the producers and consumers that power your real-time workloads.

This approach works in prototypes and technical demos, but it will become an operational and performance nightmare for anyone who tries to take this approach into production at any kind of meaningful scale. Or you’ll just have to massively overprovision your Kafka cluster, which essentially amounts to throwing an incredible amount of money at the problem and hoping for the best.

Tiered Storage Makes Sad Iceberg Tables

Let’s say you don’t believe me about the performance issues with tiered storage. That’s fine, because it doesn’t really matter anyways. The point of using Iceberg as the tiered storage format for Apache Kafka would be to generate a real-time Iceberg table that can be used for something. Unfortunately, tiered storage doesn't give you Iceberg tables that are actually useful.

If the Iceberg table is generated by Kafka’s tiered storage system then the partitioning of the Iceberg table has to match the partitioning of the Kafka topic. This is extremely annoying for all of the obvious reasons. Your Kafka partitioning strategy is selected for operational use-cases, but your Iceberg partitioning strategy should be selected for analytical use-cases.

There is a natural impedance mismatch here that will constantly get in your way. Optimal query performance is always going to come from partitioning and sorting your data to get the best pruning of files on the Iceberg side, but this is impossible if the same set of files must also be capable of serving as tiered storage for Kafka consumers as well.
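To make the mismatch concrete, here is a hypothetical analytics-friendly table definition, reusing the spark session from the earlier sketches. It is exactly the kind of layout a “zero-copy” tiered storage table can’t have, because its files must stay grouped by Kafka topic-partition and offset order to keep serving Kafka consumers:

# Hypothetical Iceberg DDL: partitioned for query pruning, not for Kafka consumers.
spark.sql("""
    CREATE TABLE lake.analytics.example_events (
        event_id  string,
        user_id   bigint,
        service   string,
        event_ts  timestamp
    )
    USING iceberg
    PARTITIONED BY (days(event_ts), service)  -- prune by time range and service at query time
""")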

There is an obvious way to solve this problem: store two copies of the tiered data, one for serving Kafka consumers, and the other optimized for Iceberg queries. This is a great idea, and it’s how every modern data system that is capable of serving both operational and analytic workloads at scale is designed.

But if you’re going to store two different copies of the data, there’s no point in conflating the two use-cases at all. The only benefit you get is perceived convenience, but you will pay for it dearly down the line in unending operational and performance problems.

In summary, the idea of a “zero-copy” Iceberg implementation running inside of production Kafka clusters is a pipe dream. It would be much better to just let Kafka be Kafka and Iceberg be Iceberg.

I’m Not Even Going to Talk About Compaction

Remember the small file problem from the Spark section? Unfortunately, the small file problem doesn’t just magically disappear if we shove parquet file generation into our Kafka brokers. We still need to perform table maintenance and file compaction to keep the tables queryable.

This is a hard problem to solve in Spark, but it’s an even harder problem to solve when the maintenance and compaction work has to be performed on the same nodes powering your Kafka cluster. The reason for that is simple: Spark is a stateless compute layer that can be spun up and down at will; your Kafka brokers are not.

When you need to run your daily major compaction session on your Iceberg table with Spark, you can literally cobble together a Spark cluster on-demand from whatever mixed-bag, spare-part virtual machines happen to be lying around your multi-tenant Kubernetes cluster at the moment. You can even use spot instances, it’s all stateless, it just doesn’t matter!

The VMs powering your Spark cluster. Probably.

No matter how much compaction you need to run, or how compute intensive it is, or how long it takes, it will never in a million years impair the performance or availability of your real-time Kafka workloads.

Contrast that with your pristine Kafka cluster that has been carefully provisioned to run on high end VMs with tons of spare RAM and expensive SSDs/EBS volumes. Resizing the cluster takes hours, maybe even days. If the cluster goes down, you immediately start incurring data loss in your business. THAT’S where you want to spend precious CPU cycles and RAM smashing Parquet files together!?

It just doesn’t make any sense.

What About Diskless Kafka Implementations?

“Diskless” Kafka implementations like WarpStream are in a slightly better position to just build the Iceberg functionality directly into the Kafka brokers because they separate storage from compute which makes the compute itself more fungible.

However, I still think this is a bad idea, primarily because building and compacting Iceberg files is an incredibly expensive operation compared to just shuffling bytes around like Kafka normally does. In addition, the cost and memory required to build and maintain Iceberg tables is highly variable with the schema itself. A small schema change to add a few extra columns to the Iceberg table could easily result in the load on your Kafka cluster increasing by more than 10x. That would be disastrous if that Kafka cluster, diskless or not, is being used to serve live production traffic for critical applications.

Finally, all of the existing Kafka implementations that do support this functionality inevitably end up tying the partitioning of the Iceberg tables to the partitioning of the Kafka topics themselves, which results in sad Iceberg tables as we described earlier. Either that, or they leave out the issue of table maintenance and compaction altogether.

A Better Way: What If We Just Had a Magic Box?

Look, I get it. Creating Iceberg tables with any kind of reasonable latency guarantees is really hard and annoying. Tiered storage and diskless architectures like WarpStream and Freight are all the rage in the Kafka ecosystem right now. If Kafka is already moving towards storing its data in object storage anyways, can’t we all just play nice, massage the log segments into parquet files somehow (waves hands), and just live happily ever after?

I get it, I really do. The idea is obvious, irresistible even. We all crave simplicity in our systems. That’s why this idea has taken root so quickly in the community, and why so many vendors have rushed poorly conceived implementations out the door. But as I explained in the previous section, it’s a bad idea, and there is a much better way.

What if instead of all of this tiered storage insanity, we had, and please bear with me for a moment, a magic box.

Behold, the humble magic box.

Instead of looking inside the magic box, let's first talk about what the magic box does. The magic box knows how to do only one thing: it reads from Kafka, builds Iceberg tables, and keeps them compacted. Ok that’s three things, but I fit them into a short sentence so it still counts.

That’s all this box does and ever strives to do. If we had a magic box like this, then all of our Kafka and Iceberg problems would be solved because we could just do this:

And life would be beautiful.

Again, I know what you’re thinking: “It’s Spark, isn’t it? You put Spark in the box!?”

What's in the box?!

That would be one way to do it. You could write an elaborate set of Spark programs that all interacted with each other to integrate with schema registries, carefully handle schema migrations, DLQ invalid records, handle upserts, solve the concurrent writer problem, gracefully schedule incremental compactions, and even auto-scale to boot.

And it would work.

But it would not be a magic box.

It would be Spark in a box, and Spark’s sharp edges would always find a way to poke holes in our beautiful box.

I promise you wouldn't like the contents of this box.

That wouldn’t be a problem if you were building this box to run as a SaaS service in a pristine environment operated by the experts who built the box. But that’s not a box that you would ever want to deploy and run yourself.

Spark is a garage full of tools. You can carefully arrange the tools in a garage into an elaborate Rube Goldberg machine that, with sufficient and frequent human intervention, periodically spits out widgets of varying quality.

But that’s not what we need. What we need is an Iceberg assembly line. A coherent, custom-built, well-oiled machine that does nothing but make Iceberg, day in and day out, with ruthless efficiency and without human supervision or intervention. Kafka goes in, Iceberg comes out.

THAT would be a magic box that you could deploy into your own environment and run yourself.

It’s a matter of packaging.

We Built the Magic Box (kind of)

You’re on the WarpStream blog, so this is the part where I tell you that we built the magic box. It’s called Tableflow, and it’s not a new idea. In fact, Confluent Cloud users have been able to enjoy Tableflow as a fully managed service for over 6 months now, and they love it. It’s cost effective, efficient, and tightly integrated with Confluent Cloud’s entire ecosystem, including Flink.

However, there’s one problem with Confluent Cloud Tableflow: it’s a fully managed service that runs in Confluent Cloud, and therefore it doesn’t work with WarpStream’s BYOC deployment model. We realized that we needed a BYOC version of Tableflow, so that all of Confluent’s WarpStream users could get the same benefits of Tableflow, but in their own cloud account with a BYOC deployment model.

So that’s what we built!

WarpStream Tableflow (henceforth referred to as just Tableflow in this blog post) is to Iceberg-generating Spark pipelines what WarpStream is to Apache Kafka.

It’s a magic, auto-scaling, completely stateless, single-binary database that runs in your environment, connects to your Kafka cluster (whether it’s Apache Kafka, WarpStream, AWS MSK, Confluent Platform, or any other Kafka-compatible implementation), and manufactures Iceberg tables to your exacting specification using a declarative YAML configuration:

source_clusters:
  - name: "benchmark"
    credentials:
      sasl_username_env: "YOUR_SASL_USERNAME"
      sasl_password_env: "YOUR_SASL_PASSWORD"
    bootstrap_brokers:
      - hostname: "your-kafka-brokers.example.com"
        port: 9092

tables:
   - source_cluster_name: "benchmark"
     source_topic: "example_json_logs_topic"
     source_format: "json"
     schema_mode: "inline"
     schema:
       fields:
         - { name: environment, type: string, id: 1}
         - { name: service, type: string, id: 2}
         - { name: status, type: string, id: 3}
         - { name: message, type: string, id: 4}
   - source_cluster_name: "benchmark"          
     source_topic: "example_avro_events_topic"
     source_format: "avro"
     schema_mode: "inline"
     schema:
       fields:
           - { name: event_id, id: 1, type: string }
           - { name: user_id, id: 2, type: long }
           - { name: session_id, id: 3, type: string }
           - name: profile
             id: 4
             type: struct
             fields:
               - { name: country, id: 5, type: string }
               - { name: language, id: 6, type: string }

Tableflow automates all of the annoying parts about generating and maintaining Iceberg tables:

  1. It auto-scales.
  2. It integrates with schema registries or lets you declare the schemas inline.
  3. It has a DLQ.
  4. It handles upserts.
  5. It enforces retention policies.
  6. It can perform stateless transformations as records are ingested.
  7. It keeps the table compacted, and it does so continuously and incrementally without having to run a giant major compaction at regular intervals.
  8. It cleans up old snapshots automatically.
  9. It detects and cleans up orphaned files that were created as part of failed inserts or compactions.
  10. It can ingest data at massive rates (GiBs/s) while also maintaining strict (and configurable) freshness guarantees.
  11. It speaks multiple table formats (yes, Delta Lake too).
  12. It works exactly the same in every cloud.

Unfortunately, Tableflow can’t actually do all of these things yet. But it can do a lot of them, and the remaining gaps will all be filled in shortly.

How does it work? Well, that’s the subject of our next blog post. But to summarize: we built a custom, BYOC-native and cloud-native database whose only function is the efficient creation and maintenance of streaming data lakes.

More on the technical details in our next post, but if this interests you, please check out our documentation, and contact us to get admitted to our early access program. You can also subscribe to our newsletter to make sure you’re notified when we publish our next post in this series with all the gory technical details.

Footnotes
  1. This whole problem could have been avoided if the Iceberg specification defined an RPC interface for a metadata service instead of a static metadata file format, but I digress.
  2. This isn't 100% true because compaction is usually more efficient than ingestion, but it's directionally true.