
What React and Apache Iceberg Have in Common: Scaling Iceberg with Virtual Metadata

Richard Artoul
Co-Founder
December 12, 2025
HN Disclosure: WarpStream sells a drop-in replacement for Apache Kafka built directly on top of object storage.

I want to start this blog post by talking about React. I know what you’re thinking: “React? I thought this was a blog post about Iceberg!” It is, so please just give me a few minutes to explain before you click away.

React’s Lesson in Declarative Design

React launched in 2013 and one of the main developers, Pete Hunt, wrote a thoughtful and concise answer to a simple question: “Why did we build React?”. The post is short, so I encourage you to go read it right now, but I’ll summarize what I think are the most important points:

  • React’s declarative model makes building applications where data changes over time easy. Without React, developers had to write the declarative code to render the application’s initial state, but they also had to write all of the imperative code for every possible state transition that the application would ever make. With React, developers only have to write one declarative function to render the application given any input state, and React takes care of all the imperative state transitions automatically (sketched briefly after this list).
  • React is backend-agnostic. Most popularly, it’s used to render dynamic web applications with HTML, but it can also be used to drive UI rendering outside the context of a browser entirely. For example, react-native allows developers to write native iOS and Android applications as easily as they write web applications.
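To make the first point concrete, here’s a tiny sketch of the difference (my example, not one from Pete Hunt’s post):

```typescript
import { createElement, useState } from "react";

// Imperative: every state transition has to be hand-written and kept in sync
// with the code that rendered the initial state.
function imperativeCounter(): HTMLButtonElement {
  const button = document.createElement("button");
  let count = 0;
  button.textContent = `Count: ${count}`;
  button.onclick = () => {
    count += 1;
    button.textContent = `Count: ${count}`; // forget this line and the UI goes stale
  };
  return button;
}

// Declarative (React): one function describes the UI for *any* value of the
// state, and React takes care of the DOM updates needed to get there.
function Counter() {
  const [count, setCount] = useState(0);
  return createElement("button", { onClick: () => setCount(count + 1) }, `Count: ${count}`);
}
```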

The Similarity Between the DOM and Iceberg Metadata  

So what does any of this have to do with data lakes and Iceberg? Well, it turns out that engineers building data lakes in 2025 have a lot in common with frontend engineers in 2013. At its most basic level, Iceberg is “just” a formal specification for representing tables as a tree of metadata files.

Similarly, the browser Document Object Model (DOM) is “just” a specification for representing web pages as a tree of objects.

While this may sound trite (“Ok, they’re both trees, big deal”) the similarities are more than superficial. For example, the biggest problem for engineers interacting with either abstraction is the same: building the initial tree is easy, keeping the tree updated in real-time (efficiently and without introducing any bugs) is hard.

For example, consider the performance difference between inserting 10,000 new items into the DOM iteratively vs. inserting them all at once as a batch. The batch approach is almost 20x faster because it performs significantly fewer DOM tree mutations and triggers far less re-rendering.

Thanks to Cursor, a WarpStream customer, all my benchmarks come with a sweet UI now.

In this particular scenario, writing the batching code is an easy fix. But it’s not always easy to remember, and as applications grow larger and more complex, performance problems like this can be hard to spot just by looking at the code.
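For reference, the fix is roughly this (a sketch, not the exact code behind the benchmark above):

```typescript
const items = Array.from({ length: 10_000 }, (_, i) => `item ${i}`);
const list = document.getElementById("list")!;

// Slow: 10,000 separate mutations against the live DOM, each of which can
// trigger additional layout and rendering work.
for (const item of items) {
  const li = document.createElement("li");
  li.textContent = item;
  list.appendChild(li);
}

// Fast: build the nodes off-DOM in a DocumentFragment, then attach them to
// the live DOM in a single operation.
const fragment = document.createDocumentFragment();
for (const item of items) {
  const li = document.createElement("li");
  li.textContent = item;
  fragment.appendChild(li);
}
list.appendChild(fragment);
```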

React's Virtual DOM: A Layer of Indirection for Efficiency and Correctness

React solves this problem more generally by introducing a layer of indirection between the programmer and the actual DOM. This layer of indirection is called the “virtual DOM”. When a programmer creates a component in React, all they have to do is write a declarative render() function that accepts data and generates the desired tree of objects.

However, the render function does not interact with the browser DOM directly. Instead, React takes the output of the render function (a tree of objects) and diffs it with the previous output. It then uses this diff to generate the minimal set of DOM manipulations to transition the DOM from the old state to the new desired state. A programmer could write this code themselves, but React automates it even for large and complex applications.
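In code, the contract looks roughly like this (a simplified sketch using the public react-dom API; the diffing itself happens inside React):

```typescript
import { createElement } from "react";
import { createRoot } from "react-dom/client";

// A component is just a function from data to a virtual tree of elements.
function TodoList({ todos }: { todos: string[] }) {
  return createElement(
    "ul",
    null,
    todos.map((todo) => createElement("li", { key: todo }, todo))
  );
}

const root = createRoot(document.getElementById("app")!);

// First render: React creates the <ul> and two <li> nodes.
root.render(createElement(TodoList, { todos: ["write post", "ship Tableflow"] }));

// Second render: React diffs the new virtual tree against the previous one and
// applies the minimal mutation (inserting a single new <li>) instead of
// rebuilding the list from scratch.
root.render(createElement(TodoList, { todos: ["write post", "ship Tableflow", "relax"] }));
```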

This layer of indirection also introduces many opportunities for optimization. For example, React can delay updating the DOM for a short period of time so that it can accumulate additional virtual DOM manipulations before updating the actual DOM all at once (automatic batching).

Managing Iceberg Metadata: A Tedious, Error-Prone Chore

Now let’s transition back to Iceberg and walk through all of the steps required to add a new Parquet file (one that’s already been generated) to an existing Iceberg table (a rough code sketch follows the list):

  1. Locate and read the current metadata.json file for the table.
  2. Validate compatibility with the Iceberg table’s schema.
  3. Compute the partition values for the new file.
  4. Create the DataFile metadata object.
  5. Read the old manifest file(s).
  6. Create a new manifest file listing the new data file(s).
  7. Generate a new version of the metadata.json file.
  8. Optionally (though it must be done at some point):
    1. Expire old snapshots (metadata cleanup).
    2. Rewrite manifests for optimization.
    3. Reorder or compact files if needed for read performance.

That’s a lot of steps, and every one of them has to be done 100% correctly or the table will become corrupted. Worse, for the most part these steps have to be performed single-threaded: concurrent writers that modify the same table metadata conflict with each other and have to retry.

This complexity is the reason that there is very little Iceberg adoption outside of the Java ecosystem. It’s almost impossible to do any of this correctly without access to the canonical Java libraries. That’s also the reason why Spark has historically been the only game in town for building real-time data lakes.

How We Built WarpStream Tableflow

WarpStream’s Tableflow implementation has a lot in common with React. At the most basic level, the goal of our Tableflow product is to continuously modify the Iceberg metadata tree (our version of the DOM) efficiently and correctly. There are two ways to do this:

  1. Manipulate the metadata tree directly in object storage. This is what Spark and everybody else does.
  2. Create a “virtual” version of the metadata tree, manipulate that, and then reflect those changes back into object storage asynchronously, akin to what React does.

We went with the latter option for a number of reasons, foremost of which is performance. Normally, Iceberg metadata operations must be executed single-threaded, but our virtual metadata system can be updated millions of times per second. This allows us to reduce ingestion latency dramatically and scale seamlessly from 1 MiB/s to 10+ GiB/s of ingestion with minimal orchestration.

But what exactly is the “virtual metadata tree”? In WarpStream’s case, it’s just a database. The exact same database that powers metadata for WarpStream’s Kafka clusters! This database isn’t just fast; it also provides extremely strong consistency and isolation guarantees (all transactions are fully serializable), which makes it much easier to implement data lake features and ensure that they’re correct.
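Conceptually (and this is only an analogy, not our actual schema), you can think of the virtual metadata tree as a few tables in that database rather than a tree of files in object storage:

```typescript
// Illustrative only: a row-oriented view of the same information that Iceberg
// spreads across metadata.json, manifest lists, and manifest files.
interface TableRow {
  tableId: string;
  schemaJson: string;
  partitionSpecJson: string;
  currentVersion: number;
}

interface DataFileRow {
  tableId: string;
  path: string;            // object storage key of the Parquet file
  partitionValues: string; // encoded partition tuple
  recordCount: number;
  addedAtVersion: number;
}

// Appending a file becomes a single-row insert inside a serializable
// transaction, not a rewrite of manifests and metadata.json in object storage.
// That's what lets the "tree" be updated millions of times per second.
```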

Tableflow in Action: Exactly-Once Ingestion at Scale

So what does this all look like in practice?

Let’s track the flow of data starting with data stored in a Kafka topic (WarpStream or otherwise):

  1. The WarpStream Agent issues fetch request(s) against the Kafka cluster to fetch records for a specific topic.
  2. The Agent deserializes the records, optionally applies any user-specified transformation functions, makes sure the records match the table schema, creates a Parquet file, and then flushes that file to the object store.
  3. The Agent commits the existence of a new Parquet file to the WarpStream control plane. This operation also atomically updates the set of consumed offsets tracked by the control plane, which provides the system with exactly-once ingestion guarantees (practically for free!). There’s a short sketch of this after the list.
  4. At this point the records are “durable” in WarpStream Tableflow (the source Kafka cluster could go offline or the Kafka topic could be deleted and we wouldn’t lose any records), but they are not yet queryable by external query engines. The reason for this is that even though the records have been written to a Parquet file in object storage, we still need to update the Iceberg metadata in the object store to reflect the existence of these new files.
  5. Finally, the WarpStream control plane takes a new snapshot of its internal table state and generates a new set of Iceberg metadata files in the object store. Now the newly-ingested data is queryable by external query engines [1].
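Here’s a minimal sketch of why step 3 gives us exactly-once ingestion (hypothetical schema and API, not our real control plane code): the new file and the source offsets are committed in the same serializable transaction, so a retried or duplicate commit for already-consumed offsets simply doesn’t apply.

```typescript
interface CommitRequest {
  topic: string;
  partition: number;
  startOffset: bigint;  // first source offset contained in the Parquet file
  endOffset: bigint;    // last source offset contained in the Parquet file
  parquetPath: string;  // object storage key of the flushed file
}

// Hypothetical transactional key-value interface with serializable isolation.
interface Txn {
  get(key: string): Promise<string | undefined>;
  put(key: string, value: string): Promise<void>;
}

async function commitIngestedFile(txn: Txn, req: CommitRequest): Promise<boolean> {
  const offsetKey = `offsets/${req.topic}/${req.partition}`;
  const committed = BigInt((await txn.get(offsetKey)) ?? "-1");

  // If these offsets were already committed (e.g. an Agent retried after a
  // timeout), the commit is a no-op: that's the exactly-once guarantee.
  if (req.startOffset <= committed) {
    return false;
  }

  // Otherwise, register the file and advance the offsets atomically.
  await txn.put(`files/${req.parquetPath}`, JSON.stringify(req, (_, v) => typeof v === "bigint" ? v.toString() : v));
  await txn.put(offsetKey, req.endOffset.toString());
  return true;
}
```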

That’s just the ingestion path. WarpStream Tableflow provides a ton of other table management services like:

  • Data expiration
  • Orphan file cleanup
  • Background compaction
  • Custom partitioning
  • Sorting, etc.

But for brevity, we won’t go into the implementation details of those features in this post.

It’s easy to see why this approach is much more efficient than the alternative despite introducing an additional layer of indirection: we can perform concurrency control using a low-latency transactional database (millions of operations/s), which reduces the window for conflicts compared to a single-writer model on top of object storage alone. Table operations that don’t conflict can execute concurrently; we only abort those with true conflicts. The most common operation in Tableflow, appending new records, is extremely unlikely to have a true conflict due to how our append jobs are scheduled within the cluster.
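A toy illustration of that difference (hypothetical keys, not our real conflict-detection logic): with a transactional database, conflicts are detected at the granularity of the keys each transaction actually touches, while committing directly against object storage funnels every writer through the same metadata.json pointer.

```typescript
// Toy conflict check: two transactions conflict only if their write sets overlap.
function conflicts(writeSetA: Set<string>, writeSetB: Set<string>): boolean {
  for (const key of writeSetA) {
    if (writeSetB.has(key)) return true;
  }
  return false;
}

// Two concurrent appends touch disjoint keys, so both can commit:
const appendA = new Set(["files/a.parquet", "offsets/topic/0"]);
const appendB = new Set(["files/b.parquet", "offsets/topic/1"]);
console.log(conflicts(appendA, appendB)); // false -> execute concurrently

// Mutating metadata directly in object storage means every append races on the
// same metadata pointer, so concurrent writers always collide and one must retry.
const directA = new Set(["metadata.json"]);
const directB = new Set(["metadata.json"]);
console.log(conflicts(directA, directB)); // true -> one writer redoes its work
```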

In summary, unlike traditional data lake implementations that perform all metadata mutation operations directly against the object store (single-threaded), our implementation parallelizes trivially and scales to millions of metadata operations per second. In addition, we don’t need to worry about the number of partitions or files that participate in any individual operation.

Earlier in the post, I alluded to the fact that this approach makes developing new features easier, as well as guaranteeing that they’re correct. Take another look at step 3 above: WarpStream Tableflow guarantees exactly-once ingestion into the data lake table, and I almost never remember to brag about it because it falls so naturally out of the design that we barely had to think about it. When you have a fast database that provides fully serializable transactions, strong guarantees and correctness (almost) just happen [2].

Multiple Table Formats (for Free!)

We’ve spent a lot of time talking about performance and correctness, but the original React creators had more than that in mind when they came up with the idea of the virtual DOM: multiple backends. While it was originally designed to power web applications, today React has a variety of backends like React Native that enable the same code and libraries to power UIs outside of the browser like native Android and iOS apps.

Virtual metadata provides the same benefit for Tableflow. Today, we only support the Iceberg table format, but in the near future we’ll add full support for Delta Lake as well. Let’s take another look at the Tableflow architecture diagram:

The only step that needs to change to support Delta Lake is step 4. There is (effectively) a single transformation function that takes Tableflow’s virtual metadata snapshot as an input and outputs Iceberg metadata files. All we need to do is write another function that takes Tableflow’s virtual metadata snapshot as input and outputs Delta Lake metadata files, and we’re done.
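A hedged sketch of what that pluggable final step could look like (hypothetical interfaces, not our actual code):

```typescript
// Hypothetical: a point-in-time snapshot of Tableflow's virtual metadata.
interface VirtualSnapshot {
  schemaJson: string;
  files: { path: string; partitionValues: string; recordCount: number }[];
}

interface MetadataFile {
  path: string;        // where the file should land in object storage
  contents: Uint8Array;
}

// Each table format is just a different "renderer" for the same snapshot, much
// like react-dom and react-native are different backends for the same virtual DOM.
interface MetadataMaterializer {
  materialize(snapshot: VirtualSnapshot): MetadataFile[];
}

class IcebergMaterializer implements MetadataMaterializer {
  materialize(snapshot: VirtualSnapshot): MetadataFile[] {
    // ...emit manifest files, a manifest list, and a new metadata.json...
    return [];
  }
}

class DeltaLakeMaterializer implements MetadataMaterializer {
  materialize(snapshot: VirtualSnapshot): MetadataFile[] {
    // ...emit a new commit entry into the table's _delta_log/ directory...
    return [];
  }
}
```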

Every other feature of Tableflow (compaction, orphan file cleanup, stateless transformations, partitioning, sorting, etc) remains completely unchanged. If Tableflow operated on the metadata in the object store directly, every single one of those features would have to be rewritten to accommodate Delta Lake as well.

Interested in trying WarpStream Tableflow? You can learn more about Tableflow here, read the Tableflow docs, and fill out this form to get early access to Tableflow.

Footnotes

[1] For security reasons, the WarpStream control plane doesn’t actually have access to the customer’s object storage bucket, so instead it writes the new metadata to a WarpStream-owned bucket and sends the Agents a pre-signed URL they use to copy the files into the customer’s bucket.

[2] I’m exaggerating a little bit; our engineers work very hard to guarantee the correctness of our products.
