Stream more,
spend less

WarpStream is a drop-in replacement for Apache Kafka®, designed from the ground up to minimize costs and operations.

80%
Cheaper than Apache Kafka

Scalable Pricing

Write throughput is metered in tiers starting at 1 cent per GiB for the first 5 TiB of data.

Our estimate includes the total cost for your workload with no hidden fees or gotchas.

Reduced operational burden is not taken into account by the calculator. Those savings are significant, but we like to think you can't put a price on sleeping through the night.

Have questions?
Need custom pricing?

Unit Prices

BYOC

BYOC clusters run the data plane in your cloud account, and the control plane in ours. This is the most cost-effective way to run WarpStream, and data never leaves your cloud account.

There are no per-Agent, per-core, or per-partition fees.
Write Throughput (uncompressed): $0.01 to $0.00237 / GiB
Write throughput is the amount of logical data (uncompressed) produced to all the topic-partitions in the cluster.
  First 5 TiB/month: $0.01 / GiB
  Next 20 TiB/month: $0.0075 / GiB
  Next 50 TiB/month: $0.00563 / GiB
  Next 1,750 TiB/month: $0.00422 / GiB
  Next 2,500 TiB/month: $0.00316 / GiB
  Over 5,000 TiB/month: $0.00237 / GiB

Storage (uncompressed): $0.002 / GiB-month
Storage is the amount of logical data (uncompressed) stored in the cluster at any given moment.

Cluster Minutes: $0.0345 / 15 min
Cluster minutes are billed in 15-minute increments for any 15-minute interval that the cluster receives requests.
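As a rough illustration of how the tiered write-throughput rates combine (a sketch, not the official calculator; the 30 TiB workload is an arbitrary example):

```python
TIB = 1024  # GiB per TiB

# (tier size in GiB, price per GiB); None means no upper bound.
# Rates taken from the BYOC write-throughput tiers above.
BYOC_WRITE_TIERS = [
    (5 * TIB, 0.01),
    (20 * TIB, 0.0075),
    (50 * TIB, 0.00563),
    (1750 * TIB, 0.00422),
    (2500 * TIB, 0.00316),
    (None, 0.00237),
]

def tiered_cost(gib: float, tiers=BYOC_WRITE_TIERS) -> float:
    """Total monthly cost in dollars for `gib` GiB of uncompressed writes."""
    cost, remaining = 0.0, gib
    for size, price in tiers:
        if remaining <= 0:
            break
        chunk = remaining if size is None else min(remaining, size)
        cost += chunk * price
        remaining -= chunk
    return cost

# 30 TiB/month: 5 TiB at $0.01 + 20 TiB at $0.0075 + 5 TiB at $0.00563
print(round(tiered_cost(30 * TIB), 2))  # 233.63
```

Each tier applies only to the data that falls within it, so the marginal rate drops as volume grows.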

Serverless

Serverless clusters run both the data plane and the control plane in WarpStream's infrastructure.

Serverless clusters use the same billing dimensions as BYOC clusters, plus two additional dimensions for network ingress and egress.
Write Throughput (uncompressed): $0.02 to $0.0047 / GiB
Write throughput is the amount of logical data (uncompressed) produced to all the topic-partitions in the cluster.
  First 5 TiB/month: $0.02 / GiB
  Next 20 TiB/month: $0.015 / GiB
  Next 50 TiB/month: $0.0113 / GiB
  Next 1,750 TiB/month: $0.0084 / GiB
  Next 2,500 TiB/month: $0.0063 / GiB
  Over 5,000 TiB/month: $0.0047 / GiB

Network Ingress (compressed): $0.04 / GiB

Network Egress (compressed):
  Intra-region: $0.04 / GiB
  Internet: $0.12 / GiB

Storage (uncompressed): $0.02 / GiB-month
Storage is the amount of logical data (uncompressed) stored in the cluster at any given moment.

Cluster Minutes: $0.0345 / 15 min
Cluster minutes are billed in 15-minute increments for any 15-minute interval that the cluster receives requests.


Add Ons

Managed Single-Tenant Control Plane Cell
Enterprise Support

FAQs

What assumptions do you make in your cost estimator?

It's difficult to model every workload, so your experience may vary based on a wide range of factors. However, we have gone out of our way to model realistic parameters for both Kafka and WarpStream. Our objective is not to skew the results in favor of WarpStream, and we have tried to model configurations that make sense for most use cases. If you would like a custom estimate, please contact us.

The cost estimator assumes:

  • ~10 MiB/sec of compressed write throughput per CPU core can be achieved for both WarpStream Agents and Kafka Brokers.
  • We assume a 5:1 compression ratio.
  • Both Kafka and WarpStream utilize highly available three availability zone deployments.
  • Kafka is deployed with 3x replication.
  • Kafka uses a minimum instance size of r4.xlarge.
  • For both Kafka and WarpStream, we assume 3x consumer fanout.
  • For Kafka, target EBS utilization is 50%, which is generally recommended to avoid filling disks if write throughput increases suddenly.
  • EBS volume pricing is based on the lowest-cost offering (GP2), and we assume that no more than 32 TiB of EBS storage can be attached to a single broker. EBS has a hard limit of 64 TiB, but even at 32 TiB, broker restarts and partition rebalancing become unmanageable.
  • Kafka leader partitions are zone-aligned with producers for 1/3 of write throughput.
  • With Fetch From Follower disabled, 1/3 of Kafka read throughput is zone-aligned.
  • With Fetch From Follower enabled, 100% of Kafka read throughput is zone-aligned.
  • WarpStream Agents are zone-aligned for all produce and consume throughput.
  • WarpStream Agents flush to object storage every 250ms (this is the default flush interval).

These assumptions are intended to be generally accurate for most use cases; however, your workload may have different characteristics, so you may obtain different results in practice. For a detailed analysis of your specific workload, contact us.
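To make the sizing assumptions concrete, here is a rough sketch (illustrative only; the 500 MiB/sec workload is a made-up example) of estimating a minimum core count from the 5:1 compression ratio and ~10 MiB/sec-per-core figures above:

```python
import math

COMPRESSION_RATIO = 5        # 5:1 compression, per the assumptions above
MIB_PER_SEC_PER_CORE = 10    # compressed write throughput per CPU core

def min_cores(uncompressed_mib_per_sec: float) -> int:
    """Minimum CPU cores to sustain a given uncompressed write rate."""
    compressed = uncompressed_mib_per_sec / COMPRESSION_RATIO
    return math.ceil(compressed / MIB_PER_SEC_PER_CORE)

# 500 MiB/sec uncompressed -> 100 MiB/sec compressed -> 10 cores
print(min_cores(500))  # 10
```

A real deployment would add headroom for spikes and spread those cores across three availability zones, as the assumptions describe.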

How can WarpStream be so inexpensive? It seems too good to be true.

WarpStream is compatible with the Apache Kafka® protocol, but we are not running Kafka. Instead, we run a stateless Agent that has no local disks, and writes directly to object storage, which avoids 100% of cross-AZ replication charges. These stateless Agents are also much easier to operate than Kafka brokers, so you can drastically reduce the amount of time that you spend managing your streaming infrastructure by switching to WarpStream.

WarpStream effectively uses object storage as both the storage layer and the network layer, which avoids much of the cost associated with running Kafka in cloud environments. By writing directly to object storage, WarpStream avoids replicating data between Agent nodes. Instead, data is durably persisted to object storage before WarpStream provides an acknowledgement to the producer. Once data is written to object storage, replication is handled by the object store. If you use Amazon S3 as the object store, for example, this means that the data that you write to WarpStream has an eleven nines (99.999999999%) durability guarantee. And because your data is not replicated between Agents before reaching object storage, you don't pay anything extra for this durability.

In addition, the WarpStream Service Discovery system ensures that your clients are 100% zone-aligned with Agents for both Produce() and Fetch() requests, which completely avoids cross-AZ traffic in both the write path and read path by default. Kafka's Fetch from Follower feature helps reduce cross-AZ traffic in the read path, but because Kafka clients must always write to a leader partition, cross-AZ traffic in the write path is unavoidable. WarpStream Agents do not have the concept of leader partitions, so any producer can produce to any Agent.

Finally, because the compute layer is stateless, you can autoscale the Agents, which means you don't need to overprovision compute for your cluster to be able to handle peak load. This stateless model also enables WarpStream to run on smaller instance types with lower memory requirements for lower-throughput workloads. For example, whereas Kafka is recommended to run on at least d2.xlarge or r4.xlarge EC2 instances in AWS, WarpStream can run on much smaller instances from less expensive instance families.

Will I still be charged for an idle cluster?

No. WarpStream's stateless compute model, consumption-based billing system, and automatic scale-to-zero functionality make idle WarpStream clusters free. This is true for both our Serverless and BYOC products, although for BYOC clusters you will need to scale the Agent nodes to zero yourself.

Keep in mind that when a WarpStream cluster is scaled to zero, the data is not gone. It is still persisted in object storage, and will be available as soon as the cluster scales back up.

WarpStream cluster-minutes are accrued in 15-minute intervals, so you are charged for an interval only if the cluster received traffic during that period. You are not charged for idle clusters, and there are no per-partition charges, either.
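As a sketch of the arithmetic (the 8-hours-per-day workload is a made-up example), here is how cluster-minute charges accrue at the $0.0345-per-interval rate:

```python
import math

RATE_PER_INTERVAL = 0.0345   # $ per 15-minute interval with traffic

# Cluster active 8 hours/day; idle intervals cost nothing.
active_minutes_per_day = 8 * 60
intervals_per_day = math.ceil(active_minutes_per_day / 15)  # 32

monthly_cost = intervals_per_day * 30 * RATE_PER_INTERVAL   # 30-day month
print(round(monthly_cost, 2))  # 33.12
```

The remaining 16 idle hours each day accrue no cluster-minute charges at all.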

Why does WarpStream charge for uncompressed writes?

Normally, networking charges associated with running Kafka are accrued based on compressed network throughput. However, WarpStream charges for uncompressed data written because we want to offer predictable pricing, and we believe that your bill should not fluctuate based on which compression algorithm you choose to use in your client. This also aligns incentives so that we are encouraged to reduce your cloud infrastructure costs for the WarpStream BYOC product. This philosophy is also why we don't charge fees per Agent or per core.

For Serverless clusters, we also charge for network ingress and egress, which are metered as compressed data volume produced to and consumed from your WarpStream Serverless cluster. For network egress traffic (e.g., consumer traffic), we differentiate pricing based on where your consumer is running. If your consumer is running in the same region as your WarpStream cluster, you will be charged the intra-region egress rate. If you avoid consuming data over the internet, and compress your data client-side, your network ingress and network egress charges will be minimized.

BYOC clusters are only charged for uncompressed writes, cluster-minutes, and storage. There are no network ingress or egress charges for self-hosted clusters.

Does WarpStream charge me for the number of Agents that I am running?

No, unlike other vendors with BYOC or self-hosted software deployment models, WarpStream has no per-Agent, per-node, or per-vCPU charges. Feel free to scale your clusters to best fit your workload, without needing to worry about the cost implications of doing so. You can even set up autoscaling, and your cluster will scale out and in as quickly as your autoscaler can respond and provision containers or instances.

Is replication included in WarpStream's storage pricing?

Unlike other Kafka and Kafka-compatible systems, WarpStream does not charge extra for replication. That's because there is no local data to replicate. Object storage is the primary and only storage, so replication is handled transparently by the object store itself. Writes are durably persisted to object storage and committed to the metadata store before providing acknowledgement to the client, so you can be confident that your data is being replicated behind the scenes.

Keep in mind that WarpStream's storage fees are all-inclusive, meaning the advertised price is what you will actually pay. Some other vendors list unit prices pre-replication, which means that the actual fees that you pay are 3x higher.

Does WarpStream charge for partitions?

No. Unlike some other Kafka-compatible systems, WarpStream has no per-partition charges. And unlike Kafka, there is no requirement to increase the number of WarpStream Agents when the number of partitions increases. However, similar to Kafka and related systems, performance is improved by not using an excessive number of partitions.

Is there a limit on the number of partitions per Agent?

No. WarpStream has no limits on the number of partitions per Agent because WarpStream doesn't have partition "leaders" like Kafka does. We also don't charge any additional fee per partition.

Idle partitions do not contribute to Agent resource utilization at all, but the number of active partitions (i.e., how many partitions are being written to at any given moment) does contribute towards Agent resource utilization.

By default, WarpStream clusters have a limit on the total number of partitions in a cluster, but these can be raised upon request.

Ready to
get started?