Unified storage
Continuum collapses the streaming pipe, the data lake, and the replay engine into one system — with mutable-history corrections as a first-class primitive.
• Write
producer.send(
"events",
data
)
• Query
SELECT *
FROM iceberg
WHERE date >= '2026-01-01'
• Replay
consumer.replay(
from: "Jan 1",
to: "Mar 31"
)
One source of truth. Three access patterns.
Up to 100x compression (Parquet, columnar)
40-60% cheaper than Kafka alternatives
Iceberg-native lakehouse
Built-in correction & reorg handling
Stop duplicating data across Kafka, queues, warehouses, and replay pipelines. Continuum unifies the stack.
Event stream (Kafka)
Queue (RabbitMQ, SQS)
Data warehouse (Snowflake, Databricks)
Replay / export pipelines
Four systems. Three pipelines. Data duplicated at every hop.
Why we built Continuum
No solution handled reorganisations natively. Compression ratios were terrible for structured on-chain data. Retaining full chain history from block zero was either ruinously expensive or simply not supported. And the throughput demands of ingesting every block, every transaction, across every chain — simultaneously and in order — pushed existing tools past their limits.
We needed all of these things — so we built a platform that handles them from the ground up.
Chain reorgs, restatements, late-arriving corrections, retroactive patches — all handled at the storage layer as first-class primitives. Every other streaming platform is append-only. Continuum rewrites history without rebuilding pipelines. Implement two functions: `handleEvents` and `cleanUpEvents`. Everything else is automatic.
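A minimal sketch of that two-function contract (the class shape, event fields, and dispatch below are illustrative assumptions, not Continuum's actual SDK): `handleEvents` applies new or corrected batches, and `cleanUpEvents` reverses invalidated ones.

```python
# Illustrative two-function consumer: handleEvents / cleanUpEvents.
# Event shape and class structure are assumptions for this sketch;
# the platform invokes the two hooks and handles everything else.

class TransferIndexer:
    """Tracks per-account balances from on-chain transfer events."""

    def __init__(self):
        self.balances = {}   # account -> balance
        self.applied = {}    # event id -> event, kept so effects can be undone

    def handleEvents(self, events):
        """Apply a batch of events (initial delivery or corrected re-delivery)."""
        for ev in events:
            self.applied[ev["id"]] = ev
            self.balances[ev["to"]] = self.balances.get(ev["to"], 0) + ev["amount"]

    def cleanUpEvents(self, event_ids):
        """Undo events invalidated by a reorg or restatement.

        Corrected replacements arrive through handleEvents afterwards.
        """
        for eid in event_ids:
            ev = self.applied.pop(eid, None)
            if ev is not None:
                self.balances[ev["to"]] -= ev["amount"]


idx = TransferIndexer()
idx.handleEvents([{"id": "tx1", "to": "alice", "amount": 5},
                  {"id": "tx2", "to": "bob", "amount": 3}])
# A reorg drops tx2: cleanUpEvents reverses it, then the corrected event
# from the canonical chain is re-delivered via handleEvents.
idx.cleanUpEvents(["tx2"])
idx.handleEvents([{"id": "tx2-canonical", "to": "carol", "amount": 3}])
```

Because corrections arrive through the same hook as first deliveries, the indexer never needs a separate backfill pipeline; history rewrites are just another batch.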
Write via Kafka, REST, AMQP, or SQS — whatever your producers already speak. Consume as streams, queues, or SQL tables via the Iceberg catalog — queryable directly from Snowflake, Trino, Databricks, Spark, and Athena. One source of truth, many access patterns.
Stateless producers, no broker layer, no replication tax.
Parquet format, not log segments. 3-5x more space-efficient than any Kafka-compatible system. 17x on blockchain data; structured or repetitive data typically achieves 30-100x.
S3 is the source of truth — 11 nines of durability. Full replay from any point in history. Retention costs scale with compression, not replication. Running in production for 3+ years across 50+ chains, 11.9 PB, and 2B+ events per month.
Architecture
No brokers. No disks. No replication tax.
Producers write directly to S3 through stateless pods.
Write directly to S3. No broker layer, no replication. Scale horizontally with zero coordination overhead.
Parquet format, columnar from the write path. 17-20x on blockchain event data, 30-100x on structured time-series. 3-5x advantage over Kafka-compatible systems in the general case.
Multi-protocol reads: streams, queues, or SQL via Iceberg. Consume as JSON, Arrow, or Parquet. Bridge to Kafka, RabbitMQ, or SQS. Query directly from Snowflake, Trino, Databricks, or Spark — no CDC pipeline required.
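For example (catalog, table, and column names below are hypothetical, not a real schema), an engine that already speaks Iceberg could read the same events with plain SQL:

```sql
-- Hypothetical table in the Continuum-exposed Iceberg catalog;
-- names are illustrative only.
SELECT block_number, tx_hash, amount
FROM iceberg.events.transfers
WHERE event_date >= DATE '2026-01-01'
ORDER BY block_number
```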
Continuum was designed from scratch with no legacy compatibility constraints. Here's how it stacks up.
| Feature | Continuum | Apache Kafka | Alternative A | Alternative B |
|---|---|---|---|---|
| Architecture | S3-native, no brokers | Broker cluster + EBS/tiered | S3-native, agents + brokers | S3-native, Kafka-compatible |
| Storage format | Columnar Parquet | Log segments | Log segments (Kafka format) | Log segments (Kafka format) |
| Compression | 17-100x (Parquet) | 3-5x | 3-5x | 3-5x |
| Replication | S3 durability (none needed) | RF=3 (3× storage + network) | S3 durability (none needed) | S3 durability (none needed) |
| Correction handling (reorgs) | Built-in | Not supported | Not supported | Not supported |
| Full replay | From any point in history | Retention-limited | S3-backed | S3-backed |
| Protocol bridging | Kafka, RabbitMQ, SQS | Kafka only | Kafka only | Kafka only |
| Deployment | Managed or self-hosted | Self-managed or managed (MSK, Confluent) | BYOC (their control plane) | BYOC (their control plane) |
| Cost @ 200 MiB/s, 90d | $15.3K/mo | $33K–$235K/mo | $28K/mo | $25.5K/mo |
| Queryable from SQL engines (Iceberg) | Native | Requires CDC / ETL pipeline | Requires CDC / ETL pipeline | Requires CDC / ETL pipeline |
Cost Analysis
Same workload, different architectures: 200 MiB/s sustained (uncompressed), 90-day retention, 3 AZs, us-east-1 pricing.
Continuum retention is infinite at this price — 90 days shown for comparison.
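As a back-of-envelope check (the S3 rate and the Kafka compression midpoint below are assumptions for illustration, not Continuum's actual pricing model), here is the raw-storage side of that comparison:

```python
# Storage math for the 200 MiB/s, 90-day workload above.
# The S3 rate is an assumed us-east-1 Standard price (~$0.023/GB-month);
# only the storage component is modelled, not compute or network.

MIB = 1024 ** 2
GIB = 1024 ** 3
TIB = 1024 ** 4

throughput_bps = 200 * MIB               # bytes/second, uncompressed
seconds_90d = 90 * 24 * 3600
raw_bytes = throughput_bps * seconds_90d
raw_tib = raw_bytes / TIB                # ~1,483 TiB ingested over 90 days

compression = 17                         # low end of the 17-100x Parquet range
stored_gib = raw_bytes / compression / GIB

s3_rate = 0.023                          # assumed $/GB-month, S3 Standard
s3_storage_cost = stored_gib * s3_rate   # ~$2K/month at 17x compression

# A replication-based design stores RF=3 copies of its (3-5x compressed) log.
kafka_compression = 4                    # assumed midpoint of the 3-5x range
kafka_rf = 3
kafka_stored_gib = raw_bytes / kafka_compression * kafka_rf / GIB
footprint_ratio = kafka_stored_gib / stored_gib   # (17/4) * 3 = 12.75x
```

At the low end of the compression range the S3 storage line lands around $2K/month, while an RF=3 architecture carries a roughly 12-13x larger disk footprint before any broker, network, or compute costs — which is where the gap in the table comes from.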
Pricing
Usage-based pricing. All infrastructure included. No separate AWS bill.
In production today
Write directly to S3. No broker layer, no replication. Scale horizontally with zero coordination overhead.
Onboarding design partners
Sensor telemetry, feature recomputation, LLM inference logs, agent traces. Late labels and mutable-history corrections handled at the storage layer — no pipeline rebuilds. Replay for retraining from any point in time.
Corrections and restatements handled natively. No more “oops” events. Full audit trail with replay.
Edge-to-cloud, long-retention sensor data. Columnar compression achieves 30-100x on structured time-series — storage costs drop by an order of magnitude.
Immutable event log with full replay. S3 durability (11 nines). No broker state to lose. No application rewrites needed.
Bridge to and from Kafka, RabbitMQ, and SQS. Migrate incrementally — no rip-and-replace required.
Powering Moralis infrastructure: 50+ blockchain networks, 2B+ events/month, 11.9 PB retained.
S3 encryption at rest. TLS in transit. Data isolation per customer. SOC 2 compliant infrastructure.
Run Continuum on your infrastructure, your cloud, your terms. No control plane dependency. Full data sovereignty.
Dedicated support engineers. SLA guarantees. Architecture reviews and migration planning included.
Built to handle chain reorganisations, network forks, and sustained high throughput — 24/7, without intervention.
Join teams already using Continuum for event streaming at scale, with native correction handling.