
Using ClickHouse to scale an events engine


Like many companies, we had to change our database stack midway through scaling our core product, Lago, an open-source usage-based billing platform. As we grew more popular, we began ingesting millions of events every minute, and our rudimentary Postgres-only stack wasn’t cutting it. We were suffering heavy load times that impacted our entire app’s performance.

After some exploration, we decided to use a distributed ClickHouse instance strictly for our streamed events. Our analytics services were now able to directly query ClickHouse, an OLAP database. For all other data needs, we kept Postgres.

The strategy was successful. Since the refactor, we haven’t looked back.

Today, we’re going to explore that decision to adopt a hybrid database stack, and more specifically, why we went with ClickHouse.

OLTP versus OLAP databases

Most developers, including junior developers, have experience using OLTP (online transactional processing) databases such as Postgres. As the name implies, OLTP databases are designed for processing transactions. A transaction is one of several types of instructions that software might issue against a database; the most common are (i) read, (ii) insert, (iii) update, and (iv) delete.

OLTP databases are typically general-purpose databases. Because they support every type of data processing, they can be used for almost any data problem, within limits. And, even at a large scale, they are fantastic for software that requires:

  • atomic transactions, where a set of grouped transactions either all occur or don’t occur at all (sketched below)
  • consistency, where queries in-between writes and updates are deterministic and predictable
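
As a minimal sketch of atomicity, consider the classic funds transfer in Postgres, where both updates must commit together or not at all (the table and column names here are hypothetical):

```sql
-- Hypothetical accounts table: move $100 between two accounts.
BEGIN;

UPDATE accounts SET balance = balance - 100.00 WHERE id = 1;
UPDATE accounts SET balance = balance + 100.00 WHERE id = 2;

-- If either UPDATE fails, the transaction rolls back and neither
-- balance changes; COMMIT makes both changes visible at once.
COMMIT;
```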

For most problems, these are important qualities. For some, they are crucial. A banking application can’t have discrepancies whenever money is transferred between accounts; for those problems, an OLTP database is needed for cents-level accuracy. Today, we still use Postgres as our primary database, configured [via our database.yml file](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/config/database.yml#L5). And given that we use Ruby on Rails, [our Postgres schema](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/db/schema.rb) is automatically generated by Rails’ [Active Record](https://guides.rubyonrails.org/active_record_basics.html), an ORM that manages our various models such as [charges](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/spec/models/charge_spec.rb), [credit notes](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/spec/models/credit_note_spec.rb), [invoices](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/spec/models/invoice_spec.rb), [invites](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/spec/models/invite_spec.rb), [fees](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/spec/models/fee_spec.rb), [coupons](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/spec/models/coupon_spec.rb), and [much, much more](https://github.com/getlago/lago-api/tree/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/spec/models). We write some custom queries given [the performance limits of the ORM](https://github.com/getlago/lago/wiki/Is-ORM-still-an-%27anti-pattern%27%3F), but otherwise lean heavily on Active Record for most transactions.

So where do OLAP (online analytical processing) databases like ClickHouse come in? Well, Postgres was designed to be strictly atomic and consistent: two properties that require data to be fully ingested before any query that might process it is run. This creates a problem for tables where entries are ingested in the millions per minute (e.g. billable events, especially those for infrastructure services like managed servers). Specifically, the issue isn’t ingesting the data, but rather simultaneously handling expensive analytical queries without locking up the queue. These data-summarizing problems are where OLAP databases like ClickHouse shine.

OLAP databases are designed for two primary problems—(i) efficiently answering complex read queries with approximate accuracy and (ii) batch-processing a large number of write queries. However, OLAP databases are terrible at mutating data (where large swaths of stored data often need to be rewritten) or deleting data.
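
For instance, in ClickHouse an update or delete is an asynchronous “mutation” that rewrites every affected data part in the background; fine as a rare administrative task, but far too heavy for a hot path. A sketch against a hypothetical events table:

```sql
-- ClickHouse treats UPDATE and DELETE as heavyweight "mutations" that
-- asynchronously rewrite every data part containing matching rows.
ALTER TABLE events UPDATE properties = '{}' WHERE user_id = 42;
ALTER TABLE events DELETE WHERE timestamp < '2020-01-01';
```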

Different OLAP solutions (e.g. ClickHouse, QuestDB, Druid) have different strengths, and we’ll dive into the specific set of traits that made ClickHouse a winning solution in the next section. But all OLAP solutions share a common quality—data is stored in a columnar layout, the inverse of row-oriented OLTP databases like Postgres.

Storage types

Now, from the user’s standpoint, the table’s columns and rows are still just columns and rows. But physically, on disk, data is laid out and scanned column-by-column, not row-by-row. This makes aggregations—such as adding every value in a certain field—very, very fast, as the relevant data is read sequentially.
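
For example, a query like the following only has to read the columns it references, rather than every full row (the table and column names are hypothetical):

```sql
-- On a columnar store, this scan reads only the `code` and `amount`
-- columns sequentially; the rest of each row is never touched.
SELECT sum(amount)
FROM billable_events
WHERE code = 'api_calls';
```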

Enter ClickHouse, our chosen OLAP solution

[ClickHouse](https://clickhouse.com) is an open-source tool spun out of the closed-source engine behind Yandex’s web analytics product. Today, ClickHouse is shepherded by [ClickHouse Inc](https://clickhouse.com/company/our-story), with notable contributions by [Altinity](https://altinity.com). To date, it is one of the most successful OLAP databases, both commercially and technically.

ClickHouse has three notable features that make it an analytics powerhouse—(i) dynamic materialized views, (ii) specialized engines, and (iii) vectorized query execution.

To summarize each:

  • Dynamic Materialized Views. Materialized views are queryable views generated from raw data in underlying tables. While many databases support materialized views, including Postgres, ClickHouse’s materialized views are dynamic, efficiently refreshing their contents whenever new data is ingested. This contrasts with ordinary materialized views, which are just snapshots of a specific point in time and are very expensive to refresh.
  • Specialized Engines. Many databases have a single engine for utilizing hardware to process queries and transactions. ClickHouse, however, has dedicated table engines for specific mathematical operations, such as summing or averaging values.
  • Vectorized Query Execution. ClickHouse’s specialized engines leverage vectorized query execution, where the CPU applies a single instruction to many data elements in parallel (known as SIMD—Single Instruction, Multiple Data).

Combined with its columnar storage, these traits allow ClickHouse to easily sum, average, or generally aggregate database values.
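
To make the first two features concrete, here is a minimal sketch (not our production schema; names and columns are illustrative) of a materialized view feeding a SummingMergeTree engine, which pre-aggregates sums as rows arrive:

```sql
-- Raw events land in an ordinary MergeTree table.
CREATE TABLE events (
    code String,
    amount Float64,
    timestamp DateTime
) ENGINE = MergeTree
ORDER BY (code, timestamp);

-- A specialized engine that collapses rows sharing a key by summing
-- their numeric columns during background merges.
CREATE TABLE daily_totals (
    day Date,
    code String,
    total Float64
) ENGINE = SummingMergeTree
ORDER BY (day, code);

-- The dynamic materialized view updates daily_totals on every insert
-- into events; no manual refresh is needed.
CREATE MATERIALIZED VIEW daily_totals_mv TO daily_totals AS
SELECT
    toDate(timestamp) AS day,
    code,
    sum(amount) AS total
FROM events
GROUP BY day, code;
```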

As a caveat, Postgres isn’t entirely incapable of achieving similar results, but only via a battery of optimizations. For instance, there is a third-party [vectorized executor](https://github.com/citusdata/postgres_vectorization_test) designed for Postgres that imitates ClickHouse’s native support. There is also [a Fast Refresh module](https://aws.amazon.com/blogs/database/building-fast-refresh-capability-in-amazon-rds-for-postgresql/) that uses Postgres’s log to dynamically update materialized views. Coupled with Postgres triggers, developers can create a ClickHouse-like setup. But all of these techniques require significant setup work and additional columns to reach efficiency even comparable to ClickHouse’s.
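
For a sense of what that hand-rolling looks like, here is a minimal sketch (hypothetical tables) of the trigger approach: a summary table kept current on every insert, emulating a dynamic materialized view:

```sql
-- Summary table, updated in lockstep with the base table.
CREATE TABLE daily_totals (
    day   date,
    code  text,
    total numeric,
    PRIMARY KEY (day, code)
);

-- Trigger function: fold each new event into the running total.
CREATE FUNCTION bump_daily_total() RETURNS trigger AS $$
BEGIN
    INSERT INTO daily_totals (day, code, total)
    VALUES (NEW.timestamp::date, NEW.code, NEW.amount)
    ON CONFLICT (day, code)
    DO UPDATE SET total = daily_totals.total + EXCLUDED.total;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_bump_daily_total
AFTER INSERT ON events
FOR EACH ROW EXECUTE FUNCTION bump_daily_total();
```

This works, but every insert now pays the cost of an extra upsert, and the pattern must be repeated for each aggregate you need, which is exactly the kind of setup work ClickHouse handles natively.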


A relevant meme from my Postgres vs ClickHouse guide for PostHog

Recently, the most interesting development in the Postgres-versus-OLAP space is [Hydra](https://www.hydra.so), an open-source, column-oriented distribution of Postgres that launched very recently (after our migration to ClickHouse). Had Hydra been available when we were making our decision, we might’ve made a different choice. However, ClickHouse remains an incredible pick, given its mature product, large community, hardware optimizations, and ease of use side-by-side with Postgres.

Of course, migrating analytics processes to ClickHouse is only half the battle. The other half is actually deploying ClickHouse to production—where a few strategies exist.

How we utilize ClickHouse

When discussing our ClickHouse implementation, there are fundamentally two different topics—what we use ClickHouse for, and how our ClickHouse instance is deployed and maintained.

What we query ClickHouse for

Our ClickHouse instance [ingests raw billable events](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/app/models/clickhouse/events_raw.rb#L3) dispatched by our users. While we don’t write our ClickHouse schema by hand (it is auto-generated by Active Record), it is written to a file [available in our open-source repository](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/db/clickhouse_schema.rb#L4). Our ClickHouse instance has only two tables—raw_events and raw_events_queue—alongside one materialized view, events_raw_mv. That’s it. We don’t store any of the other “business-critical” data in ClickHouse, because those workloads aren’t analytical.

In detail, our [raw_events_queue](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/db/clickhouse_migrate/20231026124912_create_events_raw_queue.rb) table is where events are initially streamed via [Apache Kafka](https://kafka.apache.org), open-source event-streaming software. From it, the [events_raw_mv](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/db/clickhouse_migrate/20231030163703_create_events_raw_mv.rb) materialized view is generated with ClickHouse’s [cast()](https://clickhouse.com/docs/en/sql-reference/functions/type-conversion-functions) function, which maps each event’s metadata from a JSON blob to a string array. Finally, this materialized view pushes data into the [raw_events](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/db/clickhouse_migrate/20231024084411_create_events_raw.rb) table, a [MergeTree](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree) table well suited to a large number of writes.
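
A simplified sketch of that pipeline in ClickHouse DDL might look like the following (the columns, Kafka settings, and exact cast are illustrative; the real definitions live in the migrations linked above):

```sql
-- Kafka engine table: events stream in from a Kafka topic.
CREATE TABLE raw_events_queue (
    organization_id String,
    external_subscription_id String,
    code String,
    timestamp DateTime64(3),
    properties String  -- event metadata as a JSON blob
) ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list = 'raw_events',
         kafka_group_name = 'clickhouse',
         kafka_format = 'JSONEachRow';

-- Destination MergeTree table, built for a high volume of writes.
CREATE TABLE raw_events (
    organization_id String,
    external_subscription_id String,
    code String,
    timestamp DateTime64(3),
    properties Map(String, String)
) ENGINE = MergeTree
ORDER BY (organization_id, external_subscription_id, code, timestamp);

-- The materialized view drains the queue, casting the JSON metadata
-- into typed key/value pairs on the way through.
CREATE MATERIALIZED VIEW events_raw_mv TO raw_events AS
SELECT
    organization_id,
    external_subscription_id,
    code,
    timestamp,
    CAST(JSONExtractKeysAndValues(properties, 'String'),
         'Map(String, String)') AS properties
FROM raw_events_queue;
```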

raw_events is what Lago’s general codebase interfaces with via our [ClickHouseStores](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/app/services/events/stores/clickhouse_store.rb#L14) class, which is tapped when [aggregating billable metrics](https://github.com/getlago/lago-api/blob/e0da0a0b136577bffe5a1b8dac8747c913f7cdf1/app/services/billable_metrics/aggregation_factory.rb#L18). raw_events uses a tuple of organization_id, external_subscription_id, code, and a timestamp as its primary key; given ClickHouse’s [sophisticated support for primary-key tuples](https://medium.com/datadenys/how-clickhouse-primary-key-works-and-how-to-choose-it-4aaf3bf4a8b9), this helps ClickHouse locate rows very quickly.
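
In practice, that ordering means a typical metering query only touches a narrow slice of the table (the identifiers below are hypothetical):

```sql
-- The primary-key tuple lets ClickHouse jump straight to the granules
-- for this org/subscription/code instead of scanning the whole table.
SELECT count()
FROM raw_events
WHERE organization_id = 'org_123'
  AND external_subscription_id = 'sub_456'
  AND code = 'api_calls'
  AND timestamp >= now() - INTERVAL 30 DAY;
```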

How we deploy ClickHouse

Because ClickHouse is an open-source database, it can be self-hosted on any ordinary Linux server. However, many companies trust managed database solutions because they (i) often reduce overall costs, (ii) make scaling databases easier, and (iii) take care of safe replication and backups.

One of the most popular options is ClickHouse Inc’s ClickHouse Cloud, a serverless offering with decoupled compute and storage.

However, we instead opted for the Altinity Operator, which deploys and manages ClickHouse in a Kubernetes cluster within our existing cloud environment. We preferred this approach for the flexibility of its custom resource definitions, its cost efficiency, and its ease of maintenance.

Other notable open-source projects that use ClickHouse

We aren’t the only open-source project that uses ClickHouse; in fact, we aren’t even the only open-source project that has migrated from Postgres to ClickHouse. A notable example is [PostHog](https://posthog.com), an open-source product analytics suite that switched from [Postgres to ClickHouse](https://posthog.com/blog/clickhouse-announcement) given the sheer volume of web events it was processing per second.

Another great example is GitLab, which uses ClickHouse to store streamed event data [in their observability suite](https://docs.gitlab.com/ee/architecture/blueprints/clickhouse_usage/). In general, it’s common for open-source and closed-source projects alike to find a general-purpose database like Postgres or MySQL ill-suited once they start to scale.

Even some closed-source solutions, like the HTTP data-streaming product Tinybird, have made [open-source contributions to ClickHouse](https://www.tinybird.co/blog-posts/we-launched-an-open-source-clickhouse-knowledge-base) given their dependence on it. Slowly, ClickHouse is building the same level of success in the OLAP world that Postgres enjoys in the OLTP space.

Closing Thoughts

Because row-oriented and column-oriented layouts optimize for different workloads, there is no one-size-fits-all database as applications scale. We ran into that problem fairly early in our journey, given the event-heavy nature of our product. However, that doesn’t mean every team needs to start with an OLTP + OLAP stack; they just need to be ready for one when the moment arrives.
