TL;DR
· On identical hardware and data, ClickHouse matches PostgreSQL for single-row UPDATEs and is up to 4,000× faster in our tests for bulk UPDATEs.
· Why it matters: Bulk updates are common in OLTP workloads, and ClickHouse’s columnar design + parallelism make them far faster.
· Caveat: PostgreSQL is fully transactional by default; ClickHouse isn’t. Results compare each engine’s native execution model, not identical transaction guarantees.
PostgreSQL is the most popular open-source OLTP database in the world.
It’s fair to assume that when developers think about UPDATE performance, they think of the baseline that PostgreSQL sets.
In contrast, ClickHouse is the most popular open-source OLAP database in the world.
While ClickHouse sets the expectation of OLAP performance, its OLTP potential is often underrated.
In July 2025, ClickHouse v25.7 introduced high-performance SQL-standard UPDATEs.
Naturally, we want to know how ClickHouse stacks up against the best. Are UPDATEs in ClickHouse competitive with UPDATEs in Postgres?
Comparing PostgreSQL and ClickHouse isn’t apples-to-apples
In our previous post, we benchmarked the new high-performance SQL-standard UPDATEs against other ClickHouse update methods. Here, out of curiosity, we’re running the same benchmarks against PostgreSQL on identical hardware and data.
We focus on two OLTP staples: single-row updates and bulk changes on a classic orders dataset. These tests reveal how much update speed depends on finding rows efficiently, a strength ClickHouse inherits from its analytical roots.
Before diving into the numbers, we want to acknowledge something: a PostgreSQL vs. ClickHouse comparison isn’t apples-to-apples.
Both can run UPDATEs, but their execution models differ in ways that matter when interpreting results.
TL;DR: PostgreSQL wraps every statement in a fully transactional context; ClickHouse doesn’t. That means these results aren’t a perfect measure of transaction performance. However, they’re still an interesting and relevant look at how each system handles update workloads under its native execution model.
PostgreSQL and ClickHouse may both run UPDATE statements, but they don’t do it the same way. They share a similar approach to isolation (both use Multi-Version Concurrency Control, MVCC) but differ in how they handle transactions and commit semantics.
This difference matters when interpreting benchmark results.
Multi-Version Concurrency Control (MVCC)
Both PostgreSQL and ClickHouse use MVCC to let readers and writers work in parallel without blocking each other. Every query sees a consistent snapshot of the data as it existed when it started, which means:
- Readers never see partial changes from concurrent updates.
- Readers and writers don’t block each other.
- Data is never overwritten in place; new versions are created, and old ones are kept until no query needs them.
How this looks in practice:
PostgreSQL
- Creates a new row version with the updated data.
- Keeps the old version until all readers of the old one finish.
- Removes old versions later via VACUUM.
ClickHouse
- Keeps old data parts until all readers of those parts finish.
- Merges updates into the source data part during a background merge and discards old versions.
On isolation alone, PostgreSQL and ClickHouse behave very similarly; both ensure readers always see a consistent snapshot while updates are in flight.
Transactions, commit, and rollback
Here’s where the paths diverge:
PostgreSQL
- Runs every statement, even a single UPDATE, inside a transaction context.
- If you don’t start one explicitly, it still wraps the statement in an implicit transaction (BEGIN → execute → COMMIT).
- Writes all changes to the Write-Ahead Log (WAL) before touching the table, for durability and crash recovery.
- COMMIT makes changes visible; ROLLBACK discards them before commit, marking old row versions invalid for later cleanup.
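To make the implicit-transaction behavior concrete, here is a minimal sketch (the values are illustrative, not taken from the benchmark): in PostgreSQL, the two forms below are equivalent, because autocommit wraps the bare statement in its own transaction.

-- Explicit transaction
BEGIN;
UPDATE lineitem SET l_discount = 0.05 WHERE l_orderkey = 1 AND l_linenumber = 1;
COMMIT;

-- Bare statement: autocommit wraps it in an implicit BEGIN ... COMMIT
UPDATE lineitem SET l_discount = 0.05 WHERE l_orderkey = 1 AND l_linenumber = 1;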
ClickHouse
- Single INSERTs and UPDATEs on a MergeTree table are atomic, consistent, isolated, and durable: they are applied as a whole, provided the data is packed and inserted as a single block.
- By default, it doesn’t wrap every statement in a transaction.
- Supports experimental multi-statement transactions with explicit BEGIN, COMMIT, and ROLLBACK (not yet in ClickHouse Cloud).
PostgreSQL is fully transactional by default, adding per-update-statement overhead. ClickHouse avoids extra bookkeeping unless you explicitly start a transaction (experimental feature, not yet in ClickHouse Cloud).
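As a syntax sketch only (our assumption, not something we benchmarked), a multi-statement transaction on a self-managed server with the experimental feature enabled looks like this; the set of statements allowed inside a transaction is still limited while the feature is experimental.

BEGIN TRANSACTION;
-- one or more statements, e.g. inserting a new line item (omitted columns get defaults)
INSERT INTO lineitem (l_orderkey, l_linenumber, l_quantity, l_extendedprice)
VALUES (1, 1, 10, 100.00);
COMMIT;   -- or ROLLBACK to discard the changes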
With the key execution model differences in mind, let’s look at how we put both systems to the test.
Dataset and benchmark setup
In our previous blog on fast updates in ClickHouse, we walked through the dataset, schema, and setup in detail. Here’s a quick recap of the essentials so we can focus on the new ClickHouse–PostgreSQL comparison.
Dataset
We use the TPC-H lineitem table, which models items from customer orders. This is a classic OLTP-style scenario for updates, as quantities, prices, or discounts might change after the initial insert.
ClickHouse
CREATE TABLE lineitem (
    l_orderkey      Int32,
    l_partkey       Int32,
    l_suppkey       Int32,
    l_linenumber    Int32,
    l_quantity      Decimal(15,2),
    l_extendedprice Decimal(15,2),
    l_discount      Decimal(15,2),
    l_tax           Decimal(15,2),
    l_returnflag    String,
    l_linestatus    String,
    l_shipdate      Date,
    l_commitdate    Date,
    l_receiptdate   Date,
    l_shipinstruct  String,
    l_shipmode      String,
    l_comment       String)
ORDER BY (l_orderkey, l_linenumber);
PostgreSQL
CREATE TABLE lineitem (
    l_orderkey      INT,
    l_partkey       INT,
    l_suppkey       INT,
    l_linenumber    INT,
    l_quantity      DECIMAL(15,2),
    l_extendedprice DECIMAL(15,2),
    l_discount      DECIMAL(15,2),
    l_tax           DECIMAL(15,2),
    l_returnflag    CHAR(1),
    l_linestatus    CHAR(1),
    l_shipdate      DATE,
    l_commitdate    DATE,
    l_receiptdate   DATE,
    l_shipinstruct  CHAR(25),
    l_shipmode      CHAR(10),
    l_comment       VARCHAR(44),
    PRIMARY KEY (l_orderkey, l_linenumber)
);
Primary key and indexing
Both databases use the official TPC-H compound primary key (l_orderkey, l_linenumber).
- In PostgreSQL, this creates a B-tree index by default.
- In ClickHouse, it creates a sparse primary index over the same columns.
In both cases, the index ensures that all queries with predicates on these columns can quickly locate matching rows, speeding up both updates and reads.
Data size
We use the scale factor 100 version of this table, which contains roughly 600 million rows (~600 million items from ~150 million orders) and occupies ~30 GiB on disk with ClickHouse, and ~85 GiB with PostgreSQL.
Test environment
Hardware and OS
We used an AWS m6i.8xlarge EC2 instance (32 cores, 128 GB RAM) with a gp3 EBS volume (16k IOPS, 1000 MiB/s max throughput) running Ubuntu 24.04 LTS.
ClickHouse version
We performed all tests with ClickHouse 25.7, using a default installation with no optimizations applied.
PostgreSQL version
We used PostgreSQL 17.5 and installed it with optimized settings.
Benchmark methodology
We measured cold update times by dropping the OS-level page cache before each update. This forces the database to load the affected rows from storage and apply the changes from scratch, without help from cached data or prior reads. The result is upper-bound (worst-case) latency numbers. Warm-cache measurements are harder to compare fairly, as PostgreSQL and ClickHouse cache and reuse data differently.
Running it yourself
You can reproduce all results with the Benchmark scripts on GitHub.
We’ll start small, looking at point updates that change a single row at a time, before moving on to multi-row bulk changes.
Single-row (point) updates
We start with 10 standard SQL UPDATE statements, each targeting a single row identified by its primary key. These updates are executed sequentially in both ClickHouse and PostgreSQL.
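The statements follow this shape (a representative sketch; the benchmark uses different values), with the WHERE clause pinning down exactly one row via the compound primary key:

UPDATE lineitem
SET l_quantity = 5, l_extendedprice = 5995.95
WHERE l_orderkey = 1 AND l_linenumber = 1;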
Update latency
For each update, the chart shows the cold time from issuing the UPDATE to when the change becomes visible to queries. The Cols upd column indicates how many of the 16 table columns were updated in that statement.

(See the raw benchmark results for PostgreSQL and ClickHouse behind the chart above.)
In these point updates, ClickHouse consistently finished faster, which is a super satisfying result. That said, we’ll highlight again that Postgres incurs the overhead of wrapping every statement in a transaction context, which ClickHouse does not.
Given that, our main takeaways here are:
- Postgres is really fast!
- ClickHouse UPDATEs have started off strong, with great performance in cases where transactions aren’t needed.
- The current performance is good enough that, should ClickHouse introduce production support for transactions, there is a good chance we’ll end up in parity with Postgres in a more apples-to-apples test.
It’s also interesting that the two systems locate an individual row in different ways: PostgreSQL uses a B-tree index with one entry per row to find the target row directly, while ClickHouse’s sparse primary index identifies the block of rows (8,192 rows by default) that might contain the target, then scans that block in parallel.
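If you want to see this difference yourself, both systems can show how they would locate the row (a sketch; the exact plan output varies by version and data):

-- PostgreSQL: should report an Index Scan on the primary-key B-tree
EXPLAIN UPDATE lineitem SET l_quantity = 5
WHERE l_orderkey = 1 AND l_linenumber = 1;

-- ClickHouse: shows which primary-index granules a read with this predicate would touch
EXPLAIN indexes = 1
SELECT count() FROM lineitem
WHERE l_orderkey = 1 AND l_linenumber = 1;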
For most workloads, the update isn’t the end of the story; it’s what happens next that matters. In real dashboards and reports, updated values need to be reflected right away in downstream queries. So next, we check how each database handles analytical queries that run immediately after these updates.
Query latency after update
We use 10 typical analytical queries that would immediately benefit from updated data. They cover common warehouse-style patterns, such as:
- Revenue and margin recalculation – e.g., computing net revenue for a given shipping mode.
- Aggregates on filtered subsets – e.g., average discounts and taxes for orders with certain comments or statuses.
- Operational counts – e.g., counting orders shipped under specific modes or with certain instructions.
- Ad-hoc investigations – e.g., sampling rows that match a comment substring.
These queries represent the kind of real-time dashboards and analytical reports where users expect updated facts to be reflected immediately after a change.
These queries are paired 1:1 with the 10 updates: after each update, its paired query runs. Each query touches at least the updated rows and columns, but many scan broader data for realistic analytics.
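As an illustration of the first pattern above (a sketch computing net revenue for a given shipping mode; the actual benchmark queries are in the linked scripts), the same SQL runs on both systems:

SELECT sum(l_extendedprice * (1 - l_discount)) AS net_revenue
FROM lineitem
WHERE l_shipmode = 'AIR';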
The following chart shows post-update cold query times for all 10 queries after each single-row UPDATE:

(See the raw benchmark results for PostgreSQL and ClickHouse behind the chart above.)
For ClickHouse, we show two cases:
- Before materialization — queries see updated values instantly via patch-on-read, without waiting for background merges.
- Fully materialized — updates are merged into the base data by the background merge process (one way to force this state is sketched below).
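If you reproduce this and don’t want to wait for background merges, one general way to force merges in ClickHouse, and our assumption for reaching the fully materialized state (the benchmark scripts may do it differently), is:

OPTIMIZE TABLE lineitem FINAL;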
It’s not too surprising that ClickHouse is consistently much faster here; these are analytical queries over large datasets, and that’s ClickHouse’s home turf. The key takeaway is how little the update stage slows it down: whether updates are freshly applied or fully merged, query performance stays high and predictable.
Single-row updates are the OLTP baseline; they measure how quickly each engine can locate and modify an individual record. But many operational changes touch much larger slices of data, so next we look at bulk updates.
Bulk updates (multi-row changes)
We took the same 10 standard SQL UPDATE statements used for the single-row updates test, but modified the predicates so they no longer target individual rows by primary key. Instead, they match dozens, hundreds, or even hundreds of thousands of rows, a very common OLTP pattern in an orders table.
Typical reasons for such bulk updates include:
- Correcting data entry errors – fixing discounts, taxes, or prices for all orders matching a certain date and quantity.
- Adjusting order statuses – changing return flags, shipping modes, or line statuses in response to business events.
- Applying policy or pricing changes – repricing items for a given supplier, contract, or campaign.
- Handling special-case bulk actions – marking orders as “priority,” “warranty extended,” or “return-flagged” based on filters like commit date or quantity.
These scenarios are frequent in production OLTP workloads, where operational changes must be applied to large, filtered subsets of data rather than to just one row at a time.
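For example, the first scenario above might look like the sketch below (illustrative values; because the WHERE clause filters on non-primary-key columns, neither system can use the primary key index):

UPDATE lineitem
SET l_discount = 0.10
WHERE l_shipdate = '1995-03-15' AND l_quantity > 40;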
Update latency
For every one of the 10 bulk UPDATEs (that we run sequentially in both ClickHouse and PostgreSQL), the chart measures the cold time from issuing the UPDATE to the moment its changes are visible to queries. The Rows upd and Cols upd columns indicate how many rows and how many of the 16 table columns were updated in each statement.

(See the raw benchmark results for PostgreSQL and ClickHouse behind the chart above.)
Initially, these results looked a bit off… but they actually make sense. Except for update ⑤, none of these bulk updates can use a primary key index because their WHERE clauses filter on columns other than the primary key. (Both systems could use extra indexes, but with 16 columns, covering all combinations is impractical, especially for ad-hoc bulk updates, where creating the index might take longer than the update.)
In those cases, the real cost isn’t writing the changes, it’s finding the rows. That means scanning the table end to end.
For PostgreSQL, that takes 381–398 s on this dataset before the changes are visible. ClickHouse scans the same data in 0.096–5.15 s, up to 4,000× faster. The advantage comes from highly parallel processing and a column-oriented layout, only reading columns used in the UPDATE’s WHERE clause and only writing values for the columns being updated.
Update ⑤ is the one exception: both systems use an index to jump directly to the matching rows, narrowing the gap to “only” 18×. But in every other case, where both have to scan everything, ClickHouse’s raw scan speed keeps update-visibility latency low, even for massive changes.
In OLTP workloads, you can’t update what you haven’t located, and locating large numbers of rows quickly takes an analytical engine’s horsepower.
Still, applying the update is only part one. Part two is how quickly you can run analytics on the updated data, which is often the whole reason you made the change in the first place. Let’s see how each system performs when we immediately follow these bulk updates with representative analytical queries.
Query latency after update
We use the same 10 typical analytical queries as before, paired again 1:1 with the 10 updates: after each update, its paired query runs. Each query touches at least the updated rows and columns, but many scan broader data for realistic analytics.
The next chart shows post-update cold query times for all 10 queries after each bulk-row UPDATE:

(See the raw benchmark results for PostgreSQL and ClickHouse behind the chart above.)
As before, unsurprisingly, ClickHouse is consistently much faster than PostgreSQL on the same hardware and same dataset for typical analytical queries that would immediately benefit from updated data.
Taken together, these tests show that ClickHouse not only applies changes quickly, but also makes them immediately useful for analytical queries, a rare combination in OLTP-heavy scenarios.
Conclusion
High-performance SQL-standard UPDATEs are a rare feature in column stores and analytical databases. They’re hugely beneficial to analytical workloads, and one of the most challenging stepping stones towards supporting OLTP-style workloads.
When we set out to build UPDATEs for ClickHouse, the goal wasn’t to “beat Postgres”. That’s still not the goal. But we’re extremely proud that the effort has paid off with such convincing results: parity with PostgreSQL for single-row updates, and speedups of up to 4,000× for bulk updates.
We know that Postgres offers full transaction semantics that aren’t yet possible in ClickHouse, but never say never. We see no reason why ClickHouse couldn’t support transactions in the future.
And we learnt something that, perhaps, should have been obvious:
You can’t UPDATE what you can’t find — and ClickHouse finds fast.