Row-oriented vs column-oriented databases: a head-to-head comparison

Al Brown
Last updated: May 8, 2026

Row-oriented databases store each table row contiguously on disk. Column-oriented databases store each column contiguously instead. Row stores like Postgres and MySQL win point lookups and single-row updates. Column stores like ClickHouse, DuckDB, and Snowflake win wide aggregations and compress 5–10× tighter, reading ~6% of the bytes a row store reads on a query that selects 3 of 50 columns.

TL;DR #

| Axis | Row-oriented (Postgres, MySQL) | Column-oriented (ClickHouse, DuckDB, Snowflake) |
| --- | --- | --- |
| Storage layout | Tuples stored together, page-aligned (8 KB pages in Postgres) | Each column stored as a separate file or stripe |
| I/O on a 3-of-50-column query | ~100% of the bytes | ~6% of the bytes |
| Compression ratio | 1.5–3× typical | 5–10× typical, up to 30× on low-cardinality columns |
| Write path | Single-row update in O(log n) via B-tree | Rewrite a part, copy-on-write, or merge-on-read deltas |
| Read latency | Sub-millisecond point lookup | Sub-second aggregation over millions of rows |
| Concurrency | Thousands of point queries/sec/node | Tens (legacy warehouses like Snowflake) to hundreds or thousands (modern real-time engines like ClickHouse) of analytical queries/sec/node |
| Typical use | OLTP, transactional, single-row CRUD | OLAP, aggregation, time-series, analytics |
| Examples | Postgres, MySQL, Oracle, SQL Server | ClickHouse, DuckDB, Snowflake, BigQuery, Redshift |

What is a row-oriented database? #

A row-oriented database stores all the columns of a row together on disk, in a single page or block. Each table is a heap of fixed-size pages (8 KB in Postgres), with tuples written into the pages and a B-tree index per primary or secondary key. The layout is optimised for point lookups by key and for single-row mutations.

Row stores dominate transactional workloads (OLTP) for two structural reasons. First, fetching one row requires one B-tree walk and one page read, sub-millisecond on cached data. Second, updating one column updates one tuple on one page, with MVCC handling concurrent versions. Postgres, MySQL, Oracle, and SQL Server are the canonical examples. Each adds engine-specific machinery (Postgres MVCC and TOAST, InnoDB's clustered index in MySQL), but the underlying row layout is the same.
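
The point-lookup path can be sketched in a few lines of Python. This is a toy model, not Postgres internals: tuples packed whole into fixed-size pages, with a sorted key index standing in for the B-tree.

```python
import bisect

PAGE_SIZE = 4  # tuples per page, tiny for illustration

rows = [
    (1, "alice", "2026-05-01", True),
    (2, "bob",   "2026-05-02", False),
    (3, "carol", "2026-05-03", True),
]

# Pack whole tuples into "pages" in insertion order (a heap file).
pages = [rows[i:i + PAGE_SIZE] for i in range(0, len(rows), PAGE_SIZE)]

# A sorted (key -> (page, slot)) list stands in for the B-tree index.
index = sorted((r[0], (p, s)) for p, page in enumerate(pages)
               for s, r in enumerate(page))

def point_lookup(key):
    """One index probe, one page read: the whole row comes back at once."""
    keys = [k for k, _ in index]
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        page_no, slot = index[i][1]
        return pages[page_no][slot]
    return None
```

Because every column of the row lives on the same page, the lookup returns the full tuple with a single page read, which is exactly what makes this layout fast for OLTP.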

What is a column-oriented database? #

A column-oriented database stores each column of a table together on disk, in a separate file, stripe, or block. Each column compresses independently using encodings tuned to its type and value distribution. Queries read only the columns they reference, so I/O scales with the columns the query touches, not the table's full width. ClickHouse, DuckDB, and Snowflake are canonical examples.

Column stores dominate analytical workloads (OLAP) because most analytical queries touch a handful of columns from wide tables, and because those columns compress dramatically once stored adjacent to each other. The layout fits aggregation, group-by, and time-window scans natively, and it pairs with vectorised execution and data skipping to cut work further. There is a tradeoff. Every column file must be opened to materialise a single row, so full-row point lookups on a column store can be more expensive than on a row store.
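
A minimal Python sketch of the same idea: the table lives as per-column arrays, an aggregation touches only the column it references, and materialising one full row has to visit every column.

```python
# The same small table, stored column-by-column instead of row-by-row.
columns = {
    "id":     [1, 2, 3],
    "name":   ["alice", "bob", "carol"],
    "signup": ["2026-05-01", "2026-05-02", "2026-05-03"],
    "active": [True, False, True],
}

def count_active():
    # An aggregation reads exactly one column; the other three are
    # never touched.
    return sum(columns["active"])

def materialise_row(i):
    # A full-row point lookup must open every column, the tradeoff
    # described above.
    return tuple(col[i] for col in columns.values())
```

The asymmetry is the whole story: `count_active` scales with one column's size, while `materialise_row` pays a cost per column in the table.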

How they differ on storage layout and I/O #

A row store reads every byte of every row it touches. A column store reads only the columns the query references. The difference compounds with table width.

Take a 50-column table where each column averages 8 bytes per row. One million rows occupy 400 MB on disk, ignoring compression and overhead. A query that selects 3 columns and aggregates over all rows reads:

  • Row store: ~400 MB. The engine must fetch every page to extract the 3 wanted values from each row.
  • Column store: ~24 MB (3/50 × 400 MB). The engine reads only the 3 column files, skipping the other 47 entirely.

That's a 16× I/O reduction before compression. After compression, the gap typically widens to 50–100× because column-store data compresses tighter (see below). AWS Redshift's columnar storage docs walk through the same calculation with similar numbers.
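
The arithmetic is short enough to check directly (decimal MB, ignoring compression and per-page overhead, as in the example):

```python
# Back-of-envelope I/O for the 50-column, 1M-row example above.
n_rows, n_cols, bytes_per_value = 1_000_000, 50, 8
cols_read = 3

row_store_bytes = n_rows * n_cols * bytes_per_value     # every page
col_store_bytes = n_rows * cols_read * bytes_per_value  # 3 column files

print(row_store_bytes // 10**6, "MB vs", col_store_bytes // 10**6, "MB")
print("reduction:", round(row_store_bytes / col_store_bytes, 1))  # ~16.7x
```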

How they differ on compression #

Column stores compress 5–10× tighter than row stores because adjacent values in a column share type, often share value, and frequently follow predictable patterns. Row stores typically reach 1.5–3× because each block mixes types and the codec can't exploit value distribution.

A row-store page interleaves int32 IDs, int64 timestamps, varchar names, and boolean flags. General-purpose codecs like LZ4 or Zlib find some redundancy at the byte level, but the mixed types limit the ceiling. A column-store stripe holds one type, sorted (or near-sorted) by the engine's primary key. Dictionary encoding collapses repeated strings to small integers, run-length encoding collapses long runs of the same value, and delta encoding collapses monotonic sequences like timestamps. ClickHouse production data shows 30×+ compression on low-cardinality columns where dictionary plus run-length stack. The columnar compression deep dive walks through the full codec stack.
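
A quick synthetic illustration (not a benchmark): run-length encoding collapses a sorted low-cardinality column to three pairs, and even a general-purpose codec like zlib compresses the contiguous column far tighter than the same values interleaved with other row fields.

```python
import zlib
from itertools import groupby

# A sorted low-cardinality column: three long runs.
column = ["EU"] * 6000 + ["US"] * 3000 + ["APAC"] * 1000

# Run-length encode the column into (value, run length) pairs.
rle = [(v, len(list(g))) for v, g in groupby(column)]
# -> [("EU", 6000), ("US", 3000), ("APAC", 1000)]

# The same values as a row store sees them: interleaved with other
# per-row fields (an id and a date), breaking up the runs.
row_bytes = b"".join(f"{i},{v},2026-05-08\n".encode()
                     for i, v in enumerate(column))
col_bytes = "".join(column).encode()

ratio_col = len(col_bytes) / len(zlib.compress(col_bytes))
ratio_row = len(row_bytes) / len(zlib.compress(row_bytes))
```

The exact ratios depend on the data, but the contiguous column wins by a wide margin because the codec sees long runs of identical bytes instead of a mix of types.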

How they differ on the write path #

Row stores commit single-row updates in O(log n) by walking a B-tree and rewriting one page. Column stores rewrite a whole column block, use copy-on-write parts, or rely on merge-on-read deltas, making single-row updates more expensive.

Bulk inserts invert the picture. ClickHouse ingests millions of rows per second per node by appending sorted columnar parts and merging them in the background. DuckDB and Snowflake follow similar batching patterns. Postgres and MySQL typically saturate at single-digit thousands of rows per second per node on commit-heavy workloads, because every row triggers index maintenance and WAL appends. The OLTP write path optimises for one-row-at-a-time durability; the OLAP write path optimises for throughput.
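
The batched columnar write path can be sketched as append-only sorted parts plus a background merge, loosely modelled on ClickHouse's MergeTree and greatly simplified:

```python
import heapq

parts = []  # each part: an immutable sorted list of (key, value) rows

def insert_batch(rows):
    # Ingest is a cheap append of a new sorted part; nothing is
    # updated in place.
    parts.append(sorted(rows))

def merge_parts():
    # Background compaction: fold all sorted parts into one.
    merged = list(heapq.merge(*parts))
    parts.clear()
    parts.append(merged)

insert_batch([(3, "c"), (1, "a")])
insert_batch([(2, "b")])
merge_parts()
```

Writes never contend with each other on shared pages, which is why this path sustains high bulk-insert throughput; the cost is deferred into merges rather than paid per row at commit time.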

How they differ on read latency and concurrency #

Row stores answer point lookups in sub-millisecond latency and handle thousands of concurrent point queries per node, because each read is one B-tree walk against an in-memory or page-cached index. AlloyDB has demonstrated ~467K transactions/sec on select-only workloads, and PlanetScale MySQL serves 420K+ QPS on sharded clusters.

Column stores serve analytical queries (aggregations across millions of rows) at sub-second latency, and modern columnar databases like ClickHouse can scale analytical query concurrency into the thousands per node.

When to choose a row-oriented database #

Pick a row-oriented database when:

  • The dominant query shape is point lookup by primary or secondary key.
  • Single-row updates and deletes are part of the steady-state workload, not exceptional events.
  • Concurrency exceeds ~1,000 transactions per second per node, with strict per-query latency budgets.
  • Transactional consistency at row granularity is required (financial ledgers, account tables, inventory).
  • Tables are narrow (under ~20 columns) and queries usually touch most of them.

When to choose a column-oriented database #

Pick a column-oriented database when:

  • Queries scan many rows but reference a handful of the table's columns (the textbook OLAP shape).
  • Aggregation, group-by, and time-window queries dominate the read path.
  • Storage cost and compression efficiency matter at scale (TB+ data).

Frequently asked questions

Can the same database be both row-oriented and column-oriented?

Hybrid engines exist, known as Hybrid Transactional-Analytical Processing (HTAP) databases. An HTAP database typically pairs a primary row-oriented store with a secondary column store under the hood, plus an internal process that automatically replicates data from one store to the other. While this offers a simple architecture on paper, HTAP databases struggle to deliver leading performance for either transactional or analytical workloads, and coupling the two engines complicates scaling.

What about wide-column databases like Cassandra and HBase?

Wide-column databases are not the same as column-oriented databases. Cassandra, HBase, and Bigtable store rows, but each row's columns can vary. They are sparse row stores optimised for high-write-throughput key-value access. They do not deliver the OLAP scan performance of a true columnar engine like ClickHouse or Snowflake. The naming overlap is the most common reader confusion in this category.

Is one of the two faster overall?

Neither. The two win different workloads. On point lookups by primary key, row stores are 10–100× faster than column stores. On wide-table aggregations, column stores are 10–1,000× faster than row stores. The right framing is which one fits the queries you actually run, not which is faster in the abstract.

Can I migrate from a row store to a column store without rewriting my application?

Tools like ClickHouse's pg_clickhouse let you keep sending queries to Postgres while analytical queries are automatically pushed down to ClickHouse. You avoid rewriting your application or queries, while benefiting from the strengths of Postgres and ClickHouse as a single unified data stack.
