OrioleDB Overview
The OrioleDB Postgres extension provides a drop-in replacement storage engine for the default heap storage method. It is designed to improve Postgres' scalability and performance.
OrioleDB addresses PostgreSQL's scalability limitations by removing bottlenecks in the shared memory cache under high concurrency. It also optimizes write-ahead log (WAL) insertion through row-level WAL logging. These changes lead to significant improvements on the industry-standard TPC-C benchmark, which approximates a real-world transactional workload. In a TPC-C benchmark run on a c7g.metal instance, OrioleDB outperformed the default PostgreSQL heap method with a 3.3x speedup.
OrioleDB is under active development and currently has some limitations. Only B-tree indexes are supported, so features like pg_vector's HNSW indexes are not yet available. An Index Access Method bridge, which will unlock support for all index types used with heap storage, is under active development. In the Supabase OrioleDB image, the default storage method has been updated to use OrioleDB, granting better performance out of the box.
Concepts
Index-organized tables
OrioleDB uses index-organized tables, where table data is stored in the index structure. This design eliminates the need for separate heap storage, reduces overhead and improves lookup performance for primary key queries.
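Because OrioleDB is exposed through PostgreSQL's table access method interface, you can also select it explicitly with the `USING` clause and confirm which access method backs a table via the system catalogs. A minimal sketch (the `kv_example` table is hypothetical):

```sql
-- Explicitly choose the OrioleDB table access method.
-- The table's rows are stored in its primary key index rather than a separate heap.
create table kv_example (
  key text primary key,
  value jsonb not null
) using orioledb;

-- Check which access method backs the table (standard pg_class/pg_am catalogs).
select c.relname, am.amname
from pg_class c
join pg_am am on am.oid = c.relam
where c.relname = 'kv_example';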
No buffer mapping
In-memory pages are connected to the storage pages using direct links. This allows OrioleDB to bypass PostgreSQL's shared buffer pool and eliminate the associated complexity and contention in buffer mapping.
Undo log
Multi-Version Concurrency Control (MVCC) is implemented using an undo log. The undo log stores previous row versions and transaction information, which enables consistent reads while removing the need for table vacuuming completely.
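The practical upshot is that previous row versions live in the undo log rather than in the table files, so concurrent readers see a consistent snapshot without leaving dead tuples behind for VACUUM. A two-session sketch using standard PostgreSQL visibility semantics (the `accounts` table and its values are hypothetical):

```sql
-- Session A:
begin;
update accounts set balance = 50 where id = 1;

-- Session B (concurrent): still sees the previous row version,
-- which OrioleDB serves from the undo log rather than the table.
select balance from accounts where id = 1;

-- Session A:
commit;

-- Session B: subsequent reads see the committed version.
select balance from accounts where id = 1;
```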
Copy-on-write checkpoints
OrioleDB implements copy-on-write checkpoints to persist data efficiently. This approach writes only modified data during a checkpoint, reducing the I/O overhead compared to traditional PostgreSQL checkpointing and allowing row-level WAL logging.
Usage
Creating OrioleDB project
You can get started with OrioleDB by creating a new Supabase project and choosing the OrioleDB Public Alpha Postgres version.
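Once the project is running, you can confirm the extension is active from the SQL editor. A minimal sketch, assuming the `orioledb_version()` helper shipped with the extension is available:

```sql
-- Enable the extension if it isn't already
-- (it is pre-enabled on the Supabase OrioleDB image).
create extension if not exists orioledb;

-- Report the installed OrioleDB version.
select orioledb_version();
```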
Creating tables
To create a table using the OrioleDB storage engine, execute a standard CREATE TABLE statement; by default, it will create a table using the OrioleDB storage engine. For example:
```sql
-- Create a table
create table blog_post (
  id int8 not null,
  title text not null,
  body text not null,
  author text not null,
  published_at timestamptz not null default CURRENT_TIMESTAMP,
  views bigint not null,
  primary key (id)
);
```
Creating indexes
OrioleDB tables always have a primary key. If one isn't defined explicitly, a hidden primary key is created using the `ctid` column. Additionally, you can create secondary indexes.
Currently, only B-tree indexes are supported, so features like pg_vector's HNSW indexes are not yet available.
```sql
-- Create an index
create index blog_post_published_at on blog_post (published_at);

-- Create a partial index
create index blog_post_views on blog_post (views) where (views > 1000);
```
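Because only B-tree indexes are currently supported, attempting to create any other index type on an OrioleDB table fails. A sketch of what to expect (the exact error message may vary):

```sql
-- Works: B-tree is the default and the only supported index access method.
create index blog_post_author on blog_post (author);

-- Fails on an OrioleDB table: non-B-tree access methods are not yet supported.
create index blog_post_views_hash on blog_post using hash (views);
```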
Data manipulation
You can query and modify data in OrioleDB tables using standard SQL statements, including `SELECT`, `INSERT`, `UPDATE`, `DELETE`, and `INSERT ... ON CONFLICT`.
```sql
INSERT INTO blog_post (id, title, body, author, views)
VALUES (1, 'Hello, World!', 'This is my first blog post.', 'John Doe', 1000);

SELECT * FROM blog_post ORDER BY published_at DESC LIMIT 10;
 id │     title     │            body             │  author  │         published_at          │ views
────┼───────────────┼─────────────────────────────┼──────────┼───────────────────────────────┼───────
  1 │ Hello, World! │ This is my first blog post. │ John Doe │ 2024-11-15 12:04:18.756824+01 │  1000
```
Viewing query plans
You can view the execution plan using the standard `EXPLAIN` statement.
```sql
EXPLAIN SELECT * FROM blog_post ORDER BY published_at DESC LIMIT 10;
                                                 QUERY PLAN
────────────────────────────────────────────────────────────────────────────────────────────────────────────
 Limit  (cost=0.15..1.67 rows=10 width=120)
   ->  Index Scan Backward using blog_post_published_at on blog_post  (cost=0.15..48.95 rows=320 width=120)

EXPLAIN SELECT * FROM blog_post WHERE id = 1;
                                    QUERY PLAN
──────────────────────────────────────────────────────────────────────────────────
 Index Scan using blog_post_pkey on blog_post  (cost=0.15..8.17 rows=1 width=120)
   Index Cond: (id = 1)

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM blog_post ORDER BY published_at DESC LIMIT 10;
                                                                      QUERY PLAN
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 Limit  (cost=0.15..1.67 rows=10 width=120) (actual time=0.052..0.054 rows=1 loops=1)
   ->  Index Scan Backward using blog_post_published_at on blog_post  (cost=0.15..48.95 rows=320 width=120) (actual time=0.050..0.052 rows=1 loops=1)
 Planning Time: 0.186 ms
 Execution Time: 0.088 ms
```