CockroachDB 22.1 Multi-Region Abstractions with YCSB

An Introduction to Regional Survivability for YCSB Workloads A, B, C


I have written about YCSB with CockroachDB before using the original multi-region commands; this post is an update using the new multi-region (MR) abstractions introduced in 21.x, which make schema design and regional data placement easier for multi-region applications.

To recap, YCSB is a widely accepted benchmark for relational databases, in part because it represents a set of clients performing simple CRUD operations that can be configured into various mixes of insert, read, update, scan, and read-update operations. The data model and client are simple and can be reconfigured to represent query behavior across industries such as session management (cookies, shopping carts, microservices), metadata management (user profiles, IoT devices, password management), and fast lookups (product or financial lookups). Furthermore, the data model can be partitioned across regions/datacenters, which makes YCSB a great tool for illustrating ACID transactions across multiple locations operating on the same rows within a database.

CockroachDB is a distributed relational database that provides an always-on SQL endpoint across multiple nodes, data centers, and regions on both private and public clouds. Take a minute to review the video and design doc. CockroachDB also provides strong consistency in the face of node and regional failures while maintaining a recovery point objective (RPO) of zero.

Regional Survivability and Latency using a primary region

Let us start off with a CockroachDB Cluster which spans 3 regions each with 4 nodes.
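As a sketch of that topology (join addresses, certificate paths, and store locations here are placeholders), each node is started with a --locality flag so the cluster knows which region it belongs to, and the regions can then be verified from SQL:

```shell
# Run on each of the 12 nodes (4 per region), varying region= accordingly.
cockroach start \
  --locality=region=us-east1 \
  --join=<node1>,<node2>,<node3> \
  --certs-dir=certs --store=/mnt/data

# Confirm all three regions are visible to the cluster.
cockroach sql --certs-dir=certs -e "SHOW REGIONS FROM CLUSTER;"
```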

Now let us load the schema,

Run CockroachDB MR commands to place YCSB DATA across 3 regions with regional survivability.

After setting the primary region to “us-east1”, add the two secondary regions, “us-east4” and “us-central1”.
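Those steps map to the following statements (assuming the database is named ycsb):

```sql
-- Set the home region, add the secondary regions, then require
-- the database to survive a full region failure.
ALTER DATABASE ycsb SET PRIMARY REGION "us-east1";
ALTER DATABASE ycsb ADD REGION "us-east4";
ALTER DATABASE ycsb ADD REGION "us-central1";
ALTER DATABASE ycsb SURVIVE REGION FAILURE;
```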

Any tables/data added to the YCSB database will have data protection across 3 regions with the primary region in “us-east1”. The zone configuration below indicates that all ranges of data will be distributed across all 3 regions, with 5 replicas per range, where 2 voting replicas are constrained to us-east1.
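The zone configuration being described can be inspected directly with the stock command (output elided here):

```sql
SHOW ZONE CONFIGURATION FROM DATABASE ycsb;
```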

Now let us create a table and load 1 million rows

Create schema using the ycsb JDBC create table
Load 1 million rows
Loaded 1 million rows, batched across multi-value inserts

Also, these inserts were batched in sizes from 1 to 128 rows, with the majority at 128 rows in a single multi-value insert. Notice for future reference the insert fast method used for a multi-regional write.
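A load invocation along these lines produces the batching described above (host and credentials are placeholders; jdbc.batchupdateapi and db.batchsize are standard YCSB JDBC-binding properties, and pgJDBC's reWriteBatchedInserts rewrites JDBC batches into multi-value INSERTs):

```shell
# Load 1 million rows, batched at up to 128 rows per multi-value INSERT.
bin/ycsb load jdbc -s -P workloads/workloada \
  -p db.driver=org.postgresql.Driver \
  -p db.url="jdbc:postgresql://<host>:26257/ycsb?reWriteBatchedInserts=true" \
  -p db.user=ycsb -p db.passwd="" \
  -p recordcount=1000000 \
  -p jdbc.batchupdateapi=true \
  -p db.batchsize=128
```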

These multi-value inserts are grouped into explicit transactions with the following latencies for cross-region replication.

Now let’s look at the schema and data placement of the usertable.

usertable DDL. NOTE: LOCALITY REGIONAL BY TABLE implies us-east1 is the home region for all of the rows’ leaseholders
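The usertable schema under this locality looks approximately like the following (the standard YCSB JDBC-binding layout of a key plus ten value columns; column types are an assumption):

```sql
CREATE TABLE usertable (
  ycsb_key VARCHAR(255) PRIMARY KEY,
  field0 TEXT, field1 TEXT, field2 TEXT, field3 TEXT, field4 TEXT,
  field5 TEXT, field6 TEXT, field7 TEXT, field8 TEXT, field9 TEXT
) LOCALITY REGIONAL BY TABLE IN PRIMARY REGION;
```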

The table is configured to have a primary region following the same data placement as the ycsb zone configuration.

Using the SHOW RANGES command, all the rows for the usertable are stored across 3 ranges, or what others may call shards. Each range has 5 replicas (copies) stored across the 3 regions: 2 copies in “us-east1”, with the remaining 3 split 2:1 across “us-east4” and “us-central1”. Of the voting replicas in “us-east1”, one is the leaseholder, which was enforced through the primary region command set earlier. All reads and writes go through the leaseholder in “us-east1” even when a client issues a SQL statement from any node in the cluster, with the exception of follower reads. Inspecting the command below, we see that range 45 stores all rows from user1 to user329235, ordered by the primary key ycsb_key, along with all the columns in each row. It has 2 replicas in “us-east1”, 2 in “us-central1”, and 1 in “us-east4”. A read of key user32 will be routed to node 10 in the us-east1 Google region, and a consensus operation will read or write data through that leaseholder.
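The inspection above maps to these statements (a sketch; the exact output columns vary across CockroachDB versions):

```sql
-- List all ranges, replicas, and leaseholders for the table.
SHOW RANGES FROM TABLE usertable;

-- Find the range and leaseholder for a specific key.
SHOW RANGE FROM TABLE usertable FOR ROW ('user32');
```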

Now, let’s look at running Workloads A,B,C in the primary region.

Workload A represents a 50/50 single-key read/write mix
Workload B represents a 95/5 single-key read/write mix
Workload C represents 100% single-key reads
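Assuming the same JDBC properties used for the load, the three workloads can be run back to back like so (host and thread count are placeholders):

```shell
# Run workloads A, B, and C in sequence against the cluster.
for w in workloada workloadb workloadc; do
  bin/ycsb run jdbc -s -P "workloads/$w" \
    -p db.driver=org.postgresql.Driver \
    -p db.url="jdbc:postgresql://<host>:26257/ycsb" \
    -p db.user=ycsb -p db.passwd="" \
    -p recordcount=1000000 -p operationcount=1000000 \
    -p threadcount=64
done
```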

The following matrix represents running a client in all 3 regions with the ycsb primary region in us-east1.

And the following matrix represents latency from clients reading and writing data within their home region while one region is down. Notice that updates in “us-east1” have regressed from 14ms to 36ms, as the closest replica region is down and the next available region is 34ms RTT away.

This completes the first pass of Multi-Region Abstractions with YCSB. Up next are the following:

Regional Survivability with optimized client access and REGIONAL BY ROW (RBR)
for Workloads A, B, C
for Workloads F, D, E


