On Distributed Consistency — Part 1

Mar 26

For distributed databases, consistency models are a topic of huge importance. We’d like to delve a bit deeper into this topic with a series of articles, discussing subjects such as which model is right for a particular use case. Please jump in and help us in the comments.

We’ll start here with a basic introduction to the subject.


The CAP theorem states that one can only have two of consistency, availability, and tolerance to network partitions at the same time. In distributed systems, network partitioning is inevitable and must be tolerated, so essentially CAP means that we cannot have both consistency and 100% availability.

Informally, I would summarize the CAP theorem as:

  • If the network is broken, your database won’t work.

However, we do get to pick the definition of “won’t work”.  It can either mean down (unavailable) or inconsistent (stale data).

More precisely, what do we mean by “consistency”? The academic work in this area refers to “one-copy serializability” or “linearizability”: if a series of operations or transactions is performed, they are applied in a consistent order. One less formal way of thinking about the trade-off is “Could I be reading and manipulating stale/dirty data? Can I always write?”
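To make the stale-read question concrete, here is a toy sketch (the class names and log-based replication scheme are illustrative assumptions, not any particular product’s design) of a primary with one asynchronous replica. The replica lags behind the primary’s operation log, so a read from it can return old data:

```python
class Primary:
    """Toy single-copy store; replicas apply its operation log asynchronously."""
    def __init__(self):
        self.data = {}
        self.log = []          # ordered list of (key, value) writes

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))


class AsyncReplica:
    """Applies the primary's log in order, but only when replicate() runs."""
    def __init__(self, primary):
        self.primary = primary
        self.data = {}
        self.applied = 0       # how many log entries we have applied so far

    def replicate(self, n=1):
        # Apply the next n log entries; anything beyond that is not yet visible.
        for key, value in self.primary.log[self.applied:self.applied + n]:
            self.data[key] = value
        self.applied = min(self.applied + n, len(self.primary.log))

    def read(self, key):
        return self.data.get(key)   # may be stale or missing


p = Primary()
r = AsyncReplica(p)
p.write("x", 1)
p.write("x", 2)
r.replicate(1)         # replica has only caught up to the first write
r.read("x")            # returns 1: stale, though the primary already holds 2
```

Because both writes happened in a fixed order on the primary, the replica is *eventually* consistent: once it applies the rest of the log, it converges to the primary’s value.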


We have two classes of architectures: a C class (strongly consistent) and an A class (higher availability, looser consistency). Let’s consider some real-world distributed systems and where they are classified.

Amazon Dynamo is a distributed data store which implements consistent hashing and is in the A camp. It provides eventual consistency. One may read old data.

CouchDB is typically used with asynchronous master-master replication and is in the A camp. It provides eventual consistency.

A MongoDB auto-sharding+replication cluster has a single master server for each shard at any given point in time. This is in the C camp. Traditional RDBMS systems are also strongly consistent (as typically used) - a synchronous RDBMS cluster, for example.

It’s worth noting that alternate configurations of these products sometimes alter their consistency (and performance) properties. For our discussion here, we’ll assume these products are configured in their common case setup, unless otherwise specified.

Write Availability, not Read Availability, is the Main Question

With most databases today, it’s easy to have any number of asynchronous slave replicas distributed around the world. If the network partitions, we still have access to the local slave’s data. As the replication is asynchronous, this data is eventually consistent, so this result is not surprising — we are now in the A class of systems. However, almost all designs, even from the C class, can add on asynchronous read capabilities easily. Thus, the critical design decisions are around write availability.
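The asymmetry above can be sketched with a toy partitioned cluster (the names and the simple reachability flag are hypothetical, not any product’s API): reads are served from a local asynchronous replica and stay available during a partition, while writes require the master and fail:

```python
class PartitionedCluster:
    """Toy model: writes need the master; reads come from a local async replica."""

    def __init__(self):
        self.master_reachable = True
        self.master = {}           # authoritative copy
        self.local_replica = {}    # asynchronously copied, may lag behind

    def write(self, key, value):
        if not self.master_reachable:
            raise ConnectionError("partitioned from master: write unavailable")
        self.master[key] = value

    def read(self, key):
        # Always answered locally, even during a partition -- but possibly stale.
        return self.local_replica.get(key)


c = PartitionedCluster()
c.write("x", 1)
c.master_reachable = False   # a network partition occurs
c.read("x")                  # still answers (None here: the replica lagged)
try:
    c.write("x", 2)
except ConnectionError:
    pass                     # writes are the hard part
```

Read availability comes almost for free from asynchronous replication; it is write availability during a partition that forces the real C-versus-A design choice.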

The Trade-offs

  • even load distribution is easier in eventually consistent systems
  • multi-data center support is easier in eventually consistent systems
  • some problems are not solvable with eventually consistent systems
  • code is sometimes simpler to write in strongly consistent systems

We will discuss these pros and cons in more detail in subsequent articles.