
Szymon Gibała

Originally published at sgibala.com

Implementing Practical Byzantine Fault Tolerance

Part 1: The Foundation

Practical Byzantine Fault Tolerance is one of the seminal papers in the world
of Distributed Systems. It is one of the first practical solutions proposed for the Byzantine fault problem (also known as the Byzantine Generals Problem).

NOTE: To keep this post focused, instead of describing everything from the ground up, I will assume readers are familiar with some of these base concepts, or are comfortable moving on without knowing them.
If you are not familiar with some or all of them, do not worry, everyone starts somewhere! If you like reading papers, I can send you straight to the source. If you prefer more casual articles, there is plenty out there too, with some good videos on top. Or... you can just ask ChatGPT to explain it.

This paper is fundamental to the world of Distributed Systems, especially once we venture beyond the walls of trusted systems running in our own data center to things like blockchain, where nodes are operated by different parties, and many later algorithms are built on top of it. I reckoned that understanding it well would help me a lot with understanding the others. And what better way to understand something than to implement it?

In this post, I will lay some groundwork for understanding the protocol, make some high-level design choices to make implementation fairly simple, and try to model basic data structures.

Since this is a learning project, do not expect optimal design, handling all the edge cases, or production-grade code. The goal is to implement a complete protocol for a more or less happy path.

If you are already familiar with pBFT you can probably quickly skim this part. Otherwise, I strongly encourage you to look at the paper, follow along, and read up on any details that I omit.

Base Protocol

A good start to understanding pBFT is the base case -- achieving consensus on a single value. Let's ignore unresponsive leaders and view changes just for now, and try to understand the core.

NOTE: Following the paper, we will mostly be looking at sections "4.1 The Client" and "4.2 Normal-Case Operation". We will also peek at the basics as they unfold.

The paper features a simple diagram that pretty much illustrates the whole protocol for a single message (see "Figure 1: Normal Case Operation"); here is my recreation of it with a bit more detail:
Normal Case Operation

To understand it, however, we need some background knowledge about how pBFT works, what the constraints are, and so on:

  • pBFT assumes that at a given time (or in each view) there is exactly one replica that acts as a leader.
  • Since the whole purpose of pBFT is to achieve consensus in a distributed system while tolerating Byzantine faults, it can do so as long as at most (n - 1) / 3 (less than one third) of the replicas are misbehaving. In other words, if we want to tolerate f faulty replicas, we need at least 2f + 1 honest ones, for a total of at least 3f + 1 (a small sketch of this arithmetic follows after this list).

    NOTE: From Section 3 of the paper "Service Properties":
    "The algorithm provides both safety and liveness assuming no more than ((n-1) / 3) replicas are faulty. Safety means that the replicated service satisfies linearizability [...]: it behaves like a centralized implementation that executes operations atomically one at a time. Safety requires the bound on the number of faulty replicas because a faulty replica can behave arbitrarily, e.g., it can destroy its state."

  • From the above formula we can see that we need at least 4 replicas in a pBFT system to tolerate 1 dishonest one. For non-Byzantine fault-tolerant consensus protocols such as Raft or Paxos the minimum number of replicas is usually 3, and the count is typically odd so that the replicas cannot be split into two equal groups. Since in basic pBFT a leader is predetermined for each view, there is no need for an election, hence we do not need a majority for it.
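
To make the fault-tolerance arithmetic concrete, here is a tiny, self-contained sketch (the function names are mine, not from the paper or the repository):

/// Maximum number of faulty replicas a cluster of `n` can tolerate: f = (n - 1) / 3.
fn max_faulty(n: u64) -> u64 {
    n.saturating_sub(1) / 3
}

/// Quorum size used throughout the protocol: 2f + 1.
fn quorum_size(n: u64) -> u64 {
    2 * max_faulty(n) + 1
}

fn main() {
    assert_eq!(max_faulty(4), 1); // 4 replicas tolerate 1 faulty one...
    assert_eq!(quorum_size(4), 3); // ...and need 3 matching messages for a quorum
    assert_eq!(max_faulty(7), 2);
    assert_eq!(quorum_size(7), 5);
}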

We have dipped our toes into the theory enough to establish some ground rules. Now let's come back to our diagram, look at each "Phase" in a bit more detail, and start writing some code.

NOTE: This and the following posts will include some snippets of code, but they are far from the full implementation, so trying to copy-paste from them is rather pointless. For the full code base refer to the repository.

As many great adventures through layers of code begin with someone wanting to do something, let's start this one the same way: with the client request.

Client Request

As we see on the diagram, the client sends the request to a single replica. Ideally that replica is the leader, but if it is not, the backup node should forward the request to the leader.

Leader Election

In pBFT there is no leader election such as in non-BFT protocols like Raft or Paxos. Instead, replicas become the leader in a predetermined order based on their ID and the current view. The leader remains in charge for the whole duration of the view, in other words: as long as the protocol progresses (requested operations are being executed).
The simplest algorithm for selecting the leader would be: leader_id = view_number % number_of_replicas.
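
In code this boils down to a one-liner. A minimal sketch (the function name is mine):

/// Deterministic leader selection: the leader of a view is derived purely
/// from the view number, so no election round is needed.
fn leader_for_view(view_number: u64, number_of_replicas: u64) -> u64 {
    view_number % number_of_replicas
}

fn main() {
    // With 4 replicas leadership rotates 0, 1, 2, 3, 0, ...
    assert_eq!(leader_for_view(0, 4), 0);
    assert_eq!(leader_for_view(5, 4), 1);
}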

After the client request successfully reaches a replica, the client's watch begins. For the client to consider the request executed, it expects to receive matching responses directly from at least 2f + 1 different replicas (the last section of the diagram: reply).

If the client gets tired of waiting and smells that something might be fishy, it will send the request again, this time to all of the replicas.

NOTE: This part becomes important later for triggering view changes...

You might have already spotted a common problem that may occur here: multiple executions of the same request. Fear not, however, as the solution is rather simple. The client simply includes a sequence number (or timestamp) with each request, which, together with a client ID, uniquely identifies the request and thus prevents multiple executions.

To keep things more straightforward, my implementation uses a request UUID to identify requests.

With all that I can propose an initial representation of the ClientRequest. The service behind pBFT will be a simple key-value store, but I will at least try to separate the consensus layer from the application just a bit.

NOTE: To spare screen real estate code in this article will be stripped to the minimum required to get the main points across. I omit things like attributes, helper methods, boilerplate, and other details. Again, the complete code can be found in the GitHub repository.

pub struct ClientRequestId(pub uuid::Uuid);

pub struct ClientRequest {
    pub request_id: ClientRequestId,
    pub operation: Operation,
}

For the key-value store, operations can be defined as a simple enum:

pub enum Operation {
    Noop, // Used by the protocol for NULL messages
    Set { key: String, value: String },
    Get { key: String },
}

The last thing we need here is a response that the client receives from honest replicas that processed the request:

pub struct ClientResponse {
    pub request: ClientRequest,
    pub result: OperationResultSequenced,
}

pub struct OperationResultSequenced {
    pub result: OperationResult,
    /// Sequence number of the operation assigned by the pbft leader when
    /// initially accepting the operation
    pub sequence_number: u64,
}

pub enum OperationResult {
    Noop,
    Set { key: String, value: String },
    Get { key: String, value: Option<String> },
}

The response consists of:

  • Operation result.
  • Sequence number (or simply sequence) - assigned as part of processing the request through pBFT; it determines the exact order of operation execution.
  • Client request - it could include just the request ID, but for this implementation, it might as well be included as a whole.
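
Since OperationResult mirrors Operation one-to-one, the application behind the consensus layer can stay tiny. As an aside, here is a minimal sketch (not the repository's exact code) of a HashMap-backed store applying an Operation, building on the types defined above:

use std::collections::HashMap;

pub struct KeyValueStore {
    data: HashMap<String, String>,
}

impl KeyValueStore {
    /// Apply a single agreed-upon operation to the local state.
    pub fn apply(&mut self, operation: &Operation) -> OperationResult {
        match operation {
            Operation::Noop => OperationResult::Noop,
            Operation::Set { key, value } => {
                self.data.insert(key.clone(), value.clone());
                OperationResult::Set { key: key.clone(), value: value.clone() }
            }
            Operation::Get { key } => OperationResult::Get {
                key: key.clone(),
                value: self.data.get(key).cloned(),
            },
        }
    }
}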

The client also has to identify which replica sent the response and verify its authenticity, which I am going to cover later (in part 2) together with message signatures and the API implementation.

Let's wrap up this part by modeling protocol messages.

pBFT Consensus

Coming back to the diagram, we can see that the core of the protocol is composed of three phases:

  • Pre-Prepare
  • Prepare
  • Commit

Let's zoom in on those one by one.

Pre-Prepare

When the leader receives a client request, it initiates consensus for the operation. This process consists of:

  1. Computing the digest of the client request.
  2. Assigning a sequence number to it.
  3. Storing the request, digest, and newly composed PrePrepare message.
  4. Broadcasting PrePrepare and the client request to all replicas.

ProtocolMessage can be represented as an enum with all its possible variants:

pub enum ProtocolMessage {
    PrePrepare(PrePrepare),
    Prepare(Prepare),
    Commit(Commit),
    Checkpoint(Checkpoint),
    ViewChange(ViewChange),
    NewView(NewView),
}

The PrePrepare includes metadata about the associated request:

// Simple md5 digest
pub struct MessageDigest(pub [u8; 16]);

pub struct MessageMeta {
    pub view: u64,
    pub sequence: u64,
    pub digest: MessageDigest,
}

pub struct PrePrepare {
    pub metadata: MessageMeta,
}

As mentioned in the paper, PrePrepare does not include the client request itself to keep it smaller. This is important later on during view changes when a new leader might need to broadcast a whole bunch of PrePrepares in a new view for previously sent requests.

The authors also mention the possibility of using separate transmission protocols for the smaller pBFT messages and the bigger client requests, indicating that those can be sent completely separately -- we are not going that far.

Since PrePrepare and the client request are initially easier to process together, they are going to be broadcast together as well:

pub struct ClientRequestBroadcast {
    pub request: ClientRequest,
    pub sequence_number: u64,
    pub pre_prepare: PrePrepare,
}
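
Putting the four leader-side steps together, the pre-prepare handling might look roughly like the sketch below. The Replica struct, its log, compute_digest, and next_sequence_number are hypothetical placeholders of mine, not the repository's exact API:

impl Replica {
    /// Leader side: assign a sequence number to the request, compose the
    /// PrePrepare, store both, and return the broadcast payload for all replicas.
    fn on_client_request(&mut self, request: ClientRequest) -> ClientRequestBroadcast {
        // Hypothetical helper: md5 digest of the serialized request.
        let digest = compute_digest(&request);
        // Hypothetical helper: monotonically increasing sequence counter.
        let sequence = self.next_sequence_number();

        let pre_prepare = PrePrepare {
            metadata: MessageMeta {
                view: self.current_view,
                sequence,
                digest,
            },
        };

        // Store the request together with the PrePrepare before broadcasting.
        self.log.insert_pre_prepare(pre_prepare.clone(), request.clone());

        ClientRequestBroadcast {
            request,
            sequence_number: sequence,
            pre_prepare,
        }
    }
}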

With this done, prepare yourself for the next phase...

Prepare

Upon receiving the PrePrepare, a backup performs some validation (sketched in code right after this list), such as:

  • Verify signature and message digest.
  • Check that the message is in the current view.
  • Ensure it did not previously accept a different PrePrepare for the same sequence and view.
  • Check that the sequence number is between high and low watermarks.
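
In code those checks could boil down to something like the sketch below (the log lookup, watermark fields, and compute_digest are hypothetical placeholders; signatures are left out until part 2):

impl Replica {
    /// Backup side: decide whether a received PrePrepare can be accepted.
    fn validate_pre_prepare(&self, pre_prepare: &PrePrepare, request: &ClientRequest) -> bool {
        let meta = &pre_prepare.metadata;

        // The digest must actually match the client request it arrived with.
        if compute_digest(request) != meta.digest {
            return false;
        }
        // The message must belong to the view we are currently in.
        if meta.view != self.current_view {
            return false;
        }
        // We must not have accepted a different PrePrepare for this view and sequence.
        if self.log.has_conflicting_pre_prepare(meta.view, meta.sequence, &meta.digest) {
            return false;
        }
        // The sequence number must fall between the low and high watermarks.
        meta.sequence > self.low_watermark && meta.sequence <= self.high_watermark
    }
}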

If it all checks out, the replica decides to accept the message and enters the prepare phase (a sketch of this step follows the Prepare definition below) by:

  • Storing received PrePrepare and the client request.
  • Creating and storing its own Prepare message for the same request.
  • Broadcasting the Prepare to all replicas.

The Prepare message will include the same metadata as PrePrepare, with a single addition: replica_id. This is because we only expect one PrePrepare per consensus round, coming from the leader, but each replica is going to create and send its own Prepare:

pub struct Prepare {
    pub replica_id: u64,
    pub metadata: MessageMeta,
}
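
A sketch of the acceptance step, reusing the same hypothetical Replica and log as before:

impl Replica {
    /// Backup side: accept a validated PrePrepare, store it with the request,
    /// compose this replica's own Prepare, and return it for broadcast.
    fn accept_pre_prepare(&mut self, broadcast: ClientRequestBroadcast) -> Prepare {
        self.log.insert_pre_prepare(broadcast.pre_prepare.clone(), broadcast.request);

        let prepare = Prepare {
            replica_id: self.id,
            metadata: broadcast.pre_prepare.metadata,
        };
        self.log.insert_prepare(prepare.clone());

        prepare // broadcast to all replicas
    }
}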

All (or at least most) replicas will be doing this at more or less the same time -- bombarding each other with Prepare messages. A replica is considered prepared for a given message when its log contains:

  • Client request.
  • PrePrepare message for a given request.
  • 2f + 1 Prepare messages from different replicas (including itself) matching the PrePrepare.

NOTE: In the paper we see this stated more precisely as: prepared(m, v, n, i), where m is the message, v is the view, n is the sequence number, and i is the replica ID.

Whether different protocol messages are "matching" is determined by the metadata - view, sequence number, and digest - in other words, by whether they are associated with the same client request.
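
A sketch of the prepared check, assuming the replica keeps accepted messages in a simple in-memory log (the MessageLog type and its fields are placeholders of mine, and the metadata comparison assumes the equality derives that the stripped snippets omit):

use std::collections::HashSet;

impl MessageLog {
    /// `prepared` holds for (view, sequence, digest) once the log contains the
    /// client request, the matching PrePrepare, and a quorum (2f + 1) of
    /// matching Prepares from different replicas.
    fn is_prepared(&self, meta: &MessageMeta, quorum: usize) -> bool {
        let has_request = self.request_for_digest(&meta.digest).is_some();
        let has_pre_prepare = self.pre_prepares.iter().any(|pp| pp.metadata == *meta);
        let prepare_senders: HashSet<u64> = self
            .prepares
            .iter()
            .filter(|p| p.metadata == *meta)
            .map(|p| p.replica_id)
            .collect();

        has_request && has_pre_prepare && prepare_senders.len() >= quorum
    }
}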

This process ensures that the majority of honest replicas will agree on the same message order (determined by sequence number) in a given view. Or to put it simply: agree on the same message for a given combination of view and sequence number.

NOTE: sequence numbers increase independently of the view, meaning that if we achieved consensus for sequences 1-10 in view 1 and transition to view 2, the sequence numbers will not reset but will continue to grow, e.g. 11, 12, 13, etc.

When the replica is prepared, it is time to move to the commit phase.

Commit

When the replica reaches the prepared state for a given message, it again broadcasts a message to all other nodes; this time it is a Commit.

The process here is very similar to the prepare phase: replicas validate received messages and insert them into the log.

Similarly to the prepared condition of the previous phase, in this one we define a committed-local property, which is true for a replica if it is prepared for the message and has received 2f + 1 matching Commit messages from different replicas.

Given all the similarities, the Commit message consists of exactly the same fields as Prepare:

pub struct Commit {
    pub replica_id: u64,
    pub metadata: MessageMeta,
}

There is another, cluster-wide property described by the paper - committed(m, v, n) - which is true when at least f + 1 non-faulty replicas have reached the prepared state for a given message (this essentially amounts to 2f + 1 replicas in total, given the assumption that at most f of them are faulty). The commit phase ensures that if at least one non-faulty replica is committed-local(m, v, n, i), then committed(m, v, n) is true. In other words, if one honest replica reaches the committed-local state for a message, the message can be considered set in stone.
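
The local check mirrors the prepared one, just counting Commit messages on top of it. A sketch, using the same hypothetical log:

impl MessageLog {
    /// `committed-local` holds once the replica is prepared for the message and
    /// additionally has a quorum (2f + 1) of matching Commits from different replicas.
    fn is_committed_local(&self, meta: &MessageMeta, quorum: usize) -> bool {
        let commit_senders: std::collections::HashSet<u64> = self
            .commits
            .iter()
            .filter(|c| c.metadata == *meta)
            .map(|c| c.replica_id)
            .collect();

        self.is_prepared(meta, quorum) && commit_senders.len() >= quorum
    }
}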

As a result, as soon as committed-local is true for a given message, the replica is ready to execute it (apply the operation to its state). After successful execution, the replica sends the reply to the client, thereby finishing the flow represented in the diagram.

The client can be certain that the operation went through if it received 2f + 1 replies from different replicas.

There is still a gotcha to watch out for. Depending on message delivery order, replicas can commit requests out of order (e.g. one with sequence = 5 before the one with sequence = 3) and therefore reach committed-local(..., n+1) before committed-local(..., n). The whole point of the protocol is to agree on ordering, hence replicas cannot execute operations out of order.
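
One simple way to handle this is to buffer committed requests and only execute the one that is next in line. A sketch, where committed_requests, last_executed, store, and reply_to_client are hypothetical fields and helpers of the replica:

impl Replica {
    /// Execute committed operations strictly in sequence order, buffering any
    /// that committed ahead of their turn.
    fn try_execute(&mut self) {
        // Keep going as long as the next expected sequence number has been committed.
        while let Some(request) = self.committed_requests.remove(&(self.last_executed + 1)) {
            let result = self.store.apply(&request.operation);
            self.last_executed += 1;
            self.reply_to_client(request, result, self.last_executed);
        }
    }
}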

The paper assumes partial synchrony - that each message from an honest replica will get delivered eventually - so with this assumption we should not get stuck on any message indefinitely as long as 2f + 1 replicas are honest (and operating).

Summary

This wraps up a very basic overview of the protocol's base case. There are more details to it, but if you are interested in a really deep dive, the paper is your go-to. Hopefully, this primer gives you some good ground to get through it.

This, however, is not the end. In the following parts I am going to give a high-level overview of the implementation of both pBFT and the key-value store on top of it, and look into the remaining parts of the protocol: checkpoints and view changes.

Link to the code repository.
