Peter Nehrer

The Undistributed System

One common yet surprisingly stubborn problem in many evolving software products is the naive assumption that things happen in sequence. This assumption usually stems from a noble intent: don't over-architect or over-engineer the solution to what looks like a simple problem at the outset. The simplification is often deliberate, made to keep the problem manageable. After all, most systems aren't complex right from the start!

When a system is initially designed without considering the complexities of distributed computing, it often assumes a single, centralized state management approach. This design assumes consistent, instantaneous access to data, neglects the latency in communication between different parts of the system, and overlooks the possibility of node failures. Such assumptions are viable in a monolithic setup but become problematic when transitioning to a distributed architecture.
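To make the assumption concrete, here is a minimal Go sketch (the names are made up for illustration) of the kind of in-process state management such a design takes for granted: one map behind one lock, where every reader instantly sees every write and the only failure mode is a bug.

```go
package main

import (
	"fmt"
	"sync"
)

// Store is the kind of "global state" a single-process design relies on:
// one map, one lock, and the implicit assumption that every reader sees
// every write immediately and the process never crashes mid-update.
type Store struct {
	mu   sync.Mutex
	data map[string]string
}

func NewStore() *Store {
	return &Store{data: make(map[string]string)}
}

func (s *Store) Put(key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = value
}

func (s *Store) Get(key string) (string, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.data[key]
	return v, ok
}

func main() {
	s := NewStore()
	s.Put("order:42", "confirmed")
	// In one process this read is trivially consistent; split Put and Get
	// across two nodes and none of these guarantees survive.
	if v, ok := s.Get("order:42"); ok {
		fmt.Println("status:", v)
	}
}
```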

This conundrum isn't specific to globally distributed databases; it can occur in your single-zone Kubernetes cluster, or even a single-die, multi-core processor!

Time-based causality relies on the assumption that events can be ordered based on timestamps, ostensibly providing a straightforward way to determine the sequence of events in a distributed system. However, this approach is fraught with challenges.
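A small Go sketch (hypothetical event names, exaggerated skew) shows how quickly this breaks down: when one node's clock runs ahead of another's, sorting by wall-clock timestamps can put an effect before its cause. This is the reason techniques such as Lamport clocks order events by causality rather than by wall-clock time.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// Event is stamped with the local wall clock of the node that produced it.
type Event struct {
	Node string
	Name string
	At   time.Time
}

func main() {
	base := time.Now()

	// Node A's clock runs 150ms ahead of node B's.
	skew := 150 * time.Millisecond

	// Causally, "request sent" on A happens BEFORE "request handled" on B,
	// but B stamps its event with its own (slower) clock.
	events := []Event{
		{Node: "A", Name: "request sent", At: base.Add(skew)},
		{Node: "B", Name: "request handled", At: base.Add(50 * time.Millisecond)},
	}

	// Ordering by timestamp now inverts cause and effect.
	sort.Slice(events, func(i, j int) bool { return events[i].At.Before(events[j].At) })
	for _, e := range events {
		fmt.Printf("%s: %s\n", e.Node, e.Name)
	}
	// Prints "request handled" before "request sent".
}
```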

The Challenges

For one, achieving data consistency across distributed nodes is a significant challenge. In a naive system, consistency is often taken for granted, assuming immediate visibility of all changes. However, in a distributed setting, data may be replicated across nodes, leading to potential inconsistencies due to network delays or partitions. Retrofitting consistency mechanisms such as distributed consensus protocols is complex and requires fundamental changes to how data is accessed and updated.
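For illustration, here's a minimal Go sketch (with made-up names and timings) of asynchronous replication: a write lands on the primary, is shipped to a follower with some lag, and a read routed to the follower in the meantime returns stale data.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// replica applies writes it receives on a channel after an artificial delay,
// standing in for asynchronous replication over a network.
type replica struct {
	mu      sync.Mutex
	data    map[string]string
	updates chan [2]string
}

func newReplica(lag time.Duration) *replica {
	r := &replica{data: map[string]string{}, updates: make(chan [2]string, 16)}
	go func() {
		for kv := range r.updates {
			time.Sleep(lag) // simulated replication lag
			r.mu.Lock()
			r.data[kv[0]] = kv[1]
			r.mu.Unlock()
		}
	}()
	return r
}

func (r *replica) get(key string) string {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.data[key]
}

func main() {
	primary := map[string]string{}
	follower := newReplica(100 * time.Millisecond)

	// The write hits the primary and is shipped to the follower asynchronously.
	primary["user:1"] = "verified"
	follower.updates <- [2]string{"user:1", "verified"}

	// A read routed to the follower immediately afterwards sees stale data.
	fmt.Printf("follower sees: %q\n", follower.get("user:1")) // ""

	time.Sleep(200 * time.Millisecond)
	fmt.Printf("follower eventually sees: %q\n", follower.get("user:1")) // "verified"
}
```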

A non-distributed system may not have robust mechanisms for handling partial failures, as the entire system typically runs in a single process or on a single machine. Distributed systems, by contrast, must be designed to tolerate failures of individual components without compromising overall system availability. Retrofitting fault-tolerance mechanisms, such as leader election, replication, and data recovery protocols, into a system not originally designed for them can be exceedingly challenging.
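As a rough illustration, the sketch below (Go, hypothetical names and timings) implements the simplest possible failure detector: each peer is expected to heartbeat periodically, and a peer that stays silent past a timeout is marked as suspect. Real systems build leader election, replication, and recovery on top of signals like this, and must also cope with the detector being wrong.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// detector tracks the last heartbeat seen from each peer and declares a peer
// suspect once it has been silent longer than the timeout.
type detector struct {
	mu      sync.Mutex
	last    map[string]time.Time
	timeout time.Duration
}

func newDetector(timeout time.Duration) *detector {
	return &detector{last: map[string]time.Time{}, timeout: timeout}
}

func (d *detector) heartbeat(peer string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.last[peer] = time.Now()
}

func (d *detector) suspects() []string {
	d.mu.Lock()
	defer d.mu.Unlock()
	var out []string
	for peer, seen := range d.last {
		if time.Since(seen) > d.timeout {
			out = append(out, peer)
		}
	}
	return out
}

func main() {
	d := newDetector(150 * time.Millisecond)
	d.heartbeat("node-a")
	d.heartbeat("node-b")

	// node-b keeps sending heartbeats; node-a goes silent.
	go func() {
		for i := 0; i < 5; i++ {
			time.Sleep(50 * time.Millisecond)
			d.heartbeat("node-b")
		}
	}()

	time.Sleep(300 * time.Millisecond)
	fmt.Println("suspected failed:", d.suspects()) // [node-a]
}
```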

A naive system design often overlooks the impact of network latency and the possibility of network partitions. In distributed systems, these factors can significantly affect performance and consistency. Designing for distributed state use-cases requires careful consideration of these network issues, including strategies for dealing with partitioned networks and ensuring that the system can still function under such conditions.
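The practical consequence is that what used to be a plain function call now needs a deadline and a retry policy. The Go sketch below (the callPeer function and its timings are stand-ins, not a real API) bounds each attempt with a context timeout and backs off between retries; when the peer stays unreachable, the caller has to decide what degraded behavior looks like.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// callPeer stands in for any RPC to another node; here it simply blocks
// longer than the caller is willing to wait, as a partitioned peer would.
func callPeer(ctx context.Context) (string, error) {
	select {
	case <-time.After(500 * time.Millisecond):
		return "ok", nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

// withRetry bounds each attempt with a deadline and backs off between tries,
// instead of assuming the "function call" always returns promptly.
func withRetry(attempts int, perCall time.Duration) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), perCall)
		res, err := callPeer(ctx)
		cancel()
		if err == nil {
			return res, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * 50 * time.Millisecond) // simple backoff
	}
	return "", errors.New("peer unreachable: " + lastErr.Error())
}

func main() {
	// Every attempt times out; the caller must decide what "degraded" means.
	res, err := withRetry(3, 100*time.Millisecond)
	fmt.Println(res, err)
}
```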

Potential Mitigations

Conflict-Free Replicated Data Types (CRDTs) are data structures designed to ensure eventual consistency across distributed systems without the need for centralized coordination. They allow concurrent updates and resolve conflicts in a deterministic manner, ensuring convergence. However, CRDTs come with their trade-offs, including increased complexity in understanding and implementing these data structures, as well as the overhead of resolving conflicts, which can impact performance.
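One of the simplest CRDTs is the grow-only counter (G-Counter): each replica increments only its own entry, and merging takes the element-wise maximum, so replicas converge to the same value no matter how often or in what order they exchange state. A minimal Go sketch:

```go
package main

import "fmt"

// GCounter is a grow-only counter CRDT: each node only increments its own
// entry, and merging two replicas takes the element-wise maximum, so the
// result converges regardless of how or when states are exchanged.
type GCounter struct {
	counts map[string]int
}

func NewGCounter() *GCounter { return &GCounter{counts: map[string]int{}} }

func (g *GCounter) Increment(node string) { g.counts[node]++ }

func (g *GCounter) Merge(other *GCounter) {
	for node, n := range other.counts {
		if n > g.counts[node] {
			g.counts[node] = n
		}
	}
}

func (g *GCounter) Value() int {
	total := 0
	for _, n := range g.counts {
		total += n
	}
	return total
}

func main() {
	a, b := NewGCounter(), NewGCounter()

	// Concurrent increments on two replicas while they are partitioned.
	a.Increment("a")
	a.Increment("a")
	b.Increment("b")

	// Merging in either direction (any number of times) converges.
	a.Merge(b)
	b.Merge(a)
	fmt.Println(a.Value(), b.Value()) // 3 3
}
```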

Distributed Consensus Protocols, such as Paxos or Raft, facilitate agreement among distributed processes on a single data value, ensuring consistency and fault tolerance. However, they incur costs in terms of latency due to the need for multiple rounds of communication between nodes to achieve consensus, and they also require a majority of nodes to be operational, limiting scalability in large, highly distributed environments.
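Stripped of leader election, terms, and logs, the arithmetic that drives that cost is the quorum rule: a value counts as committed only once a strict majority of nodes has acknowledged it, which is why every write pays a network round trip and why progress stops when a majority is unreachable. A tiny Go sketch of just that rule:

```go
package main

import "fmt"

// committed reports whether a proposal can be considered committed: it needs
// acknowledgements from a strict majority of the cluster. This is the quorum
// rule underlying Raft and Paxos, with everything else stripped away.
func committed(acks, clusterSize int) bool {
	return acks >= clusterSize/2+1
}

func main() {
	const clusterSize = 5

	fmt.Println(committed(3, clusterSize)) // true: 3 of 5 is a majority
	fmt.Println(committed(2, clusterSize)) // false: the write stalls

	// With only 2 of 5 nodes reachable, no majority exists, so no new value
	// can be committed; that is the price of strong consistency under partition.
	reachable := 2
	fmt.Println("can make progress:", committed(reachable, clusterSize))
}
```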

Navigating the Trade-Offs

While CRDTs offer a way to sidestep the issues of time-based causality by enabling eventual consistency and conflict resolution without central coordination, they may not be suitable for applications requiring immediate consistency across all nodes. Distributed consensus protocols, on the other hand, provide strong consistency guarantees but at the expense of increased communication overhead and potential bottlenecks in decision-making processes.

TL;DR

Transitioning from a naive to a distributed system design is not merely an extension but a paradigm shift. It requires rethinking and often redesigning core components to handle the realities of distributed computing. Early consideration of distributed systems principles can mitigate these challenges, allowing for more seamless scaling and adaptation as requirements evolve. The process of retrofitting a system for distributed use-cases serves as a potent reminder of the complexity of distributed systems and the importance of incorporating distributed systems thinking from the outset of system design.
