Ujjwal Raj

Understanding Server Connections in Distributed Systems

What happens when you open google.com in your web browser?

In a distributed system, every system is a node. Your web browser is one node, while google.com is deployed on a server, possibly in the US, acting as another node. When you attempt to connect, you do so via a secure connection to the Google server. The entire web is essentially a network of interconnected nodes. These nodes are identified using IP addresses. For instance, the IPv6 protocol provides each node with a 128-bit address (allowing for 2^128 unique addresses on the web). Multiple networks, or webs, can also interconnect with one another.
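To see this addressing in practice, here is a small Python sketch (the hostname and port are just examples) that asks the operating system's resolver which IPv4 and IPv6 addresses google.com maps to:

```python
import socket

# Resolve google.com to the IP addresses of the nodes that serve it.
# The same hostname can map to several IPv4 and IPv6 addresses.
for family, _, _, _, sockaddr in socket.getaddrinfo("google.com", 443):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```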

Within a web, connections between nodes are managed by the Border Gateway Protocol (BGP). When your IP address makes a request to connect to google.com, each router along the path consults a local routing table maintained via BGP. This table maps destination prefixes to the next-hop address. Using these routing tables, your request travels across the network and reaches Google's server, and Google's response follows a route back to your browser.
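The snippet below is only a toy illustration of that lookup, not real BGP; the prefixes and next-hop names are made up. It shows the longest-prefix-match idea a router applies when deciding where to forward your request:

```python
import ipaddress

# A toy routing table: destination prefix -> next-hop router.
# Real BGP tables hold hundreds of thousands of prefixes; this is only a sketch.
routing_table = {
    "142.250.0.0/15": "router-a.example",   # hypothetical prefix for Google
    "0.0.0.0/0":      "upstream.example",   # default route
}

def next_hop(destination: str) -> str:
    """Pick the longest matching prefix, as a router would."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1]

print(next_hop("142.250.72.46"))  # -> router-a.example
```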


Connection between nodes

In a distributed system, nodes communicate with one another through connections. To ensure reliable communication, the Transmission Control Protocol (TCP) operates on top of the IP protocol. For example, when you visit instagram.com, your browser connects to Instagram's server. The data is not sent all at once. Instead, TCP breaks the data stream into segments that are sent sequentially. Each segment (carried in an IP packet) has a sequence number so the data arrives complete and in order. When a segment is received, it is acknowledged back to the sender. If something goes wrong, the segment is retransmitted.
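A minimal Python TCP client might look like the sketch below (example.com and the raw HTTP request are just for illustration). The application writes one byte stream; the operating system's TCP stack handles segmentation, numbering, acknowledgments, and retransmission underneath:

```python
import socket

# A minimal TCP client. The OS's TCP stack splits this byte stream into
# numbered segments, retransmits lost ones, and reassembles them in order;
# the application only ever sees a continuous stream of bytes.
with socket.create_connection(("example.com", 80)) as conn:
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := conn.recv(4096):   # keep reading until the server closes
        response += chunk
    print(response.decode(errors="replace")[:200])
```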

TCP also uses checksums to verify data integrity. For instance, if the string abc is transmitted, the checksum might be the sum of the ASCII values of 'a', 'b', and 'c'. The receiver recomputes the checksum over the received data and compares it with the transmitted value, ensuring the data is not corrupted. This process ensures reliable connections at a high level.
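A toy version of that idea in Python (real TCP uses a 16-bit ones'-complement checksum, so this is only an analogy):

```python
def toy_checksum(data: bytes) -> int:
    # The illustration from above: sum the byte values. Real TCP uses a
    # 16-bit ones'-complement sum over the segment, but the idea is the same.
    return sum(data)

sent = b"abc"
checksum = toy_checksum(sent)              # 97 + 98 + 99 = 294

received = b"abd"                          # one byte corrupted in transit
print(toy_checksum(received) == checksum)  # False -> segment is retransmitted
```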

A TCP handshake

There are three key states in a TCP connection: opening, established, and closing.


As shown in the diagram, a connection begins in the closed state. The sender sends a SYN (synchronization) segment to the receiver, the receiver responds with a SYN+ACK (synchronization + acknowledgment) segment, and the sender completes the three-way handshake with a final ACK. Once the connection is established, the sender can begin transmitting data, and each segment is acknowledged by the receiver. Every exchange of data and acknowledgment costs one round trip (RTT – round trip time), which can introduce latency, especially over long distances.
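One rough way to observe this cost is to time connection setup; connect() returns only after the handshake completes, so the elapsed time is approximately one round trip to the server (the host and port below are arbitrary examples):

```python
import socket
import time

# Rough illustration of handshake latency: create_connection() returns only
# after the SYN / SYN+ACK / ACK exchange completes, so the elapsed time is
# approximately one round trip to the server (plus local overhead).
start = time.perf_counter()
with socket.create_connection(("google.com", 443), timeout=5):
    elapsed_ms = (time.perf_counter() - start) * 1000
print(f"TCP handshake took ~{elapsed_ms:.1f} ms")
```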

Connections are established between sockets on the connecting systems, with the operating systems of each system managing the connection state.

In the closing state, the socket enters a TIME_WAIT state before finally closing. This prevents delayed segments from the old connection from being mixed into a new connection. Servers have a limited number of available sockets, so a connection pool is maintained to reduce the need for repeated connection establishment.
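A minimal sketch of the connection pool idea, assuming a fixed set of pre-opened sockets to a single host (real libraries such as urllib3 add health checks, timeouts, and reconnection on top of this):

```python
import queue
import socket

# A minimal connection pool: reuse established sockets instead of paying
# the handshake (and TIME_WAIT) cost for every request.
class ConnectionPool:
    def __init__(self, host: str, port: int, size: int = 4):
        self._pool: queue.Queue = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(socket.create_connection((host, port)))

    def acquire(self) -> socket.socket:
        return self._pool.get()          # blocks if all connections are in use

    def release(self, conn: socket.socket) -> None:
        self._pool.put(conn)

# Hypothetical usage; HTTP client libraries do this for you behind the scenes.
pool = ConnectionPool("example.com", 80)
conn = pool.acquire()
conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
pool.release(conn)
```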

DDoS attacks can exhaust available sockets by flooding a server with millions of connection requests, preventing legitimate users from accessing the service.

How does TCP prevent receiver overwhelm?

To prevent overwhelming the receiver with too much data, TCP employs a Receive Buffer. This buffer temporarily stores segments until they can be processed.


The receiver advertises its available buffer space (the receive window) to the sender, ensuring the sender never transmits more data than the buffer can accommodate. This mechanism is known as flow control, and it prevents data overload at the receiver.
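On the receiving side, the kernel's socket receive buffer backs this advertised window. You can inspect or request a different size with SO_RCVBUF, as in this small sketch (the 256 KB figure is arbitrary, and the OS may round or cap it):

```python
import socket

# Inspect and adjust the kernel's receive buffer for a socket. The receive
# window that TCP advertises to the sender is derived from this buffer.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print("default receive buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

# Request a larger buffer (the OS may round or cap this value).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
print("adjusted receive buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```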


How does TCP prevent network flooding?

TCP uses a congestion window, which limits the number of segments that can be sent without acknowledgment. The effective throughput of a connection is roughly the congestion window size divided by the round trip time.
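A quick back-of-the-envelope calculation with made-up numbers (10 segments of about 1460 bytes and a 100 ms round trip) shows how the formula plays out:

```python
# Back-of-the-envelope throughput from the formula above:
# throughput ≈ congestion window / round-trip time
cwnd_bytes = 10 * 1460          # 10 segments of ~1460 bytes each
rtt_seconds = 0.1               # 100 ms round trip

throughput_bps = (cwnd_bytes / rtt_seconds) * 8
print(f"{throughput_bps / 1e6:.2f} Mbit/s")   # ~1.17 Mbit/s
```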


As the connection progresses, TCP increases throughput by enlarging the congestion window. The shorter the round trip time, the faster this growth.
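A tiny simulation of this growth pattern, assuming the window simply doubles each round trip as in slow start (real TCP also has thresholds and loss reactions that this sketch ignores):

```python
# A sketch of slow start: the congestion window roughly doubles every round
# trip until loss or a threshold is hit, so shorter RTTs grow throughput faster.
cwnd = 1            # congestion window, in segments
rtt_ms = 50
elapsed = 0

while cwnd < 64:
    print(f"t={elapsed:>4} ms  cwnd={cwnd} segments")
    cwnd *= 2
    elapsed += rtt_ms
```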

User Datagram Protocol (UDP) beats TCP in latency

In a tech interview, when the interviewer asked me at the end whether I had any questions, I inquired about the scalability of their platform. They mentioned that their system could handle between 16 and 20 million users, but scaling beyond that would be challenging. I humorously referenced Disney+ Hotstar's ability to manage over 50 million concurrent viewers during high-profile cricket matches, such as the India vs. Pakistan games.

He explained that the key difference is that Hotstar streams live video over UDP, where some packet loss is acceptable. Fantasy sports, however, require TCP to ensure transactional integrity, which leads to higher latency but reliable connections.

UDP, unlike TCP, skips the round trip acknowledgment process, sending datagrams without any guarantee of delivery or ordering. There is no byte-stream abstraction: UDP sends discrete datagrams that carry no sequence numbers, so there is no flow control or congestion control either. This lower overhead reduces latency, making UDP suitable for real-time applications like video streaming or online gaming, where occasional packet loss is acceptable and doesn't significantly impact the user experience.
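A minimal UDP exchange on localhost illustrates the difference: there is no handshake and no acknowledgment, and each sendto() is an independent datagram (port 9999 is an arbitrary choice):

```python
import socket

# Minimal UDP exchange on localhost: no handshake, no acknowledgments,
# no ordering; each sendto() is an independent datagram that may be lost.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-1", ("127.0.0.1", 9999))   # fire and forget

data, addr = receiver.recvfrom(2048)
print(data, "from", addr)

sender.close()
receiver.close()
```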

Conclusion

In this blog, we explored the fundamental concepts of how web connections work, with a focus on TCP and UDP. Stay tuned for the next blog, where we will dive into how connections are made secure.

Check out my previous blogs for more insights on distributed systems.
