Modern systems are becoming increasingly complex, but that complexity is what lets them handle a huge amount of network traffic and requests. Load balancers add some complexity of their own, but they solve important problems.
So, in simple terms, the main idea behind Load Balancers, as the name suggests, is to direct client requests across servers. In other words, Load Balancers are systems or devices that distribute network or application traffic across multiple servers.
There are two fundamental strategies used to handle increased load and improve performance in computing environments, particularly when it comes to servers and applications.
- Scaling-out, also known as horizontal scaling
- Scaling-up, also known as vertical scaling
Modern distributed systems rely on a scaling-out strategy, which means increasing the number of nodes/units. You add or copy an entire node, not just part of the node/unit. For example, say we have 1 server and it handles 10,000 requests per minute, but we want it to handle more requests. What should we do? One option is to upgrade parts of the server (RAM, CPU, etc.). That is scaling-up! Scaling-out, on the other hand, means we do not upgrade parts of the server. Instead, we add a new server (or node/unit).
So, in the case of scaling-out, the load balancer distributes traffic among these identical servers using specific algorithms/methods.
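To make the contrast concrete, here is a tiny hypothetical sketch in Python; the numbers and field names are made up for illustration only.

```python
# Hypothetical illustration of scaling-up vs. scaling-out (made-up numbers).

# Scaling-up: keep one server, upgrade parts of it (RAM, CPU, etc.).
server = {"cpu_cores": 8, "requests_per_minute": 10_000}
scaled_up = {"cpu_cores": 16, "requests_per_minute": 20_000}

# Scaling-out: keep the server as it is, add more identical servers.
scaled_out = [{"cpu_cores": 8, "requests_per_minute": 10_000} for _ in range(2)]

print(scaled_up["requests_per_minute"])                   # 20000
print(sum(s["requests_per_minute"] for s in scaled_out))  # 20000 in total
```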
How Do Load Balancers Work?
Let’s assume we have three copies of a server that perform the same tasks. When a client makes a request to the service (such as sending data over the network), the load balancer is the first point of contact. The load balancer sits in front of the servers and handles the request. It then chooses one of the servers based on their current state and forwards the request to that server.
When a client sends a request to access a service, like a webpage or an application, this request is directed to the load balancer, not directly to any of the servers. The load balancer, positioned in front of the servers, intercepts the incoming request and acts as the traffic manager, determining the best server to handle the request.
The load balancer checks the current state of each server, including the number of active connections on each server, the current load or traffic each server is handling, and the overall performance and health status of each server. Based on this evaluation, the load balancer selects the most suitable server. For instance, if Server A has fewer active connections than Servers B and C, the load balancer will choose Server A.
The load balancer then forwards the client’s request to the chosen server. The server processes the request and prepares the response. The server sends the response back to the load balancer, which then forwards this response to the client. The client perceives the interaction as if it is directly communicating with the server.
By distributing client requests among multiple servers, the load balancer ensures that no single server becomes overwhelmed. This helps in maintaining high performance, availability, and reliability of the service.
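To tie the steps above together, here is a minimal sketch of that flow in Python; the class and method names (Server, LoadBalancer, handle_request) are hypothetical and just for illustration, not a real implementation.

```python
# A minimal, hypothetical sketch of the request flow described above.
# The names (Server, LoadBalancer, handle_request) are made up for illustration.

class Server:
    def __init__(self, name):
        self.name = name
        self.active_connections = 0

    def process(self, request):
        # Pretend to do real work and return a response.
        return f"{self.name} handled: {request}"


class LoadBalancer:
    def __init__(self, servers):
        self.servers = servers

    def handle_request(self, request):
        # 1. The load balancer is the client's first point of contact.
        # 2. It checks server state (here: the number of active connections).
        server = min(self.servers, key=lambda s: s.active_connections)
        # 3. It forwards the request to the chosen server.
        server.active_connections += 1
        try:
            response = server.process(request)
        finally:
            server.active_connections -= 1
        # 4. The response travels back through the load balancer to the client.
        return response


lb = LoadBalancer([Server("A"), Server("B"), Server("C")])
print(lb.handle_request("GET /index.html"))  # e.g. "A handled: GET /index.html"
```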
What Do They Provide?
- High Availability: Load balancers make sure that if one server goes down, another server can take over. This means the service stays up and running, even if there are problems with one of the servers.
- Scaling: When more people use the service, load balancers help by spreading the traffic across multiple servers. This way, the service can handle more users without slowing down or crashing.
- Reliability: Load balancers check the health of servers and only send requests to servers that are working well. This ensures that users get a smooth and dependable experience every time they use the service (a small health-check sketch follows this list).
- Security: Load balancers can help protect against attacks by distributing traffic and hiding the actual servers from direct access. This makes it harder for attackers to target a specific server.
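As a rough illustration of the reliability point above, a load balancer can periodically probe each server and keep only the healthy ones in the pool. The sketch below is hypothetical; the server addresses and the /health endpoint are assumptions.

```python
# Hypothetical health-check sketch: only servers that answer /health with
# HTTP 200 stay in the pool that receives traffic. Addresses are made up.
import urllib.request

servers = ["http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080"]

def is_healthy(base_url, timeout=2):
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, DNS failure, error status, etc.
        return False

healthy_pool = [s for s in servers if is_healthy(s)]
# Traffic is then distributed only across healthy_pool.
```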
Types of Load Balancers
There are different types of load balancers that help manage traffic to servers.
- Hardware Load Balancers: These are physical devices. They sit in your server room and handle the traffic. They are fast and powerful but can be expensive.
- Software Load Balancers: These are programs that run on regular servers. They do the same job as hardware load balancers but are usually cheaper and easier to update.
- Layer 4 Load Balancers (Transport Layer): These load balancers work at the transport layer of the network. They make decisions based on data like IP addresses and port numbers. They are fast and simple.
- Layer 7 Load Balancers (Application Layer): These load balancers work at the application layer. They make decisions based on the content of the request, like URLs or cookies. They are more flexible and can handle more complex tasks.
In simple terms, hardware load balancers are physical devices, while software load balancers are programs. Layer 4 load balancers work with basic data like IP addresses, and Layer 7 load balancers work with more detailed data like URLs.
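To make the Layer 4 vs. Layer 7 distinction concrete, here is a small hypothetical sketch; the function and field names are made up, and the point is only which information each layer can see when choosing a server.

```python
# Hypothetical sketch of the information each type of load balancer can use.
import hashlib

def layer4_choice(client_ip, dst_port, servers):
    # Layer 4: only connection-level data is visible (IP addresses, ports).
    key = f"{client_ip}:{dst_port}".encode()
    return servers[int(hashlib.md5(key).hexdigest(), 16) % len(servers)]

def layer7_choice(path, cookies, servers):
    # Layer 7: the request content is visible (URLs, headers, cookies).
    if path.startswith("/api/"):
        return servers["api"]
    if cookies.get("beta") == "true":
        return servers["beta"]
    return servers["web"]

print(layer4_choice("203.0.113.7", 443, ["10.0.0.1", "10.0.0.2"]))
print(layer7_choice("/api/users", {}, {"api": "10.0.0.3", "beta": "10.0.0.4", "web": "10.0.0.5"}))
```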
Types of Load Balancing Algorithms/Methods
Load balancers use different algorithms/methods to share traffic among servers. Here are some common ones.
- Round Robin: This method sends traffic to servers one by one in a line. Each server gets a turn, so the workload is evenly spread.
- Least Connections: With this method, traffic goes to the server that has the fewest people using it at the moment. It helps balance the load by sending more traffic to less busy servers.
- IP Hash: Traffic is sent to servers based on the client’s IP address. This way, the same client always goes to the same server, which can be helpful for some applications.
- Others: There are also algorithms/methods like weighted round robin, where stronger servers are given a higher weight and receive a larger share of the traffic. Another method is random with two choices (also called “power of two choices”), which randomly picks two servers and sends the request to the less busy of the two.
Load balancers use these algorithms/methods to make sure servers stay balanced and everyone gets a good experience using the service.
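For readers who want to see what these methods look like in code, below are simplified, hypothetical sketches of round robin, least connections, and IP hash in Python; they are illustrations, not production implementations.

```python
# Simplified, hypothetical sketches of three common methods.
import hashlib
import itertools

servers = ["server-a", "server-b", "server-c"]

# Round Robin: hand out servers one by one, in a loop.
_cycle = itertools.cycle(servers)
def round_robin():
    return next(_cycle)

# Least Connections: pick the server with the fewest active connections.
# (In a real balancer these counts are updated as requests start and finish.)
active_connections = {"server-a": 2, "server-b": 0, "server-c": 5}
def least_connections():
    return min(servers, key=lambda s: active_connections[s])

# IP Hash: the same client IP always maps to the same server.
def ip_hash(client_ip):
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

print(round_robin(), round_robin(), round_robin(), round_robin())  # a, b, c, a
print(least_connections())                                         # server-b
print(ip_hash("203.0.113.7") == ip_hash("203.0.113.7"))            # True
```

Real load balancers also track connection counts, weights, and health status continuously, but the selection logic ultimately comes down to small decisions like these.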
Stay tuned!