A load balancer is a server that sits between client devices and a set of backend servers and distributes client requests across those servers. Because it distributes the load on the servers in a balanced way, it is called a load balancer. Load balancers can be placed at various points in a system.
Typically, we place a load balancer between the client and the servers to handle incoming network requests and distribute that traffic across the backend servers. A load balancer reduces the load on any individual server and prevents any one server from becoming a single point of failure, which improves the overall availability and responsiveness of the system. We may add LBs at various places in the system, especially wherever we have multiple resources such as servers, databases, or caches.
There are various load balancing methods, each using a different algorithm for a different purpose. These techniques are essentially different server-selection strategies. Here is a list of common load balancing techniques:
Random selection: in this method, the server is chosen at random, and no other factor is considered in the selection. The drawback is that some servers may sit idle while others become overloaded with requests.
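As a minimal sketch, random selection might look like the following (the server addresses and function name are illustrative placeholders, not part of any real load balancer's API):

```python
import random

# Hypothetical backend pool; addresses are placeholders for illustration.
SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5"]

def pick_server_random(servers):
    """Select a backend at random; no load or health data is consulted."""
    return random.choice(servers)
```

Because each request is routed independently of all previous requests, nothing prevents the same server from being picked many times in a row, which is exactly the idle-vs-overloaded imbalance described above.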
Round robin: this is one of the most common load balancing methods. The LB forwards incoming requests to a set of servers in a fixed order. Say there is a list of five servers: the first request goes to server 1, the second goes to server 2, and so on. When the LB reaches the end of the list, it starts over from server 1 again. This balances the traffic almost evenly between the servers, but it does not take server specifications into account. The servers need to have roughly equal specifications for this method to work well; otherwise, a server with low processing power may receive the same load as a server with high processing capacity.
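A simple round-robin selector can be sketched with `itertools.cycle`, which wraps back to the start of the list automatically (the class and server names here are assumptions for illustration):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand out servers in a fixed order, wrapping to the start at the end."""

    def __init__(self, servers):
        # cycle() yields the servers in order indefinitely: s1, s2, ..., sN, s1, ...
        self._cycle = cycle(servers)

    def next_server(self):
        """Return the next server in rotation."""
        return next(self._cycle)
```

For example, with servers `["s1", "s2", "s3"]`, successive calls to `next_server()` yield s1, s2, s3, then s1 again. Note that this sketch treats every server identically, which illustrates the limitation above: a weaker server receives exactly as many requests as a stronger one.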