The goal of a web application is to automate as much as possible so that you don’t have to. As a developer you will always have to fix some things manually, but when something unexpected happens to your application, such as a server becoming unavailable or slowing down under heavy traffic, you want the application to handle that problem itself. This is why load balancing is critical to web performance.
What is a Load Balancer?
Load balancing is the distribution of network traffic across multiple back-end servers, and a load balancer makes sure that no single server becomes overloaded. Because the application load is spread across multiple servers, this increases the responsiveness and consistency of web applications.
How a Load Balancer Works
As application traffic increases, the load balancer evaluates the available servers and routes each request to the one best able to handle it, which keeps the user experience consistent.
A load balancer manages the information being sent between a server and the end user’s device. The server could be on premises, in a data center, or in a public cloud. Load balancers also conduct continuous health checks on servers to ensure they can handle requests. If necessary, the load balancer removes unhealthy servers from the pool until they are restored. Some load balancers can even trigger the creation of new virtualized application servers to cope with increased demand, an auto-scaling feature.
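To make the health-check idea concrete, here is a minimal Python sketch. The backend addresses and the /healthz path are illustrative assumptions, not part of any particular product:

```python
import http.client

# Hypothetical backend pool; addresses and the /healthz path are made up.
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def is_healthy(backend, path="/healthz", timeout=2):
    """Return True if the backend answers the health-check path with 200 OK."""
    host, port = backend.split(":")
    try:
        conn = http.client.HTTPConnection(host, int(port), timeout=timeout)
        conn.request("GET", path)
        ok = conn.getresponse().status == 200
        conn.close()
        return ok
    except (OSError, http.client.HTTPException):
        return False

def active_pool():
    """Keep only backends that currently pass the health check;
    unhealthy servers stay out of the pool until they recover."""
    return [b for b in BACKENDS if is_healthy(b)]
```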
There are seven layers of networking to keep in mind here:
In the seven-layer Open Systems Interconnection (OSI) model, network firewalls sit at layers one to three (L1-Physical Wiring, L2-Data Link, and L3-Network). Load balancing, meanwhile, happens at layer 4 (L4-Transport) and layer 7 (L7-Application).
What Load Balancer Types Exist?
There are many types of load balancers. One way to think about them is in terms of the various cloud-based balancers available:
• Network Load Balancing: This is the management of network traffic at layer 4, the transport layer. It directs traffic based on data from network- and transport-layer protocols, such as the IP address and TCP port.
• HTTP(S) Load Balancing: One of the oldest forms of load balancing, HTTP(S) load balancing operates at layer 7, the application layer. This allows routing decisions based on attributes like the HTTP headers, uniform resource identifier (URI), SSL session ID, and HTML form data (see the sketch after this list).
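As a rough illustration of the difference, a layer-7 balancer can inspect request attributes before choosing a pool. The sketch below assumes two made-up backend pools and routes on the URI path; it is a simplification, not how any specific product works:

```python
# Hypothetical backend pools; the names are illustrative only.
API_POOL = ["api-1.internal", "api-2.internal"]
STATIC_POOL = ["static-1.internal", "static-2.internal"]

def pick_pool(path, headers):
    """Layer-7 decision: route on the URI and HTTP headers.
    A layer-4 balancer only sees the IP address and TCP port."""
    if path.startswith("/api/"):
        return API_POOL        # API requests go to the API pool
    return STATIC_POOL         # everything else is served as static content

# Example: pick_pool("/api/users", {"Accept": "application/json"}) -> API_POOL
```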
When talking about types of load balancers, it’s also important to note there are hardware load balancers, software load balancers, and virtual load balancers.
• Hardware Load Balancer: A hardware load balancer, as the name implies, relies on physical, on-premises hardware to distribute the application and network traffic. These devices can handle a large volume of traffic but often carry a hefty price tag and are fairly limited in terms of flexibility.
• Software Load Balancer: A software load balancer comes in two forms—commercial or open-source—and must be installed before use. Like cloud-based balancers, these tend to be more affordable than hardware solutions.
• Virtual Load Balancer: A virtual load balancer differs from software load balancers because it deploys the software of a hardware load balancing device on a virtual machine.
Benefits of Load Balancing for Applications
Software load balancers serve several functions. Not only can they act as a catalyst for automation, since their predictive analytics can spot bottlenecks before they happen, but a load balancer also brings other advantages:
• Reduced Downtime
○ Because traffic is spread across multiple servers, you can take one offline for maintenance or migrate it to another location without hurting performance or causing downtime.
• Scalable
○ Because a load balancer spreads the work evenly across servers, it allows for increased scalability: you can add servers to the pool as demand grows.
• Redundancy
○ When application traffic is sent to two or more web servers and one server fails, the load balancer automatically transfers the traffic to the remaining working servers.
• Flexibility
○ With load balancing, another server is always available to pick up the application load, giving admins the flexibility to take individual servers down for maintenance.
• Resilience
○ Continuous health checks help the load balancer detect failing servers and route traffic around them.
• Global Server Load Balancing
○ Global Server Load Balancing extends L4 and L7 capabilities to servers in different geographic locations. This makes it great for cloud-native applications with data centers located in different places.
• Security
○ An application load balancer can help defend against distributed denial-of-service (DDoS) attacks. With an application load balancer, network and application traffic from the corporate server is “offloaded” to a public cloud server or provider, protecting that traffic from interference by dangerous cyber attacks.
Session Persistence
Session persistence is another benefit of load balancers when used correctly. A browser stores a user’s session data in cookies, and session persistence is the ability to make sure all requests for a given session are routed to the same server. If the server changes in the middle of a session, the end user’s data will likely be lost.
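Here is a minimal sketch of cookie-based session persistence, assuming the balancer can read and set a cookie. The cookie name lb_server and the server names are invented for illustration:

```python
import random

# Illustrative pool; in practice this would be the balancer's healthy set.
BACKENDS = ["app-1", "app-2", "app-3"]

def choose_backend(cookies):
    """Route a returning client to the server pinned in its cookie;
    pin a new client to a randomly chosen server."""
    pinned = cookies.get("lb_server")
    if pinned in BACKENDS:
        return pinned, {}                                   # keep the same server
    server = random.choice(BACKENDS)                        # first request: pick one
    return server, {"Set-Cookie": f"lb_server={server}"}    # pin it for next time
```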
Load Balancing Techniques
Load balancers rely on algorithms to determine which server from the server farm should receive each client request. Your choice of algorithm is one way to maximize the efficiency of your resources and deliver the kind of experience you want.
Load Balancing Algorithms
There are a variety of load balancing methods, each using an algorithm best suited to a particular situation. Here are some of the more prominent ones:
Round Robin
The Round Robin method relies on a rotation system to sort network and application traffic. An inbound request is delegated to the first available server, and then the server is bumped to the bottom of the line. This method is particularly useful when working with servers of equal value.
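A minimal round-robin sketch in Python (the server names are placeholders):

```python
from itertools import cycle

BACKENDS = ["app-1", "app-2", "app-3"]   # illustrative, equally capable servers
_rotation = cycle(BACKENDS)

def next_backend():
    """Each call hands back the next server in the rotation,
    which then moves to the bottom of the line."""
    return next(_rotation)

# next_backend() -> "app-1", then "app-2", then "app-3", then "app-1" again
```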
Least Connection
As its name states, the least connection method directs traffic to whichever server has the fewest active connections. This is helpful during heavy traffic periods, as it helps maintain an even distribution across all available servers.
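A sketch of the idea, assuming the balancer keeps a count of active connections per server (the counts here are made up):

```python
# Connection counts the balancer would maintain as requests start and finish.
active_connections = {"app-1": 12, "app-2": 4, "app-3": 9}

def least_connection_backend():
    """Pick the server currently handling the fewest active connections."""
    return min(active_connections, key=active_connections.get)

# least_connection_backend() -> "app-2"
```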
Resource-Based (Adaptive)
Resource-Based (Adaptive) is a load balancing algorithm that requires an agent to be installed on each application server; the agent monitors the server’s availability and resources and reports its current load to the load balancer. The load balancer queries the agents’ output to inform its load balancing decisions.
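A sketch of how the balancer might use the agents' reports, assuming each agent sends back CPU and memory utilization; the numbers and the scoring rule are illustrative:

```python
# Latest reports from the hypothetical agents (fractions of capacity in use).
agent_reports = {
    "app-1": {"cpu": 0.82, "mem": 0.64},
    "app-2": {"cpu": 0.35, "mem": 0.41},
    "app-3": {"cpu": 0.61, "mem": 0.77},
}

def adaptive_backend():
    """Score each server by its busiest resource and pick the least loaded."""
    load = {name: max(r["cpu"], r["mem"]) for name, r in agent_reports.items()}
    return min(load, key=load.get)

# adaptive_backend() -> "app-2"
```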
Least Response Time Method
In the least response time algorithm, the back-end server with the fewest active connections and the lowest average response time is selected. This algorithm helps ensure quick response times for end clients.
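One way to express the combined criterion, assuming the balancer tracks both active connections and a rolling average response time per server (the figures are illustrative):

```python
# (active connections, average response time in ms) per server, as the
# balancer might track them.
stats = {
    "app-1": (8, 120.0),
    "app-2": (8, 95.0),
    "app-3": (3, 210.0),
}

def least_response_time_backend():
    """Prefer the fewest active connections, breaking ties on the
    lowest average response time."""
    return min(stats, key=lambda name: stats[name])

# least_response_time_backend() -> "app-3" (fewest active connections)
```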
Fixed Weighting
Fixed Weighting is a load-balancing algorithm in which the administrator assigns a weight to each application server, based on criteria of their choosing, to reflect the server’s traffic-handling capability. The application server with the highest weight receives all of the traffic. If that server fails, all traffic is directed to the next highest-weighted application server.
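A sketch of fixed weighting, assuming administrator-assigned weights and a set of servers currently passing health checks (both are illustrative):

```python
# Administrator-assigned weights; higher means more capable (illustrative).
weights = {"app-1": 100, "app-2": 50, "app-3": 25}

def fixed_weighting_backend(healthy):
    """Send all traffic to the highest-weighted healthy server; if it fails,
    fall through to the next highest weight."""
    candidates = [s for s in weights if s in healthy]
    return max(candidates, key=weights.get) if candidates else None

# fixed_weighting_backend({"app-1", "app-2", "app-3"}) -> "app-1"
# fixed_weighting_backend({"app-2", "app-3"})          -> "app-2"
```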
Weighted Response Time
Weighted Response Time is a load-balancing algorithm where the response times of the application servers determine which application server receives the next request. The application server response time to a health check is used to calculate the application server weights. The application server that is responding the fastest receives the next request.
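One way to realize this, assuming the balancer records each server's latest health-check response time and turns it into a weight so that the fastest responder is the most likely to receive the next request (the timings are illustrative):

```python
import random

# Latest health-check response times in milliseconds (illustrative).
health_check_ms = {"app-1": 20.0, "app-2": 80.0, "app-3": 40.0}

def weighted_response_time_backend():
    """Weight each server by the inverse of its response time, so faster
    responders receive a proportionally larger share of new requests."""
    servers = list(health_check_ms)
    server_weights = [1.0 / health_check_ms[s] for s in servers]
    return random.choices(servers, weights=server_weights, k=1)[0]
```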
Source IP Hash
Source IP Hash is a load balancing algorithm that combines the source and destination IP addresses of the client and server to generate a unique hash key. The key is used to allocate the client to a particular server. Because the key can be regenerated if the session is broken, the client’s request is directed to the same server it was using previously. This is useful when a client must reconnect to a session that is still active after a disconnection. It also ties back to session persistence, where the client’s session data would be obtained from the cookies saved for that server.
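A sketch of source IP hashing (the addresses and pool are placeholders); because the hash is deterministic, the same client/server address pair always maps to the same backend as long as the pool is unchanged:

```python
import hashlib

BACKENDS = ["app-1", "app-2", "app-3"]   # illustrative pool

def source_ip_hash_backend(client_ip, balancer_ip):
    """Hash the client (source) and balancer (destination) IPs into a key,
    then map the key onto the backend pool. Recomputing the hash after a
    disconnect sends the client back to the same server."""
    key = hashlib.sha256(f"{client_ip}-{balancer_ip}".encode()).hexdigest()
    return BACKENDS[int(key, 16) % len(BACKENDS)]

# source_ip_hash_backend("203.0.113.7", "198.51.100.10") returns the same
# backend every time for this address pair.
```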
URL Hash
URL Hash is a load-balancing algorithm that distributes writes evenly across multiple sites and sends all reads to the site that owns the object.
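A brief sketch of the same hashing idea applied to URLs (the site names are invented); every request for a given URL lands on the site that owns that object:

```python
import hashlib

SITES = ["site-a", "site-b", "site-c"]   # illustrative sites

def owning_site(url):
    """Hash the URL so all reads for the same object go to its owning site."""
    digest = hashlib.md5(url.encode()).hexdigest()
    return SITES[int(digest, 16) % len(SITES)]
```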
ScaleArc
If you want a taste of what a load balancer is capable of, it doesn’t hurt to try out some of the leading companies bringing load balancing to the forefront. Among these is ScaleArc, which offers database load balancing software providing continuous availability at high performance levels for mission-critical database systems deployed at scale.
The ScaleArc software appliance is a database load balancer. It enables database administrators to create database deployments that are highly available, scalable, and easy to manage, maintain, and migrate. ScaleArc works with Microsoft SQL Server and MySQL as an on-premises solution, and in the cloud for the corresponding PaaS and DBaaS offerings, including Amazon RDS and Azure SQL.
• Build highly available database environments
• Ensure zero downtime during database maintenance, and reduce risk of unplanned outages by automating failover processes and intelligently redirecting traffic to database replicas
• Effectively balance read and write traffic to dramatically improve overall database throughput
• Consolidate database analytics into a single platform allowing administrators and production support to make more efficient and intelligent decisions, thus saving time and money
• Seamlessly migrate to the cloud and between the platforms without incurring application downtime
Try ScaleArc or read our whitepaper to find out if ScaleArc is for you.
Originally appeared on devgraph.com