DEV Community

pirvanm

100 Days of Server Distributions - Days 1 to 100

**Understanding Modern Infrastructure: A Simplified Look at Architecture**

After several years in web development through the onset of the cloud era, I can say that even though I became a full-stack web developer, my journey ironically began on the backend. Perhaps it's because I started with a local server and, without realizing it, kept working with online servers, VPSs, and shared hosting packages in parallel. My one regret is only briefly venturing into the world of load balancers, even though I was originally hired for a position that promised more Vue.js work.
In recent years, driven by the need to offer more than just code fixes, and after some research, I've dug deeper into DevOps over the past year, adding an additional layer of complexity: Docker.
Everything seemed quite promising at first, when we started implementing Portainer and GitLab, but my colleagues' decision to run Docker on the production server caused us problems, especially once we had to support 400 users at the same time. This led us to explore load-balancing solutions and the node features Portainer offers.
Looking for something non-"enterprise," I stumbled upon HAProxy as a load balancer (without much searching). So yes, I'm beginning a 100-day series exploring the stages of the load balancer world, aiming to add more value than a basic sysadmin can, even though I'm rooted in PHP web development. I'll also present some Dockerized configurations that pass through an online web-monitoring system.
Yes, I could say that HAProxy or alternatives like Cloudflare are crucial, but from what I've seen, Cloudflare can be problematic for internet-based franchise applications, similar to McDonald's or KFC, strictly speaking from an IT perspective.
In the last 4 years, even though all of this escalated back in 2019, there's still an intense debate about when, and whether, we should structure an application as a monolith vs. microservices. Sometimes it's a bit sad to see legacy projects flatly refuse to switch to microservices.
I could say that the upcoming series could be viewed from the perspective of 3-4 infrastructure engineers, but I will focus on the following key points:
Microservices Architecture: The opposite of an "all-in-one" application (much like a shawarma). Even though everyone is heading in this direction, it doesn't mean they're doing it right. The reality is that for a team of fewer than 6 people, without three specialists from different fields (a QA, a sysadmin, a senior full-stack developer), it's a true nightmare. I can affirm this after 2 years of experience.


Containerization: Kubernetes and Docker have quickly become essential tools for me, helping avoid unnecessary costs on underutilized servers.


Cloud Servers and Hybrid Deployments: A problem we've faced for the past six months, creating true nightmares on a production server.


Load Balancing and High Availability (HA): One of the solutions for old, long-postponed problems, presenting challenges even for hardware setups.


The Importance of Load Balancing

Indeed, I don't have to imagine it; I live it: hundreds of users accessing the company's CRM at the same time. In a monolithic setup, every request goes to a single server, and no matter how much RAM you allocate to that server, how well the caching system is designed, or which backend framework you use, the server will still crash. A large part of the "pain" is solved by a load balancing system, which distributes traffic across multiple servers and significantly reduces the overload on any single one. Even a server with 4 processors, or n processors, or n cores, won't solve the problem: when a spike (a sudden increase in traffic) occurs, you need traffic balancing to keep the system within optimal parameters.
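
To make the distribution idea concrete, here is a toy Python sketch of round-robin balancing, the simplest strategy a load balancer can use. The server names are made up for illustration:

```python
from itertools import cycle

# Hypothetical backend pool; the hostnames are placeholders.
servers = ["app1:8080", "app2:8080", "app3:8080"]
pool = cycle(servers)

def route() -> str:
    """Return the next backend in round-robin order."""
    return next(pool)

# Six incoming requests are spread evenly across the three servers.
assignments = [route() for _ in range(6)]
print(assignments)
```

Each server ends up handling one third of the requests instead of one server absorbing them all, which is exactly the relief a load balancer provides under a spike.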

HAProxy: the foundation for traffic management

One of the most widely used and well-known load-balancing tools is HAProxy (High Availability Proxy). It is an open-source system built to handle a high volume of incoming requests, designed from the start for websites with large-scale traffic.
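
For reference, a minimal HAProxy configuration might look like the following sketch. The backend names, IP addresses, and ports are placeholders, not taken from any real setup:

```
frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.1.11:8080 check
    server web2 192.168.1.12:8080 check
```

The `check` keyword enables periodic health checks on each server, and `balance roundrobin` cycles requests across the pool.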


Key Benefits of the HAProxy System

  1. Efficient Load Balancing: There's no need to define algorithms manually, as HAProxy provides built-in algorithms such as round-robin, least connections, and source IP hash to distribute traffic across servers. Depending on your architecture, HAProxy can handle millions of requests per second with minimal latency.
  2. SSL Termination: To secure connections, which is crucial, HAProxy can terminate SSL/TLS itself, decrypting incoming traffic and forwarding the requests to backend servers, making it a straightforward configuration for managing secure connections.
  3. Health Checks: HAProxy has built-in support for health checks with well-documented configurations. If one server goes down or becomes non-operational, HAProxy stops sending traffic to that server and reroutes requests to a healthy one, ensuring high availability.
  4. Layer 4 and Layer 7 Load Balancing: HAProxy supports both transport layer (Layer 4) and application layer (Layer 7) load balancing. This means it can manage simple TCP connections as well as more complex HTTP traffic based on URLs or content.


  5. Fault Tolerance: HAProxy can manage retries and failures, ensuring that even if a server goes down, traffic is seamlessly rerouted to another server without any interruption.
  6. Custom Traffic Routing: You can set custom rules in HAProxy to direct traffic to specific servers or services, optimizing performance and reducing latency based on your specific needs.
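
Several of the benefits above (SSL termination, health checks, custom routing, failover) can be combined in a single configuration. The following is an illustrative sketch only; the hostnames, certificate path, and IP addresses are invented for the example:

```
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem   # SSL termination
    # Custom routing: send API calls to a dedicated pool
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers

backend api_servers
    balance leastconn
    option httpchk GET /health        # active HTTP health check
    server api1 10.0.0.21:9000 check
    server api2 10.0.0.22:9000 check

backend web_servers
    balance roundrobin
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check backup   # failover target
```

Here the frontend decrypts TLS once, routes `/api` traffic by ACL, and each backend only receives requests while its health checks pass; the `backup` server takes over if the primary web server fails.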


Follow me for more!
