Nikhil Malik

L4-L7 Performance: Comparing LoxiLB, MetalLB, NGINX, HAProxy

As Kubernetes continues to dominate the cloud-native ecosystem, the need for high-performance, scalable, and efficient networking solutions has become paramount. This blog compares LoxiLB with MetalLB as Kubernetes service load balancers and pits LoxiLB against NGINX and HAProxy for Kubernetes ingress. These comparisons mainly focus on performance for modern cloud-native workloads.

Comparing Kubernetes Service Load Balancers

Overview

Before we dig into the numbers, let me give our readers a short introduction to LoxiLB: a high-performance, cloud-native load balancer built for Kubernetes. LoxiLB is optimized for modern workloads with advanced features like eBPF acceleration, Proxy Protocol support, and multi-cluster networking.


On the other hand, MetalLB is an open-source controller that implements the Kubernetes LoadBalancer service spec and relies on iptables/IPVS as its datapath. It is designed specifically for Kubernetes clusters running on bare-metal environments and implements Layer 2 (ARP/NDP) and Layer 3 (BGP) protocols for external IP address management.
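For context, a minimal MetalLB Layer 2 setup looks roughly like the sketch below; the address range and resource names are illustrative, not the ones used in this benchmark.

# Illustrative MetalLB L2 configuration: an address pool plus an L2 advertisement.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: demo-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.80.240-192.168.80.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: demo-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - demo-pool
EOF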

System details: 3x master nodes (4 vCPU, 4 GB RAM), 3x worker nodes (4 vCPU, 4 GB RAM), 1x client (8 vCPU, 4 GB RAM)

Additional Performance Tuning

Below are the common additional optimization options used for all the solutions.

  • Set the Max backlog
sysctl net.core.netdev_max_backlog=10000
  • Enable multiple queues and configure MTU
    We used Vagrant with libvirt. For better performance, it is recommended that the number of driver queues be set to the number of vCPUs. You can find more information about libvirt here. A guest-side sketch for checking these settings is shown after this list.

  • Disable TX XPS (needed only for LoxiLB)
    Configure this setting on all the nodes where LoxiLB is running:

# Clear the XPS CPU mask on every TX queue of enp1s0 so transmit processing
# is not pinned to specific CPUs (adjust the interface name and queue count
# for your setup).
for ((i = 0; i < 7; i++)); do
  echo 00 > /sys/class/net/enp1s0/queues/tx-$i/xps_cpus
done
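For the multi-queue and MTU item above, here is a minimal guest-side sketch for verifying and adjusting the settings. The interface name, queue count, and MTU value are assumptions for illustration; the exact MTU used in the benchmark is not stated, and the queue count itself comes from the libvirt domain configuration.

# Show the available and currently active channel (queue) counts.
ethtool -l enp1s0

# Match the number of combined queues to the number of vCPUs (4 here).
sudo ethtool -L enp1s0 combined 4

# Raise the MTU (9000 is only an example value).
sudo ip link set dev enp1s0 mtu 9000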

Performance Metrics

Metric                 LoxiLB                 MetalLB
Throughput             High (eBPF-based)      Moderate (iptables)
Latency                Low                    Higher under load
Connection Handling    Scales to millions     Limited by iptables
Resource Usage         Efficient (eBPF)       CPU-intensive

There are a few key differences between LoxiLB and MetalLB:

  • Performance: LoxiLB uses eBPF for packet processing, providing near-kernel speed and minimal CPU overhead, whereas MetalLB relies on traditional iptables/IPVS for packet forwarding, leading to higher latency and limited scalability in high-throughput environments.

  • Scalability: LoxiLB can handle significantly more connections and workloads due to its optimized architecture, whereas MetalLB struggles in high-scale environments, especially under heavy network loads.

  • Feature Set: LoxiLB supports advanced features like direct server return (DSR), Proxy Protocol, and observability for debugging network flows, whereas MetalLB provides basic load-balancing capabilities, primarily for simple Layer 2 or Layer 3 setups.

We previously benchmarked LoxiLB against IPVS in an AWS Graviton2 environment, but this blog covers the comparison in a Kubernetes environment.

Performance Tests

We benchmarked LoxiLB's performance as a Kubernetes load balancer with popular open-source tools such as iperf and go-wrk.

Throughput

[Chart: throughput comparison, LoxiLB vs MetalLB]

We created an iperf service and used an iperf client in a separate VM outside the cluster. Traffic originated from the client, hit the load balancer, reached a NodePort, and was then redirected to the workload. The result depends on which cluster node hosts the service and where the selected workload is scheduled: the same node or a different one. Throughput is naturally higher when the service and the workload are on the same node, but in both cases LoxiLB delivered higher throughput.
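As a rough sketch of this setup: the blog does not include the exact manifests, so the image, names, ports, and client parameters below are illustrative, and since it does not state whether iperf2 or iperf3 was used, iperf3 is assumed.

# Run an iperf3 server in the cluster and expose it through a LoadBalancer service.
kubectl create deployment iperf-server --image=networkstatic/iperf3 -- iperf3 -s
kubectl expose deployment iperf-server --port=5201 --type=LoadBalancer

# From the client VM, target the external IP allocated by LoxiLB or MetalLB.
EXTERNAL_IP=$(kubectl get svc iperf-server -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
iperf3 -c "$EXTERNAL_IP" -p 5201 -t 30 -P 4   # 30 s run with 4 parallel streams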

Requests Per Second

[Chart: requests per second, LoxiLB vs MetalLB]

We created another service backed by an nginx DaemonSet and used a go-wrk client in a separate VM outside the cluster. The traffic path from the client was the same as in the throughput test.
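A minimal sketch of the client side, assuming go-wrk from github.com/tsliwowicz/go-wrk; the service name, connection count, and duration are illustrative rather than the exact benchmark parameters.

# Resolve the service VIP and generate HTTP load against it.
EXTERNAL_IP=$(kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
go-wrk -c 200 -d 30 "http://$EXTERNAL_IP/"   # 200 concurrent connections for 30 s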

Comparing Kubernetes Ingress

Overview

  • NGINX: A widely used ingress controller with rich Layer 7 features such as SSL termination, HTTP routing, and caching.

  • HAProxy: Known for its robust load balancing and performance, HAProxy provides fine-grained control over Layer 4 and Layer 7 traffic.

  • LoxiLB: Combines Layer 4 and Layer 7 capabilities with the added advantage of eBPF-based performance and Kubernetes-native integration.
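All three controllers consume the standard Kubernetes Ingress API and differ mainly in the datapath behind it. A generic Ingress resource like the sketch below works with any of them; the ingressClassName depends on how the chosen controller was installed, and the class names, host, and backend service here are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx   # e.g. nginx, haproxy, or the class registered by LoxiLB
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80
EOF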

Performance Metrics

Metric                 LoxiLB               NGINX       HAProxy
Throughput             High                 Moderate    High
Latency                Low                  Moderate    Low
SSL Termination        Supported            Supported   Supported
Connection Handling    Scales to millions   Limited     High

Key differences between LoxiLB, NGINX and HAProxy:

  • Performance: LoxiLB offers high throughput and low latency, especially under high-load conditions. HAProxy performs well in high-throughput environments but consumes more resources. NGINX, while feature-rich, often lags behind LoxiLB and HAProxy in raw performance.

  • Scalability: LoxiLB scales seamlessly for modern, containerized workloads with support for millions of connections. HAProxy scales well but can require additional tuning for Kubernetes-specific deployments. NGINX, being less optimized for extreme scale, may require more resources and configuration.

  • Feature Set: NGINX excels in advanced HTTP-based routing, caching, and SSL management. HAProxy provides robust Layer 4 and Layer 7 capabilities but is less Kubernetes-native. LoxiLB integrates Layer 7 features while maintaining high performance, making it a balanced choice.

  • Kubernetes-Native Design: LoxiLB is purpose-built for Kubernetes, offering tighter integration with cluster networking and service discovery. NGINX and HAProxy, on the other hand, while Kubernetes-compatible, are not specifically designed for cloud-native environments.

Performance Tests

We benchmarked the LoxiLB ingress solution against NGINX and HAProxy using go-wrk. The RPS and latency tests were run with different variations.

Requests per Second

[Chart: requests per second, LoxiLB vs NGINX vs HAProxy]

Latency

[Chart: latency, LoxiLB vs NGINX vs HAProxy]

LoxiLB also supports an IPVS-compatibility mode in which it will "eBPFy" all the services managed by IPVS. In simpler words, if you have a cluster running flannel with IPVS, then running LoxiLB with --ipvs-compat is going to improve the performance of your entire cluster. You can check out the details in this blog.
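As a hypothetical illustration of enabling this mode when running the LoxiLB daemon as a container: the image reference is the project's published one, but the exact flags and deployment method for your cluster should be taken from the LoxiLB documentation.

# Illustrative only: start loxilb with IPVS-compatibility mode enabled.
docker run -d --name loxilb --privileged --net=host \
  -v /dev/log:/dev/log \
  ghcr.io/loxilb-io/loxilb:latest --ipvs-compat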

Conclusion

When evaluating solutions for Kubernetes networking, the choice depends on your specific workload and scalability requirements. LoxiLB consistently outperforms its peers in terms of raw performance and scalability, making it a strong candidate for modern, high-throughput environments. However, for traditional use cases with a focus on Layer 7 features, NGINX and HAProxy remain solid options. For simpler setups, MetalLB can suffice but may not scale to meet future demands.

Note: The author is one of the maintainers of the LoxiLB project and is currently working on it.
