Benoit COUETIL 💫 for Zenika


☸️ Kubernetes NGINX Ingress Controller: 10+ Complementary Configurations for Web Applications

Initial thoughts

Kubernetes is an open-source container orchestration platform used to manage and automate the deployment and scaling of containerized applications. It has gained popularity in recent years due to its ability to provide a consistent experience across different cloud providers and on-premises environments.

The NGINX ingress controller is a production-grade ingress controller that runs NGINX Open Source in a Kubernetes environment. The controller monitors Kubernetes Ingress resources to discover services that require ingress load balancing.

NGINX architecture

In this article, we will dig into the versatility and simplicity of this ingress controller to implement different common use cases. You will find some others in different articles (such as Kubernetes NGINX Ingress: 10 Useful Configuration Options), but none of them both describes and regroups the ones below, yet these are widely used for web applications in production.

These apply to multiple cloud providers, at least AWS, GCP and OVHcloud, except when a specific cloud provider is mentioned.

These are also fully compatible with each other, except when architectures differ (for example, TLS termination on the load balancer versus termination on the NGINX pods).

As future experience demands, we'll augment this article with additional use cases, to keep it relevant.

Helm chart installation/update

Everything in the YAML snippets below — except for ingress configuration — relates to configuring the NGINX ingress controller. This includes customizing the default configuration.

To begin, make sure your Helm distribution is aware of the chart using this command:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && helm repo update

After preparing or updating your custom nginx.helm.values.yml file, deploy or update the Helm deployment using this command:

helm -n system upgrade --install ngx ingress-nginx/ingress-nginx --version 4.3.0 --create-namespace -f nginx.helm.values.yml

Replace 4.3.0 with the latest version found on Artifact Hub, and proceed according to your upgrade strategy.
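
To check that the rollout went well, you can list the release and the controller pods (a quick sketch, assuming the system namespace used above; the pod label is the chart's standard one):

helm -n system list
kubectl -n system get pods -l app.kubernetes.io/name=ingress-nginx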


1. Set NGINX as the default ingress controller

By default, you have to specify the class in each of your ingresses:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx

But if you have a single ingress controller in your cluster, just configure it to be the default:

nginx.helm.values.yml

controller:
  ingressClass:
    create: true # default
    setAsDefaultIngress: true

No more need for the ingressClassName field. Ever.
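
For instance, this minimal ingress (hypothetical host and service names) is now handled by NGINX without any class field:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: my.website.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80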

2. Set SSL/TLS termination on AWS load balancer

By default with Kubernetes incoming traffic, SSL/TLS termination has to be handled by each target application, one by one. Another application means another TLS termination to configure, along with certificate management.

A simple yet powerful way of abstracting TLS handling is to terminate it on the load balancer, and use plain HTTP inside the cluster by default.

lb tls termination

As a prerequisite, you have to request a public ACM certificate in AWS.

Once you have the certificate ARN, use it in the configuration below, under the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation:

nginx.helm.values.yml

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:94xxxxxxx:certificate/2c0c2512-a829-4dd5-bc06-b3yyyyy
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https" # If you don't specify this annotation, controller creates TLS listener for all the service ports
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
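
Once applied, your cloud controller provisions the NLB; you can retrieve its address from the controller service (assuming the release name ngx in the system namespace, as in the Helm command above):

kubectl -n system get service ngx-ingress-nginx-controller
# the EXTERNAL-IP column shows the DNS name of the provisioned load balancer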

3. Use professional error pages

By default, NGINX ingress controller gives you neutral yet boring error pages:

NGINX 404 default error page

These can be replaced with polished and animated ones, such as this one from tarampampam's repository:

NGINX 404 custom error page

It has some nice side features, like automatic light/dark modes, and routing details that can be displayed for debugging purposes.

Examples from multiple themes are showcased here for everyone to choose from.

Once you have found your theme, configure your favorite ingress controller:

nginx.helm.values.yml

controller:
  config:
    custom-http-errors: 404,408,500,501,502,503,504,505

# Prepackaged default error pages from https://github.com/tarampampam/error-pages/wiki/Kubernetes-&-ingress-nginx
# multiple themes here: https://tarampampam.github.io/error-pages/
defaultBackend:
  enabled: true
  image:
    repository: ghcr.io/tarampampam/error-pages
    tag: 2.21 # latest as of 01/04/2023 here: https://github.com/tarampampam/error-pages/pkgs/container/error-pages
  extraEnvs:
    - name: TEMPLATE_NAME
      value: lost-in-space # one of: app-down, cats, connection, ghost, hacker-terminal, l7-dark, l7-light, lost-in-space, matrix, noise, shuffle
    - name: SHOW_DETAILS # Optional: enables the output of additional information on error pages
      value: "false"
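
You can quickly check the result by requesting a host that matches no ingress rule, which lands on the default backend (placeholder address, to adapt):

curl -s http://<load-balancer-address>/ -H "Host: unknown.example.com"
# returns the themed 404 page instead of the plain NGINX one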

4. Redirect users' HTTP calls to the HTTPS port

Once all your web routes are configured to handle SSL/TLS/HTTPS, HTTP routes have no reason to exist, and are even dangerous to keep, security-wise.

Instead of disabling the port, which can be annoying to your users, you can automatically redirect HTTP to HTTPS with this configuration:

nginx.helm.values.yml

controller:
  containerPort:
    http: 80
    https: 443
    tohttps: 2443 # from https://github.com/kubernetes/ingress-nginx/issues/8017

  service:
    enableHttp: true
    enableHttps: true
    targetPorts:
      http: tohttps # from https://github.com/kubernetes/ingress-nginx/issues/8017
      https: https

  # Will add custom configuration options to Nginx ConfigMap
  config:
    # from https://github.com/kubernetes/ingress-nginx/issues/8017
    http-snippet: |
      server{
        listen 2443;
        return 308 https://$host$request_uri;
      }
    use-forwarded-headers: "true" # from https://github.com/kubernetes/ingress-nginx/issues/1957
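
A quick way to verify the redirection, with a hypothetical domain:

curl -I http://my.website.com/some/path
# HTTP/1.1 308 Permanent Redirect
# Location: https://my.website.com/some/path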

5. Rewrite internal redirects from HTTP to HTTPS

When you terminate TLS on the load balancer or the ingress controller, applications do not know about the incoming TLS calls: everything inside the cluster is plain HTTP. Hence, when an application needs to redirect you to another path inside the cluster, it might redirect you over HTTP, the same protocol it received.

For each ingress redirecting internally, apply this configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-server
  annotations:
    nginx.ingress.kubernetes.io/proxy-redirect-from: http://
    nginx.ingress.kubernetes.io/proxy-redirect-to: https://
spec:
  # [...]
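
To verify the effect, inspect the Location header of a response that redirects internally (hypothetical host and paths):

curl -sI https://my.website.com/auth | grep -i '^location'
# location: https://my.website.com/login (rewritten from http:// by the annotations)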

6. Set a valid default certificate when TLS is terminated on NGINX Ingress Controller

When you don't have the option to terminate TLS on the load balancer, the NGINX Ingress Controller can be used to do the TLS termination. It would be too long to detail here; if needed, you can find literature on the internet, such as kubernetes + ingress + cert-manager + letsencrypt = https, or Installing an NGINX Ingress controller with a Let's Encrypt certificate manager, or else How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

When this scenario is in place, each ingress route gets its own certificate; it can be the same certificate for all. It can even be the same secret, if the services are in the same namespace.

But the default NGINX certificate, served for non-configured routes, will still be the NGINX self-signed certificate.

To fix that, you can reuse a matching wildcard certificate that you already have somewhere in the cluster, generated using Cert-Manager. The NGINX ingress controller can be configured to target it, even in another namespace:

nginx.helm.values.yml

controller:
  extraArgs:
    default-ssl-certificate: "my-namespace/my-certificate"
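
If you don't have such a wildcard certificate yet, here is a minimal Cert-Manager sketch matching the configuration above (the letsencrypt-prod ClusterIssuer is hypothetical, and it must support DNS-01 validation for wildcard names):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-certificate
  namespace: my-namespace
spec:
  secretName: my-certificate # the secret referenced by default-ssl-certificate above
  issuerRef:
    name: letsencrypt-prod # hypothetical ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
    - "*.my-website.com"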

7. Allow large file transfer

By default, the NGINX ingress controller allows a maximum payload size of 1 MB.

For each ingress route where you need more, apply this configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
[...]
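
To test the new limit, you can push a big file through the ingress (hypothetical upload endpoint):

# create a 50 MB test file and send it through the ingress
dd if=/dev/zero of=/tmp/big.bin bs=1M count=50
curl -F "file=@/tmp/big.bin" https://my.website.com/upload
# without the annotation, NGINX would reply "413 Request Entity Too Large"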


8. Autoscale the ingress controller

Eventually, the traffic of your web application will grow, and the ingress controller's initial configuration may become insufficient.

One easy way to autoscale is to use a DaemonSet, with one pod on each node:

nginx.helm.values.yml

controller:
  kind: DaemonSet # Deployment or DaemonSet

Another way is autoscaling on NGINX CPU and memory:

nginx.helm.values.yml

controller:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 3
    targetCPUUtilizationPercentage: 200
    targetMemoryUtilizationPercentage: 200
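
With this configuration, the chart creates a HorizontalPodAutoscaler targeting the controller; you can watch it with:

kubectl -n system get hpa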

If this is not sufficient, gather your incoming connection metrics and autoscale based on them. This requires complex operations, so we simply refer you to the excellent article Autoscaling Ingress controllers in Kubernetes by Daniele Polencic.

9. Stick user sessions to the same targeted pod

Applications in a Kubernetes cluster should be mostly stateless, but often there is still an ephemeral session depending on the pod the user is reaching. If the user ends up on another pod, the session can be disrupted. In this case we need a "sticky session".

Sticky sessions are enabled on the ingress side:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # sticky session, from documentation: https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent" # change to "balanced" (default) to redistribute some sessions when scaling pods
    nginx.ingress.kubernetes.io/session-cookie-name: "name-distinguishing-services"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800" # in seconds, equivalent to 48h
[...]
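
You can confirm that the affinity cookie is set by looking at the response headers, which should contain roughly this (hypothetical domain, cookie value elided):

curl -I https://my.website.com/
# HTTP/2 200
# set-cookie: name-distinguishing-services=...; Path=/; Max-Age=172800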

10. Have access to the real client IP in applications

By default with managed load balancers, the client IP visible to your application is not the one of the real client.

You can have it set in the X-Real-Ip request header with this NGINX ingress controller configuration:

For AWS:

nginx.helm.values.yml

controller:
  service:
    externalTrafficPolicy: "Local"

Or for OVHcloud, from the official documentation:

nginx.helm.values.yml

controller:
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    proxy-real-ip-cidr: "xx.yy.zz.aa/nn"

This will be effective on Helm install, but not always on upgrade, depending on the status of your release; sometimes you have to edit the NGINX LoadBalancer service to set the value in spec.externalTrafficPolicy, and then restart the NGINX pods so that they use the config part (targeting the ConfigMap).
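
In that case, a minimal sketch of the manual fix (names derived from the ngx release above; adapt to yours):

kubectl -n system patch service ngx-ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
kubectl -n system rollout restart deployment ngx-ingress-nginx-controller # or daemonset/..., depending on the kind you chose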

More information in Kubernetes Documentation.

11. Set maintenance mode

You may have already wondered how to let your users know that you are currently deploying, to help them patiently wait for your website to be available again.

There are multiple lightweight ways to do that, and some of them involve NGINX ingress controller.

DevOps Directive has done an awesome job in this field, described in the article Kubernetes Maintenance Page. The solution uses a dedicated deployment plus a service, without any custom Docker image, that you can target with any ingress during maintenance, as sketched below.
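
A rough, hypothetical sketch of the idea: during maintenance, an ingress (or your existing one, temporarily edited) points the website host to the maintenance service (the maintenance-page name is hypothetical; see the linked article for the full setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: maintenance
spec:
  rules:
    - host: my.website.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: maintenance-page # hypothetical service serving the static maintenance page
                port:
                  number: 80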

12. Disable NGINX access logs for a particular service

In cases where you're dealing with a massively used ingress that's drowning out your NGINX logs, there's a solution. This often crops up in development environments, especially when a high-frequency tool like an APM server comes into play. These tools trigger frequent calls, even during idle user moments.

To combat this, leverage the nginx.ingress.kubernetes.io/enable-access-log annotation:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apm-server
  labels:
    app: apm-server
  annotations:
    nginx.ingress.kubernetes.io/enable-access-log: "false"
spec:
  rules:
    - host: apm.my-app.com
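
You can then confirm that this host disappeared from the access logs (adapt the deployment name to your release):

kubectl -n system logs deployment/ngx-ingress-nginx-controller --tail=100 | grep apm.my-app.com
# no more access log lines for this host; controller and error logs are unaffected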

13. Whitelist IP for a particular Ingress

To restrict access to a particular Ingress per source IP, you can set the NGINX whitelist-source-range annotation with some IPs and/or CIDRs. For example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-direct
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/24,172.10.0.1
spec:
  rules:
    - host: my.website.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80
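
From a non-whitelisted IP, requests are then rejected (hypothetical domain):

curl -I https://my.website.com/
# HTTP/2 403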

Wrapping up

We have covered multiple NGINX ingress controller use cases for web applications, which can be used in a large variety of situations.

If you think one or two others are common and missing here, don't hesitate to comment in the section below 🤓


Illustrations generated locally by Automatic1111 using Lyriel model


This article was enhanced with the assistance of an AI language model to ensure clarity and accuracy in the content, as English is not my native language.

Top comments (11)

Rémi Verchère

Thanks for the blogpost!

Concerning the 1st point, "Set NGINX as the default ingress controller": the kubernetes.io/ingress.class annotation is deprecated since Kubernetes 1.18; prefer using ingressClassName.

One tip: if you love logs and metrics dashboards, you can change the default logs configuration, with JSON support and geo-ip + MaxMind, and enable metrics:

controller:
  config:
    log-format-escape-json: "true"
    log-format-upstream: '{"msec": "$msec", "connection": "$connection", "connection_requests": "$connection_requests", "pid": "$pid", "request_id": "$request_id", "request_length": "$request_length", "remote_addr": "$remote_addr", "remote_user": "$remote_user", "remote_port": "$remote_port", "time_local": "$time_local", "time_iso8601": "$time_iso8601", "request": "$request", "request_uri": "$request_uri", "args": "$args", "status": "$status", "body_bytes_sent": "$body_bytes_sent", "bytes_sent": "$bytes_sent", "http_referer": "$http_referer", "http_user_agent": "$http_user_agent", "http_x_forwarded_for": "$http_x_forwarded_for", "http_host": "$http_host", "server_name": "$server_name", "request_time": "$request_time", "upstream": "$upstream_addr", "upstream_connect_time": "$upstream_connect_time", "upstream_header_time": "$upstream_header_time", "upstream_response_time": "$upstream_response_time", "upstream_response_length": "$upstream_response_length", "upstream_cache_status": "$upstream_cache_status", "ssl_protocol": "$ssl_protocol", "ssl_cipher": "$ssl_cipher", "scheme": "$scheme", "request_method": "$request_method", "server_protocol": "$server_protocol", "pipe": "$pipe", "gzip_ratio": "$gzip_ratio", "http_cf_ray": "$http_cf_ray", "geoip_country_code": "$geoip_country_code" }'
    use-geoip2: "true"
  maxmindLicenseKey: changeme

  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
Benoit COUETIL 💫

Thank you Remi for your valuable insights 🤗

I will update with the ingressClassName.

For the tip, what backend do you have in mind for logs/metrics ? ELK ? I have yet to test that part 🤓

Rémi Verchère

On my side, I'm using Loki & Prometheus ;)
You can get some dashboards from here; you'll need to build one for the logs (you can still explore them).

Benoit COUETIL 💫

Thanks 🙏

Article updated with ingressClassName ✌️

Boris

Thanks for sharing your experience ;)

Benoit COUETIL 💫

Thanks, I appreciate your feedback 🤗

Don't hesitate to share some use cases if you think they are missing and deserve a place in the list 😉

Benoit COUETIL 💫

Many thanks to @K8SArchitect for spreading this article on twitter yesterday 🤗

Adesoji1

So if I want to deploy to a registered domain, is this the approach to use?

Benoit COUETIL 💫

Do you already have a kubernetes cluster, or are you trying to evaluate if Kubernetes + NGINX is the right approach ?

Can you give more info about your context ?

Adesoji1

I want to deploy a Java backend and Angular frontend; I have the NGINX configuration set up already in my Dockerfile. I have a registered domain to use and I want to deploy to AWS using Terraform + Jenkins + Docker + Kubernetes, therefore I want to know if the approach in your tutorial is applicable to my task?

Benoit COUETIL 💫

If you have a domain and a Kubernetes cluster, yes, it is applicable. The fact that you have NGINX conf in your Dockerfile (for the frontend) may not be relevant: when using the NGINX Ingress Controller, we generally remove NGINX specificities from inside the applications.