Basic auth with NGINX Ingress Controller on Kubernetes

Jordan Gregory
I'm a dad, husband, gopher, rustacean, pythonista and devops guy

I couldn't find many good resources online when I went to do this yesterday, so I figured I would post what I did to accomplish the task.

This particular post will not try to explain the basics of Kubernetes Ingress controllers; if the need is there, I can write another post covering more of the basics. Feel free to comment on the post if you would like me to do so.

First things first, let me start by clearing up a few things:

The first order of business is that I am in no way affiliated with NGINX or F5, I am just a fan of their products.

The NGINX Ingress Controller, provided by F5 (the company that owns NGINX) is not the same thing as the ingress-nginx controller (the ingress provided and maintained by the Kubernetes community).

I don't have anything against the ingress-nginx controller, but there are a number of things that the NGINX Ingress Controller does that ingress-nginx does not, and I needed those particular features. Again, if you would like a breakdown of the differences, I could write another post, but I feel like F5 did a decent job with this post:
Which NGINX Ingress Controller am I using?

Both are open source (but the NGINX Ingress Controller has a paid support option) and I'm pretty sure that the following steps can be performed with the ingress-nginx controller as well, but I've not tested it.

With that out of the way, here is what I did to enable BASIC AUTH using the NGINX Ingress Controller by F5.

Assumptions and Necessary Pre-Work

So, my basic assumptions are these:

  1. You have a running Kubernetes cluster that you can access ... somewhere.
  2. You have the NGINX Ingress Controller installed (NGINX Plus is not necessary, but enabling snippets is necessary).

If you do not have the NGINX Ingress Controller installed, just follow the steps in the official installation guides.

The only real pre-work step is that you need a valid .htpasswd file to provide to the controller pods.

In my case, I did the following in an Ubuntu container:

```shell
apt-get update
apt-get install apache2-utils
htpasswd -c .htpasswd <my_first_user>
# The utility will prompt you for the user's password

cat .htpasswd
```

If you need more than a single user, feel free to rinse/repeat the htpasswd step for as many users as you need — just drop the `-c` flag after the first run, since `-c` creates a new file and would wipe any existing entries.

I then just copied the contents of that file via cat, but you could just as easily have mounted a local volume to the running container and saved the file there for easier use.
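If you don't have apache2-utils handy, an entry in the same `{SHA}` format that `htpasswd -s` emits can be sketched in a few lines of Python (the user and password here are hypothetical; note that SHA-1 is weak, and bcrypt via `htpasswd -B` is the better choice for anything serious):

```python
import base64
import hashlib

def htpasswd_sha1_line(user: str, password: str) -> str:
    """Build one .htpasswd line in the {SHA} format (what htpasswd -s produces)."""
    digest = base64.b64encode(hashlib.sha1(password.encode()).digest()).decode()
    return f"{user}:{{SHA}}{digest}"

# Append one line per user to build the file.
print(htpasswd_sha1_line("alice", "s3cret"))
```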

Adding the .htpasswd file to the existing/future NGINX Ingress Controller pods

First, we have to add the contents of the .htpasswd file to either a ConfigMap or a Secret. Given that the contents are credentials, I chose a Secret, so I created this resource:

```yaml
# Contents of htpasswd.yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: htpasswd
  namespace: nginx
stringData:
  .htpasswd: |
    << CONTENTS OF .HTPASSWD THAT YOU COPIED FROM PRE-WORK >>
```

and then simply applied it using `kubectl apply -f htpasswd.yaml` (feel free to call the file whatever you want).

If you happened to save the contents of .htpasswd to a file beforehand, you could have simply run `kubectl create secret generic htpasswd -n nginx --from-file=<your_file>`; in hindsight, that is probably what I would do.

Now, we have to add this file to the NGINX pods. To do this step, we need the name of the deployment we have to edit:

```shell
kubectl get deployments -n nginx

NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress  1/1     1            1           15d
```

Using this, we can simply edit the resource using the following command:

```shell
kubectl edit deployment nginx-ingress -n nginx
```

The modifications we have to make are as follows:

```yaml
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: nginx-ingress
        ...
        # THIS IS WHAT WE NEED TO ADD TO THE CONTAINER
        volumeMounts:
        - mountPath: /etc/apache2
          name: htpasswd
        ...
      # AND THIS IS WHAT WE NEED TO ADD TO THE OVERALL SPEC
      volumes:
      - name: htpasswd
        secret:
          secretName: htpasswd
          defaultMode: 420
          items:
          - key: .htpasswd
            path: .htpasswd
      ...
```

(Note that in a pod spec the secret volume references the Secret via `secretName`, not `name`.)

If you are comfortable with patching Kubernetes resources, that would be a viable alternative to just editing, but I was in a time crunch.
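For reference, the same change can be expressed as a strategic merge patch file (a sketch using the deployment, namespace, and Secret names from above):

```yaml
# Contents of htpasswd-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress
        volumeMounts:
        - mountPath: /etc/apache2
          name: htpasswd
      volumes:
      - name: htpasswd
        secret:
          secretName: htpasswd
          defaultMode: 420
          items:
          - key: .htpasswd
            path: .htpasswd
```

Applied with `kubectl patch deployment nginx-ingress -n nginx --patch-file htpasswd-patch.yaml`. A strategic merge patch merges the container entry by its `name` key, so the rest of the existing container configuration is preserved.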

Modifying your ingress to use the work

The last step is to modify your ingress to actually use everything we have done up to this point. So again, we need to get the name of your ingress and edit it.

```shell
kubectl get ingresses

NAME        CLASS   HOSTS                       ADDRESS   PORTS    AGE
my-ingress  nginx   my-service.whatever.myTld   1.2.3.4   80,443   15d
```

Go ahead and edit your ingress like so:

```shell
kubectl edit ingress my-ingress
```

The only changes we need to make are to the annotations of the ingress. The annotation we need to add is:

```yaml
metadata:
  ...
  annotations:
    ...
    # THIS IS THE ADDITION
    nginx.org/server-snippets: |
      auth_basic "my-ingress";
      auth_basic_user_file /etc/apache2/.htpasswd;
```
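Put together, a complete ingress with the annotation might look something like this (the hostname, service name, and port are made up; adjust them to your own setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.org/server-snippets: |
      auth_basic "my-ingress";
      auth_basic_user_file /etc/apache2/.htpasswd;
spec:
  ingressClassName: nginx
  rules:
  - host: my-service.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```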

Once you save the resource, go ahead and try to access your ingress ... and voilà! You are presented with the login popup we are all so familiar with.
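There's no magic behind the popup: the browser just resends the request with an `Authorization: Basic` header containing `base64(user:password)`, which NGINX checks against the .htpasswd file. A quick sketch of what the browser (or `curl -u user:password`) builds:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header value a client sends for basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# The classic example from RFC 7617:
print(basic_auth_header("Aladdin", "open sesame"))
# → Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

This is also why basic auth should only ever go over TLS — the credentials are merely encoded, not encrypted.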

Parting Words

Yeah, basic auth is kind of dumb, but there are still reasons for it. If you are willing to pay for NGINX Plus, you get access to automated JWT authentication and things of that nature, but I simply didn't need all of that at this layer. Other ingress controllers (Ambassador, Kong, etc.) may or may not make this process easier, but we don't use them, hence this post.

Like I said, this may very well work with the ingress-nginx controller as well, but I'll leave that to y'all to work out and report back.

Discussion (5)

Nikolas

Unfortunately, this workaround doesn't work with namespaced secrets, and namespaced ingress controllers (which are pointing to namespaced services).

Really annoying though that nginxinc ingress controller version doesn't support annotations for this particular functionality.

Jordan Gregory (Author)

I agree wholeheartedly; their controller ought to just handle things like this the way Kong and others do.

But yes, there are plenty of limitations to this approach; still, it did "work" in that context, lol.

There are plenty of reasons to use the (F5) NGINX ingress controller, but there seem to be more reasons not to.

SteveSims2

This is a cool post. Thanks. I am curious if you use cert-manager at all.

Jordan Gregory (Author)

Yes, we use cert-manager for the vast majority of cases, mostly because of how easy it is to use. There are some cluster deployments where I use Google-managed certificates as well, but those are more of a special use case.

Jordan Gregory (Author) • Edited

Usually, in a GCP/GKE context, I tend to deploy cert-manager and a namespace-scoped issuer, and I ask for a wildcard cert on a subdomain that I subsequently apply to the NGINX Ingress Controller for things that I forget to apply the cert-manager.io/issuer: "issuer" annotation to (or for things that literally don't need it). Otherwise, the ingresses that I use just request from that same issuer. It just depends on the use case.