Arseny Zinchenko

Originally published at rtfm.co.ua

Pritunl: running VPN in Kubernetes

Pritunl is a VPN server with a bunch of advanced security and access control features.

In fact, it is a wrapper over OpenVPN that adds access control on top of it in the form of Organizations, users, and routes.

The task is to deploy a Pritunl test instance in Kubernetes so we can take a closer look at it.

For now, we will use the free version; later, we will look at the paid one. Differences and costs can be found here.

We will run it in Minikube, and for the installation we will use the Helm chart from Dysnix.

Running Pritunl in Kubernetes
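If there is no cluster running yet, a minimal local setup could look like this (a sketch assuming the VirtualBox driver, since this post runs Kubernetes in VirtualBox):

$ minikube start --driver=virtualbox
$ helm version --short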

Create a namespace:

$ kubectl create ns pritunl-local
namespace/pritunl-local created

Add a repository:

$ helm repo add dysnix https://dysnix.github.io/charts
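Update the local chart index and make sure the chart is available:

$ helm repo update
$ helm search repo dysnix/pritunl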

And install the chart with Pritunl:

$ helm -n pritunl-local install pritunl dysnix/pritunl
…
Pritunl default access credentials:
export POD_ID=$(kubectl get pod --namespace pritunl-local -l app=pritunl,release=pritunl -o jsonpath='{.items[0].metadata.name}')
kubectl exec -t -i --namespace pritunl-local $POD_ID pritunl default-password
…
export VPN_IP=$(kubectl get svc --namespace pritunl-local pritunl --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo "VPN access IP address: ${VPN_IP}"
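If anything needs tuning before the installation (ports, resources, MongoDB settings), the chart's defaults can be dumped and overridden in the usual Helm way; the exact keys are defined by the Dysnix chart itself, so check the dumped values.yaml rather than this sketch:

$ helm show values dysnix/pritunl > values.yaml
$ helm -n pritunl-local upgrade --install pritunl dysnix/pritunl -f values.yaml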

Check the pods:

$ kubectl -n pritunl-local get pod
NAME READY STATUS RESTARTS AGE
pritunl-54dd47dc4d-672xw 1/1 Running 0 31s
pritunl-mongodb-557b7cd849-d8zmj 1/1 Running 0 31s
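If a Pod gets stuck in a non-Running state, the usual first checks apply (assuming the Deployment is named pritunl, as the Pod names above suggest):

$ kubectl -n pritunl-local describe pod -l app=pritunl
$ kubectl -n pritunl-local logs deploy/pritunl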

Get the default credentials from the Pritunl pod:

$ kubectl exec -t -i --namespace pritunl-local pritunl-54dd47dc4d-672xw -- pritunl default-password
…
Administrator default password:
username: “pritunl”
password: “zZymAt1tH2If”
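If the default password was already changed and lost, Pritunl's documentation also describes a reset-password command, which can be run in the same Pod:

$ kubectl exec -t -i --namespace pritunl-local pritunl-54dd47dc4d-672xw -- pritunl reset-password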

Find its Services:

$ kubectl -n pritunl-local get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pritunl LoadBalancer 10.104.33.93 <pending> 1194:32350/TCP 116s
pritunl-mongodb ClusterIP 10.97.144.132 <none> 27017/TCP 116s
pritunl-web ClusterIP 10.98.31.71 <none> 443/TCP 116s

Here, the LoadBalancer pritunl is for client access to the VPN server, and the pritunl-web ClusterIP service is for accessing the web interface.

Forward a port to the web:

$ kubectl -n pritunl-local port-forward svc/pritunl-web 8443:443
Forwarding from 127.0.0.1:8443 -> 443
Forwarding from [::1]:8443 -> 443

Open https://localhost:8443:

Log in and get into the initial settings:

Here, the Public Address will automatically be set to the public address of the host Pritunl itself is running on; later, it will be substituted into the client configs as the VPN host address.

Since our Pritunl is running in Kubernetes, which runs in VirtualBox, which runs on Linux on a regular home PC, this address does not suit us; we will return to it later. For now, you can leave it as it is.

The rest of the settings are not interesting to us yet.

Setting up Pritunl VPN

Organization, Users

See Initial Setup.

To group users, Pritunl has Groups, but they are available only in the full (paid) version; we will look at them later.

Also, users can be grouped through Organizations.

Go to Users, add an Organization:

Add a user:

PIN and email are optional and not needed now.

Pritunl Server and routes

See Server configuration.

Go to Servers, add a new one:

Here:

  • DNS Server: a DNS server for clients
  • Port, Protocol: the port and protocol for the OpenVPN that will run "inside" Pritunl and accept connections from our users
  • Virtual Network: a network from whose address pool private IPs will be allocated for clients

I would single out the Virtual Network 172.16.0.0: then our home network, the Kubernetes network, and the client IPs will all differ, which makes debugging simpler; see IPv4 Private Address Space and Filtering.
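Before settling on the Virtual Network, it is worth listing the ranges already in use on the machine, so the new pool does not overlap any of them:

$ ip -4 addr show
$ ip route show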

At the same time, it is important that the Server's port and protocol here match those on the LoadBalancer: 1194 TCP.

I.e., requests from the working machine will go through the following route:

  • 192.168.3.0/24: the home network
  • then into the VirtualBox network 192.168.59.1/24 (see Proxy)
  • then to the LoadBalancer in the Kubernetes network 10.96.0.0/12
  • and the LoadBalancer sends the request to the Kubernetes Pod, where OpenVPN is listening on TCP port 1194

Check LoadBalancer itself:

$ kubectl -n pritunl-local get svc pritunl
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pritunl LoadBalancer 10.104.33.93 <pending> 1194:32350/TCP 22m

The port is 1194, the protocol is TCP. We will deal with the Pending status a bit later.

Set the Virtual Network, port, and protocol for the Server:

Next, connect the Organization with all its users:

Start the server:

Check the process and port in the Kubernetes Pod: we can see our OpenVPN server on port 1194:

$ kubectl -n pritunl-local exec -ti pritunl-54dd47dc4d-672xw -- netstat -anp | grep 1194
Defaulted container "pritunl" out of: pritunl, alpine (init)
tcp6 0 0 :::1194 :::* LISTEN 1691/openvpn
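By the way, even while the LoadBalancer's EXTERNAL-IP is still Pending, the server is already reachable through the NodePort from the Service output above (32350), so connectivity can be probed with netcat directly against the Minikube node:

$ nc -vz $(minikube ip) 32350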

And let's go fix the LoadBalancer.

minikube tunnel

See Kubernetes: Minikube, and a LoadBalancer in the Pending status for full details; for now, just run minikube tunnel:

$ minikube tunnel
[sudo] password for setevoy:
Status:
machine: minikube
pid: 1467286
route: 10.96.0.0/12 -> 192.168.59.108
minikube: Running
services: [pritunl]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
…

Check the LoadBalancer:

$ kubectl -n pritunl-local get svc pritunl
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pritunl LoadBalancer 10.104.33.93 10.104.33.93 1194:32350/TCP 139m

The EXTERNAL-IP has the correct value now, so check the connection:

$ telnet 10.104.33.93 1194
Trying 10.104.33.93...
Connected to 10.104.33.93.
Escape character is '^]'.

Return to the main Settings and set the Public Address to the LoadBalancer IP:

OpenVPN — connect to the server

Go to Users, click Download profile:

Unpack the archive:

$ tar xfp local-user.tar
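The Public Address set earlier ends up in the profile's remote directive, so it is easy to check which host and port the client will dial:

$ grep remote local-org_local-user_local-server.ovpn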

And connect using a common OpenVPN client:

$ sudo openvpn --config local-org_local-user_local-server.ovpn
[sudo] password for setevoy:
…
2022-10-04 15:58:32 Attempting to establish TCP connection with [AF_INET]10.104.33.93:1194 [nonblock]
2022-10-04 15:58:32 TCP connection established with [AF_INET]10.104.33.93:1194
…
2022-10-04 15:58:33 net_addr_v4_add: 172.16.0.2/24 dev tun0
2022-10-04 15:58:33 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
2022-10-04 15:58:33 Initialization Sequence Completed
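At this point, the tun0 interface with an address from the Virtual Network pool (172.16.0.2/24 in the log above) can also be checked locally:

$ ip addr show tun0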

But now the network will not work:

$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
…

In our VPN, the route to 0.0.0.0/0 is directed through the same host on which the VPN itself is running, so we got a routing loop.
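A quick way to see how a destination is routed while the tunnel is up is ip route get; with the default route pushed into the tunnel, the test destination should resolve to tun0:

$ ip route get 1.1.1.1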

Go to Servers, stop the server and delete the Default route:

Click Add Route and add a route to 1.1.1.1 through our VPN; all other requests from the client will keep going through the usual routes:

Restart the connection:

$ sudo openvpn --config local-org_local-user_local-server.ovpn

Check the routes on the host machine, locally:

$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.3.1 0.0.0.0 UG 100 0 0 enp38s0
1.1.1.1 172.16.0.1 255.255.255.255 UGH 0 0 0 tun0
…

And check the network: the request went through the VPN:

$ traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets
1 172.16.0.1 (172.16.0.1) 0.211 ms 41.141 ms 41.146 ms
2 * * *
…

"It works!" (c)

Done.

Originally published at RTFM: Linux, DevOps, and system administration.

