Misha Bragin

Posted on • Originally published at netbird.io

Using NetBird for Kubernetes Access

Securing access to your Kubernetes clusters is crucial as inadequate security measures can lead to unauthorized access and potential data breaches. However, navigating the complexities of Kubernetes access security, especially when setting up strong authentication, authorization, and network policies, can be challenging.

NetBird simplifies Kubernetes access with its zero-configuration approach, leveraging WireGuard's simplicity and strength. It seamlessly integrates with various tools, offering transparency and high reliability as an open source solution.

In this article, you'll learn how to set up the NetBird CLI to ensure a secure connection to a Kubernetes cluster on the Google Cloud Platform (GCP), complete with a fail-safe route for uninterrupted access.

Using NetBird for Kubernetes Access

In this tutorial, you'll follow these steps to enable secure access to your Kubernetes cluster using NetBird:

  • Set up a NetBird agent on your local machine.
  • Create NetBird setup keys to be used in the Kubernetes cluster.
  • Set up a remote Kubernetes cluster. For this tutorial, Google Kubernetes Engine (GKE) was chosen; however, feel free to use any remote Kubernetes cluster.
  • Deploy the setup key in the remote Kubernetes cluster.
  • Configure a highly available (HA) network route in NetBird for uninterrupted access to the Kubernetes cluster.
  • Test secure access from the local client to the remote Kubernetes cluster.

Please note that to follow this tutorial, you need the kubectl and gcloud command-line tools installed and properly configured on your local machine.
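For example, you can quickly confirm that both tools are installed and that gcloud is pointed at the right project:

# Confirm the client tools are installed
kubectl version --client
gcloud version

# Confirm gcloud is authenticated and uses the intended project
gcloud config list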

With the procedure outlined, let's get started.

Set Up the NetBird Agent

If you haven't already, sign up for a free NetBird account. After confirming your email,
you'll receive a message with instructions on how to install the NetBird agent on your local machine.
This agent includes NetBird's CLI, which is used to control it.

NetBird client setup instructions

Follow the instructions for your operating system and finish by authorizing the NetBird app to access the NetBird dashboard using the command sudo netbird up:

Authorize the NetBird app

You should see your local machine in the Peers dashboard, which means that it's now connected to the NetBird peer-to-peer (P2P) network:

Peers: local machine
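You can also confirm the connection from the terminal. For example, the NetBird CLI's status command prints the management, signal, and peer connection state:

# Check the agent's connection status
netbird status

# Show more detail, including per-peer connection type (P2P or relayed)
netbird status -d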

Create Setup Keys

Now that you're connected to the network, it's time to add a Kubernetes cluster to this network. To do that, you need to create a setup key that your Kubernetes cluster can use to authenticate with NetBird.

Click on the Setup Keys tab on the left-hand side of your screen and then click on the Create Setup Key button:

Creating a setup key

A pop-up window opens where you can name the key, set the maximum number of peers that can use it, and determine its expiration date.
You can also select auto-assigned groups. With this option, any peer that registers with the key is automatically added to those groups,
so the access control rules defined for them apply to it right away. Set this field to gke-cluster; it saves you from adding
each peer to the group manually later in the tutorial.

You'll also see two toggle switches: one that makes the key reusable and another that marks peers registered with it as
ephemeral.
Reusable setup keys are useful when there is no human behind a machine who can complete SSO and MFA, for example,
when associating servers with a NetBird account
so they can authenticate to the network automatically. Ephemeral keys, on the other hand, automatically remove machines
from NetBird if they are offline for more than ten minutes. This prevents offline machines from cluttering
the NetBird peer dashboard, such as when a Kubernetes pod is replaced by a new one.

The following screenshot shows the creation of an ephemeral key:

Ephemeral setup key

Likewise, this screenshot shows how to create a reusable key:

Reusable setup key

For the reasons mentioned earlier, an ephemeral key is recommended for this tutorial, since Kubernetes may restart or replace
pods at any time, causing those peers to go offline.
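
For reference, a setup key replaces the interactive SSO login when a machine joins the network. On a plain Linux host, the same key created here could be used like this:

# Join a machine to the NetBird network non-interactively with a setup key
sudo netbird up --setup-key <YOUR_SETUP_KEY>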

Set Up a Remote Kubernetes Cluster

To ensure a highly available network route with NetBird, there are some considerations to keep in mind when deploying the GKE cluster:

  • You should create, at a bare minimum, a regional cluster or a multizonal cluster with nodes distributed across multiple zones within a region to ensure that the cluster is resilient to zonal failures.
  • You need to deploy at least three nodes to ensure high availability.
  • It's highly recommended that you enable cluster autoscaling to ensure that there are always at least three nodes running.

You can use the Google Cloud console UI to set up a Kubernetes cluster like the one described earlier, or you can adapt this gcloud command to fit your needs:



gcloud container --project "YOUR_PROJECT_ID" clusters create "netbird-cluster-1" \
--no-enable-basic-auth --release-channel "regular" \
--machine-type "e2-small" --image-type "COS_CONTAINERD" --disk-type "pd-standard" \
--disk-size "100" --metadata disable-legacy-endpoints=true \
--num-nodes "1" --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM --enable-ip-alias \
--enable-autoscaling --min-nodes "1" --max-nodes "2" \
--network "projects/YOUR_PROJECT_ID/global/networks/default" \
--subnetwork "projects/YOUR_PROJECT_ID/regions/us-east1/subnetworks/default" \
--no-enable-intra-node-visibility --default-max-pods-per-node "110" \
--security-posture=standard --workload-vulnerability-scanning=disabled \
--no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
--enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 \
--enable-managed-prometheus --enable-shielded-nodes \
--region "us-east1" --node-locations "us-east1-b,us-east1-c,us-east1-d"



Replace YOUR_PROJECT_ID with the ID of your project. Additionally, replace the region us-east1 and the zones us-east1-b, us-east1-c, and us-east1-d according to your preferences.

If you want to create a cluster with a specific version, use the --cluster-version flag to match the latest available version in your region.
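
If you're unsure which versions are available, you can list them for your region; a quick check, assuming the us-east1 region used below:

# List the GKE versions available in the region's release channels
gcloud container get-server-config --region us-east1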

Pay attention to the machine type: here, e2-small provides NetBird with sufficient CPU and memory resources. --num-nodes is set to 1 because it specifies the number of nodes per zone (three nodes in total across the three zones), and the --enable-autoscaling flag allows a maximum of two nodes per zone, or six nodes in total.

Once the cluster is deployed, log in to the Google Cloud console and check that everything is working as expected (the cluster status is Ok and the nodes' status is Ready):

GKE cluster in the Google Cloud console
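
If kubectl isn't pointed at the new cluster yet, fetch its credentials first; a minimal example using the cluster name and region from the gcloud command above:

# Configure kubectl to talk to the new GKE cluster
gcloud container clusters get-credentials netbird-cluster-1 \
  --region us-east1 --project YOUR_PROJECT_ID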

You can also verify the nodes using kubectl from your local machine:



$ kubectl get nodes
NAME                                               STATUS   ROLES    AGE     VERSION
gke-netbird-cluster-1-default-pool-6969d6cf-w5fd   Ready    <none>   5m13s   v1.27.8-gke.1067004
gke-netbird-cluster-1-default-pool-7ddbb961-8rkf   Ready    <none>   5m14s   v1.27.8-gke.1067004
gke-netbird-cluster-1-default-pool-a7de6516-rbb0   Ready    <none>   5m14s   v1.27.8-gke.1067004



With the GKE cluster up and running, it's time to add the NetBird ephemeral key.

Add the NetBird Ephemeral Setup Key to the Kubernetes Cluster

To register your Kubernetes cluster to the NetBird secure P2P network, create a YAML file on your local machine called netbird-client.yaml and paste the following content in it:



apiVersion: apps/v1
kind: Deployment
metadata:
  name: netbird
spec:
  selector:
    matchLabels:
      app: netbird
  replicas: 3
  template:
    metadata:
      labels:
        app: netbird
    spec:
      containers:
        - name: netbird
          image: netbirdio/netbird:latest
          env:
            - name: NB_SETUP_KEY
              value: <YOUR_SETUP_KEY>
            - name: NB_HOSTNAME
              value: "netbird-cluster-1"
            - name: NB_LOG_LEVEL
              value: "info"
          volumeMounts:
            - name: netbird-client
              mountPath: /etc/netbird
          resources:
            requests:
              memory: "128Mi"
              cpu: "500m"
          securityContext:
            privileged: true
            runAsUser: 0
            runAsGroup: 0
            capabilities:
              add:
                - NET_ADMIN
                - SYS_RESOURCE
                - SYS_ADMIN
      volumes:
        - name: netbird-client
          emptyDir: {}



Let's break down the key arguments of this deployment:

  • replicas: 3 ensures that three replicas are deployed (one per node in this three-node cluster), which is necessary for HA.
  • image: netbirdio/netbird:latest uses the latest available image. For production, it's best practice to pin a specific version to avoid conflicts during updates (see the snippet after this list).
  • The requests parameter ensures that NetBird has a minimum guaranteed amount of resources to run properly. It's best practice to assign requests and/or limits to any app or service running on your cluster, which includes the NetBird client.
  • privileged: true is required since, generally, modifying the routing table requires root permissions.
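
For reference, here's how the relevant part of the container spec might look with a pinned image and explicit limits in addition to the requests. The version tag below is only a placeholder, so check the NetBird releases page for the current stable release:

          image: netbirdio/netbird:0.28.4   # placeholder tag; pin the latest stable release
          resources:
            requests:
              memory: "128Mi"
              cpu: "500m"
            limits:             # optional upper bound so the client can't consume
              memory: "256Mi"   # more than expected on a small node
              cpu: "1"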

Remember to replace YOUR_SETUP_KEY with the previously created ephemeral setup key. Once you're ready, add the ephemeral key to the cluster by running the following command:



kubectl apply -f netbird-client.yaml



You can check the status of the deployment using kubectl get pods:



kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
netbird-7dc884d77f-285dh   1/1     Running   0          96s
netbird-7dc884d77f-cfq69   1/1     Running   0          96s
netbird-7dc884d77f-x56nh   1/1     Running   0          96s


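If a pod doesn't reach the Running state, its logs usually reveal authentication or connectivity issues; for example:

# Tail the NetBird client logs from the deployment (picks one pod)
kubectl logs deployment/netbird --tail=50

# Or follow a specific pod's logs
kubectl logs netbird-7dc884d77f-285dh -f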

At this point, if you go to the NetBird Peers tab, you should see the GKE nodes:

GKE Peers view

With your local machine and GKE nodes now part of the NetBird P2P network, you're ready to establish a secure HA network route for communication.

Set Up an HA Network Route in NetBird

Simply having all GKE nodes on the same peer-to-peer network doesn't guarantee accessibility. For instance, if you attempt to ping a node, you'll receive a "request timeout…" message. This happens because you need to set up a network route and define an access policy before you can connect to the nodes.

To simplify creating network routes and access policies, NetBird lets you organize peers into groups.
Recall that you assigned the gke-cluster group to the GKE nodes when you created the ephemeral key.
If you skipped that step, you can create the group and assign it to the peers manually now.

To do so, click on the first GKE node in the Peers view. This takes you to a screen with the peer details. On the right side, you should see Assigned Groups:

Assigned Groups

Start typing a name (e.g., gke-cluster) to create a new group. After you've entered the name, click Save Changes to apply the new configuration:

Create a new group

Repeat the procedure for the remaining two GKE nodes; remember to choose the same group and save the changes.

Once finished, hover the mouse pointer over the GROUPS column on any of the nodes to verify that they belong to the desired group:

Peers added to a group

Similarly, to simplify setting a route, you can assign a group to your local machine, such as local:

Local machine group

With the peer groups ready, it's time to move to the Network Routes tab, which you can find on the left-hand side of your screen:

Network routes

Before setting up your first network route, you need to determine which network you wish to connect with. For instance, if you want to access pods and services in your Kubernetes cluster, you'll need the GKE cluster Pod IPv4 and IPv4 service ranges. To find these, navigate to the Networking section in the Google Cloud console:

Google Cloud console Networking section
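
Alternatively, you can read both ranges from the command line; a sketch assuming the cluster name and region used earlier:

# Print the pod and service IPv4 ranges of the cluster
gcloud container clusters describe netbird-cluster-1 --region us-east1 \
  --format="value(clusterIpv4Cidr,servicesIpv4Cidr)"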

Click on the Add Route button, and a pop-up opens, allowing you to create a new route.

Input the Network Range, then click on Peer Group and choose gke-cluster (or the group you've established for your nodes). Under Distribution Groups, pick the local group you just created to make this route known to your local machine:

Create network route

In the Name & Description tab, provide a name (network identifier) for the route and click Add Route to finalize its creation:

Naming a network route

The following is an example with two routes: one for the GKE pod range and another for the GKE service range. Note that the HIGH AVAILABILITY column shows 3 Peer(s) in green, indicating that each route is highly available:

Network routes view

While these routes create a link between your local machine and the GKE cluster, you still need one more component to enable communication: an access control policy (ACP).

Set Up an Access Control Policy to Access Your Kubernetes Cluster

To create an ACP, go to Access Control > Policies and click on Add Policy. A pop-up appears where you can define a new ACP. Select the local group as Source and the gke-cluster group as Destination. This allows bidirectional communication between the peers in both groups:

New ACP

Configuring security posture checks is beyond the scope of this guide, but in a nutshell, the Posture Checks tab lets you enhance access control by blocking peers that fail to meet certain conditions, such as client version, country of origin, and operating system.
If you are interested in this and other Zero Trust Security features, check the Open-Source Zero Trust Networking Guide.

To create the new policy, go to the Name & Description tab and enter an appropriate name for the rule (e.g., Local to GKE):

Name new ACP

To finish setting up the new ACP, select Add Policy.

The following are two example policies: Local to GKE and the NetBird Default policy. In this example, the default policy has been turned off, so there's only Local to GKE active:

ACP view

Congratulations! You just configured secure access to Kubernetes using NetBird P2P. To check the connection, navigate to the Peers tab and take note of the IP address or domain assigned by NetBird to any of the nodes.

The following image shows the response when pinging one of the GKE nodes:



$ ping -c 3 netbird-cluster-1.netbird.cloud
PING netbird-cluster-1.netbird.cloud (100.74.176.22): 56 data bytes
64 bytes from 100.74.176.22: icmp_seq=0 ttl=64 time=106.287 ms
64 bytes from 100.74.176.22: icmp_seq=1 ttl=64 time=107.516 ms
64 bytes from 100.74.176.22: icmp_seq=2 ttl=64 time=105.674 ms

--- netbird-cluster-1.netbird.cloud ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 105.674/106.492/107.516/0.766 ms

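Beyond pinging the nodes, you can also exercise the routed service range. For instance, the cluster's built-in kubernetes service has a ClusterIP inside the service CIDR, and any HTTP response from it (even a 401 or 403) confirms the route and policy are working:

# Find the ClusterIP of the default kubernetes service
kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'

# Probe the API server through the routed ClusterIP (replace with the IP above)
curl -k https://<CLUSTER_IP>:443/version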




Conclusion

In this tutorial, you installed the NetBird CLI, configured a Kubernetes cluster on GCP, integrated a NetBird ephemeral key, created a high-availability route, and set an access control policy for secure local access.

Take your network management to the next level—explore more features and enhance your setup with NetBird's cutting-edge peer-to-peer network.
