Try Kubernetes with microk8s

Objective

At the Japan Virtual Summit 2021, I gave a session on Kubernetes targeted at those who have an Azure account or an IRIS evaluation license key. Some developers will want to try it out more quickly, so this article introduces the procedure for running IRIS Community Edition on microk8s, a lightweight implementation of k8s that can be used in virtual environments.

For your reference, my environment is as follows:

| Purpose | O/S | Host Type | IP |
| --- | --- | --- | --- |
| Client PC | Windows 10 Pro | Physical host | 192.168.11.5/24 |
| microk8s environment | Ubuntu 20.04.1 LTS | Virtual host (VMware) on the Windows 10 machine above | 192.168.11.49/24 |

Ubuntu was installed from ubuntu-20.04.1-live-server-amd64.iso with only the minimum server features.

Overview

This section describes the steps to deploy IRIS Community Edition as a Kubernetes StatefulSet.
For the persistent storage that keeps the IRIS system files and the user database outside the containers, we will use microk8s-hostpath or Longhorn.
You can find the code to use here.

This article uses the preview version of 2021.1. When 2021.1 is officially released, the image name will change, and the deployment will not work unless you update the image: value in the yml accordingly.
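For orientation, here is a minimal sketch of the kind of manifest mk8s-iris.yml contains, reconstructed from the outputs shown later in this article. The image tag and the app: iris labels are placeholders of mine; the actual file is in the repository linked above.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: data                 # pods become data-0, data-1
spec:
  serviceName: iris          # must match the headless Service below
  replicas: 2
  selector:
    matchLabels:
      app: iris              # placeholder label
  template:
    metadata:
      labels:
        app: iris
    spec:
      containers:
      - name: iris
        image: containers.intersystems.com/intersystems/iris-community:2021.1-preview  # placeholder tag
        ports:
        - containerPort: 52773
        env:
        - name: ISC_DATA_DIRECTORY      # durable %SYS lands on the PV
          value: /iris-mgr/IRIS_conf.d
        volumeMounts:
        - name: dbv-mgr
          mountPath: /iris-mgr
        - name: dbv-data
          mountPath: /vol-data
  volumeClaimTemplates:      # yields PVCs dbv-mgr-data-0, dbv-data-data-0, ...
  - metadata:
      name: dbv-mgr
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: microk8s-hostpath
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: dbv-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: microk8s-hostpath
      resources:
        requests:
          storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: iris
spec:
  clusterIP: None            # headless Service for per-pod DNS (data-0.iris, ...)
  selector:
    app: iris
  ports:
  - port: 52773
---
apiVersion: v1
kind: Service
metadata:
  name: iris-ext
spec:
  type: LoadBalancer         # MetalLB assigns the external IP
  selector:
    app: iris
  ports:
  - port: 52773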

Installation

Install and start microk8s.

$ sudo snap install microk8s --classic --channel=1.20
$ sudo usermod -a -G microk8s $USER
$ sudo chown -f -R $USER ~/.kube
$ microk8s start
$ microk8s enable dns registry storage metallb
  ・
  ・
Enabling MetalLB
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'):192.168.11.110-192.168.11.130

You will be asked for the IP range to assign to the load balancer, so set an appropriate range. In my environment, the CIDR of the host running k8s is 192.168.11.49/24, so I designated 192.168.11.110-192.168.11.130 as a suitable free IP range.
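If you already know the range up front, MetalLB can also be enabled non-interactively by appending the range to the addon name:

$ microk8s enable metallb:192.168.11.110-192.168.11.130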

At this point, the single-node k8s environment is ready.

$ microk8s kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
ubuntu 

It is troublesome to type microk8s every time you run kubectl, so I configured an alias with the following command. In the examples that follow, the microk8s prefix is omitted.

$ sudo snap alias microk8s.kubectl kubectl
$ kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
ubuntu   Ready    <none>   10d   v1.20.7-34+df7df22a741dbc

To restore the original state

sudo snap unalias kubectl

Launch

$ kubectl apply -f mk8s-iris.yml

Since this is IRIS Community Edition, neither a license key nor imagePullSecrets for logging in to a container registry is specified.

After a few moments, two pods will be created, and IRIS is up and running.
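If you want to follow the startup in real time, kubectl can watch the pods as they are created:

$ kubectl get pod -w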

$ kubectl get pod
NAME     READY   STATUS    RESTARTS   AGE
data-0   1/1     Running   0          107s
data-1   1/1     Running   0          86s
$ kubectl get statefulset
NAME   READY   AGE
data   2/2     3m32s
$ kubectl get service
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)           AGE
kubernetes   ClusterIP      10.152.183.1     <none>           443/TCP           30m
iris         ClusterIP      None             <none>           52773/TCP         8m55s
iris-ext     LoadBalancer   10.152.183.137   192.168.11.110   52773:31707/TCP   8m55s

If a pod's STATUS does not reach Running, you can check its events with the following command. Possible causes include an incorrectly specified image name (so the pull fails) or a missing resource.

$ kubectl describe pod data-0
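You can also list the namespace's recent events in chronological order, which helps when the describe output alone is not conclusive:

$ kubectl get events --sort-by=.metadata.creationTimestamp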

The following command lets you log in to IRIS with O/S authentication.

$ kubectl exec -it data-0 -- iris session iris
Node: data-0, Instance: IRIS
USER>
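To leave the IRIS session and return to the shell, use the halt command:

USER>halt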

Access the IRIS management portal on individual pods

Use the command below to check the internal IP address of each pod.

$ kubectl get pod -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
data-0   1/1     Running   0          46m   10.1.243.202   ubuntu   <none>           <none>
data-1   1/1     Running   0          45m   10.1.243.203   ubuntu   <none>           <none>

Normally, these internal IPs cannot be reached directly from the host running kubectl (you would use kubectl port-forward instead). In this microk8s setup, however, they are reachable because everything runs on the same host.
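For reference, the portable way that works on any cluster is port forwarding; for example, to expose the web server port of data-0 locally:

$ kubectl port-forward pod/data-0 9092:52773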

My virtual Linux environment has no GUI, so I access the management portal from a Windows browser by running the following commands on the client PC.

C:\temp>ssh -L 9092:10.1.243.202:52773 YourLinuxUserName@192.168.11.49
C:\temp>ssh -L 9093:10.1.243.203:52773 YourLinuxUserName@192.168.11.49

Note that the internal IPs change every time the pods are recreated.

| Target | URL | USER | Password |
| --- | --- | --- | --- |
| IRIS on pod data-0 | http://localhost:9092/csp/sys/%25CSP.Portal.Home.zen | SuperUser | SYS |
| IRIS on pod data-1 | http://localhost:9093/csp/sys/%25CSP.Portal.Home.zen | SuperUser | SYS |

The password is specified via PasswordHash in the CPF.
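As an illustration only, such a CPF merge entry might look like the following; the values in angle brackets are placeholders (real hash and salt values must be generated, and the exact syntax should be verified against the InterSystems documentation):

[Startup]
PasswordHash=<hashed-password>,<salt>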

Check the database configuration. You can confirm that the following databases were created on the PVs.

| Database Name | Path |
| --- | --- |
| IRISSYS | /iris-mgr/IRIS_conf.d/mgr/ |
| TEST-DATA | /vol-data/TEST-DATA/ |

Stopping

Delete the created resource.

$ kubectl delete -f mk8s-iris.yml --wait
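You can confirm that the PVCs (and therefore the stored databases) survive this delete; the dbv-mgr-* and dbv-data-* claims should still be listed as Bound:

$ kubectl get pvc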

Note that this deletes the IRIS pods but, as the check above shows, keeps the PVs. The next time a pod with the same name is started, it is served the same volume as before. This allows you to separate the life cycle of the pods from the life cycle of the databases. You can also delete the PVCs, and with them the PVs, with the following command (the database contents will be permanently lost):

$ kubectl delete pvc --all

When shutting down the O/S, execute the following command first so that the k8s environment stops cleanly.

$ microk8s stop

After restarting the O/S, you can start the k8s environment with the following command.

$ microk8s start

If you want to delete the microk8s environment completely, run the following before microk8s stop. (It took quite a while in my environment; this is not something you need to run routinely.)

$ microk8s reset --destroy-storage

Observation

Storage location

Out of curiosity, where does /iris-mgr/ actually exist? microk8s is a standalone k8s environment, so if storageClassName is microk8s-hostpath, the actual files are on the same host. First, use kubectl get pv to check the created PVs.

$ kubectl apply -f mk8s-iris.yml
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS        REASON   AGE
pvc-ee660281-1de4-4115-a874-9e9c4cf68083   20Gi       RWX            Delete           Bound    container-registry/registry-claim   microk8s-hostpath            37m
pvc-772484b1-9199-4e23-9152-d74d6addd5ff   5Gi        RWO            Delete           Bound    default/dbv-data-data-0             microk8s-hostpath            10m
pvc-112aa77e-2f2f-4632-9eca-4801c4b3c6bb   5Gi        RWO            Delete           Bound    default/dbv-mgr-data-0              microk8s-hostpath            10m
pvc-e360ef36-627c-49a4-a975-26b7e83c6012   5Gi        RWO            Delete           Bound    default/dbv-mgr-data-1              microk8s-hostpath            9m55s
pvc-48ea60e8-338e-4e28-9580-b03c9988aad8   5Gi        RWO            Delete           Bound    default/dbv-data-data-1             microk8s-hostpath            9m55s

Now, let's describe default/dbv-mgr-data-0, which is used as the ISC_DATA_DIRECTORY for the data-0 pod.

$ kubectl describe pv pvc-112aa77e-2f2f-4632-9eca-4801c4b3c6bb
  ・
  ・
Source:
    Type:   HostPath (bare host directory volume)
    Path:   /var/snap/microk8s/common/default-storage/default-dbv-mgr-data-0-pvc-112aa77e-2f2f-4632-9eca-4801c4b3c6bb

This path is where the actual files reside.

$ ls /var/snap/microk8s/common/default-storage/default-dbv-mgr-data-0-pvc-112aa77e-2f2f-4632-9eca-4801c4b3c6bb/IRIS_conf.d/
ContainerCheck  csp  dist  httpd  iris.cpf  iris.cpf_20210517  _LastGood_.cpf  mgr

Do not use hostpath for storageClassName: unlike microk8s-hostpath, it would place multiple IRIS instances in the same folder, leaving them in a destroyed state.

Resolving hostnames

StatefulSet assigns a unique hostname to each pod, such as data-0, data-1, and so on, based on the value of metadata.name.
A headless Service is used so that these hostnames can be used for communication between pods.

kind: StatefulSet
metadata:
  name: data

kind: Service
spec:
  clusterIP: None # Headless Service


It is useful when using features such as sharding, which involves communication between nodes. There is no direct benefit in this example.

I want to use nslookup, but the container runtime (ctr) used by k8s cannot log in as root the way docker can. Also, the IRIS container images do not include sudo for security reasons, so you cannot apt install additional software after image build time. Instead, we will start an extra busybox pod and use its nslookup to check the hostnames.

$ kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
/ # nslookup data-0.iris
Server:    10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local

Name:      data-0.iris
Address 1: 10.1.243.202 data-0.iris.default.svc.cluster.local
/ #

You will notice that data-0.iris has been assigned an IP address of 10.1.243.202 with an FQDN of data-0.iris.default.svc.cluster.local. 10.152.183.10 is the DNS server provided by k8s. Likewise, data-1.iris is registered in the DNS.

Using your own images

Current versions of k8s no longer use Docker as the container runtime, so a separate Docker setup is required to build images.

k8s serves only as the runtime environment.

The following assumes that you have already set up Docker and docker-compose. The image can contain anything; here we will use simple as an example. This image provides a straightforward REST service in the MYAPP namespace. localhost:32000 is the built-in container registry enabled by microk8s, and we will push the image there.

$ git clone https://github.com/IRISMeister/simple.git
$ cd simple
$ ./build.sh
$ docker tag dpmeister/simple:latest localhost:32000/simple:latest
$ docker push localhost:32000/simple:latest

If you don't want to bother with the build process, a pre-built image is published as dpmeister/simple:latest, so you can pull it and use it directly as shown below.

$ docker pull dpmeister/simple:latest
$ docker tag dpmeister/simple:latest localhost:32000/simple:latest
$ docker push localhost:32000/simple:latest

Change the image to localhost:32000/simple by editing the yml. In addition, add a ModifyNamespace entry to the cpf actions to switch the data storage location from the database inside the container (MYAPP-DATA) to an external database (MYAPP-DATA-EXT), as sketched below. An edited file is available as mk8s-simple.yml (which looks almost the same as mk8s-iris.yml); use it to run the example.
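For illustration, the ModifyNamespace entry in the merge CPF might look like the following; this is my sketch, and the exact property names should be verified against the actual mk8s-simple.yml (Globals here means the namespace's default globals database):

[Actions]
ModifyNamespace:Name=MYAPP,Globals=MYAPP-DATA-EXT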

If you already have a pod running, delete it.

$ kubectl delete -f mk8s-iris.yml
$ kubectl delete pvc --all
$ kubectl apply -f mk8s-simple.yml
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)           AGE
kubernetes   ClusterIP      10.152.183.1     <none>           443/TCP           3h36m
iris         ClusterIP      None             <none>           52773/TCP         20m
iris-ext     LoadBalancer   10.152.183.224   192.168.11.110   52773:30308/TCP   20m

$ curl -s -H "Content-Type: application/json; charset=UTF-8" -H "Accept:application/json" "http://192.168.11.110:52773/csp/myapp/get" --user "appuser:SYS" | python3 -mjson.tool
{
    "HostName": "data-1",
    "UserName": "appuser",
    "Status": "OK",
    "TimeStamp": "05/17/2021 19:34:00",
    "ImageBuilt": "05/17/2021 10:06:27"
}

If you run curl repeatedly, the HostName (the host on which the REST service ran) may be either data-0 or data-1; this is because the requests are load balanced (as expected).
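You can watch the alternation by repeating the request and extracting the HostName field:

$ for i in 1 2 3 4 5 6; do curl -s --user "appuser:SYS" http://192.168.11.110:52773/csp/myapp/get | python3 -mjson.tool | grep HostName; done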

Community Edition allows a maximum of 5 sessions, and some earlier action may have exceeded that limit. If this happens, a message is logged indicating that the license limit has been exceeded.

$ kubectl logs data-0
  ・
  ・
05/17/21-19:21:17:417 (2334) 2 [Generic.Event] License limit exceeded 1 times since instance start.

When using Longhorn

To learn more about Longhorn, a distributed block storage system for Kubernetes, click here.

Install Longhorn and wait until all of its pods are READY.

$ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
$ kubectl -n longhorn-system get pods
NAME                                       READY   STATUS    RESTARTS   AGE
longhorn-ui-5b864949c4-72qkz               1/1     Running   0          4m3s
longhorn-manager-wfpnl                     1/1     Running   0          4m3s
longhorn-driver-deployer-ccb9974d5-w5mnz   1/1     Running   0          4m3s
instance-manager-e-5f14d35b                1/1     Running   0          3m28s
instance-manager-r-a8323182                1/1     Running   0          3m28s
engine-image-ei-611d1496-qscbp             1/1     Running   0          3m28s
csi-attacher-5df5c79d4b-gfncr              1/1     Running   0          3m21s
csi-attacher-5df5c79d4b-ndwjn              1/1     Running   0          3m21s
csi-provisioner-547dfff5dd-pj46m           1/1     Running   0          3m20s
csi-resizer-5d6f844cd8-22dpp               1/1     Running   0          3m20s
csi-provisioner-547dfff5dd-86w9h           1/1     Running   0          3m20s
csi-resizer-5d6f844cd8-zn97g               1/1     Running   0          3m20s
csi-resizer-5d6f844cd8-8nmfw               1/1     Running   0          3m20s
csi-provisioner-547dfff5dd-pmwsk           1/1     Running   0          3m20s
longhorn-csi-plugin-xsnj9                  2/2     Running   0          3m19s
csi-snapshotter-76c6f569f9-wt8sh           1/1     Running   0          3m19s
csi-snapshotter-76c6f569f9-w65xp           1/1     Running   0          3m19s
csi-attacher-5df5c79d4b-gcf4l              1/1     Running   0          3m21s
csi-snapshotter-76c6f569f9-fjx2h           1/1     Running   0          3m19s

Modify the storageClassName in mk8s-iris.yml to longhorn in both places where it appears.
If you are already running on microk8s-hostpath, delete all pods and PVCs first, and then follow the steps above. In short:

$ kubectl delete -f mk8s-iris.yml --wait
$ kubectl delete pvc --all
   edit mk8s-iris.yml
      before) storageClassName: microk8s-hostpath
      after)  storageClassName: longhorn

$ kubectl apply -f mk8s-iris.yml
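If you prefer to script the edit step (this assumes microk8s-hostpath appears only on the two storageClassName lines):

$ sed -i 's/microk8s-hostpath/longhorn/g' mk8s-iris.yml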

fsGroup is specified because the owner of a mounted Longhorn-provisioned volume is set to root. Without it, a protection error occurs when IRIS creates the database.

irisowner@data-0:~$ ls / -l
drwxr-xr-x   3 root      root         4096 May 18 15:40 vol-data
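For reference, this is roughly where fsGroup sits in the StatefulSet pod template; 51773 is, to my knowledge, the uid/gid of irisowner inside InterSystems containers, so verify it against the actual yml:

kind: StatefulSet
spec:
  template:
    spec:
      securityContext:
        fsGroup: 51773   # group ownership applied to mounted volumes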

You can delete Longhorn with the following command if you no longer need it.

$ kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

In case Longhorn was not removed cleanly the last time you used it, you may get the following error:

$ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
  ・
  ・
Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml": serviceaccounts "longhorn-service-account" is forbidden: unable to create new content in namespace longhorn-system because it is being terminated

Apparently, you can remove it by using longhorn-manager, found here. I stumbled over this myself, so I hope you find it helpful.

$ git clone https://github.com/longhorn/longhorn-manager.git
$ cd longhorn-manager
$ make
$ kubectl create -f deploy/uninstall/uninstall.yaml
podsecuritypolicy.policy/longhorn-uninstall-psp created
serviceaccount/longhorn-uninstall-service-account created
clusterrole.rbac.authorization.k8s.io/longhorn-uninstall-role created
clusterrolebinding.rbac.authorization.k8s.io/longhorn-uninstall-bind created
job.batch/longhorn-uninstall created
$ kubectl get job/longhorn-uninstall -w
NAME                 COMPLETIONS   DURATION   AGE
longhorn-uninstall   0/1           12s        14s
longhorn-uninstall   1/1           24s        26s
^C
$ kubectl delete -Rf deploy/install
$ kubectl delete -f deploy/uninstall/uninstall.yaml

InterSystems Kubernetes Operator

IKO also runs on microk8s; however, it is not available for Community Edition, so I decided not to cover it here.
