Preetham
Kubernetes Services and Deployments coming together

In this tutorial, we are going to look at how services and deployments come together to get the WordPress web application up and running.

By the end of this tutorial, we will have a DB deployment and an app deployment, each fronted by its own service.

After creating the service and deployment objects, I ran into another major issue. I debated whether to describe it here or to save it for the post with the fix. In the end I wanted to show it now, because I believe it helps us understand what purpose each individual component serves.

Shush! Back to our topic. Since we now understand pods, deployments, and services, I'll jump straight into the manifests.

db-service.yaml

---
apiVersion: v1
kind: Service
metadata: 
    name: wpdb-service
    labels:
        app: wordpress
        type: db-service
spec:
    selector:
        app: wordpress
        type: db
    ports:
        - protocol: TCP
          port: 3306
          targetPort: 3306
    clusterIP: None

The .spec.selector you see above will match the .spec.template.metadata.labels of the deployment's pod template - this is what links the service to the pods in the deployment. Note also clusterIP: None, which makes this a headless service: DNS lookups for wpdb-service resolve directly to the pod IPs rather than to a virtual cluster IP.
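Once both the service and the deployment (shown next) are applied, one way to confirm that link is to list the service's endpoints; each matching pod should appear there:

# Each pod matched by the selector shows up as an endpoint (IP:3306).
kubectl get endpoints wpdb-service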

db-deployment.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
    name: wordpress-db
    labels:
        app: wordpress
        type: db-deployment
spec:
    replicas: 2
    selector:
        matchLabels:
            app: wordpress
            type: db
    template:
        metadata:
            name: wordpress-db
            labels:
                app: wordpress
                type: db
        spec:
            containers:
                - name: wordpress-db-container
                  image: mysql:5.7
                  env:
                  - name: MYSQL_ROOT_PASSWORD
                    value: DEVOPS1
                  - name: MYSQL_DATABASE
                    value: wpdb
                  - name: MYSQL_USER
                    value: wpuser
                  - name: MYSQL_PASSWORD
                    value: DEVOPS12345
                  ports:
                    - containerPort: 3306
                      name: mysql

In db-deployment.yaml above, two replicas of the DB pod will always be created and maintained. The .spec.selector.matchLabels and .spec.template.metadata.labels must match for the deployment to manage its pods. We have passed environment variables to configure the database, and the WordPress pod will use these same values to connect to the DB.
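To sanity-check that the database was configured from those variables, one option is to open a MySQL shell inside one of the DB pods with the same credentials - a quick sketch, where <db-pod-name> is a placeholder for an actual pod name from kubectl get pods:

# Connect as the env-configured user to the env-configured database.
kubectl exec -it <db-pod-name> -- mysql -u wpuser -pDEVOPS12345 wpdb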

Below, I have combined the service and deployment for the WordPress app in a single YAML.

app.yaml

---
apiVersion: v1
kind: Service
metadata: 
    name: app-service
    labels:
        app: wordpress
        type: app-service
spec:
    selector:
        app: wordpress
        type: app
    type: LoadBalancer
    ports:
        - protocol: TCP
          port: 80
          targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
    name: wordpress-app
    labels:
        app: wordpress
        type: wordpress-deployment
spec:
    replicas: 3
    selector:
        matchLabels:
            app: wordpress
            type: app
    template:
        metadata:
            name: wordpress-app
            labels:
                app: wordpress
                type: app
        spec:
            containers:
            - name: wordpress-app
              image: wordpress
              env:
              - name: WORDPRESS_DB_HOST
                value: wpdb-service
              - name: WORDPRESS_DB_NAME
                value: wpdb
              - name: WORDPRESS_DB_USER
                value: wpuser
              - name: WORDPRESS_DB_PASSWORD
                value: DEVOPS12345
              ports:
                - containerPort: 80
                  name: wordpress

In the env section of the WordPress container, WORDPRESS_DB_HOST is set to the name of the DB service, wpdb-service (found in the metadata of the DB service). The other environment variables match the corresponding variables of the MySQL container.
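This works because the service name is resolvable through the cluster's DNS. If you want to see that resolution for yourself, a throwaway busybox pod is one way to check (dns-test is just an arbitrary name; the pod is removed when the command exits):

# Resolve the DB service name from inside the cluster.
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup wpdb-service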

Now let us apply these YAMLs and verify the result.
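Applying them is one command per file, using the filenames from above:

kubectl apply -f db-service.yaml
kubectl apply -f db-deployment.yaml
kubectl apply -f app.yaml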

kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
wordpress-app-556fbb7c44-67frw   1/1     Running   0          2d12h
wordpress-app-556fbb7c44-mkh75   1/1     Running   0          2d12h
wordpress-app-556fbb7c44-t4m4m   1/1     Running   0          2d12h
wordpress-db-d9949b65d-6d2xr     1/1     Running   0          101m
wordpress-db-d9949b65d-jk7sb     1/1     Running   0          101m

kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
wordpress-app   3/3     3            3           2d12h
wordpress-db    2/2     2            2           102m

kubectl get service
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
app-service    LoadBalancer   10.101.181.255   <pending>     80:30195/TCP   2d12h
kubernetes     ClusterIP      10.96.0.1        <none>        443/TCP        4d1h
wpdb-service   ClusterIP      10.111.171.206   <none>        3306/TCP       25h

We can see that the pods and deployments are up and running. Looking at app-service, I note that the application is exposed on node port 30195. (The EXTERNAL-IP is <pending> because this cluster has no cloud load balancer behind it, so we fall back to the node port.)

Now I can access the web application through my browser - by typing any one of the node names (the master or either of the two workers) along with that port.

Assuming the node name is abc.def.com and the port, as seen above, is 30195, type abc.def.com:30195.
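The same check works from a terminal; kubectl get nodes -o wide lists the node names and IPs if you don't have them handy (abc.def.com remains a placeholder, as above):

# Find the node addresses, then hit the NodePort on any one of them.
kubectl get nodes -o wide
curl http://abc.def.com:30195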

I am able to access the web application, and I see the WordPress setup screen.

If the environment variables of the WordPress and MySQL pods don't match, there will be no connectivity between the two pods, and WordPress will show a database connection error instead.

Now coming back: I finished the initial setup and configuration of WordPress and tried to log in.

Voila - here is the issue I was speaking of at the beginning. I am sent right back to the same setup page.

I dug around and found that, with two DB pods behind the service, each request is routed to one of them at random. The reason I was sent back to the setup page is that my login request landed on a different pod than the one the setup had written to. Had both pods mounted the same storage, the data accessible to them would have been identical.
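One way to confirm this diagnosis (not a real fix) is to temporarily scale the DB deployment down to a single replica, so that every query lands on the same pod:

# With one DB pod there is only one copy of the data, so the login
# works - confirming the problem is unshared state, not WordPress.
kubectl scale deployment wordpress-db --replicas=1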


We need a proper solution for this, which we will look at in the next post in the series.

On a lighter note, we could have achieved the same result with bare pods and services instead of deployments.


But that alone wouldn't justify using Kubernetes - plain container-to-container communication with Docker would look much the same.
