Deploy Any AI/ML Application On Kubernetes: A Step-by-Step Guide!

Pavan Belagatti

In today's fast-paced technological landscape, deploying AI/ML applications efficiently and effectively is paramount. Kubernetes, a powerful open-source platform, has emerged as a leading solution for managing and scaling containerized applications, ensuring they run seamlessly across various environments.

In this comprehensive guide, we will walk you through the step-by-step process of deploying any AI/ML application on Kubernetes. From containerizing your application to setting up a Kubernetes cluster to deploying your AI/ML application, this guide covers it all.

Let's embark on this learning adventure together!

Why Deploy GenAI Applications On Kubernetes?

Deploying AI/ML applications on Kubernetes provides a robust solution for managing complex AI/ML workloads. One of the primary benefits is scalability. Kubernetes can automatically scale the infrastructure, accommodating varying workloads efficiently, ensuring that resources are allocated effectively based on demand. This auto-scaling feature is crucial for handling large computations involved in AI/ML tasks.
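To make this concrete, here is a minimal HorizontalPodAutoscaler sketch that scales on CPU usage. It targets the genai-app deployment we create later in this guide; the replica bounds and the 70% CPU target are assumed example values, not requirements of the app.

# hpa.yaml: scale the genai-app deployment between 2 and 10 replicas based on CPU
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: genai-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: genai-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70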


Additionally, Kubernetes supports multi-cloud and hybrid-cloud environments, offering flexibility and avoiding vendor lock-in. It provides a consistent and unified environment for development, testing, and deployment, enhancing the collaboration between data scientists and engineers.

Kubernetes also ensures high availability and fault tolerance, automatically replacing or rescheduling containers that fail, ensuring the reliability and robustness of AI/ML applications. Furthermore, it simplifies many operational aspects, including updates and rollbacks, allowing teams to focus more on building AI/ML models rather than managing infrastructure.
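For instance, a rolling update and a rollback each come down to a single kubectl command. The commands below are a sketch against the genai-app deployment we create later in this guide; the v2 image tag is hypothetical.

# roll out a new image version (v2 is a hypothetical tag)
kubectl set image deployment/genai-app genai-app=pavansa/generativeai-node-app:v2

# watch the rolling update progress
kubectl rollout status deployment/genai-app

# revert to the previous version if something breaks
kubectl rollout undo deployment/genai-app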

Prerequisites

Before you start, make sure you have:

  • A GitHub account, with Git installed locally
  • Node.js and npm
  • Docker Desktop, along with a DockerHub account
  • Minikube and kubectl
  • An OpenAI API key
  • A free SingleStore cloud account (for the database integration section)

Tutorial

First, we will clone the already available openai-quickstart-node repository to our local machine.

git clone https://github.com/pavanbelagatti/openai-quickstart-node.git

Let's navigate into the project directory.

cd openai-quickstart-node

Install the project requirements and dependencies.

npm install

Create a .env file and add your OpenAI API Key.

touch .env

In your .env file, add your OpenAI API key as an environment variable, as shown below.

OPENAI_API_KEY=<Add Your OpenAI API Key>

Run the application using the command below.

npm run dev

You should see the application running at http://localhost:3000.

Let's write a Dockerfile for our application to containerize it.

touch Dockerfile

Add the following instructions to it.

# Use the official Node.js image as a parent image
FROM node:14-alpine as build

# Set the working directory in the Docker container
WORKDIR /app

# Copy the package.json and package-lock.json files into the container at /app
COPY package*.json ./

# Install the dependencies
RUN npm install

# Copy the local files into the container at /app
COPY . .

# Build the application
RUN npm run build

# Start from a smaller image to reduce image size
FROM node:14-alpine as run

# Set the working directory in the Docker container
WORKDIR /app

# Copy over dependencies
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/.next ./.next
COPY --from=build /app/public ./public
COPY --from=build /app/package*.json ./

# Expose port 3000 for the app to be accessible externally
EXPOSE 3000

# Command to run the application
CMD ["npm", "start"]
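Since the build stage copies the entire project directory with COPY . ., consider adding a .dockerignore file so local artifacts and your .env file stay out of the build context. This file isn't part of the original repo; a minimal sketch:

# .dockerignore: keep local artifacts and secrets out of the build context
node_modules
.next
.env

Excluding .env means the key must be supplied at runtime instead, for example with docker run -e OPENAI_API_KEY=... locally, or with a Kubernetes Secret once deployed (shown later in this guide).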

Let's build, tag, and push this image to DockerHub.

docker build -t <image name> .

Note: I am naming my image generativeai-node-app.

The image is built! Let's launch the Docker container, publishing port 3000, the port our application listens on:

docker run -p 3000:3000 generativeai-node-app

Let's build the image again, this time tagged with our DockerHub username so we can push it.

docker build -t <your dockerhub username>/<image name> .

Let's push the image to DockerHub.

docker push <your dockerhub username>/<image name>

You can confirm the image was pushed by checking your DockerHub account.

Deploy and expose our application on Kubernetes

To deploy and expose the application, we need two YAML files: deployment.yaml and service.yaml.

One contains the deployment instructions; the other exposes the application as a service.

Let's see our deployment.yaml file first.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: genai-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: genai-app
  template:
    metadata:
      labels:
        app: genai-app
    spec:
      containers:
        - name: genai-app
          image: pavansa/generativeai-node-app:latest
          ports:
            - containerPort: 3000
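One caveat: the runtime stage of our Dockerfile never copies the .env file, so the OPENAI_API_KEY from earlier won't reach the pod on its own. A common fix is a Kubernetes Secret; the sketch below assumes a secret named openai-secret. First create the secret:

kubectl create secret generic openai-secret --from-literal=OPENAI_API_KEY=<Add Your OpenAI API Key>

Then reference it from the container spec in deployment.yaml:

          # inject the API key from the secret as an environment variable
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: openai-secret
                  key: OPENAI_API_KEY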

Below is our service.yaml file.

apiVersion: v1
kind: Service
metadata:
  name: genai-app-service
spec:
  selector:
    app: genai-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer

We are using Minikube to create a single-node Kubernetes cluster, which we will use to deploy our application.

Start Minikube using the command below.

minikube start

You should see output confirming that Minikube has started and the cluster is ready.

Note: Keep Docker Desktop running, with Kubernetes enabled in its settings.

Let's apply our deployment file using the command below.

kubectl apply -f deployment.yaml

Similarly, apply our service.yaml file.

kubectl apply -f service.yaml

Let's first check the status of our pods using the command below.

kubectl get pods

You should see both pods listed with a Running status.

Let's check the deployment status of our application to see if the desired pods are running as expected.

kubectl get deployment


Let's check the service status of our application.

kubectl get service


Let's see if we can expose our application and access it.

minikube service genai-app-service --url

Minikube will print a URL for the service; open it in your browser to access the application.

Congratulations! We containerized our application, deployed and exposed it using Kubernetes.

Integrating a Database for Our Application

After successfully deploying and exposing your AI/ML application on Kubernetes, you might need a robust and scalable database to handle your application data. SingleStore is a high-performance, scalable SQL database that is well suited to AI/ML applications. In this section, we will guide you through integrating a SingleStore database into your Kubernetes-deployed application.

You need a free SingleStore cloud account.

  • Create a workspace and then create a database and table suitable for your application.

Go to the SQL Editor in your SingleStore workspace.

Create a new database using the following SQL statement.

-- create a database
CREATE DATABASE <database name>;

Next, switch to the new database using the USE command.

USE <database name>;

Then, create a table in the new database with the desired schema.

-- create a table
CREATE TABLE <table name> (
);
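For instance, a table for storing each prompt and its model response might look like the following. The table and column names here are purely hypothetical, chosen only to illustrate the syntax; adapt the schema to your own application.

-- hypothetical example: store each prompt and its response
CREATE TABLE chat_history (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    prompt TEXT NOT NULL,
    response TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);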

You can paste these SQL commands into the SQL Editor, highlight them, and click the Run button.


Update Kubernetes Deployment Configuration:

If your SingleStore database is running outside the Kubernetes cluster, update your application’s Kubernetes deployment configuration to allow connections to the SingleStore database.

apiVersion: apps/v1
kind: Deployment
...
spec:
  containers:
    - name: genai-app
      ...
      env:
        - name: DB_HOST
          value: "<Your SingleStore DB Host>"
        - name: DB_PORT
          value: "<Your SingleStore DB Port>"
        ...

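As with the OpenAI key, the database password is better kept out of the manifest. A minimal sketch using a Kubernetes Secret; the secret name singlestore-secret and the DB_PASSWORD variable name are assumptions, so adjust them to whatever your application reads:

        # created beforehand with:
        # kubectl create secret generic singlestore-secret --from-literal=DB_PASSWORD=<Your DB Password>
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: singlestore-secret
              key: DB_PASSWORD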

Redeploy Your Application:

Apply the updated Kubernetes deployment configuration to redeploy your application with SingleStore integration.

kubectl apply -f deployment.yaml

Verify the Integration:

After redeployment, verify that your application is successfully connected to the SingleStore database and is performing database operations as expected.
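A couple of kubectl commands are handy for this check. They use the deployment name from our manifest and assume your application logs its database connection status:

# tail the application logs from one of the deployment's pods
kubectl logs deployment/genai-app

# confirm the database environment variables are set inside the container
kubectl exec deployment/genai-app -- printenv DB_HOST DB_PORT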

By following these steps, you have integrated a SingleStore database into your Kubernetes-deployed AI/ML application, giving it a robust and scalable backend for managing application data.

Conclusion

Congratulations on successfully navigating through the comprehensive steps to deploy an AI/ML application on Kubernetes! This guide has walked you through each essential phase, from containerizing your application to deploying and exposing it on Kubernetes.

As you continue to explore and enhance your AI/ML deployments, consider integrating a high-performance database like SingleStore for managing your application data seamlessly. SingleStore offers scalability, speed, and efficiency, ensuring your AI/ML applications run optimally with a robust database backend.
