Build and Deploy your Machine Learning Application with Docker

Ever deployed a Machine Learning model that works perfectly on your computer locally, only for the code to break on another machine, or worse, in production? In this article I will walk you through how you can use a popular tool called Docker to run and deploy your Machine Learning model(s).

So what is Docker?

Docker is a tool that makes it easier to create, deploy and run any application by using what is called a container. It is also a software platform used to create Docker images, which become Docker containers once they are deployed and run.

A Docker container is an isolated environment that contains all the required dependencies for your application to run; it is often referred to as a running instance of a Docker image.

A Docker image is a read-only file, comprised of multiple layers, that is used to execute code in a Docker container. Docker images are hosted on a large registry referred to as Docker Hub. You either pull images from the hub or build a custom image from a base image, and when these images are executed they serve as containers for your application.

So, putting the pieces together, we can simply define Docker as:

A software platform which makes it easier to create and deploy any application by building a Docker image, which then becomes a Docker container containing all the dependencies and packages our application needs to work once it's deployed.
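To make the image/container distinction concrete, here is a quick example using the official hello-world image from Docker Hub (any image would do):

israel@israel:~$ docker pull hello-world
israel@israel:~$ docker run hello-world

docker pull downloads the image from Docker Hub, and docker run creates and starts a container from that image.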

Benefits of Docker

  • Docker solves the problem of having an identical environment across the various stages of development, and of having isolated environments for your individual applications.

  • Docker allows you to run your application from anywhere, as long as you have Docker installed on that machine.

  • Docker gives you the liberty to scale up quickly.

  • Docker lets you expand your development team painlessly.

Installing Docker

Docker is available across various platforms; whether you're using a Linux, Windows or Mac computer, you can follow the installation guide here
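Once the installation completes, you can confirm that Docker is set up correctly by checking the version from your terminal:

israel@israel:~$ docker --version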

Now that we understand the basics of Docker, and you have Docker running on your machine, let's go ahead and deploy a Machine Learning application with it.

My Working directory

For the model I want to deploy, this is what my working directory looks like:

.
├── app.py
├── Dockerfile
├── ML_Model
│   ├── Diabetestype.csv
│   ├── model.pkl
│   └── model.py
└── requirements.txt
1 directory, 6 files


app.py
The app.py file is a Python script that contains the API I built for my Machine Learning model using Flask. It defines the API endpoints and their paths, how we receive data from the web, how the data is processed, and how predictions are returned as a response.

import json
import pickle
import numpy as np
from flask import Flask, request

flask_app = Flask(__name__)

#ML model path
model_path = "ML_Model/model.pkl"


@flask_app.route('/', methods=['GET'])
def index_page():
    return_data = {
        "error" : "0",
        "message" : "Successful"
    }
    return flask_app.response_class(response=json.dumps(return_data), mimetype='application/json')

# POST, since this endpoint reads form data from the request body
@flask_app.route('/predict', methods=['POST'])
def model_deploy():
    try:
        age = request.form.get('age')
        bs_fast = request.form.get('BS_Fast')
        bs_pp = request.form.get('BS_pp')
        plasma_r = request.form.get('Plasma_R')
        plasma_f = request.form.get('Plasma_F')
        HbA1c = request.form.get('HbA1c')
        fields = [age, bs_fast, bs_pp, plasma_r, plasma_f, HbA1c]
        if None not in fields:
            # Data preprocessing: convert the values to float
            age = float(age)
            bs_fast = float(bs_fast)
            bs_pp = float(bs_pp)
            plasma_r = float(plasma_r)
            plasma_f = float(plasma_f)
            HbA1c = float(HbA1c)
            result = [age, bs_fast, bs_pp, plasma_r, plasma_f, HbA1c]
            # Load the trained model from disk and pass the data to it
            classifier = pickle.load(open(model_path, 'rb'))
            prediction = classifier.predict([result])[0]
            conf_score =  np.max(classifier.predict_proba([result]))*100
            return_data = {
                "error" : '0',
                "message" : 'Successfull',
                "prediction": prediction,
                "confidence_score" : conf_score
            }
        else:
            return_data = {
                "error" : '1',
                "message": "Invalid Parameters"             
            }
    except Exception as e:
        return_data = {
            'error' : '2',
            "message": str(e)
            }
    return flask_app.response_class(response=json.dumps(return_data), mimetype='application/json')


if __name__ == "__main__":
    flask_app.run(host='0.0.0.0', port=8080, debug=False)
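Once the application is up and running on port 8080 (we'll get there below), you can try the /predict endpoint. Here is a minimal sketch using the Python requests library; the field values are made-up sample numbers, not real clinical data:

import requests

# Hypothetical test values; the field names match what app.py expects
payload = {
    "age": 45,
    "BS_Fast": 5.8,
    "BS_pp": 7.9,
    "Plasma_R": 8.0,
    "Plasma_F": 6.2,
    "HbA1c": 6.5,
}

# Sent as form data, which is what request.form reads on the server side
response = requests.post("http://localhost:8080/predict", data=payload)
print(response.json())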

ML_Model
The ML_Model directory contains the model code (model.py), the data I used to train the model (Diabetestype.csv), and the pickle file generated after the model is trained, which the API will make use of.
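The model.py script itself isn't shown in this article, but here is a minimal sketch of what a script like it could contain. The column names and the choice of classifier are assumptions for illustration, not the actual schema of Diabetestype.csv:

import pickle
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical schema: assumes a "Type" target column plus the six feature
# columns the API expects; adjust to the real CSV layout
df = pd.read_csv("Diabetestype.csv")
X = df[["Age", "BS_Fast", "BS_pp", "Plasma_R", "Plasma_F", "HbA1c"]]
y = df["Type"]

classifier = RandomForestClassifier(n_estimators=100, random_state=42)
classifier.fit(X, y)

# Serialize the trained model so app.py can load it at request time
with open("model.pkl", "wb") as f:
    pickle.dump(classifier, f)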

requirements.txt
The requirements.txt file is a text file that lists all the Python packages our application needs to run. Some of the packages I made use of were:

Flask==1.1.2
pandas==1.0.3
numpy==1.18.2
sklearn==0.0
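A quick aside: the sklearn package on PyPI is only a thin shim that pulls in scikit-learn, so pinning the real package directly is clearer. A more explicit requirements.txt (versions are illustrative) could look like:

Flask==1.1.2
pandas==1.0.3
numpy==1.18.2
scikit-learn==0.22.2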

Dockerfile
A Dockerfile is a text file that defines a Docker image. You use a Dockerfile to create your own custom Docker image when the base image you want for your project doesn't meet your needs. For the model I'll be deploying, this is what my Dockerfile looks like:

# Specify the parent base image, which is Python 3.7
FROM python:3.7

# MAINTAINER is deprecated in newer Docker releases in favour of LABEL,
# but it still works here
MAINTAINER aminu israel <aminuisrael2@gmail.com>

# This prevents Python from writing out pyc files
ENV PYTHONDONTWRITEBYTECODE 1
# This keeps Python from buffering stdin/stdout
ENV PYTHONUNBUFFERED 1

# install system dependencies
RUN apt-get update \
    && apt-get -y install gcc make \
    && rm -rf /var/lib/apt/lists/*

# install dependencies
RUN pip install --no-cache-dir --upgrade pip

# set work directory
WORKDIR /src/app

# copy requirements.txt
COPY ./requirements.txt /src/app/requirements.txt

# install project requirements
RUN pip install --no-cache-dir -r requirements.txt

# copy project
COPY . .

# Generate the pickle file by training the model
WORKDIR /src/app/ML_Model
RUN python model.py

# set work directory
WORKDIR /src/app

# set app port
EXPOSE 8080

ENTRYPOINT [ "python" ] 

# Run app.py when the container launches; app.py hard-codes its own host and
# port, so the extra arguments here are effectively ignored
CMD [ "app.py","run","--host","0.0.0.0"] 

In my Dockerfile, I pulled the python:3.7 base image, updated the system dependencies, installed the packages from the requirements.txt file, ran the ML code to train the model and generate the pickle file the API will use, and lastly set the container to start the server.

Now let's build our Docker image from the Dockerfile we've created using this command:

israel@israel:~/Documents/Projects/Docker_ML$ docker build -t aminu_israel/ml_model:1.0 .

I named my custom image "aminu_israel/ml_model" and set the tag to 1.0 using the -t flag. Notice the "." at the end of the command; it tells Docker to look for the Dockerfile in my current directory, which is my project folder. If the build is successful you should see a result like this:

Sending build context to Docker daemon  249.3kB
Step 1/16 : FROM python:3.7
 ---> cda8c7e31f89
Step 2/16 : MAINTAINER aminu israel <aminuisrael2@gmail.com>
 ---> Running in cea1c80b990f
Removing intermediate container cea1c80b990f
 ---> 2c82fc9c1b5a
Step 3/16 : ENV PYTHONDONTWRITEBYTECODE 1
 ---> Running in 6ee3497a7ff4
Removing intermediate container 6ee3497a7ff4
 ---> 56f5f9838610
Step 4/16 : ENV PYTHONUNBUFFERED 1
 ---> Running in 1f53b581eed7
...

Step 16/16 : CMD [ "app.py","run","--host","0.0.0.0"]
 ---> Running in 1f7fc05b4e12
Removing intermediate container 1f7fc05b4e12
 ---> 8636b5bc482e
Successfully built 8636b5bc482e
Successfully tagged aminu_israel/ml_model:1.0

You can check the new image you've created using this command:

israel@israel:~$ docker images

Now that we've successfully built the image, let's run it as a container using this command:

israel@israel:~$ docker run --name deployML -p 8080:8080 aminu_israel/ml_model:1.0

If successful you should see a result like this:

* Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
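Notice the warning in the logs: Flask's built-in server is meant for development only. For a real production deployment you would typically run the app under a WSGI server such as gunicorn. A hypothetical tweak, assuming you add gunicorn to requirements.txt, would be to replace the ENTRYPOINT/CMD pair in the Dockerfile with:

# Hypothetical: serve the Flask app with gunicorn instead of the dev server
CMD [ "gunicorn", "--bind", "0.0.0.0:8080", "app:flask_app" ]

Here app:flask_app points gunicorn at the flask_app object defined in app.py.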

To check if your docker container is running, use this command:

israel@israel:~$ docker ps

And you'll see a result like this:

CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                    NAMES
dc5c417d893f        aminu_israel/ml_model:1.0   "python app.py run -…"   24 seconds ago      Up 20 seconds       0.0.0.0:8080->8080/tcp   deployML

This shows that the new container is currently running. For the full Docker documentation, you can check here
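When you're done, you can stop and remove the container with the standard Docker commands (deployML is the container name we set above):

israel@israel:~$ docker stop deployML
israel@israel:~$ docker rm deployML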

And there you have it: you've successfully deployed your ML model using Docker.

You can get the code for this article here

Thanks for reading 😀

Top comments (6)

Aliyu Abubakar

This is great. Thanks for this Israel

Israel Aminu

Thank you

dhanasekharreddy

@Israel Aminu

This is really great. I have deployed this model into an AKS cluster with the service type set to LoadBalancer. Please help me with what the endpoints are and how we can test this in Postman.

Israel Aminu

Hey dhanasekharreddy,
Some of the endpoints which I defined in my article are:
/
/predict
For the /predict endpoint, this takes in some parameters in the request body, such as:

  • age
  • BS_Fast
  • BS_pp etc.

These are the features you can use to test it in Postman.
Tinashe Wilbrod Chipomho

This is great. I am new to ML and wanted to ask: how does one then test this deployed model? Is there no way to request data in the Flask application?

Israel Aminu

Actually, you can. You can test the endpoints using Postman.