Building a CI/CD Pipeline with GitHub Actions for Data Analysis Docker Images to DockerHub.


Introduction

In the fast-paced realm of data analysis, Continuous Integration and Continuous Delivery (CI/CD) have become indispensable practices for ensuring seamless development, testing, and deployment processes. This article explores the pivotal role of CI/CD in data analysis workflows, emphasizing the significance of automating tasks to enhance efficiency and reliability. Specifically, it delves into the integration of GitHub Actions, a powerful CI/CD tool, for streamlining Docker image builds—a critical component in modern data analysis environments. By adopting these practices, teams can foster collaboration, reduce errors, and accelerate the delivery of high-quality data analysis solutions.

Overview of CI/CD for Data Analysis.

CI/CD, which stands for Continuous Integration and Continuous Delivery, is a set of best practices and processes aimed at improving the software development lifecycle. On the data analytics side, CI/CD plays a key role in increasing productivity, improving collaboration, and optimizing analytics workflows overall.

  1. Continuous Integration (CI).
    • In CI, code changes from multiple contributors are merged regularly into a shared repository. Every merge is verified in the same automated way, which prevents the integration issues that otherwise pile up when code is combined late.
    • In data analysis, where multiple team members work simultaneously on different parts of a project, CI helps maintain a consistent and reliable codebase. Integration issues are detected early, while they are still easy to fix.
  2. Continuous Delivery (CD).
    • CD extends CI by automating the entire process of preparing software releases so they are ready to ship at any point in time. This includes steps such as testing, packaging, and deployment.
    • CD in data analysis ensures that analyses and models can be delivered reliably and consistently to other teams or environments. This is crucial for the long-term sustainability and reliability of analytical results.

Why CI/CD is important in data analysis.

  1. Collaboration:
    • Data analytics projects typically involve multidisciplinary teams, including data scientists, analysts, and engineers. CI/CD provides a consistent, automated way to integrate and validate their contributions, encouraging collaboration and reducing merge conflicts.
  2. Error Detection and Prevention:
    • Automated testing in the CI/CD pipeline helps detect bugs, errors, or discrepancies early in the development process. This enables timely fixes and reduces the chances of defects reaching production and affecting analysis results.
  3. Reproducibility:
    • CI/CD ensures that every step of a data analysis, from data processing to model training, can be repeated consistently. This is important for validating results, sharing findings, and maintaining the integrity of analyses over time.
  4. Performance and Speed:
    • CI/CD accelerates development and delivery by automating common tasks such as testing and deployment. This is especially valuable in data analytics, where frequent iteration and quick responses to changing data or requirements are the norm.
  5. Code Quality and Best Practices:
    • CI/CD pipelines promote coding standards and best practices, and contribute to the overall quality and maintainability of data analysis code. This helps ensure that the work is reliable, scalable, and easily understood by team members.

Prerequisites.

  1. Access to a GitHub Repository: You will need access to a GitHub repository to clone the project files and follow the instructions.
  2. Basic Knowledge of Docker: Familiarity with Docker concepts like containers, images, and Docker Hub will be beneficial for understanding the Dockerfile and building the container image.
  3. Command-Line Interface (CLI): Familiarity with a command-line interface (CLI) like Bash or Zsh is required to execute commands and navigate the file system.
  4. Code Editor: A code editor like Visual Studio Code or Sublime Text will be helpful for editing and reviewing code files.
  5. Git: If you are not already familiar with Git, it is recommended to learn the basics of version control to effectively manage your project files.
  6. Docker Hub Account: An account on Docker Hub is recommended to push your built container image to a public registry for sharing or deployment.
  7. Basic knowledge of the Python programming language.

Setting Up the GitHub Repository.

Creating a New GitHub Repository

  1. Go to the GitHub profile homepage.
  2. Click on the "+" icon in the top right corner.
  3. Select "New repository".

creating new repo

  4. Enter a name for your repository.
  5. Optionally, add a description for your repository.
  6. Select whether you want your repository to be public or private.

add description to repo

  7. Click on the "Create repository" button.

create repo

Then we will use HTTPS to clone this repository:

repo created

Here is how to clone the repository using the command line:

  1. Open a terminal window.
  2. Change to the directory where you want to clone the repository.
  3. Run the following command: git clone https://github.com/<username>/<repository-name>.git

git clone the repo

This will clone the repository to your local computer.

Introduction to Docker Containers in Data Analysis.

Docker containers have revolutionized the way software applications and environments are packaged and shipped, and their usefulness extends naturally to data analytics. Docker provides a lightweight, portable, and scalable way to package an application together with its dependencies, ensuring consistency across environments. For data analysis, where reproducibility and stability are key, Docker containers are a powerful tool for creating and managing well-defined environments.
Relevance of Docker in Data Analysis:

  1. Reproducible Environments:
    • Docker allows data analysts to capture the entire analytics environment, including libraries, dependencies, and configurations, in a single container. This ensures that the analysis can always be replicated across systems, reducing issues of version incompatibility and system-specific dependencies.
  2. Isolation and Portability:
    • Docker containers bundle applications and their dependencies, isolating them from the host system. This isolation not only keeps the analysis environment self-contained but also makes it portable. Analysts can confidently share Docker images, knowing the analysis will behave the same regardless of the underlying infrastructure.
  3. Consistent Development and Production Environments:

    • Docker containers help bridge the gap between development and production. Analysts can build and test their analytics in the same Docker container that will run in production, reducing the chances of "it works on my machine" issues. This consistency between environments increases reliability and reduces deployment challenges.
  4. Effective Collaboration:

    • Docker containers facilitate collaboration between data analysts and teams. Instead of relying on complicated setup instructions or manually managed dependencies, team members can share a Docker image of the entire analysis environment. This simplifies collaboration and reduces the configuration effort required from teammates and other collaborators.
  5. Versioning and Rollback:

    • Docker supports versioning of images, allowing analysts to tag and track changes to the analysis environment over time. This capability is invaluable for maintaining the history of the environment, aiding reproducibility, and making it possible to roll back to a specific version if problems arise.
  6. Scalability:

    • The lightweight nature of Docker makes it well suited for scalable and distributed data analytics workflows. Containers can be orchestrated with tools such as Docker Compose or Kubernetes, allowing analysts to scale their work horizontally across multiple containers and benefit from parallel execution.

In short, Docker containers offer a solution to the challenges of reproducibility and stability in data analysis. By encapsulating the analytics environment, Docker facilitates collaboration, ensures consistent deployment across environments, and empowers data analysts to reliably build, share, and replicate their analyses.

Creating a Dockerfile

Here we create an etl_data.py file and define the logic for our data ETL pipeline using pandas.

create docker file

This script first reads data from an API endpoint using the requests library. The data is then converted to a pandas DataFrame. Finally, the data is written out as CSV and uploaded to a GCS bucket using the google-cloud-storage library.
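As a rough sketch of such a script (the API endpoint, bucket name, and file names below are placeholders, not the values used in the screenshot above), it might look like this:

import requests
import pandas as pd
from google.cloud import storage


def extract(api_url: str) -> pd.DataFrame:
    """Extract: fetch JSON records from an API endpoint into a DataFrame."""
    response = requests.get(api_url, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


def load_to_gcs(df: pd.DataFrame, bucket_name: str, blob_name: str) -> None:
    """Load: serialize the DataFrame to CSV and upload it to a GCS bucket."""
    csv_data = df.to_csv(index=False)
    client = storage.Client()  # relies on GOOGLE_APPLICATION_CREDENTIALS for auth
    bucket = client.bucket(bucket_name)
    bucket.blob(blob_name).upload_from_string(csv_data, content_type="text/csv")


if __name__ == "__main__":
    data = extract("https://example.com/api/data")  # placeholder endpoint
    load_to_gcs(data, bucket_name="my-analytics-bucket", blob_name="data.csv")
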
We then create a requirements.txt file to hold the libraries the application will need to run.

requirement file content

A requirements.txt file is a plain text file that lists all of the Python packages that a project needs to run. It is typically used with the pip package manager to install the necessary packages.
To create a requirements.txt file, you can use a text editor to create a new file named "requirements.txt". Then, you can add the names of all of the Python packages that your project needs to run, one per line.
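For this project, given the libraries mentioned above, the requirements.txt might simply contain (version pins omitted here as an assumption):

requests
pandas
google-cloud-storage

You can install these locally with pip install -r requirements.txt, which is essentially the same command the Dockerfile will run inside the container.
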
Creating a Dockerfile is an essential step in containerizing your data analysis project. It provides a structured way to define the environment and dependencies required for running your project within a container.
We then create a Dockerfile for our application to dockerize it.

dockerfile content

Step 1: Choose a Base Image.
The base image serves as the foundation for your Docker image. It provides the operating system and basic tools required for running your project. For data analysis projects, you'll typically use a Python-based base image, such as python:3.10
Step 2: Specify Working Directory.
Set the working directory for the container using the WORKDIR instruction. This indicates where the project files will be located within the container.
Step 3: Copy the requirements.txt file.

Copy the requirement file from your local machine into the container using the COPY instruction.
Step 4: Install Dependencies.

Use the RUN instruction to install all required dependencies for your project from the requirements.txt file. This could include Python libraries, data analysis tools, or other software packages.
Step 5: Copy Project Files.
Copy the project files from your local machine into the container using the COPY instruction. Specify the source directory on your local machine and the destination directory within the container.
Step 6: Define Entrypoint.
Specify the entrypoint command using the ENTRYPOINT instruction. This command runs the application when the container starts.
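
Putting the six steps together, a minimal Dockerfile for this project could look something like the sketch below (the working directory /app follows common convention and is an assumption; the exact contents used in this project are shown in the screenshot above):

# Step 1: choose a Python base image
FROM python:3.10

# Step 2: set the working directory inside the container
WORKDIR /app

# Step 3: copy the requirements file into the container
COPY requirements.txt .

# Step 4: install the project dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Step 5: copy the rest of the project files
COPY . .

# Step 6: run the ETL script when the container starts
ENTRYPOINT ["python", "etl_data.py"]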

Building a Docker Image Locally.

Build the Docker Image:
• Navigate to the directory containing your Dockerfile.
• Run the following command to build the Docker image:

docker build command
Here we build the docker image locally.
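Assuming you name the image after your Docker Hub account (the username and the image name etl-data below are placeholders), the command would look like:

docker build -t <your-dockerhub-username>/etl-data:latest .

The -t flag tags the image so it can later be pushed to Docker Hub under the same name.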

docker build

Stage Changes:
• Open a terminal window and navigate to the directory containing your project files.
• Add the modified files to the staging area using the following command:

git add .

This will add all the modified files in the current directory to the staging area.

Commit Changes:
• Commit the staged changes with a descriptive message using the following command:

git commit -m "the data application done"

Push Changes:
• Push the committed changes to the remote repository on GitHub using the following command:

git push origin master

git repo files

GitHub Actions and Automation of CI/CD Pipelines.

GitHub Actions is a powerful and flexible automation platform integrated directly into the GitHub repository. It allows developers to define, maintain, and execute workflows directly in the repository, making it easier to build, test, and deploy. GitHub Actions is particularly well suited for Continuous Integration and Continuous Deployment (CI/CD) pipelines, simplifying the software development lifecycle.
Highlights of GitHub Actions:

  1. Workflow Definitions:
    • Workflows in GitHub Actions are defined using YAML files. These files describe one or more jobs, each of which contains steps that specify the tasks to be performed, such as building code, running tests, or deploying applications.
  2. Triggers:
    • Workflows can be triggered by various events, such as code pushes, pull requests, or release builds. This ensures that defined actions are automatically performed in response to specific events during the development process.
  3. Matrix Builds:
    • GitHub Actions supports matrix builds, allowing developers to define multiple combinations of operating systems, dependencies, or other parameters. This feature is useful for testing and ensuring code consistency across environments (see the small sketch after this list).
  4. Parallel and Sequential Jobs:
    • Jobs and steps can run in parallel or in a defined sequence, improving resource utilization and overall pipeline speed. This is especially useful for tasks such as running tests concurrently.
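
As a small, generic illustration of a matrix build (this is not part of this project's workflow; the Python versions and test command are assumptions), a single job can be run against several Python versions at once:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements.txt
      - run: python -m pytest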

Benefits of using GitHub Actions to build and deploy Docker images:

  1. Native integration with GitHub:
    • GitHub Actions is seamlessly integrated into the GitHub repository, eliminating the need for external CI/CD services. This tight integration simplifies configuration and increases visibility, as workflows and results are easily accessible in the GitHub interface.
  2. Docker Image Builds:
    • GitHub Actions provides native support for Docker image builds. Developers can define workflows that build Docker images based on specified configurations. This automation ensures consistency in the build process and makes it easy to version and track changes to Docker images.
  3. Flexible Workflows:
    • Workflows in GitHub Actions are highly customizable. Developers can define multiple steps in a workflow, covering tasks such as linting, testing, building, and deploying Docker images, and tailor them to project requirements.
  4. Secrets and Environment Variables:
    • GitHub Actions allows secrets and environment variables to be stored and managed securely. This is important for handling sensitive information, such as access tokens or API keys, that is needed when building or deploying Docker images.
  5. Shared Artifacts:
    • GitHub Actions lets you share artifacts between jobs in a workflow. This is useful for passing Docker images and other build outputs from one job to another, simplifying the pipeline and avoiding redundant work.
  6. Community and Marketplace:
    • GitHub Actions has a robust ecosystem of community-contributed actions and workflows in the GitHub Marketplace. Developers can use these pre-built actions for common tasks, saving time and effort when configuring complex CI/CD pipelines.

In summary, GitHub Actions provides a unified and flexible platform for running CI/CD pipelines directly within the GitHub repository. Its native support for Docker image builds, coupled with tight integration and customizable workflows, makes it an ideal choice for building and deploying Dockerized applications efficiently.

Creating a CI/CD Workflow with GitHub Actions.

A GitHub Actions workflow is defined in a YAML file and describes the steps to automate in response to specific events, such as code pushes or pull requests; in this project, it builds and pushes a Docker image.

github actions

Step 1: Create a GitHub Actions Workflow YAML File
In your GitHub repository, create a directory named .github/workflows, or click on the Actions tab in your repository. Inside the .github/workflows directory, create a new YAML file, for example, main.yml.

github action

Here is a YAML file for building and pushing a Docker image:

github actions workflow
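
A workflow along the lines of the one shown above might look like the sketch below (the image name etl-data and the action versions are assumptions; the DOCKERHUB_USERNAME and DOCKERHUB_TOKEN secrets are created in the next section):

name: Build and Push Docker Image

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build Docker image
        run: docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/etl-data:latest .

      - name: Publish Docker image to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin

      - name: Push Docker image to Docker Hub
        run: docker push ${{ secrets.DOCKERHUB_USERNAME }}/etl-data:latest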

  1. name:- This defines the name of the workflow.
  2. on:- This specifies the event that will trigger the workflow. In this case, the workflow will run when there is a push to the main branch.
  3. jobs:- This section defines the jobs in the workflow. In this case, there is one job named build.
  4. runs-on:- This specifies the runner environment for the job. In this case, the job will run on a ubuntu-latest runner.
  5. steps:- This section defines the steps to be executed in the job. Each step has a name, and the run keyword is used to specify the command to execute.
  6. Checkout code: - This step checks out the code from the GitHub repository.
  7. Set up Docker Buildx: - This step sets up Docker Buildx, which is a tool for building Docker images.
  8. Build Docker image: - This step builds the Docker image using the Dockerfile in the current directory. The -t flag specifies the image name and tag.
  9. Publish Docker image to Docker Hub: - This step logs in to Docker Hub using the DOCKERHUB_USERNAME and DOCKERHUB_TOKEN secrets.
  10. Push Docker image to Docker Hub: - This step pushes the built Docker image to Docker Hub.

Secrets and Environment Variables

Securing sensitive information, such as DockerHub credentials, is important when working with GitHub Actions workflows. GitHub Secrets provides a secure way to store and manage sensitive information in your repository without exposing it in your workflow code or logs.

Create Secrets in your GitHub Repository:

  • Navigate to your repository on GitHub and select "Settings" under your repository name.
  • Click on "Secrets" in the left sidebar.
  • Click on "New repository secret".
  • Enter a name for your secret, such as "DOCKERHUB_USERNAME" or "DOCKERHUB_TOKEN".
  • Paste your secret value, such as your DockerHub username or token, in the "Value" field.
  • Click on "Add secret".

github actions secrets

By using GitHub Secrets, you can securely manage sensitive information in your GitHub Actions workflows, ensuring that your credentials and other sensitive data are not exposed in your code or logs.

Triggering the CI/CD Pipeline.

GitHub Actions workflows can be triggered by various events, allowing you to automate your development and deployment processes.
In this project, code changes pushed to the repository will trigger the workflow to run.

pipeline triggers

Here are the primary triggering mechanisms:

  1. Code Pushes: Code pushes are the most common trigger for GitHub Actions workflows. When you push changes to your repository, the workflow will automatically run, allowing you to test, build, and deploy your code without manual intervention.
  2. Pull Requests: Pull requests allow you to collaborate on changes with other developers before merging them into the main codebase. By triggering workflows on pull requests, you can automate testing and code quality checks to ensure that changes are consistent and error-free before merging.
  3. Schedules: Scheduled workflows run at predetermined times or intervals, independent of code changes or pull requests. This is useful for tasks that need to be executed periodically, such as data backups, system maintenance, or automated deployments.
  4. Manual Triggering: For workflows that require manual execution, you can use the "Workflow Dispatch" event. This allows you to trigger a workflow manually from the Actions tab in your repository, providing flexibility for specific tasks or testing scenarios.
  5. Repository Webhooks: Repository webhooks can be used to trigger workflows from external events, such as changes in other repositories or notifications from third-party services. This allows for interoperability and integration with other tools in your development environment.

When a code change is pushed to the repository, the GitHub Actions workflow is triggered:

pipeline triggered

Once the build completes, the Docker image is pushed to your Docker Hub account.

docker built and uploaded
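
To confirm the push, you can pull the image back from Docker Hub on any machine (the image name here is the same placeholder used earlier):

docker pull <your-dockerhub-username>/etl-data:latest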

Conclusion.

This comprehensive guide demonstrates the important role that Continuous Integration and Continuous Delivery (CI/CD) plays in accelerating data analytics. By adopting CI/CD practices, teams can streamline their development, testing, and deployment processes, foster collaboration, reduce errors, and accelerate the delivery of high-quality data analytics solutions.
The article highlights the importance of automating tasks and introduces GitHub Actions as a powerful CI/CD tool for data analytics workflows, focusing on how it streamlines Docker image builds, a key component of modern data analytics environments.
The discussion of CI/CD for data analysis illustrates the importance of maintaining a consistent and reliable codebase, facilitating cross-disciplinary collaboration, and ensuring the reproducibility of the analysis. Happy coding!
