Stefan Alfbo

MinIO as a local S3 service

To develop and test an application that will use Amazon S3, you need a way to simulate the S3 dependency in your development environment.

This is where MinIO comes in: it makes it possible to run an S3 service locally on your development machine.

Here is their elevator pitch:

MinIO is a high-performance, S3 compatible object store. It is built for large scale AI/ML, data lake and database workloads. It is software-defined and runs on any cloud or on-premises infrastructure. MinIO is dual-licensed under open source GNU AGPL v3 and a commercial enterprise license.

To use MinIO in a development environment, you can use devcontainers with VS Code. Devcontainers let you build all your dependencies in a reproducible way for everyone on the project, instead of installing MinIO directly on your computer. You could later extend the devcontainer to include a database like PostgreSQL and more.

Here are the basic steps to configure a devcontainer to include MinIO from scratch.

mkdir devcontainer-with-minio && cd $_
mkdir .devcontainer

touch .devcontainer/devcontainer.json \
      .devcontainer/docker-compose.yml \
      .devcontainer/Dockerfile \
      .devcontainer/minio.env

code .

First we create the skeleton for the devcontainer, which consists of four files in a .devcontainer directory in our project.

[Image: file tree]

The devcontainer.json file holds the configuration for the devcontainer.

{
    // A name for the devcontainer which can be
    // displayed to the user
    "name": "Python 3 with MinIO",
    // The name of the docker-compose file used
    // to start the services
    "dockerComposeFile": "docker-compose.yml",
    // The service you want to work on. This is 
    // considered the primary container for your
    // dev environment which your editor will 
    // connect to.
    "service": "app",
    // The path of the workspace folder inside 
    // the container. This is typically the target
    // path of a volume mount in the docker-compose.yml.
    "workspaceFolder": "/workspace",
    // The username to use for spawning processes
    // in the container including lifecycle scripts
    // and any remote editor/IDE server process. 
    // The default is the same user as the container.
    "remoteUser": "root"
}

Now we need to add some content to the docker-compose.yml file.

version: '3'

services:
  app:
    build: 
      context: .
      dockerfile: Dockerfile

    volumes:
      # This is where VS Code should expect to find your project's source code and the value of "workspaceFolder" in .devcontainer/devcontainer.json
      - ..:/workspace:cached

    # Overrides default command so things don't shut down after the process ends.
    command: /bin/sh -c "while sleep 1000; do :; done"  

  s3service:
    image: quay.io/minio/minio:latest
    command: server --console-address ":9001" /data
    ports:
      - '9000:9000'
      - '9001:9001'
    env_file: minio.env

  initialize-s3service:
    image: quay.io/minio/mc
    depends_on:
      - s3service
    entrypoint: >
      /bin/sh -c '
      /usr/bin/mc alias set s3service http://s3service:9000 "$${MINIO_ROOT_USER}" "$${MINIO_ROOT_PASSWORD}";
      /usr/bin/mc mb s3service/"$${BUCKET_NAME}";
      /usr/bin/mc admin user add s3service "$${ACCESS_KEY}" "$${SECRET_KEY}";
      /usr/bin/mc admin policy attach s3service readwrite --user "$${ACCESS_KEY}";
      exit 0;
      '
    env_file: minio.env

This file defines our services, and in particular the setup of MinIO. The s3service runs the minio image. MinIO uses two ports: 9000 for the API endpoint and 9001 for the administration web user interface of the service. There is also a minio.env file that holds the environment variables used to configure MinIO.

The initialize-s3service is responsible for setting up the artifacts that our application is going to use. In this case we create a bucket and a user that has read/write permissions on the bucket. Our application can then use this user when interacting with the bucket.
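
Once the stack is up, you can sanity-check the init job from Python. This is a minimal sketch, not part of the project files: it assumes boto3 (which we install later in this post) and reuses the dev credentials and bucket name from the minio.env file shown further down.

import boto3

# Sanity check: connect with the dev user created by the init
# container and verify the bucket exists. The credentials and
# bucket name mirror the minio.env file shown later in this post.
s3 = boto3.client(
    's3',
    endpoint_url='http://s3service:9000',
    aws_access_key_id='VPP0fkoCyBZx8YU0QTjH',
    aws_secret_access_key='iFq6k8RLJw5B0faz0cKCXeQk0w9Q8UdtaFzHuw4J',
)

# head_bucket raises a ClientError if the bucket is missing
# or the user lacks access to it.
s3.head_bucket(Bucket='my-bucket')
print("my-bucket exists and the dev user can reach it")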

The Dockerfile defines our primary container, app. In this case it is just a Python container.

FROM mcr.microsoft.com/devcontainers/python:1-3.11-bookworm

The last file, minio.env, holds the environment variables used by the MinIO service.

# https://min.io/docs/minio/linux/reference/minio-server/minio-server.html#id5
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=password

BUCKET_NAME=my-bucket

# Development credentials for storing files locally
ACCESS_KEY=VPP0fkoCyBZx8YU0QTjH
SECRET_KEY=iFq6k8RLJw5B0faz0cKCXeQk0w9Q8UdtaFzHuw4J
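
Hardcoding credentials is fine for a quick demo, but if you also pass minio.env to the app service in docker-compose.yml (an extra env_file line that is not part of the setup above), the application could read them from the environment instead. A small sketch:

import os

# Assumes minio.env is also passed to the app service via env_file,
# which is not done in the docker-compose.yml above.
ACCESS_KEY = os.environ['ACCESS_KEY']
SECRET_KEY = os.environ['SECRET_KEY']
BUCKET_NAME = os.environ.get('BUCKET_NAME', 'my-bucket')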

Once you have all these files in place, you can rebuild the devcontainer in your editor (in VS Code, run the Dev Containers: Rebuild Container command). Once all the images and containers have started and the devcontainer has been rebuilt, you can go to the MinIO administration web page at http://localhost:9001/ and log in with the credentials from the minio.env file.
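
If the console does not come up, MinIO also exposes a liveness endpoint on the API port that you can probe. A stdlib-only sketch, run from inside the devcontainer where the service is reachable as s3service (from the host you would use localhost instead):

import urllib.request

# MinIO's liveness probe returns HTTP 200 when the server is up;
# urlopen raises an HTTPError for any non-2xx response.
with urllib.request.urlopen('http://s3service:9000/minio/health/live') as resp:
    print('MinIO is up, status', resp.status)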

[Image: MinIO login page]

To make it a little bit more interesting, let's build a simple application that uploads a file to MinIO.

Run the following in the root of the project.

mkdir src
touch src/app.py

# Install AWS SDK for Python 
python -m pip install boto3
# Store our dependencies in a requirements.txt file
python -m pip freeze > requirements.txt 

Open up the app.py file and add the following code.

import boto3

# The bucket we created in docker-compose
BUCKET_NAME = 'my-bucket'
# Here we need to use the name of our service
# from the docker-compose file as the host name
ENDPOINT = 'http://s3service:9000'
# Credentials from the user we created in the
# setup (located in minio.env)
AWS_ACCESS_KEY_ID = 'VPP0fkoCyBZx8YU0QTjH'
AWS_SECRET_ACCESS_KEY = 'iFq6k8RLJw5B0faz0cKCXeQk0w9Q8UdtaFzHuw4J'

if __name__ == "__main__":
    data = "Start uploading"

    s3_client = boto3.client(
        's3',
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        endpoint_url=ENDPOINT,
    )

    s3_client.upload_file('app.py', BUCKET_NAME, 'app-py-in-minio')

    print("Done, file is uploaded")

This code takes the app.py file, uploads it to our bucket, my-bucket, and gives it the key app-py-in-minio. Run the code and see the result.

cd src
python app.py
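
To confirm the upload from code rather than through the console, a short sketch like this, using the same connection settings as app.py, lists what is in the bucket:

import boto3

# Same connection settings as in app.py.
s3_client = boto3.client(
    's3',
    aws_access_key_id='VPP0fkoCyBZx8YU0QTjH',
    aws_secret_access_key='iFq6k8RLJw5B0faz0cKCXeQk0w9Q8UdtaFzHuw4J',
    endpoint_url='http://s3service:9000',
)

# list_objects_v2 returns up to 1000 keys per page.
response = s3_client.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])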

This is how it looks in MinIO.

[Image: the uploaded file in the bucket]

This is just a simple project, but it shows how we can use MinIO as local S3 storage, accessed through Amazon's own SDK for Python.
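
As a natural next step, the same client can hand out presigned URLs, useful when a browser should fetch a file straight from the local MinIO instance. A quick sketch, appended to app.py so it reuses its s3_client and BUCKET_NAME:

# Generate a temporary download link for the uploaded object;
# anyone with the URL can fetch it until it expires.
url = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': BUCKET_NAME, 'Key': 'app-py-in-minio'},
    ExpiresIn=3600,  # seconds
)
print(url)

Note that the URL embeds the s3service host name, so it only resolves inside the container network.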

Happy coding!
