Last week we talked about the audio note input tab. If you missed it, you can check that out here!
Introduction
After finishing the first iteration of the Streamlit application, I started thinking about how to make this project accessible to a wider group of people. In its current state, you had to know how to configure Elasticsearch and Kibana, have an environment capable of running them, and run the Python Streamlit application on top of that. I had heard about Docker but had never used it before, so I decided to give it a try.
Docker
Docker is a platform that packages applications into containers: lightweight, isolated environments that share the host machine's kernel rather than each requiring a full virtual machine.
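If Docker is new to you as well, a quick sanity check after installing it confirms the engine works before diving into Compose:

docker --version        # confirm the CLI is installed
docker run hello-world  # pulls a tiny test image and runs it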
I use Docker Compose to handle all of the setup for me: it creates the necessary volumes, networks, and containers. The full Docker Compose file is as follows:
version: "3.8"
volumes:
certs:
driver: local
esdata01:
driver: local
kibanadata:
driver: local
streamlitdata:
driver: local
networks:
default:
name: elastic-dnd-internal
external: false
services:
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana\n"\
" dns:\n"\
" - kibana\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
curl -s -X PUT --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_ingest/pipeline/add_timestamp -d "{\"description\":\"Pipeline to automatically add @timestamp to incoming logs.\",\"processors\":[{\"set\":{\"field\":\"@timestamp\",\"value\":\"{{_ingest.timestamp}}\",\"ignore_empty_value\":true,\"ignore_failure\":true}}]}"
curl -s -X PUT --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_ingest/pipeline/dnd-notes -d "{\"description\":\"Pipeline to manipulate dnd notes logs.\",\"processors\":[{\"pipeline\":{\"name\":\"add_timestamp\"}}]}"
curl -s -X PUT --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_component_template/dnd-notes -d "{\"template\":{\"mappings\":{\"dynamic\":\"true\",\"dynamic_date_formats\":[\"strict_date_optional_time\",\"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z\"],\"dynamic_templates\":[],\"date_detection\":true,\"numeric_detection\":false,\"properties\":{\"@timestamp\":{\"type\":\"date\",\"format\":\"strict_date_optional_time\"},\"finished\":{\"type\":\"boolean\"},\"message\":{\"type\":\"text\",\"fields\":{\"keyword\":{\"type\":\"keyword\",\"ignore_above\":256}}},\"name\":{\"type\":\"text\",\"fields\":{\"keyword\":{\"type\":\"keyword\",\"ignore_above\":256}}},\"session\":{\"type\":\"long\"},\"type\":{\"type\":\"text\",\"fields\":{\"keyword\":{\"type\":\"keyword\",\"ignore_above\":256}}}}}}}"
curl -s -X PUT --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_index_template/dnd-notes -d "{\"index_patterns\":[\"dnd-notes-*\"],\"template\":{\"settings\":{\"index\":{\"number_of_shards\":\"1\",\"number_of_replicas\":\"0\",\"default_pipeline\":\"dnd-notes\"}},\"mappings\":{\"_routing\":{\"required\":false},\"numeric_detection\":false,\"dynamic_date_formats\":[\"strict_date_optional_time\",\"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z\"],\"dynamic\":true,\"_source\":{\"excludes\":[],\"includes\":[],\"enabled\":true},\"dynamic_templates\":[],\"date_detection\":true}},\"composed_of\":[\"dnd-notes\"]}"
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
es01:
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${ES_MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
api:
depends_on:
es01:
condition: service_healthy
build:
dockerfile: .\dockerfile-api
context: .\
ports:
- ${API_PORT}:8000
volumes:
- '.\data:/usr/src/app/data:delegated'
- '.\project\api:/usr/src/app/api:delegated'
kibana:
depends_on:
es01:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
labels:
co.elastic.logs/module: kibana
volumes:
- certs:/usr/share/kibana/config/certs
- kibanadata:/usr/share/kibana/data
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
- XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
- XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
mem_limit: ${KB_MEM_LIMIT}
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
streamlit:
depends_on:
kibana:
condition: service_healthy
build:
dockerfile: .\dockerfile-streamlit
context: .\
ports:
- ${STREAMLIT_PORT}:8501
volumes:
- certs:/usr/src/app/certs
- '.\data:/usr/src/app/data:delegated'
- '.\project\streamlit:/usr/src/app/streamlit:delegated'
- '.\.streamlit:/usr/src/app/.streamlit:delegated'
The first few top-level keys take care of setting up "volumes", which are essentially data drives that store information for Docker to use, and "networks", which are internal or external networks that Docker can attach containers to. Everything under "services" defines the containers themselves.
My current Docker implementation consists of three Elastic containers (a setup container, an Elasticsearch container, and a Kibana container) and two Python containers (a Streamlit container and a FastAPI container).
Elastic Containers
Funnily enough, the Elastic containers were quite easy to set up thanks to a great article by my contact for this project -- the man himself: Eddie. Check it out here!
For the most part, I followed this guide and added pieces to automate the settings, templates, and other configuration specific to this project.
NOTE:
The .env file in the project directory is very important here: it defines the passwords, port numbers, names, and other values that fill in the variables inside the Docker Compose file. Be sure to set these before trying to spin anything up! An example is shown below.
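For reference, here is a minimal example .env. The variable names are exactly the ones the Compose file references; every value is just a placeholder to swap for your own (in particular, the encryption key should be a random string of at least 32 characters):

# Elastic Stack version to pull, cluster name, and license type ("basic" or "trial")
STACK_VERSION=8.7.1
CLUSTER_NAME=elastic-dnd
LICENSE=basic
# passwords for the elastic superuser and the kibana_system user
ELASTIC_PASSWORD=changeme
KIBANA_PASSWORD=changeme
# Kibana saved-object encryption key: random, 32+ characters
ENCRYPTION_KEY=something_at_least_32_characters_long
# host ports to publish
ES_PORT=9200
KIBANA_PORT=5601
API_PORT=8000
STREAMLIT_PORT=8501
# container memory limits, in bytes (1 GB here)
ES_MEM_LIMIT=1073741824
KB_MEM_LIMIT=1073741824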
Setup Container
The setup container sets passwords, generates certificates, and creates the Elastic D&D backend ingest pipelines and templates.
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: kibana\n"\
" dns:\n"\
" - kibana\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
curl -s -X PUT --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_ingest/pipeline/add_timestamp -d "{\"description\":\"Pipeline to automatically add @timestamp to incoming logs.\",\"processors\":[{\"set\":{\"field\":\"@timestamp\",\"value\":\"{{_ingest.timestamp}}\",\"ignore_empty_value\":true,\"ignore_failure\":true}}]}"
curl -s -X PUT --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_ingest/pipeline/dnd-notes -d "{\"description\":\"Pipeline to manipulate dnd notes logs.\",\"processors\":[{\"pipeline\":{\"name\":\"add_timestamp\"}}]}"
curl -s -X PUT --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_component_template/dnd-notes -d "{\"template\":{\"mappings\":{\"dynamic\":\"true\",\"dynamic_date_formats\":[\"strict_date_optional_time\",\"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z\"],\"dynamic_templates\":[],\"date_detection\":true,\"numeric_detection\":false,\"properties\":{\"@timestamp\":{\"type\":\"date\",\"format\":\"strict_date_optional_time\"},\"finished\":{\"type\":\"boolean\"},\"message\":{\"type\":\"text\",\"fields\":{\"keyword\":{\"type\":\"keyword\",\"ignore_above\":256}}},\"name\":{\"type\":\"text\",\"fields\":{\"keyword\":{\"type\":\"keyword\",\"ignore_above\":256}}},\"session\":{\"type\":\"long\"},\"type\":{\"type\":\"text\",\"fields\":{\"keyword\":{\"type\":\"keyword\",\"ignore_above\":256}}}}}}}"
curl -s -X PUT --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_index_template/dnd-notes -d "{\"index_patterns\":[\"dnd-notes-*\"],\"template\":{\"settings\":{\"index\":{\"number_of_shards\":\"1\",\"number_of_replicas\":\"0\",\"default_pipeline\":\"dnd-notes\"}},\"mappings\":{\"_routing\":{\"required\":false},\"numeric_detection\":false,\"dynamic_date_formats\":[\"strict_date_optional_time\",\"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z\"],\"dynamic\":true,\"_source\":{\"excludes\":[],\"includes\":[],\"enabled\":true},\"dynamic_templates\":[],\"date_detection\":true}},\"composed_of\":[\"dnd-notes\"]}"
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
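Once everything is up, you can sanity-check that the pipelines and templates actually landed. A rough sketch, assuming Compose v2 and the service names above; the CA cert lives in the certs volume, so pull a copy out first:

# grab the generated CA cert out of the es01 container (Compose v2 syntax)
docker compose cp es01:/usr/share/elasticsearch/config/certs/ca/ca.crt ./ca.crt
# confirm the ingest pipeline and index template exist
curl -s --cacert ./ca.crt -u "elastic:${ELASTIC_PASSWORD}" https://localhost:9200/_ingest/pipeline/dnd-notes
curl -s --cacert ./ca.crt -u "elastic:${ELASTIC_PASSWORD}" https://localhost:9200/_index_template/dnd-notes

Note that localhost works here because the instances.yml written above lists it as a DNS name on the es01 certificate.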
Elasticsearch Container
The Elasticsearch container creates a single Elasticsearch node that stores the data and connects with Kibana; it only starts once the setup container reports healthy.
es01:
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
labels:
co.elastic.logs/module: elasticsearch
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- discovery.type=single-node
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${ES_MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
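The healthcheck looks odd at first glance: it passes when Elasticsearch rejects the request. An unauthenticated request coming back with a security_exception proves the node is up and security is enabled. You can reproduce it by hand with the ca.crt copied out earlier:

curl -s --cacert ./ca.crt https://localhost:9200
# -> security_exception, "missing authentication credentials" (this is the success condition)
curl -s --cacert ./ca.crt -u "elastic:${ELASTIC_PASSWORD}" https://localhost:9200
# -> normal cluster info JSON once you authenticate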
Kibana Container
The Kibana container creates a Kibana instance that connects to the Elasticsearch container and allows users to view their notes.
kibana:
depends_on:
es01:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
labels:
co.elastic.logs/module: kibana
volumes:
- certs:/usr/share/kibana/config/certs
- kibanadata:/usr/share/kibana/data
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
- XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
- XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
mem_limit: ${KB_MEM_LIMIT}
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
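This healthcheck keys off the 302 redirect Kibana returns at its root once it is serving traffic; the same check works from the host:

curl -s -I http://localhost:5601
# expect "HTTP/1.1 302 Found" once Kibana is ready; after that you can log in
# at http://localhost:${KIBANA_PORT} as the elastic user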
Python Containers
Both of these containers are built from dockerfiles, which gives more control over how the containers are created; that matters here because each image has Python dependencies to install and a program to launch with a command.
NOTE:
The dockerfiles in the project directory are very important here: they handle all of the container setup for the two applications.
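For what it's worth, once the .env and both dockerfiles are in place, bringing the whole stack up is one command; Compose builds the two Python images and starts everything in dependency order thanks to the depends_on conditions:

docker compose up -d --build   # use docker-compose on older installs
docker compose ps              # watch the healthchecks flip to "healthy"
docker compose logs -f streamlit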
Streamlit Container
The Streamlit container hosts and runs the application. I am still working on exposing the application to a public IP address. Getting this piece right will be essential for D&D groups that are not on the same network.
streamlit:
depends_on:
kibana:
condition: service_healthy
build:
dockerfile: .\dockerfile-streamlit
context: .\
ports:
- ${STREAMLIT_PORT}:8501
volumes:
- certs:/usr/src/app/certs
- '.\data:/usr/src/app/data:delegated'
- '.\project\streamlit:/usr/src/app/streamlit:delegated'
- '.\.streamlit:/usr/src/app/.streamlit:delegated'
Streamlit Dockerfile
The Streamlit dockerfile handles installing the Python dependencies, creating the app directory, and defining the command that runs the application.
###############
# BUILD IMAGE #
###############
FROM python:3.8.2-slim-buster AS build
# set root user
USER root
# virtualenv
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# add and install requirements
RUN pip install --upgrade pip
COPY ./requirements-streamlit.txt .
RUN pip install -r requirements-streamlit.txt
#################
# RUNTIME IMAGE #
#################
FROM python:3.8.2-slim-buster AS runtime
# create app directory
RUN mkdir -p /usr/src/app
# copy from build image
COPY --from=build /opt/venv /opt/venv
# set working directory
WORKDIR /usr/src/app
# send stdout/stderr straight to the terminal (no buffering)
ENV PYTHONUNBUFFERED 1
# don't write .pyc bytecode files
ENV PYTHONDONTWRITEBYTECODE 1
# put the virtualenv on PATH
ENV PATH="/opt/venv/bin:$PATH"
# Run streamlit
CMD streamlit run streamlit/main.py
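Note that none of the application code is baked into the image; it arrives through the bind mounts defined on the service, so the CMD's streamlit/main.py path resolves against the mounted .\project\streamlit directory and edits on the host show up in the container immediately. A quick way to confirm the virtualenv wiring (assuming the compose service name streamlit):

docker compose exec streamlit which streamlit
# -> /opt/venv/bin/streamlit, since /opt/venv/bin was prepended to PATH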
FastAPI Container
The FastAPI container hosts and runs the API code.
api:
depends_on:
es01:
condition: service_healthy
build:
dockerfile: .\dockerfile-api
context: .\
ports:
- ${API_PORT}:8000
volumes:
- '.\data:/usr/src/app/data:delegated'
- '.\project\api:/usr/src/app/api:delegated'
FastAPI Dockerfile
The FastAPI dockerfile handles installing the Python dependencies, creating the app directory, and defining the command that runs the API.
###############
# BUILD IMAGE #
###############
FROM python:3.8.2-slim-buster AS build
# set root user
USER root
# virtualenv
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# add and install requirements
RUN pip install --upgrade pip
COPY ./requirements-api.txt .
RUN pip install -r requirements-api.txt
#################
# RUNTIME IMAGE #
#################
FROM python:3.8.2-slim-buster AS runtime
# create app directory
RUN mkdir -p /usr/src/app
# copy from build image
COPY --from=build /opt/venv /opt/venv
# set working directory
WORKDIR /usr/src/app
# send stdout/stderr straight to the terminal (no buffering)
ENV PYTHONUNBUFFERED 1
# don't write .pyc bytecode files
ENV PYTHONDONTWRITEBYTECODE 1
# put the virtualenv on PATH
ENV PATH="/opt/venv/bin:$PATH"
# Run the API
CMD python3 api/main.py
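Same pattern as the Streamlit image, with one nice bonus: FastAPI serves interactive API docs out of the box, so once the api container is healthy you can poke at it from the host (assuming API_PORT=8000 and that main.py doesn't disable the docs routes):

curl -s -I http://localhost:8000/docs
# FastAPI's auto-generated Swagger UI; ReDoc lives at /redoc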
Closing Remarks
This section is subject to change; more containers (such as NGINX) may be added as needs arise, especially as I work through the networking tasks.
Next week, I will hopefully be covering my progress on exposing Kibana and Streamlit via my public IP, which would let my entire D&D group use the project. I am currently having trouble with Streamlit, so it may not go as planned.
Check out the GitHub repo below. You can also find my Twitch account in the socials link, where I will be actively working on this during the week while interacting with whoever is hanging out!
Happy Coding,
Joe