Juan Julián Merelo Guervós
Deploying a Flask and Logstash application to Digital Ocean using Docker Cloud


Do you know one of the things that every single application has to do? Log. Even if you are not churning data, you still need to know what everyone is doing with your stuff. Logstash is probably one of the best choices for the job. Besides processing your data, it is part of the ELK stack, which also includes the Elasticsearch data store and the Kibana dashboard construction kit.
So, when creating a web application, you might as well wire it up to this (or, for that matter, any other) logging service. That's what we are going to do with this application, originally created as an example for a Platzi course.

But first, the cloud.

Docker Cloud is an extremely good idea. It's a uniform web, API and, consequently, CLI bridge to using containers with any cloud vendor. You no longer have to worry about vendor-specific templates or command line interfaces. One CLI to rule them all: the Docker Cloud CLI, which, for some reason, works only with Python 2.x.

Being a bridge, you have to provide what's on the other side: you bring your own cloud. I have access to a few of them, but Digital Ocean provided me with a coupon, and setting it up was a snap.

And if you want to try it on your own, this link with my referral code will get you $10, and me $25 after you sign up, to keep experimenting with and creating this and other tutorials. Thanks for using it!

OK, enough with the publicity. Let's assume you have everything set up, you have downloaded the docker-cloud CLI, and you have switched to Python 2.7 to get it to work. Then you need to log in to your account with

docker login

You will need to create a rather big node to hold Logstash. It's written in Java, and Java applications are usually memory hogs. This will do:

docker-cloud nodecluster create -t 1 --tag platzi platzi-big digitalocean fra1 2gb

This creates a node called platzi-big using the provider digitalocean in the first data center in Frankfurt (fra1) and the 2gb instance size. These last two were the trickiest part: knowing the exact name of everything. However, this program, which taps the Docker Cloud API, returns all the data centers and instances available, so you can easily pick and choose. This is actually one of the differences between vendors: data centers and instances will all have different names. Other than that, docker-cloud provides a nice and seamless interface for deploying your applications to the cloud.

Ready to roll

Spoiler: You can get the whole thing in this GitHub repo.

In Docker, everything gets its own container: Flask will go in one, Logstash in another. You need to connect Flask to Logstash, which offers a wide variety of ways to do so. We will use a TCP port to send JSON logs to it, but in order to do so we need to configure Green Unicorn (gunicorn), the server that actually runs the web service and produces the logs. This gunicorn-logging.conf will do:

[loggers]
keys=root, gunicorn.error, gunicorn.access

[handlers]
keys=console, logstash

[formatters]
keys=json

[logger_root]
level=INFO
handlers=console

[logger_gunicorn.error]
level=ERROR
handlers=console
propagate=0
qualname=gunicorn.error

[logger_gunicorn.access]
level=INFO
handlers=logstash
propagate=0
qualname=gunicorn.access

[handler_console]
class=StreamHandler
formatter=json
args=(sys.stdout, )

[handler_logstash]
class=logstash.TCPLogstashHandler
formatter=json
args=('logstash',5959)

[formatter_json]
class=jsonlogging.JSONFormatter

There are a couple of important things to look at here. You have to define a Logstash handler, configure it, and also configure the formatter used by that handler, which will emit JSON. class=logstash.TCPLogstashHandler says you are going to use a driver that writes to a TCP port. With args you tell it the name of the machine, which we are going to call logstash, and the port where it will be listening. Of course, all of this has to be reflected in the requirements for the project, where we include json-logging-py for formatting and python3-logstash, which we are importing implicitly when we configure gunicorn this way. Since this is done at that level, we don't need to change anything in our app. It will just do the magic by itself.
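To make the wiring less abstract, here is a rough, standard-library-only sketch of what a TCP JSON handler of this kind does under the hood. The field names and the tiny collector server are my own for illustration, not the actual python3-logstash wire format:

```python
import json
import logging
import socket
import socketserver
import threading

class JSONTCPHandler(logging.Handler):
    """Stand-in for logstash.TCPLogstashHandler: serialize each log
    record as one JSON line and push it over a TCP connection."""
    def __init__(self, host, port):
        super().__init__()
        self.host, self.port = host, port

    def emit(self, record):
        event = {"message": record.getMessage(),
                 "level": record.levelname,
                 "logger": record.name}
        with socket.create_connection((self.host, self.port)) as sock:
            sock.sendall(json.dumps(event).encode() + b"\n")

# Tiny stand-in for the Logstash TCP input: collect one JSON line.
received = []

class Collector(socketserver.StreamRequestHandler):
    def handle(self):
        received.append(self.rfile.readline())

server = socketserver.TCPServer(("127.0.0.1", 0), Collector)
listener = threading.Thread(target=server.handle_request)
listener.start()

# Same logger name that gunicorn-logging.conf routes to Logstash.
log = logging.getLogger("gunicorn.access")
log.setLevel(logging.INFO)
log.addHandler(JSONTCPHandler("127.0.0.1", server.server_address[1]))
log.info("GET / HTTP/1.1 200")

listener.join(timeout=5)
server.server_close()
print(json.loads(received[0].decode()))
```

Every log.info() call ends up as one JSON line on the socket, which is exactly what the Logstash tcp input with a json codec expects on the other side.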

On to the clouds

Docker Cloud uses something called a stack file to configure the deployment. It's substantially similar to a Docker Compose file, except for a few things; the main one is that Docker Cloud does not build images from Dockerfiles: they must already be stored somewhere. The easiest thing is to store them in a public registry, but of course you can use private registries, especially those provided by the cloud vendors themselves. We have defined this very simple stack file:

logstash:
  image: docker.elastic.co/logstash/logstash-oss:6.2.1
  expose:
    - "5959"
  command: -e 'input { tcp {  port => 5959  codec => json   } } output { stdout {} }'

web:
  image: jjmerelo/platzi-servicio-web
  ports:
    - "80:80"
  links:
    - logstash
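If the one-liner passed with -e gets any longer, it is more readable as a pipeline configuration file; this is the same pipeline written out (an equivalent sketch, not something the stack above actually mounts):

```
input {
  tcp {
    port  => 5959
    codec => json
  }
}
output {
  stdout {}
}
```

You would mount it into the container and drop the command: line; the inline -e form and the file form are interchangeable.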

There are two containers. The first, which we are calling logstash, is deployed from the official image provided by Elastic; in it, we expose the port where we want the logs to go and define the input and output configuration inline with -e. I know, this configuration just prints the logs to standard output. You would probably want to connect it to some data store instead and do cool stuff with the logs; there are several examples of how to do that, so we will leave it for the next tutorial. logstash is also going to be the hostname of the container; remember we used that name in the gunicorn log configuration.

The configuration for the Flask part includes the image, which is going to be pulled from Docker Hub, the definition of ports so that we can publish it to the outside world, and the important part: the links to logstash, which creates a bridge network between the two containers.
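As a sanity check that the application itself stays logging-free, here is a hypothetical minimal stand-in for the web service (the real one is a Flask app from the Platzi course, but any WSGI callable behaves the same under gunicorn):

```python
# app.py -- hypothetical stand-in for the course's Flask service.
# Note there is no logging code here at all: gunicorn emits one
# gunicorn.access record per request, and gunicorn-logging.conf
# routes those records to the logstash container.
def app(environ, start_response):
    body = b'{"status": "ok"}'
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Gunicorn would serve it with something like gunicorn --log-config gunicorn-logging.conf --bind 0.0.0.0:80 app:app (the module and callable names are assumptions here; your own image's entry point will differ).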

And that's it. Run docker-cloud stack up, and docker-cloud stack inspect long-number-returned-by-the-prev-command will show you what's going on. It might take a while for the containers to start up, mainly logstash. That inspect command will tell you the UUID for each service, and docker-cloud service logs long-UUID will return something like this for the logstash container:

logstash-1 | 2018-02-23T17:34:50.133858365Z [2018-02-23T17:34:50,133][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:5959", :ssl_enable=>"false"}
logstash-1 | 2018-02-23T17:34:50.464871650Z [2018-02-23T17:34:50,464][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4a26bdaa run>"}
logstash-1 | 2018-02-23T17:34:50.653066642Z [2018-02-23T17:34:50,650][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}

Hungry for more?

Check out how to scale, redeploy and create blue-green deployments using docker-cloud, for instance. Or wait for the next installment of this tutorial.

Also, remember to terminate your stack if you've only done this for show; that will get rid of the containers. Then terminate the node cluster you started as well. The whole thing above might have set you back a few cents, but there's no point in paying for idle nodes until you really need them.
