A few days ago we announced the first version of Pipeless, an open-source multimedia framework with a focus on computer vision.
With Pipeless, developers can build and deploy apps that analyze and manipulate audio and video in just minutes, and completely forget about building and maintaining multimedia pipelines.
Today, we will talk about how to easily deploy computer vision apps with Pipeless and Docker.
Let’s take the example from the previous post. To refresh your memory, we were creating an app that identifies when your cat is in the garden and sends you an alarm in that case.
To execute the example in the previous post, you needed to install Pipeless and its dependencies on your system. Even though Pipeless has very few dependencies, we understand how tedious this is for developers, so we have published a minimal container image that ships Pipeless with everything you need to run and deploy your apps.
Using the container image, you can deploy and run the application code with a single command and without installing anything on your system (apart from Docker). You can find the whole container documentation here.
Running an application with the Pipeless container image is really simple: we just need to provide the container with our application code. To do that, mount the application into the container’s /app directory, and it will be automatically loaded and executed:
docker run --rm -v /my/app/path:/app miguelaeh/pipeless run all
In the cats application example, you had to install OpenCV to draw the detected bounding boxes around the cat’s face; however, OpenCV is not shipped with the Pipeless container image by default. Since it is a Python dependency of this specific application, we need to provide it via the PIPELESS_USER_PYTHON_PACKAGES environment variable, and the container will install it at run time. So, extending the previous command:
docker run --rm -v /my/app/path:/app -e "PIPELESS_USER_PYTHON_PACKAGES=opencv-python" miguelaeh/pipeless run all
Running the above command, the application works as expected; nevertheless, the whole Pipeless framework runs inside the same container, i.e. under the same process, which is bad practice considering that Pipeless is composed of several components.
In order to run each Pipeless component isolated from the others, we have created a basic docker-compose.yaml file. The structure of this docker-compose file is the same one you would follow when deploying to the cloud, for example using Kubernetes.
The following explanation is based on a local Docker Compose setup, so that everyone reading this has the resources and access to try it out with just one computer.
Since the cat will be filmed by the garden camera, it doesn’t make sense to process local files. Instead, we will configure Pipeless to read the video from an HTTPS source (check the value of PIPELESS_INPUT_VIDEO_URI in the docker-compose file). The default value points to a hosted video for the example, but feel free to change it to an RTSP URI or any other supported protocol.
For the output video, ideally we would send it to an external destination as well; however, that is not interesting for this particular example, since you don’t want to sit watching the output stream. For the moment, let’s simply store it locally in the mounted app directory (see PIPELESS_OUTPUT_VIDEO_URI in the docker-compose file).
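To make the setup more concrete, here is a minimal sketch of what such a Compose service could look like. This is illustrative only: the actual docker-compose.yaml shipped with Pipeless splits the framework into its separate components, and the paths and URI values below are placeholders, not real endpoints.

```yaml
# Illustrative single-service sketch -- refer to the official
# docker-compose.yaml for the real multi-component layout.
services:
  pipeless:
    image: miguelaeh/pipeless
    command: run all
    volumes:
      - /my/app/path:/app   # your application code (placeholder path)
    environment:
      PIPELESS_USER_PYTHON_PACKAGES: opencv-python
      PIPELESS_INPUT_VIDEO_URI: "https://example.com/cat-video.mp4"  # placeholder
      PIPELESS_OUTPUT_VIDEO_URI: "file:///app/output.mp4"            # placeholder
```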
NOTE: In order to allow the non-root container to write into the mounted app directory, the directory must be owned by the root group (note that the root group is just another ordinary group without any special permissions).
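Assuming a typical Linux host, the group ownership can be set with the commands below; the path is the same placeholder used in the docker run examples, so replace it with your actual app directory.

```shell
# Make the mounted app directory group-owned by root and group-writable,
# so the non-root container user can write the output video into it.
sudo chgrp -R root /my/app/path
sudo chmod -R g+w /my/app/path
```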
We now have our computer vision application deployed with Docker Compose, reading streams from an external source that we can easily change without modifying our app code or infrastructure. We can also update the output URI to store the result remotely if we want, for example in an S3 bucket.
You wanted to receive a notification when your cat is in the garden, so storing the video with bounding boxes is not really useful for this case.
From a simplified point of view, our cat is in the garden when the model identifies some bounding boxes in the input video, so simply sending an email when a bounding box is detected could be enough.
If you want to avoid false positives, you can easily implement a mechanism that sends the email only if the cat stays in the garden for a few seconds. Remember that we process the input stream frame by frame, so finding a bounding box in an isolated frame could be a false positive.
The above is easily achievable thanks to the Pipeless post-process hooks. In the before hook, initialize a counter to zero. Then, if you find bounding boxes in a frame, increase the counter by one; if you don’t find any, reset it to zero. Finally, when the counter reaches your threshold (30 in the case of 1 second of a 30 FPS stream), simply send yourself an email; you can even include the cat picture in it. You can use any library to send that email.
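As a sketch, the debouncing logic could look like the snippet below. The function names and state layout are assumptions, not the real Pipeless hook API; the point is the counter state machine, which you would keep between frames and feed with the bounding boxes your model returns.

```python
# Hypothetical sketch of the debounce logic described above.
# How state is stored between frames depends on your Pipeless app;
# here it is just a plain dict created in the before hook.

ALERT_THRESHOLD = 30  # ~1 second at 30 FPS

def make_counter():
    """Initial state, created once in the before hook."""
    return {"frames_with_cat": 0, "alerted": False}

def update(state, bounding_boxes):
    """Call once per frame with the detected boxes.

    Increases the streak when boxes are found, resets it otherwise.
    Returns True exactly once, when the streak first reaches the
    threshold -- that is the moment to send yourself the email
    (with smtplib or any mail library you like).
    """
    if bounding_boxes:
        state["frames_with_cat"] += 1
    else:
        state["frames_with_cat"] = 0
        state["alerted"] = False
    if state["frames_with_cat"] >= ALERT_THRESHOLD and not state["alerted"]:
        state["alerted"] = True
        return True
    return False
```

The `alerted` flag ensures you get a single email per visit instead of one per frame once the threshold is passed.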
You have deployed your computer vision application using Docker, connected it to an external stream source, and set up a system to receive an email notification when your cat is in the garden. All this without building complex infrastructure, in under 15 lines of code, and with a couple of commands. We encourage you to keep playing with the example or to implement a different one from scratch.