As you may know, the host’s NVIDIA GPU is not exposed to your containers by default. So you have to do a little extra work to make your models talk to your GPU. Once that is set up, deploying your machine learning frameworks to production becomes easier and more reliable.
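To illustrate the problem: a plain container cannot see the GPU, while one started with the `--gpus` flag (which requires the NVIDIA Container Toolkit on the host) can. This is only a sketch; the `nvidia/cuda` image tag below is an assumption, so pick one matching your CUDA version:

```shell
# Without GPU passthrough: nvidia-smi is not available inside the container
docker run --rm ubuntu:20.04 nvidia-smi

# With the NVIDIA Container Toolkit installed on the host,
# --gpus all exposes the host's GPU(s) to the container
docker run --rm --gpus all nvidia/cuda:11.1.1-base-ubuntu20.04 nvidia-smi
```

If the second command prints the usual `nvidia-smi` table, the container can reach the GPU.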
Here are the main articles I went through when I tried to dockerize my app with GPU support. Along the way I ran into many problems that those articles did not mention, and I dug up a solution for each one from discussion threads around the internet. The exact fixes depend on your architecture, your ML framework, your host machine, and so on. I will release an official blog post soon.
- How to Use the GPU within a Docker Container: link
- How to Use an NVIDIA GPU with Docker Containers: link
- Complete guide to building a Docker Image serving a Machine learning system in Production: link
- CUDA + Docker = ❤️ for Deep Learning: link
Deployment Environment:
- Ubuntu: 20.04
- Graphics card: RTX 3090
- Python: 3.7
- Torch: 1.8
- CUDA: 11.1
- CUDNN: 8.2.1
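A minimal Dockerfile matching the environment above could look like the sketch below. This is not my exact production setup; the base-image tag, the deadsnakes PPA for Python 3.7 (Ubuntu 20.04 ships 3.8 by default), and the `main.py` entry point are all assumptions:

```dockerfile
# Assumed base image: CUDA 11.1 + cuDNN 8 runtime on Ubuntu 20.04
FROM nvidia/cuda:11.1.1-cudnn8-runtime-ubuntu20.04

# Install Python 3.7 via the deadsnakes PPA (one option on Ubuntu 20.04)
RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository -y ppa:deadsnakes/ppa && \
    apt-get install -y python3.7 python3.7-distutils python3-pip

# Torch 1.8 built against CUDA 11.1, from the PyTorch wheel index
RUN python3.7 -m pip install torch==1.8.0+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html

WORKDIR /app
COPY . .
CMD ["python3.7", "main.py"]
```

Build it as usual, then run with `docker run --gpus all <image>` so the container can see the GPU; inside the container, `torch.cuda.is_available()` should return `True`.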