From the Ollama Discord help channel:
Docker compose: How do you pull a model automatically with container creation?
The conundrum of containerizing Ollama is that the server must already be running before you can pull a model. If you run Ollama with `docker compose`, there's no built-in way to pull a model from the Ollama registry at container creation. What's a budding generative AI nerd to do?
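For context, a minimal compose file for this setup might look like the sketch below. The `ollama/ollama` image and port `11434` are Ollama's published defaults; the service name `autollama` and the `./ollama` host directory (mounted at `/root/.ollama`, where the pull script lives) are assumptions for this example.

```yaml
# docker-compose.yml — minimal sketch, not a complete production config
services:
  autollama:
    image: ollama/ollama
    ports:
      - "11434:11434"        # Ollama's default API port
    volumes:
      - ./ollama:/root/.ollama  # persists models and holds pull_model.sh
```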
The answer is more `docker compose`, as in `docker compose exec`, which runs a script inside the container to download a model. Put both commands in a startup script, and you have autollama:
```sh
docker compose up -d
# give the Ollama server a moment to start before exec'ing into it
sleep 5
docker compose exec autollama sh /root/.ollama/pull_model.sh
```
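The post doesn't show `pull_model.sh` itself. A minimal sketch of what it could contain follows; the `MODELS` variable and the default model name `llama3` are assumptions, not part of the original:

```sh
#!/bin/sh
# pull_model.sh — hypothetical sketch of the script run by `docker compose exec`.
# MODELS is an assumed convention: a space-separated list of models to pull.
MODELS="${MODELS:-llama3}"

pull_models() {
    for m in $MODELS; do
        echo "pulling $m"
        # `ollama pull` fetches a model from the Ollama registry
        ollama pull "$m" || echo "failed to pull $m" >&2
    done
}

# run only when the ollama CLI is available (it is inside the container)
command -v ollama >/dev/null 2>&1 && pull_models || true
```

Because the script tolerates a failed pull and keeps going, a bad model name won't abort the rest of the list.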
Come and get your ollama love here.