Lately I've been working on (re-)learning Jenkins and learning how to use its Pipelines feature.
We had set up a Jenkins server at $curr_job to off-load some of the potential work from Bamboo, and also because, as a DevOps team, we're trying to move away from Bamboo since we're not too happy with it.
Docker Swarm
First, let's address the ten thousand pound elephant in the room: Docker Swarm is less cool than Kubernetes.
For small, easy-to-set-up-and-manage Docker clusters on-prem, Docker Swarm is fine and perfectly serviceable. The automatic load balancing of published ports across all nodes/managers by default is really slick.
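For example (a quick standalone illustration, nothing to do with the Jenkins set-up yet): publish a port on any service and every node in the swarm will answer on it, with the routing mesh balancing across the service's tasks.

    docker service create --name web --publish 8080:80 nginx
    # Any node's IP now answers on 8080, regardless of which node the task landed on:
    curl http://<any-node-ip>:8080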
That being said, not everything is great. For this to work, you need to expose the Docker Swarm API, and doing that is fairly ugly. I'm not going to get into securing it, since this is a lab set-up.
Exposing the swarm in an insecure manner is fairly straightforward. Find the systemd unit file, add -H tcp://0.0.0.0:2375 to the end of the ExecStart line, and then restart the service.
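On a stock systemd host that roughly looks like the following — shown here as a drop-in override rather than editing the shipped unit directly; your ExecStart line may already have extra flags that should stay as they are:

    sudo systemctl edit docker
    # In the override that opens, clear ExecStart and re-declare it with the extra -H flag, e.g.:
    #   [Service]
    #   ExecStart=
    #   ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
    sudo systemctl daemon-reload
    sudo systemctl restart docker

    # Quick sanity check from another machine:
    docker -H tcp://<manager-ip>:2375 info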
Jenkins Service
Here's my docker-compose.yml I used to deploy this:
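At least, a minimal sketch of it — the essentials are just the official jenkins/jenkins:lts image published on the two default ports, pinned to a manager node, and (deliberately) no volume:

    version: "3.7"

    services:
      jenkins:
        image: jenkins/jenkins:lts
        ports:
          - "8080:8080"    # web UI
          - "50000:50000"  # inbound (JNLP) agent port
        deploy:
          replicas: 1
          placement:
            constraints:
              - node.role == manager

I deployed it to the swarm with docker stack deploy -c docker-compose.yml jenkins.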
Beyond the Docker/Swarm stuff, I'm not getting into Jenkins configuration; there's plenty of documentation out there for general Jenkins configuration and usage.
I can't stress enough that this isn't a production service. Nothing is persistent.
Jenkins Docker Swarm Plug-In
The next two sections are the most annoying ones. The documentation, when it exists, is definitely lacking.
To enable Docker Swarm access in Jenkins, you need to install the 'Docker Swarm Plugin' and then reload Jenkins.
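If you'd rather script that than click through the plugin manager, the Jenkins CLI can do the same thing (this assumes the plugin ID is docker-swarm and that you have an API token to authenticate with):

    java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:API_TOKEN install-plugin docker-swarm -restart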
That adds a couple of things to Jenkins. First and foremost, the Docker Swarm Dashboard:
Then, more importantly, in Manage Jenkins -> Configure System there's an option at the bottom to add a Cloud:
The configuration for that is pretty straightforward:
The 'Test' button is a little flaky from what I've seen; you might need to save your configuration before it will succeed.
Jenkins JNLP Agents
This was the most interesting part. And by interesting, I mean frustrating. This is where the documentation was really lacking, and I happened to find a blog post with a quick explanation of how this all works.
When you configure your cluster above, you get a button to add templates:
Clicking that button gives you this:
The label is what you use for targeting from your pipelines, and the image is the Docker image to pull. Jenkins actually provides a bunch of agent images to use: https://github.com/jenkinsci/jnlp-agents
The docker image is different from the other images: it works as documented and is fine with the example configuration. The general idea is that when the container starts, it does a curl to the Jenkins master and downloads the JNLP agent, which ensures there are no version mismatches.
As far as I can tell, none of the other agent images work that way. They're all based on their language/toolset's Alpine image and don't have tools like curl or git installed; instead, in the Dockerfile, Jenkins copies the agent .jar from the base JNLP agent image.
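If you want to roll your own agent image the same way, the pattern is just copying the agent .jar on top of whatever toolset you need. A rough sketch — the base image, package names, and jar path here are my assumptions, not lifted from the official Dockerfiles:

    # Hypothetical custom Python agent image
    FROM python:3-alpine

    # The agent .jar needs a JRE to run, and git is handy for checkouts
    RUN apk add --no-cache openjdk11-jre git

    # Copy the JNLP agent .jar from the official Jenkins agent image
    COPY --from=jenkins/agent:latest /usr/share/jenkins/agent.jar /usr/share/jenkins/agent.jar

With git already baked in, the template command would then just be the java -jar part of the python command below.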
jnlp-docker command:
sh
-cx
curl --connect-timeout 20 --max-time 60 -o agent.jar $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JAR_URL && java -jar agent.jar -jnlpUrl $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL -secret $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_SECRET -noReconnect -workDir /tmp
jnlp-python command:
sh
-cx
apk -u add git && java -jar /usr/share/jenkins/agent.jar -jnlpUrl $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_JNLP_URL -secret $DOCKER_SWARM_PLUGIN_JENKINS_AGENT_SECRET -noReconnect -workDir /tmp
It took a bit of debugging and digging to figure out what was different between the two container images and why my Docker container was working but my Python container wasn't.
The other big 'gotcha' I ran into is that the container needs to run as root, or it will fail to launch. For the Docker agent, you also need to make sure you mount /var/run/docker.sock from the host into the container so the Docker client in the container has a Docker daemon to work against.
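For reference, those two settings boil down to the same thing you'd pass to a plain docker run (the agent image name here is just a placeholder):

    # Run as root and hand the host's Docker socket to the container
    docker run --rm -u root -v /var/run/docker.sock:/var/run/docker.sock <your-agent-image>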
Jenkinsfile
pipeline {
    agent { label 'jenkins-docker' }
    stages {
        stage('Print Test') {
            agent { label 'jenkins-docker' }
            steps {
                sh "uname -a"
                sh "docker run hello-world"
            }
        }
        stage('Schedule Tests') {
            agent { label 'jenkins-python' }
            steps {
                sh 'python --version'
                sh 'ls .'
                sh 'printenv'
            }
        }
    }
}
The above Jenkinsfile is a working example where I set an agent at the pipeline level and on each stage (the job spawns on the first agent, and that in turn spawns the agents for each of the stages).
I'm not going to get into triggering builds via API calls; again, that's well covered by the existing Jenkins documentation.
Hopefully this helps. It'll be nice to have this reference for myself in the future.
Top comments (2)
Hey Roy, thanks for the post. I'm not getting the whole JNLP agent thing, or how to actually use this with my own Docker images that I have in a local on-prem image registry. How do you tell Jenkins where to deploy (i.e. if I want to use a node with GPUs vs. CPU only)?
So, the JNLP agent is the Jenkins Agent.
When you first go to create the Docker Template, Jenkins fills out some information for you. If your container has curl installed in it, the auto-filled command will reach out to the master instance, grab the agent, and run it. If you don't have curl in your container, you can build a container that already has the JNLP agent in it (the GitHub repo from Jenkins I linked to has examples of that) and use the command I have above for the python container.
I haven't played around with running things from our private registry yet, but it looks pretty straightforward. In the image field you put the image ref like you would in a compose file, and there's a button at the bottom of the template section to add registry authentication; you just need to have the registry credentials stored in Jenkins already.
For host constraints, there's a 'Placement' button you can use to restrict where containers are run.