A number of months ago, I delivered a set of CloudFormation templates and Jenkins pipelines to drive them. Recently, I was brought back onto the project to help them clean some things up. One of the questions I was asked was, "is there any way we can reduce the number of parameters the Jenkins jobs require?"
While I'd originally developed the pipelines on a Jenkins server that had the "Rebuild" plugin, the Jenkins servers they were trying to use didn't have that plugin. Thus, in order to re-run a Jenkins job, they had two choices: use the built-in "replay" option or the built-in "build with parameters" option. The former precludes the ability to change parameter values. The latter means that you have to repopulate all of the parameter values. When a Jenkins job has only a few parameters, using the "build with parameters" option is relatively painless. When you start topping five parameters, it becomes more and more painful to use when all you want to do is tweak one or two values.
Unfortunately, for the sake of portability across this customer's various Jenkins domains, my pipelines require a minimum of four parameters just to enable tailoring for a specific Jenkins domain's environmental uniqueness. Yeah, you'd think that the various domains' Jenkins services would be sufficiently identical to not require this ...but we don't live in a perfect world. Apparently, even though the same group owns three of the domains in use, each deployment is pretty much wholly unlike the others.
That aside... I replied back, "I can probably make it so that the pipelines read the bulk of their parameters from an S3-hosted file, but it will take me some figuring out. Once I do, you should only need to specify which Jenkins stored-credentials to use and the S3 path of the parameter file". Yesterday, I set about figuring out how to do that. It was, uh, beastly.
At any rate, what I found was that I could store parameter/value-pairs in a plain text file posted to S3. I could then stream-down that file and use a tool like awk to extract the values and assign them to variables. Only problem is, I like to segment my Jenkins pipelines ...and it's kind of painful (in much the same way that rubbing ghost peppers into an open wound is "kind of" painful) to make variables set in one job-stage available in another job-stage. Ultimately, what I came up with was code similar to the following (I'm injecting explanation within the job-skeleton to hopefully make things easier to follow):
pipeline {
    agent any

    […elided…]

    environment {
        AWS_DEFAULT_REGION = "${AwsRegion}"
        AWS_SVC_ENDPOINT = "${AwsSvcEndpoint}"
        AWS_CA_BUNDLE = '/etc/pki/tls/certs/ca-bundle.crt'
        REQUESTS_CA_BUNDLE = '/etc/pki/tls/certs/ca-bundle.crt'
    }
My customer operates in a couple of different AWS partitions. The environment{} block customizes the job's behavior so that it can work across the various partitions. Unfortunately, I can't really hard-code those values and still maintain portability. Thus, those values are populated from the following parameters{} section:
parameters {
    string(
        name: 'AwsRegion',
        defaultValue: 'us-east-1',
        description: 'Amazon region to deploy resources into'
    )
    string(
        name: 'AwsSvcEndpoint',
        description: 'Override the service-endpoint as necessary'
    )
    string(
        name: 'AwsCred',
        description: 'Jenkins-stored AWS credential with which to execute cloud-layer commands'
    )
    string(
        name: 'ParmFileS3location',
        description: 'S3 URL for parameter file (e.g., "s3://<bucket>/<object_key>")'
    )
}
The parameters{} section allows a pipeline-user to specify environment-appropriate values for the AwsRegion, AwsSvcEndpoint and AwsCred used to govern the behavior of the AWS CLI utilities. Yes, there are plugins available that would obviate needing to use the AWS CLI, but they fall into the same bucket as the other plugins I can't rely on being universally available. Thus, I have to rely on the AWS CLI, since that one actually is available in all of their Jenkins environments. But for the need to work across AWS partitions, I could have made the pipeline require only a single parameter: ParmFileS3location.
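To make that concrete: were every one of the target domains sitting in the same partition, the parameters{} section could have been pared back to roughly what I'd promised in that initial conversation. A purely hypothetical version (not what actually shipped) would look like:

parameters {
    string(
        name: 'AwsCred',
        description: 'Jenkins-stored AWS credential with which to execute cloud-layer commands'
    )
    string(
        name: 'ParmFileS3location',
        description: 'S3 URL for parameter file (e.g., "s3://<bucket>/<object_key>")'
    )
}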
What follows is the stage that prepares the run-environment for the rest of the Jenkins job:
stages {
    stage ('Push Vals Into Job-Environment') {
        steps {
            // Make sure work-directory is clean //
            deleteDir()

            // Fetch parm-file //
            withCredentials([[
                $class: 'AmazonWebServicesCredentialsBinding',
                accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                credentialsId: "${AwsCred}",
                secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
            ]]) {
                sh '''#!/bin/bash
                   # For compatibility with ancient AWS CLI utilities
                   if [[ -n "${AWS_SVC_ENDPOINT:-}" ]]
                   then
                      AWSCMD="aws s3 --endpoint-url https://s3.${AWS_SVC_ENDPOINT}"
                   else
                      AWSCMD="aws s3"
                   fi
                   ${AWSCMD} --region "${AwsRegion}" cp "${ParmFileS3location}" Pipeline.envs
                '''
            }

            // Populate job-env from parm-file //
            script {
                def GitCred = sh script:'awk -F "=" \'/GitCred/{ print $2 }\' Pipeline.envs',
                    returnStdout: true
                env.GitCred = GitCred.trim()

                def GitProjUrl = sh script:'awk -F "=" \'/GitProjUrl/{ print $2 }\' Pipeline.envs',
                    returnStdout: true
                env.GitProjUrl = GitProjUrl.trim()

                def GitProjBranch = sh script:'awk -F "=" \'/GitProjBranch/{ print $2 }\' Pipeline.envs',
                    returnStdout: true
                env.GitProjBranch = GitProjBranch.trim()

                […elided…]
            }
        }
    }
The above stage-definition has three main steps:
- The deleteDir() statement ensures that the workspace assigned on the Jenkins agent-node doesn't contain any content left over from prior runs. Leftovers can have bad effects on subsequent runs. Bad juju.
- The shell invocation is wrapped in a call to the Jenkins credentials-binding plugin (and the CloudBees AWS helper-plugin). Wrapping the shell-invocation this way allows the contained call to the AWS CLI to work as desired. Worth noting:
  - The credentials-binding plugin is a default Jenkins plugin
  - The CloudBees AWS helper-plugin is not
  If the CloudBees plugin is missing, the above won't work. Fortunately, that's one of the optional plugins they do seem to have in all of the Jenkins domains they're using.
- The script{} section does the heavy lifting of pulling values from the downloaded parameters file and making those values available to subsequent job-stages.
The really important part to explain is the script{} section, as the prior two steps are easily understood from either the Jenkins pipeline documentation or the innumerable Google-hits you'd get on a basic search. Basically, for each parameter that I need to extract from the parameter file and make available to subsequent job-stages, I have to do a few things:
- I have to define a variable scoped to the currently-running stage.
- I have to pull value-data from the parameter file and assign it to the stage-local variable. I use a call to a sub-shell so that I can use awk to do the extraction.
- I then create a global-scope environment variable from the stage-local variable. I need to do things this way so that I can invoke the .trim() method against the stage-local variable. Failing to do that leaves an unwanted <CRLF> at the end of my environment variable's value. To me, this feels like back when I was writing Perl code for CGI scripts and other utilities and had to call chomp() on everything. At any rate, absent the need to clip off the deleterious <CRLF>, I probably could have done a direct assignment. Which is to say, I might have been able to simply do:
env.GitProjUrl = sh script:'awk -F "=" \'/GitProjUrl/{ print $2 }\' Pipeline.envs',
    returnStdout: true
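For what it's worth, it also looks like parenthesizing the sh() step would let .trim() chain directly onto it, skipping the stage-local variable while still clipping the newline. I haven't gone back and verified this against their Jenkins domains, but a sketch of that variant would look like:

// Untested variant: parentheses around the sh() step allow chaining .trim() directly
env.GitProjUrl = sh(
    script: 'awk -F "=" \'/GitProjUrl/{ print $2 }\' Pipeline.envs',
    returnStdout: true
).trim()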
Once the parameter file's values have all been pushed into the Jenkins job's environment, they're available for use. In this particular case, that means I can then use the Jenkins git SCM sub-module to pull the desired branch/tag from the desired git project using the Jenkins-stored SSH credential specified within the parameters file:
stage("Repository Pull") {
steps {
checkout scm: [
$class: 'GitSCM',
userRemoteConfigs: [[
url: "${GitProjUrl}",
credentialsId: "${GitCred}"
]],
branches: [[
name: "${GitProjBranch}"
]]
],
poll: false
}
}
}
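For reference, the parameter file itself is nothing fancy. A notional Pipeline.envs (the key names are the ones the script{} block greps for; the values below are strictly made-up examples) would contain simple key=value lines like:

GitCred=jenkins-stored-ssh-credential-id
GitProjUrl=ssh://git@git.example.com/my-team/my-project.git
GitProjBranch=master

Each script{} assignment just awks the relevant key's value out of that file and promotes it into the job's global environment.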
But, yeah, sorting this out resulted in quite a few more shouts of "seriously, Jenkins?!?"
Top comments (2)
Ahh Jenkins. Jenkins is jank ;)
Arguably unique in its generic applicability: given a large number of operations problems, I can usually cobble together a Jenkins solution that works. However, it definitely has some odd behavior!
Yeah, it's eminently flexible. It's just that the gymnastics I have to engage in, between Jenkins' built-in peculiarities, the crappiness of the various plugins' (pipeline-related) documentation, and the fact that my customers' (Jenkins) service-owners can't seem to figure out how to keep their service-domains' capabilities synchronized (or up to date), make dealing with it get old really fast.
In general, I tend to prefer to deliver automation via other methods. However, a couple of my customers demand Jenkins pipelines so that they can hire "minimum wage" technicians – who have just enough knowledge to fill in web-forms – rather than more-capable people. Given that I'm generally delivering Jenkins pipelines that simply act as overlays for other abstractions that have their own web UI, I'm not sure what trading one web UI for another buys them.
/shrug