DEV Community

What Tools Do You Use To Validate Jenkins Pipeline Syntax

Thomas H Jones II on September 13, 2018

One of the customers I do work for recently decided that, in addition to supplying native deployment-automation code targeting their desired cloud-...
Dian Fay

I've always just run them until they work. I'm not sure there's another way to validate the whole thing unless they've made adding exceptions to the script security defaults easier recently.

Casey Vega

Adding your code to a global shared library assumes the code is "trusted", meaning you should not have to validate each and every call outside of the sandbox.

jenkins.io/doc/book/pipeline/share...

"These libraries are considered "trusted:" they can run any methods in Java, Groovy, Jenkins internal APIs, Jenkins plugins, or third-party libraries. This allows you to define libraries which encapsulate individually unsafe APIs in a higher-level wrapper safe for use from any Pipeline"

Dian Fay

That looks new! The Jenkins instance I was working with was a couple years old and I don't remember seeing anything like that functionality.

Thomas H Jones II

Yeah. I'd been doing similar. However, as we've added new people to this particular project and have more "never used Jenkins" people on the team writing Jenkins jobs, more in the way of simple mistakes have been happening.

Always nice to get a green from Travis saying that at least the syntax of something is correct.

Casey Vega

I'd recommend the following:

  • use a git pre-commit hook that invokes linting via shell/curl
    • do this for all of your code, prevent bad commits whenever possible
  • use a branch to update your pipeline where possible (multibranch/organization job type)
  • use if statements for scripted and when blocks for declarative to develop safely when dealing with prod systems
  • define a PR strategy and code review changes
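As a concrete sketch of that first bullet: the hook below POSTs the Jenkinsfile to Jenkins' built-in declarative-linter endpoint (`/pipeline-model-converter/validate`). The server URL and credentials are placeholders for your own instance:

```shell
#!/bin/sh
# .git/hooks/pre-commit -- lint the Jenkinsfile before allowing a commit.
# JENKINS_URL and the user:api-token pair are placeholders.
JENKINS_URL="https://jenkins.example.com"

# POST the Jenkinsfile to Jenkins' built-in declarative linter.
RESULT="$(curl -s -X POST -u "user:api-token" \
  -F "jenkinsfile=<Jenkinsfile" \
  "$JENKINS_URL/pipeline-model-converter/validate")"

echo "$RESULT"

# On success the linter replies "Jenkinsfile successfully validated";
# anything else means a syntax problem, so block the commit.
case "$RESULT" in
  *"successfully validated"*) exit 0 ;;
  *) echo "Jenkinsfile failed validation; aborting commit." >&2; exit 1 ;;
esac
```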
Thomas H Jones II
  • use a git pre-commit hook that invokes linting via shell/curl
    • do this for all of your code, prevent bad commits whenever possible

We generally just define tests within our .yml files. With the various versions of git clients in use, it's more reliable than hoping that a given user's git client is - or can even be - configured to use standardized pre-commit hooks.
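For reference, a hypothetical .travis.yml fragment along those lines, handing files off to linters by extension (the Jenkins URL is a placeholder; `jq empty` is a cheap JSON syntax check):

```yaml
# Hypothetical .travis.yml fragment: route files to linters by extension.
language: generic
script:
  - find . -name '*.json' -print0 | xargs -0 -n1 jq empty   # JSON syntax
  - find . -name '*.sh' -print0 | xargs -0 shellcheck       # shell scripts
  - >-
    curl -s -X POST -F "jenkinsfile=<Jenkinsfile"
    "$JENKINS_URL/pipeline-model-converter/validate"
```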

  • use a branch to update your pipeline where possible (multibranch/organization job type)

Our overall model is "work in a branch of your own fork, then PR back to the appropriate branch of the root project".

  • use if statements for scripted and when blocks for declarative to develop safely when dealing with prod systems

This is what I'm looking for. Happen to have any examples? As mentioned above, when using things like GitLab's built-in CI tool or tools like Travis, we've mostly been just handing off tests by way of file-extensions (.json files going through jq; .sh files going through ShellCheck; etc.). I'd been hoping there was a similar linter tool out there to hand off to. If we had to go the "roll your own" route (as your suggestions above seem to hint at), it would be super helpful to have something to steal from / use as a reference.

Casey Vega

I wouldn't change what you're doing. Jenkins very much wants to be utilized in the same manner. If you can shell out to CLI tools to accomplish tasks, that's perfect. I encounter many pipelines that use tools like make, pip, npm.

The nice thing about a pre-commit hook is the user doesn't need much more installed than curl. I'll put together a little article here shortly about getting it set up, since I think others may benefit too.

Here are a couple of declarative examples and a single scripted example below. Typically we recommend you use declarative syntax where possible, assuming you don't already have a large scripted code base.

Declarative:

stage('Production Deploy') {
  when {
    branch 'master'
  }
  steps {
    sh 'kubectl rollout status deployment/project-deployment' 
  }
}
stage('Development Deploy') {
  when {
    not {
      branch 'master'
    }
  }
  steps {
    echo 'Doing stuff in DEV branch'
  }
}

And with scripted:

if (env.BRANCH_NAME == 'master') {
  echo 'Doing stuff in PROD'
} else {
  echo 'Doing stuff not in PROD'
}

Thomas H Jones II

I'd probably do more stuff in the DSL if the plugins I wanted to use were more "on board" with the pipeline thing. Seems like the bulk of the plugins (at least the ones our user-community requests) were designed for "Freestyle" usage rather than pipeline usage ...frequently meaning you have to go read the plugin's source code to suss out which of its bits are usable in Pipeline mode. Usually, it's just quicker/easier to use the OS-native tools via a sh() statement than to try to do things The Right Way™

This is doubly so when the reason I'm dicking with Jenkins-managed jobs is that the primary customer for the devops tooling keeps harping on "we need to eat our own dogfood". I find myself struggling with the "why would I waste already over-committed time wrapping shit in Jenkins when I can just use the AWS CLI directly". Adding the pipeline layer doesn't really seem to be more "infrastructure as code" than directly maintaining CFns and CFn parameter-files in relevant git projects. Maybe there's something "too obvious" that I'm missing. :p

I'll admit, though, that I'm a curmudgeon. To me, when you aggregate the knowledge required to use all of these "simplified" tools, you actually require more in the way of specific knowledge than if you just did things "the hard way" (see: all the various simplified markup languages like Markdown, Textile ...and then all of their variants). But, I'm looking at these things from the standpoint of the person charged with fielding and operating all the tools rather than our users who typically only use one or two of the tools (frequently after someone like me has created the further-simplifying automation for them).

Oh. Wow. Wasn't really meaning for that to turn into a rant! =)

Dian Fay

If it comes to it you can invoke plugins manually in the DSL, like these for coverage/lint reports:

step([
    $class: 'CoberturaPublisher',
    autoUpdateHealth: false,
    autoUpdateStability: false,
    coberturaReportFile: '**/cobertura-coverage.xml',
    failUnhealthy: false,
    failUnstable: false,
    maxNumberOfBuilds: 0,
    onlyStable: false,
    sourceEncoding: 'ASCII',
    zoomCoverageChart: false
])

step([$class: 'CheckStylePublisher',
    canRunOnFailed: true,
    defaultEncoding: '',
    healthy: '100',
    pattern: 'eslint.xml,**/nsp.xml',
    unHealthy: '90',
    useStableBuildAsReference: true
])
Thomas H Jones II

True... But that's part of what I was alluding to when saying you frequently need to dig through the GUI-oriented plugins' source to find the names of the knobs to turn via the DSL.

667

github.com/june07/sublime-Jenkinsf...

Sublime Text is my go-to, so I wrote that plugin to leverage Jenkins' own declarative linter.

Uploading the Jenkinsfile and running the pipeline on each edit was definitely a broken workflow and took loads of time.
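For anyone without an editor plugin, that same built-in linter is also reachable from a terminal; a sketch assuming the Jenkins CLI's SSH endpoint is enabled (host and port are placeholders for your own controller):

```shell
# Lint a local Jenkinsfile against Jenkins' declarative linter
# over the CLI's SSH endpoint. Host and port are placeholders.
ssh -p 8222 jenkins.example.com declarative-linter < Jenkinsfile
```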