re: What Tools Do You Use To Validate Jenkins Pipeline Syntax

re: I've always just run them until they work. I'm not sure there's another way to validate the whole thing unless they've made adding exceptions to th...
 

Yeah. I'd been doing similar. However, as we've added new people to this particular project and have more "never used Jenkins" people on the team writing Jenkins jobs, more simple mistakes have been creeping in.

Always nice to get a green from Travis saying that at least the syntax of something is correct.

 

I'd recommend the following:

  • use a git pre-commit hook that invokes linting via shell/curl
    • do this for all of your code, prevent bad commits whenever possible
  • use a branch to update your pipeline where possible (multibranch/organization job type)
  • use if statements for scripted and when blocks for declarative to develop safely when dealing with prod systems
  • define a PR strategy and code review changes
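
A minimal sketch of that first item, assuming the standard Jenkins declarative-linter HTTP endpoint and a JENKINS_URL environment variable pointing at your own controller:

```shell
#!/bin/sh
# .git/hooks/pre-commit sketch: refuse the commit unless the Jenkinsfile
# passes the Jenkins declarative-pipeline linter. JENKINS_URL is an
# assumption -- point it at your own controller.

# The validate endpoint answers in plain text; treat anything without a
# "successfully validated" verdict as a failure.
verdict_ok() {
    case "$1" in
        *"successfully validated"*) return 0 ;;
        *) return 1 ;;
    esac
}

main() {
    [ -f Jenkinsfile ] || return 0   # nothing to lint in this repo
    result=$(curl -s -X POST -F "jenkinsfile=<Jenkinsfile" \
        "$JENKINS_URL/pipeline-model-converter/validate")
    echo "$result"
    verdict_ok "$result"
}

main "$@"
```

Dropping that into .git/hooks/pre-commit (and marking it executable) blocks commits with a broken Jenkinsfile before they ever reach CI.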
  • use a git pre-commit hook that invokes linting via shell/curl
    • do this for all of your code, prevent bad commits whenever possible

We generally just define tests within our .yml files. With the various versions of git clients in use, it's more reliable than hoping that a given user's git client is - or can even be - configured to use standardized pre-commit hooks.

  • use a branch to update your pipeline where possible (multibranch/organization job type)

Our overall model is "work in a branch of your own fork, then PR back to the appropriate branch of the root project".

  • use if statements for scripted and when blocks for declarative to develop safely when dealing with prod systems

This is what I'm looking for. Happen to have any examples? As mentioned above, when using things like GitLab's built-in CI tool or tools like Travis, we've mostly just been handing off tests by way of file extensions (.json files going through jq; .sh files going through ShellCheck; etc.). I'd been hoping there was a similar linter tool out there to hand off to. If we have to go the "roll your own" route (as your comment above seems to hint at), it would be super helpful to have something to steal from or use as a reference.
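
For concreteness, that extension-based hand-off looks roughly like this sketch; the Jenkinsfile case is the hand-off we're hoping for, and the validate endpoint and JENKINS_URL are assumptions about how it would be wired up:

```shell
#!/bin/sh
# Sketch: route each file to a linter by extension/name. jq and
# shellcheck are the tools mentioned above; the Jenkins validate
# endpoint and JENKINS_URL are assumptions about the setup.
lint_one() {
    case "$1" in
        *.json) jq empty "$1" ;;
        *.sh)   shellcheck "$1" ;;
        Jenkinsfile|*/Jenkinsfile)
            curl -s -X POST -F "jenkinsfile=<$1" \
                "$JENKINS_URL/pipeline-model-converter/validate" \
                | grep -q "successfully validated" ;;
        *) return 0 ;;   # no linter registered for this file type
    esac
}

# Usage (e.g. in CI, over the files touched by a change):
#   git diff --name-only origin/master | while read -r f; do lint_one "$f"; done
```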

I wouldn't change what you're doing. Jenkins very much wants to be used in that same manner. If you can shell out to CLI tools to accomplish tasks, that's perfect. I encounter many pipelines that use tools like make, pip, and npm.

The nice thing about a pre-commit hook is that the user doesn't need much more installed than curl. I'll put together a little article shortly on getting it set up, since I think others may benefit too.

Here are a couple of declarative examples and a single scripted example below. Typically we recommend using declarative syntax where possible, assuming you don't already have a large scripted code base.

Declarative:

stage('Production Deploy') {
  when {
    branch 'master'
  }
  steps {
    sh 'kubectl rollout status deployment/project-deployment' 
  }
}
stage('Development Work') {
  when {
    not {
      branch 'master'
    }
  }
  steps {
    echo 'Doing stuff in DEV branch'
  }
}

And with scripted:

if (env.BRANCH_NAME == 'master') {
  echo 'Doing stuff in PROD'
} else {
  echo 'Doing stuff not in PROD'
}

I'd probably do more in the DSL if the plugins I wanted to use were more "on board" with the pipeline thing. Seems like the bulk of the plugins (at least the ones our user community requests) were designed for "Freestyle" usage rather than pipeline usage ...frequently meaning I have to go read the plugin's source code to suss out which of the bits are usable in Pipeline mode. Usually it's just quicker/easier to use the OS-native tools via an sh() statement than to try to do things The Right Way™

This is doubly so when the reason I'm dicking with Jenkins-managed jobs is that the primary customer for the devops tooling keeps harping on "we need to eat our own dogfood". I find myself struggling with "why would I waste already over-committed time wrapping shit in Jenkins when I can just use the AWS CLI directly?" Adding the pipeline layer doesn't really seem to be any more "infrastructure as code" than directly maintaining CFns and CFn parameter-files in the relevant git projects. Maybe there's something "too obvious" that I'm missing. :p

I'll admit, though, that I'm a curmudgeon. To me, when you aggregate the knowledge required to use all of these "simplified" tools, you actually require more in the way of specific knowledge than if you just did things "the hard way" (see: all the various simplified markup languages like Markdown, Textile ...and then all of their variants). But, I'm looking at these things from the standpoint of the person charged with fielding and operating all the tools rather than our users who typically only use one or two of the tools (frequently after someone like me has created the further-simplifying automation for them).

Oh. Wow. Wasn't really meaning for that to turn into a rant! =)

If it comes to it, you can invoke plugins manually in the DSL, like these for coverage/lint reports:

step([
    $class: 'CoberturaPublisher',
    autoUpdateHealth: false,
    autoUpdateStability: false,
    coberturaReportFile: '**/cobertura-coverage.xml',
    failUnhealthy: false,
    failUnstable: false,
    maxNumberOfBuilds: 0,
    onlyStable: false,
    sourceEncoding: 'ASCII',
    zoomCoverageChart: false
])

step([$class: 'CheckStylePublisher',
    canRunOnFailed: true,
    defaultEncoding: '',
    healthy: '100',
    pattern: 'eslint.xml,**/nsp.xml',
    unHealthy: '90',
    useStableBuildAsReference: true
])

True... But that's part of what I was alluding to when saying you frequently need to dig through the GUI-oriented plugins' source to find the names of the knobs to turn via the DSL.
