DEV Community

How-To: Setup a unit-testable Jenkins shared pipeline library

Adrian Kuper on January 05, 2019

(Originally posted on loglevel-blog.com) In this blog post I will try to explain how to setup and develop a shared pipeline library for Jenkins, t...
Tim Preston

This is a great article. My only criticism is that I wish I had discovered it a couple of months ago, before I coded absolutely everything in vars. I'm now in the middle of a massive refactor and it's all your fault. :-)

Max

Uh oh, I've done exactly the same thing, because I really can't wrap my head around the Java package structure; I'm not a Java guy. It makes a lot of sense to me to just have everything in /vars as single Groovy scripts and call them as steps. How did the refactor go? Was it worth it?

Tim Preston

It was absolutely worth it (for me) because I'm a big proponent of unit testing. It just bugged me to have a huge repository of "untested" code that my whole enterprise is depending on. Our shared library has something like 80% unit test coverage right now (post refactor) and a deployment pipeline that makes sure that it doesn't regress.

Of course, your mileage will vary. I'm certainly not going to judge you if you're comfortable having all of your code in vars.

Max • Edited

I agree completely in theory; I was just saying to the bosses today that

yes, [onboarding apps to the shared pipeline] is the priority, but the more apps we onboard, the more critical the robustness and reliability of the pipeline is. The more we are relying on it, the more we need to be able to maintain it without breaking it.

Is there a general pattern that develops when doing a refactor like this?

Whenever I start feeling like I'm some sort of jenkins guru, I come across an article like this that puts me in my place.

Adrian Kuper

😅 Thanks for your feedback 👍

Steve Wall

Thanks for the article Adrian!
Nice implementation. One piece of feedback: in the test case, it accepts any string as valid input. I was wondering if it would be better if the test case asserted the specific string:

verify(_steps).sh("echo \"building some/path/to.sln...\"")

Adrian Kuper

Good catch, it probably would. Looks like I was a little bit lazy at the time of writing and just decided to go with anyString() ;) Thanks for the feedback

Hung Luong

Just what I'm looking for!

I have a question: do you configure the library as global or non-global? I ran into some access control issues with Script Security while setting this up as a non-global library, so I'm assuming it should be global?

If that's the case, then it probably should be mentioned in the article :)

Adrian Kuper • Edited

Interesting. I have indeed only tested this with a global library. I will look into this as soon as I have some spare time. If you would like to help, you can open an issue on GitHub.

Thanks for the feedback :)

Edit: Finally got around to doing some testing, but could not reproduce your problems. The example Jenkinsfile worked for me with both a global and a folder-specific library (which by default is untrusted and runs inside the Groovy sandbox). Maybe you added some non-whitelisted classes, which caused your access control issues?

Hung Luong

Hi, thanks for getting back :)

The error I got was on calling the static method of ContextRegistry, though. I'm not entirely sure, to be honest; it could be some weird versioning thing or something else. I'll try to open an issue when I get around to testing it further.

Levi

When I create a class, I pass in the workflow script context in order to use steps.

Myclass example = new Myclass(this)

This allows me to use the pipeline steps in my class. Did you avoid this because it's hard to test?

I personally feel like it might be a lot of technical debt to add every pipeline step I use to that interface you are using. I probably use 40-100 different steps in our company.

Adrian Kuper

Jep, the main reason is testability. Outside of a Jenkins pipeline (e.g. during regular unit tests), the steps can't be called that easily. The interface therefore allows for mocking of the step calls. Adding all of them (or at least the ones you use) to the interface is busy-work for sure, but not technical debt in my opinion. In my experience the most-used steps by far are either bat or sh, which means that the IStepExecutor interface should not get that big.

But obviously it's different in your case, and having to mock 40-100 steps would mean a lot of boilerplate that no one really wants to write. You have to decide for yourself whether the advantage of unit tests is worth the effort. Alternatively, you could have a look at this blog post, which details a different approach to testing Jenkins Shared Libraries using jenkins-spock (no IStepExecutor interface needed 😉).
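For readers weighing that trade-off: the mocking doesn't strictly require Mockito. Below is a minimal, dependency-free sketch of the same idea in plain Groovy. The IStepExecutor signatures follow the ones quoted elsewhere in this thread, but RecordingSteps and MavenBuilder are hypothetical names invented for this example, and the steps object is passed in directly instead of going through ContextRegistry, purely for brevity:

```groovy
// IStepExecutor as in the article; RecordingSteps is a hand-rolled fake
// that stands in for a Mockito mock and records every step call.
interface IStepExecutor {
    int sh(String command)
    void error(String message)
}

class RecordingSteps implements IStepExecutor {
    List<String> calls = []
    int shStatus = 0   // exit code the fake "sh" step should report

    int sh(String command) {
        calls << "sh: ${command}".toString()
        return shStatus
    }

    void error(String message) {
        calls << "error: ${message}".toString()
    }
}

// Library code under test, wired against the interface instead of Jenkins.
class MavenBuilder implements Serializable {
    void build(IStepExecutor steps) {
        if (steps.sh("mvn package") != 0) {
            steps.error("Maven build failed.")
        }
    }
}

// Exercise the failure path without a Jenkins instance.
def steps = new RecordingSteps(shStatus: 1)
new MavenBuilder().build(steps)
assert steps.calls == ["sh: mvn package", "error: Maven build failed."]
```

Whether you hand-roll a fake like this or use Mockito, the point is the same: the interface is the seam that lets library classes run outside Jenkins.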

mccabep67

Thanks for the article Adrian, I have a question:
Since we are keeping the vars scripts as clean as possible, do you know of any examples of how to perform more complex operations within the "testable" Groovy sources?
Your example MsBuild class is a good start, but I'm struggling to find examples of an approach for performing Git checkouts, builds, etc. I'm new to JSLs, so I quite possibly have overestimated what can be wrapped in these classes.

We have tons of repeated code in our Jenkinsfiles, and I have been tasked with implementing shared libraries. If I can use this to create testable code, that would be very cool...

Many thanks.

Adrian Kuper

Hi, thanks for your comment :D Let me try to answer some questions:

  • If you are using declarative pipeline scripts, the git checkout should be handled for you and does not have to be part of your library.
  • Jenkinsfiles are essentially a collection of command-line calls to be executed on the build agent. Doing a build therefore depends on the technology stack used, e.g. building a .NET Framework project would include a command-line call to MSBuild, while a Rust program would call cargo build. The latter could look somewhat like this inside the library:
class CargoBuilder implements Serializable {

    void build() {
        IStepExecutor steps = ContextRegistry.getContext().getStepExecutor()

        // on windows agents use steps.bat("...")
        int returnStatus = steps.sh("cargo build")
        if (returnStatus != 0) {
            steps.error("Cargo build failed.")
        }
    }
}

Inside your Jenkinsfiles, instead of writing cargo build each time, you could just write build (if your var script is called build.groovy, of course). This is a very basic example (and only saves a single word), but in a real-world scenario the library can streamline the build process inside your organization and abstract away some of the more complicated command-line calls (e.g. by setting specific build parameters by default, which in turn can be omitted inside the Jenkinsfile).

  • I would keep each var script focused on a single task, e.g. building, testing, running metrics, deploying artifacts, etc. Let's assume we wrote var scripts for all of these tasks: ex_build.groovy, ex_test.groovy, ex_metrics.groovy and ex_deploy_artifacts.groovy. Any Jenkinsfile, which before might have been very long and filled with strange parameters, paths and commands, could (more or less) be reduced to this:
@Library('my-library@1.0') _

pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                ex_build
            }
        }
        stage('test') {
            steps {
                ex_test
            }
        }
        stage('metrics') {
            steps {
                ex_metrics
            }
        }
        stage('deploy artifacts') {
            steps {
                ex_deploy_artifacts 'some/path/to/artifact'
            }
        }
    }
}

(of course, this is still simplified).

Imo, the whole challenge of writing Jenkins shared libraries is collecting and wrapping all the command-line calls necessary for your builds and making fitting abstractions that simplify your Jenkinsfiles and streamline your organization's build process. In the beginning this can be very difficult. (To be honest, before I was tasked with setting up Jenkins pipelines, I had almost no clue which parts were involved in building our software. For me, "a build" was just pressing the green play button in Visual Studio 🙈).

So, in your case, I would probably take a good, long look at your current Jenkinsfiles and write down which tools (e.g. MSBuild, cargo, npm, dotnet, maven, gradle) are used and how (parameters passed to the tools, which commands repeat between Jenkinsfiles, etc.). As soon as you have a good understanding of the current build process, you can start to extract the command-line calls of these tools into your library by creating fitting var scripts (e.g. ex_build.groovy), which in turn call a unit-tested "build" class. This class simply wraps the command-line call to the build tool (with steps.sh("<the command>"), or, if you are using Windows-based build agents, steps.bat("<the command>")). That's basically it.
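As a rough sketch of that wiring (the names ex_build.groovy and Builder, and the exact ContextRegistry call, are placeholders based on the article's pattern, not verbatim from it), the var script stays a thin shim over the unit-tested class:

```groovy
// vars/ex_build.groovy -- illustrative sketch only
def call(String projectPath) {
    // hand the real pipeline steps over to the library classes
    ContextRegistry.registerDefaultContext(this)

    // all actual logic lives in a unit-tested class under src/
    new Builder().build(projectPath)
}
```

Because the script only registers the context and delegates, there is almost nothing left in vars that would need (or resist) testing.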

I hope this wall of text clears things up a bit 😅 If not, don't be shy to ask any new questions 👍

mccabep67 • Edited

Wow, what a fantastic reply. You do make perfect sense. I will be following your advice tomorrow, starting by trawling through our Jenkinsfiles and many, many bash scripts to see what we can DRY up (so to speak).
Thank you for expanding with more examples and so much detail. It's also nice to hear you started from a similar place.

Many thanks again

Regards

Patrick

Adrian Kuper • Edited

You do make perfect sense.

I don't hear that very often 😄 Glad I could help

Maximilian Konter • Edited

Great article! But there's one thing I could not wrap my head around: how do you mock steps.node? I can't figure out how to test my sh command, which is inside of a node...

Adrian Kuper

You are using a Scripted Pipeline, right? Not sure how you could mock the node step (or any step that uses closures, for that matter), as I'm only familiar with Declarative Pipelines. Sorry 🙈

Keep in mind that the goal of this article is to test your Jenkins Pipeline Library. Not the pipeline itself (aka the Jenkinsfile).

Maximilian Konter • Edited

Thank you for the reply.
Yes, I'm using a Scripted Pipeline, but I use my shared library to do some things on nodes. Anyway, I figured it out just now.

The following is my current code to mock node("<name>") {}:

// StepExecutor.groovy
@Override
void node(String name, Closure closure) {
    this._steps.node(name, closure)
}
// SomeTest.groovy
@Before
void setUp() {
    doAnswer({ invocation ->
        String name = invocation.getArgument(0)
        Closure closure = invocation.getArgument(1)

        println(name)
        closure.call()
    }).when(_steps).node(anyString(), any())
}

My problem was figuring out what signature the node method had. This was caused by the fact that I did not know that node("<name>") {} is the same as node("<name>", {}).
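That equivalence can be demonstrated in plain Groovy without any Jenkins machinery; the node method below is just a hypothetical stand-in for the pipeline step:

```groovy
// A closure written after the argument list is passed as the method's
// last parameter -- node("x") { } and node("x", { }) are the same call.
def node(String name, Closure body) {
    return "on ${name}: ${body.call()}".toString()
}

def trailing = node("linux") { "inside" }     // trailing-closure form
def explicit = node("linux", { "inside" })    // explicit-argument form

assert trailing == "on linux: inside"
assert trailing == explicit
```

Once you see the step as an ordinary two-argument method, stubbing it with doAnswer and invoking the Closure argument (as in the snippet above this one) follows naturally.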

So for anyone wondering how to mock the closure methods, this is the way.

@kuperadrian maybe you could include this in your article to help anyone who is in need of mocking a closure :)

Marc Brugger

Hi Adrian

Thank you very much for the article.
I'm maintaining a shared library that I'd like to test more. I like your solution very much.

Some of my vars use the openshift client plugin. github.com/openshift/jenkins-clien...

openshift.withCluster() {
   openshift.withProject() {
      openshift.raw("project")
   }
}

I was looking for a way to mock the calls to the openshift variable, but couldn't find a way to do that.

Have you ever tried something like that? / Do you have an idea how to do that?

f3rdy

Hi Adrian,

I hope you're still reading here. I still have a problem understanding how the steps are mocked and used in your example. I've managed to rework the testing to move to Spock and get rid of all that Java stuff.

Now for the following: I created a ProjectVersionGetter class to encapsulate fetching the current version (which in turn is done by calling a Gradle task in the project). For simplification and testing, I return a hard-coded 3.0.0.0 using the following class:

class ProjectVersionGetter implements Serializable {
  private Map _configuration

  ProjectVersionGetter(Map configuration) {
    _configuration = configuration
  }

  void exec() {
    IStepExecutor steps = ServiceLocator.getContext().getStepExecutor()

    String version = steps.sh(true,"echo \"3.0.0.0\"")
    _configuration.put('projectVersion', version)
  }
}

The vars file looks like this:

def call(configuration) {
  ServiceLocator.registerDefaultContext(this)

  def projectVersionGetter = new ProjectVersionGetter(configuration)
  projectVersionGetter.exec()
}

So, I provide a configuration Map where the version should be kept. In my declarative pipeline, I've got a stage like this:

      stage("Fetch version") {
        steps {
          ex_getversion config
          script {
            currentBuild.displayName = "[${currentBuild.id} ${config.projectVersion ?: 'No version set!'}"
          }
        }
      }

My simple Spock test shall verify that projectVersion is set in the configuration map and, for the sake of simplicity, that its value is "3.0.0.0".

def setup() {
    _context = mock(IContext.class)
    _steps = mock(IStepExecutor.class)
    when(_context.getStepExecutor()).thenReturn(_steps)
    ServiceLocator.registerContext(_context)
  }

  def "Version is returned in configuration"() {
    when:
    def configuration = Defaults.loadDefaults([:])

    and: "ensure mandatory keys are set"

    Defaults.getMandatoryKeys().each {
      configuration.put(it, "dummy_value")
    }

    and:
    def projectVersionGetter = new ProjectVersionGetter(configuration)
    projectVersionGetter.exec()

    then:
    // verify(_steps).sh(anyString())
    configuration.containsKey('projectVersion') // FIXME: && configuration.get('projectVersion') == '3.0.0.0'
  }

The key is set, but to null. So, the main problem I have is with the line

    String version = steps.sh(true,"echo \"3.0.0.0\"")

So, I added to IStepExecutor and its implementation a function that returns stdout as a String, so I can retrieve the version from the "gradle call" (in the future):

interface IStepExecutor {
    int sh(String command)
    String sh(boolean returnStdout, String script)
    void error(String message)
}

The implementation is

    @Override
    String sh(boolean returnStdout, String script) {
        this._steps.sh returnStatus: true, returnStdout: returnStdout, script: "${script}"
    }

Can you, or someone point me to the mistake I made?

Thanks a lot,
ferdy

José Coelho

Great article Adrian!
My team has a huge shared lib, but we have 0 unit tests. I've been trying to find a way to test our library; I'm going to try and follow your examples.

Thanks a lot.

Adrian Kuper

Cool, glad I could help :D

murashandme

Thanks for the great article, Adrian! It's going straight into my bookmarks!

I have a huge shared library with zero tests, and every run of the deployment pipeline kills a small piece of my soul. Hopefully I'll get to cover it with unit tests.

Thanks for mentioning the Serializable thing; it is a pain for beginners.

Adrian Kuper

Jenkins pipelines can be exasperating indeed 🙈 Thanks for your comment 👍

Haridsv

Great guide! Have you tried getting code coverage numbers for the vars files?

Haridsv

I guess it doesn't matter in this approach, since the var files don't really have much logic, but I'm curious nevertheless.

Adrian Kuper

Yeah, I'm not too sure about how to test Groovy scripts in general, but testing anything in vars can be tough when Jenkins steps like sh are used. So, as you said, the general approach here is to keep the code inside vars as small and trivial as possible, while testing is done for "conventional" classes. Therefore I don't have any code coverage numbers for the vars stuff.

Wanderrful

FYI the build.gradle file should instead have this:

sourceSets {
    main {
        java {
            srcDirs = []
        }
        groovy {
            // all code files will be in either of the folders
            srcDirs = ['src/main/groovy', 'vars']
        }
    }
    test {
        java {
            srcDir 'src/test/java'
        }
    }
}

Without this, the classpath isn't correct when you try to run the tests, and the package path adds main.groovy... as a prefix to everything.

tueksta

It's a bit confusing how you use var/vars and resource/resources interchangeably. It would be good to call things the way they show up in the screenshots and on disk.

bouncingjack

This is a wonderful article!
I don't have a lot of Groovy/Java experience and this slow and methodical explanation is exactly what I needed!