Lately, I’ve been thinking about how to effectively implement multiple Pipelines from a monolithic multibranch repository.

As far as I can tell, there are two options:

  • create multiple Jenkinsfiles and choose which one to run in the job config
  • use a shared Pipeline library

The first option is probably the sane one, but it ties the version of the Pipeline you’re running to whatever is on master unless you consistently merge master in before testing. This is bothersome, and I wanted to try the shared Pipeline library for fun.
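For completeness, the first option usually boils down to keeping several Jenkinsfiles side by side in the repository and pointing each job’s Script Path at one of them (the file names here are just examples):

myRepo/
├── Jenkinsfile.build
├── Jenkinsfile.deploy
└── src/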

Creating the Pipeline library

A Jenkins Pipeline library is pretty straightforward; the directory structure should look something like this:

mySharedLibrary/
├── resources
└── vars
    └── somePipeline.groovy

2 directories, 1 file
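For context, each .groovy file under vars/ becomes a global step named after the file, with its call() function as the entry point. A minimal sketch (sayHello is just an example name):

// vars/sayHello.groovy

// Called from a Jenkinsfile as: sayHello('world')
def call(String name = 'human') {
  echo "Hello, ${name}!"
}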

Managing your resources

The resources folder should, as its name suggests, contain any resources you may need while running your Pipeline. A caveat of storing your resources within your shared library is that any scripts you want to run within your resources need to be copied to a temporary or known directory beforehand. This is… a little tedious, so what we can do is define a function to bulk copy our resources to a known directory, and then work with the assumption that our resources exist in ${WORKSPACE}/resources.

A sample function which achieves this (*nix based):

// vars/initResources.groovy

def call(Map arguments) {
  // Shared libraries are checked out alongside the workspace, under
  // ${WORKSPACE}@libs/<library name>
  def resourcesDirectory = "${WORKSPACE}@libs/mySharedLibrary/resources"

  // Copy everything into ${WORKSPACE}/resources so later steps can
  // rely on a known path
  sh "mkdir -p resources/"
  sh "yes | cp -vr ${resourcesDirectory}/* resources/"
}
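If you only need the odd file, the built-in libraryResource step is an alternative to bulk copying; a minimal sketch, assuming the library bundles a script at resources/scripts/setup.sh (a hypothetical path):

// Reads the bundled resource as a string, writes it into the
// workspace, and runs it
def scriptText = libraryResource 'scripts/setup.sh'
writeFile file: 'setup.sh', text: scriptText
sh 'bash setup.sh'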

Note: I’m unsure how this works when your pipeline is delegated across nodes, as AFAIK shared Pipeline libraries are only loaded onto the master.

Creating your Pipelines

Within this model, your pipelines are defined in vars/${pipelineName}.groovy, since each one is now a function. The general skeleton looks like this:

// vars/somePipeline.groovy

def call(Map arguments) {
  def globalVariable

  pipeline {
    agent {
      label 'master'
    }

    stages {
      stage('Pre-build') {
        steps {
          initResources()

          // Declarative Pipelines need a script block for arbitrary
          // Groovy, such as assigning to variables
          script {
            globalVariable = sh(script: 'echo $WORKSPACE', returnStdout: true).trim()
          }
        }
      }

      stage('Unit tests') {
        steps {
          // Run some unit tests here or something
          echo 'Running unit tests...'
        }
      }
    }

    post {
      always {
        // Do some cleanup here, or archive artifacts, etc
        echo 'Cleaning up...'
      }
    }
  }
}

Rinse and repeat the above process to create more Pipelines as needed!
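Since each entry point takes a Map, your pipelines can be parameterised too; a minimal sketch, assuming a hypothetical testCommand argument:

// vars/anotherPipeline.groovy

// Example invocation from a Jenkinsfile:
//   anotherPipeline(testCommand: 'make test')
def call(Map arguments = [:]) {
  def testCommand = arguments.testCommand ?: 'echo "no tests configured"'

  pipeline {
    agent any

    stages {
      stage('Test') {
        steps {
          sh testCommand
        }
      }
    }
  }
}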

Calling your Pipelines

Within your Jenkins setup, you can load it either as a global library or as a folder-level (local) library. As the name suggests, a global library is available to every job in Jenkins and isn’t subject to the usual Groovy sandbox restrictions. Really, the type of library you choose depends on your needs and whether you plan on using the library across multiple jobs.

Since I’m not arsed to add screenshots and an explanation when a how-to page already exists, have a look at the official Jenkins documentation on shared libraries to learn how to add a Pipeline library.

Within your Jenkinsfile, you can call your pipeline like so:

// Jenkinsfile

// If you want to load a specific version of the Pipeline, just do something like this
// @Library('my-shared-library@feature-branch') _

switch (myCondition) {
  case cond1:
    somePipeline()
    break
  case cond2:
    anotherPipeline()
    break
  default:
    echo "Couldn't find a pipeline to run for ${myCondition}"
}
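In a multibranch job, the condition you dispatch on will typically come from the environment; a minimal sketch, assuming you switch on the branch name:

// Jenkinsfile

@Library('my-shared-library') _

// BRANCH_NAME is set automatically by multibranch Pipeline jobs
switch (env.BRANCH_NAME) {
  case 'master':
    somePipeline()
    break
  case ~/feature\/.*/:
    anotherPipeline()
    break
  default:
    echo "Couldn't find a pipeline to run for ${env.BRANCH_NAME}"
}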