Crafting Jenkins Pipelines: A Guide to Multi-Agent Orchestration

Introduction

In this article we will look at how to create a pipeline using Jenkins. We will begin by installing and setting up Jenkins on our machine, go over the basic architecture of Jenkins, set up Jenkins pipelines, configure Docker for running slave nodes (agents), and finally run multi-agent pipelines using Docker containers as agents. This article is inspired by iam-veeramalla's tutorial on Jenkins.

Overview

Even though you can follow this article using your local machine to host the Jenkins server instead of a remote VM, it is recommended to use a VM such as EC2, since organisations generally use a long-running machine for a smooth, collaborative CI/CD experience with Jenkins. So for this article we will run our Jenkins server on an EC2 instance.

Steps:

  • Create an EC2 instance in AWS

    • Create an AWS account and initiate an EC2 Instance

    • Choose the Ubuntu OS image and the t2.micro instance type, create a new key pair, and store the key pair somewhere safe and accessible; the rest of the settings can remain as they are.

    • Make sure the permissions for the key pair are set properly by running: chmod 400 yourkeypair.pem. This is required to protect the key, as we will be SSHing into our EC2 instance from our local machine using it.
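    • If you want to see what chmod 400 does before touching your real key, here is a quick sketch you can try on a throwaway file (the path is just an example, not your actual key):

```shell
# create a dummy file standing in for the key (hypothetical path)
touch /tmp/demo-key.pem
# owner read-only, no access for group/others -- what SSH expects for private keys
chmod 400 /tmp/demo-key.pem
# print the octal permission bits to confirm (stat -c is GNU/Linux syntax)
stat -c '%a' /tmp/demo-key.pem
```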

    • To ssh into the instance run:
      ssh -i /your-path-to-key-location/newKey.pem ubuntu@<your-IP>

    • Make sure to replace the path with the correct path to where you stored your key, and replace <your-IP> with the public IP of your EC2 instance.

  • Install Jenkins in the instance through the SSH terminal

    • Jenkins requires the following prerequisite to be installed on the host server: Java (JDK)

    • To install Java, run the following commands in the instance's SSH shell:

        sudo apt update                  # update the apt package index
        sudo apt install openjdk-11-jre  # install the OpenJDK 11 runtime
      
    • Verify the installation: java -version

    • Now, install Jenkins by running the following command:

    curl -fsSL https://pkg.jenkins.io/debian/jenkins.io-2023.key | sudo tee \
      /usr/share/keyrings/jenkins-keyring.asc > /dev/null
    echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
      https://pkg.jenkins.io/debian binary/ | sudo tee \
      /etc/apt/sources.list.d/jenkins.list > /dev/null
    sudo apt-get update
    sudo apt-get install jenkins
  • Enable inbound traffic for our instance

    • By default our EC2 instance will not have any inbound traffic enabled except SSH. This means that when we run an application on a specific port in our EC2 instance, we must explicitly allow external traffic to that port in order to reach the application (in this case, Jenkins, which we want to access from our local browser).

    • By default the Jenkins server runs on port 8080. To confirm which port Jenkins is using, run: ps -ef | grep jenkins
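    • The process listing can be noisy, so here is a small sketch of pulling the port out of it (the sample line below is illustrative; on the server, pipe the real ps -ef | grep jenkins output instead):

```shell
# simulate one line of `ps -ef` output for the jenkins process (illustrative sample)
echo 'jenkins  1234  1  java -jar /usr/share/java/jenkins.war --httpPort=8080' |
  grep -o 'httpPort=[0-9]*' | cut -d= -f2   # prints: 8080
```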

    • Now, go to AWS Console > Instance Home Page > Security > select the current security group

    • Click on Edit inbound rules to add the new incoming traffic rule

    • Click on Add rule and set the following:
      Type = Custom TCP, Protocol = TCP, Port Range = 8080, Source = Anywhere (0.0.0.0/0), then save

    • Once done, you should be able to access the Jenkins UI from your browser at http://yourip:8080

  • Update the password and setup Jenkins

    • In the UI, you will be shown a path on your instance where you can find the initial password set for Jenkins by default, using which you can create your new password

    • Run sudo cat /path to view the initialAdminPassword

    • On the next page, select Install recommended plugins and set up admin user credentials to start using Jenkins

  • Understanding Jenkins architecture

    • Jenkins has a master-slave architecture; the master and slaves may or may not run on the same machine.

    • The master is the Jenkins controller, responsible for managing the Jenkins instance, including scheduling jobs and managing the workers, known as agents

    • Workers/agents are responsible for the actual execution, meaning each agent is scheduled by the master node to run jobs, i.e. Jenkins build processes

    • A general practice for offloading work is to create separate nodes for the master and the workers. This also provides other benefits, such as isolation and reduced dependency conflicts between projects/teams

    • These nodes can be anything: local machines, VMs, or containers. A further improved option for running worker nodes is to use Docker containers

    • With Docker containers we improve resource utilisation by far compared to running a standalone EC2 instance for each team/project

  • Setup docker agents for Jenkins

    • There are a few approaches to set up Docker agents in Jenkins: create the containers on the same instance as the master, create a separate instance just for running the worker nodes, or run the containers locally (not recommended)

    • We will use the same instance for running the Docker slaves in this demo, so we need to set up Docker on the EC2 instance

    • Install Docker by running this command in the SSH shell: sudo apt install docker.io

    • Now we need to grant Jenkins access to the Docker daemon by running:

    # run one by one
    sudo su -                    # switch to the root user
    usermod -aG docker jenkins   # add the jenkins user to the docker group
    usermod -aG docker ubuntu    # add the ubuntu user to the docker group
  • sudo su - takes you to the root user environment, from where you add jenkins to the docker group using usermod. Now switch to the Jenkins user environment with su - jenkins and verify that the Jenkins user can access Docker by running docker run hello-world; then you can switch back to the ubuntu user with su - ubuntu

  • Now, to confirm the changes are applied to our Jenkins instance, let's restart Jenkins by going to this URL in the browser:
    http://<ec2-instance-public-ip>:8080/restart

  • And log in to Jenkins again

  • Setup docker plugin

    • To run docker images from a pipeline we need a plugin called Docker Pipeline to be installed in Jenkins.

    • Go to Manage Jenkins > Manage Plugins

    • In the Available tab, search for "Docker Pipeline".

    • Select the plugin and click the Install button.

    • If needed, restart Jenkins after the plugin is installed.

  • Create our first pipeline

    • Now let's look at how to create our first pipeline

    • Click on New Job > select Pipeline and give it a name > Apply. This will take you to your job dashboard, where you can configure and build your pipeline

    • Scroll down to the pipeline Definition section

    • Here we have two options to define our pipeline: fetch an existing definition (a Jenkinsfile) from an SCM like Git, hosted somewhere such as GitHub, or write the pipeline right there. To keep it simple, we will write our pipeline script inline by selecting Pipeline script from the Definition dropdown and starting our script

    • To begin with, we will create a simple pipeline using any available node on the host system

    • Paste the following code to the script section:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    echo 'Building on the master node...'
                }
            }
            stage('Test') {
                steps {
                    echo 'Testing on the master node...'
                }
            }
            stage('Deploy') {
                steps {
                    echo 'Deploying from the master node...'
                }
            }
        }
    }
  • The above script is written in a language called Groovy, which is used for scripting in Jenkins

  • The pipeline block defines the entire Jenkins Pipeline

  • The agent any directive specifies that the pipeline can run on any available agent. In our case this will be the master node, because we haven't set up any worker nodes yet, so agent any will use the only available node: the master, aka the built-in node

  • The stages block contains multiple stage blocks, each representing a different phase of the build process (Build, Test, Deploy). This is where you define the actual work of your jobs, such as checking out a Git repo, building, static code analysis with tools like SonarQube, unit tests, Docker image builds, etc.

  • Each stage block has a steps block that contains specific steps to execute during that stage, such as in this case printing messages with echo.

  • A key feature to look out for is Pipeline Syntax, which helps you figure out the correct syntax for your Groovy script; you can play around with this feature to learn and improve your Jenkins pipeline writing skills.
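  • For instance, the generator can produce step snippets like the ones below (a sketch: the shell command and repository URL are placeholders, not part of this demo):

```groovy
steps {
    // run a shell command on the agent
    sh 'uname -a'
    // check out a Git repository (URL and branch are placeholders)
    git url: 'https://github.com/example/repo.git', branch: 'main'
}
```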

  • Save the changes; this will redirect you to the newly created job's dashboard, from where you can run, configure, and monitor the job.

  • To run the job, click on Build Now in the left panel; this will start a new build of your job. You can see the history and currently running builds in the bottom left, from where you can click on the build you want to explore; this will take you to that specific build's details page

  • To see the console output for the build, select the Console Output option on the left; for this job we will see each stage running with its respective echo statement

  • Now we have successfully run our first Jenkins build pipeline

  • Create a docker-agent pipeline

    • In our first pipeline we ran our task using the built-in node, which is our master agent. This is not a recommended way of running jobs in Jenkins for several reasons, such as performance, security, and maintainability. You can read more about it in the Jenkins documentation

    • Both the Jenkins community and best practice guidelines from CI/CD experts strongly recommend using dedicated agent nodes or docker-agents to achieve isolation, flexibility, and scalability for your CI/CD pipelines.

    • We will look into how to use Docker images as build agents. For this task, everything except the pipeline script remains the same as in our first pipeline's configuration, so I will just explain the differences in the script, plus some additional setup we need to do on the host machine.

    • The idea of using a Docker container as a build agent is that you get a completely isolated environment for your build tasks. This helps solve the issues mentioned earlier without requiring a separate computing machine to achieve isolation, which is the idea of containers in general

    • For your task to run on a Docker agent we need to install Docker on the host machine and make sure the Jenkins user environment has access to the Docker daemon

    • To install Docker, run:

    sudo apt update
    sudo apt install docker.io
  • Now, to give Jenkins access to the Docker daemon, switch to the root user by running: sudo su -

  • Next, add the Jenkins user to the docker group (the group of users with access to the Docker daemon): usermod -aG docker jenkins

  • Now our Jenkins server is ready to run Docker containers as agent nodes; let's look at the script for our pipeline:

    pipeline {
      agent {
        docker { image 'node:16-alpine' }
      }
      stages {
        stage('Test') {
          steps {
            sh 'node --version'
          }
        }
      }
    }
  • In this pipeline we are using the Docker image node:16-alpine as our agent; this is configured under the agent block, as you can see above

  • What happens here is that during each build, Jenkins creates a container from the provided image, runs every step of every stage in that same container, and once the build stages are completed it automatically stops and removes the container from the host.

  • To test this, you can run docker ps -a on the host machine while the build is happening in Jenkins to view the containers created for the job; once the build is completed, you will see that the containers are exited and removed immediately after the build tasks finish.

  • This can also be seen in the end of the console output of the job.
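  • The docker agent block accepts more options than just image; for example, args passes extra flags to the underlying docker run. A small sketch (the mounted cache path is a hypothetical example, not something this demo requires):

```groovy
pipeline {
    agent {
        docker {
            image 'node:16-alpine'
            // extra flags for `docker run`; here we mount a host directory
            // as an npm cache (hypothetical path)
            args '-v /tmp/npm-cache:/root/.npm'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'npm --version'
            }
        }
    }
}
```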

  • Creating a multi-agent pipeline

    • So far we've seen how to use a single agent for all the stages in a pipeline, but there may be scenarios where you want to run different stages with different agent configurations, such as a stage requiring a different OS, or two stages requiring two different versions of the same dependency.

    • For such scenarios Jenkins supports multi-agent stages: set the global agent to none and specify a distinct agent for each stage. Here the global agent is simply the agent specified directly under the pipeline block, which applies to all stages of the pipeline

    • By setting this to none we tell Jenkins not to use any common agent and to use only the agent specified within each stage block. Let's look at the script for a better understanding

    pipeline {
      agent none
      stages {
        stage('Test maven') {
          agent {
            docker { image 'maven:3.8.1-adoptopenjdk-11' }
          }
          steps {
            sh 'mvn --version'
          }
        }
        stage('Test node') {
          agent {
            docker { image 'node:16-alpine' }
          }
          steps {
            sh 'node --version'
          }
        }
      }
    }
  • In this example we have two stages with two different agents: the first stage runs in a Docker container with the base image maven:3.8.1-adoptopenjdk-11, and the second stage runs on node:16-alpine

  • This can be useful when you want to write stages for different stacks in the same pipeline

Summary

In this article we've covered an overview of Jenkins, including the basic architecture of Jenkins, agents and nodes, running Docker containers as slave machines, and creating multi-agent pipelines. Even though the article is a bit lengthy, I've tried to explain the details behind each step we followed. Using this knowledge, we can next create automated pipelines using webhooks to trigger builds, and create actual CI/CD stages such as building, testing, SCA, and deployments.