diff --git a/_posts/2024-05-15-first-gsoc.md b/_posts/2024-05-15-first-gsoc.md
new file mode 100644
index 0000000..9bf2945
--- /dev/null
+++ b/_posts/2024-05-15-first-gsoc.md
@@ -0,0 +1,52 @@
+---
+layout: post
+title: "[GSoC] I was accepted into Google Summer of Code!"
+date: 2024-05-15
+author: Marcelo Mendes Spessoto Junior
+author_website: https://marcelospessoto.github.io/
+tags: [jenkins, ci, kcov, code coverage]
+---
+
+# My first steps of community bonding in Google Summer of Code
+I've been accepted into the GSoC program for the kworkflow project this year! My proposal is to implement a self-owned server that will host a CI pipeline in Jenkins (replacing the current GitHub Actions pipeline) and manage data telemetry. Let's see what I've done in the first two weeks.
+
+## Studying the Jenkins capabilities
+Jenkins is an open-source automation tool, used especially for Continuous Integration and other DevOps practices. It is a solid, consolidated project that offers a wide variety of plugins, allowing easier integration with different automation infrastructures. It is structured as a distributed system with a central Jenkins controller, which manages the automation requests and schedules them across its computing nodes (known as agents).
+
+Therefore, one of the most important first tasks is to implement a first Jenkins agent to handle the CI tasks. I've gone through the [official Jenkins agent tutorial](https://www.jenkins.io/doc/book/using/using-agents/). The agent worked fine, but a bit more refining will be done in the coming days.
+
+
+
+Another major aspect of the initial Jenkins setup is integrating the Jenkins pipeline with [the GitHub repository](https://github.com/kworkflow/kworkflow), so it can receive webhooks and build Pull Requests and commits accordingly. I had already done this before the start of the Community Bonding period, but I will expand on it in the next section.
+
+## The GitHub Branch Source Plugin
+
+One of the most used Jenkins plugins is the GitHub Branch Source Plugin. It abstracts, in Jenkins, the process of setting up the controller to listen for webhooks sent from GitHub.
+
+To use the plugin correctly and implement the functionality, one first needs to set up a GitHub App on GitHub. The App is responsible for receiving subscriptions from other services (in our case, Jenkins), granting them chosen permissions in a repository, and sending them webhooks for certain events in that repository.
+
+It is a simple process once you get it: create the GitHub App, select permissions regarding Checks and Pull Requests, and set the webhook URL. It is also very important to generate the App's private key and download it, so it can be given to Jenkins and used to authenticate as the GitHub App.
+
+In Jenkins, create a Multibranch Pipeline. With the Branch Source Plugin installed, VCS host integration options will be displayed in the pipeline settings. Then, provide the required branch source information, including the private key.
+
+I've done these steps in my fork of kworkflow. Then, following the maintainers' suggestions, I implemented the first required task for the pipeline: code coverage.
+
+## Implementing a code coverage stage for the pipeline
+
+When the Jenkins pipeline was first associated with the repository, it didn't detect many branches. This is because the pipeline is set to interact only with branches containing a **Jenkinsfile** in the root directory, the file containing all the pipeline steps to be executed. This way, the CI pipeline lives in the project repository itself, open to the general public.
+
+
+
+This is the starting point for developing the basic pipeline for code coverage generation.
+
+I started by understanding how kworkflow generated its code coverage using GitHub Actions. The [kcov](https://github.com/SimonKagstrom/kcov) tool is used, and the output is then integrated with Codecov. By reading its documentation, I found out that kcov also generates output in the Cobertura XML format, which can then be parsed by Jenkins using the Cobertura Plugin. Finding out how to use the cobertura step in the Groovy-scripted Jenkinsfile was not that hard, since Jenkins offers a Snippet Generator.
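+
+As a sketch of how these pieces fit together, a coverage stage in a Jenkinsfile could look like the snippet below. The agent label, paths, and kcov flags are illustrative, not the final kworkflow setup:
+
+```groovy
+pipeline {
+    agent { label 'kw-coverage' } // hypothetical agent label
+    stages {
+        stage('Code Coverage') {
+            steps {
+                // run the test suite under kcov; output paths are illustrative
+                sh 'kcov --include-path=src coverage_out ./run_tests.sh'
+            }
+        }
+    }
+    post {
+        always {
+            // parse the Cobertura XML report emitted by kcov
+            cobertura coberturaReportFile: 'coverage_out/**/cobertura.xml'
+        }
+    }
+}
+```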
+
+
+
+[The initial Jenkinsfile](https://github.com/MarceloSpessoto/kworkflow/commit/36b7b40ea32d5c09fbb5246839af459032b43fa4) was then finished, solving the code coverage requirement in general.
+
+There are still some problems to address soon. First, I will ensure the Docker agent is set up in the most efficient and scalable way possible. Then, I will also improve the modularization of the pipeline steps for the code coverage job.
+
+Another important fix to be addressed is related to vm_test and signal_manager_test, which failed in the pipeline and had to be "suspended" to validate the overall pipeline. I highly suspect this is caused by a missing dependency in the pipeline environment, and I hope to fix it this month.
+
+This is what I did in the first two GSoC weeks. My original schedule had a bigger emphasis on studying/setting up virtual and physical hosts. However, after some discussions with the maintainers, we've decided to focus primarily on replicating the current GitHub Actions pipeline and validating it.
diff --git a/_posts/2024-05-19-implementing-jenkins-as-code.md b/_posts/2024-05-19-implementing-jenkins-as-code.md
new file mode 100644
index 0000000..40b1d57
--- /dev/null
+++ b/_posts/2024-05-19-implementing-jenkins-as-code.md
@@ -0,0 +1,64 @@
+---
+layout: post
+title: "[GSoC] Implementing Jenkins as Code"
+date: 2024-05-19
+author: Marcelo Mendes Spessoto Junior
+author_website: https://marcelospessoto.github.io/
+tags: [jenkins, ci, CaC]
+---
+
+# Applying the Jenkins as Code paradigm
+
+In my first two GSoC weeks, I dealt with a basic code coverage pipeline and implemented it on a
+bare-metal infrastructure. My next expected steps, according to my project schedule, were to start
+implementing the virtual machine and physical device agents. It turns out, however, that this task may be
+delayed a little, so I can focus on polishing the Jenkins pipeline foundation.
+
+As I've immersed myself in the Community Bonding period, properly porting the complete current GitHub
+Actions CI to Jenkins has emerged as the more important task. From that, we will have a nicer environment
+to plan and develop the new testing workflow. Also, the dummy agent implementations can be seamlessly folded
+into the development period, when I will direct all my efforts at this specific issue.
+
+## Applying CaC
+
+Configuration as Code (CaC) is a paradigm where the configuration of an application is described as "code",
+such as YAML files. This way, one can easily automate the deployment of an infrastructure by using the
+code instead of setting things up manually.
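+
+For instance, with the Jenkins Configuration as Code plugin (covered below), a minimal YAML fragment could look like this (values are illustrative):
+
+```yaml
+jenkins:
+  systemMessage: "Configured automatically by JCasC"
+  numExecutors: 0  # do not run builds on the controller's built-in node
+```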
+
+Doing that for the kworkflow Jenkins pipeline would bring many benefits. Most of the CI configuration
+and plugin settings could be stored on a VCS host, keeping all the DevOps work preserved and ready
+for deployment on any bare-metal network.
+
+### First step: setting a Docker Compose environment
+
+First of all, it is interesting to replace the current bare-metal Jenkins install with a containerized
+service. This makes the service deployment easier and also lets one bake a specific Jenkins Configuration
+as Code setup (we'll come to that really soon) and plugin set into the image.
+
+Also, this enables the use of Docker Compose to orchestrate the automated deployment of the Jenkins servers
+alongside its Jenkins agent container.
+
+The only important configuration I needed in the Dockerfile was installing the necessary Jenkins plugins.
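+
+As a rough sketch, the official Jenkins image ships the `jenkins-plugin-cli` helper for exactly this; the plugin list below is illustrative, not the exact kworkflow set:
+
+```dockerfile
+FROM jenkins/jenkins:lts-jdk17
+# install plugins at image build time (illustrative plugin list)
+RUN jenkins-plugin-cli --plugins \
+    configuration-as-code \
+    job-dsl \
+    github-branch-source \
+    ssh-slaves
+```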
+
+The full setup can be found [here](https://github.com/MarceloSpessoto/jenkins-kw-infra).
+
+### Using Jenkins plugins for CaC deployment
+
+After setting up the basic docker environment, my next steps would be to configure Jenkins as Code for immediate configuration of newly deployed Jenkins containers.
+
+The core Jenkins plugin for CaC is JCasC (Jenkins Configuration as Code). It lets you write the configuration settings of your Jenkins server in a simple YAML file.
+
+It is very simple to figure out how to write the configuration file. You can browse their [example configuration samples](https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos), and you can also export the YAML from a running Jenkins server.
+
+By using their example configurations as a reference and by exporting the configuration from a Jenkins server I had previously set up, I was able to deliver a basic Jenkins configuration for kworkflow's needs.
+
+The most important configuration, however, was allowing the automated setup of the Jenkins credentials. I've managed to achieve that by mounting the credential files (private keys, etc.) into the Docker containers and loading them directly from the JCasC configuration file.
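+
+A hedged sketch of what such a credential entry can look like in the JCasC YAML (ids and names are illustrative; `${agent_ssh_key}` is resolved by JCasC from a secret source, e.g. a file of that name mounted under the directory the `SECRETS` environment variable points to):
+
+```yaml
+credentials:
+  system:
+    domainCredentials:
+      - credentials:
+          - basicSSHUserPrivateKey:
+              scope: GLOBAL
+              id: "agent-ssh-key"
+              username: "jenkins"
+              privateKeySource:
+                directEntry:
+                  # the key material comes from a mounted secret file,
+                  # never from the YAML committed to the repository
+                  privateKey: "${agent_ssh_key}"
+```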
+
+### Job DSL plugin
+
+For the kworkflow pipeline, it is also very important to set up the pipelines automatically. kworkflow's CI workflow currently executes five different jobs, and thus we need to easily configure five different pipelines in Jenkins, each with the same credentials configuration and each executing its unique pipeline job.
+
+This can be achieved with the Job DSL plugin, which is called by the JCasC plugin to create jobs in an automated way. We write a Groovy file that declares an array with the five job names and, for each of them, creates a Jenkins Multibranch Pipeline set to execute the Jenkinsfile matching the job's name.
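+
+A simplified Job DSL sketch of this idea (job names, the credential id, and the script paths are illustrative, not the exact kworkflow configuration):
+
+```groovy
+// one multibranch pipeline per CI job; names are illustrative
+def ciJobs = ['unit_tests', 'code_coverage', 'lint', 'build', 'integration_tests']
+
+ciJobs.each { jobName ->
+    multibranchPipelineJob(jobName) {
+        branchSources {
+            github {
+                id(jobName)
+                repoOwner('kworkflow')
+                repository('kworkflow')
+                scanCredentialsId('github-app-credential') // hypothetical credential id
+            }
+        }
+        factory {
+            workflowBranchProjectFactory {
+                // each pipeline reads the Jenkinsfile named after its job
+                scriptPath("ci/${jobName}.jenkinsfile")
+            }
+        }
+    }
+}
+```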
+
+The Jenkinsfile with the job to be executed is expected to remain in the kworkflow repository.
+
diff --git a/_posts/2024-07-20-jenkins-intro.md b/_posts/2024-07-20-jenkins-intro.md
new file mode 100644
index 0000000..30b6a00
--- /dev/null
+++ b/_posts/2024-07-20-jenkins-intro.md
@@ -0,0 +1,124 @@
+---
+layout: post
+title: "[Jenkins] Introduction to Jenkins"
+date: 2024-07-20
+author: Marcelo Mendes Spessoto Junior
+author_website: https://marcelospessoto.github.io/
+tags: [jenkins, ci]
+---
+
+# Exploring Jenkins
+
+This is the very first post of my Jenkins series. The idea of this series is to
+register valuable information about the Jenkins tool, but also to track the different aspects
+of it that I've explored during Google Summer of Code.
+
+These blog posts will try to explain core concepts and practices regarding Jenkins in a succinct way.
+Of course, you can always dive into official documentation for more detailed explanations.
+
+## Summary
++ [Introduction to Jenkins](#intro)
+ + [The Pipeline](#pipeline)
+ + [Nodes and Distributed CI](#nodes)
+ + [The Jenkinsfile](#jenkinsfile)
+ + [Plugins](#plugins)
++ [Glossary](#glossary)
++ [Resources](#resources)
+
+
+
+## Introduction to Jenkins
+
+Jenkins is a Java-based tool used for providing a self-hosted automation server.
+
+It is quite useful for CI/CD infrastructures, since it provides the necessary automations for
+building and deploying code in a self-owned server.
+
+Since Jenkins is open source software, there are many good reasons to
+choose Jenkins over other CI/CD alternatives, such as:
++ Although it requires users to manage their own server, it is a completely free tool.
++ There are plenty of plugins to extend Jenkins functionalities, making it very versatile.
++ The open source model is good for improving overall security.
+
+Let's take a look at some key concepts to understand how to effectively use Jenkins in a CI/CD context...
+
+
+
+### The Pipeline
+
+The Jenkins Pipeline is the heart of every Jenkins automation: it is the automation workflow
+to be executed itself.
+
+
+
+#### Nodes and Distributed CI
+
+In Jenkins, it is very important to distribute the CI/CD tasks between different computing nodes. You can
+assign Jenkins agents (which can be an entire physical computer, a VM, a container, etc.) to execute specific
+tasks.
+
+
+
+#### The Jenkinsfile
+
+The Jenkinsfile is the "as code" definition of the instructions to be executed by a pipeline, usually placed
+in the root of the project. It is basically a Groovy script describing each **step** (i.e., task) to be executed
+by the CI/CD pipeline. These steps can be conceptually grouped into different **stages**.
+
+The Jenkinsfile also enables defining which **agent** (i.e., node) will run the pipeline or a specific stage or step.
+
+Here's an example of a Jenkinsfile, placed in the root of a kworkflow fork, which installs kworkflow and prints some content:
+
+```
+pipeline {
+ agent any
+ stages {
+ stage('Install kworkflow'){
+ agent {
+ label 'kw-installer'
+ }
+ steps {
+ sh './setup.sh --install --force'
+ }
+ }
+ stage('Echo Something'){
+ steps {
+ echo 'kworkflow has been installed'
+ echo 'Now I will print some statements'
+ }
+ }
+ }
+}
+```
+
+The Jenkinsfile above allows any agent to execute the pipeline. Then it executes the first stage, "Install kworkflow",
+which will exclusively use agents labeled "kw-installer". This stage has a single step: run the shell command
+"./setup.sh --install --force".
+
+Then it reaches the second stage, "Echo Something", which doesn't assign any specific agent, so `agent any` from the
+outer scope applies. It executes two steps, each `echo`ing a different statement.
+
+
+
+#### Plugins
+
+Plugins are one of the most important features of Jenkins. They extend Jenkins functionalities, and this
+applies to the Pipeline as well. With plugins, one can, for example, extend the Pipeline syntax for the
+Jenkinsfile to use a new `junit` step, or define a dynamic `docker` agent in the Pipeline.
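+
+For illustration, a pipeline combining both extensions might look like this (the image and report path are placeholders; the `docker` agent type requires the Docker Pipeline plugin and the `junit` step requires the JUnit plugin):
+
+```groovy
+pipeline {
+    // 'docker' agents are provided by the Docker Pipeline plugin
+    agent { docker { image 'maven:3-eclipse-temurin-17' } }
+    stages {
+        stage('Test') {
+            steps { sh 'mvn test' }
+        }
+    }
+    post {
+        always {
+            // the 'junit' step comes from the JUnit plugin
+            junit 'target/surefire-reports/*.xml'
+        }
+    }
+}
+```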
+
+
+
+## Glossary
+
++ CI/CD: Continuous Integration and Continuous Delivery, i.e., the automation of the process of developing and
+delivering code.
++ Groovy: A dynamic scripting language that can be compiled to bytecode for JVM (Java Virtual Machine). This
+enables Groovy to work pretty well with Java applications, such as Jenkins.
++ kworkflow: An open source project for eliminating manual overhead in the context of kernel development.
+
+
+
+## Resources
+
++ [Jenkins Handbook](https://www.jenkins.io/doc/book/)
+
diff --git a/_posts/2024-07-20-my-post-midterm-evaluation-progress-on-gsoc.md b/_posts/2024-07-20-my-post-midterm-evaluation-progress-on-gsoc.md
new file mode 100644
index 0000000..26316b3
--- /dev/null
+++ b/_posts/2024-07-20-my-post-midterm-evaluation-progress-on-gsoc.md
@@ -0,0 +1,108 @@
+---
+layout: post
+title: "[GSoC] My post midterm evaluation progress on GSoC"
+date: 2024-07-20
+author: Marcelo Mendes Spessoto Junior
+author_website: https://marcelospessoto.github.io/
+tags: [jenkins, ci]
+---
+
+It has been a while since I've posted about my GSoC project, and a lot has changed about it.
+Since the midterm evaluation period has come to an end very recently, I will take the opportunity
+to share how the CI infrastructure is now and my next steps for it.
+
+# Completely migrating kworkflow GitHub Actions CI to Jenkins
+
+The kworkflow project uses GitHub Actions for all its automation needs, such as unit and
+integration tests, code coverage analysis, and linting. However, some Jenkins automation may
+help the project in a few ways.
+
+First of all, it is expected that kworkflow integration tests get an improvement in complexity. In an ideal
+scenario, kworkflow will be tested with realistic test cases involving virtual machines (and maybe real
+machines) and coordination between devices (such as image deployment and ssh connections). The GitHub Actions
+Ubuntu VM environment will not be enough to handle such situations. It will be necessary to prepare a
+self-hosted setup, and Jenkins is the best CI/CD tool for a self-hosted implementation. It is widely used
+to this day and it is open source, granting decent community support and flexibility for a tool.
+
+Also, this implementation would enable the kworkflow project to host its own code coverage, no longer
+relying on Codecov. Although Codecov has recently made its source code available and also offers a
+self-hosted solution, it is still better for kworkflow to have its own coverage host, since Codecov is still
+under the BSL license (which allows access to the code but is not open source according to the OSI
+definition) at the time of this blog post.
+
+The final Jenkins implementation for this stage of development will completely migrate all
+GitHub Actions automation to Jenkins. It is expected, however, that the integration test implementation
+will change over time. Also, the Jenkins infrastructure may work alongside the existing GitHub Actions
+workflow if the former's implementation brings no benefit over the latter.
+
+## The general architecture of the Jenkins infrastructure
+
+First of all, the Jenkins infrastructure is open source and available as code (by using CaC Jenkins plugins,
+such as Jenkins Configuration as Code and Job DSL).
+
+The Jenkins server will be primarily composed of its controller, deployed via the official Jenkins Docker
+image. Alongside the controller, one or more agent containers will be deployed with docker-compose.
+These agents will be launched via SSH and will run the Pipelines. Their Dockerfile will also ensure the
+setup of the necessary tools for some Pipelines, such as kcov for code coverage or shellcheck for bash linting.
+These agent containers will run all tests, except the integration tests.
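+
+A minimal sketch of such a Compose file (service names and build contexts are illustrative):
+
+```yaml
+services:
+  controller:
+    build: ./controller   # Dockerfile with Jenkins and the plugin set
+    ports:
+      - "8080:8080"
+    volumes:
+      - jenkins_home:/var/jenkins_home
+  agent:
+    build: ./agent        # image with sshd, kcov, shellcheck, etc.
+    expose:
+      - "22"
+volumes:
+  jenkins_home:
+```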
+
+For the integration tests, something different must be done, because they contain testing scenarios
+with containers creating other containers (such as the ssh integration test). Since the agent containers run
+directly on the host kernel, one should avoid granting them privileges with the `--privileged` flag or access
+to the container engine socket. For integration tests, this should be an absolute NO: the
+agent containers would not only get higher privileges to create new containers for the integration tests (in
+the kworkflow case, the distro containers) but would also run contributors' code (such as `run_tests.sh`, which
+orchestrates the podman containers used for integration tests and could be maliciously modified by anyone). This
+would be a dangerous approach that would greatly increase the attack surface of the Jenkins physical server.
+
+The idea to overcome this is to simply follow GitHub's approach to providing their GitHub Actions CI/CD
+environment: pack it into a VM. If a virtual machine agent is responsible for running the integration tests,
+which requires containers in containers, it won't be dangerous to grant those containers higher privileges,
+since they will only have access to the VM's kernel, completely isolated from the Jenkins controller and the
+host kernel.
+
+For the other CI/CD automation, there is no need for a VM agent, because the other Pipelines will not
+require privilege escalation, and the lightweight approach is sufficient.
+
+Therefore, for the VM agent, Vagrant was the chosen tool to configure a virtualized environment for
+the integration tests. With a Vagrantfile, one can easily describe the virtual machine setup, which
+is essentially an "as code" approach to providing VMs.
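+
+As an illustration of the idea (the box name, provider, and provisioned packages below are assumptions, not the final setup), a Vagrantfile could look like:
+
+```ruby
+Vagrant.configure("2") do |config|
+  config.vm.box = "debian/bookworm64"
+  config.vm.provider "libvirt" do |vm|
+    vm.memory = 4096
+    vm.cpus = 4
+  end
+  # provision the tools the integration tests need inside the VM
+  config.vm.provision "shell", inline: <<-SHELL
+    apt-get update
+    apt-get install -y podman
+  SHELL
+end
+```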
+
+It is expected that the Jenkins CI for kworkflow will have, therefore, a containerized environment for
+the majority of testing jobs (unit tests, code coverage, etc.) and a full VM for running integration tests.
+
+## An overview of the current status
+
+The base of the infrastructure is essentially complete for regular tests (excluding integration), but it
+needs more polishing. I am also not completely satisfied with the current container agent implementation (using SSH)
+and plan to experiment a little more.
+
+It is now time to prepare for the most complex job: the integration tests. I can easily replicate their
+current state with a VM agent, but I also need to plan how to adapt them for the upcoming deploy test validation,
+which will require an approach similar to kernel-level CI testing.
+
+## The next steps
+
+Having a good base for a CI infrastructure to be applied to the kworkflow project, I plan to improve the
+following:
+
++ Experiment with Kubernetes plugin: The containerized solution for testing using SSH agents feels safer
+than using the Docker plugin to instantiate new containers within the controller container. However, there
+may be a more scalable and dynamic way to provide container agents. This may be the case for the Kubernetes
+plugin for Jenkins, which deals with using a Kubernetes cluster to dynamically provide new inbound Jenkins
+agents. This would be my final study on how to manage container agents.
+
++ Polish the CI infrastructure repo: Having a setup script to install all dependencies required to deploy the
+infrastructure would be nice. It is also very important to update its `README` and test the infrastructure
+in a real environment (i.e., on a real server).
+
++ Finish replicating the integration tests agent: Ensuring the infrastructure can execute an almost
+identical replica of the current state of the integration tests is something to be done soon. I also plan to
+study the [KernelCI Jenkins repository](https://github.com/kernelci/kernelci-jenkins). It will certainly
+help me prepare to implement the next steps for the CI infrastructure (the interaction with physical agents).
+
++ Write many blog posts about it: I plan to write tutorial blog posts explaining the different Jenkins
+concepts I have explored so far, aiming to produce more accessible content derived from this more theoretical
+stretch of GSoC and also to keep a record of everything I've done so far.
+
+
diff --git a/_posts/2024-07-23-jenkins-pipeline.md b/_posts/2024-07-23-jenkins-pipeline.md
new file mode 100644
index 0000000..9a577f5
--- /dev/null
+++ b/_posts/2024-07-23-jenkins-pipeline.md
@@ -0,0 +1,129 @@
+---
+layout: post
+title: "[Jenkins] Jenkins Pipeline"
+date: 2024-07-23
+author: Marcelo Mendes Spessoto Junior
+author_website: https://marcelospessoto.github.io/
+tags: [jenkins, ci]
+---
+
+# Exploring Jenkins - Part 2
+
+In this post we are going to understand the Jenkins job with a practical implementation.
+
+## Running the Jenkins Docker image
+
+Instead of installing and setting up a Jenkins environment on our host device, let's play with it on a
+Docker image first. There is a [Docker image specific for running Jenkins already on Docker Hub](https://hub.docker.com/r/jenkins/jenkins).
+
+Let's pull the image with `docker pull jenkins/jenkins:lts-jdk17` and then run it with
+`docker run -p 8080:8080 jenkins/jenkins:lts-jdk17`.
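+
+Note that with this bare `docker run`, the Jenkins home directory lives only inside the container. If you want your test setup to survive container removal, an optional variant with a named volume can be used (`/var/jenkins_home` is the Jenkins home in the official image):
+
+```shell
+# persist /var/jenkins_home across container restarts (optional)
+docker run -d -p 8080:8080 \
+  -v jenkins_home:/var/jenkins_home \
+  jenkins/jenkins:lts-jdk17
+```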
+
+We can now access the web interface at `localhost:8080`.
+
+It will redirect us to the Jenkins installation wizard. Copy the password printed on the terminal and paste it
+into the required field of the web interface. Then choose to install the recommended plugins.
+
+## Running our first Job in Jenkins
+
+Since we are just testing for now, we can skip the **highly recommended step of setting up a Jenkins agent**.
+
+On the Jenkins Dashboard, select the "Create a job" button. For this tutorial, we are going for
+"Freestyle project".
+
+
+
+
+
+On the job's configuration page, we can see how simple it is to configure a Freestyle project job, as it doesn't
+even require a Jenkinsfile. I will skip Source Code Management (leaving "None" set) and declare simple shell
+commands in the Build Steps section.
+
+
+
+We can now save and manually run the job by clicking the "Build Now" button on the left sidebar. We can
+also see the build history in this left sidebar, select a build individually, and see it in more detail.
+
+On a build's page, we can see the Console Output, where the output of the commands is displayed.
+
+## Creating a pipeline
+
+Let's return to the Dashboard and select "New Item" on the left sidebar. This time we'll create a Pipeline.
+You'll notice that we are now required to write a Groovy script for the pipeline, instead of just
+inserting shell commands.
+
+In the "Pipeline" section, under "Definition", we can either write the script manually in the web
+interface with the "Pipeline script" option or read it from our SCM with "Pipeline script from SCM".
+Let's choose the former.
+
+From the upper right corner of the Script window, we can select a template to begin with. I will stick with
+the Hello world template.
+
+
+
+You can play around with `echo` (print a string) and `sh` (run a shell command), but if you want more complex
+scripts, you can access `localhost:[YOUR-JENKINS-PORT]/job/Pipeline/pipeline-syntax/`. There you can find
+plenty of documentation and also a Snippet Generator where you can generate script segments through the
+GUI.
+
+## Creating a Multibranch Pipeline for a real GitHub repository
+
+After experimenting a bit with basic job management and Pipeline syntax, let's configure a job for a
+real-case scenario: integrating Jenkins with a GitHub repository, listening for new PRs and commits,
+and checking them.
+
+This will require the use of Jenkins's GitHub Branch Source Plugin, which is probably
+installed by default on a regular install of Jenkins. Then, we have to set up the following:
+
++ A GitHub App for the GitHub repository. It will allow third-parties (our Jenkins server) to read/write
+over our repository with the permissions set for the GitHub App. It can also listen to events in our repository
+and send webhooks;
++ A Jenkins credential containing the GitHub App private key. It allows Jenkins to read/write on the
+repository;
++ A multibranch pipeline job for the corresponding repository automation.
+
+### Setting the Multibranch Pipeline
+
+The first step is to [create the GitHub App, configure it, and create its Jenkins credential by following these
+steps](https://docs.cloudbees.com/docs/cloudbees-ci/latest/cloud-admin-guide/github-app-auth).
+
+Then, create the Multibranch Pipeline. In the Branch Sources section, choose "Add source".
+Associate the Pipeline with the repository by inserting the credentials and the repository HTTPS URL. You can
+check whether the configuration was successful by clicking the "Validate" option.
+
+To set the pipeline steps for this new job, we are required to write them in a Jenkinsfile kept in the
+repository. It will be read by Jenkins whenever it needs to start a new build.
+
+With the Multibranch Pipeline, you will be able to make builds for each branch of the project if necessary.
+
+### Testing the Multibranch Pipeline
+
+After setting it up and validating the Jenkins authentication through the GitHub App, you can write a Jenkinsfile
+at the path specified in the job configuration. Write it using the same syntax previously seen in the
+regular Pipeline section.
+
+If your Jenkins server is exposed to the internet and you have set `https://<your-jenkins-host>/github-webhook/`
+as the webhook URL for the GitHub App, the job should be automatically executed for branches that receive
+commits or pull requests.
+
+Otherwise, you can manually run the scan through the Jenkins web interface. Go to the Job page and select
+to build it.
+
+## Final thoughts
+
+So far we've covered the basics of Jenkins jobs by trying some of the Jenkins job types, covering Pipelines
+and their Groovy scripting syntax, and applying that knowledge to set up a Multibranch Pipeline automating
+builds for a GitHub repository.
+
+I hope to talk about Jenkins agents next and cover some implementations I've been experimenting with for the
+kworkflow CI I am working on.
+
+I will also prepare a blog post detailing the Jenkins as Code implementations
+for the kworkflow CI I am working on. It will cover this aspect of Jenkins with more in-depth details
+and also explain the progress I've made with my Jenkins as Code repository for kworkflow.
+
+## Resources
+
++ [Docker Jenkins image](https://github.com/jenkinsci/docker)
++ [GitHub Branch Source documentation](https://docs.cloudbees.com/docs/cloudbees-ci/latest/cloud-admin-guide/github-branch-source-plugin)
+
diff --git a/_posts/2024-07-24-jenkins-agents.md b/_posts/2024-07-24-jenkins-agents.md
new file mode 100644
index 0000000..b0e8afc
--- /dev/null
+++ b/_posts/2024-07-24-jenkins-agents.md
@@ -0,0 +1,304 @@
+---
+layout: post
+title: "[GSoC][Jenkins] Different Jenkins agent configurations and how I tested them for the kworkflow CI"
+date: 2024-07-24
+author: Marcelo Mendes Spessoto Junior
+author_website: https://marcelospessoto.github.io/
+tags: [jenkins, ci, docker, vms]
+---
+
+# Studying Jenkins Agents for the kworkflow CI
+
+## Table of Contents
++ [What is a Jenkins agent?](#what)
++ [Static agents vs. Cloud](#vs)
++ [What I've tried so far](#sofar)
+ + [The Docker Pipeline Plugin (`docker-workflow`)](#dworkflow)
+ + [SSH Agents](#sshagents)
+ + [Configuring the controller as code to implement the agents](#cascode)
+ + [Deploying the agents](#dagents)
+ + [The Docker agent](#dagent)
+ + [The VM agent](#vmagent)
+ + [The Docker Plugin (`docker-plugin`)](#dplugin)
+ + [Configuring Docker as a Cloud provider for kworkflow's Jenkins CI](#cdc)
+ + [Preparing the environment](#pte)
+ + [Creating the Cloud](#ctc)
++ [The integration tests](#tint)
++ [Final thoughts](#ft)
++ [References](#resources)
+
+During my GSoC project of planning a Jenkins CI for kworkflow, I spent plenty of time experimenting with different possibilities of agent types for the infrastructure.
+This post aims to register every attempt of agent implementation so far, their respective implementation guide for a Jenkins server, and my insights about them for the kworkflow context.
+
+## What is a Jenkins agent?
+
+Jenkins is designed to be a **distributed** automation server, which means that it is best used when the workload is balanced between different computing nodes. These computing nodes are the Jenkins agents and represent
+computing environments assigned to execute pipeline steps.
+
+Running jobs exclusively on agents is also recommended as a safer approach, as it provides appropriate
+controller isolation. Remember that a build in kworkflow's Jenkins will execute code from contributors, which
+may be broken or malicious. Therefore, one should avoid exposing the controller node directly to such code and
+should disable the controller's built-in node.
+
+## Static agents vs. Cloud
+
+In Jenkins, there are two primary ways to provide agents: you can either configure and launch them statically,
+by setting up your own containers and VMs and connecting them via (outbound) SSH or (inbound) TCP connections,
+or dynamically provision them through a Cloud provider, such as Amazon EC2 instances, Kubernetes Pods, etc.
+
+## What I've tried so far
+
+### The Docker Pipeline Plugin (`docker-workflow`)
+
+My first attempt at configuring an agent for Jenkins was by using the Docker Pipeline Plugin. It allows one to define in the **jenkinsfile** the Docker image that will be run for a specific pipeline/stage and
+dynamically use it in the pipeline when required.
+
+It is pretty simple to use. You just have to install the plugin on your Jenkins server and write a jenkinsfile
+invoking the dynamic Docker container. You can pull a container image from Docker Hub by
+defining
+
+```
+agent {
+ docker { image 'chosen-image' }
+}
+```
+
+or even using a custom Dockerfile from the root of the project with
+
+```
+agent {
+ dockerfile true
+}
+```
+
+but notice that you need to define an SCM for your Pipeline, or use a Multibranch Pipeline, to use a custom Dockerfile, since the plugin needs a repository to fetch the Dockerfile from.
+[There is full documentation on how to define Docker agents from the pipeline with the `docker-workflow` plugin, covering both the `docker` and `dockerfile` agent options](https://www.jenkins.io/doc/book/pipeline/syntax/#agent).
+If you are running Jenkins itself in Docker, remember that you must mount the host's Docker socket into the container (or run the Jenkins container with `--privileged`, but don't do that!), and the Jenkins
+process must have permission to write to the socket.
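+
+For illustration, a minimal `docker-compose.yml` fragment for a containerized Jenkins with the host socket mounted could look like this (the image tag and volume names are just examples, not the kworkflow setup):
+
+```
+services:
+  jenkins:
+    image: jenkins/jenkins:lts
+    ports:
+      - "8080:8080"
+    volumes:
+      - jenkins_home:/var/jenkins_home
+      # expose the host Docker daemon to the Jenkins process;
+      # the jenkins user inside the container must be allowed to write to it
+      - /var/run/docker.sock:/var/run/docker.sock
+volumes:
+  jenkins_home:
+```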
+
+It was a very easy and simple approach, but I am afraid it wouldn't work well for the kworkflow project. Look at this [previously defined Jenkinsfile from my kworkflow branch](https://github.com/MarceloSpessoto/kworkflow/commit/9ff5c4c408fb2fb1981c3c5138d592cbbd060a82).
+
+
+```
+agent {
+    dockerfile {
+        filename 'Dockerfile'
+        args '--privileged=true'
+    }
+}
+```
+
+When checking whether I could run kworkflow's integration tests (which require creating new containers) with this plugin, I ran the agent with the `--privileged` option, a fix that allows Docker in Docker (DinD).
+It is a very cursed and forbidden fix, as it gives a container nearly unrestricted control over the host kernel.
+
+When it actually worked, I realized it wasn't good news at all. Since this Jenkins CI will be used in an open source software context, letting any unknown contributor modify the Jenkinsfiles
+while this plugin is active on the server would expose a large attack surface on the server host system. Even after deciding that the integration tests should run not in a container, but in a more isolated VM agent,
+one could still modify the Jenkinsfiles of the other jobs, such as code coverage and unit tests, and grant the `--privileged` attribute to the container agent.
+
+The simplicity this plugin provides is great, but it doesn't fit the kworkflow case, where the ability to define container agents should be restricted to trusted maintainers with Jenkins administration rights.
+So it felt wiser not to install this plugin on the server, disabling the possibility of defining agents from the public Pipeline script.
+
+### SSH agents
+
+The second option I've tried is to implement static SSH agents to evaluate CI tests, i.e., configure static VMs/containers to act as agents and evaluate CI/CD Pipelines.
+
+First of all, you need to install the [SSH Build Agents Plugin](https://plugins.jenkins.io/ssh-slaves/) and then configure your nodes. To be considered a valid agent, a node must:
+1. have Java properly installed;
+2. have a user named `jenkins`.
+
+The [docker-ssh-agent container](https://github.com/jenkinsci/docker-ssh-agent) already meets these requirements and
+is highly recommended for container SSH agents. For other types of nodes, such as a VM, you will have to ensure them
+yourself.
+
+Then, to make it work with the controller node, one must:
+1. configure an SSH connection method for the controller to execute jobs on the agent. This can be done by placing a public
+SSH key on the agent and storing the corresponding private key on the Jenkins controller as a credential;
+2. set, in the controller's node configuration, the remote directory where jobs will be executed.
+
+#### Configuring the controller as code to implement the agents
+
+The configuration steps above can be automated with Jenkins as Code. For the kworkflow CI, I've done the following:
+
+```
+credentials:
+ [...]
+ system:
+ domainCredentials:
+ - credentials:
+ [...]
+ - basicSSHUserPrivateKey:
+ description: "Credentials for Docker SSH Agent"
+ id: "docker-agent"
+ username: "jenkins"
+ passphrase: "${SSH_DOCKER_PASSWORD}"
+ privateKeySource:
+ directEntry:
+ privateKey: "${file:/usr/local/configuration/secrets/container_key}"
+ - basicSSHUserPrivateKey:
+ description: "Credentials for VM Agent"
+ id: "vm-agent"
+ username: "jenkins"
+ passphrase: "${SSH_VM_PASSWORD}"
+ privateKeySource:
+ directEntry:
+ privateKey: "${file:/usr/local/configuration/secrets/vm_key}"
+```
+
+The configuration above defines two SSH credentials, one for the Docker agent and one for the VM agent. Each stores
+a username, a passphrase, and the SSH private key required to connect to the node.
+
+The Jenkins Configuration as Code plugin allows one to pass sensitive information into the config YAML through
+environment variables and also through files (`${file:/var/key}` will be replaced with the content of
+/var/key).
+
+There is also the proper configuration of each agent to be done:
+
+```
+jenkins:
+ [...]
+ nodes:
+ - permanent:
+ labelString: "docker-agent"
+ name: "docker-agent"
+ remoteFS: "/home/jenkins/agent"
+ launcher:
+ ssh:
+ credentialsId: docker-agent
+ host: localhost
+ port: DOCKER_AGENT_PORT
+ sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
+ - permanent:
+ labelString: "vm-agent"
+ name: "vm-agent"
+ remoteFS: "/var/lib/jenkins"
+ launcher:
+ ssh:
+ credentialsId: vm-agent
+ host: localhost
+ port: VM_AGENT_PORT
+ sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
+```
+
+Just configure the agent's label, its name, the directory where jobs will run, and the launch method: in this case an
+SSH launcher, with the SSH credential to use (defined in the previous code block), the host, the port, and the host
+key verification strategy.
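+
+Once the nodes above are online, a Pipeline selects them purely by label. As a sketch (the shell step is a hypothetical test entry point, not a real kworkflow script):
+
+```
+pipeline {
+    // run the whole pipeline on the node labeled 'docker-agent'
+    agent { label 'docker-agent' }
+    stages {
+        stage('Unit tests') {
+            steps {
+                sh './run_tests.sh'   // placeholder command
+            }
+        }
+    }
+}
+```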
+
+You can get a fully detailed explanation of how to configure an SSH agent on the [Plugin official documentation](https://github.com/jenkinsci/ssh-agents-plugin/blob/main/doc/CONFIGURE.md).
+
+#### Deploying the agents
+
+##### The Docker agent
+
+After configuring the controller properly, the last step was to deploy the static agents. For the Docker agent, I've added an extra service to the `docker-compose.yml`. Its image is built from
+a Dockerfile extending `FROM jenkins/ssh-agent:latest`, which configures the environment for kworkflow's needs (such as setting up
+the kcov software for code coverage tests) and also sets a custom SSH port using the `sed` command.
+
+To pass the custom port from the host to the container, I've passed it to docker-compose.yml as an `ARG`, and in the
+Dockerfile I've passed the port `ARG` to an environment variable with `ENV`. The differences between Docker `ARG`s and `ENV`s can be seen in [this nice post](https://vsupalov.com/docker-arg-env-variable-guide/).
+
+With jenkins/ssh-agent, the public SSH key can easily be configured by setting the `JENKINS_AGENT_SSH_PUBKEY` environment variable, which I've done
+in the docker-compose file.
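+
+As a sketch of that wiring (the port value, service name, and `AGENT_PUBKEY` variable are illustrative, not the exact kworkflow files):
+
+```
+services:
+  docker-agent:
+    build:
+      context: ./agent
+      args:
+        # forwarded to the Dockerfile as ARG, then promoted to ENV there
+        AGENT_SSH_PORT: "2222"
+    environment:
+      # public half of the keypair whose private half lives in Jenkins credentials
+      JENKINS_AGENT_SSH_PUBKEY: "${AGENT_PUBKEY}"
+    ports:
+      - "2222:2222"
+```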
+
+The Docker agent worked pretty well, but I didn't like the results, the complete absence of scalability being a big reason.
+There were more problems, too. Many kworkflow jobs I would assign to the container require `sudo`-privileged commands, such as `apt install` to test the installation of dependencies.
+If passwordless `sudo` were given to the `jenkins` user in the Docker agent, it would be easy to mess up the container itself, making it prone to breaking and requiring manual maintenance.
+One could also enable `sudo` for selected commands only, but that would require manual changes whenever a new command was needed. It would be much better to enable `sudo` in ephemeral, dynamically provisioned
+container agents.
+
+##### The VM agent
+
+To automate the deployment of a VM agent, I've used Vagrant.
+I've created a very simple Vagrantfile that configures the Java installation, creates the `jenkins` user, creates a directory for job execution, and injects the SSH public key. This can be done with
+[Vagrant provisioning](https://developer.hashicorp.com/vagrant/docs/provisioning). You can provision the VM with an automation tool like Ansible, Chef, or Puppet; basic shell provisioning was enough for my case.
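+
+A minimal sketch of such a Vagrantfile, assuming a Debian box and shell provisioning (box name, Java version, and the public key are placeholders):
+
+```
+Vagrant.configure("2") do |config|
+  config.vm.box = "debian/bookworm64"   # example box
+  config.vm.provision "shell", inline: <<-SHELL
+    apt-get update && apt-get install -y openjdk-17-jre   # Java for the agent
+    useradd -m jenkins                                    # the agent user
+    mkdir -p /var/lib/jenkins && chown jenkins: /var/lib/jenkins
+    # inject the controller's public key (placeholder value)
+    install -d -m 700 -o jenkins /home/jenkins/.ssh
+    echo "ssh-ed25519 AAAA... jenkins" >> /home/jenkins/.ssh/authorized_keys
+  SHELL
+end
+```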
+
+To ensure the Jenkins controller would be able to reach the VM, I had to change the docker-compose `network_mode` to "host".
+Since docker-compose isolates its orchestrated containers in their own network, separate from the default docker interface,
+this change was necessary so the container would share the host's IP address and ports.
+
+I've also changed my Vagrant provider from libvirt to VirtualBox, because the former was presenting networking problems. For production, it is very likely that I will switch to VMware.
+
+The VM agent has the same flaws as the Docker static SSH agent, but, since the integration tests from kworkflow
+require a more isolated and less used environment, it will be enough for now. I am still trying to figure out a way
+to provide a VM agent cloud that is not proprietary and can be self-owned.
+
+### The Docker Plugin (`docker-plugin`)
+
+After trying both the Docker Pipeline Plugin and Docker SSH agents and not being satisfied with either, I've considered trying a Cloud for Jenkins with the Kubernetes plugin. It would be much more complex and difficult to handle, but it
+would be safer than defining containers directly from the pipeline with Docker Pipeline, and much more scalable and easier to maintain than static Docker SSH agents.
+
+It would be a long path to follow and an overkill solution for kworkflow, which doesn't receive such a high volume of PRs and could benefit from something simpler. Thankfully, I found out that I had been misled by
+the confusingly named Docker plugins: there are actually two different ones.
+
+I had installed both `docker-workflow` and `docker-plugin` before, but I thought the latter complemented the former as a solution for Pipeline syntax for Docker usage. It turns out that `docker-plugin` has a completely
+different purpose: allowing a Docker host API to be used as a Cloud provider for the Jenkins server. It does exactly what I had been looking for that entire time, while avoiding the need to configure a super complex Kubernetes
+cluster or rely on a proprietary Cloud provider.
+
+With `docker-plugin`, it is possible to set up the Jenkins controller to communicate with a Docker host API as a Cloud and then use dynamically provisioned Docker agents from this Cloud to evaluate jobs.
+
+#### Configuring Docker as Cloud provider for kworkflow's Jenkins CI
+
+Here's a basic usage of the plugin. It is very simple and straightforward to configure.
+
+##### Preparing the environment
+
+Since the Jenkins server for kworkflow will run inside a Docker container communicating with the Docker host API of the host, mounting the Docker socket into the container and making sure Jenkins can write to it
+is enough to prepare the environment for my case.
+
+With this Jenkins plugin, it is also possible to connect to a remote Docker host API by opening a TCP port for it. On the Docker host, just edit the `ExecStart` line in `/lib/systemd/system/docker.service`
+to `ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:[PORT] -H unix:///var/run/docker.sock`, restart the Docker daemon, and ensure the Jenkins server can reach the device on that port.
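+
+A slightly cleaner variant, so a package upgrade doesn't overwrite the edit, is a systemd drop-in override; this is the idea, with the port left as a placeholder:
+
+```
+# /etc/systemd/system/docker.service.d/override.conf
+[Service]
+# the empty ExecStart= clears the original before redefining it
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:[PORT] -H unix:///var/run/docker.sock
+# then: systemctl daemon-reload && systemctl restart docker
+# note: the plain-TCP API is unauthenticated; restrict it or add TLS in production
+```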
+
+##### Creating the Cloud
+
+On the Jenkins controller web interface, create a Cloud. Ensure the Docker plugin is installed so that "Docker"
+appears as a Cloud type option.
+
+
+
+Proceed and configure the Cloud.
+
+There are two important fields here:
++ "Docker Cloud details": here you configure how the Jenkins server will communicate with the Docker host API.
+Fill the "Docker Host URI" field with the path of the mounted socket or the address of the remote Docker host. Use "Test Connection" to check whether Jenkins can use the Docker Cloud you've set up.
++ "Docker Agent templates": here you can define multiple Docker templates to be provided by the Docker Cloud.
+For each template you wish to define, the following fields are important:
+ + Labels: define the labels that identify the template. If a template has the label "hello-world",
+ the Cloud can provide a container agent from this template whenever a Pipeline script requires an agent with that label (`agent { label 'hello-world' }`).
+ + Docker Image: the Docker image the container agent will be created from. It must be the hash, tag, or name of an image available on the Docker host.
+ + Connect method: whether the controller will connect to the container via SSH, via JNLP, or by just attaching to the container.
+
+Configure the Cloud and save it.
+Run a basic Pipeline invoking a template with its label and validate.
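+
+The same Cloud can also be captured with Configuration as Code. A sketch, with field names following the `docker-plugin` JCasC export and all values illustrative:
+
+```
+jenkins:
+  clouds:
+    - docker:
+        name: "docker"
+        dockerApi:
+          dockerHost:
+            uri: "unix:///var/run/docker.sock"   # or tcp://host:port for a remote API
+        templates:
+          - labelString: "hello-world"           # matched by agent { label 'hello-world' }
+            dockerTemplateBase:
+              image: "jenkins/agent:latest"      # image available on the Docker host
+            connector:
+              attach:
+                user: "jenkins"                  # the "attach container" connect method
+            instanceCapStr: "4"                  # cap on concurrent containers
+```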
+
+This is, in my opinion, the best possible implementation of Docker agents for the kworkflow. I am finally satisfied with it
+and will now move on to the integration tests.
+
+## The integration tests
+
+The integration tests from kworkflow, in their current state, require the creation of new containers. From what has been
+discussed earlier in this post, it is clear that container agents won't offer a proper and safe solution for this specific use case.
+
+Because of that, a Virtual Machine SSH agent will provide the necessary isolation for this scenario. Of course, even more
+complexity is expected, since the integration tests are meant to eventually handle more sophisticated cases,
+such as deployment tests on VMs and even physical devices. But, for now, the current state of the integration tests can be replicated
+with a simple VM running the test.
+
+The other tests can take advantage of a more lightweight Docker container environment provided by the Docker Cloud.
+
+## Final thoughts
+
+After studying many different Jenkins plugins and how kworkflow can benefit from each one, I've come to the decision that the best
+solution is to provide a Jenkins Cloud using Docker for the basic workflows (code coverage, unit tests, etc.) and a VM agent for the
+integration tests.
+
+After finishing and polishing the Jenkins as Code infrastructure I'm providing, and ensuring it can offer these functionalities on a real server, I will keep track of
+the development status of the integration steps and actively plan, with the maintainers and contributors of kworkflow, the implementation of the most sophisticated tests, such as
+testing the `deploy` feature. It will probably be an even more theoretical and study-focused phase of this GSoC project, and from now on I hope to keep updating this blog with my
+progress and accumulated insights.
+
+
+## References
+
++ [Controller isolation page from Jenkins](https://www.jenkins.io/doc/book/security/controller-isolation/)
++ [Docker agent definition from the pipeline](https://www.jenkins.io/doc/book/pipeline/docker/)
++ [Pipeline Syntax for agents](https://www.jenkins.io/doc/book/pipeline/syntax/#agent)
++ [SSH Build Agents Plugin](https://plugins.jenkins.io/ssh-slaves/)
++ [docker-ssh-agent container](https://github.com/jenkinsci/docker-ssh-agent)
diff --git a/_posts/2024-07-29-configuring-the-docker-cloud-for-kw.md b/_posts/2024-07-29-configuring-the-docker-cloud-for-kw.md
new file mode 100644
index 0000000..4d7da64
--- /dev/null
+++ b/_posts/2024-07-29-configuring-the-docker-cloud-for-kw.md
@@ -0,0 +1,87 @@
+---
+layout: post
+title: "[GSoC] Configuring the Docker Cloud for the kworkflow's Jenkins CI"
+date: 2024-07-29
+author: Marcelo Mendes Spessoto Junior
+author_website: https://marcelospessoto.github.io/
+tags: [jenkins, ci, cloud, docker]
+---
+
+# Configuring a Docker Cloud for Jenkins using the Docker Plugin
+
+After investigating different methods for launching containers to be used as Jenkins agents and deciding to configure a Docker daemon to be used as a Cloud to provision containers to the Jenkins controller, it is now time
+to effectively implement this Cloud in the kworkflow's Jenkins CI.
+
+## The basic configuration of the Cloud
+
+The first step was to install the Docker Plugin (with id `docker-plugin`) on the Jenkins controller. The basic configuration setup can be found in the latest GSoC post about Jenkins agents.
+
+## Setting custom images
+
+The regular `jenkins/*-agent` containers from the Docker Hub registry aren't enough to run the original collection of CI workflows for the kworkflow project. The current kworkflow test pipeline interacts with dependencies
+that aren't set up by default in the official Jenkins agent containers, such as `kcov` and `shellcheck`. Therefore, these images must be extended to include the required tools.
+
+To use these custom images, I've created their respective Dockerfiles and pushed them onto Docker Hub.
+
+### The custom images
+
+After experimenting a bit and planning how to set the testing environments for each pipeline for the kworkflow
+project, I've decided to set the tests and container images like the following:
+
++ There will be a `kw-basic` container image. It will install the dependencies required for running unit tests, the dependencies
+for documentation generation, and the `shfmt` package required to lint shell scripts. It is also
+the only container image that will set up `sudo` for the Jenkins user, as it runs commands such as `apt-get`. It is expected to be
+the most lightweight image and it should run the [`unit_tests.yml`](https://github.com/kworkflow/kworkflow/blob/unstable/.github/workflows/unit_tests.yml),
+[`test_setup_and_docs.yml`](https://github.com/kworkflow/kworkflow/blob/unstable/.github/workflows/test_setup_and_docs.yml)
+and [`shfmt.yml`](https://github.com/kworkflow/kworkflow/blob/unstable/.github/workflows/shfmt.yml) pipelines.
+
++ There will be a `kw-kcov` container image. It will install `kcov`, its dependencies, and the dependencies required
+to run the unit tests (`kcov` will run the unit tests when generating the code coverage, so it is important to
+ensure the unit tests work). It will be a container image to be run specifically for the
+[`kcov.yml`](https://github.com/kworkflow/kworkflow/blob/unstable/.github/workflows/kcov.yml) pipeline.
+
++ There will be a `kw-shellchek` container image. It will run the [`shellcheck-reviewdog.yml`](https://github.com/kworkflow/kworkflow/blob/unstable/.github/workflows/shellcheck_reviewdog.yml)
+pipeline. It basically installs [`shellcheck`](https://github.com/koalaman/shellcheck) and [`reviewdog`](https://github.com/reviewdog/reviewdog).
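+
+As an illustration of the idea (not the exact Dockerfiles I've pushed), extending the base agent image for the kcov case looks roughly like this; the package list is approximate:
+
+```
+# sketch of a kcov-enabled agent image
+FROM jenkins/agent:latest
+USER root
+# install kcov plus whatever the unit tests need (approximate list)
+RUN apt-get update && apt-get install -y --no-install-recommends \
+        kcov bash \
+    && rm -rf /var/lib/apt/lists/*
+# drop back to the unprivileged agent user
+USER jenkins
+```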
+
+### Pushing the images to Docker Hub
+
+It is straightforward to push your custom Docker images to the Docker Hub online registry. First of all, create an account on Docker Hub.
+Then, create a repository. Ensure it is public; otherwise, you will have to manually configure a credential in Jenkins to pull from the private repository.
+
+After setting up the repository properly, you have to build the image locally and push it with `docker push <namespace>/<image>`.
+But first, ensure that you are logged into your Docker Hub account on the Docker CLI, so you have push permission. Run `docker login -u <username>` and enter your password.
+
+I've created four repositories (i.e., four different images) in the namespace `marcelospe` (my account username).
+The `marcelospe/kw-install-and-setup` repository was created while experimenting a little bit, but, in the
+end, I decided to use `marcelospe/kw-basic` for the `test_setup_and_docs.yml` workflow.
+
+
+
+## Some small problems I've encountered
+
++ When configuring the use of the custom Docker images, the controller couldn't start the job on the agents.
+It is possible that the problem was caused by extending `jenkins/ssh-agent` images and using the Docker Plugin
+attach method for connecting with the containers instead of SSH. Extending the `jenkins/agent` image for my
+custom images fixed the problem.
++ Two test cases are failing for the unit tests in the agents:
+ + `./tests/unit/build_test.sh`: The test fails on the new `from_sha` feature for `kw build`. It is likely
+ happening because the `unstable` branch of the fork I'm testing the CI on doesn't have the recent
+ [fix for this bug](https://github.com/kworkflow/kworkflow/pull/1141);
+ + `./tests/unit/lib/signal_manager_test.sh`: yet to be investigated.
++ The `shellcheck` pipeline appears to work properly, but `reviewdog` is not configured yet. This means that
+`reviewdog` won't publish its comments on PR'ed code.
+
+## My next steps
+
+This week I will focus on fixing the unit test problems, polishing the repository with the "as Code" configuration
+for the CI, and integrating `reviewdog` into the `shellcheck` pipeline of the Jenkins CI.
+
+I've also noticed recently that, despite configuring different jobs for each Pipeline, the `GitHub Branch Source` plugin
+won't produce a new check for each job. Instead, it overwrites the previous check. It is desired that each
+job have a GitHub Check of its own.
+
+
+
+I will see if the [`GitHub Checks`](https://plugins.jenkins.io/github-checks/) plugin can fix it for me. It enables the communication of Jenkins with
+the [GitHub Checks API](https://docs.github.com/en/rest/checks?apiVersion=2022-11-28#runs).
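+
+If that works out, each job could publish its own check explicitly with the `publishChecks` step that the Checks API exposes to Pipelines. A sketch, with the check name and messages as placeholders:
+
+```
+post {
+    success {
+        // hypothetical per-job check; names are placeholders
+        publishChecks name: 'unit-tests',
+                      title: 'Unit tests',
+                      summary: 'All unit tests passed.'
+    }
+}
+```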
+
diff --git a/images/Build_steps_freestyle.png b/images/Build_steps_freestyle.png
new file mode 100644
index 0000000..75f963f
Binary files /dev/null and b/images/Build_steps_freestyle.png differ
diff --git a/images/Cloud_Install_Plugin.png b/images/Cloud_Install_Plugin.png
new file mode 100644
index 0000000..2889710
Binary files /dev/null and b/images/Cloud_Install_Plugin.png differ
diff --git a/images/Create_Cloud.png b/images/Create_Cloud.png
new file mode 100644
index 0000000..f1581f7
Binary files /dev/null and b/images/Create_Cloud.png differ
diff --git a/images/Groovy_script_window.png b/images/Groovy_script_window.png
new file mode 100644
index 0000000..628245d
Binary files /dev/null and b/images/Groovy_script_window.png differ
diff --git a/images/Jenkins_Create_a_job.png b/images/Jenkins_Create_a_job.png
new file mode 100644
index 0000000..8bc0225
Binary files /dev/null and b/images/Jenkins_Create_a_job.png differ
diff --git a/images/Jenkins_dashboard.png b/images/Jenkins_dashboard.png
new file mode 100644
index 0000000..cf1d800
Binary files /dev/null and b/images/Jenkins_dashboard.png differ
diff --git a/images/Welcome_to_Jenkins.png b/images/Welcome_to_Jenkins.png
new file mode 100644
index 0000000..d877e6c
Binary files /dev/null and b/images/Welcome_to_Jenkins.png differ
diff --git a/images/docker_hub_images.png b/images/docker_hub_images.png
new file mode 100644
index 0000000..2430133
Binary files /dev/null and b/images/docker_hub_images.png differ
diff --git a/images/github_checks.png b/images/github_checks.png
new file mode 100644
index 0000000..54f3b2b
Binary files /dev/null and b/images/github_checks.png differ
diff --git a/images/post_1_branches.png b/images/post_1_branches.png
new file mode 100644
index 0000000..4be48f6
Binary files /dev/null and b/images/post_1_branches.png differ
diff --git a/images/post_1_codecoverage.png b/images/post_1_codecoverage.png
new file mode 100644
index 0000000..6405d42
Binary files /dev/null and b/images/post_1_codecoverage.png differ
diff --git a/images/post_1_nodes.png b/images/post_1_nodes.png
new file mode 100644
index 0000000..af6bb04
Binary files /dev/null and b/images/post_1_nodes.png differ