diff --git a/documentation/azure-devops/setup-aks-provisioning-pipeline.asciidoc b/documentation/azure-devops/setup-aks-provisioning-pipeline.asciidoc index 2e70283b8..5b712596b 100644 --- a/documentation/azure-devops/setup-aks-provisioning-pipeline.asciidoc +++ b/documentation/azure-devops/setup-aks-provisioning-pipeline.asciidoc @@ -1,17 +1,26 @@ -= Setting up a Azure AKS provisioning pipeline on Azure DevOps +:provider: Azure DevOps +:pipeline_type: pipeline +:trigger_sentence_azure: +:pipeline_type2: pipeline +:path_provider: azure-devops +:aks_variables_path: Azure DevOps > Pipelines > Library > 'aks-variables' += Setting up an Azure AKS provisioning {pipeline_type} on {provider} -In this section we will create a pipeline which will provision an Azure AKS cluster. This pipeline will be configured to be manually triggered by the user. As part of AKS cluster provisioning, a NGINX Ingress controller is deployed and a variable group with the name `aks-variables` is created, which contains, among others, the DNS name of the Ingress controller, that you you will need to add as CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix for more details. +In this section we will create a {pipeline_type} which will provision an Azure AKS cluster. This {pipeline_type} will be configured to be manually triggered by the user. As part of AKS cluster provisioning, an NGINX Ingress controller is deployed and a variable group named `aks-variables` is created, which contains, among others, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix for more details. -The creation of the pipeline will follow the project workflow, so a new branch named `feature/aks-provisioning` will be created, the YAML file for the pipeline and the terraform files for creating the cluster will be pushed to it. 
+The creation of the {pipeline_type} will follow the project workflow, so a new branch named `feature/aks-provisioning` will be created, and the YAML file for the {pipeline_type} and the Terraform files for creating the cluster will be pushed to it. Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the `-b` flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the `-w` flag. -The script located at `/scripts/pipelines/azure-devops/pipeline_generator.sh` will automatically create this new branch, create the AKS provisioning pipeline based on the YAML template, create the Pull Request and, if it is possible, merge this new branch into the specified branch. +The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create this new branch, create the AKS provisioning {pipeline_type} based on the YAML template, create the Pull Request and, if it is possible, merge this new branch into the specified branch. == Prerequisites -* Install the https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks[Terraform extension] for Azure DevOps. -* Create a https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#create-a-service-connection[service connection] to Azure Resource Manager and name it `aks-connection`. If you already have a service connection available or you need a specific connection name, please update `aks-pipeline.cfg` accordingly. + * Install the https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks[Terraform extension] for Azure DevOps. 
+ * Create a https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#create-a-service-connection[service connection] to Azure Resource Manager and name it `aks-connection`. If you already have a service connection available or you need a specific connection name, please update `aks-pipeline.cfg` accordingly. + + + * An Azure resource group in the desired cluster location (e.g. `westeurope`). You can use an existing one or create a new one with the following command: ``` @@ -32,7 +41,7 @@ az storage container create -n --account-name kubectl --kubeconfig= ``` -To get the DNS name of the NGINX Ingress controller on the EKS cluster, go into Azure DevOps > Pipelines > Library > `aks-variables`. +To get the DNS name of the NGINX Ingress controller on the AKS cluster, go into {aks_variables_path}. Rancher, if installed, will be available on `https:///dashboard`. You will be asked for an initial password, which can be retrieved with: @@ -104,4 +114,4 @@ kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ === Appendix: Destroying the cluster -To destroy the provisioned resources, set `operation` pipeline variable value to `destroy` and run the pipeline. \ No newline at end of file +To destroy the provisioned resources, set `operation` {pipeline_type} variable value to `destroy` and run the {pipeline_type}. 
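As an illustration of the CNAME step mentioned at the top of this section — this is not part of the generated pipeline, and the resource group, zone and record names below are hypothetical — pointing an application domain at the Ingress controller DNS name with the Azure CLI could look like:

```
# Hypothetical zone/record names; the CNAME target is the Ingress DNS name
# taken from the aks-variables variable group.
az network dns record-set cname set-record \
  --resource-group my-dns-rg \
  --zone-name example.com \
  --record-set-name app \
  --cname mycluster.westeurope.cloudapp.azure.com
```

This assumes the domain's DNS zone is hosted in Azure DNS; with another DNS provider, create the equivalent CNAME record there.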
diff --git a/documentation/gitlab/setup-aks-provisioning-pipeline.asciidoc b/documentation/gitlab/setup-aks-provisioning-pipeline.asciidoc new file mode 100644 index 000000000..5fd710279 --- /dev/null +++ b/documentation/gitlab/setup-aks-provisioning-pipeline.asciidoc @@ -0,0 +1,99 @@ +:provider: Gitlab +:pipeline_type: pipeline +:trigger_sentence_gitlab: +:pipeline_type2: Gitlab pipeline +:path_provider: gitlab +:aks_variables_path: Group > settings > ci_cd > Variables += Setting up an Azure AKS provisioning {pipeline_type} on {provider} + +In this section we will create a {pipeline_type} which will provision an Azure AKS cluster. This {pipeline_type} will be configured to be manually triggered by the user. As part of AKS cluster provisioning, an NGINX Ingress controller is deployed and a variable group named `aks-variables` is created, which contains, among others, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix for more details. + +The creation of the {pipeline_type} will follow the project workflow, so a new branch named `feature/aks-provisioning` will be created, and the YAML file for the {pipeline_type} and the Terraform files for creating the cluster will be pushed to it. + +Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the `-b` flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the `-w` flag. + +The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create this new branch, create the AKS provisioning {pipeline_type} based on the YAML template, create the Pull Request and, if it is possible, merge this new branch into the specified branch. 
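For orientation, the branch-and-PR workflow the script automates can be sketched with plain git and glab commands (branch names are the defaults from this section; the real script also renders the YAML template and Terraform files before committing):

```
# Manual sketch of the automated workflow (assumes glab is installed and authenticated).
git checkout -b feature/aks-provisioning
git add .pipelines .terraform
git commit -m "Add AKS provisioning pipeline"
git push -u origin feature/aks-provisioning
# Open a merge request against the target branch given with -b.
glab mr create -b develop -s feature/aks-provisioning -t "merge feature/aks-provisioning"
```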
+ +== Prerequisites + + + * Add Azure credentials as https://docs.gitlab.com/ee/ci/variables/#add-a-cicd-variable-to-a-project[Variables] in your repository and name them `AZURE_USERNAME` and `AZURE_PASSWORD`. If you already have available credentials or you need specific credentials, please update `aks-provisioning.yml` accordingly. + +* Create a Gitlab https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html#create-a-personal-access-token[personal access token (PAT)] and store it in an environment variable named `GITLAB_PAT`. + +* An Azure resource group in the desired cluster location (e.g. `westeurope`). You can use an existing one or create a new one with the following command: + +``` +az group create -n -l +``` + +* This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with `git pull`). + +== Creating the {pipeline_type} using the provided script + +Before executing the script you will need to customize some input variables about the environment. To do so, you can either edit the `terraform.tfvars` file or take advantage of the `set-terraform-variables.sh` script located at `/scripts/environment-provisioning/azure/aks`, which allows you to create or update values for the required variables, passing them as flags. As a full example: + +``` +./set-terraform-variables.sh --location --resource_group_name --instance_type --worker_node_count --dns_prefix +``` + +=== Usage +``` +pipeline_generator.sh \ + -c \ + -d \ + --cluster-name \ + --storage-container \ + [--rancher] \ + [-b ] \ + [-w] +``` + +NOTE: The config file for the AKS provisioning {pipeline_type} is located at `/scripts/pipelines/{path_provider}/templates/aks/aks-pipeline.cfg`. + +=== Flags +``` +-c, --config-file [Required] Configuration file containing pipeline definition. 
+-d, --local-directory [Required] Local directory of your project (the path should always be using '/' and not '\'). + --cluster-name [Required] Name for the cluster. + --storage-container [Required] Name of the storage container where the Terraform state of the cluster will be stored. + --rancher Install Rancher to manage the cluster. +-b, --target-branch Name of the branch the Pull Request will target. PR is not created if the flag is not provided. +-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires the -b flag. +``` + +=== Example + +``` + + ./pipeline_generator.sh -c ./templates/aks/aks-pipeline.cfg -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name devon-hangar --storage-container aks-state --rancher -b develop -w +``` + +NOTE: Rancher is installed on the cluster after provisioning when using the above command. + +=== Appendix: Interacting with the cluster + +NOTE: Make sure you have https://kubernetes.io/docs/tasks/tools/#kubectl[kubectl] installed. + +In order to interact with your cluster you will need to download the artifact `kubeconfig` generated by the cluster provisioning {pipeline_type} to the location where it is expected by default (`~/.kube/config`), or alternatively: + +``` +# via environment variable (you can add this to your profile) +export KUBECONFIG= +kubectl + +# via command-line flag +kubectl --kubeconfig= +``` + +To get the DNS name of the NGINX Ingress controller on the AKS cluster, go into {aks_variables_path}. + +Rancher, if installed, will be available on `https:///dashboard`. You will be asked for an initial password, which can be retrieved with: + +``` +kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}' +``` + +=== Appendix: Destroying the cluster + +To destroy the provisioned resources, set `operation` {pipeline_type} variable value to `destroy` and run the {pipeline_type}. 
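As a quick sanity check after provisioning, you can confirm the Ingress controller received a public address. The `ingress-nginx` namespace and service name below are the official Helm chart defaults; the `install-nginx-ingress.sh` script may use different ones:

```
# Assumes the kubeconfig artifact was downloaded to the current directory
# and the chart-default namespace/service names.
export KUBECONFIG=./kubeconfig
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```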
diff --git a/documentation/src/azure-devops/setup-aks-provisioning-pipeline.asciidoc b/documentation/src/azure-devops/setup-aks-provisioning-pipeline.asciidoc new file mode 100644 index 000000000..5c32723f9 --- /dev/null +++ b/documentation/src/azure-devops/setup-aks-provisioning-pipeline.asciidoc @@ -0,0 +1,7 @@ +:provider: Azure DevOps +:pipeline_type: pipeline +:trigger_sentence_azure: +:pipeline_type2: pipeline +:path_provider: azure-devops +:aks_variables_path: Azure DevOps > Pipelines > Library > 'aks-variables' +include::../common_templates/setup-aks-provisioning-pipeline.asciidoc[] \ No newline at end of file diff --git a/documentation/src/common_templates/setup-aks-provisioning-pipeline.asciidoc b/documentation/src/common_templates/setup-aks-provisioning-pipeline.asciidoc new file mode 100644 index 000000000..5e0b0d86b --- /dev/null +++ b/documentation/src/common_templates/setup-aks-provisioning-pipeline.asciidoc @@ -0,0 +1,116 @@ += Setting up an Azure AKS provisioning {pipeline_type} on {provider} + +In this section we will create a {pipeline_type} which will provision an Azure AKS cluster. This {pipeline_type} will be configured to be manually triggered by the user. As part of AKS cluster provisioning, an NGINX Ingress controller is deployed and a variable group named `aks-variables` is created, which contains, among others, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix for more details. + +The creation of the {pipeline_type} will follow the project workflow, so a new branch named `feature/aks-provisioning` will be created, and the YAML file for the {pipeline_type} and the Terraform files for creating the cluster will be pushed to it. + +Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the `-b` flag). 
The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the `-w` flag. + +The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create this new branch, create the AKS provisioning {pipeline_type} based on the YAML template, create the Pull Request and, if it is possible, merge this new branch into the specified branch. + +== Prerequisites + +ifdef::trigger_sentence_azure[ * Install the https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks[Terraform extension] for Azure DevOps.] +ifdef::trigger_sentence_azure[ * Create a https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#create-a-service-connection[service connection] to Azure Resource Manager and name it `aks-connection`. If you already have a service connection available or you need a specific connection name, please update `aks-pipeline.cfg` accordingly.] + +ifdef::trigger_sentence_gitlab[ * Add Azure credentials as https://docs.gitlab.com/ee/ci/variables/#add-a-cicd-variable-to-a-project[Variables] in your repository and name them `AZURE_USERNAME` and `AZURE_PASSWORD`. If you already have available credentials or you need specific credentials, please update `aks-provisioning.yml` accordingly.] + +ifdef::trigger_sentence_gitlab[* Create a Gitlab https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html#create-a-personal-access-token[personal access token (PAT)] and store it in an environment variable named `GITLAB_PAT`.] + +* An Azure resource group in the desired cluster location (e.g. `westeurope`). You can use an existing one or create a new one with the following command: + +``` +az group create -n -l +``` +ifndef::trigger_sentence_gitlab[] + +* An Azure storage account within the previous resource group. 
You can use an existing one or create a new one with the following command: + +``` +az storage account create -n -g -l +``` + +* An Azure storage container within the previous storage account. You can use an existing one or create a new one with the following command: + +``` +az storage container create -n --account-name +``` +endif::[] + +* This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with `git pull`). + +== Creating the {pipeline_type} using the provided script + +Before executing the script you will need to customize some input variables about the environment. To do so, you can either edit the `terraform.tfvars` file or take advantage of the `set-terraform-variables.sh` script located at `/scripts/environment-provisioning/azure/aks`, which allows you to create or update values for the required variables, passing them as flags. As a full example: + +``` +./set-terraform-variables.sh --location --resource_group_name --instance_type --worker_node_count --dns_prefix +``` + +=== Usage +``` +pipeline_generator.sh \ + -c \ +ifdef::trigger_sentence_azure,trigger_sentence_github[ -n \] + -d \ + --cluster-name \ +ifdef::trigger_sentence_azure,trigger_sentence_github[ --resource-group \] +ifdef::trigger_sentence_azure,trigger_sentence_github[ --storage-account \] + --storage-container \ + [--rancher] \ + [-b ] \ + [-w] +``` + +NOTE: The config file for the AKS provisioning {pipeline_type} is located at `/scripts/pipelines/{path_provider}/templates/aks/aks-pipeline.cfg`. + +=== Flags +``` +-c, --config-file [Required] Configuration file containing pipeline definition. +ifdef::trigger_sentence_azure,trigger_sentence_github[-n, --pipeline-name [Required] Name that will be set to the pipeline.] +-d, --local-directory [Required] Local directory of your project (the path should always be using '/' and not '\'). 
+ --cluster-name [Required] Name for the cluster. +ifdef::trigger_sentence_azure,trigger_sentence_github[ --resource-group [Required] Name of the resource group for the cluster. ] +ifdef::trigger_sentence_azure,trigger_sentence_github[ --storage-account [Required] Name of the storage account for the cluster. ] + --storage-container [Required] Name of the storage container where the Terraform state of the cluster will be stored. + --rancher Install Rancher to manage the cluster. +-b, --target-branch Name of the branch the Pull Request will target. PR is not created if the flag is not provided. +-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires the -b flag. +``` + +=== Example + +``` +ifdef::trigger_sentence_azure,trigger_sentence_github[ ./pipeline_generator.sh -c ./templates/aks/aks-pipeline.cfg -n aks-provisioning -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name devon-hangar --resource-group devonfw --storage-account hangar --storage-container aks-state --rancher -b develop -w ] + +ifdef::trigger_sentence_gitlab[ ./pipeline_generator.sh -c ./templates/aks/aks-pipeline.cfg -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name devon-hangar --storage-container aks-state --rancher -b develop -w ] +``` + +NOTE: Rancher is installed on the cluster after provisioning when using the above command. + +=== Appendix: Interacting with the cluster + +NOTE: Make sure you have https://kubernetes.io/docs/tasks/tools/#kubectl[kubectl] installed. + +In order to interact with your cluster you will need to download the artifact `kubeconfig` generated by the cluster provisioning {pipeline_type} to the location where it is expected by default (`~/.kube/config`), or alternatively: + +``` +# via environment variable (you can add this to your profile) +export KUBECONFIG= +kubectl + +# via command-line flag +kubectl --kubeconfig= +``` + +To get the DNS name of the NGINX Ingress controller on the AKS cluster, go into {aks_variables_path}. 
+ +Rancher, if installed, will be available on `https:///dashboard`. You will be asked for an initial password, which can be retrieved with: + +``` +kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}' +``` + +=== Appendix: Destroying the cluster + +To destroy the provisioned resources, set `operation` {pipeline_type} variable value to `destroy` and run the {pipeline_type}. diff --git a/documentation/src/gitlab/setup-aks-provisioning-pipeline.asciidoc b/documentation/src/gitlab/setup-aks-provisioning-pipeline.asciidoc new file mode 100644 index 000000000..bfa0e86fd --- /dev/null +++ b/documentation/src/gitlab/setup-aks-provisioning-pipeline.asciidoc @@ -0,0 +1,7 @@ +:provider: Gitlab +:pipeline_type: pipeline +:trigger_sentence_gitlab: +:pipeline_type2: Gitlab pipeline +:path_provider: gitlab +:aks_variables_path: Group > settings > ci_cd > Variables +include::../common_templates/setup-aks-provisioning-pipeline.asciidoc[] diff --git a/scripts/environment-provisioning/azure/aks/main.tf b/scripts/environment-provisioning/azure/aks/main.tf index 618c0ea74..e015b1bc9 100644 --- a/scripts/environment-provisioning/azure/aks/main.tf +++ b/scripts/environment-provisioning/azure/aks/main.tf @@ -6,7 +6,7 @@ terraform { } } - backend "azurerm" {} + backend "http" {} } provider "azurerm" { @@ -34,4 +34,4 @@ resource "azurerm_kubernetes_cluster" "cluster" { identity { type = "SystemAssigned" } -} \ No newline at end of file +} diff --git a/scripts/pipelines/azure-devops/pipeline_generator.sh b/scripts/pipelines/azure-devops/pipeline_generator.sh index f5ed33453..41b0cece3 100644 --- a/scripts/pipelines/azure-devops/pipeline_generator.sh +++ b/scripts/pipelines/azure-devops/pipeline_generator.sh @@ -60,6 +60,17 @@ function obtainHangarPath { hangarPath=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && cd ../../.. 
&& pwd ) } +function addAdditionalArtifact { + # Check if an extra artifact to store is supplied. + if test ! -z "$artifactPath" + then + # Add the extra step to the YAML. + cat "${hangarPath}/${commonTemplatesPath}/store-extra-path.yml" >> "${localDirectory}/${pipelinePath}/${yamlFile}" + else + echo "The '-a' flag has not been set, skipping the step to add additional artifact." + fi +} + function createPipeline { echo -e "${green}Generating the pipeline from the YAML template..." echo -ne ${white} @@ -156,6 +167,8 @@ createNewBranch copyYAMLFile +addAdditionalArtifact + copyCommonScript type copyScript &> /dev/null && copyScript diff --git a/scripts/pipelines/github/pipeline_generator.sh b/scripts/pipelines/github/pipeline_generator.sh index 556f6f248..eb1fcb02f 100644 --- a/scripts/pipelines/github/pipeline_generator.sh +++ b/scripts/pipelines/github/pipeline_generator.sh @@ -54,6 +54,19 @@ function obtainHangarPath { hangarPath=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && cd ../../.. && pwd ) } +function addAdditionalArtifact { + # Check if an extra artifact to store is supplied. + if test ! -z "$artifactPath" + then + # Add the extra step to the YAML. + storeExtraPathContent="\n - name: Publish Additional Output Artifact\n uses: actions\/upload-artifact@v3\n with:\n name: additional-pipeline-output\n path: \"\${{ env.artifactPath }}\"" + sed -i "s/# mark to insert step for additonal artifact #/$storeExtraPathContent\n/" "${localDirectory}/${pipelinePath}/${yamlFile}" + else + echo "The '-a' flag has not been set, skipping the step to add additional artifact." + sed -i '/# mark to insert step for additonal artifact #/d' "${localDirectory}/${pipelinePath}/${yamlFile}" + fi +} + # Function that adds the variables to be used in the pipeline. 
function addCommonPipelineVariables { if test -z "${artifactPath}" @@ -141,6 +154,8 @@ type addPipelineVariables &> /dev/null && addPipelineVariables copyYAMLFile +addAdditionalArtifact + copyCommonScript type copyScript &> /dev/null && copyScript @@ -152,6 +167,4 @@ commitCommonFiles type commitFiles &> /dev/null && commitFiles -# createPipeline - createPR diff --git a/scripts/pipelines/gitlab/pipeline_generator.sh b/scripts/pipelines/gitlab/pipeline_generator.sh new file mode 100644 index 000000000..e306f8dd6 --- /dev/null +++ b/scripts/pipelines/gitlab/pipeline_generator.sh @@ -0,0 +1,179 @@ +#!/bin/bash +set -e +FLAGS=$(getopt -a --options c:n:d:a:b:l:i:u:p:hw --long "config-file:,pipeline-name:,local-directory:,artifact-path:,target-branch:,language:,build-pipeline-name:,sonar-url:,sonar-token:,image-name:,registry-user:,registry-password:,storage-container:,cluster-name:,s3-bucket:,s3-key-path:,quality-pipeline-name:,dockerfile:,test-pipeline-name:,aws-access-key:,aws-secret-access-key:,aws-region:,help,rancher" -- "$@") + +eval set -- "$FLAGS" +while true; do + case "$1" in + -c | --config-file) configFile=$2; shift 2;; + -n | --pipeline-name) export pipelineName=$2; shift 2;; + -d | --local-directory) localDirectory=$2; shift 2;; + -a | --artifact-path) artifactPath=$2; shift 2;; + -b | --target-branch) targetBranch=$2; shift 2;; + -l | --language) language=$2; shift 2;; + --build-pipeline-name) export buildPipelineName=$2; shift 2;; + --sonar-url) sonarUrl=$2; shift 2;; + --sonar-token) sonarToken=$2; shift 2;; + -i | --image-name) imageName=$2; shift 2;; + -u | --registry-user) dockerUser=$2; shift 2;; + -p | --registry-password) dockerPassword=$2; shift 2;; + --storage-container) storageContainerName=$2; shift 2;; + --rancher) installRancher="true"; shift 1;; + --cluster-name) clusterName=$2; shift 2;; + --s3-bucket) s3Bucket=$2; shift 2;; + --s3-key-path) s3KeyPath=$2; shift 2;; + --quality-pipeline-name) export qualityPipelineName=$2; shift 2;; + 
--test-pipeline-name) export testPipelineName=$2; shift 2;; + --dockerfile) dockerFile=$2; shift 2;; + --aws-access-key) awsAccessKey="$2"; shift 2;; + --aws-secret-access-key) awsSecretAccessKey="$2"; shift 2;; + --aws-region) awsRegion="$2"; shift 2;; + -h | --help) help="true"; shift 1;; + -w) webBrowser="true"; shift 1;; + --) shift; break;; + esac +done + +# Colours for the messages. +white='\e[1;37m' +green='\e[1;32m' +red='\e[0;31m' + +# Common var +commonTemplatesPath="scripts/pipelines/gitlab/templates/common" # Path for common files of the pipelines +pipelinePath=".pipelines" # Path to the pipelines. +scriptFilePath=".pipelines/scripts" # Path to the scripts. +gitlabCiFile=".gitlab-ci.yml" +export provider="gitlab" + +function obtainHangarPath { + + # This line goes to the script directory independent of wherever the user is and then jumps 3 directories back to get the path + hangarPath=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && cd ../../.. && pwd ) +} + +function addAdditionalArtifact { + # Check if an extra artifact to store is supplied. + if test ! -z "$artifactPath" + then + # Add the extra step to the YAML. + grep " artifacts:" "${localDirectory}/${pipelinePath}/${yamlFile}" > /dev/null && storeExtraPathContent=" - \"$artifactPath\"" + grep " artifacts:" "${localDirectory}/${pipelinePath}/${yamlFile}" > /dev/null || storeExtraPathContent="\n artifacts:\n paths:\n - \"$artifactPath\"" + sed -i "s/# mark to insert step for additonal artifact #/$storeExtraPathContent\n/" "${localDirectory}/${pipelinePath}/${yamlFile}" + else + echo "The '-a' flag has not been set, skipping the step to add additional artifact." + sed -i '/# mark to insert step for additonal artifact #/d' "${localDirectory}/${pipelinePath}/${yamlFile}" + fi +} + +# Function that adds the variables to be used in the pipeline. 
+function addCommonPipelineVariables { + if test -z "${artifactPath}" + then + echo "Skipping creation of the variable artifactPath as the flag has not been used." + # Delete the commentary to set the artifactPath input/var + sed -i '/# mark to insert additional artifact env var #/d' "${localDirectory}/${pipelinePath}/${yamlFile}" + else + # add the input for the additional artifact + grep "variables:" "${localDirectory}/${pipelinePath}/${yamlFile}" > /dev/null && textArtifactPathVar=" artifactPath: ${artifactPath//\//\\/}" + grep "variables:" "${localDirectory}/${pipelinePath}/${yamlFile}" > /dev/null || textArtifactPathVar="variables:\n artifactPath: \"${artifactPath//\//\\/}\"" + sed -i "s/# mark to insert additional artifact env var #/$textArtifactPathVar/" "${localDirectory}/${pipelinePath}/${yamlFile}" + fi +} + +function addCiFile { + echo -e "${green}Copying and committing the gitlab ci file." + echo -ne ${white} + + cp "${hangarPath}/${commonTemplatesPath}/${gitlabCiFile}" "${localDirectory}/${gitlabCiFile}" + testCommit=$(git status) + if echo "$testCommit" | grep "nothing to commit, working tree clean" > /dev/null + then + echo "gitlab-ci file already present with same content, nothing to commit." + else + git add "${gitlabCiFile}" -f + git commit -m "adding gitlab-ci.yml" + git push + fi +} + +function createPR { + # Check if a target branch is supplied. + if test -z "$targetBranch" + then + # No branch specified in the parameters, no Pull Request is created, the code will be stored in the current branch. + echo -e "${green}No branch specified to do the Pull Request, changes left in the ${sourceBranch} branch." + exit + else + echo -e "${green}Creating a Pull Request..." + echo -ne "${white}" + repoURL=$(git config --get remote.origin.url) + repoNameWithGit="${repoURL/https:\/\/gitlab.com\/}" + repoName="${repoNameWithGit/.git}" + # Create the Pull Request to merge into the specified branch. 
+ #debug + echo "glab mr create -b \"$targetBranch\" -d \"merge request $sourceBranch\" -s \"$sourceBranch\" -H \"${repoName}\" -t \"merge $sourceBranch\"" + pr=$(glab mr create -b "$targetBranch" -d "merge request $sourceBranch" -s "$sourceBranch" -H "${repoName}" -t "merge $sourceBranch") + + # trying to merge + if glab mr merge -s $(basename "$pr") -y + then + # Pull Request merged successfully. + echo -e "${green}Pull Request merged into $targetBranch branch successfully." + exit + else + # Check if the -w flag is activated. + if [[ "$webBrowser" == "true" ]] + then + # -w flag is activated and a page with the corresponding Pull Request is opened in the web browser. + echo -e "${green}Pull Request successfully created." + echo -e "${green}Opening the Pull Request on the web browser..." + python -m webbrowser "$pr" + exit + else + # -w flag is not activated and the URL to the Pull Request is shown in the console. + echo -e "${green}Pull Request successfully created." + echo -e "${green}To review the Pull Request and accept it, click on the following link:" + echo "${pr}" + exit + fi + fi + fi +} + + +obtainHangarPath + +# Load common functions +. 
"$hangarPath/scripts/pipelines/common/pipeline_generator.lib" + +if [[ "$help" == "true" ]]; then help; fi + +ensurePathFormat + +importConfigFile + +checkInstallations + +createNewBranch + +type addPipelineVariables &> /dev/null && addPipelineVariables + +copyYAMLFile + +addAdditionalArtifact + +copyCommonScript + +type copyScript &> /dev/null && copyScript + +# This function does not exist for the github pipeline generator at this moment, but the line with 'type' is kept to preserve the same structure as the other pipeline generators. +type addCommonPipelineVariables &> /dev/null && addCommonPipelineVariables + +commitCommonFiles + +type commitFiles &> /dev/null && commitFiles + +addCiFile + +createPR diff --git a/scripts/pipelines/gitlab/templates/aks/aks-pipeline.cfg b/scripts/pipelines/gitlab/templates/aks/aks-pipeline.cfg new file mode 100644 index 000000000..bdc622112 --- /dev/null +++ b/scripts/pipelines/gitlab/templates/aks/aks-pipeline.cfg @@ -0,0 +1,58 @@ +# Mandatory flags. +mandatoryFalgs="$localDirectory,$clusterName,$storageContainerName," +# Path to the templates. +templatesPath="scripts/pipelines/gitlab/templates/aks" +# Path to common Kubernetes templates. +commonKubernetesPath="scripts/pipelines/gitlab/templates/common/kubernetes" +# aks-provision YAML file name. +yamlFile="aks-provisioning.yml" +# Source branch. +sourceBranch="feature/aks-provisioning" +# Path to terraform templates. +terraformTemplatesPath="scripts/environment-provisioning/azure/aks" +# Path to terraform scripts. +terraformPath=".terraform/aks" +# Default cluster operation. +operation="create" +# Install Rancher on AKS cluster. +if test -z "$installRancher" +then + installRancher=false +fi + +# Function that copies the necessary scripts into the directory. +function copyScript { + # Create .terraform/aks folder if it does not exist. + mkdir -p "${localDirectory}/${terraformPath}" + + # Copy the terraform files. 
+ cd "${hangarPath}/${terraformTemplatesPath}" + cp * "${localDirectory}/${terraformPath}" + + # Copy the script for the DNS name into the directory. + cp "${hangarPath}/${templatesPath}/obtain-dns.sh" "${localDirectory}/${scriptFilePath}/obtain-dns.sh" + + # Copy the common files for kubernetes + cp "${hangarPath}/${commonKubernetesPath}"/*.sh "${localDirectory}/${scriptFilePath}" +} + +function commitFiles { + # Add the terraform files. + git add .terraform -f + + # Changing all files to be executable. + find .terraform -type f -name '*.sh' -exec git update-index --chmod=+x {} \; + + # Git commit and push it into the repository. + git commit -m "Adding the terraform files" + git push -u origin ${sourceBranch} +} + +# Function that adds the variables to be used in the pipeline. +function addPipelineVariables { + export clusterName + export storageContainerName + export installRancher + export operation + specificEnvSubstList='${clusterName} ${operation} ${storageContainerName} ${installRancher}' +} diff --git a/scripts/pipelines/gitlab/templates/aks/aks-provisioning.yml.template b/scripts/pipelines/gitlab/templates/aks/aks-provisioning.yml.template new file mode 100644 index 000000000..f2656298e --- /dev/null +++ b/scripts/pipelines/gitlab/templates/aks/aks-provisioning.yml.template @@ -0,0 +1,147 @@ +default: + image: + name: ubuntu:latest + entrypoint: + - /usr/bin/env + - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + +workflow: + rules: + - if: '$CI_PIPELINE_SOURCE == "web"' + when: always + - when: always + +variables: + CLUSTER_NAME: + value: "$clusterName" + description: "Name for the AKS cluster to be created" + OPERATION: + value: "$operation" + description: "Operation to perform on cluster. Create or Destroy." + INSTALL_RANCHER: + value: "$installRancher" + description: "Installs Rancher on AKS when set to true." 
+  organization: "$CI_PROJECT_NAMESPACE"
+  TF_STATE_NAME: "$storageContainerName"
+  TF_CACHE_KEY: default
+  TF_ROOT: "${CI_PROJECT_DIR}/.terraform/aks"
+  TF_USERNAME: ${GITLAB_USER_NAME}
+  TF_PASSWORD: ${GITLAB_PAT}
+  TF_ADDRESS: "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/terraform/state/${TF_STATE_NAME}"
+  TF_HTTP_ADDRESS: ${TF_ADDRESS}
+  TF_HTTP_LOCK_ADDRESS: ${TF_ADDRESS}/lock
+  TF_HTTP_LOCK_METHOD: POST
+  TF_HTTP_UNLOCK_ADDRESS: ${TF_ADDRESS}/lock
+  TF_HTTP_UNLOCK_METHOD: DELETE
+  TF_HTTP_USERNAME: ${TF_USERNAME}
+  TF_HTTP_PASSWORD: ${TF_PASSWORD}
+  TF_HTTP_RETRY_WAIT_MIN: 5
+
+.Prerequisites_install: &Prerequisites_install
+  before_script:
+    - apt-get update
+    - apt-get install sudo -y
+    - apt-get install curl -y
+    - apt-get install zip -y
+    - apt-get install -y wget
+
+.configure_azcli: &login_azcli
+  # INSTALL AZCLI
+  - curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+  - az --version
+  - az login --username "${AZURE_USERNAME}" --password "${AZURE_PASSWORD}"
+
+.packages: &configure_packages
+  # INSTALL KUBECTL
+  - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
+  - chmod +x ./kubectl
+  - mv ./kubectl /usr/local/bin/kubectl
+  # INSTALL HELM
+  - curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
+  - chmod +x get_helm.sh
+  - DESIRED_VERSION=v3.9.0 ./get_helm.sh
+
+.install-terraform: &install-terraform
+  # INSTALL TERRAFORM
+  - wget -nv https://releases.hashicorp.com/terraform/1.2.6/terraform_1.2.6_linux_amd64.zip
+  - unzip -qq terraform_1.2.6_linux_amd64.zip
+  - sudo mv terraform /usr/local/bin
+
+.configure_glab_auth: &glab_auth
+  - curl -s https://raw.githubusercontent.com/profclems/glab/trunk/scripts/install.sh | sudo sh
+  - glab auth login --token $GITLAB_PAT
+
+Provision:
+  <<: *Prerequisites_install
+  script:
+    - *install-terraform
+    - *login_azcli
+    - cd ..
+    - terraform --version
+    - cd ${TF_ROOT}
+    - terraform init
+    - terraform apply -var cluster_name=${CLUSTER_NAME} --auto-approve
+    - mv ${TF_ROOT}/kubeconfig ${CI_PROJECT_DIR}
+    - *glab_auth
+    - cat ${CI_PROJECT_DIR}/kubeconfig | glab variable set -g $organization KUBECONFIG -t "file"
+  artifacts:
+    paths:
+      - "./kubeconfig"
+  rules:
+    - if: '$OPERATION == "create"'
+      when: always
+
+Install_nginx:
+  <<: *Prerequisites_install
+  needs: [Provision]
+  script:
+    - *configure_packages
+    - export KUBECONFIG=${KUBECONFIG}
+    - chmod 755 .pipelines/scripts/install-nginx-ingress.sh
+    - ./.pipelines/scripts/install-nginx-ingress.sh
+  rules:
+    - if: '$OPERATION == "create"'
+      when: always
+
+Obtain_dns:
+  <<: *Prerequisites_install
+  needs: [Install_nginx]
+  script:
+    - *configure_packages
+    - *login_azcli
+    - export KUBECONFIG=${KUBECONFIG}
+    # Obtain the DNS name for the ingress controller.
+    - chmod 755 .pipelines/scripts/obtain-dns.sh
+    - ./.pipelines/scripts/obtain-dns.sh ${CLUSTER_NAME}
+    - dnsname="${CLUSTER_NAME}.westeurope.cloudapp.azure.com"
+    # Create the aks_dns_name group variable.
+    - *glab_auth
+    - glab variable set -g $organization aks_dns_name -v "$dnsname" -t "env_var"
+  rules:
+    - if: '$OPERATION == "create"'
+      when: always
+
+Install-rancher:
+  <<: *Prerequisites_install
+  needs: [Obtain_dns]
+  script:
+    - *configure_packages
+    - export KUBECONFIG=${KUBECONFIG}
+    # INSTALL RANCHER
+    - chmod 755 .pipelines/scripts/install-rancher.sh
+    - ./.pipelines/scripts/install-rancher.sh ${aks_dns_name}
+  rules:
+    - if: '$INSTALL_RANCHER == "true" && $OPERATION == "create"'
+      when: always
+
+Destroy-terraform:
+  <<: *Prerequisites_install
+  script:
+    - *install-terraform
+    - *configure_packages
+    - cd ${TF_ROOT}
+    - terraform init
+    - terraform apply -destroy -var cluster_name=${CLUSTER_NAME} --auto-approve
+  rules:
+    - if: '$OPERATION == "destroy"'
+      when: always
diff --git a/scripts/pipelines/gitlab/templates/aks/obtain-dns.sh b/scripts/pipelines/gitlab/templates/aks/obtain-dns.sh
new file mode 100644
index 000000000..c6f431573
--- /dev/null
+++ b/scripts/pipelines/gitlab/templates/aks/obtain-dns.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+ip="$(kubectl get svc nginx-ingress-nginx-ingress-controller --namespace nginx-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
+
+while test -z "$ip"
+do
+    sleep 5s
+    ip="$(kubectl get svc nginx-ingress-nginx-ingress-controller --namespace nginx-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
+done
+
+# The DNS name label is taken from the first argument (the cluster name).
+dnsname=$1
+
+ipname=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$ip')].[name]" --output tsv)
+
+iprg=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$ip')].[resourceGroup]" --output tsv)
+
+az network public-ip update --resource-group "$iprg" --name "$ipname" --dns-name "$dnsname"
+
+dns="$(az network public-ip show --resource-group "$iprg" --name "$ipname" --query "[dnsSettings.fqdn]" --output tsv)"
+
+echo "$dns"
diff --git a/scripts/pipelines/gitlab/templates/common/.gitlab-ci.yml b/scripts/pipelines/gitlab/templates/common/.gitlab-ci.yml
new file mode 100644
index 000000000..c38d0535e
--- /dev/null
+++ b/scripts/pipelines/gitlab/templates/common/.gitlab-ci.yml
@@ -0,0 +1,18 @@
+include:
+  - '.pipelines/*.yml'
+
+workflow:
+  rules:
+    - if: '$CI_PIPELINE_SOURCE == "web"'
+      when: always
+    - when: never
+
+# stages:
+#   - build
+#   - test
+#   - quality
+#   - package
+
+# default:
+#   image: maven:3-jdk-11
+#   tags: ['docker_ruby']
\ No newline at end of file
diff --git a/scripts/pipelines/gitlab/templates/common/kubernetes/install-nginx-ingress.sh b/scripts/pipelines/gitlab/templates/common/kubernetes/install-nginx-ingress.sh
new file mode 100644
index 000000000..0914cf122
--- /dev/null
+++ b/scripts/pipelines/gitlab/templates/common/kubernetes/install-nginx-ingress.sh
@@ -0,0 +1,4 @@
+#!/bin/bash
+helm repo add bitnami https://charts.bitnami.com/bitnami
+helm repo update
+helm install nginx-ingress bitnami/nginx-ingress-controller --set ingressClassResource.default=true --set containerSecurityContext.allowPrivilegeEscalation=false --namespace nginx-ingress --create-namespace
\ No newline at end of file
diff --git a/scripts/pipelines/gitlab/templates/common/kubernetes/install-rancher.sh b/scripts/pipelines/gitlab/templates/common/kubernetes/install-rancher.sh
new file mode 100644
index 000000000..66c61ac78
--- /dev/null
+++ b/scripts/pipelines/gitlab/templates/common/kubernetes/install-rancher.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
+helm repo add jetstack https://charts.jetstack.io
+kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.1/cert-manager.crds.yaml
+helm install cert-manager "jetstack/cert-manager" --namespace cert-manager --create-namespace --version v1.5.1
+helm install rancher "rancher-latest/rancher" --namespace cattle-system --create-namespace --set hostname="$1"
\ No newline at end of file
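The polling loop in `obtain-dns.sh` above (waiting until the ingress controller's LoadBalancer IP is assigned) can be generalized into a small retry helper. This is only an illustrative sketch: the `wait_for_value` name and its parameters are hypothetical and not part of the repository.

```shell
#!/bin/bash
# wait_for_value: poll a command until it prints a non-empty value.
# Arguments: max_attempts, delay_seconds, then the command to run.
# Prints the value and returns 0 on success; returns 1 after max_attempts.
wait_for_value() {
  local max_attempts="$1" delay="$2"
  shift 2
  local value attempt=0
  while :; do
    value="$("$@")" || true   # tolerate transient command failures
    if [ -n "$value" ]; then
      printf '%s\n' "$value"
      return 0
    fi
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max_attempts" ]; then
      return 1
    fi
    sleep "$delay"
  done
}
```

In `obtain-dns.sh` terms, the equivalent call would be something like `wait_for_value 60 5 kubectl get svc nginx-ingress-nginx-ingress-controller --namespace nginx-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`, with the added benefit of a bounded number of attempts instead of an infinite loop.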