Google Cloud and Gitlab Pipelines for an Ideal CI / CD for your Kubernetes Deployments

In one of my previous blogs, I wrote about how an ideal Continuous Integration / Continuous Deployment Pipeline should look, using technologies like Gradle, Gitversion, Docker, JiB, Helm, Helmfile and ArgoCD. I also claimed that these ideas can be applied to any Pipeline DSL engine. This blog is a continuation of that claim, showing how it can be done with Google Cloud and Gitlab Pipelines (the original blog used the DSL of GitHub Actions).

PS. WordPress is unfortunately reducing the image quality; to see higher resolution images, please click the images and follow the links.

In general, this is how our battle plan will look…

You can read more about the details of the battle plan in the previous blog. I don’t want to repeat myself here; instead we will discuss the Google Cloud and Gitlab specifics of this battle plan.

I implemented the initial demonstration with GitHub Actions because it is completely Open Source and accessible, so you could test it without any strings attached. To test these concepts on Google Cloud and Gitlab, we will need free test accounts for Google Cloud and Gitlab.

The Google Cloud test account is especially nice: you get 3 months of test duration with a credit limit of 400$ (which is generous compared to the 1 month / 200$ limit of MS Azure). Gitlab is a little more limited compared to Google Cloud: you only have 1 month to test, there are some feature limitations (which I will mention further in the blog) and Gitlab Runner time quotas (you only have 400 hours on shared runners; if you want to test further, you have to install your own local runners, which is a little tricky if you are new to Gitlab, but don’t worry, I will explain how to do it in this blog).

Before we proceed further in the blog, I would like to mention something. First, a disclaimer so you can judge my comments below: I have no affiliation with Github or Gitlab. This is my third blog about Pipeline DSLs; the first one was about Github Actions, the second one about MS Azure DevOps Pipelines and now Gitlab Pipelines. Out of these three, I found Gitlab the most difficult one to work with.

I don’t know the exact development timeline of Gitlab, but I have a feeling it is the oldest one. Some of its concepts were probably cutting edge ten years ago, but I think they are a little outdated at this age; and because of the amount of existing code in client pipelines, it is not that easy to change the DSL.

What I mean by this: Gitlab Pipelines have a single entry point, ‘.gitlab-ci.yml‘, in every Gitlab Repository. If I have scenarios where I have to build a continuous deployment pipeline for the development branch, a continuous deployment pipeline for a release branch and many more additional pipelines, I have to place all of those in this one single ‘.gitlab-ci.yml‘ file, compared to the ability to program these use cases in dedicated files / triggers in Github Actions and MS Azure DevOps Pipelines.

In the previous iterations of this blog, you could see that I have ten use cases to accomplish (and in your scenarios you might have even more), and I have to program all ten use cases in one single file, which makes the Gitlab Pipelines extremely complex, so I had to program it like a state machine (even though I used the conditional include mechanism of the Gitlab Pipelines for the purpose of re-use and simplification). In my opinion Gitlab Pipelines are much more complex and difficult to maintain compared to their Github Actions and MS Azure DevOps Pipeline counterparts.

And now for the reason for my disclaimer: if you are starting the development of your pipelines from scratch and you are planning to install Gitlab Ultimate on premise, I sincerely advise you to also evaluate Github Actions by using Github Enterprise Server on premise.

Other than being much easier to operate, Github Actions had a solution in its component library for every problem that I encountered, and many more trigger options, which really helped me automate the Use Cases, compared to some things being a big hassle in Gitlab Pipelines. I will mention these points further in the blog when the relevant topics are reached.

  1. Gitlab Pipelines
    1. Use Case 1: Continuous Integration and Deployment
      1. Service Pipeline – Credit Score
      2. Environment Pipeline – Helm Umbrella Chart
      3. GitOps / ArgoCD
      4. Proof That All Works
        1. Service Repository
        2. Helm Umbrella Chart
          1. Helmfile Template
        3. GitOps
        4. ArgoCD
        5. Google Cloud Kubernetes Engine (GKE)
    2. Use Case 2: Prepare Environment for Merge Request
      1. Service Pipeline
      2. Environment Pipeline
      3. GitOps / ArgoCD
      4. Proof that it works
        1. Service Repository
        2. Umbrella Repository
        3. GitOps
        4. ArgoCD
        5. Pods
    3. Use Case 3: Environment Cleanup after Completed Pull Request [ Merged / Closed]
    4. Use Case 4: Producing Release Candidates for Services
      1. Service Pipeline
      2. Environment Pipeline
      3. GitOps / ArgoCD
      4. Proof that it works
        1. Service Repository
        2. Helm Umbrella Repository
        3. GitOps
        4. ArgoCD
        5. Pods
    5. Use Case 5: Release Environment Cleanup
    6. Use Case 6: Integration Environment for Helm Umbrella Charts / Sanity Checks
      1. Environment Pipeline
      2. Proof that all works
    7. Use Case 7: Integration Environment Cleanup for Helm Umbrella Chart
    8. Use Case 8: Service Release Process
    9. Use Case 9: Helm Umbrella Chart Release Process
    10. Use case 10: Environment for Epic Stories
  2. Preparations:
    1. Google Cloud CLI
    2. Google Cloud Project
    3. Google Cloud Artifact Registry
      1. For Docker Images
      2. For Helm Charts
      3. Service Account for Artifact Registry
    4. Google Cloud Kubernetes Engine
      1. Service Account
      2. Service Account Roles / Workflow Identities
      3. Kubeconfig
    5. Gitversion
    6. ArgoCD
    7. Gitlab
      1. Gitlab Runner
      2. Project Access Token
      3. CI / CD Variables

Gitlab Pipelines

Now it is time to look at how the Gitlab Pipeline DSL differs from the Github Actions DSL, which you could see in this blog, or from the MS Azure DevOps Pipeline DSL, as mentioned in this blog. As those blogs realise exactly the same Use Cases, I think it is a fair comparison to understand what each does better or worse.

Most of the pipelines that we are going to develop will be used from several Service Gitlab Repositories (as you can see in my previous blogs), so re-usability is a big topic, which also makes your job quite easy if you decide to use these Pipelines in your systems: you can adapt them to your purposes with minimal changes. To achieve this, we have to create a central Gitlab Repository for the re-usable Gitlab Pipelines, which will be referenced in the Service Pipelines with the Gitlab include mechanism.

In the previous blogs, we had repositories for the Helm Umbrella Chart for Services and Infrastructure, but those were designed for GitHub Actions or MS Azure; because of the amount of necessary changes, I created the Gitlab Pipeline versions of these repositories, fsm-akka-helm-umbrella-chart-gitlab.

The main reason for this is our intent to use Artifact Registry as a Helm Chart Repository. OCI Helm registries like Google Artifact Registry are unfortunately not fully supported by the Gradle plugin ‘org.unbroken-dome.gradle-plugins.helm:helm-plugin‘ that we use for the Helm processes at the moment, and development of this plugin is somewhat frozen, so I had to extend this Gradle plugin myself.

You can find the plugin with OCI functionality at io.github.mehmetsalgar.gradle-plugins.helm:helm-plugin; if development of the old plugin reactivates, I will transfer the solution there as well.

Below I will demonstrate the Use Cases for the Pipelines with the same concepts from the original blog. To not duplicate the content, I will place the sequence diagrams here, but I will not repeat the descriptions in that much detail; you can check those in the original blog if you want to.

Use Case 1: Continuous Integration and Deployment

Trigger Action: Commit to ‘credit-score‘ repository ‘development‘ branch

Service Pipeline – Credit Score

We will first deal with the most basic scenario for the services: build the Java code, test it, create a Docker Image, upload it to Google Artifact Registry, build the Helm Chart and upload it to Google Artifact Registry.

The pipeline will start with a push to all usual GitFlow branches.

credit-score/.gitlab-ci.yml

The first thing that I have to explain here is the ‘stage‘ concept of Gitlab. In my opinion, it was a very important concept for the first generation of Gitlab Pipelines to be able to order the execution of Gitlab Pipeline Jobs, but with the introduction of the ‘needs‘ keyword into the Pipeline DSL it lost its significance; we can now explicitly declare the execution order of Jobs with the help of ‘needs‘. Personally I still use the Gitlab-defined stages like ‘.pre’, ‘build’, ‘deploy’ to make the visualisation of Pipelines more structured, but you really don’t have to.
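As a minimal sketch (job names are illustrative), two jobs in different stages can still be chained explicitly with ‘needs‘:

stages:
  - build
  - deploy

unit-test:
  stage: build
  script:
    - ./gradlew test

package-helm-chart:
  stage: deploy
  needs:
    - job: unit-test          # starts as soon as unit-test finishes, independent of the stage ordering
  script:
    - ./gradlew helmPackage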

For the Service Repositories, while we will reuse the Gitlab Pipeline code, what we see in the ‘credit-score‘ Gitlab Repository is just an entry point, which defines variables like ‘SOURCE_REPO’, ‘SERVICE_CHART_NAME’ and ‘GRADLE_COMMAND’ (the command to use for Gradle). But you can also see the source of my biggest complaint about Gitlab Pipelines: if you look at my previous blogs, the use cases are realised there in separate pipeline files to fully benefit from ‘Separation of Concerns‘. With the Gitlab Pipeline DSL, I have to accomplish 10 Use Cases in one single file.

To realise this, I have to use ‘USE_CASE_X’ variables to build a primitive state machine that turns parts of the Gitlab pipeline on and off. If you compare this to, for example, Github Actions,

USE_CASE_1 -> credit-score/.github/workflows/continuous-deployment-development-with-resuable.yaml

USE_CASE_2 -> credit-score/.github/workflows/continuous-deployment-pull-request-with-reusable.yaml

USE_CASE_3 -> credit-score/.github/workflows/cleanup-after-pull-request-closing-with-reusable.yaml

USE_CASE_4 -> credit-score/.github/workflows/continuous-deployment-release-with-reuse.yaml

USE_CASE_5 -> credit-score/.github/workflows/cleanup-after-branch-delete.yaml

as you can see it is much easier to organise with the means of ‘Separation of Concerns‘, compared to the constant struggle of turning parts of the Gitlab Pipelines on and off depending on which Use Case you are executing, as you can see below with the include functionality of the Gitlab Pipelines.

The above snippet shows that, depending on which branch the changes are made, we try to identify which Use Case we are going to realise. We also have to use the same mechanism for the Gitlab Pipeline DSL ‘include‘ keyword, to identify the correct Gitlab Pipeline to include, but here you also see another annoying mechanism in Gitlab Pipelines: the ‘workflow -> rules‘ part of the Pipeline identifies the Use Case and sets the ‘USE_CASE_X’ variable value, but the ‘include’ mechanism does not recognise these variables (it only recognises the built-in variables from Gitlab transferred from the environment, so we can’t use the ‘USE_CASE_X’ variables for the include mechanism).
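A condensed sketch of this mechanism (the branch names and file layout follow the repositories mentioned above, but the snippet itself is illustrative): the ‘workflow -> rules‘ section derives the Use Case from predefined variables, while the ‘include‘ rules have to repeat the same conditions because they cannot see ‘USE_CASE_X‘:

workflow:
  rules:
    - if: '$CI_COMMIT_BRANCH == "development"'
      variables:
        USE_CASE_1: "true"
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      variables:
        USE_CASE_2: "true"
    - when: always

include:
  - project: 'org.salgar.fsm.akka/ci-cd-gitlab-pipelines'
    file: 'composite/cd-deployment.yml'
    rules:
      # 'include' rules only see predefined variables, so the branch condition
      # has to be repeated here instead of reusing $USE_CASE_1
      - if: '$CI_COMMIT_BRANCH == "development"'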

Now let’s look at ‘ci-cd-gitlab-pipelines/composite/cd-deployment.yml‘,

This use case deals with deploying every commit to the ‘development’ branch to our Kubernetes environment. It has a standard component, ‘component/build.yml‘, which handles the Gradle Java build, Docker Image creation / upload with Google JiB, Helm Chart packaging / upload and additionally the triggering of the further process in the Helm Umbrella Chart Gitlab Repository (you also see here how the triggering mechanism of Gitlab Pipelines functions; please pay attention to the Pipeline DSL keyword ‘strategy‘, without this option your current pipeline will continue without waiting for the completion of the triggered downstream pipeline).

The ‘SERVICE_CHART_VERSION’ value is acquired from the Gitversion tool; as you can see, our Pipeline Job declares its dependency on the ‘determine-version’ Job and its ‘artifacts‘, which will transfer the value of the variable ‘$GitVersion_SemVer‘.
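In Gitlab terms this dependency typically looks like the following fragment (a sketch; the dotenv report is the usual way Gitversion variables reach downstream jobs):

needs:
  - job: determine-version
    artifacts: true   # pulls in the dotenv report, so $GitVersion_SemVer is available in this job's script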

You can also see the reason for my biggest complaint: turning parts of the pipelines on / off with ‘rules‘ and ‘USE_CASE_X’ variables.

ci-cd-gitlab-pipelines/composite/build.yml

This part of the pipeline coordinates the execution of the Gitversion component and the Gradle builds for Java, Docker and Helm.

ci-cd-gitlab-pipelines/jobs/gradle.yml

This snippet realises the Gradle build. It will be reused in several places, so the Gradle command is parameterised, and it uses the cache feature of the Gitlab Pipelines (which is critical if you want to speed up your pipeline execution; for example, if your next job checks the code coverage, it can just take the Java binaries from the cache instead of rebuilding them) and exposes its ‘artifacts‘ to the following jobs (in this case the packaged Helm Chart).
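A reduced sketch of such a job (the image, the cache key and the way the parameterised Gradle command and the Gitversion version are combined are assumptions here):

build-gradle:
  stage: build
  image: eclipse-temurin:17              # image is an assumption, use the JDK you build with
  variables:
    GRADLE_USER_HOME: "$CI_PROJECT_DIR/.gradle"
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - build/                           # later jobs (e.g. code coverage) can reuse the compiled output
      - .gradle/
  script:
    - ./gradlew $GRADLE_COMMAND$GitVersion_SemVer
  artifacts:
    paths:
      - build/helm/charts/*.tgz          # the packaged Helm Chart, consumed by the helm-push job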

ci-cd-gitlab-pipelines/jobs/helm-push.yml

One of the drawbacks of the Helm plugin for Gradle that I am using is that it can’t deal with OCI registries for uploading Helm Charts, so I have to use the above Pipeline Job to upload them. The interesting part is that this job needs the Helm Chart packaged by the Gradle job as an artifact; to realise this, we have to use a trick in Gitlab and use the following snippet.

needs:
  - pipeline: $PARENT_PIPELINE_ID
    job: build-gradle

This will transfer the packaged Helm Chart to be published to the Artifact Registry.

ci-cd-gitlab-pipelines/jobs/helm-push-service.yml

We have to organise this pipeline this way because we want to re-use it for the Helm Chart upload functionality of the Helm Umbrella Charts, where it will operate with other parameters.

Environment Pipeline – Helm Umbrella Chart

As we have seen in ‘ci-cd-gitlab-pipelines/composite/cd-deployment.yml‘, the Service Pipeline triggers the Environment Pipeline with

trigger:
  project: 'org.salgar.fsm.akka/fsm-akka-helm-umbrella-chart-gitlab'
  branch: 'development'
  strategy: depend

Now let’s look at this Pipeline, where you can further understand my concerns with Gitlab Pipelines. The trigger mechanism in Gitlab, when it triggers a Pipeline from another Gitlab project, can only trigger the main entry point, which is ‘.gitlab-ci.yml’. As we have many scenarios / Use Cases to accomplish for the Environment Pipelines, we have to place all the logic in one single file. As you can see below, I am trying to control which part of the Pipeline will execute with the help of the ‘USE_CASE_X’ variables.

fsm-akka-helm-umbrella-chart-gitlab/.gitlab-ci.yml

In this chapter we are analysing Use Case 1; the part of the Pipeline that we are interested in is the inclusion of ‘fsm-akka-helm-umbrella-chart-gitlab/gitlab/composite/prepare-environment-development.yml‘, and even there we can see that the variables are checked again.

This pipeline determines the version of the Helm Umbrella Chart and realises the Gradle build of the Helm Chart with the command

workflow:
  rules:
    - if: $USE_CASE_1 == "true"
      variables:
        UMBRELLA_GRADLE_COMMAND: "helmPackage -Pversion="

to package those and then push those to Artifact Registry,

variables:
  CHART_NAME: "fsm-akka-helm-umbrella-chart-gitlab"
  CHART_VERSION: $GitVersion_SemVer
script:
  - |
    echo $HELM_PASSWORD | helm registry login $HELM_URL -u $HELM_USER --password-stdin
    echo "helm push /build/helm/charts/$CHART_NAME-$CHART_VERSION.tgz oci://$HELM_URL$HELM_PATH"
    helm push build/helm/charts/$CHART_NAME-$CHART_VERSION.tgz oci://$HELM_URL$HELM_PATH

Now you are probably asking yourself how we identify which Service version to package. Since Use Case 1 deals with the newest development version of the services, we use the following file to tell the Pipelines which version to use.

fsm-akka-helm-umbrella-chart-gitlab/gradle.properties

org.gradle.jvmargs=-Xms2048m
org.gradle.logging.level=INFO
credit-score-version=<=2.6.0-beta
address-check-version=<=2.0.0-beta

In Semantic Versioning the tag ‘-alpha’ represents the SNAPSHOT version of the Maven / Gradle dependency systems; as you can read here, by using the notation ‘<=2.6.0-beta’ we are telling Helm to take the latest 2.6.0-alpha version. When your release train moves forward, you modify this file, for example to ‘<=2.7.3-beta’.
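As a rough illustration of how this semantic-versioning range resolves (the version numbers are made up):

# '<=2.6.0-beta' in terms of semantic-versioning ordering:
#   2.6.0-alpha.135  -> matches ('alpha' pre-releases sort below 'beta')
#   2.6.0-beta       -> matches
#   2.6.0            -> does not match (the GA release sorts above its pre-releases)
#   2.7.0-alpha.3    -> does not match
# The newest matching version published in the registry wins, here the latest 2.6.0-alpha build.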

Now our workflow will delegate the continuation of the pipeline to the ‘fsm-akka-dev-environment‘ Gitlab project, so we can render our Kubernetes manifests and commit them to the ‘fsm-akka-dev-environment‘ Gitlab Repository.

GitOps / ArgoCD

Before we start explaining this chapter, I have to point out something. If you read the previous blogs, we are using the ‘fsm-akka-dev-environment’ Git Repository to realise GitOps by rendering our k8s manifests with the ‘helmfile template’ functionality and committing them to a new branch so ArgoCD can read and deploy them to our k8s cluster.

The problem we are experiencing in this blog is that the Free Trial version of Gitlab does not let us programmatically create branches and commit files to them, because the following feature is turned off.

If this were not a factor, we could use the following Pipeline…

create-branch:
  only:
    variables:
      - $CREATE_BRANCH == "true"
  stage: build
  image: alpine/git
  before_script:
    -  |
        git config --global user.name "${GITLAB_USER_NAME}"
        git config --global user.email "${GITLAB_USER_EMAIL}"
  script:
    - |
      echo "Branch Name $BRANCH_NAME Repo Name $REPO_NAME for Version $VERSION"
      BRANCH_TO_CREATE=${BRANCH_NAME}-${REPO_NAME}
      git remote set-url origin "https://gitlab-ci-token:${PUSH_TOKEN}@gitlab.com/org.salgar.fsm.akka/fsm-akka-dev-environment.git"
      git remote -v
      git fetch --all
      git checkout -b $BRANCH_TO_CREATE
      git push -u origin $BRANCH_TO_CREATE

The critical point is ‘${PUSH_TOKEN}‘; unfortunately we can’t configure this token, so we can’t use the Gitlab Repositories to do GitOps.

The only way to continue with this blog is to use Github for GitOps and create the branch and commit the files there. I know it looks ugly, but it is my only way to show you something running.

fsm-akka-dev-environment/.gitlab-ci.yml

The first thing you have to pay attention to: this pipeline also suffers from having too much functionality in one file, and we have to control via ‘rules‘ elements, with the help of the ‘USE_CASE_X’ variables, whether a part of the pipeline will run or not.

The pipeline uses a really practical image from Gitlab that provides tools for Google Cloud, Kubernetes, Helm and Helmfile.

process-helm-development:
  stage: deploy
  image: "${CI_REGISTRY}/gitlab-com/gl-infra/k8s-workloads/common/k8-helm-ci:${CI_IMAGE_VERSION}"
  before_script:

‘k8-helm-ci’ really provides all the tools, ‘helm’, ‘helmfile’, ‘git’, that you need to work with Kubernetes. Then we will need a Helmfile configuration.

mehmetsalgar/fsm-akka-dev-environment/helmfile.yaml

repositories:
  - name: fsm-akka
    url: {{ .StateValues.url }}
    username: {{ .StateValues.username }}
    password: {{ .StateValues.password }}
    oci: true

environments:
  default:
    values:
      - default.yaml

releases:
  - name: foureyes
    namespace: fsmakka
    chart: oci://{{ .StateValues.url }}/{{ .StateValues.path }}
    version: {{ .StateValues.version }}
    values:
      - values-dev.yaml

As you can see, the Helmfile configuration needs some input parameters, and those are provided by the Gitlab Pipelines.

This pipeline behaves differently when preparing the k8s manifests for the ‘development’ environment and for other branches, because the ‘helmfile’ input parameters are different. You also see the wall of code needed to commit the k8s manifests to Github. The most critical part of this pipeline is ‘helmfile’, so let’s take a closer look at this command.

helmfile template --state-values-set username=$HELM_USER \
                  --state-values-set password=$HELM_PASSWORD \
                  --state-values-set url=$HELM_URL \
                  --state-values-set path=$HELM_PATH \
                  --state-values-set version=$UMBRELLA_CHART_VERSION \
                  --output-dir-template ./gitops/gitlab/fsmakka

The first four parameters are related to the Helm Repository, and the final one, the most important, specifies with which Helm Chart version we will render the k8s manifests. You can see the created manifests in the Github Repository.

fsm-akka-dev-environment/gitops/gitlab/fsmakka/fsm-akka-helm-umbrella-chart-gitlab/charts

After these manifests are created and committed to Github, for the ‘development‘ branch you will have to create the first ArgoCD Application CRD manually (don’t worry, for the other environments like ‘feature/*‘, ‘release/*‘, etc., it will be created automatically).

Following is the Application Custom Resource Definition for ArgoCD.

Above you can see the source of the k8s manifests; unfortunately, as mentioned before, it is not the Gitlab Repository but the Github Repository. Other than that, the relevant configurations are the ‘recurse‘ option, which signals to ArgoCD that it should deploy all k8s manifests under the ‘path‘ parameter, and the automated deployment options.

Manual command that we have to execute is…

> helm upgrade --install  fsm-akka-4eyes . -n fsmakka -f values-gke.yaml

As you can see, there are no special configurations; it will take all its deployment parameters from the default ‘values.yaml‘ and from ‘values-gke.yaml‘, since we are installing to Google Cloud.

targetBranch: development
cluster:
  name: fsmakkaGKE
source:
  path: "gitops/gitlab/fsmakka"

Proof That All Works

Now, Use Case 1 starts with a push to the development branch in ‘address-check’; let’s see what the Pipeline execution looked like.

Service Repository

Determine Version identified the next version of Address Check Service.

Gradle Java code compile, Docker Image creation / upload, Helm Chart packaging.

Helm Chart upload.

Helm Umbrella Chart

The Pipeline will continue its work in the Helm Umbrella Chart Gitlab project.

First we determine the version of the Helm Umbrella Chart.

Gradle command to package Helm chart.

And Gradle packaging the Helm chart.

Helm Chart upload.

Development Environment Repository

Helmfile Template

And we have created the k8s manifests with ‘helmfile template‘ and committed them to Github.

GitOps

fsm-akka-dev-environment/gitops/gitlab/fsmakka/fsm-akka-helm-umbrella-chart-gitlab/charts/address-check-application/templates/deployment.yaml

ArgoCD

And finally we can verify that our commit to the ‘development‘ branch produced Address-Check Service version ‘1.4.0-135-alpha‘ and that this version is deployed to our GKE Cluster :).

Google Cloud Kubernetes Engine (GKE)

Use Case 2: Prepare Environment for Merge Request

Trigger Action: Creation of Merge Request for ‘feature/x‘ branch or Commits to ‘feature/x‘ branch

Our second Use Case is much more complex than the first one. When we develop a Business Use Case in a ‘feature’ branch and the software state reaches a certain maturity, we need an environment in our Google Cloud Kubernetes Engine to demonstrate this state.

This pipeline will trigger on the creation of a Merge Request in Gitlab with target branch ‘development’.

Service Pipeline

ci-cd-gitlab-pipelines/composite/ci.yml

ci-cd-gitlab-pipelines/composite/cd-pull-request.yml

The first part of this pipeline will create a branch in ‘fsm-akka-helm-umbrella-chart-gitlab’ with the branch name from the Service Repository plus the name of the Service Repository, so we can track the changes.

As previously mentioned, in Free Tier Gitlab we can’t create a branch on Gitlab programmatically, so for this Use Case you have to create the branch manually for test purposes. In your own Gitlab installation, you can automate this with the ‘create-branch‘ job already shown in Use Case 1; ‘${PUSH_TOKEN}‘ is an Access Token that you have to create in your Gitlab.


The second part of the Pipeline will trigger the packaging / upload of the Helm Umbrella Chart.

The final part will realise the deployment of the ArgoCD Application Custom Resource Definition.

Environment Pipeline

After the Service Pipeline prepares the Docker Image and packages the Helm Chart for the Service, the Environment Pipeline starts by preparing the Helm Umbrella Chart and the new environment for the Merge Request.

fsm-akka-helm-umbrella-chart-gitlab/.gitlab-ci.yml

And here you see again my main complaint about Gitlab Pipelines: since I have to realise several Use Cases in one single ‘.gitlab-ci.yml’, I have to use a lot of logical structure to enable / disable parts of the Pipeline, and when the number of Use Cases increases, it becomes more and more difficult to maintain the Pipelines.

fsm-akka-helm-umbrella-chart-gitlab/gitlab/composite/prepare-service-for-environment.yml

This pipeline will realise the packaging of the Helm Umbrella Chart and prepare the artifact for the upload to the Artifact Registry.

fsm-akka-helm-umbrella-chart-gitlab/gitlab/jobs/package-helm-chart.yml

fsm-akka-helm-umbrella-chart-gitlab/gitlab/jobs/prepare-helm-umbrella-chart.yml

GitOps / ArgoCD

As I mentioned previously, as we go deeper into the pipeline logic, the number of if-conditions increases exponentially in Gitlab.

fsm-akka-dev-environment/.gitlab-ci.yml

Other than that, the pipeline here also suffers from the Free Tier Gitlab limitation that we can’t create an Access Token to programmatically commit to Gitlab, so this pipeline will also commit to Github to be able to demonstrate the functionality. If this were not a factor, we could again use the ‘create-branch‘ job shown in Use Case 1; ‘${PUSH_TOKEN}‘ is an Access Token that you have to create in your Gitlab.


After the k8s manifests are committed to Github, we can let ‘fsm-akka-4eyes-argocd‘ deploy to the GKE cluster into the new Namespace.

fsm-akka-4eyes-argocd/.gitlab-ci.yml

This pipeline creates the Namespace name from the branch name of the Service Repository. The second part of the Pipeline, because we want to install the ArgoCD Application CRD into the new Namespace, has to authenticate against GKE.

Fortunately the image we use, ‘${CI_REGISTRY}/gitlab-com/gl-infra/k8s-workloads/common/k8-helm-ci:${CI_IMAGE_VERSION}’, has access to the necessary tools. We only have to create a special variable containing the key of the Service Account we created for GKE.
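A rough sketch of that step (the variable name GKE_SERVICE_ACCOUNT_KEY is an assumption, use the CI / CD variable you actually configured): the job first materialises the key as a file in its ‘before_script‘,

before_script:
  - |
    # write the Service Account key stored in the CI / CD variable to a temporary file for gcloud
    echo "$GKE_SERVICE_ACCOUNT_KEY" > /tmp/$CI_PIPELINE_ID.json

and then the commands below perform the actual authentication against the cluster.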

gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
gcloud container clusters get-credentials fsmakka-gke-dev --zone europe-west3-c --project fsmakka

The above snippet will log the Gitlab Pipeline in to GKE for our cluster ‘fsmakka-gke-dev‘ in project ‘fsmakka‘,

and deploy ArgoCD Application CRD.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: foureyes
  namespace: {{ .Release.Namespace }}
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    name: {{ .Values.cluster.name }}
    namespace: {{ .Release.Namespace }}
  project: fsm-akka-4eyes-project
  source:
    repoURL: "https://github.com/mehmetsalgar/fsm-akka-dev-environment.git"
    path: {{ .Values.source.path }}
    directory:
      recurse: true
    targetRevision: {{ .Values.targetBranch }}
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true

helm upgrade --install fsm-akka-4eyes . \
             --create-namespace \
             -n $K8S_NAMESPACE \
             -f values-gke.yaml \
             --set targetBranch=$BRANCH_NAME

Proof that it works

Service Repository

As you can see, Gitversion takes the branch name into account for the version calculation.

Umbrella Repository

Now we will transfer the Helm Umbrella Chart version to the Helm packaging process.

As you can see, the Helm Umbrella Chart with version ‘2.6.0-usecase11-credit-score.1‘ is packaged, with the Gradle command

UMBRELLA_GRADLE_COMMAND: "helmPackage -P$SOURCE_REPO-version=$SERVICE_CHART_VERSION -Pversion="

using the Service version identified by the Service Pipeline.

And we uploaded the Helm Chart ‘fsm-akka-helm-umbrella-chart-gitlab:2.6.0-usecase11-credit-score.1‘.

GitOps

fsm-akka-dev-environment

As you can see, we created the necessary k8s manifests with ‘helmfile template‘ and committed them to Github for ArgoCD to deploy to the Kubernetes cluster.

ArgoCD

fsm-akka-4eyes-argocd

GKE authentication to be able to interact with Helm.

Helm installation for the new environment in Namespace ‘feature-usecase11-credit-score‘ for branch ‘feature/usecase11‘ of the Credit Score Service.

As you can see, exactly the version of the ‘credit-score‘ Service Repository that was transferred with the Gradle command is deployed in GKE.

Pods

If you go from the Google Cloud Portal to Kubernetes Engine, you can also see that the Pods are up and running in GKE.

Use Case 3: Environment Cleanup after Completed Pull Request [ Merged / Closed]

Trigger Action: Merge Request completed or closed

Now that we created a completely new environment to evaluate our Pull Request, when this evaluation is complete (Pull Request merged or closed) we have to clean this environment up. If you read my previous blog, you can see that GitHub Actions has a built-in mechanism to detect the completion of a Pull Request.

credit-score/.github/workflows/cleanup-after-pull-request-closing-with-reusable.yaml

name: Cleanup Caller - Pull Request
run-name: Cleanup Environments after Pull Request for Branch ${{ github.event.pull_request.head.ref }} 
  triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  pull_request:
    types: [closed]
jobs:
  call-cleanup-workflow:

The notation “pull_request -> types -> [closed]” is a really nice way to automate this.

At the time of writing this blog, there is no such out-of-the-box solution from Gitlab; there is a possibility to realise this with the following constellation.

continuous-deployment-pull-request-prepare-environment:
  stage: deploy
  needs:
    - job: continuous-deployment-pull-request
  trigger:
    project: 'org.salgar.fsm.akka/fsm-akka-dev-environment'
    branch: $UMBRELLA_BASE_BRANCH_NAME
    strategy: depend
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: cleanup-pull-request-environment

cleanup-pull-request-environment:
  trigger:
    project: 'org.salgar.fsm.akka/fsm-akka-dev-environment'
    branch: $UMBRELLA_BASE_BRANCH_NAME
    strategy: depend
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual

but unfortunately, at the time of writing this blog, when we use the ‘trigger‘ element in the Gitlab Pipeline DSL, we can’t use the ‘environment‘ element; there is a change request to improve this behaviour, and you can convert to this constellation when the change request is completed.

My solution is to place a manual job in the Gitlab Pipelines, so that when the Merge Request is completed, the ArgoCD Application CRD can be deleted, the Namespace will be removed from the GKE Cluster and the branches will be deleted from ‘fsm-akka-dev-environment‘ and ‘fsm-akka-helm-umbrella-chart-gitlab‘.

Please pay attention: since this part of the Gitlab Job may run days or weeks later to clean up the environment, we have to use the ‘forward‘ functionality of the ‘trigger‘ keyword; without it, this functionality will not work.
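A condensed sketch of such a manual cleanup job (the variable names mirror the ones used earlier; which variables have to be forwarded depends on your setup, here both the YAML and the pipeline variables are passed on):

cleanup-pull-request-environment:
  stage: deploy
  when: manual
  variables:
    USE_CASE_3: "true"
  trigger:
    project: 'org.salgar.fsm.akka/fsm-akka-dev-environment'
    branch: $UMBRELLA_BASE_BRANCH_NAME
    strategy: depend
    forward:
      yaml_variables: true
      pipeline_variables: true   # carries the Use Case variables to the downstream cleanup pipeline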

Also, for the reasons I explained in the chapters on Use Case 1 and Use Case 2, in the Gitlab Free offering I can’t programmatically manipulate the Gitlab Git Repositories because of the missing Access Token functionality, so I didn’t implement the deletion of the ‘fsm-akka-helm-umbrella-chart-gitlab‘ branches; if this functionality is available in your Gitlab installation, this function is trivial.

Use Case 4: Producing Release Candidates for Services

Trigger Action: Creation of ‘release/x.x.x’ branch or Commits to ‘release/x.x.x’ branch

After several features have been developed and merged to the development branch, it is time to make a release of our Service and evaluate it in a new environment. This pipeline is quite similar to the Pull Request pipeline, so it will re-use a lot of the pipeline code.

Service Pipeline

ci-cd-gitlab-pipelines/composite/ci.yml

Environment Pipeline

fsm-akka-helm-umbrella-chart-gitlab/.gitlab-ci.yml

GitOps / ArgoCD

I will not go into too much detail, since we covered most of this already in Use Case 2. The major difference is that this pipeline triggers with a push to the release branch; it will also compile the Java code and create and upload the Docker Image and the Helm Chart for the Service, whereas in Use Case 2 the trigger was the creation of the Merge Request. We are again using the usual Gitlab trick of controlling with variables which part of the Pipeline will run.

Proof that it works

Service Repository

This will transfer the version number to the Gradle build system, and Gradle will produce the Docker Image and the Helm Chart.

Helm Umbrella Repository

We also have to create a version number for our Helm Umbrella Chart,

and transfer that to the Helm Umbrella Chart packaging; Gradle will package the Chart and push it to the Artifact Repository with the following command,

UMBRELLA_GRADLE_COMMAND: "helmPackage -P$SOURCE_REPO-version=$SERVICE_CHART_VERSION -Pversion="

this way we embed the Service Helm version into the packaged Helm Chart.

Helm push to the Artifact Repository.

GitOps

Writing the k8s manifests to the Github Repository for GitOps.

ArgoCD

Installing the ArgoCD Application CRD into the Namespace derived from the branch name of the Service Repository.

As you can see, exactly the version of the ‘credit-score‘ Service Repository that we placed in ‘gradle.properties‘ is deployed in GKE.

Pods

Use Case 5: Release Environment Cleanup

Trigger Action: ‘clean-up.releasenvironment‘ job manually triggered

This pipeline uses the same template as the Clean Pull Request Environment; there is a cleanup Gitlab Job with manual start, and when you no longer need a release branch, you should execute this job manually to remove the environment.

ci-cd-gitlab-pipelines/composite/cd-release.yml

Again, Github Actions does a better job here: there is a dedicated event for branch deletion, ‘on -> delete‘, which allows automation of this Use Case.

credit-score/.github/workflows/cleanup-after-branch-delete.yaml

name: Cleanup after Branch Delete
run-name: Cleanup after Branch Delete triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  delete:
jobs:
  call-cleanup-workflow:
    if: ${{ contains(github.event.ref, 'release/') }}

Use Case 6: Integration Environment for Helm Umbrella Charts / Sanity Checks

Trigger Action: Creation of ‘integration/xxx‘ branch in ‘helm-umbrella-chart‘ with concrete Release Candidate versions of multiple services

address-check-version=1.1.3-rc1
credit-score-version=1.3.1-rc1
fraud-prevention-version=1.2.11-rc2
customer-relationship-adapter-version=1.1.7-rc1
fsm-akka-4eyes-version=1.0.3-rc4

As I explained in the previous blog, one thing that I found problematic in GitFlow is that merging a release branch of the Helm Umbrella Chart to the master branch, with Services having release candidate versions, will not work with GitFlow; but we can’t just merge weeks of development work without a sanity check, so I introduced an integration branch into GitFlow here.

The only thing you have to do is enter the Release Candidate versions of your Services in

fsm-akka-helm-umbrella-chart-gitlab/gradle.properties

credit-score-version=2.5.0-rc.1
address-check-version=1.5.0-rc.1

Environment Pipeline

First we have to determine the version of the Helm Umbrella Chart.

The rest of the Pipeline reuses the components from the other Use Cases, so I will not go into details.

Proof that all works

As you can see, exactly the version of the ‘address-check‘ Service Repository that we placed in ‘gradle.properties‘ is deployed in GKE.

As you can see, exactly the version of the ‘credit-score‘ Service Repository that we placed in ‘gradle.properties‘ is deployed in GKE.

Use Case 7: Integration Environment Cleanup for Helm Umbrella Chart

Trigger Action: Manual triggering of the ‘cleanup-integration-environment‘ job on the ‘integration/x.x.x’ branch after the sanity checks are completed.

Cleanup of the Integration Environment is again a reuse of the Cleanup Environment template, for the ‘integration/x’ branches.

Execution of the cleanup job is again a manual process: when you are done with the Integration Environment and you delete the ‘integration/x‘ branch, you can execute this job to remove the environment.

Use Case 8: Service Release Process

Trigger Action: Manual start of the Pipeline after setting a concrete version with ‘git tag‘, or automated with the merge of the ‘release/x.x.x‘ branch version to the ‘master‘ branch.

As I mentioned in my previous blog, since we are using Gitversion, to give a concrete version to a release we have to either use ‘git tag’ or ‘+semver: major/minor/patch’; for this reason the build cannot be triggered with a push, and the Release GA pipeline must be triggered manually.
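For example (the version number is illustrative), tagging the merge commit on the ‘master‘ branch pins the GA version that Gitversion will report:

> git tag 2.6.0
> git push origin 2.6.0

Alternatively, a commit message containing ‘+semver: minor‘ (or major / patch) bumps the corresponding part of the calculated version.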

ci-cd-gitlab-pipelines/composite/release-ga.yml

Use Case 9: Helm Umbrella Chart Release Process

Trigger Action: Manual triggering after the use of ‘git tag‘ or automated start after merging ‘release/x.x.x‘ branch with concrete Service Versions in ‘helm-umbrella-chart‘ repository to ‘master‘ branch.

address-check-version=1.3.1
credit-score-version=1.2.0
fraud-prevention-version=1.10
customer-relationship-adapter-version=1.5.0
fsm-akka-4eyes-version=1.1.2

And this will allow us to do the Environment Promotion.

We can even implement Blue / Green deployment in Production.

This will create a Helm Umbrella Chart with a concrete version with the help of ‘git tag‘ on the ‘master‘ branch; then we can manually trigger the Gitlab Pipeline Job, and with that we can promote it to the test environment to prepare our application for Production.

fsm-akka-helm-umbrella-chart-gitlab/.gitlab-ci.yml

For promoting the environment, we have to place the concrete version number of the Helm Umbrella Service Chart that we want to use in the ‘default.yaml‘ of ‘fsm-akka-test-environment‘.

Naturally we also have to promote Infrastructure Components compatible with the Services; this pipeline will get the version of the Helm Infrastructure Chart from ‘infrastructure-version.txt‘.

Use case 10: Environment for Epic Stories

The final Use Case is related to Epic Stories that require several Services collaborating on the development of one feature. We will deal with the scenario of having a branch ‘feature/epic-x‘ in ‘fsm-akka-helm-umbrella-chart-gitlab‘ and placing the feature branch versions of the services in ‘gradle.properties‘ so they can collaborate.

credit-score-version=2.6.0-usecase10.1
address-check-version=1.6.0-usecase-1.1

fsm-akka-helm-umbrella-chart-gitlab/.gitlab-ci.yml

This use case re-uses a lot of components from the other use cases; the only different part is the use of ‘gradle.properties‘.

Preparations:

In this blog, we will use Google Cloud to prove our concepts, so we have to prepare some components.

Google Cloud CLI

For a lot of the configuration in Google Cloud, we will need the Google Cloud CLI; you can install it by following the instructions here.

Google Cloud Project

After we get our test account, we have to create a Project that will contain all of our resources (our Artifact Registries, Kubernetes Clusters, etc.).

Google Cloud Artifact Registry

We will need two Artifact Repositories, one for Docker Images and another one for Helm Charts.

For Docker Images

You can create the Docker Registry with the following instructions, but you can also achieve it by following the screenshots below.

As you can see, I already created a Docker Registry ‘fsmakka-ar’.

For Helm Charts

After following above instructions, you can also create a Helm Repository with the name ‘fsm-akka-helm-ar’.

Service Account for Artifact Registry

Now that we created our Artifact Registry, we have to arrange permission mechanism so that Gitlab Pipelines can read and write artifacts to these registries.

Google Cloud has the concept of Service Accounts for controlling permissions / roles.

Now we have to give certain permissions / roles to this Service Account so we can upload our Docker Images / Helm Charts; in this case the role is ‘Artifact Registry Writer’.
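If you prefer the CLI over the portal screenshots, a sketch of the equivalent commands (the Service Account name ‘fsmakka-ar-service-account‘ is an assumption; the role id behind ‘Artifact Registry Writer’ is ‘roles/artifactregistry.writer‘):

> gcloud iam service-accounts create fsmakka-ar-service-account --project fsmakka
> gcloud projects add-iam-policy-binding fsmakka --member "serviceAccount:fsmakka-ar-service-account@fsmakka.iam.gserviceaccount.com" --role "roles/artifactregistry.writer"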

With this setup, your Gitlab pipeline will be able to upload Docker Images and Helm Charts to the Artifact Registries.

Google Cloud Kubernetes Engine

Now we have to configure a Kubernetes Cluster to be able to deploy our Services to Kubernetes.

You can create a Kubernetes Cluster by following these instructions.

You can create your Kubernetes Cluster in Google Cloud portal using menu point ‘Kubernetes Engine -> Clusters’.

and clicking ‘Create’ button,

There are two options to create a Kubernetes Cluster, ‘Autopilot mode’ and ‘Standard mode’; we are interested in ‘Standard mode’. The main difference is that in ‘Autopilot’ GKE takes over a lot of the responsibility of actualising your Kubernetes Cluster, autoscaling it, etc. These are really nice options if you are new to Kubernetes concepts, but I prefer the Standard mode.

Then we have to do basic configuration, like giving a name to our Kubernetes Cluster and a zone to run in (I live in Germany, so I have chosen ‘europe-west-3’, which is Frankfurt); by the way, on the right side you can see the monthly cost estimates of your choices,

The last relevant option is which version of Kubernetes we will use; we can pin our Kubernetes implementation to a specific version or let Google Cloud automatically update to the current stable release version,

The next part of the configuration is about the security of our Kubernetes Cluster. As I mentioned in the previous chapter, Google Cloud has the concept of Service Accounts; here we define which service account we will use for our cluster. If we don’t do anything, GCloud will create a Service Account for us. I will use this option, but you can also create an additional Service Account with the necessary roles / permissions so our Gitlab pipelines can interact with our Kubernetes Cluster.

Here you can see the default account that GCloud created for us and also the Service Account that we will create further on in the blog.

Service Account

Now let’s create and configure the Service Account that will interact with…

We should give the usual information like the Service Account name and id (the id will look like an email address, which we will need in further steps).

Service Account Roles / Workflow Identities

GCloud has the concept of Workflow Identity (Workflow Identity 1, Workflow Identity 2) to manage permissions in Google Kubernetes Engine; we have to use this concept to add roles to our Service Account. You can find a general list of roles here.

The basic steps that we have to execute in your GCloud CLI:

> gcloud services enable container.googleapis.com secretmanager.googleapis.com

and assign specific roles to Service Account.

> gcloud projects add-iam-policy-binding fsmakka --member "serviceAccount:fsmakka-gke-service-account@fsmakka.iam.gserviceaccount.com" --role "roles/composer.worker"

Here you see our GCloud Project name ‘fsmakka’, our service account ‘fsmakka-gke-service-account@fsmakka.iam.gserviceaccount.com‘ and the role ‘roles/composer.worker‘, which contains most of the roles we need to access and configure our GKE Cluster from Gitlab (if a specific role is necessary for your action, the error message explicitly states which permission is missing; you can find it in the role list and add that role to your Service Account).

Kubeconfig

Now that we created our GKE Cluster, let’s get our authentication information for it.

First let’s activate the following component,

> gcloud components install gke-gcloud-auth-plugin

and get the necessary input for ‘.kube/config‘ (of course you should first log in to Google Cloud as described here). The input parameters that we need for this are the name of the GKE Cluster, ‘fsmakka-gke-dev‘, the zone that our cluster runs in, ‘europe-west3-c‘, and the project that our GKE Cluster runs in, ‘fsmakka‘.

> gcloud container clusters get-credentials fsmakka-gke-dev --zone europe-west3-c --project fsmakka

Gitversion

If you have read my previous blogs in this series, you know that I am a big fan of the Gitversion tool for calculating the versions of our software. Github Actions and Azure DevOps have Gitversion as a plugin, but for Gitlab we have to include the Gitversion pipeline code in every pipeline where we want to use it.

You can find setup instructions here, or examples here. I have also placed this pipeline in my central Github Repository for Gitlab Pipelines. You can see the working semantics of Gitversion in my initial blog.
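The heart of it is a ‘determine-version‘ job roughly along these lines (a sketch following the published Gitversion examples for Gitlab; the image tag and the tool path inside the image are assumptions to verify against the version you use):

determine-version:
  stage: .pre
  image:
    name: gittools/gitversion:5.12.0
    entrypoint: ['']
  variables:
    GIT_DEPTH: 0                         # Gitversion needs the full history, not a shallow clone
  script:
    - /tools/dotnet-gitversion /output buildserver
  artifacts:
    reports:
      dotenv: gitversion.properties      # exposes GitVersion_SemVer and friends to jobs that 'need' this one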

ArgoCD

The setup of ArgoCD for Google Cloud is certainly different from the MS Azure one I explained here.

Basically we are using the same configuration of ArgoCD for MS Azure and Google Kubernetes Engine, but of course there are nuanced changes; for this reason the Helm values files are organised once for GKE, ‘values-gke.yaml‘ (and another one exists, ‘values-aks.yaml‘).

If we look at the configuration closely, the first two parameters are very usual, the name of the cluster and the URL of the cluster. The third configuration is where it starts to become interesting for GKE: with these annotations GKE can figure out which Service Account should be used to interact with GKE (if you don’t do this, ArgoCD will get permission errors when it tries to manipulate GKE).

The next config value is the interesting one; since Kubernetes secrets must be base64 encoded it looks cryptic, but the base64-decoded version looks like the following.

As you can see in this link, the configuration of the GKE connection for ArgoCD looks like the following.
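In case the screenshot is hard to read, here is a rough sketch of what that decoded ‘config‘ value can look like, following the ArgoCD documentation for GKE (all concrete values are placeholders):

{
  "execProviderConfig": {
    "command": "argocd-k8s-auth",
    "args": [ "gcp" ],
    "apiVersion": "client.authentication.k8s.io/v1beta1"
  },
  "tlsClientConfig": {
    "insecure": false,
    "caData": "<base64 encoded CA certificate from your .kube/config>"
  }
}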

The annotations you see in ‘values-gke.yaml‘ and this configuration are really critical for the Workflow Identity concept of GKE; the ‘caData‘ value you can take from your ‘.kube/config’ file.

The values for the GKE environment are supplied from ‘values-gke.yaml‘; the interesting parts are the namespaces ArgoCD can deploy to and which Kubernetes object types ArgoCD can manipulate (deployments, namespaces, jobs, secrets, etc.). What you see here is not a production configuration; for production you have to restrict / harden these values.

Gitlab

In this blog we will evaluate Gitlab Pipelines. For this we can either install a Gitlab environment on premise via Helm Chart, or use the free 30-day trial that Gitlab offers, which provides nearly all features (except one or two, which I will mention in the blog).

Gitlab Runner

One of the biggest limitations of the free trial is the limited shared pipeline runner quota: the free trial only provides 400 hours of quota for executing your Gitlab Pipelines, which is too low for a 30-day trial, so you will most probably have to install one or more Gitlab Runners on your local premises by following the instructions in this article.

Basically you should execute the following command; for further details you can look at the above-mentioned article.

> docker run -d --name gitlab-runner-1 -v /tmp/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner:v15.8.2
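After the container is up, the runner still has to be registered against your Gitlab instance; a rough sketch using the same configuration volume (the registration token is taken from your project or group runner settings during the interactive dialog):

> docker run --rm -it -v /tmp/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner:v15.8.2 register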

Project Access Token

If you visited the other blogs in this series, you know that to realise GitOps we have to commit the Kubernetes manifests created via Helmfile to Gitlab Repositories; to do that in Gitlab you have to create a Project Access Token, but unfortunately this feature is turned off in the Free Trial of Gitlab.

For this reason the generated manifests will be placed into a Github Repository, and ArgoCD will access this repository to deploy our Services to GKE instead of the Gitlab repositories.

CI / CD Variables

Before we concentrate on the internal mechanics of the Pipelines, we have to set some Gitlab CI / CD Variables that will be used by our Pipeline Jobs (I set those at Gitlab Group level so they can be accessed in all Gitlab Projects).

The most critical ones are of course the authentication information for the Artifact Registry and the GKE authentication information. As we discussed previously, Google Cloud has the Service Account concept for authorisation, and we had already created a Service Account for access to the Artifact Registry. To use this Service Account for authorisation we have to create a key for it, with the following instructions.

The moment you click ‘Create‘, the created file will be downloaded to your computer. Do not lose this file, there is no way to restore it; if you lose it, you have to create a new one and copy the JSON content of the Service Account key to the ‘ARTIFACTORY_REGISTRY_KEY’ variable again.

There are dedicated values like HELM_PASSWORD because, while we use the Artifact Registry both for Docker Images and Helm Charts, you might have a scenario in which you upload your Docker Images to the Artifact Registry but your Helm Charts to Artifactory; having separate variables means you don’t have to modify the Pipelines that much. Since we are using the Artifact Registry for both, we will simply reference the previous variable.

The next ones are the HELM_URL and HELM_PATH variables, which we can get from the Google Cloud Portal.

The variable HELM_USER is also special: normally you would expect a username here, but the Google Cloud authentication mechanism is really different with Service Accounts; we use the value ‘_json_key’ here, while in a traditional repository like Nexus or Artifactory this would be a username. You also see a Gitlab feature for sensitive variables: if you use the ‘Mask variable’ option, these values become unreadable in the pipeline logs (this mechanism can’t mask complex structures like JSON, which is the reason it is not used for the password values).
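For example, a manual login with the Service Account key as the password would look roughly like this (the registry host depends on the region of your Artifact Registry and the key file name is hypothetical):

> cat fsmakka-service-account-key.json | helm registry login europe-west3-docker.pkg.dev -u _json_key --password-stdin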

Then you will see variables starting with the prefix ‘ORG_GRADLE_PROJECT_’. This is an implicit mechanism of Gradle: all the variables we define are provided as environment variables to the Gitlab Pipelines, and if a variable is defined this way, the Gradle configuration will pick it up automatically for its own configuration (of course we could define those without the prefix and let the Gitlab Pipelines pass them to Gradle explicitly, but that would be unnecessary work).

You can now set the rest of the variables with this information.

ORG_GRADLE_PROJECT_DOCKER_HUB_PASSWORD
ORG_GRADLE_PROJECT_DOCKER_HUB_USER
ORG_GRADLE_PROJECT_DOCKER_UPLOAD_PASSWORD
ORG_GRADLE_PROJECT_DOCKER_UPLOAD_USER
ORG_GRADLE_PROJECT_DOCKER_URL
ORG_GRADLE_PROJECT_HELM_DOWNLOAD_CLIENT (must be set to true)
ORG_GRADLE_PROJECT_HELM_PASSWORD
ORG_GRADLE_PROJECT_HELM_PROCESS (must be set to true)
ORG_GRADLE_PROJECT_HELM_URL ($HELM_URL$HELM_PATH)
ORG_GRADLE_PROJECT_HELM_USER

Ideal CI / CD Pipeline for your Kubernetes Deployments

In this blog I will try to demonstrate how an ideal Continuous Integration / Continuous Deployment with GitOps (https://opengitops.dev/) to Kubernetes should look, using Github Actions, Gradle, Docker, JiB, Helm, Helmfile, Terraform and ArgoCD for Services, applying the principles of the Twelve Factor App. What I demonstrate here is based on the experience I collected in previous Kubernetes projects I was involved in, compiled with the best practices from the extensive documentation that exists on the internet about this subject.

  1. Introduction
    1. Pipelines
    2. Gitversion
    3. Terraform
    4. ArgoCD
  2. The Plan
  3. Service Pipelines:
    1. JiB / Docker
    2. Helm
    3. Continuous Delivery / Continuous Deployment ( ArgoCD )
      1. Helm Umbrella Chart for Services
      2. ArgoCD
    4. GitVersion
  4. Environment Pipelines
    1. Helm Umbrella Chart for Environment
      1. Apache Kafka
      2. Apache Cassandra
      3. Elasticsearch
    2. Deployment of the Infrastructure
  5. Environment Promotion
  6. Github Actions / Workflows
    1. Use Case 1: Continuous Integration and Deployment
    2. Use Case 2: Prepare Environment for Pull Request
    3. Use Case 3: Environment Cleanup after competed Pull Request [ Merged / Closed ]
    4. Use Case 4: Producing Release Candidates for Services
    5. Use Case 5: Release Environment Cleanup
    6. Use Case 6: Integration Environment for Helm Umbrella Charts / Sanity Check
    7. Use Case 7: Integration Environment Cleanup for Helm Umbrella Charts
    8. Use Case 8: Service Release Process
    9. Use Case 9: Helm Umbrella Chart Release Process
    10. Use Case 10: Environment for Epic Stories
  7. Preparations
    1. Google Cloud CLI
    2. Google Cloud Project
    3. Google Cloud Artifact Registry
      1. For Docker Images
      2. Service Account for Artifact Registry
    4. Google Cloud Kubernetes Engine
      1. Service Account
      2. Service Account Roles / Workflow Identities
      3. Kubeconfig
    5. GitVersion
      1. Setup
      2. Configuration
        1. Commit Messages
        2. Branch Configurations
      3. Lifecycle Operations
    6. ArgoCD
  8. Appendix
    1. To Helm Umbrella Chart or Not
    2. Kubernetes Operator Installations
      1. Apache Kafka (Strimzi Operator)
      2. Apache Cassandra (k8ssandra)
      3. Elasticsearch (ECK Operator)
    3. GraalVM Native Image Building
    4. Terraform
      1. GKE Cluster Creation
      2. GKE Cluster Destruction
      3. Configuration

Introduction

During my career, I saw too many projects which didn’t invest enough in their deployment pipelines during the startup phase, with the assumption that they would come back later and fix it, which, as you know, does not happen most of the time; you will always be under pressure to develop more business features and you will not really get a chance to do your housekeeping and pay your technical debts, so it is better to start in the correct way than to compromise.

The solution that I explain here I didn’t invent myself; it was out there as puzzle pieces from different sources, but it was never fully explained / demonstrated as a complete solution, so my purpose here is to give a blueprint that you can adapt to your pipelines / workflows with minimal changes for startup projects, or better, use to fix your existing ones.

You may find my proposal here an overkill for small projects (you can of course streamline the process explained here for your needs) and ignore my warnings, but a word of caution: if you are just starting with one Service (whether it is a startup project or you don’t think you will reach the complexity levels that would require what I explain here), please remember that it will be very costly later on to change your workflows to adopt these ideas. Even if you are starting small, these ideas will be useful to you in the long run, and as you will see in further chapters, what we implement here is reusable in other projects of yours with minimal cost.

Let me give you a short summary of what to expect in this blog.

Pipelines

I will explain why it is a good idea to separate our DevOps pipelines into Service and Environment Pipelines. I will also show you how to build those with the help of GitHub Actions in this blog, and in future ones with MS Azure DevOps Pipelines, Gitlab, AWS Code Pipeline and CI / CD with Google Cloud.

You can see

  • Azure DevOps Pipelines implementation of these concepts in this follow up blog.
  • Gitlab Pipelines in Google Cloud implementation of these concepts in this follow up blog

After implementing the same scenarios with Azure DevOps Pipelines and Gitlab Pipelines, I should say that my favourite is Github Actions; if you can’t use ‘github.com’ because of company policies, I really advise you to check the self-hosted / on-premise options of Github Enterprise Server.

Gitversion

I will also introduce you to a very important tool for your GitFlow and GitHubFlow projects, which will solve all of your versioning problems.

How to install / configure it for several different pipeline environments like Github Actions / MS Azure / Gitlab / etc…

And how to operate Gitversion for day-to-day tasks, like dealing with feature branches, hotfixes, release processes, etc.

Terraform

I have always found it problematic that development, test and staging environments run idle when nobody is using them. Let's say people actively use the staging environment one month every year; you are then practically paying for 11 months unnecessarily, so why not tear the staging environment down when it is not needed and rebuild it in 5 minutes with Terraform when it is needed again. Or take this one step further: your workforce most probably works in the 07:00 – 18:00 time slot, so why not tear down the test environment at 19:00 and recreate it with Terraform and ArgoCD at 06:00 in the morning. These are the points where you achieve real cost savings with Kubernetes at the different Cloud Providers. For a how-to, please look into the Terraform chapter.

ArgoCD

The final piece of the puzzle is ArgoCD, an automation tool to realise true GitOps. I will show you how your Kubernetes Manifests are taken from GiT and automatically deployed to a Kubernetes Cluster. I will also explain its basic concepts and how to configure and operate it.

The Plan

To be able to demonstrate these concepts, we need actual projects in GiT, and for that I will use another Proof of Concept study of mine, which I explained in this blog. The original blog has a really naive Deployment Strategy, while its main focus was to demonstrate the integration of several technologies for an Event Sourcing application with Apache Kafka, Apache Cassandra and Elasticsearch. This blog will show you how to go from that naive approach to a full-fledged solution.

To convince you that these ideas work, we will demonstrate them on a Google Cloud Platform (GCP) Free Test account.

For starters let’s see our battle plan.

PS. WordPress is unfortunately reducing the quality of the images; please click the images to see them in higher resolution.

Now let's look at the diagram above; you should take two quick points from it: we will have two pipelines, a Service Pipeline and an Environment Pipeline. It makes sense to have this distinction, because a Service Pipeline will most probably trigger much more frequently than an Environment Pipeline, as we will have many more software changes than environment changes.

The Service Pipeline is responsible for building the executables and Docker Images and uploading those to the Docker and Helm repositories.

The Environment Pipeline is responsible for maintaining and delivering the Helm Charts for infrastructure (in this case Kafka, Cassandra, Elasticsearch) that are necessary to initialise the Development (Feature / Bugfix / Integration / Release), Test, Staging and Production environments.

You can also see from the above diagram that the Development, Test, Staging and Production environments are served via ArgoCD from separate GiT Repositories; the main reasons for this are the Security and Quality concerns of GitOps, it is better to separate those.

Your software should first reach a maturity level in the Development Environment Git Repository before it is promoted to the Test Environment Git Repository, where far fewer people have the right to change anything. The chances of unwanted things happening are therefore reduced and the Quality increases, because problems that can occur in the Development Environments are prevented from causing any havoc in the Test Environment.

The same idea applies to the Staging Environment Repository, where even fewer people can modify things, and for Production fewer still, in a fully automated GitOps process. These sorts of precautions can save you lots of headaches later on.

At this point, I can see you scratching your head, thinking wouldn't it be better to have a branch per environment; that is actually an Anti-Pattern, as the great Martin Fowler explains. We don't want an environment version of our services to be a different commit, we want our service transferred as the same binary / version between all environments, and that is the reason we will follow a Git Repository per Environment; after all, there is a reason why you don't see an 'environment' branch in GitFlow.

But let me be clear about one thing here: when we are promoting, for example, the Test Environment to the Staging Environment, we only mean transferring the configuration information of our software, not the source code. When your Services are built with automated workflows, their Docker Images and Helm Charts are deployed to the Docker / Helm Repositories. Between Environment Git Repositories we only transfer which version of these Docker Images and Helm Charts should be deployed and their configuration information (like which database they connect to, how many instances should be up and so on). That means promoting the version numbers of Docker Images and Helm Charts, connection strings, etc. and nothing more.

Naturally our story starts from the Services; we will follow the approach of a GiT Repository per Service, and my PoC application will use the following GiT Repositories.

Apart from 'Four Eyes FSM Pekko Application' (which is a full-fledged Event Sourcing application with Akka / Pekko, Kafka, Cassandra, Elasticsearch) and 'Customer Relationship Adapter Service' (which uses the Spring Boot 3.0 'native-image' feature with GraalVM, covered in more detail in the Appendix), the remaining ones are boring Spring Boot Applications (only some primitive REST Service implementations) simulating partner systems for 'Four Eyes FSM Pekko', while our focus lies on the deployment features and not the software technologies.

To represent our whole software System, we will have a Helm Umbrella Chart containing all of our Services (the App of Apps Pattern in some sources); this will be our main deployment unit.

You will also see that we have a clear distinction between Infrastructure Deployments and Service Deployments. It is true that we could place the Infrastructure components like Kafka, Cassandra, Elasticsearch, etc. in the same Helm Umbrella Chart, but considering there will be a lot more changes / commits in GiT for our Services than for our Infrastructure, it does not make much sense to also deploy the infrastructure components for every commit in a Service Repository. We will have a separate Infrastructure Deployment pipeline and a Helm Umbrella Chart for Infrastructure components, which is also really important for initialising new Environments in the k8s Dev Cluster on the fly with our Github Action Workflows for Feature / Bugfix / Integration / Release branches.

A word of caution here: if you are bringing your 20-year-old project to Kubernetes in the hope of cost savings, the bad news is that you probably will not save much money in production, as you will probably need the same amount of CPU / Memory / Storage resources. The real cost saving potential lies in the Development (including Feature / Bugfix / Integration / Release), Test and Staging Environments. With the solutions presented in this blog, you can turn these environments off when you don't need them instead of letting them idle and cost you money for nothing. With the Infrastructure / Service Deployment Pipelines that I will demonstrate, you can start these environments within 5 minutes, instead of having them idle for months (while you are still paying for CPU / Memory / Storage resources). In the Appendix section, you can find a demonstration of how to create a Kubernetes Test Environment only during your office hours, by creating and destroying it with the help of Terraform configurations. After all, if nobody is testing your application between 18:00 and 06:00, why should you pay for the resources.

So now, back to the main topic.

One final point here: once we have our Helm Umbrella Chart, we could actually install our whole system directly from this Chart with Helm commands, but I will follow another pattern. Since we want to do GitOps, the deployment of our software should be auditable / manageable over GiT (we should be able to track who installed what, when and why, so that in the case of an emergency we can roll back those changes). To achieve this auditability / manageability we will use other awesome tools called Helmfile and ArgoCD.

The promise of this blog is that you will be able to deploy your infrastructure

and your services with the help of these pipelines.

Now let's look at our Service and Infrastructure Pipelines.

Service Pipelines:

JiB / Docker

Now, what is really interesting about the Service Projects is their Gradle configuration, the 'build.gradle' file. Normally, people who code pipeline configurations do the Docker Image generation and Helm Chart deployments in these pipelines. I am not a big fan of this, because if a developer wants to test a quick bugfix (a 2-line code change) in their 'minikube' (we can use the same Helm Umbrella Charts for Services and Infrastructure to create local environments), he / she should not have to wait for the whole turn-around of the build pipeline (let's say 10 minutes).

I prefer that the developer is able to create the Docker Image locally; for this Google has an awesome tool called JiB (you can look here for the full configuration options). JiB can create Docker Images without a Docker daemon and is also smart enough to optimise the Docker Image layering, for example placing concrete dependencies (which have a lower chance of changing) in one layer, SNAPSHOT dependencies in another layer and Application Code in yet another layer. So if the concrete dependencies (like your Spring Framework libraries) don't change, that layer is not built over and over and pushed to the Docker registries, which saves quite a lot of time.

You can see this optimisation explained in detail with the Dive tool in my other blog, if you click the link 🙂

The configuration of JiB is quite simple, as you can see from the 'Customer Relationship Adapter' Service's 'build.gradle'.

jib {
	container {
		mainClass = "org.salgar.akka.fsm.cra.CustomerRelationshipAdapterApplication"
	}
	from {
		image = "ubuntu:latest"
		auth {
			username = "${props.DOCKER_HUB_USER}"
			password = "${props.DOCKER_HUB_PASSWORD}"
		}
	}
	to {
		image = "${props.DOCKER_URL}/${project.name}"
		tags = ["${project.version}"]
		auth {
			username = "${props.DOCKER_UPLOAD_USER}"
			password = "${props.DOCKER_UPLOAD_PASSWORD}"
		}
	}
	pluginExtensions {
		pluginExtension {
			implementation = 'com.google.cloud.tools.jib.gradle.extension.nativeimage.JibNativeImageExtension'
			properties = [
					imageName: 'customer-relationship-adapter-application'
			]
		}
	}
	allowInsecureRegistries = true
}
tasks.jib.dependsOn tasks.nativeCompile

As you can see, the configuration of JiB is extremely easy: just define the base image, the repository the image will be uploaded to and the tags, and then you can easily build a Docker image on your workstation without a Docker daemon.
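For a quick local round-trip, a developer can use the standard tasks of the Jib Gradle plugin (shown here as a sketch; the credentials are picked up from the properties discussed above):

> ./gradlew jib              # builds and pushes to the registry configured in 'to.image', no Docker daemon needed
> ./gradlew jibDockerBuild   # builds the image into the local Docker daemon, handy for minikube tests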

Helm

Now the second interesting part, the deployment of the Helm Charts. Personally, I don't like to run Helm commands as shell commands in build pipelines as long as a Gradle Plugin can do it for me.

helm {
	downloadClient {
		enabled = downloadHelmClient()
		version = '3.11.2'
	}
	charts {
		craa {
			chartName = 'customer-relationship-adapter-application'
			chartVersion = "${project.version}"
			sourceDir = file('helm')
			filtering {
				values.put 'imageRepository', jib.to.image
				values.put 'imageTag', jib.to.tags.first()
				values.put 'appVersion', jib.to.tags.first()
			}
		}
	}
}
tasks.helmPackage.dependsOn tasks.jib

As you see, it is really simple to configure which repository the Helm Chart should be uploaded to, and the plugin does really nice things for us, placing the 'image repository', 'image tag' and 'appVersion' into the Helm chart.
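To make the filtering a bit more tangible: the chart sources under the 'helm' directory simply contain '${...}' placeholders that the plugin replaces during packaging. A minimal sketch of how such a 'values.yaml' could look (the exact keys are an assumption; only the placeholder names come from the filtering block above):

image:
  repository: ${imageRepository}
  tag: "${imageTag}"

The chart's 'Chart.yaml' carries appVersion: "${appVersion}" in the same way, exactly as you will see in the Umbrella Chart below.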

So this part of the pipeline is nothing more than calling './gradlew helmPackage', and with that the first part of the CI / CD chain is complete.

Continuous Delivery / Continuous Deployment ( ArgoCD )

Helm Umbrella Chart for Services

Now we have five Services that we have to deploy to our Kubernetes Cluster, so it is most logical to organise the individual Helm Charts of the Services under an Umbrella Helm Chart (following the App of Apps Pattern), which you can see here.

apiVersion: v2
name: fsm-akka-helm-umbrella-chart
description: A Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "${appVersion}"

dependencies:
  - name: address-check-application
    version: "${addressCheckVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: address-check.enabled
  - name: credit-score-application
    version: "${creditScoreVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: credit-score.enabled
  - name: fraud-prevention-application
    version: "${fraudPreventionVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: fraud-prevention.enabled
  - name: customer-relationship-adapter-application
    version: "${customerRelationshipAdapterVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: customer-relation-adapter.enabled
  - name: fsm-akka-4eyes-application
    version: "${fsmAkka4eyesVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: fsm-akka-4eyes-application.enabled

Nothing fancy here, this Umbrella Helm Chart is just glue to collect the other, smaller Service Helm Charts.

We define which concrete version or range will be installed in the 'gradle.properties' file, according to Semantic Versioning principles.

address-check-version=<=1.1.2-beta
credit-score-version=<=1.3.0-beta
fraud-prevention-version=<=1.2.10-beta
customer-relationship-adapter-version=<=1.5.0-beta
fsm-akka-4eyes-version=<=1.0.2-beta

Now we can install this Helm Chart in an Environment with a simple command like the following.

> helm upgrade --install fsm-akka . -n fsmakka --create-namespace

This will of course work; the problem is auditability and traceability. Although it is possible, it is not that easy to figure out what we deployed to our Kubernetes Cluster, which Docker Images we used, which configuration parameters were active or, worst of all, what the delta is from the previous release of our application.

The easiest way to find the answers to these questions is applying the principles of GitOps, in which we can see exactly what we deployed; better still, if we are working with Pull / Merge Requests, we can see what the difference will be from the previous deployment to the next one.

ArgoCD

To reach this goal, we will use an awesome tool called ArgoCD; once we reach this end goal, this is how things will look in our Kubernetes Cluster and in the ArgoCD UI.

Now that we have laid the foundation with ArgoCD, let's continue with our pipelines.

GitVersion

Before I continue with further topics, I'd like to mention another awesome tool, which you can use in your pipelines to derive the Semantic Version of your application from GiT, called GitVersion. This is a topic most development teams ignore; they just use the hash codes created by GiT as version numbers, which are of course unique but not really human readable. Versioning is a really important topic for Continuous Deployment, as it is important to identify which version we are deploying to which environment. This tool can follow the GitFlow or GithubFlow concepts, whichever fits you.

In the preparations chapter I will make a small demonstration to show how to use this tool and integrate it with Github Actions, which can also serve as a demonstration for our Service and Environment Pipelines (I do this with Github Actions because my sample code lies in Github, but it can be adapted with minimal changes to Gitlab / MS Azure / etc. and other Pipeline tools).

Environment Pipelines

At the beginning of the blog, I mentioned that we will have separate Service Pipelines and Environment Pipelines. Until now we examined the Service Pipelines, now let's look at the Environment Pipelines.

The Proof of Concept that we use to demonstrate the story we are explaining here depends on Infrastructure components like Apache Kafka, Apache Cassandra and Elasticsearch. In every environment in which we want to run our Services, these Infrastructure Components must be present.

As you will read further in the blog, we will create isolated environments to test our 'features / epics / integrations / releases', and to ensure that these Environments do not negatively affect each other, they also need their own Infrastructure components (a data state created in Cassandra for 'feature/usecase-1' should not negatively affect development / tests in 'feature/usecase-2'). Before the Kubernetes days, this level of isolation was not feasible; nobody would install a completely new instance of Apache Cassandra on a physical machine just to test a 'feature'. The costs for it could never be justified.

With Kubernetes, creating new instances of Apache Kafka, Apache Cassandra and Elasticsearch is a matter of 5 minutes (and another 5 minutes to tear down the environment when you are done with the development / testing of the feature), realised via robots. It is a powerful feature and should actually be the main motivation for you to switch from your 20-year-old Monolith to Kubernetes Environments; it is the main driving force that will increase your Quality and reduce your Cost.

Nowadays the 'State of the Art' methodology to install this Infrastructure in Kubernetes environments is via Kubernetes Operators; every modern Infrastructure component has an Operator these days, like Strimzi for Apache Kafka, k8ssandra for Apache Cassandra and ECK for Elasticsearch.

I will also follow this path to configure the Infrastructure for this blog.

Helm Umbrella Chart for Environment

As you will see, while the operators have to be installed beforehand in your Kubernetes Cluster (you can find the instructions at the above links), the configuration of our Infrastructure is quite simple.

Let’s start with Apache Kafka.

Apache Kafka

fsm-akka-helm-infrastructure-chart/helm/templates/kafka/kafka.yaml

{{- if .Values.kafka.enabled -}}
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: {{ .Values.kafka.clusterName }}
spec:
  kafka:
    template:
      externalBootstrapService:
        metadata:
          annotations:
            cloud.google.com/load-balancer-type: "External"
      perPodService:
        metadata:
          annotations:
            cloud.google.com/load-balancer-type: "External"
    replicas: {{ .Values.kafka.replicas }}
    version: {{ .Values.kafka.version }}
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: "INFO"
    resources:
      requests:
        memory: {{ .Values.kafka.resources.request.memory }}
        cpu: {{ .Values.kafka.resources.request.cpu }}
      limits:
        memory: {{ .Values.kafka.resources.limit.memory }}
        cpu: {{ .Values.kafka.resources.limit.cpu }}
    readinessProbe:
      initialDelaySeconds: 150
      timeoutSeconds: 10
    livenessProbe:
      initialDelaySeconds: 150
      timeoutSeconds: 10
    jvmOptions:
      -Xms: {{ .Values.kafka.jvmOptions.xms }}
      -Xmx: {{ .Values.kafka.jvmOptions.xmx }}
    #image: quay.io/strimzi/kafka:32.0-kafka-3.3.1-arm64 
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
        configuration:
          useServiceDnsDomain: true
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: loadbalancer
        tls: false

    config:
      auto.create.topics.enable: "false"
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.3"
      ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      ssl.enabled.protocols: "TLSv1.2"
      ssl.protocol: "TLSv1.2"
    storage:
      type: jbod
      volumes:
        - type: persistent-claim
          id: 0
          size: 10Gi
    rack:
      topologyKey: topology.kubernetes.io/zone
  zookeeper:
    replicas: {{ .Values.zookeeper.replicas }}
    logging:
      type: inline
      loggers:
        zookeeper.root.logger: "INFO"
    resources:
      requests:
        memory: {{ .Values.zookeeper.resources.requests.memory }}
        cpu: {{ .Values.zookeeper.resources.requests.cpu }}
      limits:
        memory: {{ .Values.zookeeper.resources.limits.memory }}
        cpu: {{ .Values.zookeeper.resources.limits.cpu }}
    jvmOptions:
      -Xms: {{ .Values.zookeeper.jvmOptions.xms }}
      -Xmx: {{ .Values.zookeeper.jvmOptions.xmx }}
    storage:
      type: persistent-claim
      size: 10Gi
  entityOperator:
    topicOperator: {}
    userOperator: {}
{{- end -}}

Above you see the configuration for the Kafka Strimzi Operator; there is nothing fancy here, we configure the number of Kafka instances and how much Memory and CPU these Kafka instances will get. The only critical point: if you are going to use these Charts as a sample, please change the Kafka image, this one was specifically chosen for my M1 Mac notebook and its performance in your environment will probably not be good.
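For completeness, here is a minimal sketch of the matching entries in a 'values-dev.yaml'; the key structure follows the template above, while the cluster name and the concrete numbers are just illustrative assumptions:

kafka:
  enabled: true
  clusterName: fsm-akka-kafka   # assumption, any name works
  replicas: 1
  version: 3.3.1
  resources:
    request:
      memory: 2Gi
      cpu: "1"
    limit:
      memory: 3Gi
      cpu: "2"
  jvmOptions:
    xms: 1024m
    xmx: 2048m
zookeeper:
  replicas: 1
  resources:
    requests: { memory: 512Mi, cpu: 250m }
    limits: { memory: 1Gi, cpu: 500m }
  jvmOptions:
    xms: 512m
    xmx: 512m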

{{- if .Values.kafka.enabled -}}
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: credit-score-sm
  labels:
    strimzi.io/cluster: {{ .Values.kafka.clusterName }}
spec:
  partitions: {{ .Values.kafka.topics.partitions }}
  replicas: {{ .Values.kafka.topics.replicationFactor }}
  config:
    retention.ms: {{ .Values.kafka.topics.retention.ms }}
    segment.bytes: {{ .Values.kafka.topics.segment.bytes }}
{{- end -}}

The Strimzi Operator's Custom Resource Definitions also give us the possibility of deploying the necessary Kafka Topics, even letting us configure 'replication-factors' and 'partitions' per environment via the 'values-xxx' files of the Helm Charts.
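For example, a 'values-dev.yaml' could keep the topics small while a 'values-prod.yaml' scales them up (a sketch; the numbers are assumptions, only the key names come from the template above):

# values-dev.yaml
kafka:
  topics:
    partitions: 1
    replicationFactor: 1
    retention:
      ms: 86400000        # 1 day
    segment:
      bytes: 104857600    # 100 MB

# values-prod.yaml
kafka:
  topics:
    partitions: 6
    replicationFactor: 3
    retention:
      ms: 604800000       # 7 days
    segment:
      bytes: 1073741824   # 1 GB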

  zookeeper:
    replicas: {{ .Values.zookeeper.replicas }}
    logging:
      type: inline
      loggers:
        zookeeper.root.logger: "INFO"
    resources:
      requests:
        memory: {{ .Values.zookeeper.resources.requests.memory }}
        cpu: {{ .Values.zookeeper.resources.requests.cpu }}
      limits:
        memory: {{ .Values.zookeeper.resources.limits.memory }}
        cpu: {{ .Values.zookeeper.resources.limits.cpu }}
    jvmOptions:
      -Xms: {{ .Values.zookeeper.jvmOptions.xms }}
      -Xmx: {{ .Values.zookeeper.jvmOptions.xmx }}
    storage:
      type: persistent-claim
      size: 10Gi

And above is the Zookeeper configuration, again managed via the Strimzi Operator.

Apache Cassandra

fsm-akka-helm-infrastructure-chart/helm/templates/cassandra/cassandra.yaml

{{- if .Values.cassandra.enabled -}}
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: fsm-akka
spec:
  cassandra:
    serverVersion: "4.0.3"
    datacenters:
      - metadata:
          name: {{ .Values.cassandra.datacenter.name }}
        size: {{ .Values.cassandra.datacenter.size }}
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard-rwo
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
        config:
          jvmOptions:
            heapSize: 512M
            heap_initial_size: 512M
{{- end -}}

It looks simple to start an Apache Cassandra Cluster with a Custom Resource Definition, doesn't it? It really is.

As I mentioned previously, it can be really critical for different development teams to have a certain data state in their Environments; for this Cassandra has a really cool tool called Medusa (see the k8ssandra Medusa Operator, Cassandra Restore 1, Cassandra Restore 2), so you can arrange your workflow so that when you create a pull request, you can actually restore a data state to be able to proceed with your evaluations.

One minor note: because Cassandra will run on GKE, we have to use the special 'storage-class' 'standard-rwo'; other than that, the base configuration of Cassandra is really simple.

Elasticsearch

fsm-akka-helm-infrastructure-chart/helm/templates/elasticsearch/elasticsearch.yaml

{{- if .Values.elasticsearch.enabled -}}
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: {{ .Values.elasticsearch.name }}
spec:
  version: {{ .Values.elasticsearch.version }}
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
    - name: master
      count: {{ .Values.elasticsearch.master.replicaCount }}
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: {{ .Values.elasticsearch.master.storageSize }}
          {{- if .Values.elasticsearch.master.storageClass.enabled }}
          {{- if .Values.elasticsearch.master.storageClass.name }}
            storageClassName: {{ .Values.elasticsearch.master.storageClass.name }}
          {{- end }}
          {{- end }}
      config:
        node.roles: ["master", "data", "ingest"]
      podTemplate:
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
                runAsUser: 0
              command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          containers:
            - name: elasticsearch
              resources:
                {{- toYaml .Values.elasticsearch.master.resources | nindent 16 }}
        {{- if .Values.elasticsearch.master.noAffinity }}
        affinity: {}
        {{- end }}
    - name: ingest-data
      count: {{ .Values.elasticsearch.ingest_data.replicaCount }}
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: {{ .Values.elasticsearch.ingest_data.storageSize }}
          {{- if .Values.elasticsearch.ingest_data.storageClass.enabled }}
          {{- if .Values.elasticsearch.ingest_data.storageClass.name }}
            storageClassName: {{ .Values.elasticsearch.ingest_data.storageClass.name }}
          {{- end }}
          {{- end }}
      config:
        node.roles: ["data", "ingest"]
      podTemplate:
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
                runAsUser: 0
              command: [ 'sh', '-c', 'sysctl -w vm.max_map_count=262144' ]
          containers:
            - name: elasticsearch
              resources:
                {{- toYaml .Values.elasticsearch.ingest_data.resources | nindent 16 }}
        {{- if .Values.elasticsearch.ingest_data.noAffinity }}
        affinity: {}
        {{- end }}
{{- end }}

As you can see, initialising an Elasticsearch Cluster in Kubernetes is also simple with Custom Resource Definitions. The only special thing is that an Elasticsearch Cluster needs Node Sets with the roles 'master', 'ingest' and 'data' (at least one 'master' node; please be aware this is not a production setup).
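To tie this back to the 'values-xxx' files, a development-sized configuration could look roughly like this (a sketch; the name, version and sizes are assumptions, the key structure follows the template above):

elasticsearch:
  enabled: true
  name: fsm-akka-es          # assumption
  version: 8.6.2             # assumption
  master:
    replicaCount: 1
    storageSize: 5Gi
    storageClass:
      enabled: false
    noAffinity: true         # lets all pods land on one node in a small dev cluster
    resources:
      requests: { memory: 1Gi, cpu: 500m }
      limits: { memory: 2Gi, cpu: "1" }
  ingest_data:
    replicaCount: 1
    storageSize: 5Gi
    storageClass:
      enabled: false
    noAffinity: true
    resources:
      requests: { memory: 1Gi, cpu: 500m }
      limits: { memory: 2Gi, cpu: "1" }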

Deployment of the Infrastructure

As you will see below in the Workflows, we deploy the Helm Chart for the Infrastructure with 'helm install', while we install the Services via ArgoCD. The main reason for this is that the development / changes of the Services are under our control and their frequency of change is much higher than that of the Infrastructure (let's say a change / commit to the 'development' branch of a Service every 10 minutes, but a change / commit to the Infrastructure once a month), while the changes originating from the Framework developers are not under our control. Therefore I prefer to install the Infrastructure via Helm, but if you prefer otherwise, you can modify the workflows to use ArgoCD for the deployment of the Infrastructure components as well.

One more point I'd like to make: it is quite popular in today's IT world to buy services from your Cloud Provider, like a PostgreSQL instance from MS Azure, when you don't want to deal with / administrate it yourself, and to use an 'infrastructure as code' tool like Terraform to automate the creation of these Infrastructure components. You can adapt the Workflows that I demonstrate here to execute such Terraform configurations even for our Dev Kubernetes Clusters to create 'feature / integration / release' Environments, but in my opinion that is overkill (if you really want to see how it is done, here is the link to the chapter). I think it is much more logical to use the PostgreSQL Operator (or similar tools for other Infrastructure Components) to instantiate a PostgreSQL instance in your Kubernetes Cluster for these environments, but of course the choice is yours.

Environment Promotion

As you can see from the first diagram of this blog, one of the most important ideas here is Environment Promotion. When your application is deployed to the 'Integration Environment' in the Dev Cluster for sanity checks and to verify that certain quality benchmarks are reached, your decision board will say this software state is mature enough to be promoted to the 'Test Environment', where it will be tested by your test team. If they are satisfied with the results, a decision is taken to promote the 'Test Environment' to the 'Staging Environment' and finally to the 'Production Environment'.

So how do we do that? It is basically setting concrete version numbers for the Helm Umbrella Charts for Services and Infrastructure, once we identify a software state as stable, in the appropriate Environment Git Repositories.

For example, it would look like the following for the Test Environment.

fsm-akka-test-environment/helmfile.yaml

environments:
  default:
    values:
      - environments/default/values.yaml

repositories:
  - name: fsm-akka
    url: {{ .StateValues.url }}
    username: {{ .StateValues.username }}
    password: {{ .StateValues.password }}
    oci: true

releases:
  - name: foureyes
    namespace: fsmakka
    chart: oci://{{ .StateValues.url }}{{ .StateValues.path }}
    version: {{ .StateValues.version }}
    values:
      - values-test.yaml

And ArgoCD will deploy those manifests created to our Test Kubernetes Cluster.

fsm-akka-test-environment/environments/default/values.yaml

username: ''
password: ''
url: 'europe-west3-docker.pkg.dev/'
path: 'fsmakka/fsm-akka-helm-ar/fsm-akka-helm-umbrella-chart'
version: '2.6.0'

When you first saw the diagram at the beginning of the blog, you were probably quite sceptical about transferring information between Environment Git Repositories for Environment Promotion. At this point in the blog, as you can see, it is nothing more than committing the repository version of our Umbrella Charts and placing configuration data specific to the environment.
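In practice such a promotion is usually nothing more than a one-line Pull Request against the Environment Git Repository, for example (the previous version number is only illustrative):

 # fsm-akka-test-environment/environments/default/values.yaml
-version: '2.5.0'
+version: '2.6.0'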

Github Actions / Workflows

As previously mentioned, I will show our Service Pipeline with Github Actions, but you can transfer the basic idea to any tool like Gitlab, MS Azure Pipelines, CI / CD with Google Cloud or AWS Code Pipeline.

You can see Azure DevOps Pipelines implementation of these concepts in this follow up blog.

Before I go into details: as you can see from the start of this blog, there are going to be several Service Git Repositories and they will basically use the same Workflows, so we need a central Git Repository to maintain these. The Service Git Repositories will only have Triggers / Entrypoints to reuse these workflows.

Use Case 1: Continuous Integration and Deployment

Trigger Action: Commit to ‘credit-score‘ repository ‘development‘ branch

Let's start with one of the easiest Workflows, which builds the Java code, executes the tests, builds the Docker Image, uploads it to our Docker Image Repository, makes Helm package the Helm Chart, uploads it to our Helm Repository and finally deploys everything to our Dev Environment under the 'development' namespace.

>>credit-score Service Repository

Now let's look at 'build-with-reusable.yaml', which defines our Pipeline.

First we are calculating our Version.

name: Java / Gradle CI Caller
run-name: Building with Gradle triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  push:
    branches:
      - 'development'
      - 'release/**'
      - 'feature/**'
      - 'hotfix/**'
      - 'pull/**'
      - 'pull-requests/**'
      - 'pr/**'
    paths-ignore:
      - '.github/**'
jobs:
  call-build-workflow:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/build.yaml@master
    with:
      native: true
      chart-name: "customer-relationship-adapter-application"
    secrets: inherit

We define a name for our Workflow and the condition under which this workflow will trigger, in this case a push to the Github Repository. As you can see, we want this Workflow to trigger only for certain branches, for GitFlow all branches except 'master', for reasons we will explain shortly.

This Use Case is standard and will be used by all Service Repositories, so we will place it in 'fsm-akka-github-workflows', where it looks like the following.

The first part of our Workflow, which gets the Version for us.

name: Java / Gradle CI
run-name: Java / Gradle CI triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      native:
        default: false
        required: false
        type: boolean
      chart-name:
        required: false
        type: string
jobs:
  calculate-version:
    runs-on: ubuntu-latest
    outputs:
      semVer: ${{ steps.gitversion.outputs.semVer }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Install GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Determine Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
        with:
          useConfigFile: true
          configFilePath: GitVersion.yml
      - name: Display GitVersion ouput
        run: |
          echo "SemVer: $GITVERSION_SEMVER"
  build:
    if: inputs.native == false
    runs-on: ubuntu-latest

Here we use the provided Github Action 'gitversion', which has two phases, first 'setup' and then 'execute', and which identifies the version for our build with the GitVersion tool that I introduced before.

Now we need this version number for our Gradle build, so we need it as an output value; for this we defined '${{ steps.gitversion.outputs.semVer }}' to be passed to the next job.

  build:
    if: inputs.native == false
    runs-on: ubuntu-latest
    needs: calculate-version
    env:
      SEMVER: ${{ needs.calculate-version.outputs.semVer }}
      DOCKER_HUB_USER: ${{ secrets.DOCKER_HUB_USER }}
      DOCKER_HUB_PASSWORD: ${{ secrets.DOCKER_HUB_PASSWORD }}
      DOCKER_URL: ${{ secrets.DOCKER_URL }}
      DOCKER_UPLOAD_USER: ${{ secrets.DOCKER_UPLOAD_USER }}
      DOCKER_UPLOAD_PASSWORD: ${{ secrets.DOCKER_UPLOAD_PASSWORD }}
      HELM_URL: ${{ secrets.HELM_URL }}
      HELM_PATH: ${{ secrets.HELM_PATH }}
      HELM_USER: ${{ secrets.HELM_USER }}
      HELM_PASSWORD: ${{ secrets.HELM_PASSWORD }}
      HELM_DOWNLOAD_CLIENT: ${{ secrets.HELM_DOWNLOAD_CLIENT }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Display GitVersion output
        run: |
          echo "SemVer: $SEMVER"
      - name: Set up JDK17
        uses: actions/setup-java@v3
        with:
          distribution: 'zulu'
          java-version: '17'
          cache: gradle
      - id: installHelm
        uses: azure/setup-helm@v3
        with:
          version: '3.11.2'
      - name: Validate Gradle Wrapper
        uses: gradle/wrapper-validation-action@v1
      - name: Build with Gradle
        uses: gradle/gradle-build-action@v2
        env:
          ORG_GRADLE_PROJECT_version: ${{ env.SEMVER }}
          ORG_GRADLE_PROJECT_DOCKER_HUB_USER: ${{ env.DOCKER_HUB_USER }}
          ORG_GRADLE_PROJECT_DOCKER_HUB_PASSWORD: ${{ env.DOCKER_HUB_PASSWORD }}
          ORG_GRADLE_PROJECT_DOCKER_UPLOAD_USER: ${{ env.DOCKER_UPLOAD_USER }}
          ORG_GRADLE_PROJECT_DOCKER_URL: ${{ env.DOCKER_URL }}
          ORG_GRADLE_PROJECT_DOCKER_UPLOAD_PASSWORD: ${{ env.DOCKER_UPLOAD_PASSWORD }}
          ORG_GRADLE_PROJECT_HELM_URL: ${{ env.HELM_URL }}
          ORG_GRADLE_PROJECT_HELM_PATH: ${{ env.HELM_PATH }}
          ORG_GRADLE_PROJECT_HELM_USER: ${{ env.HELM_USER }}
          ORG_GRADLE_PROJECT_HELM_PASSWORD: ${{ env.HELM_PASSWORD }}
          ORG_GRADLE_PROJECT_HELM_DOWNLOAD_CLIENT: ${{ env.HELM_DOWNLOAD_CLIENT }}
        with:
          arguments: |
            build
            --no-daemon
      - name: Run Helm Command
        id: helmLoginAndPush
        shell: bash
        run: |
          echo "$HELM_PASSWORD" | helm registry login $HELM_URL -u $HELM_USER --password-stdin
          helm push build/helm/charts/${{ inputs.chart-name}}-$SEMVER.tgz oci://$HELM_URL$HELM_PATH
      - name: Check Failure
        if: steps.helmLoginAndPush.outcome != 'success'
        run: exit 1

The first thing the second part of the Job does is declare that it depends on 'calculate-version' with the 'needs' keyword, which is important because this way we are able to access the output variable from the previous job. This part continues by preparing the Java environment (which Java version, which distribution to use, etc.) to be able to run the Gradle build.

Finally we pass the version to the Gradle build via 'ORG_GRADLE_PROJECT_version: ${{ env.SEMVER }}', which is the implicit way to pass parameters to Gradle over environment variables; the other values come from the Github Repository Secrets (which you have to configure for every Service Repository).
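Gradle maps every ORG_GRADLE_PROJECT_<name> environment variable to a project property <name>, so inside 'build.gradle' these values arrive as ordinary properties. A minimal sketch of how the 'props' map used by the JiB configuration above could be built (the property names come from the workflow above, the empty fallbacks are an assumption):

// Gradle exposes ORG_GRADLE_PROJECT_* environment variables as project properties
ext.props = [
        DOCKER_HUB_USER: findProperty('DOCKER_HUB_USER') ?: '',
        DOCKER_URL     : findProperty('DOCKER_URL') ?: ''
]
// ORG_GRADLE_PROJECT_version ends up as project.version, which is why the
// chart version and image tags above simply read "${project.version}".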

The final instruction of the workflow starts the Gradle build with the arguments 'build --no-daemon'.

Everything you have observed until now is the setup for a usual Docker Registry / Helm Repository like Nexus or Artifactory, but lately Cloud registries like MS Azure's and Google Cloud's have started using the OCI protocol for Helm Repositories as well. Unfortunately, the Gradle plugin that we are using for the Helm functionality is not able to use OCI registries; since I will push these Helm Charts to the Google Cloud Artifact Registry, I have to handle the Helm push in the pipeline as well.

After this part of the workflow is complete, a second part triggers that continuously deploys our development state to our 'Dev Kubernetes Cluster', which is triggered via 'continuous-deployment-development-with-reusable.yaml'.

name: Continuous Deployment Caller - Development
run-name: Continuous Deployment for Development triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_run:
    workflows: [Java / Gradle CI Caller]
    branches: [development]
    types: [completed]

jobs:
  call-continuous-deployment-workflow:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/continuous-deployment-development.yaml@master
    with:
      repo-name: "customer-relationship-adapter"
      branch-name: "${{ github.event.workflow_run.head_branch }}"
    secrets: inherit

This workflow triggers upon the completion of the 'Java / Gradle CI Caller' workflow on the 'development' branch with the '[completed]' condition; then the reusable workflow 'continuous-deployment-development.yaml' takes over to complete the rest.

The first thing you have to know is that a reusable workflow needs an 'on' -> 'workflow_call' trigger; this workflow also needs the originating Git Repository name as an input parameter as well as the branch name that we want it to run for (please remember that this workflow is triggered with the 'on' -> 'workflow_run' trigger from the Service Repository, and this trigger type does not inherently pass the 'branch-name' to reusable workflows, so we have to pass it explicitly).

This functionality will be re-used by every Service Pipeline that we have, so it is placed in the central repository.

fsm-akka-github-workflows/.github/workflows/continuous-deployment-development.yaml

name: Continuous Deployment - Development
run-name: Continuous Deployment for Development Branch triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      repo-name:
        required: true
        type: string
      branch-name:
        required: true
        type: string
jobs:
  calculate-version:
    runs-on: ubuntu-latest
    outputs:
      semVer: ${{ steps.gitversion.outputs.semVer }}
    steps:
      - name: Display Branch
        run: |
          echo "Branch: ${{ inputs.branch-name }}"
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.branch-name }}
          fetch-depth: 0
      - name: Install GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Determine Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
        with:
          useConfigFile: true
          configFilePath: GitVersion.yml
          additionalArguments: '"/b" "${{ inputs.branch-name }}"'
      - name: Display GitVersion output
        run: |
          echo "SemVer: $GITVERSION_SEMVER"
  deploy:
    runs-on: ubuntu-latest
    needs: calculate-version

We need to do this to be able to tell the Helm Umbrella Chart which development version of the Service it should deploy to the 'Dev Environment'; after we have identified it, we pass this information further along the workflow.

  deploy:
    runs-on: ubuntu-latest
    needs: calculate-version
    env:
      SEMVER: ${{ needs.calculate-version.outputs.semVer }}
    steps:
      - name: Deploy Helm Umbrella Chart
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'helm-publish-with-reuse.yaml'
          repo: 'mehmetsalgar/fsm-akka-helm-umbrella-chart'
          ref: 'development'
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
      - name: Prepare Environment
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'prepare-environment.yaml'
          repo: 'mehmetsalgar/fsm-akka-dev-environment'
          ref: 'development'
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
          inputs: '{"source-repo": "${{ inputs.repo-name }}", "version": "${{ env.SEMVER }}"}'

The next part of the workflow concentrates on the deployment aspect for the Dev Environment, and first delegates the continuation of the workflow to the Helm Chart Repository with the 'dispatch' functionality of GitHub Actions, which we see here for the first time.

To be able to realise that, we first have to define that we want to use the 'aurelien-baudet/workflow-dispatch@v2' action, which needs to know to which repository we are dispatching, in this case 'mehmetsalgar/fsm-akka-helm-umbrella-chart', which workflow there should be triggered, which is 'helm-publish-with-reuse.yaml', and on which branch in the target repository this workflow should be triggered, which is 'development'.

Of course, triggering a workflow in another repository is a security relevant operation, and for this reason we have to pass our GitHub Token to this action, which we already placed as a GitHub Repository Secret.

This 'dispatch' action has some other nice features, like waiting for the completion of the triggered workflow. Normally a dispatched workflow has a 'fire & forget' nature, so the next step in the workflow would not wait for the termination of the dispatched workflow but start directly; we don't want that, so we set the 'wait-for-completion' parameter to true. Further parameters control how long this workflow waits for the 'successful' completion of the dispatched workflow before marking it as a failure, and how often this status check should occur (a word of caution here: GitHub rate limits these dispatch status calls, so if you set this parameter to 1s and you have too many Service Repositories, you can lock yourself out of the GitHub API).

Now, the first step of the Deploy Job delegates to 'fsm-akka-helm-umbrella-chart' and the second one to 'fsm-akka-dev-environment'; let's check both of those.

>>fsm-akka-helm-umbrella-chart Repository

'helm-publish-with-reuse.yaml' looks like the following.

name: Helm Publish with Gradle reuse
run-name: Helm Publish with Gradle triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
jobs:
  call-helm-publish:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/helm-publish.yaml@master
    secrets: inherit

Dispatched workflows must have an 'on' -> 'workflow_dispatch' trigger, and since we need the 'helm publish' scenario for lots of use cases, I have placed it in 'fsm-akka-github-workflows'. A final interesting point: because this Workflow will be used from several Use Cases / Repositories, it should receive the Repository Secrets from the originating Workflow, and for this we are using the parameter 'secrets: inherit'.

Let's look at how the 'helm-publish.yaml' workflow looks.

name: Helm Publish with Gradle
run-name: Publishing to Helm triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
jobs:
  calculate-version:
    runs-on: ubuntu-latest
    outputs:
      semVer: ${{ steps.gitversion.outputs.semVer }}
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.ref }}
          fetch-depth: 0
      - name: Install GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Determine Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
        with:
          useConfigFile: true
          configFilePath: GitVersion.yml
          additionalArguments: '"/b" "${{ github.ref }}"'
  helm-publish:
    runs-on: ubuntu-latest
    needs: calculate-version

In its first part the workflow tries to identify the version of the 'development' branch (the workflow-dispatch action defines this branch) in the 'fsm-akka-umbrella-chart' repository, so it can use a specific version when publishing the Helm Umbrella Chart.

At this point we have to look at the 'Chart.yaml' of the Helm Umbrella Chart in the 'development' branch.

apiVersion: v2
name: fsm-akka-helm-umbrella-chart
description: A Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "${appVersion}"

dependencies:
  - name: address-check-application
    version: "${addressCheckVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: address-check.enabled
  - name: credit-score-application
    version: "${creditScoreVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: credit-score.enabled
  - name: fraud-prevention-application
    version: "${fraudPreventionVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: fraud-prevention.enabled
  - name: customer-relationship-adapter-application
    version: "${customerRelationshipAdapterVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: customer-relation-adapter.enabled
  - name: fsm-akka-4eyes-application
    version: "${fsmAkka4eyesVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: fsm-akka-4eyes-application.enabled

Every one of our Services is represented as a dependency in our Umbrella Helm Chart; which version of each Service will be included in the Helm Chart is configured in 'gradle.properties'.

address-check-version=<=1.1.2-beta
credit-score-version=<=1.3.0-beta
fraud-prevention-version=<=1.2.10-beta
customer-relationship-adapter-version=<=1.5.0-beta
fsm-akka-4eyes-version=<=1.0.2-beta

For the development branch, we want Helm to deploy a range of active development Services, as defined by Semantic Versioning concepts, hence the notation '<=1.3.0-beta': as long as major/minor/patch does not change, our workflow will deploy the '-alpha.x' versions of 1.3.0 for the 'credit-score' service. As your development train moves on, you have to bump these ranges accordingly (for example to '<=2.1.0-beta').
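To make the range notation a little more concrete, this is how a few candidate versions behave against the constraint '<=1.3.0-beta' (the candidate numbers are made up for illustration):

1.3.0-alpha.3   matches ('alpha' sorts before 'beta')
1.3.0-beta      matches (the upper bound itself)
1.3.0           does not match (a final release sorts after its own pre-releases)
1.4.0-alpha.1   does not match (higher minor version)

Helm then resolves the dependency to the highest available version that satisfies the constraint; that is also why the constraints above carry a pre-release suffix, as Helm's version matching only considers pre-release versions when the constraint itself contains one.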

The 'master' branch, in contrast, will have concrete versions of our Services; that is one of the reasons why the 'master' branch is treated differently in the workflows.

The last part shows you the final piece of the puzzle, how we use the Gradle Helm Plugin's filtering capability in 'build.gradle' to place the actual version values provided by 'gradle.properties' into the Helm Chart.

if(ext.flag) {
    ext.props = [
            HELM_URL: property('HELM_URL'),
            HELM_PATH: property('HELM_PATH'),
            HELM_USER: property('HELM_USER'),
            HELM_PASSWORD: property('HELM_PASSWORD'),
            ADDRESS_CHECK_VERSION: property('address-check-version'),
            CREDIT_SCORE_VERSION: property('credit-score-version'),
            FRAUD_PREVENTION_VERSION: property('fraud-prevention-version'),
            CUSTOMER_RELATIONSHIP_ADAPTER_VERSION: property('customer-relationship-adapter-version'),
            FSM_AKKA_4EYES_VERSION: property('fsm-akka-4eyes-version')
    ]
}

helm {
    charts {
        uc {
            //publish = true
            chartName = 'fsm-akka-helm-umbrella-chart'
            chartVersion = "${project.version}"
            sourceDir = file('helm')
            filtering {
                values.put 'appVersion', "${project.version}-${gitBranch()}"
                values.put 'addressCheckVersion', "${props.ADDRESS_CHECK_VERSION}"
                values.put 'creditScoreVersion', "${props.CREDIT_SCORE_VERSION}"
                values.put 'fraudPreventionVersion', "${props.FRAUD_PREVENTION_VERSION}"
                values.put 'customerRelationshipAdapterVersion', "${props.CUSTOMER_RELATIONSHIP_ADAPTER_VERSION}"
                values.put 'fsmAkka4eyesVersion', "${props.FSM_AKKA_4EYES_VERSION}"
            }
        }
    }
}

Now that we have identified the Version on ‘development’ branch, we can continue with the publishing of the Helm Umbrella Chart.

  helm-publish:
    runs-on: ubuntu-latest
    needs: calculate-version
    env:
      SEMVER: ${{ needs.calculate-version.outputs.semVer }}
      HELM_URL: ${{ secrets.HELM_URL }}
      HELM_PATH:  ${{ secrets.HELM_PATH }}
      HELM_USER: ${{ secrets.HELM_USER }}
      HELM_PASSWORD: ${{ secrets.HELM_PASSWORD }}
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.ref }}
      - name: Display GitVersion output
        run: |
          echo "SemVer: $SEMVER"
      - name: Set up JDK17
        uses: actions/setup-java@v3
        with:
          distribution: 'zulu'
          java-version: '17'
          cache: gradle
      - id: install
        uses: azure/setup-helm@v3
        with:
          version: '3.11.2'
      - name: Validate Gradle Wrapper
        uses: gradle/wrapper-validation-action@v1
      - name: Build with Gradle
        uses: gradle/gradle-build-action@v2
        env:
          ORG_GRADLE_PROJECT_version: ${{ env.SEMVER }}
          ORG_GRADLE_PROJECT_HELM_URL: ${{ env.HELM_URL }}
          ORG_GRADLE_PROJECT_HELM_PATH: ${{ env.HELM_PATH }}
          ORG_GRADLE_PROJECT_HELM_USER: ${{ env.HELM_USER }}
          ORG_GRADLE_PROJECT_HELM_PASSWORD: ${{ env.HELM_PASSWORD }}
        with:
          arguments: |
            helmPackage
            --no-daemon
            -Pversion=${{ env.SEMVER }}
      - name: Helm Publish
        id: helmPublish
        shell: bash
        run: |
          echo "$HELM_PASSWORD" | helm registry login $HELM_URL -u $HELM_USER --password-stdin
          helm push build/helm/charts/fsm-akka-helm-umbrella-chart-$SEMVER.tgz oci://$HELM_URL$HELM_PATH
      - name: Check Failure
        if: steps.helmPublish.outcome != 'success'
        run: exit 1

First I'd like to draw your attention to the 'needs' keyword in the workflow; this is the mechanism in Github Actions that determines in which order the Jobs run. Since we need the calculated version of the Helm Chart, that part of the Workflow has to run first.

Later on, in the 'helm-publish' job, we access this version number via the '${{ needs.calculate-version.outputs.semVer }}' notation, then we execute the Helm Gradle plugin's 'helmPackage' task with the version number and the Git Repository Secrets and push the packaged chart to the registry with the Helm CLI.

Now our Helm Umbrella Chart is uploaded with this concrete version number to the Helm Repository, so we can use it in 'fsm-akka-dev-environment' to release a new version of our 'credit-score' Service via ArgoCD.

>>fsm-akka-dev-environment Repository

Before we start looking at the workflows in this Repository in detail, let me first say a few words about its reason for existence. With this repository we synchronise the state we have in Git with our Kubernetes Cluster via ArgoCD.

Now, I mentioned before that we could actually deploy our application easily with the following command.

> helm install fsm-akka fsm-akka/fsm-akka --version=1.2.0.alpha.1 -n development

The problem with it is auditability…

  • you saw that we are using 'alpha' versions of our services for the development environment, so what exactly did we deploy?
  • what is the delta between the current and the last deployment of our System?
  • who made the changes?
  • what caused the changes (requirement, feature, etc.)?

Even with 'helm install' it is possible to find the answers to these questions, but it is quite hard compared to just making a 'git diff' between a pull request and the current state, looking at the commit history, etc… or, in the worst case, rolling back the changes.

So how can we do GitOps with Helm? At this point an awesome tool called Helmfile comes to the rescue.

To be able to use Helmfile we need a 'helmfile.yaml' configuration file.

github.com/mehmetsalgar/fsm-akka-dev-environment/helmfile.yaml

environments:
  default:
    values:
      - environments/default/values.yaml

---

repositories:
  - name: fsm-akka
    url: {{ .StateValues.url }}
    username: {{ .StateValues.username }}
    password: {{ .StateValues.pswd }}
    oci: true

releases:
  - name: foureyes
    namespace: fsmakka
    chart: oci://{{ .StateValues.url }}{{ .StateValues.path }}/fsm-akka-helm-umbrella-chart
    version: {{ .StateValues.version }}
    values:
      - values-dev.yaml

fsm-akka-dev-environment/environments/default/values.yaml

username: ''
pswd: ''
url: 'fsmakka.azurecr.io'
path: 'helm/fsm-akka-helm-umbrella-chart-az'
version: '<=2.6.0-beta'

As you can see, it is super simple: we first tell Helmfile from which Helm Repository our Helm Umbrella Chart should be deployed. In the 'releases' part of the configuration we specify the name of the Helm Chart and, most importantly, which version we would like to deploy; since we are on the 'development' branch, we again work with Helm/Helmfile's version range concept. This configuration deploys the highest '-alpha.x' version that satisfies the configured range. If you need to change the major/minor/patch version, you have to change this file and commit it to the 'development' branch.

Let’s see how ‘prepare-environment.yaml‘ Workflow uses this.

name: Prepare Environment
run-name: Preparing Environment for Development for Repo ${{ inputs.source-repo }} - Version ${{ inputs.version }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      source-repo:
        required: true
        type: string
      version:
        required: true
        type: string
jobs:
  prepare:
    runs-on: ubuntu-latest
    env:
      HELM_URL: ${{ secrets.HELM_URL }}
      HELM_PATH: ${{ secrets.HELM_PATH }}
      HELM_USER: ${{ secrets.HELM_USER }}
      HELM_PASSWORD: ${{ secrets.HELM_PASSWORD }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Remove File
        uses: JesseTG/rm@v1.0.3
        with:
          path: ./gitops/github/fsmakka
      - name: Setup helmfile
        uses: mamezou-tech/setup-helmfile@v1.2.0
        with:
          helmfile-version: "v0.153.1"
          helm-version: "v3.12.0"
      - name: DoTemplate
        shell: bash
        run: |
          echo 'Deploy Umbrella Chart for Service Pipeline'
          helmfile template \
                 -e default \
                 --state-values-set username=$HELM_USER \
                 --state-values-set pswd=$HELM_PASSWORD \
                 --state-values-set url=$HELM_URL \
                 --state-values-set path=$HELM_PATH \
                 --output-dir-template ./gitops/github/fsmakka
          echo $(date +%F_%T) > ./gitops/github/fsmakka/create_helm_templates.txt
      - uses: EndBug/add-and-commit@v9
        with:
          add: './gitops/github/fsmakka'
          message: 'Created Manifests for ${{ inputs.source-repo }}-${{ inputs.version }}'
          committer_name: GitHub Actions
          committer_email: actions@github.com
          tag: '${{ inputs.source-repo }}-${{ inputs.version }} --force'
          tag_push: '--force'

This workflow receives the input parameters ‘source-repo‘, to know from which origin repository this workflow was triggered, and ‘version’, the version number of the Helm Umbrella Chart that we want to deploy.

The workflow will first delete the directory ‘./gitops/github/fsmakka’ (the directory into which Helmfile writes its output).

This workflow deals with the deployment of the ‘development’ branch, so it checks out this branch, uses the ‘mamezou-tech/setup-helmfile@v1.2.0‘ GitHub Action to prepare the Helm environment and executes the command ‘helmfile template‘, which will create the following directory structure under ‘./gitops/github/fsmakka’.

With these files, we know exactly what we are trying to deliver to our Kubernetes Clusters.

The last step is to commit these files to the Git Repository and tag the commit with the pattern ‘${{ inputs.source-repo }}-${{ inputs.version }}’ (force-pushed, so a re-run overwrites the tag), so we know exactly from which Service Repository these changes are coming and which version of our Service caused them.

>> fsm-akka-4eyes-argocd

Now we need an ArgoCD Application Custom Resource Definition that will continuously deploy development versions (-alpha versions); since this process runs continuously, we only have to install it once.

> helm upgrade --install  fsm-akka-4eyes . -n fsmakka -f values-gke-github.yaml

As you can see ArgoCD deployed the ‘development‘ version of the application.

Use Case 2: Prepare Environment for Pull Request

Trigger Action: Creation of Pull Request for ‘feature/x‘ branch or Commits to ‘feature/x‘ branch

The second workflow is much more challenging than the first one. It assumes that you are working on a ‘feature’ branch and are ready to create a pull request to merge it into the ‘development’ branch. To be able to do that, somebody has to assess the quality of the software in the ‘feature‘ branch. To make this possible, we will create a Namespace based on the name of the ‘feature‘ branch in our ‘Dev Kubernetes Cluster’, so we can test the feature.

To be able to work in isolation (so our feature branch does not disturb other development efforts), we will create a new branch in the Helm Umbrella Chart Git Repository, place the version of our service in this branch and then publish the Helm Umbrella Chart with the version calculated from that branch of the Umbrella Chart repository.

As the next step, we will need a branch in the Dev Environment Repository to generate our Kubernetes manifests with the help of Helmfile, so ArgoCD can read those and create a new Environment for us in the Dev Kubernetes Cluster.

The final step is to configure ArgoCD to pick up this new branch from the Dev Environment repository and deploy it to our Kubernetes Cluster, using the Feature Branch name as the Namespace.

Since this workflow is going to be re-used from all Service Repositories, we will place it in our central workflow repository.

The trigger of the workflow will be the creation of a Pull Request in GitHub.

>> credit-score Repository

name: Continuous Deployment Caller - Pull Request
run-name: Continuous Deployment for Pull Request triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  pull_request:
    types: [opened]
    branches:
      - 'development'
jobs:
  call-continuous-deployment-workflow-for-pull-request:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/continuous-deployment-with-environment.yaml@master
    with:
      repo-name: "customer-relationship-adapter"
      branch-name: "${{ github.event.pull_request.head.ref }}"
      umbrella-chart-base-branch-name: "development"
      infrastructure-base-branch-name: "master"
      value-file: "value-dev"
    secrets: inherit

This workflow will only trigger if a pull request is opened against the ‘development’ branch, which in my opinion signifies that you have reached a certain level of maturity with your Feature Branch and want to show / test your progress in the feature development. (Of course this is my interpretation of the development process; with minimal changes you can make this workflow trigger the moment you create the Feature Branch, but in my opinion, at the start of the development of a feature there would not be too many things to test / look for. A sketch of such an alternative trigger follows below.)
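
As a minimal sketch, such an alternative trigger could simply fire on every push to a feature branch instead of on the opening of the pull request (the rest of the caller stays the same as above; ‘github.ref_name‘ carries the branch name for push events):

name: Continuous Deployment Caller - Feature Branch
run-name: Continuous Deployment for Feature Branch triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  push:
    branches:
      - 'feature/**'
jobs:
  call-continuous-deployment-workflow-for-feature:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/continuous-deployment-with-environment.yaml@master
    with:
      repo-name: "customer-relationship-adapter"
      branch-name: "${{ github.ref_name }}"
      umbrella-chart-base-branch-name: "development"
      infrastructure-base-branch-name: "master"
      value-file: "value-dev"
    secrets: inherit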

This trigger will exist in several other Service Repositories, so it delegates directly to a reusable workflow in ‘fsm-akka-github-workflows‘ with input parameters such as…

  • for which repository this workflow was triggered
  • the name of the feature branch
  • which base branch we should take for our Helm Umbrella Chart (maybe you have an Epic Story and want to include several Services in your feature environment, so you would take another branch than ‘development‘ as the base for the Helm Umbrella Chart)
  • and finally with which Helm Values these Helm Charts should be deployed (let’s say you want fewer or more instances of Kafka in the feature environment)

Now let’s look at the workflow ‘continuous-deployment-with-environment.yaml‘…

name: Continuous Deployment - Pull Request
run-name: Continuous Deployment with Environment Creation triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      repo-name:
        required: true
        type: string
      branch-name:
        required: true
        type: string
      umbrella-chart-base-branch-name:
        required: true
        type: string
      infrastructure-base-branch-name:
        required: true
        type: string
      value-file:
        required: true
        type: string
jobs:
  calculate-version:
    runs-on: ubuntu-latest
    outputs:
      semVer: ${{ steps.gitversion.outputs.semVer }}
    steps:
      - name: Display Branch
        run: |
          echo "Branch: ${{ inputs.branch-name }}"
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.branch-name }}
          fetch-depth: 0
      - name: Install GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Determine Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
        with:
          useConfigFile: true
          configFilePath: GitVersion.yml
          additionalArguments: '"/b" "${{ inputs.branch-name }}"'
      - name: Display GitVersion output
        run: |
          echo "SemVer: $GITVERSION_SEMVER"
  create-branch-helm-umbrella:
    runs-on: ubuntu-latest
 

The first part of the workflow should be familiar to you by now; we explained the function of ‘calculate-version‘ in the previous Use Case: with the input parameter ‘branch-name‘ it identifies the version of the branch in the Service Repository and passes this value further down the workflow. To be able to do the same calculation for the Umbrella Chart, we have to create a branch in ‘fsm-akka-helm-umbrella-chart’ by combining the name of the Service Feature Branch and the name of the Service Repository (to provide uniqueness; as a hypothetical example, a ‘feature/risk-check‘ branch in the ‘credit-score‘ repository would become ‘feature/risk-check-credit-score‘), so GitVersion can calculate the version for the Helm Umbrella Chart.

  create-branch-helm-umbrella:
    runs-on: ubuntu-latest
    needs: calculate-version
    env:
      SEMVER: ${{ needs.calculate-version.outputs.semVer }}
    steps:
      - name: Create Pull Request Branch Helm Umbrella Chart
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'create-branch-with-reuse.yaml'
          repo: 'mehmetsalgar/fsm-akka-helm-umbrella-chart'
          ref: "${{ inputs.umbrella-chart-base-branch-name }}"
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
          inputs: '{"branch-name": "${{ inputs.branch-name }}-${{ inputs.repo-name }}", "base-branch-name": "${{ inputs.umbrella-chart-base-branch-name }}"}'
      - name: Deploy Helm Umbrella Chart
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'publish-and-prepare-environment.yaml'
          repo: 'mehmetsalgar/fsm-akka-helm-umbrella-chart'
          ref: '${{ inputs.branch-name }}-${{ inputs.repo-name }}'
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 10m
          wait-for-completion-interval: 10s
          inputs: '{"branch-name": 
                      "${{ inputs.branch-name }}-${{ inputs.repo-name }}", 
                    "umbrella-chart-base-branch-name": 
                       "${{ inputs.umbrella-chart-base-branch-name }}", 
                    "source-repo": 
                       "${{ inputs.repo-name }}", "version-number": "${{ env.SEMVER }}"}'
  create-infrastructure-in-k8s:

This job will first dispatch a workflow to ‘fsm-akka-helm-umbrella-chart‘ to run on the branch ‘${{ inputs.umbrella-chart-base-branch-name }}‘ given as an input parameter (for this use case it is ‘development’, which will be critical for the development of Epic Stories) and create a new branch named after the Service Repository feature branch, suffixed with the repository name.

Then it will dispatch the workflow ‘publish-and-prepare-environment.yaml‘, which will create the Infrastructure and Service Environments. Now let’s look at these workflows more closely.

>> fsm-akka-helm-umbrella-chart Repository

name: Create Branch with reuse
run-name: Creating Branch ${{ inputs.branch-name }} - Base Branch Name ${{ inputs.base-branch-name }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      branch-name:
        required: true
        type: string
      base-branch-name:
        required: true
        type: string
jobs:
  create:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/create-branch.yaml@master
    with:
      branch-name: ${{ inputs.branch-name }}
      base-branch-name: ${{ inputs.base-branch-name }}
    secrets: inherit

There is nothing special about ‘create-branch-with-reuse.yaml‘; since we will create branches from several other workflows, it delegates to a centralised workflow that can create a branch with the given branch name and base branch name. For this use case the base is ‘development’, but for an epic branch it can be ‘epic/xxxx‘, etc.

name: Create Branch
run-name: Creating Branch ${{ inputs.branch-name }} - Base Branch Name ${{ inputs.base-branch-name }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      branch-name:
        required: true
        type: string
      base-branch-name:
        required: true
        type: string
jobs:
  create:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: "${{ inputs.base-branch-name }}"
          fetch-depth: 0
      - uses: peterjgrainger/action-create-branch@v2.2.0
        env:
          GITHUB_TOKEN: ${{ secrets.PERSONAL_TOKEN }}
        with:
          branch: 'refs/heads/${{ inputs.branch-name }}'

The workflow will continue with the ‘publish-and-prepare-environment.yaml‘ workflow file.

name: Publish Helm Umbrella Chart with Gradle and prepare for Environment Deployment
run-name: Integrate Service ${{ inputs.source-repo }} Helm Chart to Umbrella Helm Chart / Publish / Prepare Environment for Branch ${{ inputs.branch-name }} Version ${{ inputs.version-number }} with Chart Base Branch ${{ inputs.umbrella-chart-base-branch-name }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      branch-name:
        required: true
        type: string
      umbrella-chart-base-branch-name:
        required: true
        type: string
      source-repo:
        required: true
        type: string
      version-number:
        required: true
        type: string
jobs:
  calculate-version:
    runs-on: ubuntu-latest
    outputs:
      semVer: ${{ steps.gitversion.outputs.semVer }}
    steps:
      - name: Display Branch
        run: |
          echo "Branch: ${{ github.ref }}"
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.branch-name }}
          fetch-depth: 0
      - name: Install GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Determine Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
        with:
          useConfigFile: true
          configFilePath: GitVersion.yml
          additionalArguments: '"/b" "${{ inputs.branch-name }}"'
      - name: Display GitVersion output
        run: |
          echo "SemVer: $GITVERSION_SEMVER"
  helm-publish-umbrella-chart:

This workflow accepts the following input parameters…

  • ‘branch-name’, the feature branch name
  • ‘umbrella-chart-base-branch-name’, the base branch name for the Git checkout / branch creation
  • ‘source-repo‘, to identify from which repository this workflow was triggered
  • ‘version-number‘, the version of the Service Repository. With this information, the version for the Umbrella Chart is calculated.

  helm-publish-umbrella-chart:
    needs: calculate-version
    uses: ./.github/workflows/helm-publish-for-service.yaml
    with:
      source-repo: ${{ inputs.source-repo }}
      umbrella-chart-version: ${{ needs.calculate-version.outputs.semVer }}
      service-version-number: ${{ inputs.version-number }}
    secrets: inherit
  prepare-dev-environment:

helm-publish-for-service.yaml

name: Helm Publish for Services with Gradle CI
run-name: Helm Publish for Services ${{ inputs.source-repo }} - Version ${{ inputs.service-version-number }} for Chart Version ${{ inputs.umbrella-chart-version }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      source-repo:
        required: true
        type: string
      umbrella-chart-version:
        required: true
        type: string
      service-version-number:
        required: true
        type: string
jobs:
  helmPackage:
    name: Building Helm Umbrella Chart for Service
    runs-on: ubuntu-latest
    env:
      HELM_URL: ${{ secrets.HELM_URL }}
      HELM_PATH:  ${{ secrets.HELM_PATH }}
      HELM_USER: ${{ secrets.HELM_USER }}
      HELM_PASSWORD: ${{ secrets.HELM_PASSWORD }}
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.branch-name }}
          fetch-depth: 0
      - name: Display GitVersion output
        run: |
          echo "SemVer: $SEMVER"
      - name: Set up JDK17
        uses: actions/setup-java@v3
        with:
          distribution: 'zulu'
          java-version: '17'
          cache: gradle
      - name: Validate Gradle Wrapper
        uses: gradle/wrapper-validation-action@v1
      - name: Build with Gradle
        uses: gradle/gradle-build-action@v2
        env:
          ORG_GRADLE_PROJECT_version: ${{ env.SEMVER }}
          ORG_GRADLE_PROJECT_HELM_URL: ${{ env.HELM_URL }}
          ORG_GRADLE_PROJECT_HELM_PATH: ${{ env.HELM_PATH }}
          ORG_GRADLE_PROJECT_HELM_USER: ${{ env.HELM_USER }}
          ORG_GRADLE_PROJECT_HELM_PASSWORD: ${{ env.HELM_PASSWORD }}
        with:
          arguments: |
            helmPackage
            --no-daemon
            -Pversion=${{ inputs.umbrella-chart-version }}
            -P${{ inputs.source-repo }}-version=${{ inputs.service-version-number }}
      - name: Helm Publish
        id: helmPublish
        shell: bash
        env:
          UC_VERSION: ${{ inputs.umbrella-chart-version }}
        run: |
          echo "$HELM_PASSWORD" | helm registry login $HELM_URL -u $HELM_USER --password-stdin
          helm push build/helm/charts/fsm-akka-helm-umbrella-chart-$UC_VERSION.tgz oci://$HELM_URL$HELM_PATH
      - name: Check Failure
        if: steps.helmPublish.outcome != 'success'
        run: exit 1

The next part of the workflow will publish the Helm Umbrella Chart to the Helm Repository with the version number that we identified on this feature branch. We will need the same Helm packaging functionality in other workflows (publishing to Google Cloud Platform is unfortunately a little bit complicated with OCI Helm Repositories; the Gradle plugin that I use does not support it, so we have to program the Helm push ourselves), so it is also implemented as a reusable workflow (and it is the first workflow that is reused from the same repository, please pay attention to the notation). We set the Service version with ‘-P${{ inputs.source-repo }}-version=${{ inputs.service-version-number }}‘, then set the Umbrella Chart version with ‘-Pversion=${{ inputs.umbrella-chart-version }}‘ and push.

The Helm push part is a little bit complicated because we have to use the key of the Service Account that we created in Google Cloud Platform to interact with our Google Kubernetes Engine; we have to take the key from the GitHub Secret that we created and pass it to the Google Cloud authentication mechanism.

echo "$HELM_PASSWORD" | helm registry login $HELM_URL -u $HELM_USER --password-stdin
          helm push build/helm/charts/fsm-akka-helm-umbrella-chart-$UC_VERSION.tgz oci://$HELM_URL$HELM_PATH

and push the Helm Package.
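
As a sketch of what these credentials could look like when the OCI repository lives in Google Artifact Registry (the ‘europe-west3-docker.pkg.dev‘ registry we will see later in the Chart.yaml): Artifact Registry accepts basic authentication with the literal user name ‘_json_key‘ and the Service Account key JSON as the password, so the login step could also read the key from a dedicated secret. The secret name ‘GCP_SA_KEY‘ below is my own assumption, not something defined in these repositories.

      - name: Helm Registry Login to Artifact Registry
        shell: bash
        env:
          # hypothetical secret holding the Service Account key JSON
          GCP_SA_KEY: ${{ secrets.GCP_SA_KEY }}
        run: |
          # '_json_key' plus the raw key file content is the basic-auth form Artifact Registry accepts
          echo "$GCP_SA_KEY" | helm registry login europe-west3-docker.pkg.dev -u _json_key --password-stdin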

Then we need to create the new environment in the Dev Kubernetes Cluster; for that we need a new branch in ‘fsm-akka-dev-environment‘ so ArgoCD can pick up this branch and deploy our services to the Kubernetes Cluster.

The use case will continue with the ‘prepare-dev-environment.yaml‘ workflow.

  prepare-dev-environment:
    name: Building Dev Environment for Kubernetes
    needs:
      - helm-publish-umbrella-chart
      - calculate-version
    uses: ./.github/workflows/prepare-dev-environment.yaml
    with:
      branch-name: ${{ inputs.branch-name }}
      umbrella-chart-base-branch-name: ${{ inputs.umbrella-chart-base-branch-name }}
      tag: ${{ inputs.source-repo }}-${{ needs.calculate-version.outputs.semVer }}
      version: ${{ needs.calculate-version.outputs.semVer }}
    secrets: inherit

name: Prepare Dev Environments in Kubernetes
run-name: Prepare Dev Environments in Kubernetes for Branch ${{ inputs.branch-name }} Version ${{ inputs.version }} for Tag ${{ inputs.tag }} with Chart Base Branch ${{ inputs.umbrella-chart-base-branch-name }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      branch-name:
        required: true
        type: string
      umbrella-chart-base-branch-name:
        required: true
        type: string
      tag:
        required: true
        type: string
      version:
        required: true
        type: string
jobs:
  prepare-dev-environment:
    runs-on: ubuntu-latest
    steps:
      - name: Create Branch for PullRequest/ServiceRelease/Integration in dev-environment
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'create-branch-with-reuse.yaml'
          repo: 'mehmetsalgar/fsm-akka-dev-environment'
          ref: "${{ inputs.umbrella-chart-base-branch-name }}"
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
          inputs: '{"branch-name": 
                       "${{ inputs.branch-name }}", 
                    "base-branch-name": 
                       "${{ inputs.umbrella-chart-base-branch-name }}"}'
      - name: Prepare PullRequest/ServiceRelease/Integration in dev-environment
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'prepare-services-for-new-environment.yaml'
          repo: 'mehmetsalgar/fsm-akka-dev-environment'
          ref: '${{ inputs.branch-name }}'
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 10m
          wait-for-completion-interval: 10s
          inputs: '{"tag": "${{ inputs.tag }}", 
                    "version": "${{ inputs.version }}"}'

This workflow will create a branch in ‘fsm-akka-dev-environment‘ based on the base branch defined at the start of the workflow (in this case ‘development‘, but for an epic story it can be the branch of the epic) and in the next step we will continue with ‘prepare-services-for-new-environment.yaml‘ to render the Kubernetes manifests with the help of Helmfile, so ArgoCD can read those.

name: Prepare Services Environment
run-name: Preparing Environment for Tag ${{ inputs.tag }} - Version ${{ inputs.version }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      tag:
        required: true
        type: string
      version:
        required: true
        type: string
jobs:
  prepare:
    name: Preparing Services Environment
    runs-on: ubuntu-latest
    env:
      HELM_URL: ${{ secrets.HELM_URL }}
      HELM_PATH: ${{ secrets.HELM_PATH }}
      HELM_USER: ${{ secrets.HELM_USER }}
      HELM_PASSWORD: ${{ secrets.HELM_PASSWORD }}
      UMBRELLA_CHART_VERSION: ${{ inputs.version }}
    steps:
      - uses: actions/checkout@v3
        with:
          ref: '${{ github.event.workflow_run.head_branch }}'
          fetch-depth: 0
      - name: Display Branch
        run: |
          echo "Branch: ${{ github.ref }}"
      - name: Remove File
        uses: JesseTG/rm@v1.0.3
        with:
          path: ./gitops/github/fsmakka/
      - name: Setup helmfile
        uses: mamezou-tech/setup-helmfile@v1.2.0
        with:
          helmfile-version: "v0.153.1"
          helm-version: "v3.12.0"
      - name: DoTemplate
        shell: bash
        run: |
          echo 'Deploy Umbrella Chart for Service Pipeline'
          helmfile template \
                       -e default \
                       --state-values-set username=$HELM_USER \
                       --state-values-set pswd=$HELM_PASSWORD \
                       --state-values-set url=$HELM_URL \
                       --state-values-set path=$HELM_PATH \
                       --state-values-set version=$UMBRELLA_CHART_VERSION \
                       --output-dir-template ./gitops/github/fsmakka
          echo $(date +%F_%T) > ./gitops/github/fsmakka/create_helm_templates.txt
      - uses: EndBug/add-and-commit@v9
        with:
          add: './gitops/github/fsmakka/'
          message: 'Created Manifests for ${{ inputs.tag}}'
          committer_name: GitHub Actions
          committer_email: actions@github.com
          tag: '${{ inputs.tag }} --force'
          tag_push: '--force'

This part of the workflow will check out the newly created branch, remove the previous version of the manifests (Helmfile will not calculate a delta and delete old files), set up the Helmfile Action and execute…

helmfile template \
             -e default \
             --state-values-set username=$HELM_USER \
             --state-values-set pswd=$HELM_PASSWORD \
             --state-values-set url=$HELM_URL \
             --state-values-set path=$HELM_PATH \
             --state-values-set version=$UMBRELLA_CHART_VERSION \
             --output-dir-template ./gitops/github/fsmakka

which generates the Kubernetes Manifests from the Helm Umbrella Chart version we prepared in the earlier steps of the workflow. In the final step, we commit the newly generated manifests and tag them with a combination of the ‘source-repo‘ name that originally triggered this workflow and the version of the Service Repository, with the help of the following Helmfile configuration.

environments:
  default:
    values:
      - environments/default/values.yaml

---

repositories:
  - name: fsm-akka
    url: {{ .StateValues.url }}
    username: {{ .StateValues.username }}
    password: {{ .StateValues.pswd }}
    oci: true

releases:
  - name: foureyes
    namespace: fsmakka
    chart: oci://{{ .StateValues.url }}{{ .StateValues.path }}/fsm-akka-helm-umbrella-chart
    version: {{ .StateValues.version }}
    values:
      - values-dev.yaml

Please pay attention to the ‘---‘ separator; this is a signal for Helmfile to split the configuration into separately rendered sections, and without it you might run into problems with the template variables we use in the Helmfile configuration. Also note the ‘oci‘ configuration parameter, so Helmfile knows how to connect to an OCI Helm Repository.

This version of the workflow does not use the Helm Umbrella Chart version from the default environment values but gets it as a parameter from the workflow.

fsm-akka-dev-environment/environments/default/values.yaml

username: ''
pswd: ''
url: 'fsmakka.azurecr.io'
path: 'helm/fsm-akka-helm-umbrella-chart-az'
version: '<=2.6.0-beta'

Now the workflow will continue from the ‘credit-score’ repository to create our new Infrastructure under a new Namespace in our Dev Kubernetes Cluster.

>> credit-score Repository

(This is a continuation of the Workflow ‘continuous-deployment-with-environment.yaml‘)

  create-infrastructure-in-k8s:
    name: Create Infrastructure in K8s with Branch Name as Namespace
    needs: create-branch-helm-umbrella
    uses: ./.github/workflows/create-infrastructure-in-k8s.yaml
    with:
      branch-name: ${{ inputs.branch-name }}-${{ inputs.repo-name }}
      base-branch-name: ${{ inputs.infrastructure-base-branch-name }}
      value-file: ${{ inputs.value-file }}
    secrets: inherit

This will call another reusable workflow (since we will need the same functionality for the ‘integration‘ and ‘release‘ branch workflows) with parameters containing the ‘branch-name‘ of the ‘credit-score‘ feature branch, the base branch from which we will create this branch in the ‘fsm-akka-helm-infrastructure-chart‘ repository (for this workflow it is currently the ‘master‘ branch) and finally which environment configuration should be used, via ‘value-file’.

fsm-akka-github-workflows/.github/workflows/create-infrastructure-in-k8s.yaml

name: Create Infrastructure in Kubernetes
run-name: Creating Infrastructure in Kubernetes triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      branch-name:
        required: true
        type: string
      base-branch-name:
        required: true
        type: string
      value-file:
        required: true
        type: string
jobs:
  create-infrastructure-in-k8s:
    runs-on: ubuntu-latest
    steps:
      - name: Create Future Infrastructure Environment in K8s
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'build-infrastructure-environment-with-reuse.yaml'
          repo: 'mehmetsalgar/fsm-akka-helm-infrastructure-chart'
          ref: "${{ inputs.base-branch-name }}"
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
          inputs: '{
                     "branch-name": "${{ inputs.branch-name }}", 
                     "value-file": "${{ inputs.value-file }}"
                   }'

This will trigger the workflow ‘build-infrastructure-environment-with-reuse.yaml‘ in the repository ‘fsm-akka-helm-infrastructure-chart‘.

>> fsm-akka-helm-infrastructure-chart Repository

name: Build Infrastructure for Branch
run-name: Building an Infrastructure for Branch ${{ inputs.branch-name }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      branch-name:
        required: true
        type: string
      value-file:
        required: true
        type: string
jobs:
  calculate-version:
    runs-on: ubuntu-latest
    outputs:
      semVer: ${{ steps.gitversion.outputs.semVer }}
    steps:
      - name: Display Branch
        run: |
          echo "Branch: ${{ github.ref }}"
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.ref }}
          fetch-depth: 0
      - name: Install GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Determine Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
        with:
          useConfigFile: true
          configFilePath: GitVersion.yml
      - name: Display GitVersion output
        run: |
          echo "SemVer: $GITVERSION_SEMVER"
  calculate-namespace:
    runs-on: ubuntu-latest
    outputs:
      namespace: ${{ steps.findandreplace2.outputs.value }}
    steps:
      - id: findandreplace
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ inputs.branch-name }}
          find: '/'
          replace: '-'
          replaceAll: 'true'
      - id: findandreplace1
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ steps.findandreplace.outputs.value }}
          find: '.'
          replace: '-'
          replaceAll: 'true'
      - id: findandreplace2
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ steps.findandreplace1.outputs.value }}
          find: '_'
          replace: '-'
          replaceAll: 'true'
  call-build-environment-for-branch:
    permissions:
      contents: 'read'
      id-token: 'write'
    needs: [calculate-namespace, calculate-version]
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/helm-install.yaml@master
    with:
      helm-command: "helm upgrade fsm-akka-infrastructure . --install -n ${{ needs.calculate-namespace.outputs.namespace }} -f ${{ inputs.value-file }}.yaml --create-namespace --version=${{ needs.calculate-version.outputs.semVer }}"
    secrets: inherit

This workflow tries to create an acceptable Kubernetes Namespace name from the Feature Branch, so it replaces unacceptable characters in the branch name (please notice how the steps are chained so we can call the same GitHub Action several times). The last job will install our Infrastructure (Kafka, Cassandra, Elasticsearch) into this new Namespace with the following command.

helm upgrade fsm-akka-infrastructure . \
        --install \
        -n ${{ needs.calculate-namespace.outputs.namespace }} \
        -f ${{ inputs.value-file }}.yaml \
        --create-namespace \
        --version=${{ needs.calculate-version.outputs.semVer }}

So we use the calculated namespace ‘needs.calculate-namespace.outputs.namespace‘, a ‘value-file‘ such as values-dev.yaml, values-test.yaml, values-prod.yaml, values-xxxx.yaml, etc., and finally the version of the Helm Infrastructure Chart.
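
As an illustration of what such a value file could contain, a feature environment could for example run with smaller replica counts and volumes than a release environment. The keys below are hypothetical; the real ones depend on how the Kafka, Cassandra and Elasticsearch sub-charts of ‘fsm-akka-helm-infrastructure-chart‘ are structured.

# values-dev.yaml - hypothetical sketch, keys depend on the actual sub-charts
kafka:
  replicaCount: 1        # a single broker is enough for a short-lived feature environment
cassandra:
  replicaCount: 1
  persistence:
    size: 2Gi            # keep the volumes small for throwaway namespaces
elasticsearch:
  replicas: 1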

This will deploy our infrastructure to Google Kubernetes Engine (GKE) under the Namespace we created for our feature branch.

You are probably asking yourself, ‘we are deploying our Services with ArgoCD, why are we deploying our Infrastructure with Helm?’. Personally, I think that because of the amount of change happening in our Services, it is much more critical to track those changes via GitOps. Compared to this, the rate of change of the Helm Charts for Kafka, Cassandra and Elasticsearch is small, so I prefer to install them with ‘helm install‘. Of course, if you want to follow GitOps principles for the Infrastructure as well and use Helmfile / ArgoCD, you can modify the workflow by taking the Service Deployments as an example.

One word of caution here: generally speaking, Infrastructure components are the most resource hungry, so if you need lots of environments because you are developing many features in parallel, you have to allocate lots of Kubernetes resources. This brings a dilemma: if you only rarely hit that peak resource demand, why should you constantly allocate and pay for those resources?

Thankfully GKE has one feature that would be really helpful.

Enable ‘cluster autoscaler‘ for your GKE node pool; this way, when GKE needs more resources it will add nodes to the pool and remove them when they are no longer necessary. We also have a safety valve: if our pipelines go haywire, they will not allocate thousands of instances, because ‘Maximum number of nodes‘ prevents things from going out of control.

>> credit-score Repository

fsm-akka-github-workflows/.github/workflows/continuous-deployment-with-environment.yaml

  create-services-environment-in-k8s:
    name: Create Services Environment in K8s with Branch Name as Namespace
    needs: create-infrastructure-in-k8s
    uses: ./.github/workflows/create-services-environment-in-k8s.yaml
    with:
      branch-name: ${{ inputs.branch-name }}-${{ inputs.repo-name }}
      base-branch-name: 'master'
    secrets: inherit

The workflow will now continue in the context of the ‘credit-score‘ Repository and will create an ArgoCD Application from the branch we previously created in the ‘fsm-akka-dev-environment‘ repository. The ArgoCD Repository ‘fsm-akka-4eyes-argocd‘ will always be based on ‘master‘; it only installs the ArgoCD Application Custom Resource Definition via Helm for the new environment.

fsm-akka-github-workflows/.github/workflows/create-services-environment-in-k8s.yaml

name: Create Service Environment in Kubernetes
run-name: Create Service Environments in Kubernetes triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      branch-name:
        required: true
        type: string
      base-branch-name:
        required: true
        type: string
jobs:
  create-services-environment-in-k8s:
    name: Create Services Environment in K8s with Branch Name as Namespace
    runs-on: ubuntu-latest
    steps:
      - name: Create Future Environment in K8s
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'build-service-environment-with-reuse.yaml'
          repo: 'mehmetsalgar/fsm-akka-4eyes-argocd'
          ref: "${{ inputs.base-branch-name }}"
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
          inputs: '{"branch-name": "${{ inputs.branch-name }}"}'

The workflow will continue with ‘build-service-environment-with-reuse.yaml‘.

>> fsm-akka-4eyes-argocd Repository

name: Create ArgoCD Application for Future Branch Environment
run-name: Create ArgoCD Application for branch ${{ inputs.branch-name }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      branch-name:
        type: string
        required: true
jobs:
  calculate-namespace:
    runs-on: ubuntu-latest
    outputs:
      namespace: ${{ steps.findandreplace2.outputs.value }}
    steps:
      - id: findandreplace
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ inputs.branch-name }}
          find: '/'
          replace: '-'
          replaceAll: 'true'
      - id: findandreplace1
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ steps.findandreplace.outputs.value }}
          find: '.'
          replace: '-'
          replaceAll: 'true'
      - id: findandreplace2
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ steps.findandreplace1.outputs.value }}
          find: '_'
          replace: '-'
          replaceAll: 'true'
  call-build-environment-for-branch:
    permissions:
      contents: 'read'
      id-token: 'write'
    needs: calculate-namespace
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/helm-install.yaml@master
    with:
      helm-command: "helm upgrade fsm-akka . --install -n ${{ needs.calculate-namespace.outputs.namespace }} --create-namespace --set targetBranch=${{ inputs.branch-name }} -f values-gke-github.yaml"
    secrets: inherit

This will perform the same process of removing unwanted characters from the branch name so we can convert it to a Kubernetes Namespace; after that we will deploy with the following command.

helm upgrade fsm-akka . \
                   --install \
                   -n ${{ needs.calculate-namespace.outputs.namespace }} \
                   --create-namespace \
                   --set targetBranch=${{ inputs.branch-name }} \
                   -f values-gke-github.yaml

The only interesting parts of this command are the ‘targetBranch‘ parameter, which tells ArgoCD which branch of the ‘fsm-akka-dev-environment‘ repository to monitor, and the namespace into which the application should be deployed in the Kubernetes Cluster.

You will understand this better when you observe the ArgoCD Application Custom Resource Definition.

fsm-akka-4eyes-argocd/helm/templates/fsm-akka.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: foureyes
  namespace: {{ .Release.Namespace }}
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    name: {{ .Values.cluster.name }}
    namespace: {{ .Release.Namespace }}
  project: fsm-akka-4eyes-project
  source:
    repoURL: "https://github.com/mehmetsalgar/fsm-akka-dev-environment.git"
    path: {{ .Values.source.path }}
    directory:
      recurse: true
    targetRevision: {{ .Values.targetBranch }}
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true

You can see in this Custom Resource Definition that we are using the Feature Branch name as the Kubernetes Namespace; ArgoCD will observe the ‘fsm-akka-dev-environment‘ repository, under the directory configured via ‘.Values.source.path‘ (the ‘gitops/github/fsmakka‘ directory generated by Helmfile), for the branch defined in ‘targetBranch‘.
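
The values referenced by this template (‘cluster.name‘, ‘source.path‘, ‘targetBranch‘) come from ‘values-gke-github.yaml‘ and from the ‘--set targetBranch=…‘ flag we passed above. A sketch of how that values file could look (the cluster name is a placeholder of mine; the path is the Helmfile output directory we generated earlier):

# fsm-akka-4eyes-argocd values-gke-github.yaml - sketch, cluster name is a placeholder
cluster:
  name: fsm-akka-dev-cluster     # the ArgoCD destination cluster
source:
  path: gitops/github/fsmakka    # the directory Helmfile writes the manifests into
targetBranch: development        # default branch to watch, overridden via --set targetBranch=...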

This completes the workflow for this Use Case. Now that we have a workflow that creates a completely new Environment to test when a Pull Request is opened, let’s look at a workflow that will clean up the Environment when the Pull Request is merged or closed :).

Use Case 3: Environment Cleanup after completed Pull Request [ Merged / Closed ]

Trigger Action: Pull Request completed or closed

This workflow is quite simple: after GitHub receives the event that the Pull Request is completed (merged / closed), it will first remove the created feature branches from ‘fsm-akka-helm-umbrella-chart‘ and ‘fsm-akka-dev-environment‘, then remove the Helm installations for the Infrastructure and the ArgoCD Application.

customer-relationship-adapter/.github/workflows/cleanup-after-pull-request-closing-with-reusable.yaml

name: Cleanup Caller - Pull Request
run-name: Cleanup Environments after Pull Request for Branch ${{ github.event.pull_request.head.ref }}
  triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  pull_request:
    types: [closed]
jobs:
  call-cleanup-workflow:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/cleanup-for-service.yaml@master
    with:
      repo-name: "customer-relationship-adapter"
      branch-name: "${{ github.event.pull_request.head.ref }}"
    secrets: inherit

fsm-akka-github-workflows/.github/workflows/cleanup-for-service.yaml

name: Cleanup - Pull Request Closing
run-name: Cleanup Environment after closed Pull Request for Branch ${{ inputs.branch-name}} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      repo-name:
        required: true
        type: string
      branch-name:
        required: true
        type: string
jobs:
  cleanup-environment:
    uses: ./.github/workflows/cleanup-environment.yaml
    with:
      branch-name: ${{ inputs.branch-name }}-${{ inputs.repo-name }}
    secrets: inherit
  cleanup-umbrella-chart:
    runs-on: ubuntu-latest
    needs: cleanup-environment
    steps:
      - name: Clean Umbrella Helm Chart
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'delete-branch-with-reuse.yaml'
          repo: 'mehmetsalgar/fsm-akka-helm-umbrella-chart'
          ref: 'master'
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 10m
          wait-for-completion-interval: 10s
          inputs: '{"branch-name": "${{ inputs.branch-name }}-${{ inputs.repo-name }}"}'

And the removal of the Helm Installations.

fsm-akka-github-workflows/.github/workflows/cleanup-environment.yaml

name: Cleanup Environment
run-name: Cleanup Environment Branch ${{ inputs.branch-name}} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_call:
    inputs:
      branch-name:
        required: true
        type: string
jobs:
  remove-k8s-environment:
    runs-on: ubuntu-latest
    needs: cleanup-dev-environment
    steps:
      - name: Clean Environment in K8s
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'cleanup-environment.yaml'
          repo: 'mehmetsalgar/fsm-akka-4eyes-argocd'
          ref: 'master'
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
          inputs: '{"branch-name": "${{ inputs.branch-name }}"}'
      - name: Clean Infrastructure Environment in K8s
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'cleanup-infrastructure.yaml'
          repo: 'mehmetsalgar/fsm-akka-helm-infrastructure-chart'
          ref: 'master'
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
          inputs: '{"branch-name": "${{ inputs.branch-name }}"}'
  cleanup-dev-environment:
    runs-on: ubuntu-latest
    steps:
      - name: Clean dev-environment
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'delete-branch-with-reuse.yaml'
          repo: 'mehmetsalgar/fsm-akka-dev-environment'
          ref: 'master'
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s
          inputs: '{"branch-name": "${{ inputs.branch-name }}"}'

fsm-akka-4eyes-argocd/blob/master/.github/workflows/cleanup-environment.yaml

name: Clean Environment for a Future Branch
run-name: Clean Environment branch ${{ inputs.branch-name }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      branch-name:
        type: string
jobs:
  calculate-namespace:
    runs-on: ubuntu-latest
    outputs:
      namespace: ${{ steps.findandreplace2.outputs.value }}
    steps:
      - id: findandreplace
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ inputs.branch-name }}
          find: '/'
          replace: '-'
          replaceAll: 'true'
      - id: findandreplace1
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ steps.findandreplace.outputs.value }}
          find: '.'
          replace: '-'
          replaceAll: 'true'
      - id: findandreplace2
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ steps.findandreplace1.outputs.value }}
          find: '_'
          replace: '-'
          replaceAll: 'true'
  clean-environment-for-future-branch:
    permissions:
      contents: 'read'
      id-token: 'write'
    needs: calculate-namespace
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/helm-install.yaml@master
    with:
      helm-command: "helm delete fsm-akka -n ${{ needs.calculate-namespace.outputs.namespace }}"
    secrets: inherit

helm delete fsm-akka \
             -n ${{ needs.calculate-namespace.outputs.namespace }}

fsm-akka-helm-infrastructure-chart/.github/workflows/cleanup-infrastructure.yaml

name: Clean Infrastructure for a Future Branch
run-name: Cleaning up Infrastructure for Branch ${{ inputs.branch-name }} triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      branch-name:
        required: true
        type: string
jobs:
  calculate-namespace:
    runs-on: ubuntu-latest
    outputs:
      namespace: ${{ steps.findandreplace2.outputs.value }}
    steps:
      - id: findandreplace
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ inputs.branch-name }}
          find: '/'
          replace: '-'
          replaceAll: 'true'
      - id: findandreplace1
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ steps.findandreplace.outputs.value }}
          find: '.'
          replace: '-'
          replaceAll: 'true'
      - id: findandreplace2
        uses: mad9000/actions-find-and-replace-string@3
        with:
          source: ${{ steps.findandreplace1.outputs.value }}
          find: '_'
          replace: '-'
          replaceAll: 'true'
  clean-environment-for-branch:
    permissions:
      contents: 'read'
      id-token: 'write'
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/helm-install.yaml@master
    needs: calculate-namespace
    with:
      helm-command: "helm delete fsm-akka-infrastructure -n ${{ needs.calculate-namespace.outputs.namespace }}"
    secrets: inherit

helm delete fsm-akka-infrastructure \
             -n ${{ needs.calculate-namespace.outputs.namespace }}

Use Case 4: Producing Release Candidates for Services

Trigger Action: Creation of ‘release/x.x.x’ branch or Commits to ‘release/x.x.x’ branch

This is basically the same workflow as in Use Case 2, which triggers when a Pull Request against the ‘development‘ branch is created. This Use Case will trigger when a Release Branch is created or a push to a Release Branch occurs.

name: Continuous Deployment Caller - Release
run-name: Continuous Deployment for Release triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_run:
    workflows: [Java / Gradle CI Caller]
    branches: [release/**]
    types: [completed]
jobs:
  call-continuous-deployment-workflow:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/continuous-deployment-with-environment.yaml@master
    with:
      repo-name: "customer-relationship-adapter"
      branch-name: "${{ github.event.workflow_run.head_branch }}"
      umbrella-chart-base-branch-name: "master"
      infrastructure-base-branch-name: "master"
      value-file: "values-release"
    secrets: inherit

The only other change compared to Use Case 2, besides the trigger condition, is that this workflow takes the ‘master‘ branches of the Service Helm Umbrella Chart and of the Infrastructure chart as base branches to create a new Environment for a Release Candidate. If you have an Epic Story and your configuration should depend on these Epic Branches (multiple Services collaborating for the implementation of an Epic Story) for the Release Candidate, you can change the workflow in the ‘release/x‘ branch to point to these specific branches for the Service and Infrastructure Umbrella Charts, which we will look at more closely in one of the following Workflows.
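
A sketch of that change (assuming an epic branch named ‘epic/xxxx‘ exists in both the Umbrella Chart and the Infrastructure repositories) would only touch the base-branch inputs of the caller:

    with:
      repo-name: "customer-relationship-adapter"
      branch-name: "${{ github.event.workflow_run.head_branch }}"
      umbrella-chart-base-branch-name: "epic/xxxx"       # instead of "master"
      infrastructure-base-branch-name: "epic/xxxx"       # instead of "master"
      value-file: "values-release"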

Use Case 5: Release Environment Cleanup

Trigger Action: ‘release/*’ branch is deleted

This workflow follows the same principles as Use Case 3, only its trigger condition is different: it will trigger when a ‘release/*’ branch is merged to the ‘master‘ branch and the ‘release/*’ branch is deleted.

name: Cleanup after Branch Delete
run-name: Cleanup after Branch Delete triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  delete:
    branches:
      - release/**
jobs:
  call-cleanup-workflow:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/cleanup-for-service.yaml@master
    with:
      repo-name: "customer-relationship-adapter"
      branch-name: "${{ github.event.ref }}"
    secrets: inherit

Use Case 6: Integration Environment for Helm Umbrella Charts / Sanity Check

Trigger Action: Creation of ‘integration/xxx‘ branch in ‘helm-umbrella-chart‘ with concrete Release Candidate versions of multiple services

address-check-version=1.1.3-rc1
credit-score-version=1.3.1-rc1
fraud-prevention-version=1.2.11-rc2
customer-relationship-adapter-version=1.1.7-rc1
fsm-akka-4eyes-version=1.0.3-rc4

As I previously mentioned in this blog, I am using GitFlow concepts, which are great for many scenarios, but I have a problem with one specific topic. GitFlow dictates that you have to start a ‘release/x‘ branch before you advance your application to the production ‘master‘ branch, which means the artefacts produced from the ‘release/x’ branch will have ‘-rc.x’ versions, but we don’t want our application promoted to ‘test‘, ‘staging‘ and ‘production‘ with ‘-rc.x’ versions.

We want the same concrete binary version of the application to be promoted between the environments, for example the ‘1.2.09’ version of the binary deployed to all environments and not ‘1.2-rc2.4’, as the great Martin Fowler discusses here. With GitFlow, if we test the software state from the ‘release/x’ branch in the ‘test’ environment, the moment we merge it to the ‘master’ branch the binary will get another version number. In my opinion, it is also not realistic that software developed in a Sprint by 500 developers / 50 Scrum teams can simply be merged to master and promoted between the environments without further checks.

Of course our automated tests can check for regressions and assure that our software quality has not deteriorated, but for new feature / epic story development a robot can’t decide whether our requirements are correctly implemented or not, so we need a software state on which we can do some Sanity Checks.

My solution to this dilemma is to introduce an ‘integration‘ branch into the GitFlow of the ‘fsm-akka-helm-umbrella-chart‘ repository and create an Environment in our Dev Kubernetes Cluster, so preliminary Sanity Checks can be executed there before the software is promoted to the Test Environment. This way the ‘release/x‘ branch can then work with concrete versions of our Services.

So the Chart.yaml and gradle.properties which contain the versions in the ‘fsm-akka-helm-umbrella-chart‘ repository for ‘integration/x’ will look like the following.

apiVersion: v2
name: fsm-akka-helm-umbrella-chart
description: A Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "${appVersion}"

dependencies:
  - name: address-check-application
    version: "${addressCheckVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: address-check.enabled
  - name: credit-score-application
    version: "${creditScoreVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: credit-score.enabled
  - name: fraud-prevention-application
    version: "${fraudPreventionVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: fraud-prevention.enabled
  - name: customer-relationship-adapter-application
    version: "${customerRelationshipAdapterVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: customer-relation-adapter.enabled
  - name: fsm-akka-4eyes-application
    version: "${fsmAkka4eyesVersion}"
    repository: oci://europe-west3-docker.pkg.dev/fsmakka/fsm-akka-helm-ar
    condition: fsm-akka-4eyes-application.enabled

and gradle.properties:

address-check-version=1.1.3-rc1
credit-score-version=1.3.1-rc1
fraud-prevention-version=1.2.11-rc2
customer-relationship-adapter-version=1.1.7-rc1
fsm-akka-4eyes-version=1.0.3-rc4

If you like, compare this with how it is going to look for a ‘release/x‘ branch.

address-check-version=1.3.1
credit-score-version=1.2.0
fraud-prevention-version=1.1.0
customer-relationship-adapter-version=1.1.6
fsm-akka-4eyes-version=~1.1.2

The trigger for this use case will look like the following.

fsm-akka-helm-umbrella-chart/.github/workflows/continuous-deployment-integration.yaml

name: Continuous Deployment Integration
run-name: ${{ github.actor }}
on:
  push:
    branches: [integration/**]
    paths-ignore:
      - '.github/**'
jobs:
  calculate-version:
    runs-on: ubuntu-latest
    outputs:
      semVer: ${{ steps.gitversion.outputs.semVer }}
    steps:
      - name: Display Branch
        run: |
          echo "Branch: ${{ github.ref }}"
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.branch-name }}
          fetch-depth: 0
      - name: Install GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Determine Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
        with:
          useConfigFile: true
          configFilePath: GitVersion.yml
          additionalArguments: '"/b" "${{ inputs.branch-name }}"'
      - name: Display GitVersion output
        run: |
          echo "SemVer: $GITVERSION_SEMVER"
  build:
    needs: calculate-version
    uses: ./.github/workflows/build-for-integration.yaml
    with:
      umbrella-chart-version: ${{ needs.calculate-version.outputs.semVer }}
  prepare-dev-environment:
    name: Building Integration Environment for Kubernetes
    needs:
      - build
      - calculate-version
    uses: ./.github/workflows/prepare-dev-environment.yaml
    with:
      branch-name: ${{ github.ref }}
      umbrella-chart-base-branch-name: "master"
      tag: ${{ needs.calculate-version.outputs.semVer }}
      version: ${{ needs.calculate-version.outputs.semVer }}
    secrets: inherit
  create-infrastructure-in-k8s:
    name: Create Infrastructure in K8s with Branch Name as Namespace
    needs: prepare-dev-environment
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/create-infrastructure-in-k8s.yaml@master
    with:
      branch-name: ${{ github.ref_name }}
      base-branch-name: 'master'
      value-file: 'value-integration'
    secrets: inherit
  create-services-environment-in-k8s:
    name: Create Services Environment in K8s with Branch Name as Namespace
    needs: create-infrastructure-in-k8s
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/create-services-environment-in-k8s.yaml@master
    with:
      branch-name: ${{ github.ref_name }}
      base-branch-name: 'master'
    secrets: inherit

This workflow triggers with a push to any ‘integration/x‘ branch, then reuses the workflows we previously demonstrated to create a new ‘integration/x‘ branch in the ‘fsm-akka-dev-environment‘ and ‘fsm-akka-helm-infrastructure-chart’ Git Repositories and to render the Kubernetes Manifests via Helmfile, then dispatches the workflow to ‘fsm-akka-4eyes-argocd‘ to deploy an ArgoCD Application which will deliver our Kubernetes Manifests to our Dev Cluster under the namespace ‘integration-x‘.

Use Case 7: Integration Environment Cleanup for Helm Umbrella Charts

Trigger Action: Deletion of the ‘integration/x.x.x‘ branch of the ‘helm-umbrella-chart‘ repository after sanity checks are completed.

The previous Use Case created an Environment for us in the Dev Cluster for Sanity Checks; naturally we should have a process to clear this Environment when the Sanity Checks are complete. For this, a Cleanup Workflow (‘cleanup-after-branch-delete.yaml‘) will trigger on the deletion of an ‘integration/x‘ branch, reusing the workflows we already demonstrated.

fsm-akka-helm-umbrella-chart/.github/workflows/cleanup-after-branch-delete.yaml

name: Cleanup after Branch Delete
run-name: Cleanup after Branch ${{ github.event.ref }} delete triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  delete:
jobs:
  call-cleanup-workflow:
    if: ${{ contains(github.event.ref, 'integration/') }}
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/cleanup-environment.yaml@master
    with:
      branch-name: "${{ github.event.ref }}"
    secrets: inherit

Use Case 8: Service Release Process

Trigger Action: Manual start of Pipeline after setting concrete version with ‘git tag‘ or automated with the merge of the ‘release/x.x.x‘ branch version to ‘master‘ branch.

In the previous Use Cases, we discussed steps that bring our application closer to a Release; if you followed those workflows, you must have started having questions about the Release Process.

Our Release Process for Services is not fully automated; it will not trigger automatically if you merge a ‘release/x’ branch to ‘master’, for the reasons I will explain shortly. If you think these reasons do not apply to you, you can make the necessary changes to the workflow and convert it into a fully automated one. In its current state the Release workflow has to be triggered via the GitHub UI.

The main reason not to automate the Release workflow is that it is not clear how to predict the end user’s interaction with GitVersion.

  • If the version on the ‘master‘ branch is to be controlled via the ‘git tag‘ command, we have to give the end user the possibility to tag the ‘master‘ branch before the Release workflow starts.
  • If the end user consistently uses ‘+semver: major‘, ‘+semver: minor‘ or ‘+semver: patch‘ in the commit messages on the ‘release‘ branch, then this workflow could also be automated (a sketch of the relevant GitVersion configuration follows below).
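
As a sketch, this commit-message driven bumping is configured in ‘GitVersion.yml‘ via the bump-message patterns; the regular expressions below mirror the GitVersion 5.x defaults, so ‘+semver: major/minor/patch‘ in a commit message bumps the corresponding part of the version.

# GitVersion.yml - sketch of the bump-message settings (these are the GitVersion 5.x defaults)
major-version-bump-message: '\+semver:\s?(breaking|major)'
minor-version-bump-message: '\+semver:\s?(feature|minor)'
patch-version-bump-message: '\+semver:\s?(fix|patch)'
no-bump-message: '\+semver:\s?(none|skip)'

In its current, manual form the Release workflow itself is as simple as it gets:
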
name: Release GA
run-name: Release GA triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
jobs:
  release:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/build.yaml@master

This will publish the Helm Chart of the Service with a concrete release version, which we can use in the next Use Case to build the release version of our Helm Umbrella Chart.

Use Case 9: Helm Umbrella Chart Release Process

Trigger Action: Manual start after the use of ‘git tag‘, or automated start after merging a ‘release/x.x.x‘ branch with concrete Service Versions in the ‘helm-umbrella-chart‘ repository to the ‘master‘ branch.

Releasing the Helm Umbrella Chart with concrete Service Versions allows us to do the Environment Promotion, and even to implement Blue / Green deployment in Production.

After releasing our Services, to be able to promote our System between the Environments, we should also realise the release of our Helm Umbrella Chart for the Services. The first thing we have to do is to place concrete Release Version Numbers in the ‘gradle.properties‘ of ‘fsm-akka-helm-umbrella-chart‘ on the release branch, like the following.

address-check-version=1.3.1
credit-score-version=1.2.0
fraud-prevention-version=1.10
customer-relationship-adapter-version=1.5.0
fsm-akka-4eyes-version=1.1.2

As we discussed for the Service Releases, since we can't dictate whether the ‘master‘ branch is versioned over a tag or with ‘+semver: major/minor/patch‘, we can't automate this workflow. If it is going to be the ‘git tag‘ command, we have to give the End User the chance to tag the ‘master‘ branch before we start the release build and publish it to the Helm Repository. If ‘+semver: major/minor/patch‘ is used consistently in commit messages, then we can also automate this workflow.

release-ga.yml

name: Release GA
run-name: Release GA triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
jobs:
  release:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/build.yaml@master

This workflow is quite similar to the other Helm Publish workflows; it delegates the execution to ‘build.yaml‘ in ‘fsm-akka-github-workflows‘.

Use Case 10: Environment for Epic Stories

Until now we analysed scenarios for Service Repositories in which we build environments to test one single Service. There might be scenarios in which several Services have to collaborate for the realisation of an Epic Story, so we have to configure specific versions of the Services in the Helm Umbrella Chart and create an environment for it in our Dev Kubernetes Cluster.

When we create an environment from a Service, the feature branch name in the Helm Umbrella Chart has the pattern ‘feature/x’-‘service source-repo’. Our Epic Story, however, is not bound to a specific Service Repository, so for this use case the branch name in the Helm Umbrella Chart would simply be ‘feature/x’.

name: Epic Deployment
run-name: Epic Deployment triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  push:
    branches:
      - 'feature/**'
    paths-ignore:
      - '.github/**'
jobs:
  calculate-version:
    runs-on: ubuntu-latest
    outputs:
      semVer: ${{ steps.gitversion.outputs.semVer }}
    steps:
      - name: Display Branch
        run: |
          echo "Branch: ${{ github.ref }}"
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.branch-name }}
          fetch-depth: 0
      - name: Install GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Determine Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
        with:
          useConfigFile: true
          configFilePath: GitVersion.yml
          additionalArguments: '"/b" "${{ inputs.branch-name }}"'
      - name: Display GitVersion output
        run: |
          echo "SemVer: $GITVERSION_SEMVER"
  build:
    needs: calculate-version
    uses: ./.github/workflows/build-for-integration.yaml
    with:
      umbrella-chart-version: ${{ needs.calculate-version.outputs.semVer }}
  prepare-dev-environment:
    name: Building Integration Environment for Kubernetes
    needs:
      - build
      - calculate-version
    uses: ./.github/workflows/prepare-dev-environment.yaml
    with:
      branch-name: ${{ github.ref }}
      umbrella-chart-base-branch-name: "development"
      tag: ${{ needs.calculate-version.outputs.semVer }}
      version: ${{ needs.calculate-version.outputs.semVer }}
    secrets: inherit
  create-infrastructure-in-k8s:
    name: Create Infrastructure in K8s with Branch Name as Namespace
    needs: prepare-dev-environment
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/create-infrastructure-in-k8s.yaml@master
    with:
      branch-name: ${{ github.ref_name }}
      base-branch-name: 'development'
      value-file: 'values-dev'
    secrets: inherit
  create-services-environment-in-k8s:
    name: Create Services Environment in K8s with Branch Name as Namespace
    needs: create-infrastructure-in-k8s
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/create-services-environment-in-k8s.yaml@master
    with:
      branch-name: ${{ github.ref_name }}
      base-branch-name: 'development'
    secrets: inherit

This workflow activates with a push to a ‘feature/x‘ branch in the Helm Umbrella Chart repository and prepares an environment based on the ‘development‘ branch. If you want different base branches or values, the only thing you have to do is change them in the workflow file and push it on the feature branch; this will initialise the environment with that configuration.

Preparations

Google Cloud CLI

For a lot of the configuration in Google Cloud we will need the Google Cloud CLI; you can install it by following the instructions here.

Google Cloud Project

After we get our test account, we have to create a Project that will contain all of our resources (Artifact Registries, Kubernetes Clusters, etc.).

Google Cloud Artifact Registry

We will need two Artifact Repositories, one for Docker Images and another one for Helm Charts.

For Docker Images

You can create a Docker Registry with the following instructions, but you can also achieve that by following the screenshots below.

As you can see, I already created a Docker Registry ‘fsmakka-ar‘.

Service Account for Artifact Registry

Now that we have created our Artifact Registry, we have to arrange a permission mechanism so that the Gitlab Pipelines can read and write artifacts to these registries.

Google Cloud has a concept of Service Accounts to control permissions / roles.

Now we have to give certain permissions / roles to this Service Account so that we can upload our Docker Images / Helm Charts; in this case the role is ‘Artifact Registry Writer’.

With this setup, your Gitlab Pipelines will be able to upload Docker Images and Helm Charts to the Artifact Registries.
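
If you prefer the CLI over the Google Cloud Console screenshots, the same setup could be sketched roughly as follows (the repository ‘fsmakka-ar‘ and the project ‘fsmakka‘ are the ones used in this blog, the Service Account name is just an example; a Helm Chart repository can be created the same way, since OCI Helm Charts are also stored in a Docker-format repository):

> gcloud artifacts repositories create fsmakka-ar --repository-format=docker --location=europe-west3
> gcloud iam service-accounts create fsmakka-artifact-writer
> gcloud projects add-iam-policy-binding fsmakka --member "serviceAccount:fsmakka-artifact-writer@fsmakka.iam.gserviceaccount.com" --role "roles/artifactregistry.writer"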

Google Cloud Kubernetes Engine

Now we have to configure a Kubernetes Cluster to be able to deploy our Services to Kubernetes.

You can create a Kubernetes Cluster by following the instructions here.

You can create your Kubernetes Cluster in Google Cloud portal using menu point ‘Kubernetes Engine -> Clusters’.

There are two options to create a Kubernetes Cluster, ‘Autopilot mode’ and ‘Standard mode’; we are interested in the ‘Standard mode’. The main difference is that in ‘Autopilot’ GKE takes over a lot of responsibility for keeping your Kubernetes Cluster up to date, autoscaling it, etc. These are really nice options if you are new to Kubernetes concepts, but I prefer the Standard mode.

Then we have to do the basic configuration, like giving a name to our Kubernetes Cluster and a zone to run in (I am living in Germany, so I have chosen ‘europe-west3’, which is Frankfurt). By the way, on the right side you can see the monthly cost estimates of your choices.

The last relevant option is which version of Kubernetes we will use; we can pin our Kubernetes implementation to a specific version or let Google Cloud automatically update it to the current stable release.

Another basic configuration of the GKE Cluster is the Node Pool configuration.

Please pay close attention to the option ‘Enable Cluster autoscaler‘: since our Pipelines dynamically create new Environments for our ‘feature/xxx’, ‘release/xxx’, ‘integration/xxx’, etc. branches, we might need more Kubernetes Resources. Of course, we could install a hard-capped resource set, say 20 instances of 8 CPU / 32 GB machines, but there are two negatives about this.

  • if we have more environments than we can host on these 20 machines, our pipelines will fail
  • this one is worse: if we don’t have enough environments to occupy the 20 machines and 90% of the resources sit idle, we are paying for them for nothing. This is the worst scenario for a Kubernetes environment; your main business objective is to pay for what you need, so paying for 90% of resources that you are not using is not good.

So a feature that enables us to allocate instances from GCP as we need them and give them back when we don’t is ideal for us; this is exactly what ‘Enable Cluster autoscaler‘ does. Of course, there is also a safety option, ‘Maximum number of nodes‘, so that our pipelines do not run wild and allocate thousands of instances; we can say ‘Ok, if you need more resources allocate 10 more, but not more than that.’

And finally we choose the machine type for our Node Pools.
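
For reference, an equivalent Standard cluster could also be created from the CLI, roughly like this (a sketch; the machine type and node counts are only examples, while the name, zone and project match the ones used later in this blog):

gcloud container clusters create fsmakka-gke-dev \
       --zone europe-west3-c \
       --project fsmakka \
       --machine-type e2-standard-4 \
       --num-nodes 3 \
       --enable-autoscaling --min-nodes 1 --max-nodes 10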

The next part of the configuration is about the Security of our Kubernetes Cluster. As mentioned in the previous chapter, Google Cloud has a concept of Service Accounts; here we define which Service Account we will use for our Cluster. If we don’t do anything, GCP will create a Service Account for us; I will use this option, but you can also create an additional Service Account with the necessary roles / permissions so that our Gitlab pipelines can interact with our Kubernetes Cluster.

Here you can see the default account that GCloud created for us and also the Service Account that we will create further in the blog.

Service Account

Now let’s create and configure the Service Account that will interact with…

We should give the usual information like the Service Account name and id (the id looks like an email address, which we will need in further steps).

Service Account Roles / Workload Identities

GCloud has the concept of Workload Identity (Workload Identity 1, Workload Identity 2) to manage permissions in Google Kubernetes Engine; we have to use this concept to add roles to our Service Account. You can find a general list of roles here.

These are the basic steps that we have to execute with the GCP CLI: first we enable the required services,

> gcloud services enable container.googleapis.com secretmanager.googleapis.com

and then assign specific roles to the Service Account.

> gcloud projects add-iam-policy-binding fsmakka --member "serviceAccount:fsmakka-gke-service-account@fsmakka.iam.gserviceaccount.com" --role "roles/composer.worker"

Here you see our GCP Project name ‘fsmakka’, our Service Account ‘fsmakka-gke-service-account@fsmakka.iam.gserviceaccount.com‘ and the role ‘roles/composer.worker‘, which contains most of the roles we need to access and configure our GKE Cluster from Gitlab (if a specific role is necessary for your action, the error message explicitly states which permission is missing; you can find it in the role list and add that role to your Service Account).
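
If you also want to go the Workload Identity route instead of distributing Service Account keys, the binding between a Kubernetes Service Account and this Google Service Account would look roughly like the following sketch (the namespace ‘argocd‘ and the Kubernetes Service Account name ‘argocd-server‘ are just examples):

> gcloud iam service-accounts add-iam-policy-binding fsmakka-gke-service-account@fsmakka.iam.gserviceaccount.com --role "roles/iam.workloadIdentityUser" --member "serviceAccount:fsmakka.svc.id.goog[argocd/argocd-server]"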

Kubeconfig

Now that we have created our GKE Cluster, let's get the authentication information for it.

First let's install the following gcloud component,

> gcloud components install gke-gcloud-auth-plugin

and get the necessary input for ‘.kube/config‘ (of course you should first log in to Google Cloud as described here). The input parameters that we need for this are the name of the GKE Cluster ‘fsmakka-gke-dev‘, the zone that our cluster runs in ‘europe-west3-c‘, and the project that our GKE Cluster runs in ‘fsmakka‘.

> gcloud container clusters get-credentials fsmakka-gke-dev --zone europe-west3-c --project fsmakka
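
After this, ‘kubectl‘ should point at the new GKE Cluster, which you can quickly verify (assuming ‘kubectl‘ is installed locally):

> kubectl config current-context
> kubectl get nodes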

GitVersion

Setup

First, if you want to observe what GitVersion is doing, you can install it locally; for me it was

> brew install gitversion

After GitVersion is installed, you can configure it for your Service Git Repository; I will demonstrate that in the ‘credit-score’ repository. You can initialise GitVersion with the following command.

> gitversion init

GitVersion will ask you some standard questions; most of the companies that I have worked for are using GitFlow, so I also chose GitFlow.

Configuration

After the initialisation, you can see the default configuration with the following command.

> gitversion /showconfig

You will see output similar to the following.

assembly-file-versioning-scheme: MajorMinorPatch
mode: ContinuousDelivery
tag-prefix: '[vV]'
continuous-delivery-fallback-tag: ci
major-version-bump-message: '\+semver:\s?(breaking|major)'
minor-version-bump-message: '\+semver:\s?(feature|minor)'
patch-version-bump-message: '\+semver:\s?(fix|patch)'
no-bump-message: '\+semver:\s?(none|skip)'
legacy-semver-padding: 4
build-metadata-padding: 4
commits-since-version-source-padding: 4
tag-pre-release-weight: 60000
commit-message-incrementing: Enabled
branches:

Commit Messages

There are some really interesting things here: GitVersion gives you the ability to bump the version of your service with certain commit messages. What this means: you are developing a feature and you know it is going to break the backward compatibility of your application; you can just place ‘+semver: breaking’ (or ‘+semver: major’) in your commit message and it will bump the major version of your application (for example, ‘credit-score’ has the version ‘1.1.8’, this will bump it to ‘2.0.0’).
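
As a small example (the commit message text itself is made up), a breaking change could be committed like this and GitVersion would bump the major version on its next calculation:

> git commit -m "switch credit score API to v2 +semver: breaking"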

Branch Configurations

The second interesting thing you see in the default configuration is that GitVersion treats every branch differently.

branches:
  release:
    mode: ContinuousDelivery
    tag: rc
    increment: None
    prevent-increment-of-merged-branch-version: true
    track-merge-target: false
    regex: ^releases?[/-]
    source-branches:
    - develop
    - main
    - support
    - release
    tracks-release-branches: false
    is-release-branch: true
    is-mainline: false
    pre-release-weight: 30000
  develop:
    mode: ContinuousDeployment
    tag: alpha
    increment: Minor
    prevent-increment-of-merged-branch-version: false
    track-merge-target: true
    regex: ^dev(elop)?(ment)?$
    source-branches: []
    tracks-release-branches: true
    is-release-branch: false
    is-mainline: false
    pre-release-weight: 0

feature:
    mode: ContinuousDelivery
    tag: '{BranchName}'
    increment: Inherit
    regex: ^features?[/-]
    source-branches:
    - develop
    - main
    - release
    - feature
    - support
    - hotfix
    pre-release-weight: 30000
  pull-request:
    mode: ContinuousDelivery
    tag: PullRequest
    increment: Inherit
    tag-number-pattern: '[/-](?<number>\d+)'
    regex: ^(pull|pull\-requests|pr)[/-]
    source-branches:
    - develop
    - main
    - release
    - feature
    - support
    - hotfix
    pre-release-weight: 30000
  hotfix:
    mode: ContinuousDelivery
    tag: beta
    increment: Patch
    prevent-increment-of-merged-branch-version: false
    track-merge-target: false
    regex: ^hotfix(es)?[/-]
    source-branches:
    - release
    - main
    - support
    - hotfix
    tracks-release-branches: false
    is-release-branch: false
    is-mainline: false
    pre-release-weight: 30000

As you can see, GitVersion can identify your GitFlow branches with the help of regular expressions. For example, every commit to the ‘master‘ branch of your service will increase the ‘patch‘ part without any tag. Now you may ask what a ‘tag‘ is; let’s look at the ‘develop‘ branch. There the tag is ‘alpha‘, so every version that GitVersion delivers for this branch will contain ‘alpha‘ in it. This is a little bit irritating for Java developers, since we are used to ‘SNAPSHOT’ as the tag for the ‘development‘ branch; if you like you can change this configuration value to ‘SNAPSHOT’, but personally I prefer it this way.

Similarly, the ‘release‘ branch uses ‘beta‘ as its default tag; personally I change this to ‘rc‘ for ‘release candidate‘, so the version will look like ‘1.2.0-rc.1‘. One more fancy feature: if you look at the ‘feature‘ branch, the tag there is ‘{BranchName}‘, so the version number will contain the actual branch name.

Now you probably understand why this topic is important for me: without a human-interpretable versioning system it is not possible to build a completely automated Continuous Deployment system for our Feature, Release, Hotfix and Development branches in Kubernetes.

Lifecycle Operations

Now let's look at the actual operations. We have our ‘credit-score‘ service in Git; to enable GitVersion to create version numbers for us, we first have to run

> git tag 1.1.8

for our service on the ‘master‘ branch (this is because I already developed this application; for a brand new Service your tag would of course be ‘git tag 1.0.0‘).
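
Note that the pipelines calculate the version on the checked-out repository, so for GitVersion running in CI to see this tag it also has to be pushed (assuming ‘origin‘ is your remote):

> git push origin 1.1.8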

After that if we call the

> gitversion

command.

We will see which values GitVersion supplies us to use in our pipelines.

{
  "Major": 1,
  "Minor": 1,
  "Patch": 7,
  "PreReleaseTag": "",
  "PreReleaseTagWithDash": "",
  "PreReleaseLabel": "",
  "PreReleaseLabelWithDash": "",
  "PreReleaseNumber": null,
  "WeightedPreReleaseNumber": 60000,
  "BuildMetaData": 111,
  "BuildMetaDataPadded": "0111",
  "FullBuildMetaData": "111.Branch.master.Sha.f936a7c9265a4030af4169eb8772678ebbcd4626",
  "MajorMinorPatch": "1.1.7",
  "SemVer": "1.1.7",
  "LegacySemVer": "1.1.7",
  "LegacySemVerPadded": "1.1.7",
  "AssemblySemVer": "1.1.7.0",
  "AssemblySemFileVer": "1.1.7.0",
  "FullSemVer": "1.1.7+111",
  "InformationalVersion": "1.1.7+111.Branch.master.Sha.f936a7c9265a4030af4169eb8772678ebbcd4626",
  "BranchName": "master",
  "EscapedBranchName": "master",
  "Sha": "f936a7c9265a4030af4169eb8772678ebbcd4626",
  "ShortSha": "f936a7c",
  "NuGetVersionV2": "1.1.7",
  "NuGetVersion": "1.1.7",
  "NuGetPreReleaseTagV2": "",
  "NuGetPreReleaseTag": "",
  "VersionSourceSha": "869b6e7070aad11ab0d6fcac3e7614700d3d04d4",
  "CommitsSinceVersionSource": 111,
  "CommitsSinceVersionSourcePadded": "0111",
  "UncommittedChanges": 7,
  "CommitDate": "2023-05-16"
}

I personally use ‘FullSemVer‘, but as you can see there are lots of possible values, for example containing the branch name, the Git hash, etc.

Now if we switch to the ‘development‘ branch and execute the

> gitversion

command again, we will see the following.

{
  "Major": 1,
  "Minor": 2,
  "Patch": 0,
  "PreReleaseTag": "alpha.108",
  "PreReleaseTagWithDash": "-alpha.108",
  "PreReleaseLabel": "alpha",
  "PreReleaseLabelWithDash": "-alpha",
  "PreReleaseNumber": 108,
  "WeightedPreReleaseNumber": 108,
  "BuildMetaData": null,
  "BuildMetaDataPadded": "",
  "FullBuildMetaData": "Branch.development.Sha.f738427c6840f54d56127693258b315c67179031",
  "MajorMinorPatch": "1.2.0",
  "SemVer": "1.2.0-alpha.108",
  "LegacySemVer": "1.2.0-alpha108",
  "LegacySemVerPadded": "1.2.0-alpha0108",
  "AssemblySemVer": "1.2.0.0",
  "AssemblySemFileVer": "1.2.0.0",
  "FullSemVer": "1.2.0-alpha.108",
  "InformationalVersion": "1.2.0-alpha.108+Branch.development.Sha.f738427c6840f54d56127693258b315c67179031",
  "BranchName": "development",
  "EscapedBranchName": "development",
  "Sha": "f738427c6840f54d56127693258b315c67179031",
  "ShortSha": "f738427",
  "NuGetVersionV2": "1.2.0-alpha0108",
  "NuGetVersion": "1.2.0-alpha0108",
  "NuGetPreReleaseTagV2": "alpha0108",
  "NuGetPreReleaseTag": "alpha0108",
  "VersionSourceSha": "869b6e7070aad11ab0d6fcac3e7614700d3d04d4",
  "CommitsSinceVersionSource": 108,
  "CommitsSinceVersionSourcePadded": "0108",
  "UncommittedChanges": 6,
  "CommitDate": "2023-05-15"
}

This output is driven by the ‘develop‘ branch configuration that we saw earlier:

  develop:
    mode: ContinuousDeployment
    tag: alpha
    increment: Minor
    prevent-increment-of-merged-branch-version: false
    track-merge-target: true
    regex: ^dev(elop)?(ment)?$
    source-branches: []
    tracks-release-branches: true
    is-release-branch: false
    is-mainline: false
    pre-release-weight: 0

We see that GitVersion incremented the ‘minor‘ part of the version and also placed the tag that was configured for the ‘development‘ branch and produced the version ‘1.2.0-alpha.108′.

This configuration tells GitVersion to increment the ‘Minor’ part of the Version for the development branch (according to GitFlow, after you release your software on the ‘master‘ branch, development should continue with the next minor version) and, as mentioned before, the tag is configured to be ‘alpha’.

Now let's look at what happens in a feature branch; after a

> git checkout -b feature/usecase_gh_1

the command

> gitversion

delivers to us.

{
  "Major": 1,
  "Minor": 2,
  "Patch": 0,
  "PreReleaseTag": "usecase-gh-1.1",
  "PreReleaseTagWithDash": "-usecase-gh-1.1",
  "PreReleaseLabel": "usecase-gh-1",
  "PreReleaseLabelWithDash": "-usecase-gh-1",
  "PreReleaseNumber": 1,
  "WeightedPreReleaseNumber": 30001,
  "BuildMetaData": 111,
  "BuildMetaDataPadded": "0111",
  "FullBuildMetaData": "111.Branch.feature-usecase-gh-1.Sha.dfd39b1f46d4c4064f1d6d5c9769f2192547fe29",
  "MajorMinorPatch": "1.2.0",
  "SemVer": "1.2.0-usecase-gh-1.1",
  "LegacySemVer": "1.2.0-usecase-gh-1-1",
  "LegacySemVerPadded": "1.2.0-usecase-gh-1-0001",
  "AssemblySemVer": "1.2.0.0",
  "AssemblySemFileVer": "1.2.0.0",
  "FullSemVer": "1.2.0-usecase-gh-1.1+111",
  "InformationalVersion": "1.2.0-usecase-gh-1.1+111.Branch.feature-usecase-gh-1.Sha.dfd39b1f46d4c4064f1d6d5c9769f2192547fe29",
  "BranchName": "feature/usecase_gh_1",
  "EscapedBranchName": "feature-usecase-gh-1",
  "Sha": "dfd39b1f46d4c4064f1d6d5c9769f2192547fe29",
  "ShortSha": "dfd39b1",
  "NuGetVersionV2": "1.2.0-usecase-gh-1-0001",
  "NuGetVersion": "1.2.0-usecase-gh-1-0001",
  "NuGetPreReleaseTagV2": "usecase-gh-1-0001",
  "NuGetPreReleaseTag": "usecase-gh-1-0001",
  "VersionSourceSha": "869b6e7070aad11ab0d6fcac3e7614700d3d04d4",
  "CommitsSinceVersionSource": 111,
  "CommitsSinceVersionSourcePadded": "0111",
  "UncommittedChanges": 6,
  "CommitDate": "2023-05-16"
}

And the corresponding ‘feature‘ branch configuration:

  feature:
    mode: ContinuousDelivery
    tag: '{BranchName}'
    increment: Inherit
    regex: ^features?[/-]
    source-branches:
    - develop
    - main
    - release
    - feature
    - support
    - hotfix
    pre-release-weight: 30000

As we discussed before, the ‘feature‘ branch is configured so that it will include the branch name in the Version, and it will also ‘inherit’ the Version from the branch it originated from.

Now that we have completed the development of our feature, we would like to make a release; let's look at how GitVersion acts for a Release branch by executing the following command.

> git checkout -b release/1.2

And look at what GitVersion delivers as the Version (for simplicity, from now on I will only show the value of FullSemVer, using the following command).

> gitversion /showvariable FullSemVer

which will deliver.

1.2.0-rc.1+0

As you can see, GitVersion is clever enough to tag this Version as a ‘release candidate’. Now if we make some bugfixes and merge them to our release branch, this will increment the Version number.

1.2.0-rc.1+1

Now that our Release Candidate 1 is tested and we want to go for Release Candidate 2, to achieve that we only have to run

> git tag 1.2.0-rc.1

the result of the ‘gitversion’ would be

1.2.0-rc.1

Now if we continue with the development of Release Candidate 2, the moment we make a commit to the ‘release/1.2′ branch the version number will look like the following, so we can continue with the process.

1.2.0-rc.2+3

When we want to release our Service to Production, we naturally have to merge the code state to the ‘master‘ branch as GitFlow suggests. Of course, merging the ‘release/1.2′ branch to ‘master’ will make the tag ‘1.2.0-rc.1‘ visible on ‘master’; to state our intention to release our application with the final Version, we have to tag the ‘master’ branch with

> git tag 1.2.0

which will make our Version number.

1.2.0

Or we can use a smart feature of GitVersion, which is explained here: if we use ‘+semver: patch’ in the commit message on this Release Branch, it will automatically set the version to ‘1.2.0’, whichever fits you better.

In the previous chapters we used GitVersion extensively; now you probably understand why a solid versioning concept is extremely important for me.

ArgoCD

To let ArgoCD deliver our Kubernetes Manifests, we should first install it in our Kubernetes Cluster under the ‘argocd‘ namespace.

For this purpose, we use the Helm Chart that is in this Github Repository.

fsm-akka-argocd/helm/Chart.yaml

apiVersion: v2
name: fsm-akka-argocd
description: A Helm chart for FSM Akka ArgoCD configuration
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: argo-cd
    version: 5.33.3
    repository: https://argoproj.github.io/argo-helm
    condition: argo-cd.enabled

fsm-akka-argocd/helm/values.yaml

projects:
  enabled: false

argo-cd:
  enabled: true

  dex:
    enabled: false
  controller:
    extraArgs:
      - --application-namespaces
      - "*"
  server:
    extraArgs:
      - --insecure
      - --application-namespaces
      - "*"
    config:
      repositories: |
        - name: nexus-repository-manager
          type: helm
          url: "https://sonatype.github.io/helm3-charts/"
        - name: k8ssandra
          type: helm
          url: "https://helm.k8ssandra.io/stable"
        - name: traefik
          type: helm
          url: "https://helm.traefik.io/traefik"
        - name: elasticsearch
          type: helm
          url: "https://helm.elastic.co/"
        - name: metrics-server
          type: helm
          url: "https://kubernetes-sigs.github.io/metrics-server/"

We are referencing the standard ArgoCD Helm Chart and configuring it for our needs.

If you are using this Helm Chart also for a local environment like me (I am writing this blog on an M1 Mac and my local Kubernetes runs on the same hardware), you need an ‘arm64’ image, so use the following configuration.

fsm-akka-argocd/values-m1.yaml

argo-cd:
  global:
    image:
      tag: "v2.5.2@sha256:d9bad4c5ed867bd59ea1954f75988f5d1c8491a4eef5bd75f47b13a4bd1a53dc"

When this runs on MS Azure, AWS, Google Cloud or any on-premise Linux Cluster, you can remove this override.

The next configuration gives ArgoCD the permission to manipulate all namespaces in my Kubernetes Cluster (this is not a production-level configuration; for production you should explicitly tell ArgoCD which namespaces it can manipulate, for security reasons / wildcards also work, like ‘feature-*’). Finally, my local Kubernetes Cluster is not operating with ‘https’, so I have to turn that off for ArgoCD as well.

These are the necessary parts to run ArgoCD in my k8s Cluster. Now we can install it with the following command.

> helm upgrade argocd . \
            --install \
            --create-namespace \
            -n argocd \
            -f values-development.yaml \
            -f values-gke.yaml

Please pay attention to the ‘argocd’ namespace: some components of ArgoCD can only be installed in this namespace, and in a production-level configuration only your administrators would have the rights to install there, since these installations allow ArgoCD to deploy applications and modify your k8s Cluster.

Now we have installed the runtime components of ArgoCD, but at the moment ArgoCD knows nothing about our Business demands; so to tell ArgoCD that we want to deploy our ‘foureyes-fsm-akka’ Business Unit, we have to define the ArgoCD Project Custom Resource for ‘fsm-akka-4eyes-project‘.

fsm-akka-argocd/helm/templates/project.yaml

{{- if .Values.projects.enabled -}}
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: fsm-akka-4eyes-project
  namespace: argocd
spec:
  description: "Project for FSM Akka Four Eyes Event Sourcing Application"
  destinations:
    - namespace: fsmakka
      name: {{ .Values.cluster.name }}
      server: {{ .Values.cluster.url }}
    - namespace: feature-*
      name: {{ .Values.cluster.name }}
      server: {{ .Values.cluster.url }}
    - namespace: integration-*
      name: {{ .Values.cluster.name }}
      server: {{ .Values.cluster.url }}
    - namespace: release-*
      name: {{ .Values.cluster.name }}
      server: {{ .Values.cluster.url }}
    - namespace: bugfix-*
      name: {{ .Values.cluster.name }}
      server: {{ .Values.cluster.url }}
    - namespace: hotfix-*
      name: {{ .Values.cluster.name }}
      server: {{ .Values.cluster.url }}
  sourceNamespaces:
    - fsmakka
    - feature-*
    - integration-*
    - release-*
    - bugfix-*
    - hotfix-*
  sourceRepos:
    {{ toYaml .Values.projects.sourceRepos }}
  clusterResourceWhitelist:
    - group: "*"
      kind: "*"
  namespaceResourceWhitelist:
    - group: "*"
      kind: "*"
{{- end -}}

To deploy this ArgoCD Project, we are using a Kubernetes Custom Resource Definition from ArgoCD, the important points being:

  • Which k8s Clusters and Namespaces this ArgoCD Project can manipulate.
  • From which sources (Git and Helm Chart Repositories) it is allowed to install Applications (if you use a repository not listed here, you will get security exceptions and your application will not be installed).
  • Which k8s Cluster resources the Applications belonging to this Project can modify (again, what you see here is not a production configuration and is only for demonstration purposes; for production you should use sensible restrictions).
  • Which Namespace resources the Applications belonging to this Project can modify (same remark as above: for production you should use sensible restrictions).

Before ArgoCD is installed in the k8s Cluster, its Custom Resource Definitions are unknown and would cause exceptions; for this reason I first installed ArgoCD, and now I can install the Project with the following command.

> helm upgrade argocd . \
               --install \
               --create-namespace \
               -n argocd \
               --set projects.enabled=true \
               -f values-development.yaml \
               -f values-gke.yaml

Please note that the Project is one of those things that we can only install in the ‘argocd’ namespace.

Since this demo runs against GCP and GKE, we need some specific configuration so that GKE will allow ArgoCD to install your resources.

cluster:
  name: fsmakkaGKE
  url: https://35.246.194.179
  ###https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/
  config: xxx-kube/config/caData-xxx

argo-cd:
  enabled: true
  global:
    nodeSelector:
      iam.gke.io/gke-metadata-server-enabled: "true"
  controller:
    serviceAccount:
      annotations:
        iam.gke.io/gcp-service-account: your-service-account@fsmakka.iam.gserviceaccount.com
  server:
    serviceAccount:
      annotations:
        iam.gke.io/gcp-service-account: your-service-account@fsmakka.iam.gserviceaccount.com
  repoServer:
    serviceAccount:
      annotations:
        iam.gke.io/gcp-service-account: your-service-account@fsmakka.iam.gserviceaccount.com
  applicationSet:
    serviceAccount:
      annotations:
        iam.gke.io/gcp-service-account: your-service-account@fsmakka.iam.gserviceaccount.com

argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup

The value of ‘xxx-kube/config/caData-xxx’ would be the ‘base64‘ encoding of the following data structure.

{
  "execProviderConfig": {
    "command": "argocd-k8s-auth",
    "args": ["gcp"],
    "apiVersion": "client.authentication.k8s.io/v1beta1"
  },
  "tlsClientConfig": {
    "insecure": false,
    "caData": "caData-from-your-kube-config"
  }
}
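
One possible way to produce that ‘base64‘ value is to combine the ‘caData‘ from the GKE Cluster state with the structure above (a sketch, assuming the GNU ‘base64‘; on macOS drop the ‘-w0‘ flag):

CA_DATA=$(gcloud container clusters describe fsmakka-gke-dev --zone europe-west3-c --project fsmakka --format json | jq -j '.masterAuth.clusterCaCertificate')
cat <<EOF | base64 -w0
{
  "execProviderConfig": {
    "command": "argocd-k8s-auth",
    "args": ["gcp"],
    "apiVersion": "client.authentication.k8s.io/v1beta1"
  },
  "tlsClientConfig": {
    "insecure": false,
    "caData": "$CA_DATA"
  }
}
EOF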

After these configurations, if we do a ‘port-forward’ to ArgoCD, we should see the following UI.

You can use the ‘admin’ user, and the password is in the following k8s Secret.
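
For completeness, the port-forward and the initial admin password lookup would look roughly like this (a sketch; the service name ‘argocd-server‘ follows from the Helm release name ‘argocd‘ used above):

> kubectl port-forward svc/argocd-server -n argocd 8080:80
> kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d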

Appendix

To Helm Umbrella Chart or Not

If you have read the whole blog, you will remember that I told you I prefer the Helm Umbrella Chart concept because it makes it possible to do local deployments to our ‘minikube’ from it, to increase development speed, prototyping, etc.

This can be a non-factor for you: your System could be so big that it is unfeasible to install it on a ‘minikube’, or you simply don’t value this option. The other option would then be to remove the ‘Helm Umbrella Chart’ and manage your system directly with ‘Helmfile’. In the ‘fsm-akka-dev-environment’ Git Repository, you can convert your ‘helmfile.yaml‘ to this.

environments:
  default:
    values:
      - environments/default/values.yaml

---

repositories:
  - name: fsm-akka
    url: {{ .StateValues.url }}
    username: {{ .StateValues.username }}
    password: {{ .StateValues.pswd }}
    oci: true

releases:
  - name: address-check
    namespace: fsmakka
    chart: fsm-akka/address-check-application
    version: {{ .StateValues.address-check-version }}
  - name: credit-score
    namespace: fsmakka
    chart: fsm-akka/credit-score-application
    version: {{ .StateValues.credit-score-version }}
  - name: fraud-prevention
    namespace: fsmakka
    chart: fsm-akka/fraud-prevention-application
    version: {{ .StateValues.fraud-prevention-version }}
  - name: customer-relationship-adapter
    namespace: fsmakka
    chart: fsm-akka/customer-relationship-adapter-application
    version: {{ .StateValues.customer-relationship-adapter-version }}
  - name: foureyes
    namespace: fsmakka
    chart: fsm-akka/fsm-akka-4eyes-application
    version: {{ .StateValues.foureyes-version }}

And the ‘default.yaml’ in ‘master’ branch.

address-check-version: "1.1.2"
credit-score-version: "1.1.8"
fraud-prevention-version: "1.2.10"
customer-relationship-adapter-version: "1.1.6"
foureyes-version: "1.0.2"

This will be the place where we configure which concrete versions of our services will be released.

For ‘development’ branch this will look like…

address-check-version: "<=1.2.0-beta"
credit-score-version: "<=1.2.0-beta"
fraud-prevention-version: "<=1.3.0-beta"
customer-relationship-adapter-version: "<=1.2.0-beta"
foureyes-version: "1.1.0-beta"

which gives us the possibility of continuous deployment of our services from a certain version range (of course, for some major/minor/patch version changes we have to adapt the ranges here).

Finally, in the GitHub Actions workflow part where Helmfile renders the manifests: for the workflows that work on release / integration branches we don’t have to change anything, but for the workflows realising the deployment of single services we need a slight modification.

helmfile template --state-values-set username=$HELM_USER \
                  --state-values-set pswd=$HELM_PASSWORD \
                  --state-values-set url=$HELM_URL \
                  --state-values-set path=$HELM_PATH \
                  --state-values-set ${{ inputs.source-repo }}-version=${{ inputs.version }} \
                  --output-dir-template ./gitops/github/fsmakka

Now the Helmfile template generation mechanism is responsible for setting the actual Service Version for ‘feature’ branches, etc.

Kubernetes Operator Installations

Apache Kafka (Strimzi Operator)

helm repo add strimzi https://strimzi.io/charts/
helm install fsm-akka-strimzi strimzi/strimzi-kafka-operator --namespace strimzi-operator --create-namespace --set watchAnyNamespace=true

Apache Cassandra (k8ssandra)

helm repo add jetstack https://charts.jetstack.io
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator --create-namespace --set global.clusterScoped=true

Elasticsearch (ECK Operator)

helm repo add elastic https://helm.elastic.co
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace

GraalVM Native Image Building

One final topic I want to mention here: the startup times of Java Applications are becoming a problem in a K8s environment (even with minimal functionality they need 5-6 s, in a realistic application 15 to 30 s). When your application is under load and you have auto-scale configurations, 15-30 s is a lifetime, so people became motivated to use Native Image Java Applications, especially with the release of Spring Boot 3.0.x.

A Spring Boot native image built with GraalVM has around 0.7 to 1 s start times, which is really powerful; now you are probably asking why not everybody is using it. There are several hurdles on the way:

  • first, you have to change a little bit how you develop; you have to really try to avoid the use of Java Reflection, or be ready to write lots of configuration files that tell the native image generation mechanism how to interpret the reflection usage
  • secondly, lots of the libraries Spring Boot uses depend on Java Reflection; the Spring Boot team tries to adapt most of the libraries they have a direct dependency on, but it will take time for most frameworks to catch up. GraalVM is also trying to establish a Reflection Metadata Repository for popular Java Libraries; you can find the configuration information here.
  • third, building cross-platform native images is really problematic; if you install GraalVM on, let's say, Ubuntu 20.04, the image that you produce will probably have problems on ‘alpine‘. To be able to produce a native image for ‘alpine‘, you would have to first create a Docker image based on ‘alpine‘, install a GraalVM on it, then produce the native image and upload it to a Docker Container Registry; only this native image would then work for alpine. At least for Github Actions you can go for a Matrix Strategy for Cross Platform builds as explained here; the Github Action ‘graalvm/setup-graalvm@v1’ is a really big help for this topic.
  • fourth, please read the paragraph below with a little bit of scepticism: as of June 14, 2023 there is a change to the GraalVM Free Terms and Conditions (GFTC) license, which you can read here. As I understand it, the previous Enterprise version of GraalVM is now free to use in production, so you get better Java Garbage Collectors and a faster JVM than the CE version; of course you have to check that with your legal department. If the new license is no option for you, then you are stuck with the problems below.

    A point that is not communicated that clearly: GraalVM has a Community Edition and an Enterprise Edition, and as you may guess the Enterprise Edition has some costs attached to it. What is not said loudly is that the Community Edition only supports SerialGC, which in my opinion is no option for a serious enterprise application; it is the main culprit of the famous ‘Stop the World’ problem during Java Garbage Collection. For a modern application with, let's say, 32 GB of memory, a garbage collection with SerialGC will most probably mean a 30 s non-reactive Java Application, which will most probably mean the end of that application. So either be ready to pay a huge amount of money to Oracle for the GraalVM Enterprise Edition and a reasonable GC, or try something more radical: if you read the link above carefully, the CE also offers the possibility of using ‘Epsilon GC‘, a garbage collector that does not collect garbage at all and lets the application crash :). Does that sound weird? Well, think of it like this: if the SerialGC collector stops the world for 30 s, wouldn't it be better to let the Spring Boot application crash with out of memory and start a new one in 1 s? One thing to think about 🙂

Now let's look at the actual work; Google's JiB with Gradle will be a huge help for us, so let's look at the Gradle Configuration.

buildscript {	
   dependencies {
		classpath('com.google.cloud.tools:jib-native-image-extension-gradle:0.1.0')
		classpath "org.unbroken-dome.gradle-plugins.helm:helm-plugin:1.7.0"
	}
}

plugins {
	id 'java'
	id 'org.springframework.boot' version '3.0.0'
	id 'io.spring.dependency-management' version '1.1.0'
	id 'org.graalvm.buildtools.native' version '0.9.20'
	id 'com.google.cloud.tools.jib' version '3.3.1'
	id 'com.github.johnrengelman.shadow' version '7.1.2'
}

As you can see, we need the GraalVM Native Build Tools plugin, the JiB plugin and the JiB Native Image Extension.

graalvmNative {
	binaries {
		main {
			javaLauncher = javaToolchains.launcherFor {
				languageVersion = JavaLanguageVersion.of(17)
				vendor = JvmVendorSpec.matching("GraalVM")
			}
			quickBuild = true
			//buildArgs.add("--verbose")
			//necessary for ALPINE Images
			//buildArgs.add("--static")
			buildArgs.add("--enable-monitoring=all")
			runtimeArgs.add('--target linux')
		}
	}
	metadataRepository {
		enabled = true
	}
}

The ‘graalvmNative’ block sets up GraalVM for the native image creation.

jib {
	container {
		mainClass = "org.salgar.akka.fsm.cra.CustomerRelationshipAdapterApplication"
	}
	from {
		image = "ubuntu:latest"
		auth {
			username = "${props.DOCKER_HUB_USER}"
			password = "${props.DOCKER_HUB_PASSWORD}"
		}
	}
	to {
		image = "${props.DOCKER_URL}/${project.name}"
		//image = "fsmakka.azurecr.io/fsmakka/${project.name}"
		tags = ["${project.version}"]
		auth {
			username = "${props.DOCKER_UPLOAD_USER}"
			password = "${props.DOCKER_UPLOAD_PASSWORD}"
		}
	}
	pluginExtensions {
		pluginExtension {
			implementation = 'com.google.cloud.tools.jib.gradle.extension.nativeimage.JibNativeImageExtension'
			properties = [
					imageName: 'customer-relationship-adapter-application'
			]
		}
	}
	allowInsecureRegistries = true
}
tasks.jib.dependsOn tasks.nativeCompile

Finally, the ‘pluginExtensions‘ block gives JiB the additional configuration for the Native Image Extension.
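
With the plugins wired together via ‘tasks.jib.dependsOn tasks.nativeCompile‘, a local build and push could be as simple as the following sketch (it assumes a GraalVM JDK 17 is installed locally and that the DOCKER_* / HELM_* values referenced in the Gradle script are available as project properties):

> ./gradlew jib --no-daemon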

For the Github Actions, since we need GraalVM to create the ‘native-image’, the pipeline looks a little bit different.

mehmetsalgar/customer-relationship-adapter/.github/workflows/build-with-reusable.yaml

name: Java / Gradle CI Caller
run-name: Building with Gradle triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  push:
    branches:
      - 'development'
      - 'release/**'
      - 'feature/**'
      - 'hotfix/**'
      - 'pull/**'
      - 'pull-requests/**'
      - 'pr/**'
    paths-ignore:
      - '.github/**'
jobs:
  call-build-workflow:
    uses: mehmetsalgar/fsm-akka-github-workflows/.github/workflows/build.yaml@master
    with:
      native: true
      chart-name: "customer-relationship-adapter-application"
    secrets: inherit

mehmetsalgar/fsm-akka-github-workflows/.github/workflows/build.yaml

  build-native:
    if: inputs.native == true
    runs-on: ubuntu-latest
    needs: calculate-version
    env:
      SEMVER: ${{ needs.calculate-version.outputs.semVer }}
      DOCKER_HUB_USER: ${{ secrets.DOCKER_HUB_USER }}
      DOCKER_HUB_PASSWORD: ${{ secrets.DOCKER_HUB_PASSWORD }}
      DOCKER_URL: ${{ secrets.DOCKER_URL }}
      DOCKER_UPLOAD_USER: ${{ secrets.DOCKER_UPLOAD_USER }}
      DOCKER_UPLOAD_PASSWORD: ${{ secrets.DOCKER_UPLOAD_PASSWORD }}
      HELM_URL: ${{ secrets.HELM_URL }}
      HELM_PATH: ${{ secrets.HELM_PATH }}
      HELM_USER: ${{ secrets.HELM_USER }}
      HELM_PASSWORD: ${{ secrets.HELM_PASSWORD }}
      HELM_DOWNLOAD_CLIENT: ${{ secrets.HELM_DOWNLOAD_CLIENT }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Display GitVersion output
        run: |
          echo "SemVer: $SEMVER"
      - name: Set up GraalVM
        uses: graalvm/setup-graalvm@v1
        with:
          java-version: '17.0.7'
          distribution: 'graalvm' # See 'Options' for all available distributions
          components: 'native-image'
          github-token: ${{ secrets.PERSONAL_TOKEN }}
      - id: installHelm
        uses: azure/setup-helm@v3
        with:
          version: '3.11.2'
      - name: Validate Gradle Wrapper
        uses: gradle/wrapper-validation-action@v1
      - name: Build with Gradle
        uses: gradle/gradle-build-action@v2
        env:
          ORG_GRADLE_PROJECT_version: ${{ env.SEMVER }}
          ORG_GRADLE_PROJECT_DOCKER_HUB_USER: ${{ env.DOCKER_HUB_USER }}
          ORG_GRADLE_PROJECT_DOCKER_HUB_PASSWORD: ${{ env.DOCKER_HUB_PASSWORD }}
          ORG_GRADLE_PROJECT_DOCKER_URL: ${{ env.DOCKER_URL }}
          ORG_GRADLE_PROJECT_DOCKER_UPLOAD_USER: ${{ env.DOCKER_UPLOAD_USER }}
          ORG_GRADLE_PROJECT_DOCKER_UPLOAD_PASSWORD: ${{ env.DOCKER_UPLOAD_PASSWORD }}
          ORG_GRADLE_PROJECT_HELM_URL: ${{ env.HELM_URL }}
          ORG_GRADLE_PROJECT_HELM_PATH: ${{ env.HELM_PATH }}
          ORG_GRADLE_PROJECT_HELM_USER: ${{ env.HELM_USER }}
          ORG_GRADLE_PROJECT_HELM_PASSWORD: ${{ env.HELM_PASSWORD }}
          ORG_GRADLE_PROJECT_HELM_DOWNLOAD_CLIENT: ${{ env.HELM_DOWNLOAD_CLIENT }}
        with:
          arguments: |
            build
            --no-daemon
      - name: Run Helm Command
        id: helmLoginAndPush
        shell: bash
        run: |
          echo "$HELM_PASSWORD" | helm registry login $HELM_URL -u $HELM_USER --password-stdin
          helm push build/helm/charts/${{ inputs.chart-name}}-$SEMVER.tgz oci://$HELM_URL$HELM_PATH
      - name: Check Failure
        if: steps.helmLoginAndPush.outcome != 'success'
        run: exit 1

Would you like to see the difference between a ‘native-image‘ application and a normal one?

mehmetsalgar/customer-relationship-adapter

is configured to run as native Spring Boot 3.0 application.

and a normal Spring Boot 3.0 application.

mehmetsalgar/credit-score

Yes, a 2.5 s versus 13.5 s startup difference.

Terraform

At the start of the blog you saw a diagram explaining our plan. In that picture you saw that we would have identical Kubernetes Clusters for ‘development’, ‘test’, ‘staging’ and ‘production’ (or any additional environment you might need); instead of creating these environments manually, it is better to follow the ‘Infrastructure as Code‘ approach and use Terraform to create them.

Additionally, as I hinted previously, the real potential for cost saving in a Kubernetes Environment lies in the Development and Test Environments, so we have a really interesting use case here.

In the projects I was involved in, I always criticised Development / Test environments running idle for months while there was nothing to test; but because the ordering mechanism for a new environment can take up to months, it is easier to keep them idle and pay for them than to create them when we need them.

With Kubernetes that is no longer the reality; we can increase / decrease the capacity of our environment in a matter of minutes, but there is still room for optimisation. If you follow the paradigms mentioned in this blog, you are aware that we are creating a new environment for every ‘Pull Request‘, ‘Release‘ and ‘Integration’ branch so that those can be submitted to Quality Checks. These environments can be running for days, so they can’t be automatically downscaled by Kubernetes. The dilemma here: most of the workforce of software companies works between 06:00 and 18:00 o’clock, so between 18:00 and 06:00 we would pay for these resources for 12 hours for nothing.

My solution to this dilemma is to have a Kubernetes Environment only for office hours: create this environment at the start of the working day, let's say at 06:00 o'clock, and destroy it at 19:00 o'clock. When we create a feature branch, we can place a marker file, let's say ‘day_over.txt’, so our pipelines will know to install this feature in this special GKE Cluster.

As you can see in the pipeline that is responsible for creating new environments,

mehmetsalgar/fsm-akka-github-workflows/.github/workflows/continuous-deployment-with-environment.yaml

      - id: checkDayOver
        shell: bash
        run: |
          if test -f "day_over.txt"; then
            echo "cluster_name=fsmakkaGKEDayOver" >> "$GITHUB_OUTPUT"
            echo "cluster_name_not_normalised=fsmakka-gke-dev-day-over" >> "$GITHUB_OUTPUT"
          else
            echo "cluster_name=fsmakkaGKE" >> "$GITHUB_OUTPUT"
            echo "cluster_name_not_normalised=fsmakka-gke-dev" >> "$GITHUB_OUTPUT"
          fi
  create-branch-helm-umbrella:
  create-infrastructure-in-k8s:
    name: Create Infrastructure in K8s with Branch Name as Namespace
    needs: [calculate-version, create-branch-helm-umbrella]
    uses: ./.github/workflows/create-infrastructure-in-k8s.yaml
    with:
      branch-name: ${{ inputs.branch-name }}-${{ inputs.repo-name }}
      base-branch-name: ${{ inputs.infrastructure-base-branch-name }}
      value-file: ${{ inputs.value-file }}
      cluster-name-not-normalised: ${{ needs.calculate-version.outputs.cluster-name-not-normalised }}
    secrets: inherit
  create-services-environment-in-k8s:
    name: Create Services Environment in K8s with Branch Name as Namespace
    needs: [calculate-version, create-infrastructure-in-k8s]
    uses: ./.github/workflows/create-services-environment-in-k8s.yaml
    with:
      branch-name: ${{ inputs.branch-name }}-${{ inputs.repo-name }}
      base-branch-name: 'master'
      cluster-name: ${{ needs.calculate-version.outputs.cluster-name }}
      cluster-name-not-normalised: ${{ needs.calculate-version.outputs.cluster-name-not-normalised }}

Now I can hear you saying: what about our batch jobs that run in the middle of the night? For testing those scenarios we will have a dedicated Kubernetes Environment and install it in our default GKE Cluster.

GKE Cluster Creation

So how are we creating a Kubernetes Environment at 06:00 o'clock? With the help of Github Actions workflows and Terraform.

mehmetsalgar/fsm-akka-helm-infrastructure-chart/.github/workflows/create-terraform.yaml

name: Terraform Create
run-name: Terraform create triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  schedule:
    - cron: "0 6 * * *"
  workflow_dispatch:
jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: ./terraform
    env:
      TF_VAR_cluster_name: ${{ vars.GKE_CLUSTER_NAME }}
      TF_VAR_credential: ${{ secrets.GCP_CREDENTIALS }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
      - name: Terraform Init
        run: terraform init
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
      - name: Terraform Workspace
        run: terraform workspace new fsmakka_nightly
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
      - name: Terraform Plan
        run: terraform plan -input=false
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
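
The counterpart that destroys this Cluster at 19:00 o'clock is not shown here, but it would essentially be the same workflow with a ‘0 19 * * *‘ cron schedule, running something along these lines against the same Terraform workspace (a sketch):

terraform workspace select fsmakka_nightly
terraform destroy -auto-approve -input=false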

Now that Terraform has created our new GKE Cluster, we have to install the necessary Kubernetes Operators that are responsible for installing Apache Kafka (Strimzi Operator), Apache Cassandra (k8ssandra-operator) and Elasticsearch (ECK Operator).

         GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
  prepare-gke:
    permissions:
      contents: 'read'
      id-token: 'write'
    runs-on: ubuntu-latest
    steps:
      ...
      - name: Run Helm Command
        id: helmCommand
        shell: bash
        env:
          HELM_COMMAND: ${{ inputs.helm-command }}
          SERVICE_ACCOUNT: ${{ secrets.GCP_CREDENTIALS }}
        run: |
          echo $SERVICE_ACCOUNT > /tmp/${{ github.run_id }}.json
          gcloud auth activate-service-account --key-file /tmp/${{ github.run_id }}.json
          gcloud components install gke-gcloud-auth-plugin
          export USE_GKE_GCLOUD_AUTH_PLUGIN=True
          until gcloud container clusters get-credentials fsmakka-${{ vars.GKE_CLUSTER_NAME }} --zone europe-west3-c --project fsmakka;
          do
            echo "Try again for get-credentials!"
            sleep 10
          done
          until [[ $(gcloud container clusters describe fsmakka-${{ vars.GKE_CLUSTER_NAME }} --zone europe-west3-c --project fsmakka --format json | jq -j '.status') == 'RUNNING' ]];
          do
            echo "Try again for status!"
            sleep 10
          done
          helm repo add strimzi https://strimzi.io/charts/
          helm repo add elastic https://helm.elastic.co
          helm repo add k8ssandra https://helm.k8ssandra.io/stable
          helm repo add jetstack https://charts.jetstack.io
          helm repo update
          helm install fsm-akka-strimzi strimzi/strimzi-kafka-operator --namespace strimzi-operator --create-namespace --set watchAnyNamespace=true
          helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
          helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
          helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator --create-namespace --set global.clusterScoped=true
      - name: Check Failure
        if: steps.helmCommand.outcome != 'success'
        run: exit 1
  prepare-argo-cd:
    name: Prepare ArgoCD for new GKE Cluster

Unfortunately, the completion of the Terraform run does not mean that the GKE Cluster is ready to serve requests, so when we want to update the ‘.kube/config’ we have to wait for the Cluster initialisation.

until gcloud container clusters get-credentials fsmakka-${{ vars.GKE_CLUSTER_NAME }} --zone europe-west3-c --project fsmakka;
do
  echo "Try again for get-credentials!"
  sleep 10
done

When this succeeds, we can get the authentication information, but that does not mean the GKE Cluster is ready, so we have to wait until GKE reports the ‘RUNNING’ status.

until [[ $(gcloud container clusters describe fsmakka-${{ vars.GKE_CLUSTER_NAME }} --zone europe-west3-c --project fsmakka --format json | jq -j '.status') == 'RUNNING' ]];
do
  echo "Try again for status!"
  sleep 10
done

‘gcloud container clusters’ has a really nice subcommand ‘describe’, which displays the current state of the GKE Cluster; with the ‘--format json’ option this is delivered in JSON format, so we can query it with ‘jq -j .status’ and when it reports ‘RUNNING’ we can continue with the Pipeline, which will install the mentioned Kubernetes Operators.

The next step is to let ArgoCD know about the existence of the new Kubernetes Cluster.

  prepare-argo-cd:
    name: Prepare ArgoCD for new GKE Cluster
    runs-on: ubuntu-latest
    steps:
      - name: Prepare ArgoCD for new GKE Cluster Step
        uses: aurelien-baudet/workflow-dispatch@v2
        with:
          workflow: 'prepare-new-gke-cluster.yaml'
          repo: 'mehmetsalgar/fsm-akka-argocd'
          ref: "master"
          token: ${{ secrets.PERSONAL_TOKEN }}
          wait-for-completion: true
          wait-for-completion-timeout: 5m
          wait-for-completion-interval: 10s

mehmetsalgar/fsm-akka-argocd/.github/workflows/prepare-new-gke-cluster.yaml

name: Prepare Day Over GKE
run-name: Prepare day over GKE triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  workflow_dispatch:
jobs:
  prepare-gke:
    permissions:
      contents: 'read'
      id-token: 'write'
    runs-on: ubuntu-latest
    steps:
      ...
      - name: Run Helm Command
        id: helmCommand
        shell: bash
        env:
          SERVICE_ACCOUNT: ${{ secrets.GCP_CREDENTIALS }}
          CLUSTER_URL: ${{ secrets.CLUSTER_URL }}
          CLUSTER_CONFIG: ${{ secrets.CLUSTER_CONFIG }}
          GCP_SERVICE_ACCOUNT: ${{ secrets.GCP_SERVICE_ACCOUNT }}
        run: |
          echo $SERVICE_ACCOUNT > /tmp/${{ github.run_id }}.json
          gcloud auth activate-service-account --key-file /tmp/${{ github.run_id }}.json
          gcloud components install gke-gcloud-auth-plugin
          export USE_GKE_GCLOUD_AUTH_PLUGIN=True
          gcloud container clusters get-credentials ${{ vars.CLUSTER_NAME_DEV_DAY_OVER }} --zone europe-west3-c --project fsmakka;
          gcloud container clusters get-credentials ${{ vars.CLUSTER_NAME_DEV }} --zone europe-west3-c --project fsmakka;
          cd helm
          kubectl config current-context
          helm repo add argo-cd https://argoproj.github.io/argo-helm
          helm repo up
          helm dep up
          CLUSTER_DAY_OVER_URL=$(gcloud container clusters describe ${{ vars.CLUSTER_NAME_DEV_DAY_OVER }} --zone europe-west3-c --project fsmakka --format json | jq -j '.endpoint')
          CLUSTER_DAY_OVER_CA_DATA=$(gcloud container clusters describe ${{ vars.CLUSTER_NAME_DEV_DAY_OVER }} --zone europe-west3-c --project fsmakka --format json | jq -j '.masterAuth.clusterCaCertificate')
          helm upgrade argocd \
                       . \
                       --install \
                       --create-namespace \
                       -n argocd \
                       -f values-development.yaml \
                       -f values-gke.yaml \
                       --set projects.enabled=true \
                       --set cluster.url=$CLUSTER_URL \
                       --set cluster.config=$CLUSTER_CONFIG \
                       --set clusterDayOver.url=https://$CLUSTER_DAY_OVER_URL \
                       --set clusterDayOver.caData=$CLUSTER_DAY_OVER_CA_DATA \
                       --set argo-cd.controller.serviceAccount.annotations."iam\.gke\.io/gcp-service-account=$GCP_SERVICE_ACCOUNT" \
                       --set argo-cd.server.serviceAccount.annotations."iam\.gke\.io/gcp-service-account=$GCP_SERVICE_ACCOUNT" \
                       --set argo-cd.repoServer.serviceAccount.annotations."iam\.gke\.io/gcp-service-account=$GCP_SERVICE_ACCOUNT" \
                       --set argo-cd.applicationSet.serviceAccount.annotations."iam\.gke\.io/gcp-service-account=$GCP_SERVICE_ACCOUNT"
      - name: Check Failure
        if: steps.helmCommand.outcome != 'success'
        run: exit 1
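
The escaped dots in the ‘--set’ flags above are just a command-line way of attaching the GKE Workload Identity annotation to each ArgoCD service account. Expressed as plain chart values this corresponds roughly to the following (a sketch, the exact keys belong to the upstream ‘argo-cd’ chart):

argo-cd:
  controller:
    serviceAccount:
      annotations:
        iam.gke.io/gcp-service-account: "<GCP service account e-mail>"
  server:
    serviceAccount:
      annotations:
        iam.gke.io/gcp-service-account: "<GCP service account e-mail>"
  # the same annotation is set for repoServer and applicationSet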

Now one really nice feature of the ArgoCD Operator: our installation in the default GKE Cluster can control the deployments to our newly created GKE Cluster with the help of the following configuration.

mehmetsalgar/fsm-akka-argocd/helm/templates/project.yaml

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: fsm-akka-4eyes-project
  namespace: argocd
spec:
  description: "Project for FSM Akka Four Eyes Event Sourcing Application"
  destinations:
    ...
      #day over
    - namespace: fsmakka
      name: {{ .Values.clusterDayOver.name }}
      server: {{ .Values.clusterDayOver.url }}
    - namespace: feature-*
      name: {{ .Values.clusterDayOver.name }}
      server: {{ .Values.clusterDayOver.url }}
    - namespace: integration-*
      name: {{ .Values.clusterDayOver.name }}
      server: {{ .Values.clusterDayOver.url }}
    - namespace: release-*
      name: {{ .Values.clusterDayOver.name }}
      server: {{ .Values.clusterDayOver.url }}
    - namespace: bugfix-*
      name: {{ .Values.clusterDayOver.name }}
      server: {{ .Values.clusterDayOver.url }}
    - namespace: hotfix-*
      name: {{ .Values.clusterDayOver.name }}
      server: {{ .Values.clusterDayOver.url }}
  sourceNamespaces:
    - fsmakka
    - feature-*
    - integration-*
    - release-*
    - bugfix-*
    - hotfix-*
  sourceRepos:
    {{ toYaml .Values.projects.sourceRepos }}
  clusterResourceWhitelist:
    - group: "*"
      kind: "*"
  namespaceResourceWhitelist:
    - group: "*"
      kind: "*"

mehmetsalgar/fsm-akka-argocd/helm/values-gke.yaml

cluster:
  name: fsmakkaGKE
  url: "dummy"
  config: "dummy"

clusterDayOver:
  name: fsmakkaGKEDayOver
  url: "dummy"
  caData: "dummy"

argo-cd:
  enabled: true
  global:
    nodeSelector:
      iam.gke.io/gke-metadata-server-enabled: "true"

We can configure our GKE Cluster ‘fsmakkaGKEDayOver’ over ‘fsmakkaGKE’, but we have to give ArgoCD the authorisation information with the help of the following Kubernetes Secret.

mehmetsalgar/fsm-akka-argocd/helm/templates/fsmakkak8s-day-over-cluster-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: fsmakkak8s-day-over-cluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
data:
  name: {{ .Values.clusterDayOver.name | b64enc }}
  server: {{ .Values.clusterDayOver.url | b64enc }}
  config: {{ include "argocd.gke-configuration" . | b64enc }}

As you can see, we have to provide the URL of the newly created GKE Cluster with the help of the following snippet from ‘prepare-new-gke-cluster.yaml’,

CLUSTER_DAY_OVER_URL=$(gcloud container clusters describe ${{ vars.CLUSTER_NAME_DEV_DAY_OVER }} --zone europe-west3-c --project fsmakka --format json | jq -j '.endpoint')

which we read from the GKE Cluster state.

Next in line is the ‘Certificate Authority’ information, which populates the following ArgoCD configuration.

mehmetsalgar/fsm-akka-argocd/helm/templates/_helpers.tpl

{{- define "argocd.gke-configuration" -}}
{
  "execProviderConfig": {
    "command": "argocd-k8s-auth",
    "args": ["gcp"],
    "apiVersion": "client.authentication.k8s.io/v1beta1"
  },
  "tlsClientConfig": {
    "insecure": false,
    "caData": "{{- printf "%s" .Values.clusterDayOver.caData }}"
  }
}
{{- end }}

We get this value with the help of the following snippet from ‘prepare-new-gke-cluster.yaml’.

CLUSTER_DAY_OVER_CA_DATA=$(gcloud container clusters describe ${{ vars.CLUSTER_NAME_DEV_DAY_OVER }} --zone europe-west3-c --project fsmakka --format json | jq -j '.masterAuth.clusterCaCertificate')

This will create the necessary configuration in the ArgoCD ‘AppProject’ Custom Resource.

If we create a Feature Branch for ‘fraud-prevention’ and open a Pull Request, ArgoCD creates the environment in the new Cluster.

As you can see, ArgoCD is deploying to the ‘fsmGkeDayOver’ cluster, and in the Lens IDE we can see that our Infrastructure and Service are deployed to the new GKE Cluster (don’t worry about the yellow triangles, I didn’t give the new GKE Cluster enough resources).
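
If you prefer to verify the registration from the command line instead of the UI, remember that ArgoCD discovers clusters via the labelled secrets shown above, so a quick check could look like this (a sketch, assuming ‘kubectl’ points at the default GKE Cluster and the ‘argocd’ CLI is logged in):

# List the cluster secrets ArgoCD knows about
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster

# Ask ArgoCD itself which clusters it can deploy to
argocd cluster list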

GKE Cluster Destruction

We also need a mechanism to destroy the environment at “19:00”.

mehmetsalgar/fsm-akka-helm-infrastructure-chart/.github/workflows/destroy-terrraform.yaml

name: Terraform Destroy
run-name: Terraform destroy triggered via ${{ github.event_name }} by ${{ github.actor }}
on:
  schedule:
    - cron: "0 19 * * *"
  workflow_dispatch:
jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: ./terraform
    env:
      TF_VAR_cluster_name: ${{ vars.GKE_CLUSTER_NAME }}
      TF_VAR_credential: ${{ secrets.GCP_CREDENTIALS }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
      - name: Terraform Init
        run: terraform init
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
      - name: Terraform Workspace
        run: terraform workspace select fsmakka_nightly
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
      - name: Show Destroy plan
        run: terraform plan -destroy
        continue-on-error: true
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
      - name: Terraform destroy
        id: destroy
        run: terraform destroy -auto-approve
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}
      - name: Terraform Workspace Destroy
        run: |
          terraform workspace select default
          terraform workspace delete fsmakka_nightly
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_CREDENTIALS }}

Configuration

We create the Google Kubernetes Engine (GKE) cluster with the help of the following Terraform configuration.

mehmetsalgar/fsm-akka-helm-infrastructure-chart/terraform/main.tf

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.27.0"
    }
  }
  backend "gcs" {
    bucket  = "terraform-state-fsmakka"
    prefix  = "terraform/state"
  }
  required_version = ">= 0.14"
}

The first thing we have to do is the Terraform provider configuration; since we are working with GCP, we choose the ‘google’ provider and the ‘gcs’ backend.

mehmetsalgar/fsm-akka-helm-infrastructure-chart/terraform/providers.tf

provider "google" {
  credentials = var.credential
  project     = var.project
  region      = var.region
  zone        = var.zone
}

In this case, ‘credential’ is the JSON key of the Service Account, which is delivered via the Github Action.

TF_VAR_credential: ${{ secrets.GCP_CREDENTIALS }}
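
Terraform picks up any environment variable with the ‘TF_VAR_’ prefix as the Terraform variable of the same name, so the same configuration can also be driven locally, for example like this (a sketch, the key file path is a placeholder):

# Feed the Service Account key to Terraform from a local key file (placeholder path)
export TF_VAR_credential="$(cat ~/keys/fsmakka-sa.json)"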

mehmetsalgar/fsm-akka-helm-infrastructure-chart/terraform/network.tf

# VPC
resource "google_compute_network" "vpc" {
  name                    = "${var.project}-${var.cluster_name}-vpc"
  #routing_mode            = "GLOBAL"
  auto_create_subnetworks = "false"
}

# Subnet
resource "google_compute_subnetwork" "subnet" {
  name          = "${var.project}-${var.cluster_name}-subnet"
  region        = var.region
  network       = google_compute_network.vpc.name
  ip_cidr_range = var.subnetwork_ip_range

  secondary_ip_range = [
    {
      range_name    = "${var.project}-${var.cluster_name}-gke-pods-1"
      ip_cidr_range = var.subnetwork_pods_ip_range
    },
    {
      range_name    = "${var.project}-${var.cluster_name}-gke-services-1"
      ip_cidr_range = var.subnetwork_services_ip_range
    }
  ]

  lifecycle {
    ignore_changes = [secondary_ip_range]
  }
}

Then we have to configure the network for our GKE Cluster. The only interesting parts are that we define separate secondary IP ranges for the Pods and the Services, and that Terraform should ignore changes of the secondary IP ranges, because the query API for the secondary IP ranges does not always return them consistently and Terraform would otherwise unnecessarily try to update the network.

mehmetsalgar/fsm-akka-helm-infrastructure-chart/terraform/gke.tf

# GKE cluster
resource "google_container_cluster" "fsmakka_cluster" {
  name     = "${var.project}-${var.cluster_name}"
  location = var.zone

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name

  vertical_pod_autoscaling {
    enabled = var.vpa_enabled
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = "${var.project}-${var.cluster_name}-gke-pods-1"
    services_secondary_range_name = "${var.project}-${var.cluster_name}-gke-services-1"
  }

  addons_config {
    network_policy_config {
      disabled = false
    }
  }

  network_policy {
    enabled = true
  }

  lifecycle {
    ignore_changes = [
      node_pool,
      network,
      subnetwork,
      resource_labels,
    ]
  }
}

# Separately Managed Node Pool
resource "google_container_node_pool" "fsmakka_cluster_nodes" {
  name       = google_container_cluster.fsmakka_cluster.name
  location   = var.zone
  cluster    = google_container_cluster.fsmakka_cluster.name
  node_count = var.gke_num_nodes

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels = {
      env = var.project
    }

    # preemptible  = true
    machine_type = var.machine_type
    tags         = ["gke-node", "${var.project}-${var.cluster_name}"]
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}

The GKE configuration, the location of the Cluster, the number of nodes and the machine type for the Node Pool are all delivered via variables, mostly with default values, but if you want to customise them you can do so via Github Variables.

One variable that does not have a default is the Cluster Name, which you have to define as a Github variable.

TF_VAR_cluster_name: ${{ vars.GKE_CLUSTER_NAME }}
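
Other variables can be overridden the same way; for example, to change the node count and the machine type, the workflow’s ‘env’ block could be extended roughly like this (the Github variable names ‘GKE_NUM_NODES’ and ‘GKE_MACHINE_TYPE’ are hypothetical):

TF_VAR_gke_num_nodes: ${{ vars.GKE_NUM_NODES }}
TF_VAR_machine_type: ${{ vars.GKE_MACHINE_TYPE }}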

mehmetsalgar/fsm-akka-helm-infrastructure-chart/terraform/variables.tf

variable "cluster_name" {
  description = "GKE Name"
  type = string
}

variable "credential" {
  description = "Google Cloud Service Account Key"
  type = string
}

variable "gke_num_nodes" {
  default     = 3
  description = "number of gke nodes for Node Pool "
}

variable "machine_type" {
  default = "e2-medium"
  description = "machine type for our Node Pool"
}

variable "project" {
  default = "fsmakka"
  description = "Google Cloud Platform Project Name"
  type = string
}

variable "region" {
  default = "europe-west3"
  description = "GKE Region"
  type = string
}

variable "subnetwork_ip_range" {
  default = "10.156.0.0/20"
  description = "Google Cloud Subnetwork IP Range"
  type = string
}

variable "subnetwork_pods_ip_range" {
  default = "10.92.0.0/14"
  description = "Google Cloud Subnetwork Pods IP Range"
  type = string
}

variable "subnetwork_services_ip_range" {
  default = "10.96.0.0/20"
  description = "Google Cloud Subnetwork Services IP Range"
  type = string
}

variable "vpa_enabled" {
  default = false
  description = "GKE Vertical Pod Autoscaling Enabled"
  type = bool
}

variable "zone" {
  default = "europe-west3-c"
  description = "GKE Zone"
  type = string
}

And finally, the variables themselves.
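
Putting it all together, a local dry run of the cluster creation could look roughly like this (a sketch; the key file path and cluster name are placeholders, and the ‘gcs’ backend bucket must be reachable with your credentials):

export TF_VAR_credential="$(cat ~/keys/fsmakka-sa.json)"   # placeholder key file
export TF_VAR_cluster_name="nightly"                       # placeholder cluster name
cd terraform
terraform init
# the nightly cluster gets its own workspace so its state stays separate
terraform workspace new fsmakka_nightly || terraform workspace select fsmakka_nightly
terraform plan
terraform apply -auto-approve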