Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
mariusobert
Developer Advocate
This post will show you how you can use the full power of the GitHub platform for your Kubernetes-based projects. I’ll show you a minimal sample that uses GitHub to store, manage, and maintain its source code and build, publish, and deploy Docker images. In other words: We’ll use GitHub Actions to build a CI/CD pipeline that deploys to your SAP BTP Kyma cluster.


The stages of the sample project


 

Continuous integration and continuous delivery/deployment (in short: CI/CD) is a widely discussed topic. The famous quote by Werner Vogels ("You build it, you run it") is already 15 years old and more valid than ever. CI/CD and DevOps in general have grown considerably in the past few years and touch the lives of almost every developer nowadays. And as always with large fields, the discussion becomes nearly religious and people argue with much passion, which is why I don't want to make this post about how to do CI/CD "right". Instead, I want to focus on a straightforward scenario that you probably won't use in production as-is. This post might even describe one of the worst production flows, as it involves neither feature branches, feature flags, test suites, nor multiple system landscapes.

But what this flow does offer is a template to build your next prototype rapidly with GitHub Actions and run it on the Kyma runtime. The best thing about all of this is that you can test this for free with SAP BTP Trial (or SAP BTP’s New Free Tier Plans) and the public repositories on GitHub.

There are similar blog posts that focus on different aspects, e.g., by jamie.cawley. I recommend reading these posts as well if you are curious about how to work with these technologies.

Automated Deployments to Kyma


This hands-on will be pretty simple: We'll have a simple express.js app that starts a web server and returns a "Hello SAP Tech Bytes" string, followed by the current version string. In addition, we'll have a Dockerfile that describes the container image for the express app. Of course, we also need a Kyma manifest file to deploy the app to the cluster. The base scenario is rounded off with a pipeline that runs the test suite on every push.

All of this probably looks similar to your own projects. What is unique about this post is that it highlights all the small steps needed to create a service account, embed the .kubeconfig in your GitHub repo, and trigger the kubectl commands from the CI/CD job.

And when all these pieces come together, you’ll be able to trigger the deployment of the latest version with these simple commands:
npm version patch
git push --follow-tags


Your dev deployment process could be as easy as this.


 

 

Disclaimer: Please note that the offering of similar GitHub Actions is vast. I found that these actions worked well for me and, more importantly, well together. But this doesn't mean that you need to pick the same actions for your project. Consider this guidance only.

Hands-On


You can find all source code on GitHub. Feel free to fork the repository and run this code in your Kyma cluster. Note that the sample repository contains slightly different code than displayed here as it needs to run from a branch that is not the main branch.

0. Preparations


It probably won't surprise you that you need a Kyma cluster and a GitHub account for this hands-on. Besides, you also need to have the following tools and runtimes installed locally: Node.js (including npm), git, and kubectl.

1. The Express App


The main app (server.js) is straightforward and starts an HTTP server that returns a hello world message when called.
'use strict';

const express = require('express');

const version = "version is in development";

const app = express();
app.get('/', (_, res) => {
  res.send(`Hello SAP Tech Bytes! This ${version}`);
});

app.listen(process.env.PORT || 3000, () => {
  console.log(`Started on port ${process.env.PORT || 3000}`);
});

The package.json descriptor lists the usual properties and the dependencies of the project. As we don't need a big test suite for this small server, I only added a "placeholder test" that is always successful.
{
  "name": "tech-bytes_kyma-cicd",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js",
    "build": "node buildScript.js",
    "test": "echo \"Success\""
  },
  "dependencies": {
    "express": "^4.16.1",
    "replace-in-file": "^6.2.0"
  }
}

Note that there is also a build script in this file. During the build step, buildScript.js will replace the fixed string in the application with the current version string. You probably won’t do this in your app and instead call cds build or a similar command here.
const replace = require("replace-in-file");
const pkg = require("./package.json");

try {
  replace.sync({
    files: "server.js",
    from: [/version is in development/g],
    to: [`is version ${pkg.version}`],
  });
} catch (error) {
  console.error("Error occurred:", error);
}
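The effect of this script can be illustrated with plain shell tools — the following sed call performs the same substitution (the version string 1.0.1 is just an example value):

```shell
# Same substitution as buildScript.js, shown with sed (1.0.1 is an example version)
echo 'Hello SAP Tech Bytes! This version is in development' \
  | sed 's/version is in development/is version 1.0.1/'
# → Hello SAP Tech Bytes! This is version 1.0.1
```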

If you want, you can run the following commands to test the app locally:
npm install
npm start


Running on localhost.



2. Dockerize the App


As with all Kubernetes projects, this application needs to be wrapped in a Docker image. The following standard Dockerfile will do this for you.
FROM node:14-alpine

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./

RUN npm install

# Bundle app source
COPY . .

EXPOSE 3000
CMD [ "node", "server.js" ]

3. Create a Service Account


You might know that regular sessions based on .kubeconfig files from the Kyma console expire after eight hours. This would cause many problems in your CI/CD pipeline if there weren't an alternative. Luckily, there is one: creating a Kyma service account. This tutorial shows you how to create a .kubeconfig for a service account that doesn't expire.

Save the created file as we'll need it again in a future step.
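In essence, such a service account boils down to two manifests along the lines of the sketch below. All names here (the tutorial namespace, tutorial-sa, and the admin cluster role) are assumptions for illustration — follow the linked tutorial for the exact steps and for generating the matching .kubeconfig.

```yaml
# Sketch only — names and role are assumptions, see the linked tutorial
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tutorial-sa
  namespace: tutorial
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tutorial-sa-deploy
  namespace: tutorial
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: ServiceAccount
    name: tutorial-sa
    namespace: tutorial
```

The token of such a service account does not expire, which is what makes it suitable for CI/CD.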

4. The Kyma Manifest


The Kyma manifest will pick up the Docker image from GitHub Packages and deploy it to your development cluster. All parameters that describe this deployment are mentioned in the k8s/dev_deployment.yaml descriptor:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tech-bytes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tech-bytes
  template:
    metadata:
      labels:
        app: tech-bytes
        version: v1
    spec:
      containers:
        - image: ghcr.io/<user>/<repo name>:latest # REPLACE THIS LINE
          imagePullPolicy: Always
          name: tech-bytes
          ports:
            - name: http
              containerPort: 3000
          resources:
            limits:
              memory: 2000Mi
            requests:
              memory: 32Mi
      imagePullSecrets:
        - name: regcred

---
apiVersion: v1
kind: Service
metadata:
  name: tech-bytes
  labels:
    app: tech-bytes
spec:
  ports:
    - port: 8080
      name: http
      targetPort: 3000
  selector:
    app: tech-bytes

---
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: tech-bytes
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    name: tech-bytes
    port: 8080
    host: tech-bytes
  rules:
    - path: /.*
      methods: ["GET"]
      accessStrategies:
        - handler: noop
          config: {}

Don’t forget to change the image tag in the file above. This tag needs to point to your GitHub user and package name.
Another essential detail of this file is the imagePullSecret "regcred". This secret is required because GitHub only allows authenticated pulls by default. Make sure you are logged in with kubectl and run the following command to set up this secret.
kubectl -n tutorial create secret docker-registry regcred --docker-server=https://ghcr.io --docker-username=<github user> --docker-password=<github personal access token>

This guide might help you if you are not sure how to create a personal access token.

5. Set up the GitHub Repository



  1. In case you haven't done so yet, create a new public repository on GitHub.

  2. Now it's time to initialize the repository on your local machine and connect it to GitHub (replace the placeholders with your values):
    git init
    git remote add origin https://github.com/<user>/<repo name>.git

  3. Commit and push all the changes to GitHub.
    git add .
    git commit -a -m "initial commit"
    git push​


  4. You should now see your source code in the repository.

  5. GitHub needs the previously generated service account to deploy the Docker image to Kyma. Create an encrypted secret DEV_KUBECONFIG in your repository settings to store the .kubeconfig securely. Don't just paste the content of the file; make sure it's base64-encoded.
    # For macOS
    cat tutorial-kubeconfig.yaml | base64
    # For Linux (disable line wrapping)
    base64 -w 0 tutorial-kubeconfig.yaml
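Before pasting the value into the secret, you can sanity-check that the encoded string decodes back to the original file. The file content below is a dummy stand-in — use your real tutorial-kubeconfig.yaml:

```shell
# Round-trip check (dummy content; substitute your real kubeconfig file)
printf 'apiVersion: v1\nkind: Config\n' > tutorial-kubeconfig.yaml
encoded=$(base64 < tutorial-kubeconfig.yaml | tr -d '\n')
# Decoding must reproduce the original file content
printf '%s' "$encoded" | base64 --decode
```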



6. Create Two GitHub Workflows


Let's start slow and create an easy workflow definition .github/workflows/main.yaml.
name: Run Tests

on:
  push:
    branches:
      - main

jobs:
  run-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 14

      - run: npm install
      - run: npm test

This flow is triggered on every push to the main branch. It uses two standard actions to check out the repository and install Node.js, and then executes two npm commands to install the dependencies and run the tests.

 

Let's shift up a gear and create a second workflow, .github/workflows/deploy.yaml. This flow won't be triggered automatically and can only be executed by a button press on the GitHub website.
name: Deploy Manually

# Triggered manually
on: workflow_dispatch

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: sap-samples/tech-bytes-kyma-cicd

jobs:
  deploy-to-dev:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - run: npm install

      - run: npm run build

      - name: Log in to the container registry
        uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          labels: ${{ steps.meta.outputs.labels }}

      - uses: steebchen/kubectl@v2.0.0
        with:
          config: ${{ secrets.DEV_KUBECONFIG }}
          command: apply -n tutorial -f ./k8s/dev_deployment.yaml

This flow is a bit longer than the first one. We already know the first action from the previous flow, and then another two npm commands are executed. The following three actions from Docker log in to the image registry with the actor's credentials, derive tags and labels for the image, and build and push the image to the registry. The last action finally reads the kubeconfig from the secret we created in the previous step and deploys the image to our Kyma cluster.

This workflow will only be picked up if the file is also stored in the default branch!

7. Trigger the Deployment Manually


Go to the repository on the GitHub website, navigate to the Actions tab, and select Run workflow and the right branch containing the Dockerfile. This makes sure that you use the flow and files that are defined in the respective branch.


Run a workflow from the website.



Refresh the page and click on the workflow that you now see.



You should see a success message once the workflow is completed.


Once the task is completed, go to the Kyma console to find the URL of the demo application and access it.


You can find the URL of the application in the Kyma console.



You will see the production build of the application when you click on the link.



8. Update the Deployment on Each Push


You're almost there! Create the last workflow .github/workflows/publish.yaml, which will be invoked on every push that contains a new tag.
name: Release New Version

on:
  push:
    tags:
      - "v*" # Push events to matching v*, i.e. v1.0, v20.15.10

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: sap-samples/tech-bytes-kyma-cicd

jobs:
  run-tests:
    runs-on: ubuntu-latest
    name: Run tests
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 14

      - run: npm install
      - run: npm test

  build-and-push-image:
    needs: run-tests
    name: Build and push the image
    outputs:
      image-tag: ${{ steps.get-image-tag.outputs.result }}
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - run: npm install

      - run: npm run build

      - name: Log in to the container registry
        uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

      - name: Get the image tag
        uses: actions/github-script@v4
        id: get-image-tag
        with:
          script: return ${{ steps.meta.outputs.json }}.tags[0]
          result-encoding: string

  deploy-to-dev:
    needs: build-and-push-image
    name: Deploy to dev
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Deploy to the dev environment
        uses: steebchen/kubectl@v2.0.0
        with:
          config: ${{ secrets.DEV_KUBECONFIG }}
          command: -n tutorial set image deployment/tech-bytes tech-bytes=${{ needs.build-and-push-image.outputs.image-tag }}

  create-release:
    needs: build-and-push-image
    name: Create release
    runs-on: ubuntu-latest
    steps:
      - name: Create release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref }}
          release_name: Release ${{ github.ref }}
          draft: false
          prerelease: false

This flow consists of four jobs, and you already know almost all the steps of the first two (running the tests, then building and pushing the image). The last step of the second job (actions/github-script) defines a small JavaScript snippet that extracts one specific list element from the output of a previous step.

This tag is then exposed as an output of the second job and used by the third job to trigger the creation of a new pod that pulls the latest image from GitHub Packages.
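If you are curious what the extraction snippet effectively does, here is the same logic sketched in plain shell — the JSON is a made-up sample of what docker/metadata-action emits:

```shell
# Made-up sample of the metadata-action JSON output
json='{"tags":["ghcr.io/sap-samples/tech-bytes-kyma-cicd:v1.0.2","ghcr.io/sap-samples/tech-bytes-kyma-cicd:latest"]}'
# Pick the first entry of the "tags" list, as the github-script step does
first_tag=$(printf '%s' "$json" | sed 's/.*"tags":\["\([^"]*\)".*/\1/')
echo "$first_tag"
# → ghcr.io/sap-samples/tech-bytes-kyma-cicd:v1.0.2
```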

Last but not least, we'll create a GitHub release. I found this useful as it will trigger a notification for all followers of a given repository.

9. Trigger the Pipeline


Now it's time to see all of this in action. Go to your local project and run the following two commands to kick everything off and watch GitHub Actions do the rest of the work for you.
npm version patch
git push --follow-tags
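For context: npm version patch bumps the last component of the version in package.json, commits the change, and creates a matching git tag (e.g. v1.0.2) — which is exactly what triggers the release workflow. The bump itself is plain semver arithmetic, sketched here with 1.0.1 as an example version:

```shell
# The semver arithmetic behind "npm version patch" (1.0.1 is an example;
# the real command also creates a git commit and a "v1.0.2" tag)
version=1.0.1
major=${version%%.*}
rest=${version#*.}
minor=${rest%%.*}
patch=${rest#*.}
echo "$major.$minor.$((patch + 1))"
# → 1.0.2
```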


Notice that this push command triggered multiple workflows.


 


A succeeded "Release New Version" flow (this time for tag 1.0.2).



Just like magic, you now see that the deployed version has been updated. Yay!



Summary


In this post, you've learned how to

  • create a Kubernetes service account that has the right to deploy a new image

  • build the Docker image once a new tag has been pushed to the repository

  • use a workflow to deploy the project to the Kyma cluster manually

  • push this image to GitHub's own Docker registry (GitHub packages)

  • create the registry credentials secret so that Kyma can pull images

  • retrieve the tag of the image and hand it over to another job

  • update the current deployment to make use of the latest image


You can see it doesn't take much to deploy your Kyma project straight from GitHub to SAP BTP. As mentioned before, this is most likely not the perfect set of actions for your project. I'd be very interested in additional actions that you found helpful and use in your projects (and what problems they solve for you). Let me know in the comments 🙂 .