In this article we will cover the installation of open source Kyma on a local development system.
For some background information about container engines, orchestration, Kubernetes, Kyma open source and the SAP Business Technology Platform Kyma runtime, see
For the video tutorial series covering the SAP BTP Kyma runtime, see
For information about deploying Kyma on Rancher Desktop, see
Kyma 2.0
As illustrated on the Kyma project website, installing Kyma is easy. All we need to do is install a few client tools (a couple of seconds) and execute two commands: one to provision a cluster (about 30 seconds) and one to deploy Kyma (several minutes, depending on the available resources).
For the guide, see
As a reminder, quote/unquote:
Kyma /kee-ma/ is a cloud-native application runtime that combines the power of Kubernetes with a set of best-in-class tools and open-source components that empower you to develop, run, and operate secure and scalable cloud-native applications.
Hence, to run Kyma locally, we need to install and create a local Kubernetes cluster and deploy the Kyma runtime using the Kyma CLI (and optionally add the Kyma Dashboard).
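At a glance, the whole flow looks like this. This is only a preview sketch of the steps detailed below, assuming macOS with Homebrew (use Chocolatey on Windows):
# install the client tools
brew install k3d kyma-cli
# provision a local k3s cluster
kyma provision k3d
# deploy the Kyma runtime (evaluation profile for limited resources)
kyma deploy -p evaluation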
What's New
Before getting started, it is always good to check the docs. For those who have installed Kyma before, here is the delta.
With Kyma 2.0, we switched the local Kubernetes tool from minikube to k3d, which allows for a faster and more lightweight installation. The steps needed to set up a local and remote cluster are now the same.
The commands for installation have been updated as well:
- The deploy command replaces the install and upgrade commands.
- The undeploy command replaces the uninstall command.
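In practice, this amounts to a straightforward rename in any existing scripts; the mapping, shown as comments for reference:
# Kyma 1.x                      Kyma 2.0
# kyma install / kyma upgrade   ->  kyma deploy
# kyma uninstall                ->  kyma undeploy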
For the release notes, see
About Local Kubernetes Clusters
Released shortly after Kubernetes, minikube has been the standard sandbox environment to run Kubernetes locally, i.e. on a developer computer running Linux, macOS, or Windows. Minikube uses a local hypervisor and a virtual machine, which adds overhead (although configurable). More recently, lightweight alternatives like k3d and kind (Kubernetes-in-Docker) have grown in popularity.
For those interested, the CNCF webinar Navigating the Sea of Local Kubernetes Clusters covers the details.
The previous installation using minikube required some more effort, see
Preparation
As preparation, we need to install the software that enables us to create a local Kubernetes cluster and the Kyma runtime.
1a. Install Docker
As you might have guessed, to run Kubernetes inside Docker we need ... Docker!
To download, go to
For large organisations (e.g. SAP), professional use of Docker Desktop requires a paid subscription.
Adjust the resources assigned to Docker: the more, the merrier.
1b. Install k3d
k3d is a lightweight wrapper to run k3s (Rancher Labs' minimal Kubernetes distribution) in Docker. To install k3d, you can use Homebrew on macOS or Chocolatey on Windows (takes about 7 seconds).
# macOS
brew install k3d
# Windows
choco install k3d
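To quickly verify the installation, a simple sanity check (output will vary by release):
# print the k3d client version
k3d version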
For additional installation information, see
To learn more, see
1c. Install Kyma CLI
To install the Kyma command line interface, we can use Homebrew/Chocolatey again (7 seconds).
# macOS
brew install kyma-cli
# Windows
choco install kyma-cli
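As with k3d, a quick sanity check (the exact output depends on the installed release):
# print the Kyma CLI version
kyma version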
1d. Install Kubectl (Optional)
The kubectl CLI is not a prerequisite for Kyma but might come in handy (as referenced in the Kyma documentation).
# macOS
brew install kubernetes-cli
# Windows
choco install kubernetes-cli
Note that according to the Kubernetes documentation, you must use a kubectl version that is within one minor version difference of your cluster.
With the current Kyma default settings, a Kubernetes v1.20 cluster is deployed, and hence the most recent kubectl we can use is v1.21 (and not v1.22 or the latest v1.23).
The Homebrew and Chocolatey package managers typically install the latest version and getting an earlier version might be challenging. For the latest four stable versions, see
Alternatively, use curl.
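For example, a sketch of the curl approach for an Intel Mac, pinning a v1.21 build so the client stays within one minor version of the cluster (adjust the version number and platform path in the URL to your setup):
# download a specific kubectl release (here: v1.21.14 for macOS amd64)
curl -LO "https://dl.k8s.io/release/v1.21.14/bin/darwin/amd64/kubectl"
# make it executable and move it onto the PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
# confirm the client version
kubectl version --client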
New to kubectl?
Kyma CLI
2. Provision Cluster
To create a k3s cluster for Kyma usage, we can use the Kyma CLI (~ 30 seconds).
# Cluster with default settings
kyma provision k3d
For CLI command options, see
# Command help
kyma provision k3d -h
# Cluster with arguments
kyma provision k3d --name='{CUSTOM_NAME}' --k3s-arg='xxx'
Note the default settings:
- Cluster name = kyma
- Kubernetes version = 1.20
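To confirm the cluster is up before deploying Kyma, a quick check (assuming the default name; the kubectl step requires the optional install from step 1d):
# list k3d clusters and their server/agent counts
k3d cluster list
# show the current kubectl context (typically k3d-kyma)
kubectl config current-context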
3. Deploy Kyma
Now that we have a (local) Kubernetes cluster, we can deploy the Kyma runtime. The deployment is configurable. With limited resources, i.e. when using a laptop or desktop, include the evaluation parameter and make sure that Docker has enough resources allocated (see above).
kyma deploy -p evaluation
For information about how to configure the deployment, see
Deploying the evaluation environment takes a couple of minutes (depending on resource allocation).
Creating the cluster and deploying Kyma is identical on macOS and Windows.
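If you installed kubectl (step 1d), you can keep an eye on progress while the deployment runs, for example:
# list all pods across namespaces and check their status
kubectl get pods -A
# or check the Kyma deployments specifically
kubectl get deployments -n kyma-system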
For more information about the components, see
Take a Spin
Kyma Dashboard
You can install and access the dashboard with the command
kyma dashboard
This will pull the latest version and start a busola container. For the documentation, see
With the Kyma dashboard, we can view and configure cluster resources, plus create and configure namespace resources.
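The same can, of course, be done from the command line; a minimal sketch, using a hypothetical namespace name dev:
# create a namespace for your workloads
kubectl create namespace dev
# list namespaces to confirm
kubectl get namespaces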
What's in the Box?
Docker
As k3s runs inside Docker, we can use the Docker Desktop client to take stock of containers, images, and volumes.
Behind the scenes, the kyma provision command calls the k3d cluster create kyma command, which initiates the download (docker pull) of container images from a registry (Docker Hub). Note the small image size(s).
When we deployed the Kyma Dashboard, a fifth image was downloaded (this time from the Google Container Registry, or GCR).
From the images, Docker started (docker run) five containers (virtualised processes):
- k3d server
- k3d agent
- k3d server load balancer
- k3d registry
- k3d tools
The equivalent commands are docker images and docker ps.
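For a more compact view on the command line, docker ps accepts a Go template; a sketch:
# show only name, image, and status of the running k3d containers
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"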
Kubernetes (k3s)
With the kubectl command, we can query the cluster. The Kubernetes version is 1.20, as specified by the kyma provision command (default).
Note that there are two nodes running Linux, using containerd as the container runtime (i.e. not the Docker container engine).
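You can see both for yourself (if kubectl is installed):
# show nodes with Kubernetes version, OS image, and container runtime
kubectl get nodes -o wide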
To recap, our developer computer is running a single operating system process (com.docker.hyperkit on macOS, Hyper-V on Windows). Inside the host process, a number of virtualised Linux processes are running (containers), which create the k3s environment with a server (control plane) and an agent node. The agent creates new virtualised process environments (pods), this time using the containerd engine. Kyma is running in these pods.
For more information about the k3s architecture, see
Kubectl
Using the kubectl command, you can query the cluster for namespaces, deployments, services, and pods.
# get namespaces
kubectl get namespaces
# get deployments
kubectl get deployment -o wide -n kube-system
# get services
kubectl get services -o wide -n kube-system
# get pods
kubectl get pods -o wide -n kube-system
There are three deployments in the kube-system namespace, running on the server node.
Similarly, we can query the Kyma runtime.
# get deployments
kubectl get deployment -o wide -n kyma-system
kubectl get deployment -o wide -n kyma-integration
kubectl get deployment -o wide -n istio-system
# get services
kubectl get services -o wide -n kyma-system
# get pods
kubectl get pods -o wide -n kyma-system
# get all endpoints
kubectl get endpoints -A
# container resource usage
kubectl top pod --containers -A
Note the large number of pods. This explains why it took some time to deploy Kyma.
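To put a number on it, a quick one-liner (requires a POSIX shell):
# count all pods across namespaces
kubectl get pods -A --no-headers | wc -l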
Maintenance
k3d
We can create a k3s cluster using the Kyma CLI but not perform any cluster maintenance. For this, use the k3d command. For the command options, see
k3d node list
k3d cluster list
# assuming default name "kyma"
k3d cluster stop kyma
k3d cluster start kyma
k3d cluster delete kyma
Docker
Similarly, we can use the docker command to clean up containers, images, flotsam, jetsam, lagan, and derelict.
# clean up unused resources
docker image prune -af
docker volume prune -f
docker system prune -af
# remove all containers
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
# remove all images
docker rmi $(docker images -a -q)
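For a complete local clean-up, the individual pieces combine as follows (a sketch, assuming the default cluster name kyma):
# remove the Kyma workloads from the cluster (optional if deleting the cluster anyway)
kyma undeploy
# delete the k3d cluster
k3d cluster delete kyma
# reclaim disk space from leftover images and volumes
docker system prune -af
docker volume prune -f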
Share and Connect
Questions? Please post as comment.
Useful? Give us a like and share on social media.
Thanks!
If you would like to receive updates, connect with me on
For the author page of SAP PRESS, visit
Over the years, for the SAP HANA Academy, SAP's Partner Innovation Lab, and in a personal capacity, I have written a little over 300 posts here for the SAP Community. Some articles only reached a few readers. Others attracted quite a few more. For your reading pleasure and convenience, here is a curated list of posts which somehow managed to pass the 10k-view milestone and, as a sign of current interest, still tickle the counters each month.