In this Cloud-Native Lab post, I’ll compare the CLI clients of two runtimes within SAP BTP – the Cloud Foundry and the Kyma runtime. In other words, I compare the kubectl tool of Kyma and Kubernetes with the cf tool of Cloud Foundry.
This post will have a different focus but follow the same idea as my last one, in which I compared the project manifests of both runtimes. Don't worry if you didn't read that post (yet). I can TLDR that one for you: Project manifests are the recipes that describe how to cook your grandma's dish (an end-to-end business solution) based on the prepared ingredients (microservices). In this analogy, the Kyma and the Cloud Foundry runtimes were the chefs that refine the dish. But even the best chef cannot be successful without having the recipe. You still need a way to pass the recipe over to the chef and provide additional tips – and this is precisely what the command-line interfaces (CLIs) do.
Comparing Cloud Foundry and Kyma
I recommend downloading and installing both CLI clients before continuing to read. This will allow you to code along. Please refer to the following tutorials as needed:
And keep in mind: You can test both runtimes for free with the SAP BTP Trial account.
Connectivity
The CLI clients don't mind how your runtime cluster is hosted and who operates it, but they are certainly interested in where your app needs to run.
Let's have a look at
Cloud Foundry first: The first question you need to answer is "Where would you like to deploy your app?". Do you want to deploy to a self-hosted Cloud Foundry instance or one that is provided (like SAP BTP does)? In case you go for the provided option and you want to run your applications in a data center in Frankfurt, you need to point to the API endpoint of this Cloud Foundry instance:
cf login -a https://api.cf.eu10.hana.ondemand.com
Connect the cf CLI to the Cloud Foundry instance in Frankfurt.
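If you already know where you are headed, you can pass the whole target in one call. A quick sketch, assuming made-up user, org, and space names:

```shell
# Non-interactive login; user, org, and space are placeholders.
cf login -a https://api.cf.eu10.hana.ondemand.com \
  -u developer@example.com -o my-org -s dev

# With single sign-on, request a one-time passcode instead:
cf login -a https://api.cf.eu10.hana.ondemand.com --sso
```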
When you connect to the API endpoint, the first prompt asks for your email and password (or single sign-on). This helps the client provide you with contextual information in the next step. Usually, an endpoint is shared by multiple customers and contains a large number of orgs (aka tenants). Once you are logged in, you will get a list of all available orgs (referring to all subaccounts with the Cloud Foundry runtime enabled). It's time to define the virtual location of your project by selecting your org. An org can contain multiple spaces in which your applications and services live and interact with each other. You can use multiple spaces if you want to separate certain service instances and applications from each other. It's recommended to use multiple orgs (and therefore multiple subaccounts) to separate your landscapes, such as dev, QA, and prod.
To summarize, you need to define a tuple that consists of an API endpoint, an org, and a space to describe where your apps should run.
Cloud Foundry calls this tuple a target, which you can check at any time with cf target.
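For illustration, the output of cf target looks roughly like this (all values are placeholders):

```shell
$ cf target
API endpoint:   https://api.cf.eu10.hana.ondemand.com
User:           developer@example.com
Org:            my-org
Space:          dev
```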
Kubernetes combines the API endpoint and org concepts into one artifact called a cluster (more about that later). In the context of SAP BTP, clusters also have a 1:1 relationship with subaccounts. And the equivalent of Cloud Foundry's space would be a Kubernetes namespace. Like in Cloud Foundry, this unit contains multiple resources such as deployments, pods, API rules, and service instances.
kubectl also needs some user information to connect to a cluster, but in contrast to the cf tool from above, this information cannot be entered via a credentials prompt but needs to be provided via an access token. This tuple of cluster, namespace, and user is called a context in Kubernetes.
kubectl config current-context
This command prints the current context, aka the cluster the kubectl client is currently using.
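As a sketch of how such a context is assembled by hand, with placeholder names and token (the Kyma console usually hands you a ready-made kubeconfig instead):

```shell
# Register the cluster, a token-based user, and a context combining them.
kubectl config set-cluster my-cluster --server=https://api.my-kyma-cluster.example.com
kubectl config set-credentials my-user --token=<access-token>
kubectl config set-context my-context --cluster=my-cluster --user=my-user --namespace=default

# Activate the new context.
kubectl config use-context my-context
```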
As in almost all aspects, Kubernetes and Kyma behave like Cloud Foundry but offer more configuration options and are therefore more complex. That's also valid here as the various items of the cluster tuple are not simple strings but objects that contain multiple properties such as cluster certificates and access tokens.
kubectl stores the user, cluster, and context definitions in the so-called kubeconfig. This configuration can contain multiple context definitions between which you can switch as you want. This means kubectl supports multiple active connections at the same time by default.
kubectl config view # Show default kubeconfig settings.
KUBECONFIG=~/.kube/config:~/.kube/kubeconfig2 # Use multiple kubeconfig files at the same time.
kubectl config view # Show merged kubeconfig settings.
kubectl config get-contexts # display list of contexts
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name
There can be multiple kubectl configuration files that contain different contexts.
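A stripped-down kubeconfig showing the three building blocks and how a context ties them together (all values are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://api.my-kyma-cluster.example.com
    certificate-authority-data: <base64-encoded-ca-certificate>
users:
- name: my-user
  user:
    token: <access-token>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
    namespace: default
current-context: my-context
```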
Basic usage
With the information you have read so far, you know how to connect the clients to cloud systems. But you don't know yet how to start (and later edit or remove) a new project on the connected cloud-native runtime – this will be explained in the next few paragraphs.
It's fair to say that both clients are used to manage the resources of the underlying runtimes. As explained in the last post, the complexity of these resources differs: Cloud Foundry resources are mainly apps (microservices) and services, whereas Kubernetes resources can be anything that implements the Kubernetes API (like a deployment, a pod, or a service).
The cf client offers basic commands to create (cf push), read (cf app), update (cf push), and delete (cf delete) applications. Similarly, there are also CRUD commands for service instances, namely: cf create-service, cf service, cf update-service, and cf delete-service. And as all services need to be connected to at least one application, there are commands to establish and dissolve this connection: cf bind-service and cf unbind-service. These are the essential commands you need to know to start a project on Cloud Foundry. Additionally, it's beneficial to know other commands that can be used to inspect running applications (cf env, cf logs, and cf ssh) and to change their state (cf stop, cf start, cf restart, cf scale, cf restage), but you can easily get to know them on the fly.
cf push
This command is all it takes to kickstart a simple Cloud Foundry project.
kubectl (I pronounce it cube-c-t-l, but I guess you can also pick any of these pronunciations) cannot offer such a set of commands as there are a few dozen resource types. Having dedicated commands for each resource type's CRUD operations would be very hard to manage (and remember). Each command would require a different set of options, making everything even more complicated. Instead, the client's creators came up with a brilliant idea: They added one command for each CRUD operation: kubectl create, kubectl get, kubectl edit, and kubectl delete. All of them accept the resource type as the first parameter. To list all pods of a given namespace, you would execute
kubectl get pods --namespace myFirstNamespace
. It can be tricky to remember the various options for the create and edit operations as each resource type requires different parameters. Therefore, there is a dedicated kubectl explain command that lets the CLI print the documentation of the resource type passed as an option. Most invocations would get too long if you were to add all options to the base command. The solution to this problem is the so-called manifest file (aka the recipe in the analogy). This yaml file offers you a way to define all resources, possibly of multiple resource types, in one document and create them all at once.
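For illustration, a minimal manifest for such a deployment could look like this (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25   # placeholder image
        ports:
        - containerPort: 80
```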
kubectl apply -f deployment/deployment.yaml
This command is all it takes to kickstart a simple Kyma project.
Basic kubectl commands
kubectl run can be compared to cf push as it is a bit of a shortcut to run a Docker image without providing a lot of details. Like with the Cloud Foundry client, there are also commands to inspect provisioned resources and modify their state, but I don't want to dig deeper in order to keep the focus on the commands mentioned above.
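A sketch of that shortcut, using a public nginx image as a stand-in for your own:

```shell
# Run a single pod from an image without writing a manifest.
kubectl run my-app --image=nginx:1.25 --port=80

# Check what was created and read its logs.
kubectl get pods
kubectl logs my-app
```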
Extensibility
We briefly inspected the fundamental commands to connect the clients to the SAP BTP runtimes and deploy implementation projects. Besides the introduced commands, there are quite a few others that can come in handy as well. I recommend running cf --help and kubectl --help to explore them.
But no matter how many standard commands the clients offer, it is also clear that there are always use-cases not covered by them. To mitigate these "shortcomings," both clients implement an extensibility concept via plugins. This gives the developer community a chance to increase the utility of the clients even further.
The Cloud Foundry foundation dedicated an entire page to community plugins. Go there to browse the list of available commands and install the ones that catch your eye:
cf install-plugin -r CF-Community <plugin name>
Command to install plugins in the cf CLI.
At this point, I would also like to recommend the posts of my friend iinside, who provided in-depth explanations on the multiapps and top plugins. Besides the ones recommended by Max, I also recommend the cf targets and cf html5 plugins. The first one comes in handy when you deploy to multiple targets, and the latter provides useful insight into the HTML5 Application Repository service. As the latter shows, there can also be a CLI plugin to interact with just one dedicated Cloud Foundry service.
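As a sketch of the plugin workflow with the targets plugin (double-check the exact plugin and command names on the community page):

```shell
# Install the plugin from the community repository.
cf install-plugin -r CF-Community Targets

# Save the current API/org/space tuple under a name, switch between saved ones.
cf save-target dev
cf set-target prod
cf targets
```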
I haven't used many kubectl plugins yet as I'm just at the beginning of my "Kyma journey." My colleague andreas.thaler01 pointed me to the great extension kubectx, which I installed the second I learned about it. The extension makes it easy to switch between multiple contexts and namespaces, which basically cuts the characters I need to type per command in half! There are multiple installation options available. Mac and Linux users can use their favorite package manager (there are plans to support Chocolatey as well):
brew install kubectx
sudo port install kubectx
sudo apt install kubectx
sudo pacman -S kubectx
kubectl plugins can be installed with supported package managers.
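Once installed, usage is as short as it gets:

```shell
# List all contexts; pass a name to switch, or "-" for the previous one.
kubectx
kubectx my-cluster
kubectx -

# The bundled kubens tool does the same for namespaces.
kubens dev
kubens -
```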
I also found a few recommendations that seemed useful, but I'm not able to endorse them yet. Feel free to drop a comment and link your favorite plugins to help me get started :).
Summary
We saw that both tools more or less fulfill the same needs but differ in the complexity and power of individual commands. It makes sense that kubectl is more complex, as the project manifest of Kyma projects is more complex and mighty as well.
The connectivity to cloud clusters works similarly in both clients. Due to security reasons, both clients limit the token lifetime before a re-authentication is required. The Cloud Foundry token is valid for a full day (24 hours) and the Kyma token (downloaded from the console) expires after a workday (8 hours).
An interesting difference is the behavior of the CLI when a deployment has been triggered. Once the cf CLI has initiated the deployment, it automatically attaches to the running container to print the logs. kubectl, on the other hand, only fires off the deployment intent and prints a success message that the cluster has received it. If needed, you can use a separate command to check the logs of the pod(s) to see how the deployment is going.
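A sketch of how to follow up on such a fire-and-forget deployment (deployment name and label are placeholders):

```shell
# Wait until the rollout has finished (or report why it failed).
kubectl rollout status deployment/my-app

# Stream the logs of the pods behind the deployment.
kubectl logs -l app=my-app --follow
```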
On closer inspection, I would say that both clients aren't too different after all.
Next Steps
This was the third blog post of my series #CloudNativeLab. The name already says all there is to it: This series won’t be about building pure business apps with cloud-native technology. I think there are already plenty of great posts about those aspects out there. Instead, this series covers background information and thinks outside the box to demonstrate unconventional use-cases of cloud-native technology such as Cloud Foundry, Kyma, Gardener, etc.
Previous episode: Cloud-Native Lab #2 – Comparing Cloud Foundry and Kyma Manifests
Next episode: Cloud-Native Lab #4 – Multi-tenant Apps in SAP BTP