The SAP Job Scheduling service can be consumed both from the SAP BTP, Cloud Foundry environment and the SAP BTP, Kyma runtime. You may have seen the blogs about Cloud Foundry (CF) - if not, you can find them in the Overview of Blogs.
Here are the differences between using the Job Scheduling service for CF and for Kyma:
In addition, you will learn:
Here is what the file structure will look like at the end:
```
kyma-part-1/
├── Dockerfile
├── manifest.yaml
├── package.json
└── server.js
```
We will start by preparing our application for deployment in Kyma.
We can reuse one of the applications provided in the tutorials for Cloud Foundry.
We define our dependencies in package.json:
```json
{
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.3"
  }
}
```

The `start` script is required because the Dockerfile below runs `npm run start`.

We define our endpoint logic in server.js using the express library:
```javascript
const express = require('express');
const app = express();

app.get('/runjob', function (req, res) {
  console.log('==> [APP JOB LOG] Job is running . . .');
  res.send('Finished job');
});

const port = process.env.PORT || 3000;
app.listen(port, function () {
  console.log('listening');
});
```
In CF, we've defined a manifest file and used cf push to deploy it to the platform. In Kubernetes, we have one additional step. We need to prepare a Docker image before applying our configurations to the cluster.
```dockerfile
# Use a Node.js base image
FROM node:20

# Create the app directory
WORKDIR /usr/src/app

# Install app dependencies by copying package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Set environment variables
ENV PORT=80

# Copy the app source
COPY . .

# Bind the port on which the container will listen for traffic
EXPOSE 80

# Define the Docker image's behavior at runtime
CMD ["npm", "run", "start"]
```
docker build . -t <your-user>/jobapp:part1
These instructions are for Docker Hub, but you can use any other Docker repository you want.
We need this repository to upload the image we've built in the previous step.
1. Log in to Docker Hub:
docker login -u <your-user>
Alternatively: podman login docker.io -u <your-user>
2. Push the image to your Docker registry:
docker push <your-user>/jobapp:part1
Alternatively: podman push jobapp:part1 <your-user>/jobapp:part1
In order to work with Kyma (which is in fact, a Kubernetes cluster with additional custom resource definitions), we need a Kubeconfig to authenticate and the kubectl CLI.
1. Open the SAP BTP cockpit.
2. Under your subaccount, go to Overview, and find the section Kyma Environment.
3. Open the KubeconfigURL link. This downloads the Kubeconfig file to your local machine.
4. Then open your terminal (Mac, Linux):
export KUBECONFIG=<location of the downloaded file>
For Windows, use:

PowerShell: `$env:KUBECONFIG = "…\kubeconfig.yaml"` (verify with `echo $env:KUBECONFIG`)

CMD: `set KUBECONFIG=…\kubeconfig.yaml` (verify with `echo %KUBECONFIG%`)
It's a good practice to use separate namespaces, which is why we will create one for this exercise.
kubectl create namespace jobapp-part1
The manifest.yaml plays a similar role to the mtad.yaml of a Multitarget Application (MTA), but for Kubernetes.
This YAML file contains all configurations for the Kyma cluster to deploy our application, expose an endpoint, create a service instance and bindings, and mount them to the deployment of our application.
Note that you need to fill:
image: <your-username>/jobapp:part1 (...) host: jobapp-part1-<your-suffix>.<kyma-apps-domain>
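If you are unsure what to use for `<kyma-apps-domain>`, the manifest comments explain that it is the cluster's APIServerURL without the `https://api.` prefix. A minimal sketch of that transformation (the helper name and example URL are placeholders, not part of any SAP API):

```javascript
// Hypothetical helper: derive the Kyma apps domain from the APIServerURL
// shown in the SAP BTP cockpit. The URL below is a placeholder example.
function appsDomain(apiServerUrl) {
  // "https://api.c-123.kyma.ondemand.com" -> "c-123.kyma.ondemand.com"
  return apiServerUrl.replace(/^https:\/\/api\./, '');
}

console.log(appsDomain('https://api.c-123.kyma.ondemand.com'));
// -> c-123.kyma.ondemand.com
```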
You can find help as comments in the manifest.yaml:
```yaml
# defining the service instance
apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  # this will be the name of your service instance
  name: jobscheduler-instance
  labels:
    app.kubernetes.io/name: jobscheduler-instance
  annotations: {}
  namespace: jobapp-part1
spec:
  # the values supplied here are the ones from the Service Marketplace
  # serviceOfferingName is the "Technical name" of the Job Scheduling service
  serviceOfferingName: jobscheduler
  servicePlanName: standard
---
# defining the service binding
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  # this will be the name of your service binding
  name: jobscheduler-binding
  labels:
    app.kubernetes.io/name: jobscheduler-binding
  annotations: {}
  namespace: jobapp-part1
spec:
  # specify the name of the service instance
  serviceInstanceName: jobscheduler-instance
---
apiVersion: v1
kind: Service
metadata:
  namespace: jobapp-part1
  name: jobapp
  labels:
    run: jobapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: jobapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: jobapp-part1
  name: jobapp
  labels:
    app: jobapp
    version: nodejs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jobapp
      version: nodejs
  template:
    metadata:
      labels:
        app: jobapp
        version: nodejs
    spec:
      containers:
        - name: jobapp
          # replace <your-username> with your Docker Hub user/namespace
          image: <your-username>/jobapp:part1
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          env:
            - name: SERVICE_BINDING_ROOT
              value: /bindings
          # this mounts a volume from the service binding
          volumeMounts:
            - mountPath: /bindings/jobscheduler-instance
              name: jobscheduler-volume
              readOnly: true
      volumes:
        # creates a volume for the binding secret
        - name: jobscheduler-volume
          secret:
            defaultMode: 420
            secretName: jobscheduler-binding
---
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  namespace: jobapp-part1
  name: jobapp
  labels:
    app.kubernetes.io/name: jobapp
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  rules:
    - accessStrategies:
        - handler: allow
          config: {}
      methods:
        - GET
        - POST
        - PUT
        - DELETE
      path: /.*
  # host is the address that your app will be accessible over
  # you can get <kyma-apps-domain> from the APIServerURL in the cockpit by removing the "https://api." prefix,
  # or by executing:
  # kubectl config view --minify --output 'jsonpath={.clusters[0].cluster.server}' | awk -F'api.' '{print $2}'
  # <your-suffix> can be whatever you want - as long as the DNS name is unique
  host: jobapp-part1-<your-suffix>.<kyma-apps-domain>
  service:
    name: jobapp
    port: 80
```
kubectl apply -f manifest.yaml
Now that your application is deployed, it's time to test it.
Open your endpoint in the browser (for example, https://jobapp-part1-<your-suffix>.<kyma-apps-domain>/runjob).
You should have specified this URL in the manifest.yaml (host).
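Besides the browser, you can call the endpoint from a small script, for example with Node's built-in `fetch` (Node 18+). The URL below is a placeholder you must replace with your own host from the manifest.yaml:

```javascript
// Hypothetical smoke test for the deployed endpoint.
// Replace the URL with your own host before running.
const url = 'https://jobapp-part1-<your-suffix>.<kyma-apps-domain>/runjob';

async function checkJobEndpoint(endpoint) {
  const res = await fetch(endpoint);
  const body = await res.text();
  console.log(res.status, body); // expect: 200 Finished job
  return body;
}

// checkJobEndpoint(url);
```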
To view the dashboard, your user has to have the SAP_Job_Scheduling_Service_Admin role assigned from the SAP BTP cockpit.
To find the dashboard URL:
1. Go to Instances and Subscriptions in the SAP BTP cockpit.
2. Choose the "jobscheduler" service instance.
3. Choose View Dashboard:
🎉 That's it! 🎉
🥳 You have done it - you have successfully called your newly developed Kyma application using the SAP Job Scheduling service.
When you are done, you can delete all of your resources using:
kubectl delete -f manifest.yaml
so that you are ready for our 🔝 next tutorial 🔝.
Q: Error on kubectl apply?
A: Have you created your namespace? If not, use:
kubectl create namespace jobapp-part1
Q: kubectl apply is successful, but the application is not working?
A: You can open the Kyma UI to validate that all resources are correctly provisioned.
Or you can run the following command and check the READY field in the output:
kubectl -n jobapp-part1 get all,serviceinstance,servicebindings,APIRule
Q: Is my service instance created?
A: You can check that with:

kubectl -n jobapp-part1 describe serviceinstance.services.cloud.sap.com/jobscheduler-instance

or in the Kyma UI.
Q: How to remove my resources?
A: Use CLI:
kubectl delete -f manifest.yaml
Or remove them manually in the Kyma UI.
Q: When I call the application endpoint I get: no healthy upstream
Option 1: Check your pods, especially your Docker image - have you added your username?
kubectl -n jobapp-part1 get pods
kubectl -n jobapp-part1 describe pod jobapp-<unique-id-from-previous-step>
Option 2: Do you have enough quota for the Job Scheduling service? Run:
kubectl -n jobapp-part1 describe serviceinstance.services.cloud.sap.com/jobscheduler-instance
If you see something like the following, your quota is exhausted and you need to increase it.
```
Status:
  Conditions:
    Last Transition Time:  2024-09-13T10:07:26Z
    Message:               BrokerError:, Status: 400, Description: Subaccount quota limit for specified service plan has exceeded. Please contact service administrator.
    Observed Generation:   1
    Reason:                CreateInProgress
    Status:                False
    Type:                  Succeeded
    Last Transition Time:  2024-09-13T10:07:26Z
    Message:
    Reason:                NotProvisioned
```