
This post compares the deployment.yaml manifest of Kyma with the mta.yaml file of SAP's Cloud Foundry deploy service.

[Image: A dockerized SAPUI5 sample app running on the Cloud Foundry and the Kyma runtime]

Both runtimes are driven by declarative manifests: the deployment.yaml of Kubernetes and Kyma, and the mta.yaml file of the deploy service of SAP Cloud Platform, Cloud Foundry. As the name suggests, both manifests use YAML as the file format. While this format is not always easy to write, it is significantly easier to read than other formats such as JSON.

[Image: SAP Cloud Platform cockpit view that shows the service instances of both runtimes]

The mta.yaml file is the manifest of the SAP Cloud Foundry deploy service distribution, and manifest.yaml is the general manifest; you can use both in SAP Cloud Platform. In practice, I see more mta.yaml files, which is why I'll focus on them here.

_schema-version: 3.2.0
ID: project
version: 1.0.0
modules:
  - name: module1
    type: javascript.nodejs
    path: folder1
    requires:
      - name: service_name
    parameters:
      disk-quota: 512M
      memory: 512M
resources:
  - name: service_name
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./configuration.json
      service: service-name
      service-plan: service-plan
The simple structure of the mta.yaml manifest
Modules can also define build-parameters for the project. They can be leveraged to trigger the build process with the mbt tool, which makes it easier to include the project in an optimized CI/CD pipeline later on. These build steps are executed locally, and only the build results are included in the .mtar archive. As the build steps are not needed during deployment, they are removed from the manifest; only the resulting "deployment manifest", which is then called mtad.yaml, is included in the .mtar archive.

Deployment of the .mtar archive is, among other commands, provided by the MultiApps Cloud Foundry CLI Plugin:

cf deploy archive.mtar
cf undeploy archive.mtar
cf mta archive
cf mtas
# and more
Kyma, on the other hand, uses a deployment.yaml (the file name can vary) manifest to organize its resources. These resources are defined in .yaml files, which are then sent to the Kubernetes API Server. It is no surprise that this additional complexity offers a lot of freedom that the Cloud Foundry environment cannot offer. But we all know there is no free lunch: Apps built on Kyma are potentially more powerful than apps built on Cloud Foundry, but it is harder to design and set up applications that use the Kyma runtime.

The first Deployment resource describes a pod that includes one container with fixed compute resources, a Docker image, a port that needs to be exposed (internally), and an attached service binding. The second Service resource exposes the internal port to an internal service. The APIRule resource exposes this service to the public internet and defines how communication can happen. The ServiceInstance resource describes the service name, the service plan, and the provisioning parameters. And the last ServiceBinding resource describes the service credentials of the provisioned service; this resource is also referenced in the first Deployment resource.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: value
  labels:
    app: name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: name
  template:
    metadata:
      labels:
        app: name
    spec:
      volumes:
        - name: service-name
          secret:
            secretName: service-name-binding
      containers:
        - image: user/image
          imagePullPolicy: Always
          name: name
          ports:
            - name: http
              containerPort: 5000
          resources:
            limits:
              memory: 250Mi
            requests:
              memory: 32Mi
          volumeMounts:
            - name: service-name
              mountPath: "/etc/secrets/sapcp/servicename/name_service"
              readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: value
  labels:
    app: name
spec:
  ports:
    - name: http
      port: 5000
  selector:
    app: name
---
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: value
  labels:
    app: name
spec:
  service:
    host: approuter
    name: value
    port: 5000
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  rules:
    - path: /.*
      methods: ["GET", "POST"]
      accessStrategies:
        - handler: noop
      mutators: []
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: service-instance
spec:
  clusterServiceClassExternalName: destination
  clusterServicePlanExternalName: lite
  parameters:
    param1: value1
    param2: value2
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: service-name-binding
spec:
  instanceRef:
    name: service-instance
The structure of the deployment.yaml manifest

Deployment of the deployment.yaml manifest is, among other commands, provided by kubectl:

kubectl apply -f file
The sample app is containerized with the following Dockerfile:

FROM node:12-alpine
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
RUN npm install --only=production
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
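The CMD instruction above assumes that the project's package.json defines an npm start script. A minimal hypothetical package.json (the contents are illustrative, not taken from the original project) could look like this:

```json
{
  "name": "dockerized-sapui5-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  }
}
```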
Create a file named mta.yaml. Paste the following content in the file and then save it:

_schema-version: 3.2.0
ID: cloudnativelab2
version: 1.0.0
modules:
  - name: approuter
    type: javascript.nodejs
    build-parameters:
      no-source: true
    requires:
      - name: cloudnativelab2_destination
      - name: cloudnativelab2_uaa
    parameters:
      disk-quota: 512M
      docker:
        image: iobert/dockerized-sapui5-app
      memory: 512M
resources:
  - name: cloudnativelab2_destination
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./destination.json
      service: destination
      service-plan: lite
  - name: cloudnativelab2_uaa
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./xs-security.json
      service: xsuaa
      service-plan: application
Next, create a file named destination.json:

{
  "init_data": {
    "subaccount": {
      "existing_destinations_policy": "update",
      "destinations": [
        {
          "Name": "Northwind",
          "Description": "Automatically generated Northwind destination",
          "Authentication": "NoAuthentication",
          "ProxyType": "Internet",
          "Type": "HTTP",
          "URL": "https://services.odata.org"
        }
      ]
    }
  }
}
And create a file named xs-security.json:

{
  "xsappname": "cloudnativelab2-cf",
  "tenant-mode": "dedicated",
  "oauth2-configuration": {
    "redirect-uris": [
      "https://*/**"
    ]
  }
}
Now, build and deploy the project:

mbt build
cf deploy mta_archives/cloudnativelab2_1.0.0.mtar
For the Kyma runtime, create a deployment.yaml file with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudnativelab2
  labels:
    app: approuter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: approuter
  template:
    metadata:
      labels:
        app: approuter
    spec:
      volumes:
        - name: destination
          secret:
            secretName: destination-service-binding
        - name: xsuaa
          secret:
            secretName: uaa-service-binding
      containers:
        # replace the repository URL with your own repository (e.g. {DockerID}/approuter:0.0.x for Docker Hub).
        - image: iobert/dockerized-sapui5-app
          imagePullPolicy: Always
          name: approuter
          ports:
            - name: http
              containerPort: 5000
          volumeMounts:
            - name: destination
              mountPath: "/etc/secrets/sapcp/destination/cloudnativelab2_destination"
              readOnly: true
            - name: xsuaa
              mountPath: "/etc/secrets/sapcp/xsuaa/cloudnativelab2_uaa"
              readOnly: true
          resources:
            limits:
              memory: 250Mi
            requests:
              memory: 32Mi
---
apiVersion: v1
kind: Service
metadata:
  name: cloudnativelab2
  labels:
    app: approuter
spec:
  ports:
    - name: http
      port: 5000
  selector:
    app: approuter
---
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: cloudnativelab2
  labels:
    app: approuter
    apirule.gateway.kyma-project.io/v1alpha1: approuter
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    host: approuter.c-8a96de0.kyma.shoot.live.k8s-hana.ondemand.com # TODO: Update URL here
    name: cloudnativelab2
    port: 5000
  rules:
    - path: /.*
      methods: ["GET", "POST"]
      accessStrategies:
        - handler: noop
      mutators:
        - handler: header
          config:
            headers:
              x-forwarded-host: approuter.c-8a96de0.kyma.shoot.live.k8s-hana.ondemand.com # TODO: Update URL here
              x-forwarded-proto: https
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: uaa-service-instance
spec:
  clusterServiceClassExternalName: xsuaa
  clusterServicePlanExternalName: application
  parameters:
    xsappname: cloudnativelab2-kyma
    tenant-mode: dedicated
    oauth2-configuration:
      redirect-uris:
        - https://*/**
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: uaa-service-binding
spec:
  instanceRef:
    name: uaa-service-instance
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: destination-service-instance
spec:
  clusterServiceClassExternalName: destination
  clusterServicePlanExternalName: lite
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: destination-service-binding
spec:
  instanceRef:
    name: destination-service-instance
Note that this manifest mutates the x-forwarded-host and x-forwarded-proto headers, which is needed to run the approuter in Kyma (see here why). Find the "TODO" comments left in this file and replace the values with the ID of your Kyma cluster. Then, deploy the manifest:

kubectl apply -f deployment/deployment.yaml
Note that, in contrast to Cloud Foundry, there is no counterpart to the .mtar archive.

This was the second blog post of my bi-monthly series #CloudNativeLab. The name already says all there is to it: This series won't be about building pure business apps with cloud-native technology. I think there are already plenty of great posts about those aspects out there. Instead, this series thinks outside the box and demonstrates unconventional use-cases of cloud-native technology such as Cloud Foundry, Kyma, Gardener, etc.