Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
mariusobert
Developer Advocate
In this Cloud-Native Lab post, I'll compare the manifest files of two runtimes within SAP Cloud Platform - the Cloud Foundry and the Kyma runtime. In other words, I compare the deployment.yaml of Kyma with the mta.yaml file of SAP's Cloud Foundry deploy service.

Update 6th Nov 2020: I added a more elegant way to deploy to the Kyma Runtime

This comparison serves two purposes. First, it will help you understand the fundamental differences between the Cloud Foundry and the Kyma runtime. And second, you'll learn what kinds of directives exist for each manifest file and how to "translate" them to each other.


A dockerized SAPUI5 sample app running on the Cloud Foundry and the Kyma runtime


In the screenshot above, you can see that I deployed a SAPUI5 sample app to both runtimes - Kyma and Cloud Foundry. I created a simple SAPUI5 web app that is embedded in an approuter, which is a Node.js application. This approuter consumes two SAP Cloud Platform services (a destination service and an xsuaa service). I dockerized the entire application and uploaded it to DockerHub, from where it can be consumed by both cloud-native runtimes. For this consumption, each runtime later needs to define the compute resources and the bound service instances.

Manifests


Manifest files are quite common in software development. They are used outside of the SAP world (e.g., Android app manifests, Node.js package.json) and inside the SAP world (SAPUI5 manifest.json). They usually include metadata about a project, such as its ID, its name, and its packages or dependencies. The manifests of the SAP Cloud Platform runtimes contain metadata as well, but they use different properties than the examples above.

While there is a clear difference in the number of available parameters and their effects, there are also many similarities between the deployment.yaml of Kubernetes and Kyma and the mta.yaml file of the deploy service of SAP Cloud Platform, Cloud Foundry. As the name suggests, both manifests use YAML as the file format. While this format is not always easy to write, it is significantly easier to read than other formats such as JSON.

Both manifest file types are used to specify the parameters that the respective platform offers. Typical knobs here are the compute resources (memory size, disk size, CPU shares, etc.), the bound service instances, the attached volumes, the environment variables, and so on. Platforms that make more assumptions about the hosting setup (such as Cloud Foundry) typically expose fewer configuration parameters and offer a simpler manifest. More powerful platforms, such as Kyma, on the other hand, offer many tuning parameters to configure the project setup and apply best practices manually. As a consequence, the manifest becomes more verbose.

Services in SAP Cloud Platform


At last year's TechEd, SAP's CTO Jürgen Müller announced that the Business Technology Platform's goal is “to provide the fastest way to turn data into business value.” This goal also applies to the SAP Cloud Platform as it is part of the Business Technology Platform. The value of a platform heavily depends on the value of the services offered on this platform. To provide high business value, the SAP Cloud Platform offers many business services that make life easy for SAP developers.

Such services are, for example, the destination and connectivity services that help you connect your cloud apps with cloud solutions (SAP S/4HANA Cloud, SAP SuccessFactors, non-SAP systems, ...) and on-premise solutions (SAP S/4HANA, SAP NetWeaver, ...). The Launchpad service provides access to all your business apps via the Fiori Launchpad. The Workflow Management service can create flexible workflows for your processes and define business rules. The Document Information Extraction service uses machine learning to extract information from documents such as invoices and receipts. With this technical service, you can add these capabilities to your application with a simple REST request. All SAP Cloud Platform services can be found here.

To reiterate: The value of the SAP Cloud Platform comes from its services; the runtimes are the connective tissue that binds the services together while creating business value. In the spirit of that message, we started reorganizing the SAP Cloud Platform cockpit to bring the services more into the developers' focus. As the screenshot below shows, we now display the provisioned service instances of all runtimes next to each other.


SAP Cloud Platform cockpit view that shows the services instances of both runtimes



The Cloud Foundry Manifest


To be more precise: The manifest of the Cloud Foundry Deploy Service


The mta.yaml file is the manifest of the SAP Cloud Foundry deploy service distribution, and the manifest.yaml is the general manifest - you can use both in SAP Cloud Platform. In practice, I see more mta.yaml files, which is why I'll focus on them here.

This manifest defines two types of entities: the modules (applications) and the resources (services) consumed by the modules. Modules contain a type, source code files, and parameters to specify the compute resources of the runtime environment. As an alternative to source code, you can also refer to a prebuilt Docker image. Resources are defined with a service name, a service plan, and possibly individual parameters that specify the service instance's configuration. This configuration can be externalized in a JSON file to keep the manifest short and concise. Overall, the manifest is quite easy to read, as it provides a clearly laid out set of parameters. Its creators strictly followed the KISS principle to make cloud deployments as easy as possible.
_schema-version: 3.2.0
ID: project
version: 1.0.0

modules:
- name: module1
  type: javascript.nodejs
  path: folder1
  requires:
  - name: service_name
  parameters:
    disk-quota: 512M
    memory: 512M

resources:
- name: service_name
  type: org.cloudfoundry.managed-service
  parameters:
    path: ./configuration.json
    service: service-name
    service-plan: service-plan

The simple structure of the mta.yaml manifest


The snippet above shows a service binding. This means that the service credentials are injected into the environment variables of the module. All major programming languages provide directives to read these variables. To make life easier, you can also use packages to abstract these calls and directly access the service credentials.
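As a minimal sketch of what such a lookup can look like in Node.js (the payload below is invented for illustration; at runtime, Cloud Foundry injects the real credentials of the bound services):

```javascript
// Sketch: read bound service credentials from the VCAP_SERVICES
// environment variable that Cloud Foundry injects into the module.
// The sample payload below is made up for demonstration purposes.
process.env.VCAP_SERVICES = JSON.stringify({
  destination: [
    { name: "service_name", credentials: { uri: "https://destination.example.com" } }
  ]
});

// Return the credentials of the first bound instance of a service label.
function getCredentials(serviceLabel) {
  const services = JSON.parse(process.env.VCAP_SERVICES || "{}");
  const instances = services[serviceLabel] || [];
  return instances.length > 0 ? instances[0].credentials : null;
}

console.log(getCredentials("destination").uri); // https://destination.example.com
```

Packages like @sap/xsenv wrap exactly this kind of parsing so that you don't have to handle the raw JSON yourself.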

The manifest can also be used to describe the build-parameters of the project. They can be leveraged to trigger the build process with the mbt tool. This makes it easier to include the project in an optimized CI/CD pipeline later on. These build steps are executed locally, and only the build results will be included in the .mtar archive later. As the build steps are not needed during deployment, they are removed from the manifest. Only the resulting "deployment manifest", which is then called mtad.yaml, will be included in the .mtar archive.

The command to deploy the .mtar archive is, among other commands, provided by the MultiApps Cloud Foundry CLI Plugin:
cf deploy archive.mtar 
cf undeploy archive.mtar
cf mta archive
cf mtas
# and more

 

The Kyma Manifest


To be more precise: The Kubernetes manifest


As Kyma builds on top of Kubernetes, it uses the deployment.yaml (the file name can vary) manifest to organize its resources.

Kubernetes provides many more resource types than Cloud Foundry. On top of that, Kyma adds additional resource types. Possible types are deployments, services to route traffic, secrets, API gateways, service instances, and service bindings. All these resources can be freely configured, connected, and annotated with so-called labels. These labels make it easier to organize, access, and patch resources later on. The resources can be described in one or multiple .yaml files, which are then sent to the Kubernetes API server. It is no surprise that this additional complexity offers a lot of freedom that the Cloud Foundry environment cannot offer. But we all know there is no free lunch: Apps built on Kyma are potentially more powerful than apps built on Cloud Foundry, but it is harder to design and set up applications that use the Kyma runtime.
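To illustrate the multi-document format: a single manifest file can hold several resources separated by --- lines, and conceptually each document becomes one resource on the API server. Here is a toy sketch (no YAML parser involved; the manifest content is made up):

```javascript
// Sketch: count the documents in a multi-resource manifest by splitting
// on the "---" separator lines. The toy manifest below is made up.
const manifest = `kind: Deployment
---
kind: Service
---
kind: APIRule`;

// Split on separator lines and drop empty documents.
const documents = manifest
  .split(/^---$/m)
  .map((doc) => doc.trim())
  .filter((doc) => doc.length > 0);

console.log(documents.length); // 3
```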

At the time of this writing, not all SAP Cloud Platform services are available in the Kyma runtime. But I can assure you, we're working on extending the list of available services.

The following snippets show an application similar to the one we've seen above. It includes multiple resources that are separated by ---. The first deployment resource describes a pod that includes one container with fixed compute resources, a Docker image, a port that needs to be exposed (internally), and an attached service binding. The second service resource exposes the internal port as an internal service. The API rule resource exposes this service to the public internet and defines how communication can happen. The service instance resource describes the service name, the service plan, and the provisioning parameters. And the last service binding resource describes the service credentials of the provisioned service. This resource is also referenced in the first deployment resource above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: value
  labels:
    app: name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: name
  template:
    metadata:
      labels:
        app: name
    spec:
      volumes:
      - name: service-name
        secret:
          secretName: service-name-binding
      containers:
      - image: user/image
        imagePullPolicy: Always
        name: name
        ports:
        - name: http
          containerPort: 5000
        resources:
          limits:
            memory: 250Mi
          requests:
            memory: 32Mi
        volumeMounts:
        - name: service
          mountPath: "/etc/secrets/sapcp/servicename/name_service"
          readOnly: true

---
apiVersion: v1
kind: Service
metadata:
  name: value
  labels:
    app: name
spec:
  ports:
  - name: http
    port: 5000
  selector:
    app: name

---
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: value
  labels:
    app: name
spec:
  service:
    host: approuter
    name: value
    port: 5000
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  rules:
  - path: /.*
    methods: ["GET", "POST"]
    accessStrategies:
    - handler: noop
    mutators: []

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: service-instance
spec:
  clusterServiceClassExternalName: destination
  clusterServicePlanExternalName: lite
  parameters:
    param1: value1
    param2: value2

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: service-name-binding
spec:
  instanceRef:
    name: service-instance

The structure of the deployment.yaml manifest


At first sight, both manifests look very different, but on closer inspection, we see that very similar things are going on. Kyma provides more options to define how services are exposed to the public and, therefore, also allows services to be used only for inter-pod communication. Another similarity is that you can use the xsenv package to retrieve the credentials from the service bindings. This package is the reason why we attached the service bindings to volumes.

The command to trigger the process that is described in the deployment.yaml manifest is, among other commands, provided by kubectl:
kubectl apply -f file

Hands-on: Deploy a Docker image with two bound services


I'll deploy the same Docker image that consumes two SAP Cloud Platform services (destination and xsuaa service) to the Kyma and the Cloud Foundry runtime in the rest of this post. I think this example serves well to illustrate the similarities and differences between both approaches. In the end, we'll see two deployed SAPUI5 apps that display data from the Northwind service and are accessible via Single Sign-On. To save some time, I already created the Docker image and uploaded it to DockerHub. I added the Dockerfile that I used here for the sake of completeness, but you don't have to worry about it, as the image has already been created.
FROM node:12-alpine

WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package.json ./

RUN npm install --only=production
COPY . .

EXPOSE 5000
CMD [ "npm", "start" ]

0. Preparation


Before we get to the fun part, we need to install some tools that are mandatory for cloud development on SAP Cloud Platform (if you haven't done so already):

1. Create the manifest for Cloud Foundry


First, you need to create the manifest file, the mta.yaml. Paste the following content in the file and then save it.
_schema-version: 3.2.0
ID: cloudnativelab2
version: 1.0.0

modules:
- name: approuter
  type: javascript.nodejs
  build-parameters:
    no-source: true
  requires:
  - name: cloudnativelab2_destination
  - name: cloudnativelab2_uaa
  parameters:
    disk-quota: 512M
    docker:
      image: iobert/dockerized-sapui5-app
    memory: 512M

resources:
- name: cloudnativelab2_destination
  type: org.cloudfoundry.managed-service
  parameters:
    path: ./destination.json
    service: destination
    service-plan: lite
- name: cloudnativelab2_uaa
  type: org.cloudfoundry.managed-service
  parameters:
    path: ./xs-security.json
    service: xsuaa
    service-plan: application

You'll notice that this manifest outsources the service instance definitions. Therefore, you need to create the following files. First, destination.json:
{
  "init_data": {
    "subaccount": {
      "existing_destinations_policy": "update",
      "destinations": [
        {
          "Name": "Northwind",
          "Description": "Automatically generated Northwind destination",
          "Authentication": "NoAuthentication",
          "ProxyType": "Internet",
          "Type": "HTTP",
          "URL": "https://services.odata.org"
        }
      ]
    }
  }
}

And xs-security.json:
{
  "xsappname": "cloudnativelab2-cf",
  "tenant-mode": "dedicated",
  "oauth2-configuration": {
    "redirect-uris": [
      "https://*/**"
    ]
  }
}

Both files will perform some service-specific configuration steps.

2. Deploy to the Cloud Foundry environment


The deployment here is straightforward. First, you need to build the .mtar archive (which includes the manifest), and then you need to deploy it.
mbt build
cf deploy mta_archives/cloudnativelab2_1.0.0.mtar

You'll find the URL of the app in the console output once the deployment is finished.

Tip: You don't need to wait until the deployment is finished to start the next step.

3. Create the manifest for Kyma


As mentioned above, the Kubernetes manifest is more verbose due to the higher complexity. For the sake of simplicity, I wrote all definitions in a single file. Create a deployment.yaml file with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudnativelab2
  labels:
    app: approuter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: approuter
  template:
    metadata:
      labels:
        app: approuter
    spec:
      volumes:
      - name: destination
        secret:
          secretName: destination-service-binding
      - name: xsuaa
        secret:
          secretName: uaa-service-binding
      containers:
      # replace the repository URL with your own repository (e.g. {DockerID}/approuter:0.0.x for Docker Hub).
      - image: iobert/dockerized-sapui5-app
        imagePullPolicy: Always
        name: approuter
        ports:
        - name: http
          containerPort: 5000
        volumeMounts:
        - name: destination
          mountPath: "/etc/secrets/sapcp/destination/cloudnativelab2_destination"
          readOnly: true
        - name: xsuaa
          mountPath: "/etc/secrets/sapcp/xsuaa/cloudnativelab2_uaa"
          readOnly: true
        resources:
          limits:
            memory: 250Mi
          requests:
            memory: 32Mi

---
apiVersion: v1
kind: Service
metadata:
  name: cloudnativelab2
  labels:
    app: approuter
spec:
  ports:
  - name: http
    port: 5000
  selector:
    app: approuter

---
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  labels:
    app: approuter
    apirule.gateway.kyma-project.io/v1alpha1: approuter
  name: cloudnativelab2
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    host: approuter.c-8a96de0.kyma.shoot.live.k8s-hana.ondemand.com # TODO: Update URL here
    name: cloudnativelab2
    port: 5000
  rules:
  - path: /.*
    methods: ["GET", "POST"]
    accessStrategies:
    - handler: noop
    mutators:
    - handler: header
      config:
        headers:
          x-forwarded-host: approuter.c-8a96de0.kyma.shoot.live.k8s-hana.ondemand.com # TODO: Update URL here
          x-forwarded-proto: https

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: uaa-service-instance
spec:
  clusterServiceClassExternalName: xsuaa
  clusterServicePlanExternalName: application
  parameters:
    xsappname: cloudnativelab2-kyma
    tenant-mode: dedicated
    oauth2-configuration:
      redirect-uris:
      - https://*/**

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: uaa-service-binding
spec:
  instanceRef:
    name: uaa-service-instance

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: destination-service-instance
spec:
  clusterServiceClassExternalName: destination
  clusterServicePlanExternalName: lite

---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: destination-service-binding
spec:
  instanceRef:
    name: destination-service-instance

We need to add the special x-forwarded-host and x-forwarded-proto headers to run the approuter in the Kyma runtime (see here why). Find the "TODO" comments left in this file and replace the values with the ID of your Kyma cluster.

You'll notice that this manifest contains the same information as the Cloud Foundry manifest. The only big difference is the destination service's missing service parameters, but these could be added as well. Another minor difference is that there is, as far as I know, no way to externalize the service parameters into a separate file (please leave a comment if you know how to do this). Therefore, we included the configuration of the xsuaa service in the manifest. On top, you see metadata labels and networking configurations needed to expose the port of the Docker image to the outside world.

4. Deploy to the Kyma environment


Deploying apps to Kyma requires the following command:
kubectl apply -f deployment/deployment.yaml

You'll find the URL of the app in the Kyma console, or you can directly access the URL you inserted in the deployment file in the previous step.

5. Inspect both apps


Once the deployment is finished, you can access both web apps. Apart from the URL in the browser, you won't notice a difference. Both apps use the same Docker image and the same services, and therefore, they are identical.


Summary


I hope this post helped you to grasp the differences between both runtimes. We've seen that the Kyma runtime catalog currently only includes a subset of the services available in SAP Cloud Platform - but this will definitely change in the future.
Another difference is that Kyma requires a Docker image, and you need a development process to build a Docker image based on a Dockerfile. This image needs to be uploaded to a registry before the actual deployment can be triggered. Cloud Foundry does not require Docker images but can leverage buildpacks to run the code directly from the .mtar archive.
The third big difference that I want to highlight is speed: I've been highly impressed by Kyma. The console (web interface) and the CLI are very fast and have almost no noticeable loading time. The same goes for the deploy time. The Cloud Foundry app took about 1:12 min to deploy, while the Kyma app was about 3x faster and only required 0:27 min.

You now also understand that runtimes should be used to combine the platform's services to create business value. A runtime by itself neither solves a business problem nor creates value. This is why it makes sense to pick the simplest runtime that fulfills your needs. If the Cloud Foundry runtime offers all you need, I recommend using it. Cloud Foundry makes many assumptions, takes work off your developers, and therefore saves time. If you want to build more complex apps that require features such as internal routing or different scaling behavior, or if you deliberately want to diverge from the assumptions that the Cloud Foundry environment makes, I recommend the Kyma runtime.

Next Steps



 

Disclaimer: It might also make sense to use Istio features to redirect traffic between your application's services. Depending on your individual project setup, it might not be necessary to include an approuter in the project.






This was the second blog post of my bi-monthly series #CloudNativeLab. The name already says it all: this series won't be about building pure business apps with cloud-native technology. I think there are already plenty of great posts about those aspects out there. Instead, this series thinks outside the box and demonstrates unconventional use cases of cloud-native technology such as Cloud Foundry, Kyma, Gardener, etc.

Previous episode: Cloud-Native Lab #1 - 7 Ways to Define Environment Variables

Next episode: Cloud-Native Lab #3 – Comparing Cloud Foundry and Kyma Clients

 
13 Comments
thomashertz
Explorer
thanks a lot for putting so much effort in and writing down your experience here!

thomas
mariusobert
Developer Advocate
You're welcome, Thomas 🙂
IG1
Active Participant
Great blog Marius. Very nicely explained.

wondering if you have a git repo for your docker image content. would be good to look at that as well.

 
Thanks for the blog Marius !

I am involved in an effort to run Approuter in Kyma. I was able to get it to forward requests to my java spring app without xsuaa.

For me however, the scopes are not being set correctly at runtime. So although authentication works, I am getting a 403.

 
mariusobert
Developer Advocate
Hi Ishaan,

thanks for your kind feedback. I didn't go all the way to publish the repo on GitHub because I didn't think it would be too interesting. In case you are curious, you could run the image locally and SSH into it to see all files there. At the core is just a plain SAPUI5 app that has been generated with easy-ui5 🙂
mariusobert
Developer Advocate
Hi Sundar,

Did you follow all the steps that I mentioned in this blog post? If so, please reach out to me internally and I'll have a look.
Thanks for the blog Marius.

It is really helpful in understanding the difference between Cloud Foundry & Kyma

Regards,

Mayank
brampurnot
Employee
Quick question: how do you specify the destination then? I know that we can do it in the manifest.yml file for CloudFoundry but this is not being used in Docker. I'm trying to deploy my app but it keeps on giving me an error that the destination cannot be found. Is the only way of overcoming this by using the destination service and specify the destination there?

Thanks,
Bram
mariusobert
Developer Advocate
Hi,

I defined the destinations in the destination service (with subaccount or instance-level scope) here but you could also use environment variables for this. Here are some more details on this in case you are using the approuter.
SL1
Discoverer

Hi,
We are trying to connect a java application running on kyma in BTP with a SAP S4 Public cloud running on the same BTP. However when we try to deploy the java application it fails with an "Error creating bean with name "destinationConfigImpl": Injection of autowired dependencies failed ... Could not respolve placeholder "<environment-variable-name-here>"
The environment variables are defined in config map, yml and annotated in the java config class. Names match as they should and destination and xsuaa service have been created in the BTP. We followed the approach described here
https://community.sap.com/t5/technology-blogs-by-sap/the-new-way-to-consume-service-bindings-on-kyma...

Any idea in which direction we can investigate? 

quovadis
Product and Topic Expert

Hello,

Assuming you are using SAP Cloud SDK in your java workload to call into S4 system via BTP destination. Please share the deployment manifest of yours for inspection. 

Thank you

SL1
Discoverer

yes we are using the SDK. Here is the deployment manifest

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testapp-s4cloud-connection
  namespace: tests-testapp-s4hanacloud-connection
  labels:
    app.kubernetes.io/name: testapp-s4cloud-connection
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testapp-s4cloud-connection
  template:
    metadata:
      labels:
        app: testapp-s4cloud-connection
        sidecar.istio.io/inject: 'false'
    spec:
      imagePullSecrets: [{'regcred'}]
      containers:
        - name: testapp-s4cloud-connection
          image:  repo2024/testapp_btp_connection:0.1.7-SNAPSHOT
          resources:
            requests:
              memory: 64Mi
              cpu: 50m
            limits:
              memory: 128Mi
              cpu: 100m
            ports:
              - containerPort: 8080
                name: http
            imagePullPolicy: Always
            env:
              - name: SERVICE_BINDING_ROOT
                value: "/bindings"
            volumeMounts:
              - name: testapp-destination-test
                mountPath: "/bindings/testapp-destination-test"
                readOnly: true
              - name: my-xsuaa-test
                mountPath: "/bindings/my-xsuaa-test"
                readOnly: true
      volumes:
        - name: testapp-destination-test
          secret:
            secretName: testapp-destination-test-binding
        - name: my-xsuaa-test
          secret:
            secretName: my-xsuaa-test-binding
---
 apiVersion: v1
 kind: Service
 metadata:
  name: testapp-s4cloud-connection
  labels:
    app: testapp-s4cloud-connection
 spec:
   type: NodePort
   ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP
   selector:
     app: testapp-s4cloud-connection

 

 

quovadis
Product and Topic Expert

Hello, thank you for the above deployment manifest.

1. The SERVICE_BINDING_ROOT part of it looks fine. 

 

            env:
              - name: SERVICE_BINDING_ROOT
                value: "/bindings"
            volumeMounts:
              - name: testapp-destination-test
                mountPath: "/bindings/testapp-destination-test"
                readOnly: true
              - name: my-xsuaa-test
                mountPath: "/bindings/my-xsuaa-test"
                readOnly: true
      volumes:
        - name: testapp-destination-test
          secret:
            secretName: testapp-destination-test-binding
        - name: my-xsuaa-test
          secret:
            secretName: my-xsuaa-test-binding

 

2. Assuming the below is the destination service instance and binding manifests with the S/4 destination definition pointing to a SAP S/4HANA Public Cloud sandbox from SAP API Business Hub.

 

apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: {{ .Values.services.dest.name }}
  labels:
    app.kubernetes.io/name: {{ .Values.services.dest.name }}
spec:
  externalName: '{{ .Values.services.dest.name }}-{{ .Release.Namespace }}'
  serviceOfferingName: destination
  servicePlanName: lite 
  parameters:  
    init_data:
      instance:
        existing_destinations_policy: ignore
        existing_certificates_policy: ignore
        destinations:
          - Name: faas-s4hc-api
            Description: S/4HANA Cloud
            URL: https://sandbox.api.sap.com/s4hanacloud
            Type: HTTP
            ProxyType: Internet
            Authentication: NoAuthentication
            URL.headers.APIKey: {{ .Values.services.dest.APIKey }}
            URL.headers.Application-Interface-Key: {{ .Values.services.dest.ApplicationInterfaceKey }}
            HTML5.DynamicDestination: "true"
            
          - Name: faas-nw
            Description: Northwind
            URL: https://services.odata.org/v2/Northwind/Northwind.svc
            Type: HTTP
            ProxyType: Internet
            Authentication: NoAuthentication
            HTML5.DynamicDestination: "true"

apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: {{ .Values.services.dest.bindingName }}
  labels:
    app.kubernetes.io/name: {{ .Values.services.dest.bindingName }}
spec:
  serviceInstanceName: {{ .Values.services.dest.name }}
  externalName: {{ .Values.services.dest.name }}
  secretName: {{ .Values.services.dest.bindingSecretName }}
  parameters: {}
  parametersFrom: []

 

Both are instance level destination definitions to help test your java code. 

You should be able to call these two destination using SAP Cloud SDK for java, namely: https://sap.github.io/cloud-sdk/docs/java/overview-cloud-sdk-for-java

3. with the following values:

 

clusterDomain:
gateway:
services:
  app:
    name: testapp-s4cloud-connection
    port: '8080'
  uaa:
    name: my-xsuaa-test
    xsappname: fun
    bindingName: my-xsuaa-test-binding
    bindingSecretName: my-xsuaa-test-binding-secret
  dest:
    name: testapp-destination-test
    bindingName: testapp-destination-test-binding
    bindingSecretName: testapp-destination-test-binding-secret
    ApplicationInterfaceKey: saptest0
    APIKey: <-- your Business Hub API Key goes here 

 

 

 

 

Thank you.

PS. For instance, a code snippet in java script to retrieve the sales order from the sanbox using the SAP Cloud SDK for javascript

 

 

        const { retrieveJwt } = require('@sap-cloud-sdk/connectivity');
        const { setGlobalLogLevel, createLogger } = require('@sap-cloud-sdk/util');
        const { desc } = require('@sap-cloud-sdk/odata-v2');
        const { salesOrderService } = require('@sap/cloud-sdk-vdm-sales-order-service');
        const { salesOrderApi, salesOrderItemApi } = salesOrderService();
 
       async function getSalesOrders(req) {
          return salesOrderApi.requestBuilder()
            .getAll()
            .filter(salesOrderApi.schema.TOTAL_NET_AMOUNT.greaterThan(2000))
            .top(3)
            .orderBy(desc(salesOrderApi.schema.LAST_CHANGE_DATE_TIME))
            .select(
              salesOrderApi.schema.SALES_ORDER,
              salesOrderApi.schema.LAST_CHANGE_DATE_TIME,
              salesOrderApi.schema.INCOTERMS_LOCATION_1,
              salesOrderApi.schema.TOTAL_NET_AMOUNT,              salesOrderApi.schema.TO_ITEM.select(salesOrderItemApi.schema.MATERIAL, salesOrderItemApi.schema.NET_AMOUNT)
            )
            .execute({
              destinationName: 'faas-s4hc-api'
              ,
              jwt: retrieveJwt(req)

            });
        }