Testing is an important part of modern programming culture, and CAP provides many tools to make our life easier. For simple applications, we can use the provided `cds.test` to launch a CAP test server to work with (see the `cds.test` reference in the CAP documentation). This is similar to `cds watch`, where an in-memory database is spun up by default in the development profile.

The problems start to arise if we have native SAP HANA artifacts in our CAP project (see the CAP documentation on native SAP HANA artifacts). These include, for example, the hdbprocedures and hdbviews mentioned in the headline.
These objects don’t play along with SQL databases and are a roadblock for testability. They need a runtime on a HANA database, or to be more precise, in an HDI container (to learn about HDI containers, check out the SAP HANA documentation).


In this blog

Therefore, in this blog we will go over how to provide an HDI container used only for our testing. This includes:

  1. Selecting a place to host our HDI containers

  2. Spinning up a new HDI container

  3. Giving CAP access to our new HDI container

  4. Deploying to our HDI container

  5. Running hybrid with our HDI (single tenant)

  6. Running a test with our HDI (multitenant)

  7. Closing words (hint for HANA Express)

These steps are done automatically by CAP if you deploy to HANA via MTA, but we’re going to do them manually to power our testing.


Selecting a place to host our HDI containers

As HDI containers hold all the artifacts of our app, we don’t care where they are hosted, as long as we have access to them. You can spin them up on an XS Advanced instance, HANA Express, or SAP HANA Cloud. We are going to use SAP HANA Cloud, as it is very easy to set up on BTP. To set up your own SAP HANA Cloud instance, take a look at this tutorial; for development purposes, allow access from all IP addresses, as this will result in problems otherwise. Once you have your database instance created, copy the instance ID and write it down, as we are going to need it later.


Spinning up a new HDI container

Creating a new HDI container differs per platform; please see the documentation for the different hosting options.

For SAP HANA Cloud in the Cloud Foundry environment, we will create an HDI container as a service instance. First execute `cf login` and log in to the correct region your subaccount is in (visible on the overview page as Cloud Foundry Environment API Endpoint). Then execute the following command to create a new HDI container.
cf create-service hana hdi-shared hana-hdi-shared-manual -c '{"database_id":"<<your_instance_id>>}' --wait

This will create an instance of the hana service with the hdi-shared plan in your selected space on your SAP HANA Cloud (specified via the instance ID), with the name “hana-hdi-shared-manual”.

Congratulations, now you have an active yet empty HDI container. For later use, we also have to create credentials in the form of a service key. This is done with the command:
cf create-service-key hana-hdi-shared-manual hana-hdi-shared-manual-key --wait

Note that the key has the same name as the instance, with `-key` at the end, as this is the default key name suggested by CAP.
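You can inspect the created key with `cf service-key hana-hdi-shared-manual hana-hdi-shared-manual-key`. For orientation, a service key for an hdi-shared instance contains roughly the following fields (abbreviated, with placeholder values; the exact shape may vary between HANA versions):

```json
{
  "host": "<hana-host>",
  "port": "443",
  "schema": "<generated-container-schema>",
  "hdi_user": "<design-time user, used by the deployer>",
  "hdi_password": "<password>",
  "user": "<runtime user, used by the application>",
  "password": "<password>",
  "url": "jdbc:sap://<hana-host>:443"
}
```

Note the two user pairs: the hdi_user deploys design-time artifacts, while the plain user is what your application connects with at runtime.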


Giving CAP access to our new HDI container

Our newly created HDI container is currently empty. The next step is to deploy our artifacts to it. But before that, we have to give the hdi-deployer access to our container. Although it is deprecated in newer CDS versions, the `default-env.json` file is the easiest way to create a connection between the HDI container and the hdi-deployer. To create this file, we use the hana-cli, a community npm package around the official API. The following command will take our service key and instance name and create the needed file.
hana-cli serviceKey hana-hdi-shared-manual hana-hdi-shared-manual-key

Edit: as commented by Gregor, cds has built-in functionality to inject its binding into another process. Therefore, we can avoid using the `default-env.json` altogether. Creating the access file can be done with the following command:
cds bind -2 hana-hdi-shared-manual --profile hybrid --kind hana
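If you are curious what `cds bind` writes, the generated `.cdsrc-private.json` stores only a pointer to the Cloud Foundry service key, not the credentials themselves. Roughly (all values are placeholders here):

```json
{
  "requires": {
    "[hybrid]": {
      "db": {
        "kind": "hana",
        "binding": {
          "type": "cf",
          "apiEndpoint": "<cf-api-endpoint>",
          "org": "<org>",
          "space": "<space>",
          "instance": "hana-hdi-shared-manual",
          "key": "hana-hdi-shared-manual-key",
          "resolved": false
        }
      }
    }
  }
}
```

The actual credentials are fetched from Cloud Foundry when the bound process starts, which is why the file itself contains no secrets.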


Deploy to our HDI container

Deploying is done via the @sap/hdi-deploy npm package. Therefore, in your CAP project, run `npm i @sap/hdi-deploy` to add it to your dependency list if it is not there already. Once installed, you can add a script to your package.json file which will look like this:
"deploy": "node node_modules/@sap/hdi-deploy/ --root gen/db"

Edit: Additional script with binding:
"bound-deploy": "cds bind --profile hybrid --exec npm run deploy",

Here you can see that the root path of the artifacts is set to gen/db. This directory is created and populated when executing `cds build` (remember to specify the profile with the flag `--profile <<profile_name>>` if you have a non-hana development profile).
Once you run `npm run deploy`, it will read the configuration from the default-env.json file and deploy to our HDI container, just like a normal deploy.

Great, now we have a ready-to-go HDI container for our app that is not known to any other service, so we can work with it peacefully.


Running hybrid with our HDI (single tenant)

The only thing left on our agenda is to use our HDI in CAP. For a hybrid setup, you can just start cds with your hana profile 🙂 If you are more comfortable using non-deprecated means of connecting, you can use `cds bind -2 hana-hdi-shared-manual --kind hana`
to create a `.cdsrc-private.json` file.
Now you can send requests to your OData service, which will use your hana runtime. Therefore, you can also run procedures, which will not interfere with anyone else!


Running a test with our HDI (multitenant)

Okay, now addressing the elephant in the room: running a jest test for a procedure in a multitenant application.

First, we need to add the necessary packages to our project with the following command: `npm add -D axios chai chai-as-promised chai-subset jest` (see the CAP testing documentation for details). Once these packages are installed, create a file called initial-sqlite.test.js in your tests directory. Below you can find the needed test code.
const cds = require('@sap/cds')

// Mock needed modules here, depending on your project, e.g.:
// jest.mock('axios')

// Set really high timeout for testing
jest.setTimeout(300000)

describe('CDS Service Level Testing', () => {
  let srv, Headers

  beforeAll(async () => {
    await cds.deploy(__dirname + '/../srv/api-service.cds').to('sqlite::memory:')
    srv = await cds.serve('api').from(__dirname + '/../srv/api-service.cds')
    Headers = srv.entities.Headers
    expect(Headers).toBeDefined()
  })

  it('GETs all headers empty', async () => {
    const headers = await srv.get(Headers)
    expect(headers).toEqual([])
  })
})

First, needed modules are mocked; this will depend on your project, in my case it was axios, @sap/xsenv, and passport. A high jest timeout is set, just to be sure.
In the beforeAll method, we deploy our project to an in-memory database and serve it.
This will start your cds project on a service level, which will NOT go through your `server.js` file. This can be a good or a bad thing, depending on your setup. For me, it was great, because it skips all the mtx related code in there. Also, while the beforeAll method is theoretically not a test, it already checks if our entity is defined.

This setup helps to identify any problems in our test infrastructure beforehand, so we can deal with those before we add hana. The initial test will check if the request returns successfully with empty items, as I didn’t provide any in the deployment. I created a script in my package.json, called jest, which will just call jest: "jest": "jest". When executing `npm run jest`, jest will pick up our test file and run our test. Below are common pitfalls that may occur (you can skip this section if you did not encounter any errors).

Edit: the command to launch jest can be written like so:
"test": "cds bind --profile hybrid --exec jest",


Reading VCAP_APPLICATION fails

If you read the VCAP_APPLICATION environment variable directly in your code (which you shouldn’t; use the @sap/xsenv package instead), you can update your jest script to
"jest": "cross-env VCAP_APPLICATION={} jest"


which sets that variable to an empty object. Note that I used the cross-env package, so this script is also cross-platform.


Request.reject throws an exception regarding auth.js / auth/restrict.js

Well, this is an authorization-related topic. As far as my research goes, this occurs because the anonymous user has no authorization. I tried passing `{ user: new cds.User.Privileged() }` as a second parameter to the srv.get method, but only the sql query gets updated on my cds version (5.7.4) and the req.user object stays empty.
I navigated around this by disabling @restrict annotations for the test execution using the npm pre- and post-hooks. You can find my node scripts here.
Please note that this approach seems to be broken in cds 6.1.1, but as the primary focus of this blog is to highlight the general setup, it’s a problem to be solved in the future.

If anybody knows what is happening here, feedback towards a proper solution is greatly appreciated.


Finally adding hana

Now that we have made it here, we “just” need to connect to hana. This can be done by creating a separate initial-hana.test.js file. Here we must exchange the deploy statement for a connect statement.
  beforeAll(async () => {
    process.env.CDS_ENV = "test"
    await cds.connect('hana')
    srv = await cds.serve('api').from(__dirname + '/../srv/api-service.cds')
  })

Note that I also set the CDS_ENV variable to ‘test’, as this is the profile used for the cds server. This test profile is also a major contributor to dealing with (the old) mtx; we can disable multitenancy-related settings in our test profile like shown here:
"requires": { "[test]": { "multitenancy": false }, "[production]": { "multitenancy": true }, …

For me, this was related to the general requires section and the db multiTenant entry.
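To make the profile mechanics tangible, here is a toy resolver (an illustration only; cds’ real configuration merging is more involved): entries under a "[profile]" key override the plain entries when that profile is active, and are dropped otherwise.

```javascript
// Toy version of profile-aware config resolution, e.g. "[test]" blocks.
// Simplified illustration; not the actual cds implementation.
function resolveProfile (config, profile) {
  const out = {}
  for (const [key, value] of Object.entries(config)) {
    if (key.startsWith('[')) {
      // profile block: merge its content only if the profile is active
      if (key === `[${profile}]`) Object.assign(out, resolveProfile(value, profile))
    } else if (value && typeof value === 'object' && !Array.isArray(value)) {
      out[key] = resolveProfile(value, profile)
    } else {
      out[key] = value
    }
  }
  return out
}
```

With `CDS_ENV=test`, the `"[test]": { "multitenancy": false }` block wins, so the server starts without multitenancy.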

Now, with all this in place, we can execute `npm run jest` again and our test is going to use the hana connection from the `default-env.json` file. If you have your procedures implemented in actions, you can access them as methods on the srv object.


Closing words

This blog is meant as a showcase on how to test your HANA native artifacts using CAP. We created a separate HDI container, deployed to it, and connected it to our CAP application. All programmatically, so that it can be executed in a CI/CD pipeline 😊
Some parts of the overall setup are quite hacky, so if anybody knows how to implement user-based testing properly, please leave a comment!
Here are my final scripts in package.json (the terminateHungCommandOn script stops a command on a given output, as the hana-cli and hdi-deploy commands wouldn’t terminate on their own):
    "pretest": "node tests/removeRequires.js",
"bound-test": "cds bind --profile test --exec npm run test",
"test": "cross-env VCAP_APPLICATION={} jest || npm run posttest",
"posttest": "node tests/revertRemovalOfRequire.js ",
"pretest:ci": "npm run create-testing-HDI && npm run create-testing-HDI-key && npm run create-db-binding && npm run cds-build-production && npm run deploy-to-testing-HDI",
"cf-login": "true",
"create-testing-HDI": "sh -c 'cf create-service hana hdi-shared hana-hdi-shared-testing-main-$(whoami) -c tests/HanaCloudConfig.json -t \"MANUAL\" --wait'",
"create-testing-HDI-key": "sh -c 'cf create-service-key hana-hdi-shared-testing-main-$(whoami) hana-hdi-shared-testing-main-$(whoami)-key --wait'",
"create-db-binding" : "sh -c 'cds bind -2 hana-hdi-shared-testing-main-$(whoami) --profile test --kind hana'",
"deploy-to-testing-HDI": "sh 'stopped.' npm run bound-deploy",
"cds-build-production": "cds build --profile production",
"test:ci": "npm run bound-test",
"posttest:ci": "sh -c 'cf delete-service hana-hdi-shared-testing-main-$(whoami) --force --wait'",
"bound-deploy": "cds bind --profile test --exec npm run deploy",
"deploy": "node node_modules/@sap/hdi-deploy/ --root gen/db"