Technology Blogs by Members
Starting with version 7.0.0, @sap/cds has native support for PostgreSQL. Please use the official database adapter @cap-js/postgres instead of the packages described below, and check out the official documentation for more details.

There is also a great blog post by tiaxu which includes a detailed step-by-step guide on how to use @cap-js/postgres in a local environment as well as in combination with Cloud Foundry.


    • 28.06.23 – cds-pg and cds-dbm are now deprecated in favor of the official PostgreSQL support in @sap/cds 7.x

    • 09.03.23 – Updated the code samples and replaced the cds-dbm build with cds build to reflect the latest changes, with full support for @sap/cds 6.x

PostgreSQL support for the SAP Cloud Application Programming Model (CAP) has been in the news lately, thanks to its 1st and 3rd place rankings in SAP's Devtoberfest and some blog posts from the SAP Community introducing the related Open Source projects:

  1. PostgreSQL persistence adapter for CAP (Node.js)

  2. Getting started with CAP on PostgreSQL (Node.js)

The first post by vobu and gregorw is an introduction to cds-pg, the database adapter for PostgreSQL. The second post contains a step-by-step guide on how to get started with PostgreSQL in a local development environment. It also introduces cds-dbm, the tool required to deploy the CDS data model to the PostgreSQL database.

With the latest developments in both Node.js modules, it is now possible to deploy a CAP application with PostgreSQL as a database to the SAP Business Technology Platform (BTP) Cloud Foundry environment.

Since there is also a native PostgreSQL service available on Cloud Foundry (PostgreSQL on BTP, hyperscaler option), the deployment can be done without any extra work (no User-Defined Service, no manual management of the database schema), just by leveraging CAP, the additional PostgreSQL-related Node.js modules, and the MTA build and deployment tools.

If this grabs your attention, you will find below a comprehensive description of the steps required to prepare and deploy a local CAP project running on PostgreSQL (the one created in my first blog post) to an SAP BTP trial account.

Honorable mention
gregorw did an outstanding job working through the various deployment scenarios in his pg_beershop and created the necessary foundation for the SAP BTP deployment. His repository also contains various other deployment scenarios, so if you want to deploy to Kyma, Azure or even Google Cloud Platform, have a look at his GitHub repo.


If you want to follow the steps and deploy the project yourself, you have to do some preparations:

  • You need a BTP trial account (tutorial is here)

  • Ensure that the PostgreSQL, hyperscaler option is in the list of entitlements of your trial account. If not, you need to add the service manually.

  • You need to have the CF command line tools installed on your system (tutorial is here)

  • You need to have the CF multi apps plugin installed (tutorial is here)

  • You need to have the devtoberfest project in your local environment (via git clone)

  • You need to have VS Code (or any other code editor) at hand

Preparing the project

The following steps are based on the small Devtoberfest-related CAP project I introduced in my first blog post. If you want to follow along and deploy the project yourself, just use the local-dev-only branch (git checkout local-dev-only). The main branch already contains all the changes we are going to make.

Create a mta.yml file

The first thing we need to do is to generate an mta.yml file that will contain the module and service definitions for SAP BTP CF. We could do this by hand, but @sap/cds supports us by providing a command:
cds add mta

This will generate the mta.yml file in the root folder of the project. Please ignore the warning that no SAP HANA module will be created. That's totally fine, since we do not want to use HANA, but PostgreSQL instead.

Create the cds build options

Before we alter the generated mta.yml file, we first need to add the build configuration. CAP comes with a default configuration, but we need to explicitly redefine the build tasks, because we want to make use of the PostgreSQL-related task that prepares everything for a Cloud Foundry deployment.
The following code snippet needs to be either inserted in the .cdsrc file of your project or integrated directly into the package.json, right under cds:
"build": {
  "tasks": [
    {
      "use": "node-cf",
      "for": "node-cf",
      "src": "srv"
    },
    {
      "use": "cds-dbm/dist/build/postgres-cf",
      "for": "postgres-cf",
      "src": "db",
      "options": {
        "deployCmd": "npx cds-dbm deploy --load-via delta --auto-undeploy"
      }
    }
  ]
}
The configuration basically specifies two build tasks: one for our server module (node-cf, which is a standard build task provided by @sap/cds) and one to prepare the PostgreSQL-related artifacts (postgres-cf).
The deployCmd option inside postgres-cf will be used on Cloud Foundry to trigger the actual deployment, but it is only required if you want a different behaviour than the default one (which is: npx cds-dbm deploy --load-via delta). For a description of all available options, have a look at the documentation.

We can now trigger the build manually right after we applied the configuration by simply executing the following command:
npx cds build

This should have generated a gen folder in the project, with two subfolders, one for each build task. The PostgreSQL-specific artifacts have been written to gen/db, which contains the following files:

  • csn.json
    The CSN definition of the data model.

  • package.json
    The package.json containing the required dependencies.

  • A shell script
    This script will be executed to trigger the deployment and prepares the environment. It is required because cds-dbm internally relies on the Java-based Liquibase framework.

  • apt.yml
    A required configuration file for the CF apt-buildpack, which is used to add Java to the default Node.js buildpack.

  • manifest.yml
    In case you want to use cf push instead of cf deploy.

  • undeploy.json
    The file listing tables and views that should be removed from the PostgreSQL database (if it exists).

  • data/*.csv
    All relevant .csv files copied from the original db folder that contain default data, which will be imported during deployment based on the defined load strategy.
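To give an idea of how these pieces fit together, the generated gen/db/package.json might look roughly like the following sketch. This is purely illustrative: the actual content is produced by the cds-dbm build task, and the name and version ranges shown here are placeholders, not the real output.

```json
{
  "name": "devtoberfest-db-deploy",
  "dependencies": {
    "@sap/cds": "*",
    "cds-dbm": "*"
  },
  "scripts": {
    "deploy": "npx cds-dbm deploy --load-via delta --auto-undeploy"
  }
}
```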

Since all generated files and folders are ready to be deployed, we can move on to the last preparation step and adjust the mta.yml file.

Update the mta.yml file

The initially generated mta.yml file needs to be adjusted and enhanced in several places:

Update the build command

The first thing to do is to replace the default build command with a custom builder:

build-parameters:
  before-all:
    - builder: custom
      commands:
        - npm install --production
        - npx cds build --production

Define the PostgreSQL service

As a second step, we need to define the PostgreSQL service instance that we want to use. For this, just add the following resources definition:

resources:
  - name: devtoberfest-database
    parameters:
      path: ./pg-options.json
      service: postgresql-db
      service-plan: trial
      skip-service-updates:
        parameters: true
    type: org.cloudfoundry.managed-service

There are two important aspects of this definition:

  • The skip-service-updates parameter is mandatory, since the PostgreSQL service in the trial environments does not allow any updates to the configuration after the initial deployment.

  • The referenced pg-options.json file can contain configuration parameters for the PostgreSQL service. A list of possible values is available in the official documentation.
    Since we want to use version 11 of PostgreSQL, we can just add this to the pg-options.json file:
    {
      "engine_version": "11"
    }

Create the postgres-deployer module

As the next step, we need to define the deployer module that is required to deploy the database schema (tables and views) to the PostgreSQL database. While the general idea is very similar to the default HANA deployment, the deployer module itself is different (thanks again, gregorw, for spending some sleepless nights on this one).
Please add the following definition in your modules section:
Update 01.12.2020
Updated the task's command from ./ to chmod 755 && ./ to support building and deploying from Windows-based systems.
For more information, please look at the comments below.

  - name: devtoberfest-db-deployer
    type: custom
    path: gen/db
    parameters:
      # the apt-buildpack (referenced here via its GitHub URL) plus the standard Node.js buildpack
      buildpacks: [https://github.com/cloudfoundry/apt-buildpack, nodejs_buildpack]
      no-route: true
      no-start: true
      disk-quota: 2GB
      memory: 512MB
      tasks:
        - name: deploy_to_postgresql
          # <deploy-script> stands for the shell script generated into gen/db
          command: chmod 755 <deploy-script> && ./<deploy-script>
          disk-quota: 2GB
          memory: 512MB
    build-parameters:
      ignore: ["node_modules/"]
    requires:
      - name: devtoberfest-database

This basically defines a new module/application that does not use a pre-defined type but relies on some custom configuration:

Instead of one, two buildpacks will be used for the runtime: next to the default Node.js buildpack, we also include the apt-buildpack, which enables us to install Java next to Node.js in the runtime container. And since we only want to trigger the deployment and not keep the application running (consuming precious resources), we apply neither a route (no-route) nor a start script (no-start). Instead, we define a Cloud Foundry task (with an arbitrary name) that calls the generated shell script, which, as mentioned above, sets some required environment variables and then triggers the deployCmd (default or specified in the package.json).

Add the postgres service to the server module

As a final step, we need to add the PostgreSQL service to the server module by simply adding the requires statement:
  # --------------------- SERVER MODULE ------------------------
  - name: devtoberfest-srv
  # ------------------------------------------------------------
    type: nodejs
    path: gen/srv
    provides:
      - name: srv-api # required by consumers of CAP services (e.g. approuter)
        properties:
          srv-url: ${default-url}
    requires:
      # also use our PostgreSQL database
      - name: devtoberfest-database

And that's it: we have applied all the required configuration and are now ready to proceed with the actual deployment.

The final mta.yml file can be viewed here.

Add mbt to the package.json

One last, but important, step that isn't PostgreSQL-specific:
Since we want to deploy our application using the MTA approach, and our app will consist of multiple apps and services, we will leverage the Cloud MTA Build Tool (MBT) provided by SAP to create the required .mtar file. As we are using Node.js, we can simply add the mbt module as a dev dependency in the package.json and trigger the install via npm i from our project folder.
"devDependencies": {
  "mbt": "~1.0.16"
}

Deploying to SAP BTP Cloud Foundry

The deployment part is straightforward: just use one command to trigger the build and another one to deploy to SAP BTP.

Build via mbt build

The first task to be done is to build the whole project and create the .mtar file. Thanks to the Cloud MTA Build Tool, this is just a simple call on the command line:
npx mbt build

When this is done, you should now have a deployable file, which contains all the source code and instructions required to deploy and run the project on SAP BTP: mta_archives/devtoberfest_1.0.0.mtar.

Deploy via cf deploy

Before starting the final step, make sure you are logged into your Cloud Foundry trial space through the command line tools (cf login). Then, just trigger the deployment:
cf deploy mta_archives/devtoberfest_1.0.0.mtar

The deployment will take some time, since (at least in my case) the creation of the PostgreSQL service took several minutes. But in the end you should get the message that everything has been executed and successfully deployed:

  • The PostgreSQL service instance has been created and is available.

  • The devtoberfest-db-deployer module has been created and the task executed to deploy the data model to the PostgreSQL database

  • The devtoberfest-srv module has been created and started, so it should be available as an HTTP endpoint

If you now go to the Cloud Cockpit of your Trial account, you should see the deployed server application and the available, but stopped deployer module.

Now simply open the URL of your server application in the browser and voilà: there are the Devtoberfest projects, deliciously served from your PostgreSQL database through CAP on Cloud Foundry.

Some enhancements

In the devtoberfest project on GitHub, I wrapped the build and deploy calls for convenience in some npm scripts:

  • npm run build:cf will call mbt build internally

  • npm run deploy:cf will call cf deploy internally
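The two wrappers above can be expressed as a scripts section in the package.json, roughly like this (a sketch based on the commands described in this post; the .mtar file name matches the one produced by mbt build earlier):

```json
"scripts": {
  "build:cf": "mbt build",
  "deploy:cf": "cf deploy mta_archives/devtoberfest_1.0.0.mtar"
}
```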

I also removed the database credentials for local development from the package.json file, since we don't want to move them around all the time. On Cloud Foundry, the database credentials get automatically injected during the deployment via environment variables. The local credentials should therefore be placed in a default-env.json file (please also look at the readme of the project on GitHub).
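To make the credential handling tangible, here is a minimal, self-contained sketch (not part of the project) of how a Node.js app could read the injected PostgreSQL credentials from the VCAP_SERVICES environment variable; locally, a default-env.json file provides the same structure. The service label postgresql-db and the credential field names are assumptions for illustration.

```javascript
// Sketch: extract the credentials of a bound PostgreSQL service instance
// from a Cloud-Foundry-style environment. On CF, VCAP_SERVICES is injected
// automatically; locally, tools read an equivalent default-env.json file.
function getPostgresCredentials(env) {
  const services = JSON.parse(env.VCAP_SERVICES || "{}");
  // "postgresql-db" is the assumed service label of the bound instance
  const instances = services["postgresql-db"] || [];
  return instances.length > 0 ? instances[0].credentials : null;
}

// Example with a locally mocked environment:
const mockEnv = {
  VCAP_SERVICES: JSON.stringify({
    "postgresql-db": [
      {
        name: "devtoberfest-database",
        credentials: { hostname: "localhost", port: "5432", username: "postgres" }
      }
    ]
  })
};
console.log(getPostgresCredentials(mockEnv).hostname); // prints "localhost"
```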

Some closing words

In this post I have shown the various steps required to deploy a CAP project with PostgreSQL as a database to SAP BTP Cloud Foundry. And if you already have some experience with CAP and CF deployments using SAP HANA, you should see that only small things differ in the whole process. This even applies to continuous deployments of your application: if you adjust things in your local application and re-deploy, cds-dbm will not drop your already deployed schema and data, but will identify the delta between the existing schema/data and your updated data model and safely alter everything.

While this already sounds and looks very promising, please keep in mind that both cds-pg and cds-dbm are still very young and in active development, and some features are missing to reach parity with the default CAP functionality (see the todo list in the cds-pg project). Nevertheless, I would encourage you to start using both for cases in which you want to take advantage of the power of CAP but where SAP HANA may be oversized or otherwise not suitable as the persistence layer.

And if you encounter some bugs or identify missing features, please reach out to the developers on GitHub and file the respective issues... or even better, since the libraries are Open Source, get your hands dirty and contribute directly to the projects!