Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
A customer was running a heavy workload that needed to combine data from PostGIS and SAP HANA Cloud. A first step before exploring data replication and federation is to colocate the two inside the same account on the SAP Business Technology Platform.

The SAP BTP service PostgreSQL, hyperscaler option is an abstraction of the native PostgreSQL services of AWS, Azure, GCP, and Alibaba Cloud. These hyperscaler services let you pick the engine version and activate extensions such as PostGIS. It is also possible to select high availability and get a clustered instance spanning more than one availability zone.

The goal of this post is to complement the service documentation with sample commands instead of just the API specification.

Instances can be created from Cloud Foundry or from Kyma. They are only accessible from the environment in which they have been provisioned. To expose the instance to the outside world, consider hosting middleware in Kyma or Cloud Foundry that serves the data over a web protocol (for example, map services consumed by an OpenLayers client).


Provisioning the instance is well documented. As for any service, entitlements to the service plan must exist on the global account, and quotas must be added to the subaccount.

Activating the PostGIS extension (with Cloud Foundry)

Activating the extension requires the Cloud Foundry command line and curl to call the extension API.

You will first need the Cloud Foundry API URL of the subaccount.

Cloud Foundry API url in the SAP BTP cockpit

Initiate the connection to the Cloud Foundry API, substituting the API URL from the cockpit:
cf login --sso -a <cf-api-url>

This should open a browser page, authenticate you using your SSO, and produce a one-time login code that you copy-paste into the command line. You will then be prompted to select the organization and space where the PostgreSQL service is provisioned.

For this blog, the postgres service was provisioned with
cf create-service postgresql-db development pgdemo -c '{"engine_version":"14"}'

You can check the progress of the provisioning operation with
cf service pgdemo
Showing status of last operation:
status: create in progress

And a few minutes later
   status: create succeeded
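Rather than re-running cf service by hand, a small polling helper can wait for the operation to finish. This is a sketch: the 30-second interval and the grep on the status line are assumptions based on the cf CLI output shown above.

```shell
# Poll `cf service <name>` until the last operation succeeds or fails.
wait_for_service() {
  local name="$1" status
  while true; do
    # Extract the first "status:" line from the service details.
    status=$(cf service "$name" | grep 'status:' | head -n1)
    case "$status" in
      *succeeded*) echo "$name: $status"; return 0 ;;
      *failed*)    echo "$name: $status"; return 1 ;;
      *)           sleep 30 ;;  # still in progress, try again later
    esac
  done
}

# usage: wait_for_service pgdemo
```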

Now we need to fetch the GUID and database name of the new service. The following commands extract this information into shell variables.
guid=$(cf service pgdemo --guid)

cf create-service-key pgdemo access_postgis_key
cf service-key pgdemo access_postgis_key \
| tail -n +2 \
| jq > access_postgis_key.json
dbname=$(jq -r '.credentials.dbname' access_postgis_key.json)
pghost=$(jq -r '.credentials.hostname' access_postgis_key.json)
pgport=$(jq -r '.credentials.port' access_postgis_key.json)
pguser=$(jq -r '.credentials.username' access_postgis_key.json)
pgpass=$(jq -r '.credentials.password' access_postgis_key.json)
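Before going further, it is worth checking that every credential was actually extracted; an empty variable usually means the key file has a different layout. A minimal bash sanity check (the variable names match the ones set above):

```shell
# Verify that all five connection variables are non-empty (bash indirection).
check_pg_creds() {
  local missing=0 v
  for v in dbname pghost pgport pguser pgpass; do
    if [ -z "${!v}" ]; then
      echo "WARNING: $v is empty" >&2
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}

# usage: check_pg_creds && echo "credentials look complete"
```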

Next we prepare authentication against the service broker. The extension API endpoint is region-specific, so replace eu10 with the region of your Cloud Foundry API.
token=$(cf oauth-token)

And we activate the PostGIS extension by calling the extension API. The final argument is the region-specific extension endpoint for this instance (it is built from the region and the ${guid} fetched above; see the service documentation for the exact URL):
curl -X PUT \
-H 'content-type: application/json' \
-H "Authorization: ${token}" \
-d "{\"database\":\"${dbname}\"}" \
"<extension-api-url>"


Postgres can be accessed directly from applications in BTP. From a laptop, we need to establish a tunnel.

Create a dummy application called ssh_pgdemo. Execute the 4 cf commands one by one to better control the result.
mkdir dummydir
cd dummydir
touch onefile
cf push ssh_pgdemo -b staticfile_buildpack -m 64m
cf enable-ssh ssh_pgdemo
cf restart ssh_pgdemo
cf ssh -L 63306:$pghost:$pgport ssh_pgdemo

The last command should open a prompt inside the dummy application; the tunnel is now up.
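With the tunnel open, you can already connect from a second terminal on the laptop. A small helper, a sketch assuming the local port 63306 chosen above, builds the connection URI for psql:

```shell
# Build a libpq connection URI that targets the local tunnel endpoint.
# Note: a password containing URI-special characters would need
# percent-encoding before being embedded like this.
pg_tunnel_uri() {
  printf 'postgresql://%s:%s@localhost:63306/%s' "$1" "$2" "$3"
}

# usage (in a second terminal, while the tunnel is open):
#   psql "$(pg_tunnel_uri "$pguser" "$pgpass" "$dbname")"
```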

As a final validation, we import some OpenStreetMap data using GDAL. The source file here is an assumption: a small extract such as andorra-latest.osm.pbf downloaded from Geofabrik.
ogr2ogr -f PostgreSQL \
PG:"dbname='${dbname}' host='localhost' port='63306' user='${pguser}' password='${pgpass}'" \
andorra-latest.osm.pbf
0...10...20...30...40...50...60...70...80...90...100 - done.

This creates and populates tables in the public schema with OSM data for the microstate of Andorra in the newly created PostGIS instance running on SAP BTP 👍
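To verify that the import worked, you can query the new tables through the same tunnel. This sketch assumes psql is installed locally; points, lines, and multipolygons are the default layer names produced by ogr2ogr's OSM driver.

```shell
# Run a couple of verification queries against the tunneled instance.
# $1: a psql connection string or URI.
run_postgis_check() {
  psql "$1" <<'SQL'
SELECT PostGIS_Version();
SELECT count(*) AS polygon_count FROM multipolygons;
SQL
}

# usage: run_postgis_check "postgresql://${pguser}:${pgpass}@localhost:63306/${dbname}"
```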


If you provisioned the instance from Kyma, you can keep a small pod running to interactively execute SQL commands and monitor your application.

In the namespace where the service binding secret (here pgauthdata) exists, create a pod with the postgres client program and mount the credentials into a folder:
cat <<EOF | kubectl apply -n your_namespace -f -
apiVersion: v1
kind: Pod
metadata:
  name: pgclient
spec:
  volumes:
  - name: pgcreds
    secret:
      secretName: pgauthdata
  containers:
  - name: pgclient
    image: postgres:14-bullseye
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10; done"]
    volumeMounts:
    - name: pgcreds
      mountPath: /tmp/pgcreds
EOF

pod/pgclient created
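Before opening a shell, it helps to wait until the pod is actually Ready rather than exec-ing into it immediately; a small sketch around kubectl wait, using the namespace and pod name from above:

```shell
# Block until the pod reports the Ready condition (or time out).
# $1: namespace, $2: pod name
wait_for_pod() {
  kubectl wait -n "$1" --for=condition=Ready "pod/$2" --timeout=120s
}

# usage: wait_for_pod your_namespace pgclient
```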

Then open a shell to the pod and use the psql command. You can then type SQL commands.
kubectl exec -n your_namespace -ti pgclient -- /bin/bash
root@pgclient:/# psql -h $(</tmp/pgcreds/host) -p $(</tmp/pgcreds/port) "dbname='$(</tmp/pgcreds/database)' user=$(</tmp/pgcreds/user) password='$(</tmp/pgcreds/password)'"

psql (14.10 (Debian 14.10-1.pgdg110+1), server 13.11) SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
<your_dbname>=> select * from pg_tables limit 3;

schemaname | tablename | tableowner | tablespace | hasindexes | hasrules | hastriggers | rowsecurity
pg_catalog | pg_statistic | rdsadmin | | t | f | f | f
pg_catalog | pg_type | rdsadmin | | t | f | f | f
public | langchain_pg_collection | dbo | | t | f | t | f
(3 rows)

<your_dbname>=> exit
root@pgclient:/# exit