Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
The main job of SAP Cloud Integration (CPI) is to pass through and process data from connected systems, thereby integrating business processes that span multiple applications or systems.

The built-in storage options mainly cover temporary data (for example, the JMS adapter or the data store, where the maximum retention time is 180 days). In many scenarios, however, you might also need to store data permanently. With the recently developed JDBC (Java Database Connectivity) adapter, you can connect an integration flow to a database and execute SQL operations on it.

In this blog, I will give you a step-by-step description of how to set up a simple integration scenario using a JDBC adapter.

If you are interested in a concise summary of all steps required to set up the connection to an ASE database system with CPI using the JDBC adapter, check out the following SAP Community blog: Configuring JDBC Adapter: A Step-by-Step Guide

For those who are curious to know what else you can do with the JDBC adapter, as a follow-up to this blog, I recommend checking out the following one: Cloud Integration – Using the JDBC Adapter to Implement a Robust Scenario that Handles an Exception.

The Scenario

In SAP Help Portal you find the description of a simple “getting started” integration flow that reads product information from an external data source (a Webshop) and processes that data further (see HTTPS Adapter: Example Integration Flow). In the scenario, an HTTP client sends a request to CPI that contains a productIdentifier value. In a subsequent OData call, product data for this productIdentifier value is retrieved and sent back to the HTTP client in the response (additionally, the integration flow sends the product information to an email account to showcase the Mail adapter). In this blog, I will show you how to enhance this scenario to include a JDBC database.

When you have finished modelling, the integration flow will do the following: Each time the integration flow has retrieved data for a specific product from the catalog, it stores a corresponding data record in a database table. The following rule is applied: If a corresponding record already exists in the table, nothing is written to the table. If no such record exists, a corresponding entry is written (by the JDBC adapter).

In a subsequent step, the integration flow reads all records from this table and sends the result back to the sender (and, in addition, forwards it to an email receiver). With each new HTTP request (with a productIdentifier value not used before), a new record is added to the database table.

That way, with each request you get back the requested product data in addition to the product data from all previous requests (providing a “history” of past product requests).

The following figure shows the high-level setup of involved components.

Set Up the Connection to the ASE Database System

I assume that, as a prerequisite, you have been provided with an ASE database system on an SAP Business Technology Platform subaccount (SAP ASE Service). You can find the database system and schema in SAP Business Technology Platform cockpit. Go to the related subaccount (that hosts the database system) and under SAP HANA / SAP ASE choose Database Systems.

Create a Database Schema

With the following steps, you first create a database schema.

  1. In SAP Business Technology Platform cockpit go to the subaccount that hosts the database system and choose SAP HANA / SAP ASE > Databases & Schemas.

  2. Choose New.

  3. Enter a Database ID (for example: mydbschema). You need this ID later when generating an access token to grant access to the database schema.

  4. As Database User, enter a user name (for example, mydbuser). You need this name in a later step when configuring the JDBC data source on the CPI tenant. Enter a password for the database user (which you also need to note for later reference).

  5. Click Create.

Grant the CPI Runtime Node Access to the Database

As you want to access this database from a CPI runtime node (where the integration flow is deployed), you need to grant the subaccount (associated with the CPI runtime node) permission to access the database. To do that, you first need to generate an access token.

To generate the access token, you need an SAP SDK command line tool that you can download from the SDK download page (download the following SDK: Java EE 7 Web Profile TomEE 7).

When you have downloaded and unpacked the SDK, make sure that the environment variables of your computer are set so that the PATH variable points to the folder that contains the SDK. The following figure shows an example configuration for Windows.

Before you can generate the access token with the SDK command line tool, you need to find out certain parameters:

  • The name of the subaccount that hosts the database: Open SAP Business Technology Platform cockpit, go to the subaccount and choose Overview. You can find the name under Technical Name.

  • The ID of the database schema. To find this information, in SAP Business Technology Platform cockpit, go to the subaccount that hosts the database and choose SAP HANA / SAP ASE > Databases & Schemas.

  • The technical name of the provider subaccount of the CPI runtime node: In SAP Business Technology Platform cockpit, open the subaccount that hosts the CPI runtime node and choose: Applications > Subscriptions.
    Look for the row that contains an entry ending with iflmap.
    The entry in that row in column Provider Subaccount is the provider subaccount of the CPI runtime node, whereas the entry in that row in column Application is the CPI runtime node application name. You need both entries later when generating the access token.

Open the command line tool. Navigate to the tools folder of the SDK you have downloaded before. Using the command line tool, enter the following command (with the values for the parameters in angle brackets as explained above):
neo grant-schema-access --account <subaccount name> --host <host> --user <user required to access the subaccount that hosts the database> --id <ID of the database schema> --application <technical name of provider subaccount of the CPI runtime node>:<CPI runtime node application>

neo grant-schema-access --account abcd12345 --host --user user1 --id mydbschema --application defghi:a1234iflmap

Enter the password of user1. Note that this is the user to access the subaccount, not the one defined for the database schema (in our example, mydbuser). The latter will be required at a later step.

Important: As mentioned above, the access token (to grant access to the database) needs to be generated for the runtime node (on which, finally, the integration flow that connects to the database using the JDBC adapter is processed). Therefore, make sure that in the command you specify the CPI runtime node application. For historical reasons, the corresponding ID ends with iflmap (in the example command, this is the entry defghi:a1234iflmap).

Do not enter any application ID here that ends with tmn, because in this case you specify the tenant management node (tmn) application, which is the virtual machine that acts as an agent between the human user and the runtime components of SAP Cloud Integration. The tmn application is not responsible for processing integration flows at runtime. If you enter the tmn application, you will get a deployment error later when trying to deploy the JDBC Data Source artifact with the token generated this way (as described below under Create the JDBC Data Source (CPI Tenant)). For more information on the architecture, see Technical Landscape (Neo).

The access token is provided by the command line tool:
Access token to bind application 'defghi:a1234iflmap' to schema 'mydbschema': <access token>

Copy the access token to the clipboard or a text editor (you need it later when creating the JDBC data source on the CPI tenant).

Create the JDBC Data Source (CPI Tenant)

To connect an integration flow with the JDBC adapter to a data source, you first need to create a JDBC data source artifact. This artifact will be used to store the access data for the database (as defined and generated in the previous steps). When you later design the integration flow, you then only need to point to the alias of the JDBC data source in the JDBC adapter. No additional configurations are required in the integration flow.

  1. Open the Web UI for the CPI tenant (https://<tenant address>/itspaces).

  2. Go to the Monitor view and choose the tile JDBC Material.

  3. Choose Add.

  4. Specify the following settings:

    Name: Enter any name of your choice for the artifact. You need to specify this name later in the JDBC adapter. Note that the name needs to be unique across the tenant.
    Description: Enter any description of your choice.
    Database Type: Select SAP ASE service.
    User: Enter the user defined for the database schema (mydbuser in our example).
    Password: Enter the password specified when defining the database schema.
    Database ID: Enter the name of the database schema (mydbschema in our example).
    Access Token: Enter the access token provided by the SDK command above.

  5. Click Deploy.

In case you have forgotten the access token, you can display it (without generating a new token) by using the following command (using the command line tool):
neo list-schema-access-grants --host <host> --account <subaccount_technical_name> --user <e-mail_or_user>

Example (with the same data as used above when generating the token):
neo list-schema-access-grants --host --account abcd12345 --user user1

This might also be necessary in case you want to create a new JDBC Data Source artifact using the same database access data at a later point in time.

You have now done all the work needed to enable your CPI tenant to connect to the database. As a next step, you create certain integration flows that access the database. All you have to do in order to connect to the database is refer to this JDBC data source artifact. The required credentials and access token are stored in the artifact, and you don’t need to re-enter them.

Initially Create the Database Table

Let’s first check whether the connection to the database works. To do that, create a simple integration flow that does nothing other than create a database table in the ASE database. Conveniently, this integration flow creates the very table that is required for the demo scenario later on.

For the demo scenario, you need a database table that contains the following columns (to store product ID and name for each product retrieved from the product catalog):

  • productId

  • productName

Let’s get started.

  1. In the Design view of the CPI Web UI, open any integration package and create a new integration flow.

  2. Model the following integration flow:

  3. For the Timer event, under Scheduler, select Run Once.

  4. Click on the JDBC adapter and as JDBC Data Source Alias enter the name of the artifact just created before. Keep the other default settings.

  5. Now configure the Content Modifier.

At this step, let me point out the following: As soon as you have connected a CPI tenant with a JDBC data source, you can inject any SQL statement you like through the integration flow. In this example, you use a Content Modifier to create a message body that contains a simple SQL statement to create a database table. Note that CPI also supports the SQL XML format that was already introduced with the SAP Process Integration JDBC adapter (more information in the documentation sources mentioned at the end of this blog).
Click the Content Modifier shape, go to the Message Body tab and enter the following:

CREATE TABLE products (productId varchar(255), productName varchar(255))

With this command, a table with the name products and two columns (productId and productName) is created. I propose using the varchar data type, as the values of the fields from the product catalog can have variable length. The following figure shows the Content Modifier settings.

After you have saved and deployed the integration flow, the Timer start event will trigger exactly one message processing run during which the table will be created. You can check in the Operations view under Monitor Message Processing if everything went well.
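If you want to sanity-check this DDL statement before deploying the integration flow, you can run it locally, for example against an in-memory SQLite database. SQLite is used here purely as a local stand-in for illustration; the actual target in the scenario is the ASE database.

```python
import sqlite3

# Local sanity check of the CREATE TABLE statement from the Content Modifier.
# SQLite also accepts the varchar(255) column definitions used for ASE.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE products (productId varchar(255), productName varchar(255))"
)

# Inspect the resulting table structure (column names come back in order).
columns = [row[1] for row in conn.execute("PRAGMA table_info(products)")]
print(columns)  # ['productId', 'productName']
```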

Create Integration Flow to Access the Database Table

Our demo integration flow will process data (retrieved from a product catalog) in the way illustrated in the figure at the beginning of this blog post.

To implement this scenario, you now modify the example integration flow (described under HTTPS Adapter: Example Integration Flow) by adding steps to include access to the database table.

  1. Model the following integration flow (as described in the above-mentioned documentation section):

  2. To keep it simple, configure the OData receiver adapter (through which the product data is retrieved from the Webshop) in such a way that you enter this simple select statement (as you are only interested in the product attributes ProductId and Name):
    $select=ProductId,Name&$filter=ProductId eq '${header.productIdentifier}'

    The figure shows the OData adapter configuration (Processing tab):

  3. Add another Content Modifier shape after the Request Reply step (that sends the request to the Webshop). In the Message Header tab of the Content Modifier, add a header with the following parameters:

    Parameter Enter or select the following …
    Name Enter: productName
    Type Choose: XPath
    Data Type Enter: java.lang.String
    Value Enter: //Name

    The defined header will contain the value of the Name field of the response retrieved from the Webshop (for the given productIdentifier).

  4. To the right of this Content Modifier, add another Content Modifier, and in the Message Body tab enter:
    IF NOT EXISTS (SELECT 1 FROM products WHERE productId = '${header.productIdentifier}')


    INSERT INTO products (productId, productName) VALUES ('${header.productIdentifier}', '${header.productName}')


    The figure shows the Content Modifier settings:

    This SQL command first checks whether a record already exists in the table for the productIdentifier value contained in the message header and, if this is not the case, updates the table by adding the product record associated with this value.

  5. To finally execute this SQL statement on the database table, add a Request Reply step next to the Content Modifier and connect it to an additional receiver through the JDBC adapter. In the JDBC adapter, as JDBC Data Source Alias, enter the name of the artifact created before.
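As a side note, the XPath header definition from step 3 can be tried out locally. The following sketch uses Python's ElementTree with an invented, namespace-free stand-in for the Webshop response (the real OData payload carries namespaces and more fields, but the //Name expression works the same way):

```python
import xml.etree.ElementTree as ET

# Simplified, invented stand-in for the OData response returned by the Webshop.
payload = """<Products>
  <Product>
    <ProductId>HT-1080</ProductId>
    <Name>Notebook Basic 19</Name>
  </Product>
</Products>"""

root = ET.fromstring(payload)
# ElementTree's limited XPath support is enough to evaluate //Name:
product_name = root.find(".//Name").text
print(product_name)  # Notebook Basic 19
```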
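You can also rehearse the insert-if-absent guard from step 4 locally. ASE accepts the IF NOT EXISTS (...) INSERT ... syntax shown above; SQLite (used here only as a local stand-in) needs the equivalent INSERT ... SELECT ... WHERE NOT EXISTS form, so that variant is used below. The product values are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE products (productId varchar(255), productName varchar(255))"
)

def insert_if_absent(product_id, product_name):
    # Same guard as the integration flow: write the record only if no row
    # with this productId exists yet.
    conn.execute(
        "INSERT INTO products (productId, productName) "
        "SELECT ?, ? WHERE NOT EXISTS "
        "(SELECT 1 FROM products WHERE productId = ?)",
        (product_id, product_name, product_id),
    )

insert_if_absent("HT-1080", "Notebook Basic 19")
insert_if_absent("HT-1080", "Notebook Basic 19")  # no-op: record already exists

rows = conn.execute("SELECT productId, productName FROM products").fetchall()
print(rows)  # a single row, despite two calls
```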

The integration flow should now have the following appearance:

Finally, model an additional database access step where the whole content of the table is read out.

  1. First, add another Content Modifier where you specify the SELECT statement.
    SELECT productId,productName FROM products

    The figure shows the Content Modifier settings:

  2. To finish this step, add another Request Reply step and connect it to an additional receiver – again, as you might guess, with a JDBC adapter that is configured in the same way.

The integration flow should now have the following appearance:

Let’s run the integration flow to see if it works. As explained under Send the HTTP Request and Process the Integration Flow, provide a JSON message with a productIdentifier value of HT-1080, and your HTTP client (I used Postman) will get back a response like the following one (and the email receiver will get a corresponding message as well):

You notice that the JDBC adapter provides the content from the database in a specific format as described under Defining XML Documents for Message Protocol XML SQL Format.

Now, post another HTTP request, and as value for productIdentifier enter HT-1081. You will get this response.

If you make another call specifying HT-1081 for productIdentifier, you will get the same response (as the integration flow checks whether the record exists and updates the table only if it does not).

More Information

I would like to thank Suyash Mani Sharma for providing valuable help and guidance through this topic.