Technology Blog Posts by Members

Integration Between SAP CPI and SAP DataSphere (JDBC Connection)

JDBC – JAVA DATABASE CONNECTIVITY

Why JDBC Is Recommended Over the OData API:
JDBC is recommended over OData when consuming large record volumes (e.g., 100,000+) because JDBC streams data directly from the database with better performance and less overhead, while OData is optimized for lightweight, paginated, service-based access.

Problem statement: SAP Note 3337495 – OData API returns fewer records than expected due to paging
Pagination limits in the OData and Ariba APIs can be handled in SAP CPI using a Looping Process Call. I'll cover this with a clear explanation in an upcoming post.
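Until that post is out, the looping idea can be sketched in plain Python: keep requesting pages (think OData $skip/$top) until a short page signals the end, much like a CPI Looping Process Call repeating until its exit condition is met. All names below are hypothetical.

```python
def fetch_all(fetch_page, page_size=200):
    """Collect all records from a paginated source.

    fetch_page(skip, top) stands in for one OData call using
    $skip/$top; the loop mirrors a CPI Looping Process Call that
    repeats until a page returns fewer rows than requested.
    """
    records, skip = [], 0
    while True:
        page = fetch_page(skip, page_size)
        records.extend(page)
        if len(page) < page_size:  # short page -> no more data
            break
        skip += page_size
    return records

# Demo with an in-memory "service" of 450 rows and page size 200.
data = list(range(450))
calls = []

def fake_page(skip, top):
    calls.append(skip)
    return data[skip:skip + top]

assert fetch_all(fake_page) == data  # all 450 rows retrieved
assert calls == [0, 200, 400]        # three calls, three loop iterations
```

The same exit condition (returned rows < requested rows) is what you would check in the CPI loop's condition expression.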

Goal: Connect CPI via JDBC to the database underlying DataSphere and run a simple read against an Analytical Model, Table, or View.

For Write/Delete/Update operations, the SAP Help Portal link in the References section covers the required syntax.

Prerequisites:

  1. SAP DataSphere – Subscribed account (⚠ the trial has limited features; JDBC is not supported)
  2. SAP Integration Suite – Subscribed or Trial (JDBC actions supported)

 

SAP DataSphere Step-by-Step Guide:


1. Create a Space

DataSphere → Space Management → New Space → Name it → Create.

2. Create Table / Analytical Model

Data Builder → In your Space → New → Table or Analytical Model → define fields & data types → Save & Publish.
*Verify that the Table/Model deployed successfully.*

3. Prepare / Load Data

Load data manually for testing cases. Otherwise load CSV/import to table via Data Builder/Data Integration.

4. Note Schema & Object Names

Record schema name, table name, and view names for JDBC SQL use.
*Note: the Space name you created is the SCHEMA name; also note the Table/Model names.*

5. Decide Where to Create DB User

If HANA Cloud → use HANA Cockpit/DB Explorer. If on-prem DB → use DB admin tools or contact DB Admin.
*We are using the HANA Cloud system for this practical session.*

6. Create JDBC DB User

DB admin tool → Security/Users → New User → set username & strong password → Save.
*Check that the user's status is Active.*

7. Grant Privileges for the DB user

Assign only the required privileges (e.g., SELECT for read; add INSERT/UPDATE/DELETE for full CRUD). Best practice: create a role such as JDBC_ROLE, grant the privileges to it, and assign it to the user.
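As a sketch of that least-privilege setup (role, user, schema, and object names below are placeholders; the statements follow standard SAP HANA SQL, so verify them against your database version):

```python
def grant_statements(role, user, schema, objects, privileges=("SELECT",)):
    """Build HANA SQL for a least-privilege JDBC role.

    Creates a role, grants only the listed privileges on each object,
    then assigns the role to the dedicated JDBC user.
    """
    stmts = [f"CREATE ROLE {role};"]
    for obj in objects:
        for priv in privileges:
            stmts.append(f'GRANT {priv} ON "{schema}"."{obj}" TO {role};')
    stmts.append(f"GRANT {role} TO {user};")
    return stmts

for s in grant_statements("JDBC_ROLE", "CPI_JDBC_USER",
                          "MY_SPACE", ["SALES_MODEL"]):
    print(s)
# CREATE ROLE JDBC_ROLE;
# GRANT SELECT ON "MY_SPACE"."SALES_MODEL" TO JDBC_ROLE;
# GRANT JDBC_ROLE TO CPI_JDBC_USER;
```

For a read-only CPI scenario, SELECT alone is enough; extend the `privileges` tuple only if the iFlow also writes.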

8. Prepare JDBC Connection Details

Gather the database host from DataSphere (e.g., z*********-abc.hana.prod-eu10.hanacloud.ondemand.com) and format it for the CPI JDBC material as:
jdbc:sap://<host>:<port>/?encrypt=true&validateCertificate=true
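A tiny helper (hypothetical names; the host below is a made-up placeholder) that turns the host copied from DataSphere into the URL format the CPI JDBC material expects:

```python
def jdbc_url(host, port=443, encrypt=True, validate_cert=True):
    """Build a SAP HANA JDBC URL in the format used by the CPI JDBC material."""
    params = (f"encrypt={str(encrypt).lower()}"
              f"&validateCertificate={str(validate_cert).lower()}")
    return f"jdbc:sap://{host}:{port}/?{params}"

print(jdbc_url("zXXXXXXXX-abc.hana.prod-eu10.hanacloud.ondemand.com"))
# jdbc:sap://zXXXXXXXX-abc.hana.prod-eu10.hanacloud.ondemand.com:443/?encrypt=true&validateCertificate=true
```

Port 443 is the default shown later for HANA Cloud; adjust it if your landscape differs.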


SAP Integration Suite Step-by-Step Guide:

Create a Package & Artifact

  • In CPI → Design → Create a new package → Add an integration flow artifact.
  • Make sure your CPI user has the required roles to create and access design-time artifacts.

Go to Monitoring → JDBC Material

  • In CPI → Monitor → Integrations and APIs → Manage Security → JDBC Material

     

  • Add JDBC Data Source → select HANA Cloud.

     



  • Provide JDBC URL in the correct format (e.g., jdbc:sap://<hana-host>:443/?encrypt=true&validateCertificate=true).

     

  • Enter DB username and password (use the dedicated JDBC user created earlier in DataSphere).

     

  • Save and deploy the JDBC material.

Apply JDBC Material in iFlow

  • In your integration flow, configure the JDBC receiver adapter → select the JDBC data source created earlier.
  • Use SQL queries (e.g., SELECT) in the Processing tab, or provide an XML query body. Here, I'm using SELECT * to fetch all records from the table.


 


Step 1: Timer Start
 In this iFlow, the Start Timer is configured with a Simple Schedule → None → On Deployment, which means the integration flow automatically triggers immediately after deployment.


 


Step 2: Content Modifier
Set the message body to this SQL query to fetch all records:
 SELECT * FROM "<Schema>"."<Model/TableName>"
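Double-quoted HANA identifiers are case-sensitive, and the DataSphere Space name acts as the schema, so quoting both parts exactly avoids "invalid table name" errors. A small sketch (schema and model names are placeholders):

```python
def select_all(schema, obj):
    """Build the SELECT statement used in the Content Modifier body.

    Double quotes preserve the exact (case-sensitive) names of the
    DataSphere Space (= schema) and the table/model; embedded quotes
    are doubled, per SQL identifier-escaping rules.
    """
    def quote(ident):
        return '"' + ident.replace('"', '""') + '"'
    return f"SELECT * FROM {quote(schema)}.{quote(obj)}"

print(select_all("MY_SPACE", "SALES_MODEL"))
# SELECT * FROM "MY_SPACE"."SALES_MODEL"
```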


 



Step 3: Request Reply & JDBC Receiver Adapter
→ In the JDBC receiver adapter, use the alias of the JDBC Data Source deployed in the previous step, and set the maximum record count based on your requirement.
→ JDBC maximum records per call: 2,147,483,647
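That ceiling is not arbitrary: it is the largest 32-bit signed integer (Java's Integer.MAX_VALUE), which is why the adapter cannot be configured any higher:

```python
# The adapter's cap on records per call equals the 32-bit signed
# integer maximum, i.e. Java's Integer.MAX_VALUE.
print(2**31 - 1)
# 2147483647
```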


 


Sample data Response from JDBC Connection:


References:
Same blog with clearer picture quality: Integration of SAP CPI and SAP DataSphere using JD... - SAP Community
CPI JDBC – XML Query in Body for CRUD Operations (Syntax Guide):
https://help.sap.com/docs/cloud-integration/sap-cloud-integration/payload-and-operation

3 Comments
swatisonawane
Explorer

Hi @MUGILAN_KANAGARAJ 

Thanks for the detailed blog information!

I have one question: Where should we create the empty table—within DataSphere or at the HANA DB level—to load the data into DataSphere?

Best regards,

Swati

Hi @swatisonawane 
Thanks for reaching out!

For each scenario, the table should be created in DataSphere.
For GET operations from CPI, you can create an analytical model, view, or table within DataSphere.
For POST operations from CPI, the table must be created under the space user in DataSphere to receive the data.

Here, the HANA DB mentioned refers to the database on which DataSphere is mounted. This blog focuses on fetching data from DataSphere; the References section links to the SAP Help Portal page that explains how to post data from CPI to DataSphere using the XML query format.

Thanks and Regards,
Mugilan K

swatisonawane
Explorer

Hi @MUGILAN_KANAGARAJ ,

Thanks for the clarification.

Best Regards,

swati