Integration Between SAP CPI and SAP DataSphere (JDBC Connection)
JDBC – Java Database Connectivity
Why JDBC Is Recommended Over the OData API:
JDBC is recommended over OData when consuming large record volumes (e.g., 100,000+ records) because JDBC streams data directly from the database with better performance and less overhead, while OData is optimized for lightweight, paginated, service-based access.
Problem statement: SAP Note 3337495 – OData API returns fewer records than expected due to paging
Pagination limits in OData and Ariba APIs can be handled in SAP CPI using a looping process call. I’ll cover this with a clear explanation in an upcoming post.
Goal: Connect CPI via JDBC to the database underlying DataSphere and run a simple read from an Analytical Model, Table, or View.
For Write, Update, and Delete operations, the SAP Help Portal link in the References section covers the syntax.
Prerequisites:
SAP DataSphere Step-by-Step Guide:
| Step | Action / Notes |
|---|---|
| 1. Create a Space | DataSphere → Space Management → New Space → name it → Create. |
| 2. Create Table / Analytical Model | Data Builder → in your Space → New → Table or Analytical Model → define fields & data types → Save & Publish. |
| 3. Prepare / Load Data | Load data manually for test cases, or import a CSV into the table via Data Builder / Data Integration. |
| 4. Note Schema & Object Names | Record the schema name, table names, and view names for use in JDBC SQL. |
| 5. Decide Where to Create the DB User | If HANA Cloud → use HANA Cockpit / Database Explorer. If on-premise DB → use DB admin tools or contact your DB admin. |
| 6. Create the JDBC DB User | DB admin tool → Security/Users → New User → set username & strong password → Save. |
| 7. Grant Privileges to the DB User | Assign only the required privileges (e.g., SELECT for read; add INSERT/UPDATE/DELETE for CRUD). Best practice: create a role such as JDBC_ROLE and assign it. |
| 8. Prepare JDBC Connection Details | Gather the JDBC URL (e.g., a sample host from DataSphere: z*********-abc.hana.prod-eu10.hanacloud.ondemand.com). |
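Steps 6 and 7 above can be sketched in SQL for a HANA Cloud database. This is a minimal example; the user name, password, role name, and schema name are all placeholders:

```sql
-- Create a dedicated technical user for the CPI JDBC connection
-- (user name and password are placeholders)
CREATE USER CPI_JDBC_USER PASSWORD "Str0ngP@ssw0rd!" NO FORCE_FIRST_PASSWORD_CHANGE;

-- Best practice: bundle privileges in a role and grant the role
CREATE ROLE JDBC_ROLE;

-- Read-only access to the space schema (schema name is a placeholder)
GRANT SELECT ON SCHEMA "MY_SPACE" TO JDBC_ROLE;

-- Add write privileges only if CRUD operations are required:
-- GRANT INSERT, UPDATE, DELETE ON SCHEMA "MY_SPACE" TO JDBC_ROLE;

GRANT JDBC_ROLE TO CPI_JDBC_USER;
```

Keeping the grants on a role rather than the user makes it easier to reuse the same privilege set for additional integration users later.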
SAP Integration Suite Step-by-Step Guide:
1. Create a Package & Artifact
2. Go to Monitor → Manage Security → JDBC Material and deploy a JDBC Data Source
3. Use the deployed JDBC Material (Data Source alias) in the iFlow
Step 1: Timer Start
In this iFlow, the Timer Start event is configured with Simple Schedule → None → On Deployment, so the integration flow triggers automatically right after deployment.
Step 2: Content Modifier
Set the following SQL query as the message body to fetch all records:

```sql
SELECT * FROM "<Schema>"."<Model/TableName>"
```
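As noted earlier, Write/Update/Delete operations use the JDBC adapter's XML document format in the body rather than plain SQL. A minimal sketch of an INSERT payload is shown below; the wrapper element, table, and column names are placeholders, and the full syntax is in the SAP Help Portal link in the References section:

```xml
<root>
  <!-- StatementName is an arbitrary wrapper element for one statement -->
  <StatementName>
    <dbTableName action="INSERT">
      <!-- Target table in the space schema (placeholder names) -->
      <table>"MY_SPACE"."MY_TABLE"</table>
      <access>
        <!-- One element per column to insert -->
        <ID>1001</ID>
        <NAME>Sample Record</NAME>
      </access>
    </dbTableName>
  </StatementName>
</root>
```

UPDATE and DELETE follow the same structure with a different `action` attribute and an additional key block to identify the affected rows.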
Step 3: Request Reply & JDBC Receiver Adapter
→ Select the JDBC Data Source alias deployed as JDBC Material in the previous step, and set the maximum record count based on your requirement.
→ JDBC maximum records per call: 2,147,483,647 (the maximum value of a 32-bit signed integer).
Sample data response from the JDBC connection:
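For a SELECT query, the adapter returns the result set as XML with one element per row. This is an illustrative sketch only; the exact root element and the column names depend on the query, the source object, and the adapter version:

```xml
<!-- Illustrative shape only: column elements (here ID, NAME) are
     assumptions taken from the placeholder table above -->
<results>
  <row>
    <ID>1001</ID>
    <NAME>Sample Record</NAME>
  </row>
  <row>
    <ID>1002</ID>
    <NAME>Another Record</NAME>
  </row>
</results>
```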
References:
The same blog by me on SAP Community, with better picture quality: Integration of SAP CPI and SAP DataSphere using JD... - SAP Community
CPI JDBC – XML Query in Body for CRUD Operations (Syntax Guide):
https://help.sap.com/docs/cloud-integration/sap-cloud-integration/payload-and-operation