In the ever-evolving landscape of enterprise solutions, seamless data integration has become a key driver of efficiency and informed decision-making. A customer's SAP landscape may contain a mix of SAP solutions that serve the purposes of transactional data storage and intelligent data processing. Usually, there is a need for a robust integration between SAP solutions to access the data in the system of record (e.g., SAP Ariba) and process it in the analytical layer (e.g., SAP Analytics Cloud).
This blog discusses the integration of the SAP Ariba REST API with SAP HANA Cloud, SAP Datasphere, and SAP Analytics Cloud. The solution uses SAP Business Technology Platform (BTP) and a Cloud Application Programming Model (CAP) Node.js extractor application to integrate the SAP solutions.
TL;DR Hands On Tutorial
SAP HANA Cloud is supported as the CAP standard database and is recommended for productive use. The solution uses CAP Core Data Services (CDS) for service definition and domain modeling with HANA Cloud as the data persistency layer.
As SAP HANA Cloud is available as a service within SAP BTP and is also the database on which SAP Datasphere is built, there are two possible scenarios depending on which SAP HANA Cloud instance is used for data persistence. The solution works in both scenarios, and both are discussed in detail in this blog.
The steps to implement the solution are:
1. API exploration on the SAP Business Accelerator Hub.
2. Development of the CAP Node.js extractor application on SAP BTP.
3. Data modeling in SAP Datasphere and consumption in SAP Analytics Cloud.
Solution Architecture
To kickstart the integration journey, we tap into the SAP Ariba Network Purchase Orders Buyer API, as listed on the SAP Business Accelerator Hub. Here we will identify essential header and item data, such as Vendor ID, Document Number, Supplier Name, Quantity, and Purchase Order Amount.
Ariba Network Purchase Orders Buyer API Header detail
Ariba Network ID default
The API key can be obtained from the “Show API Key” option.
In this step, we harness the capabilities of the SAP Business Technology Platform (BTP) and a Cloud Application Programming Model (CAP) Node.js application. We create a data model based on the API fields, establish a service model for OData endpoint provisioning, and develop the application logic for reading the API data, parsing it, and writing it to SAP HANA Cloud. Please refer to the blog Develop a CAP Node.js App Using SAP Business Application Studio, which discusses the steps necessary for this project.
Data model for the CAP application
Service model for the CAP application
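Based on the header and item fields identified from the API, the data and service models could be sketched as below. The entity and field names (Orders, documentNumber, vendorId, and so on) are illustrative assumptions derived from the fields mentioned above; the actual model in the sample repositories may differ.

```cds
// db/schema.cds -- illustrative data model (field names are assumptions)
namespace ariba.orders;

entity Orders {
  key documentNumber : String(50);    // purchase order document number
      vendorId       : String(50);    // vendor ID from the Ariba Network
      supplierName   : String(100);
      quantity       : Decimal(15, 2);
      amount         : Decimal(15, 2); // purchase order amount
}

// srv/service.cds -- service model exposing the entity as an OData endpoint
// using ariba.orders as db from '../db/schema';
// service CatalogService {
//   entity Orders as projection on db.Orders;
// }
```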
Implementation of the data extractor application
In this implementation, data persistence is achieved using the SAP Cloud Application Programming Model (CAP) and the cds module. Here, cds.connect.to(serviceName) connects to the CAP service (serviceName), and UPSERT.into(Entity) inserts or updates data in the specified entity (Entity). This operation persists the data into the underlying database associated with the CAP service.
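The persistence step can be sketched as follows. The commented lines show the cds calls described above as they would run inside the CAP runtime; the mapping helper is plain Node.js, and the entity and field names are assumptions carried over from the data model, not the exact names in the sample repositories.

```javascript
// Sketch of the persistence step (entity/field names are assumptions).
// Inside the CAP application, the actual write uses the cds module:
//   const cds = require('@sap/cds');
//   const db = await cds.connect.to('db');
//   await db.run(UPSERT.into('ariba.orders.Orders').entries(rows));

// Pure helper that maps the parsed Ariba API payload to entity rows.
function toOrderRows(payload) {
  return (payload.content || []).map((doc) => ({
    documentNumber: doc.documentNumber,
    vendorId: doc.vendorId,
    supplierName: doc.supplierName,
    quantity: doc.quantity,
    amount: doc.amount,
  }));
}
```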
The data synchronization process is automated using the setInterval function. The fetchData function is called every 5 seconds via setInterval, so the data is periodically fetched and compared for changes. This polling mechanism ensures that the application continuously retrieves and updates data from the specified API endpoints.
The provided implementation fetches the entire content from the API endpoints each time fetchData is called, and compares the fetched data with the previously stored data to identify changes.
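The polling mechanism described above can be sketched as follows. Here fetchPage and persist are hypothetical stand-ins for the HTTP call to the Ariba endpoint and the UPSERT into the CAP entity, and the full-content comparison mirrors the approach of this implementation.

```javascript
// Hedged sketch of the polling mechanism; fetchPage and persist are
// placeholders for the Ariba HTTP call and the CAP UPSERT, respectively.
function hasChanged(previous, latest) {
  // Full-content comparison, as in this implementation.
  return JSON.stringify(previous) !== JSON.stringify(latest);
}

let lastData = null;

async function fetchData(fetchPage, persist) {
  const latest = await fetchPage();
  if (hasChanged(lastData, latest)) {
    await persist(latest); // e.g. UPSERT into the CAP entity
    lastData = latest;
  }
}

// Poll every 5 seconds, as described above:
// setInterval(() => fetchData(realFetchPage, realPersist), 5000);
```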
Different APIs handle deltas in different ways but generally, a delta fetch can be achieved by specifying a filter by date in the request.
For example:
Adding startDate as a filter condition fetches data from the source system for a period of 31 days from the startDate. This startDate can be configured to bring in only daily, weekly, or monthly changes.
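Building such a delta request could look like the sketch below. The parameter name (startDate) follows the description above, but the base URL is a placeholder and the exact query options should be verified against the Ariba Network Purchase Orders Buyer API reference on the SAP Business Accelerator Hub.

```javascript
// Build a delta request URL by filtering on a start date.
// The startDate parameter fetches up to 31 days of data from that date.
function buildDeltaUrl(baseUrl, startDateIso) {
  const url = new URL(baseUrl);
  url.searchParams.set('startDate', startDateIso);
  return url.toString();
}

// A daily delta could start from 24 hours ago:
function dailyStartDate(now = new Date()) {
  return new Date(now.getTime() - 24 * 60 * 60 * 1000).toISOString();
}
```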
Run `cds build` to create the HANA artifacts before deployment.
Test the application
Application testing using `cds watch --profile hybrid`
OData response from the CatalogService and entity Orders.
Fiori preview of the OrderHeader service and entity Orders.
Database table created when running the application.
Database table with selected data from the Ariba REST API
Note: This is the preferred approach if the customer data needs to be persisted in an SAP HANA Cloud instance that runs in a different data center than SAP Datasphere or SAP Analytics Cloud.
After the successful deployment of the CAP application with the generated artifacts, the HANA Deployment Infrastructure (HDI) container is available for integration with SAP Datasphere.
SAP Datasphere connection with HDI Container
Create a view using tables available
Note: This is the preferred approach if the customer data can be persisted in the SAP HANA Cloud instance underlying SAP Datasphere.
In this scenario, the CAP application performs the ETL (Extract, Transform, Load) task to load the SAP Ariba data into the Open SQL Schema/HDI Container table, which is then used in the SAP Datasphere for data modeling.
Scenario B overview
The SAP Datasphere tenant and the SAP Business Technology Platform organization and space must be in the same data center (for example, eu10 or us10). The SAP Datasphere space must be provisioned for access from SAP BTP and SAP Business Application Studio.
Please refer to the following blogs and the help document for more details about the steps necessary for this.
Prepare Your HDI Project for Exchanging Data with Your Space
SAP Datasphere Space configuration
SAP BTP configuration
Updated mta.yaml for Scenario B
Read privileges required for Scenario B
The instances thus created in the BTP space look like the following:
SAP BTP objects for Scenario B
Assign new schema/HDI container to SAP Datasphere space
Create a view using the objects in the shared schema
This scenario provides a simplified landscape and removes the need to manage the release cycle of a separate SAP HANA Cloud instance. Because the underlying SAP HANA Cloud can be used together with BTP services, BTP features become available on top of this simplified integration, for example filtering or transforming the persisted data, or running ML models on it. In addition, the native integration with SAP Analytics Cloud works seamlessly with the data models exposed from SAP Datasphere.
In this step, the HDI container tables created in either scenario are used for modeling graphical views and analytic models, which are exposed for consumption in SAP Analytics Cloud.
Graphical View in SAP Datasphere
Sample report in SAP Analytics Cloud
Summing up our exploration, this blog highlights the versatility of the Cloud Application Programming Model on SAP BTP, which offers a robust platform for SAP HANA Cloud project development. Whether using a sidecar SAP HANA Cloud instance or tapping into the underlying HANA Cloud of SAP Datasphere, CAP enables a seamless and efficient data integration process. This is one of many ways to bring data into SAP Datasphere; its benefits include custom application logic for data handling (parsing, filtering, and so on) and managing the load on source systems through periodic execution of API calls. Using SAP Business Application Studio (BAS) brings additional benefits, such as native support for SAP HANA Cloud integration and application deployment, and source control via Git integration.
I hope you liked this use case showing the data integration capabilities of SAP solutions and tools. Below are the sample code repositories from this implementation. Thank you.
GitHub Repository for Scenario A Code
GitHub Repository for Scenario B Code
I would like to thank Stefan Hoffmann and Axel Meier from the HANA Database & Analytics Cross Product Management team for their valuable guidance in completing this implementation.