How can an ABAP CDS view be consumed in SAP Data Hub?


Introduction


We enhanced our CDC mechanism and now have the ability to detect and read/replicate ABAP CDS views to make the data available in an SAP Data Hub pipeline. Once the business data is available in a pipeline, you can leverage all capabilities of the operators, such as moving the data to a file storage, a messaging queue, or a self-developed operator. Or all of them at the same time with a multiplexer in between. One extraction, multiple targets.

If you are not familiar with the overall concept of ABAP Integration, please have a look at the overview blog for ABAP Integration.

Prerequisites


As you can imagine, new technologies are based on new code :P. We shipped our new CDC mechanism with the new ABAP Pipeline Engine, which was made available with SAP S/4HANA 1909. This is the minimum on-premise release; there is no downport planned or even possible. For the cloud, we aim to make it available with SAP S/4HANA 1911 cloud edition.

SAP Data Hub has to be on version 2.7 or SAP Data Intelligence on 1909.

In addition, you need to be able to establish an RFC connection from your SAP Data Hub system to the SAP system. Ideally, you have already created this connection via the SAP Data Hub Connection Management. For more details about the connectivity, have a look at the following note: 2835207 - SAP Data Hub - ABAP connection type for SAP Data Hub / Data Intelligence

Use Case


Getting business data into SAP Data Hub and consuming it there, linking it to big data, or writing it to a target are just a few use cases to mention. Very often I see requests to move the data to cheap storage or to a messaging system like Kafka. In this blog I want to explain the most frequently asked scenario: "How can I move data to a file storage, SAP HANA, or SAP BW/4HANA?"

 


In the SAP S/4HANA source system (left) you will find certain artifacts:

  • ABAP Pipeline Engine: This is the environment where ABAP operators are executed

  • CDC Engine: Internal framework that allows delta detection and movement. It is based on "SLT technology" and was improved to work with ABAP CDS views. In a nutshell, we again use database triggers and logging tables. For the ABAP CDS approach, the system automatically creates triggers and logging tables for all application tables related to an ABAP CDS view. With the new CDC engine, the 1:4 limitation known from SLT is also removed. The CDC engine is automatically triggered by the ABAP CDS Reader operator.


In the SAP Data Hub system the pipeline is modeled in the following way:

  • ABAP CDS Reader: This operator will call into the source and start the replication for the defined ABAP CDS view.

  • ABAP Converter: Within this operator, the data in an internal ABAP format is converted to a string. This allows other operators to use the data.

  • Write File: Standard operator to write data to a target. This operator could be replaced in other scenarios, for example with a Kafka Producer operator to feed a messaging queue. Other operators like an SAP HANA writer or any SAP application producer (e.g. to write to BW/4HANA) can also be used here.


And action - how to implement it in the system


For the example, we used the well-known "FLIGHT" data model and created a custom ABAP CDS view (yes - custom ABAP CDS views are supported ;)).

The view consists of the table SCARR (which holds carrier information) and a custom table ZRATING_01 that holds rating information for the carriers. Both tables are joined, and the result should be replicated afterwards.

ABAP CDS view in the source


We want to replicate the ABAP CDS view Z_CDS_RATING_01, which can be viewed via the ABAP Development Tools. See the screenshot below.



As you can see, we just go for a simple join, but annotations are used to activate the extraction and the delta capturing. We will explain this in more detail in an upcoming blog.
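
For orientation, here is a minimal sketch of what such a view definition can look like. The join and the view names follow the example, but the ZRATING_01 field names and the exact annotation values are assumptions for illustration; the actual definition is the one shown in the screenshot above.

  @AbapCatalog.sqlViewName: 'Z_SQL_RATING_01'
  @AccessControl.authorizationCheck: #NOT_REQUIRED
  @EndUserText.label: 'Carrier ratings'
  // enable extraction and trigger-based delta (CDC) for this view
  @Analytics.dataExtraction.enabled: true
  @Analytics.dataExtraction.delta.changeDataCapture.automatic: true
  define view Z_CDS_RATING_01
    as select from scarr
      inner join zrating_01 as rating
        on scarr.carrid = rating.carrid
  {
    key scarr.carrid,
        scarr.carrname,
        rating.rat_amount  // assumed field holding the carrier rating
  }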

The representation in the source system is the SQL view "Z_SQL_RATING_01", which you can see in the screenshot below.





The existing records look like this. We have three records with the following values.



Note: If the screenshots are too small, you can right-click and show them in full size in a new browser tab.

Model the pipeline in the SAP Data Hub Modeler


All operators can easily be dragged and dropped into the pipeline. We will use the ABAP CDS Reader for the load and replication from an ABAP CDS view. The ABAP Converter operator will be used to translate the internal ABAP format into a usable string (JSON, CSV, XML). Afterwards, the Write File operator will create a new file in S3. The result should look like this.



Let us have a detailed look at the operators used.

ABAP CDS Reader


This operator will ensure that the CDC engine is called correctly. Triggers and logging tables will be created in the source system.



First, you have to specify the ABAP Connection. We re-used a connection called ABAP_RFC, which was specified in the Connection Management.

In the field ABAP CDS Name you should specify the name of your view, in this case Z_CDS_RATING_01.

With the Transfer Mode you can define how the data should be consumed.


  • I - Initial Load only




  • R - Replication of delta information (including initial load)




  • D - Replication of delta information only (no initial load)




You will find the full documentation here.

 

ABAP Converter


To move the data directly into an existing operator like the Write File operator, the data has to be converted to a string. This can easily be done with the ABAP Converter operator. The conversion is done in the ABAP Pipeline Engine in the source, and only the string is moved to the pipeline on SAP Data Hub.



First, you have to specify the ABAP Connection. We re-used a connection called ABAP_RFC, which was specified in the Connection Management.

Afterwards, you select the format. This can be CSV, XML, or JSON.
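
As a rough illustration of what the converter hands over when CSV is selected: each record arrives in the pipeline as one comma-separated line, roughly like the lines below. The column order and the values are purely hypothetical and only meant to show the idea.

  AA,American Airlines,4711
  LH,Lufthansa,1234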

You will find the full documentation here.

 

Write File


We use the standard Write File operator to create a new file on S3. Every update after the initial load should be appended to this file.



First select your preferred Service. In our case, we want to write to S3. Afterwards you need to select the Connection. The connection was already configured in the Connection Management and was named S3.

The location of the file should be:

  • Bucket: bucket1

  • Path: abap/ta01/cds.csv

  • Mode: append


You will find the full documentation here.

Having a look into the folders on S3 will show you the following:



As you can see, there are already other files stored within the same folder on our S3 bucket.

Note: This is just an example with MinIO to illustrate this better.

 

Get the data flowing


The pipeline is now ready to start and the execute button can be hit.



The pipeline is now running, and we expect the file to be created on S3.



The new file cds.csv was created and the initial load was performed. After downloading, the file looks like the following.



These are the same values we have in the SAP S/4HANA source system (see above ;))

 

What does a delta replication look like?


Triggers and logging tables were automatically created when the pipeline was created. This means that the CDC mechanism is activated and any change will automatically be transferred. We will now manually modify a record in the source system in one of the joined tables.



In this example, we just change the RAT AMOUNT to 5000 and save the record. The system will automatically detect this and send it via the pipeline to S3. As we configured the Write File operator to append, we will see an additional record in the cds.csv file.



The updated record was appended, and the information about the operation was added at the end of the record, in this case a U for update. For an insert (new record) or a delete of an existing record, the operation flag would be an I or a D.
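
To make this concrete, the row appended to cds.csv might look roughly like the line below. The carrier columns are hypothetical placeholders; only the changed rating value (5000) and the trailing operation flag correspond to the example described above.

  AA,American Airlines,5000,U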

Thank you for reading this blog. Feel free to try it on your own and share your feedback with us.

BR from WDF, britta.jochum and Tobias

 