Technology Blogs by SAP



In this blog post, I will show you how to use the "SLT Connector" operator to consume up-to-date business data within SAP Data Hub and SAP Data Intelligence.

Remark: For the purpose of this scenario, SAP Data Hub and SAP Data Intelligence behave identically. For simplicity, I will refer to SAP Data Intelligence only; the procedure is exactly the same on an SAP Data Hub system.

SAP Data Intelligence offers built-in integration with SAP Landscape Transformation Replication Server (SLT), SAP's real-time replication technology for replicating data out of SAP systems. The pre-delivered SLT Connector operator within SAP Data Intelligence handles the communication with the remote SLT component on the source system and enables delta replication of tables into SAP Data Intelligence based on SLT technology.

This functionality is part of the ABAP Integration within SAP Data Intelligence. If you are not familiar with the overall concept of the ABAP Integration, please have a look at the overview blog post for ABAP Integration.



For any SAP S/4HANA system on a release above 1610, you are good to go: the remote SLT component is included in the core of your SAP S/4HANA system.

If, however, you run this scenario with an SAP Business Suite source system, you need to make sure that the non-modifying add-on DMIS 2018 SP02 (or DMIS 2011 SP17) is installed on that system.

In addition, you need to be able to establish an RFC connection from your SAP Data Intelligence system to the SAP system. Ideally, you have already created this connection via SAP Data Intelligence Connection Management. For more details on the connectivity, have a look at the following note: 2835207 - SAP Data Hub - ABAP connection type for SAP Data Hub / SAP Data Intelligence

Use Case


We received many requests from customers and internal stakeholders with a use case quite similar to the one shown in the picture below.


Flight data is stored in a custom table (ZSFLIGHT) on an SAP Business Suite system, and we would like to store it on an S3 file system. For our use case, it is important that the data in the S3 file is always up to date, reflecting any changes to the flight data in the source system.

The SAP Business Suite system has DMIS 2018 SP02 installed. This add-on includes SLT functionality such as SLT's read engine and the built-in change data capture mechanism that allows fetching deltas.

To provision the data to the S3 bucket, we will use an SAP Data Intelligence pipeline that reads the data via SLT into SAP Data Intelligence, transforms it into a compatible format, and finally writes it to S3.
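Conceptually, the three pipeline stages can be sketched in plain Python. This is a simplified stand-in for illustration only, not SAP Data Intelligence operator code; a local file path stands in for the S3 target, and the sample records are invented:

```python
import csv
import io

def records_to_csv(records):
    """Convert a list of table records (lists of fields) into a CSV string,
    similar in spirit to what the ABAP Converter operator does."""
    buf = io.StringIO()
    csv.writer(buf, lineterminator="\n").writerows(records)
    return buf.getvalue()

def append_to_target(path, csv_chunk):
    """Append a CSV chunk to the target file, similar in spirit to the
    Write File operator in append mode (local path instead of S3)."""
    with open(path, "a", newline="") as f:
        f.write(csv_chunk)

# Illustrative flight records, roughly as a reader stage might hand them over.
flights = [["AA", 17, "2023-05-01"], ["LH", 400, "2023-05-02"]]
append_to_target("sflight.csv", records_to_csv(flights))
```

In the real pipeline, each of these steps is a separate operator and the chunks flow between them continuously rather than in a single call.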



Prepare the source system (ABAP system)


First of all, we log on to the SAP Business Suite system to prepare SLT. Before our SAP Data Intelligence pipeline can communicate with SLT, we need an SLT configuration in place (think of it as a project entity inside SLT, essentially a combination of a source system connection and a target system connection).

  1. Go to the SLT Cockpit by entering transaction code LTRC in the command field. In this environment you can find details of existing SLT data replications and also create, monitor, and execute additional ones.

  2. Click "New" to create a new SLT configuration.

  3. Provide an SLT configuration name, for instance "SLT_DEMO", and click "Next".

  4. Specify the source system connection; in our case the RFC connection is "None" (as we want to load data out of the same system that SLT is running on). Click "Next".

  5. Specify the target system connection to SAP Data Hub or SAP Data Intelligence: choose the option "Others" and specify "SAP Data Hub / SAP Data Intelligence".

  6. Define the SLT job settings. If you plan just a simple test of replicating a single table to SAP Data Intelligence, it is fine to provide one job each for "Data Transfer Jobs" and "Calculation Jobs".

  7. Click "Next" and then "Create".

  8. Note down the generated Mass Transfer ID. This ID uniquely identifies the SLT configuration and is required later when configuring the SLT Connector operator.


Implement the data pipeline (SAP Data Intelligence)

Having created the SLT configuration, we are good to start building our pipeline in SAP Data Intelligence.

  1. Open your SAP Data Intelligence Modeler and click on the "+" to create a new pipeline.

  2. Make sure that all categories are selected for the operator repository (in particular, we need the ABAP operators category).

  3. Drag and drop the SLT Connector operator onto your workspace. If you can't find it, use the search functionality.

  4. Now configure the SLT Connector operator: provide the Mass Transfer ID of our SLT configuration, the table we would like to replicate, and the connection to the ABAP system. Ideally, the connection has already been created in the central Connection Management; if so, we can simply reuse it. If not, we can also specify it manually.

  5. Drag and drop the ABAP Converter operator onto the workspace. It is required to transform the table records coming from the SLT Connector operator into a standard string format (JSON, CSV, or XML).

  6. To configure the ABAP Converter, specify the same ABAP connection as before and define the output format; in our case, CSV.

  7. Now drag and drop the Write File operator. It will write the flight table records to the S3 file system.

  8. The Write File operator needs the following configuration values.

  9. Connect the three operators into a pipeline and save it. Note that the SLT Connector currently offers two outports, "outRecord" and "outTable". The outRecord outport passes the data record by record, whereas the outTable outport hands the data over in bulk (one RFC call transfers a batch of records at once). Typically we use the outTable outport, as it is faster.
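Why the bulk outport is typically faster can be illustrated with a small back-of-the-envelope model in plain Python. The numbers are purely illustrative, not actual RFC timings: each call pays a fixed overhead, so batching amortizes it across many records.

```python
import math

def transfer_cost(num_records, batch_size,
                  per_call_overhead=1.0, per_record_cost=0.01):
    """Simulated total cost of transferring records in batches: each call
    pays a fixed overhead plus a small per-record cost (made-up units)."""
    calls = math.ceil(num_records / batch_size)
    return calls * per_call_overhead + num_records * per_record_cost

# outRecord-style: one call per record vs. outTable-style: one call per batch.
record_by_record = transfer_cost(1000, batch_size=1)    # 1000 calls
bulk = transfer_cost(1000, batch_size=200)              # 5 calls
print(record_by_record, bulk)  # 1010.0 15.0
```

With these illustrative numbers, the batched transfer is two orders of magnitude cheaper, which matches the intuition behind preferring the outTable outport.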

Execute the data transfer

  1. Before starting the actual pipeline, let's take a look at our SFLIGHT table in the ABAP system. We can check the data via transaction SE16: we see 13 records. Since we did not implement any filtering on the way to SAP Data Intelligence, we expect exactly the same records to end up in our S3 file.

  2. Let's also take a look at our S3 bucket. To browse the bucket for our file, we use the MinIO Browser. At the moment it looks like this:

  3. A file sflight.csv for our flight data has not been created yet (the other file, rating.csv, can be ignored).

  4. Now start the execution of the pipeline.

  5. Once the pipeline is running, we can see in the SLT Cockpit that the table replication is being scheduled.

  6. Looking at the MinIO Browser, we see that the file has been created right away.

  7. Let's download the file to verify the result.

  8. As we are also interested in delta data, the pipeline runs constantly: whenever the source table changes, the delta immediately arrives in the S3 file. Now we will make changes to the source data to verify the delta replication as well. We delete one record and update another via SE16 in the ABAP system.

  9. Checking the file again in the MinIO Browser, we can see from the timestamp in the "Last Modified" column that the file has already changed.

  10. Now let's open the file. We can see that two additional records have been appended, one for the delete and one for the update operation. At the end of each record, we can also see whether the delta record results from an insert, update, or delete operation (note the D and U at the end). This is pretty cool, as it allows us to react differently to each operation type. We might face scenarios where we are not interested in replicating deletes, but only updates and inserts. For such scenarios, we could easily extend the pipeline with an additional operator that filters out certain records.
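The logic of such a filter can be sketched in plain Python. This is an illustrative stand-in, not SAP Data Intelligence operator code; the trailing-flag position and the flag values are assumptions based on the CSV output described above:

```python
import csv
import io

def drop_deletes(csv_text):
    """Keep only rows whose trailing operation flag is not 'D' (delete).
    Assumption: the flag ('I', 'U', 'D', or empty for the initial load)
    is the last field of each record."""
    rows = csv.reader(io.StringIO(csv_text))
    kept = [row for row in rows if row and row[-1] != "D"]
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(kept)
    return out.getvalue()

sample = ("AA,0017,2023-05-01,\n"    # initial load, no flag -> keep
          "AA,0017,2023-05-02,U\n"   # update -> keep
          "LH,0400,2023-05-01,D\n")  # delete -> drop
print(drop_deletes(sample))
```

In the pipeline, this logic would sit between the ABAP Converter and the Write File operator, so that delete records never reach the S3 file.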


Thank you for reading this blog post. Feel free to try it out on your own and share your feedback with us.