CameronK
Product and Topic Expert

Introduction

SAP Datasphere has introduced a new feature, 'Replication Flows'. This capability (now available with Amazon S3 as a target) lets you copy multiple tables from one source to a target in a single flow, offering a fast and seamless data management experience. For detailed insights into replication flows and their functionalities, please refer to our comprehensive guide.

In this blog, we'll provide a step-by-step tutorial on replicating data from SAP S/4HANA to Amazon S3, showcasing the practical application and efficiency of this new feature in real-world scenarios.

The steps outlined below are the same for SAP S/4HANA On-Premise and SAP S/4HANA Cloud.

Now, let's dive in. We'll walk you through each step necessary to effectively utilize 'Replication Flows' for transferring data from SAP S/4HANA to Amazon S3.

Steps:

1. To start, you will need to create a connection to Amazon S3 in your SAP Datasphere instance.

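The connection wizard will ask for the bucket details and an access key/secret key pair. Before entering them, it can help to confirm that the key pair can actually list and write to the target bucket. Below is a minimal, optional sketch in Python using boto3 (not part of SAP Datasphere itself); the key pair, region, and bucket name are placeholders for your own values.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholders: the same key pair and bucket you plan to use in the Datasphere connection.
ACCESS_KEY = "AKIA................"
SECRET_KEY = "your-secret-key"
REGION = "eu-central-1"
BUCKET = "my-datasphere-replication-target"

s3 = boto3.client(
    "s3",
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=REGION,
)

try:
    # Listing the bucket and writing a small test object roughly covers the
    # read/write access the replication flow will need on the target.
    s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)
    s3.put_object(Bucket=BUCKET, Key="datasphere-connection-test.txt", Body=b"ok")
    print("Key pair can list and write to the bucket.")
except ClientError as err:
    print(f"Check the key pair and bucket policy: {err}")
```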

2. Please ensure you have a dataset (an S3 bucket and, optionally, a folder) in Amazon S3 that you would like to replicate the tables into.
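If the bucket or folder does not exist yet, a quick way to create it is with the AWS SDK. The sketch below is a minimal example in Python with boto3; the bucket name, region, and folder prefix are placeholders, and it assumes your AWS credentials are available via the default credential chain.

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-datasphere-replication-target"   # placeholder bucket name
REGION = "eu-central-1"                       # placeholder region
PREFIX = "s4hana-replication/"                # folder you will later pick as the target container

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket only if it does not exist yet.
# (For us-east-1, omit the CreateBucketConfiguration argument.)
try:
    s3.head_bucket(Bucket=BUCKET)
except ClientError:
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )

# Write an empty "folder" marker so the prefix is selectable as a container.
s3.put_object(Bucket=BUCKET, Key=PREFIX)
print(f"Target location ready: s3://{BUCKET}/{PREFIX}")
```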

3. Make sure you have a source connection (Cloud or On-Premise). In this case, we will be using S/4HANA On-Premise. You will need to create this connection in the Connections tab in SAP Datasphere.


4. Navigate to SAP Datasphere and click on Data Builder on the left panel. Find and click the “New Replication Flow” tile.


5. Once you are in the new replication flow, click on 'Select Source Connection'.


6. Choose the source connection you want. We will be choosing SAP S/4HANA On-Premise.
7. Next, click on ‘Select Source Container’.


8. Choose CDS Views and then click Select.


9. Click 'Add Source Objects' and choose the views you want to replicate. You can choose multiple views if needed. Once you have finalized the objects, click 'Add Selection'.


10. Now, we select our target connection. We will be choosing S3 as our target. If you experience any errors during this step, please refer to the note at the end of this blog.


11. Next, we choose the target container. Recall the dataset (bucket and folder) you created in S3 in step 2; this is the container you will choose here.


12. In the middle selector, click 'Settings' and set your load type. 'Initial Only' loads all selected data once. 'Initial and Delta' means that after the initial load, the system checks the source every 60 minutes for changes (delta) and copies those changes to the target.

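With 'Initial and Delta', new files keep arriving in the target folder as changes are picked up. As a rough illustration (not part of the replication flow itself), the sketch below shows how a downstream job could poll the target prefix with Python and boto3 for objects written since its last run; the bucket, prefix, and stored timestamp are placeholders.

```python
from datetime import datetime, timezone
import boto3

BUCKET = "my-datasphere-replication-target"   # placeholder
PREFIX = "s4hana-replication/"                # placeholder target folder
# Timestamp of the previous check, e.g. persisted by your own scheduler.
last_run = datetime(2024, 2, 22, 12, 0, tzinfo=timezone.utc)

s3 = boto3.client("s3")
new_objects = []

# Page through everything under the prefix and keep objects newer than last_run.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if obj["LastModified"] > last_run:
            new_objects.append(obj["Key"])

print(f"{len(new_objects)} new file(s) since {last_run.isoformat()}")
```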

13. Once done, click the Edit Projections icon in the top toolbar to set any filters and mappings. For more information on filters and mappings, please refer here and here.


14. You can also change the write settings for your target through the settings icon next to the target connection name and container.

15. Finally, rename the replication flow to a name of your choosing in the details panel on the right. Then save, deploy, and run the replication flow using the icons in the top toolbar. You can monitor the run in the Data Integration Monitor tab on the left panel in SAP Datasphere.


16. When the replication flow is done, you should see the target tables in Amazon S3. Note that every table will have three columns added by the replication flow to allow for delta capturing: operation_flag, recordstamp, and is_deleted.

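These columns make it possible to rebuild the current state of a table from the files landed in S3. The snippet below is one possible approach in Python with boto3 and pandas, not an official SAP utility: the bucket, prefix, key column, and CSV file format are assumptions based on this example, and the exact values written to operation_flag and is_deleted depend on your source and write settings, so check them in your own files.

```python
import io
import boto3
import pandas as pd

BUCKET = "my-datasphere-replication-target"      # placeholder
PREFIX = "s4hana-replication/MyCDSView/"         # placeholder: folder of one replicated view
KEY_COLUMN = "SalesDocument"                     # placeholder: business key of that view

s3 = boto3.client("s3")

# Read all CSV part files written for this view into one DataFrame.
frames = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".csv"):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            frames.append(pd.read_csv(io.BytesIO(body), dtype=str))

data = pd.concat(frames, ignore_index=True)

# Keep only the newest record per key, then drop rows flagged as deleted.
# (Verify the actual is_deleted values in your files before relying on this filter.)
latest = data.sort_values("recordstamp").drop_duplicates(subset=[KEY_COLUMN], keep="last")
current_state = latest[latest["is_deleted"].fillna("").isin(["", "0", "false", "False"])]
print(current_state.head())
```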

Note: You may have to include the Premium Outbound Integration block in your tenant to deploy the replication flow.


 

Conclusion

Congratulations on successfully setting up a replication flow from SAP S/4HANA to Amazon S3! This integration exemplifies the power and efficiency of using SAP Datasphere's 'Replication Flow' feature for streamlined data management. Should you have any inquiries or need further assistance, feel free to leave a comment below. Your feedback and questions are invaluable to us.
Thank you for following this guide and stay tuned for more insights on leveraging SAP Datasphere for your data integration needs!

2 Comments
TuncayKaraca
Active Contributor

Hello @CameronK,

Thank you for sharing an example of using Amazon Simple Storage Service as a target. 

It's very similar to SAP Datasphere Replication Flow from S/4HANA to Azure Data Lake, isn't it? It's all about cloud storage.

Regards,
Tuncay

 

sidraz99
Explorer

Can we also push data stored in SAP Datasphere to AWS?