"Operational Data Provisioning supports extraction and replication scenarios for various target applications and supports delta mechanisms in these scenarios. In case of a delta procedure, the data from a source (the so called ODP Provider) is automatically written to a delta queue (the Operational Delta Queue – ODQ) using an update process or passed to the delta queue using an extractor interface. The target applications (referred to as ODQ ‘subscribers’ or more generally “ODP Consumers”) retrieve the data from the delta queue and continue processing the data."
(Source: Operational Data Provisioning (ODP) FAQ)
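To make the provider/subscriber roles the FAQ describes more tangible, here is a minimal conceptual sketch in Python. It only mimics the pattern; the queue and the record values are purely illustrative stand-ins, since a real ODQ lives inside the SAP system and is accessed through the ODP API.

```python
from collections import deque

# Illustrative stand-in for the Operational Delta Queue (ODQ); a real ODQ
# lives inside the SAP system and is accessed through the ODP API.
odq = deque()

def provider_write_delta(records):
    """The ODP provider (update process or extractor) appends changed records."""
    odq.extend(records)

def subscriber_fetch():
    """An ODP consumer ("subscriber") drains the queue and processes the delta."""
    while odq:
        record = odq.popleft()
        print("consumed delta record:", record)

# Hypothetical delta of one changed EPM demo product record.
provider_write_delta([{"PRODUCT_ID": "HT-1000", "PRICE": "956.00"}])
subscriber_fetch()
```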
Picture: S-API and ABAP CDS based Extraction
The first step is to create an SAP ECC connection in SAP Data Hub Connection Management.
An important parameter in the connection configuration is the ODP context (for example, SAPI for classic S-API extractors or ABAP_CDS for CDS-based extraction).
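As an illustration, the handful of connection properties that matter here could be captured like this; host, client, and user are placeholders, and only the ODP context values correspond to real contexts (SAPI for classic extractors, ABAP_CDS for CDS views):

```python
# Hypothetical sketch of the ECC connection settings; all values are placeholders.
ecc_connection = {
    "id": "ECC_DEMO",           # connection id shown in Connection Management
    "host": "ecc.example.com",  # application server of the ECC system
    "client": "100",
    "user": "DEMO_USER",
    "odp_context": "SAPI",      # "SAPI" = classic extractors, "ABAP_CDS" = CDS views
}
```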
Picture: SAP Data Hub Connection Management
The SAP Data Hub Metadata Explorer enables data engineers to work with many different data sources. In practice, a data engineer or data scientist can browse and understand SAP data sources without deep SAP background knowledge and without having to install dedicated SAP frontends. Vice versa, an SAP user can conveniently browse data sources such as HDFS or ADL.
Browsing SAP ODP data sources on an SAP ECC system:
Picture: SAP Data Hub Metadata Explorer - ECC Connection
Preview result of an SAP ODP extractor (EPM demo: Product):
Picture: SAP Data Hub Metadata Explorer - Data Preview
After finding and previewing the appropriate data source, the data engineer can start building the SAP Data Hub pipeline.
The pipeline for writing from ODP to Kafka consists of the following main components:
- Workflow trigger / terminator
- Flowagent CSV Producer
- SAP BusinessSuite ODP Object Consumer
- Wiretap (console out)
- Kafka Producer
Picture: Data Hub Pipeline ODP2Kafka
The data from the ODP extractor is written to Kafka under the topic "FROM_ERP".
Picture: Kafka Producer Topic "FROM_ERP"
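Outside of Data Hub, the effect of the Kafka Producer operator can be approximated in a few lines of Python. This is only a sketch: it assumes a kafka-python client and a broker on localhost:9092, and the CSV row is an invented EPM-style product record, not actual extractor output.

```python
from kafka import KafkaProducer  # pip install kafka-python

# Assumed broker address; in the pipeline this is configured on the operator.
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# The Flowagent CSV Producer hands the ODP records to Kafka as CSV text;
# this row is an illustrative stand-in.
csv_row = "HT-1000,Notebook Basic 15,956.00"
producer.send("FROM_ERP", value=csv_row.encode("utf-8"))
producer.flush()  # make sure the record actually leaves the client buffer
```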
For demo purposes, a second pipeline was built as well. It is used to read the data back from the Kafka broker and display it in the Wiretap console out.
Picture: Data Hub Pipeline for reading from the Apache Kafka Broker
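The plain-Python counterpart of this second pipeline is a simple consumer on the same topic; again a sketch, with broker address and consumer group as assumptions:

```python
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "FROM_ERP",
    bootstrap_servers="localhost:9092",  # assumed broker address
    group_id="odp-demo",                 # assumed consumer group
    auto_offset_reset="earliest",        # also read records produced earlier
)

# Print each record as text, which is what the Wiretap operator shows.
for message in consumer:
    print(message.value.decode("utf-8"))
```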
After successful implementation, the Data Hub pipelines are started and the first pipeline writes the ECC data to the broker.
The second pipeline then fetches the data from the broker and displays it in the Wiretap as a text representation.