In part 1 of this article series, we motivated the usefulness of event-driven data integration for near real-time analytical applications and built a basic end-to-end scenario between SAP Sales and Service Cloud Version 2 (SSCV2) and SAP Datasphere:
Event-Driven Data Integration from SAP Sales and Service Cloud V2 to SAP Datasphere (Part 1 of 2)
In this second part, we build on that scenario and extend it to more complex integration scenarios, in which the integrated data entities reference each other in different ways:
If not done already, the required artifacts (the iFlow definition and the DDL statements for HANA Cloud table creation) can be downloaded from the following GitHub repository:
https://github.com/sapstefan/SSCV2-Datasphere-Integration
We want to enhance the data model by joining the Cases data source to the Individual Customer data source. This can be established with a simple foreign key relationship. The Cases DB table already has an INDIVIDUAL_CUSTOMER_ID field, so we create another database table for Individual Customers, using the CREATE TABLE statement from the downloaded ind_customers_create_table.sql file.
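For orientation, such a statement might look roughly as follows; the column names here are assumptions for illustration, and the downloaded ind_customers_create_table.sql file remains the authoritative definition.

```sql
-- Hypothetical sketch of the structure created by ind_customers_create_table.sql;
-- column names are assumed, the file from the GitHub repository is authoritative.
CREATE TABLE INDIVIDUAL_CUSTOMERS (
    ID NVARCHAR(36) PRIMARY KEY,   -- SSCV2 UUID of the Individual Customer
    DISPLAY_ID NVARCHAR(20),
    FIRST_NAME NVARCHAR(100),
    LAST_NAME NVARCHAR(100)
);
```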
The table created by this statement should look as follows.
The mapping for Individual Customer has already been deployed with the iFlow. So, create an Individual Customer now in SSCV2:
This should trigger the Individual Customer event and, via the iFlow, create an entry in the Individual Customers table on Datasphere / HANA Cloud.
Use the Data Builder of your Datasphere tenant to create a view in your space that joins the Cases data source with the Individual Customers data source. The View Name here is "SSCV2_CASES_VIEW".
There should be a Left Join between the CASES table / data source and the INDIVIDUAL_CUSTOMERS data source / table, which maps the CASES.INDIVIDUAL_CUSTOMER_ID to INDIVIDUAL_CUSTOMERS.ID.
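The join is modeled graphically in the Data Builder; as a sketch, its SQL equivalent would look like this (the customer attribute names are assumptions for illustration):

```sql
-- SQL equivalent of the graphical Data Builder join (a sketch)
CREATE VIEW SSCV2_CASES_VIEW AS
SELECT C.*, IC.FIRST_NAME, IC.LAST_NAME
FROM CASES AS C
LEFT OUTER JOIN INDIVIDUAL_CUSTOMERS AS IC
  ON C.INDIVIDUAL_CUSTOMER_ID = IC.ID;
```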
For simple validation, the view can for example be reduced to the following fields through the projection:
Deploy the view. After a case has been created for the new customer, displaying the data of the view should produce the following result.
The next scenario of data references looks at code lists, which typically consist of code value / name / description combinations. Let's take the service category hierarchy of SSCV2 as an example. The case object / record keeps the code values, whereas the analytical view should display the code name rather than the code value. The list of categories is typically rather static, and there is no event type in SSCV2 that could be used to integrate changes between SSCV2 and Datasphere. We therefore use a simple CSV file download / upload mechanism to create a DB table and entries in Datasphere for service categories.
Download the list of service categories from SSCV2 via Settings - "Import and Export Data".
The exported CSV file can be used directly in the Datasphere HANA Cloud DB to create a table and fill in the content, using the "Import Data" function.
Verify table and content creation in the database explorer:
Now, the category code values of the Case data source can be joined with the Categories data source. Again choose a left join to map the code values to each other, and add additional fields to the view from the Categories data source - e.g. L1 Category Name, L2 Category Name.
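As a sketch, the extended join could be expressed in SQL as follows; the CATEGORIES table and its column names (CODE, L1_CATEGORY_NAME, L2_CATEGORY_NAME) are assumptions based on the CSV import, not confirmed identifiers.

```sql
-- Sketch of the code-list join; table and column names are assumed.
SELECT C.ID,
       C.SERVICE_CATEGORY_CODE,
       CAT.L1_CATEGORY_NAME,
       CAT.L2_CATEGORY_NAME
FROM CASES AS C
LEFT OUTER JOIN CATEGORIES AS CAT
  ON C.SERVICE_CATEGORY_CODE = CAT.CODE;
```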
The result view and content should look like this.
The next use case looks at referencing collections of related entities. For example, a single Case in SSCV2 can be created for multiple assigned Registered Products. We start with simple references, where "simple" means that a reference doesn't carry any further attributes.
If not done yet, let's first create the REGISTERED_PRODUCTS table via the DDL statement from the downloaded file registered_products_create_table.sql.
In this scenario, collection references can be managed by adding a field of type Array to the CASES table in Datasphere, with the following addition to the DDL statement for table creation in Datasphere HANA Cloud.
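A minimal sketch of such an array-typed column, assuming the Registered Product IDs are stored as UUID strings (the actual DDL from the repository is authoritative):

```sql
-- Hypothetical addition to the CASES table DDL: an array-typed column
-- holding the IDs of all Registered Products assigned to the Case.
CREATE TABLE CASES (
    ID NVARCHAR(36) PRIMARY KEY,
    -- ... further Case columns as created in part 1 ...
    REGISTERED_PRODUCT_IDS NVARCHAR(36) ARRAY
);
```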
A drawback of this approach is that a field of type Array is not directly available for view creation in the Data Builder. We therefore have to create a view on DB level first that joins the CASES table and the REGISTERED_PRODUCTS table, using the Array field. An example DDL statement is again available for download from the GitHub repository as file cases_rp_create_view.sql. This example creates a simple DB view that only provides CASE_ID and REGISTERED_PRODUCT_ID, to link the two entities.
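As a rough sketch, such a view could flatten the array field into one row per Case / Registered Product pair, for example via HANA's UNNEST function; the exact formulation in cases_rp_create_view.sql may differ and is the authoritative version.

```sql
-- Sketch only: flatten the array column into one row per
-- Case / Registered Product pair; see cases_rp_create_view.sql for the
-- actual statement used by the article.
CREATE VIEW CASES_RP_VIEW AS
SELECT C.ID AS CASE_ID,
       RP.REGISTERED_PRODUCT_ID
FROM CASES AS C,
     UNNEST(C.REGISTERED_PRODUCT_IDS) AS RP (REGISTERED_PRODUCT_ID);
```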
Since the iFlow as deployed in article part 1 already contains the mapping of Registered Products, the creation or change of Registered Products in SSCV2 should lead to data replication into the REGISTERED_PRODUCTS table in SAP Datasphere HANA Cloud. For Cases with assigned Registered Products, the CASES_RP_VIEW should then provide the corresponding relationship entries.
This DB-level view can now be used in the Data Builder (after import) to join additional fields from the Registered Products per Case. If a Case has more than one Registered Product assigned, the resulting analytical view will have multiple entries, as here for the example of Case ID 151.
Detailed view of this Case in SSCV2:
In the above screenshot of a case with two assigned Registered Products, one of the Registered Products (43) is flagged as the "main" Registered Product. If this information is supposed to be transferred to Datasphere, i.e. as an additional attribute on the relationship (isMain), then the approach of DB table fields of type Array doesn't work anymore. Instead, a separate relationship table is required to represent the relationship between Case and Registered Product.
Let's create such a relationship table with the additional attribute, using the DDL statement provided in file cases_registered_products_create_table.sql.
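For orientation, the relationship table could look roughly as follows; the column types are assumptions, and the downloaded file is the authoritative definition.

```sql
-- Hypothetical sketch of the relationship table with the extra IS_MAIN
-- attribute; cases_registered_products_create_table.sql is authoritative.
CREATE TABLE CASES_REGISTERED_PRODUCTS (
    CASE_ID NVARCHAR(36) NOT NULL,
    REGISTERED_PRODUCT_ID NVARCHAR(36) NOT NULL,
    IS_MAIN BOOLEAN,
    PRIMARY KEY (CASE_ID, REGISTERED_PRODUCT_ID)
);
```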
Data integration into this relationship table raises an additional challenge: the iFlow needs to be able to transfer deletions of entries in addition to inserts and updates.
A possible approach would be to look up the existing records per Case, and then send a deletion via the JDBC adapter for all entries that have not been re-sent with the current event.
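In SQL terms, that lookup-based deletion would amount to something like the following sketch, here using the Case and Registered Product IDs from the example message below:

```sql
-- Hypothetical SQL for the lookup-based alternative: remove only those
-- relationship entries of the Case that were NOT re-sent with the current event.
DELETE FROM CASES_REGISTERED_PRODUCTS
WHERE CASE_ID = '3a84ad79-a715-11ef-b33d-839ffb830493'
  AND REGISTERED_PRODUCT_ID NOT IN (
        '1a9f5be0-cd45-454b-9c80-ce4af9c32ec5',
        '447c06c7-8afd-4f3b-b3a5-f385244dd977'
  );
```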
A less complex approach (in terms of API call orchestration in the iFlow) is provided here: delete all relationship entries per Case, and then re-create all entries based on the event sent from SSCV2. An example JDBC XML message combining the DELETE and the UPSERTs would look as follows.
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <Delete>
        <dbTableName action="DELETE">
            <table>CASES_REGISTERED_PRODUCTS</table>
            <key>
                <CASE_ID>3a84ad79-a715-11ef-b33d-839ffb830493</CASE_ID>
            </key>
        </dbTableName>
    </Delete>
    <Upsert>
        <dbTableName action="UPDATE_INSERT">
            <table>CASES_REGISTERED_PRODUCTS</table>
            <key>
                <CASE_ID>3a84ad79-a715-11ef-b33d-839ffb830493</CASE_ID>
                <REGISTERED_PRODUCT_ID>1a9f5be0-cd45-454b-9c80-ce4af9c32ec5</REGISTERED_PRODUCT_ID>
            </key>
            <access>
                <CASE_ID>3a84ad79-a715-11ef-b33d-839ffb830493</CASE_ID>
                <REGISTERED_PRODUCT_ID>1a9f5be0-cd45-454b-9c80-ce4af9c32ec5</REGISTERED_PRODUCT_ID>
                <IS_MAIN>false</IS_MAIN>
            </access>
        </dbTableName>
    </Upsert>
    <Upsert>
        <dbTableName action="UPDATE_INSERT">
            <table>CASES_REGISTERED_PRODUCTS</table>
            <key>
                <CASE_ID>3a84ad79-a715-11ef-b33d-839ffb830493</CASE_ID>
                <REGISTERED_PRODUCT_ID>447c06c7-8afd-4f3b-b3a5-f385244dd977</REGISTERED_PRODUCT_ID>
            </key>
            <access>
                <CASE_ID>3a84ad79-a715-11ef-b33d-839ffb830493</CASE_ID>
                <REGISTERED_PRODUCT_ID>447c06c7-8afd-4f3b-b3a5-f385244dd977</REGISTERED_PRODUCT_ID>
                <IS_MAIN>true</IS_MAIN>
            </access>
        </dbTableName>
    </Upsert>
</root>
The XML combines three statements in one message: one DELETE that removes all existing relationship entries of the Case, and two UPSERTs that re-create the entries transferred with the current event.
The respective part of the Case message mapping of the attached iFlow looks as follows.
Processing these multiple statements in one call as a single transaction requires Batch Mode to be active on the JDBC receiver adapter in the iFlow.
With the provided example iFlow, both techniques are tested: the field of type Array and the relationship table. Entries in the relationship table are transferred as follows; note that the additional "IS_MAIN" information is provided.
And the table can be used in the analytical view, again by joining the relationship table data source with the CASES data source.