Technology Blog Posts by Members

When transferring data from SAP SuccessFactors to SAP Datasphere via the OData API, key challenges often arise around ensuring data completeness, accuracy, and integrity:

  1. Loss or omission of records during the transfer.
  2. Limited access to picklist attributes.

In this blog, we will dive into these challenges and offer technical solutions to ensure the data is accurately retrieved and mapped in SAP Datasphere.

 

Ensuring Complete Data Transfers from SAP SuccessFactors to Datasphere via OData API

While extracting data from SuccessFactors via the OData API, users may encounter missing records in their target table. This issue typically arises from pagination limitations in the OData API, or from inconsistencies in the extraction process when handling large data volumes. Complete data transfer is essential for accurate reporting and analysis.

Scenario

  • Source: SuccessFactors OData API.
  • Destination: SAP Datasphere table.
  • Process: Data extraction via dataflows in SAP Datasphere.

Symptoms

  • Some records are not transferred to the target table.
  • Dataflow execution completes without errors.
  • Discrepancies are identified during data validation.

Root Cause

The OData API in SuccessFactors handles large datasets using pagination. Without proper configuration, the following issues can occur:

  • Dynamic Data Changes: Data in SuccessFactors can change during the extraction process (e.g., records updated or deleted).
  • Pagination Gaps: If the API paginates data dynamically, it may skip records or retrieve incomplete pages due to changes in the underlying dataset during runtime.
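The pagination gap is easy to reproduce in a toy simulation. The sketch below is plain illustrative Python, not SuccessFactors API code: it pages through a list with offset-based paging while a record is deleted between requests, and the record that slides into the already-read offset range is silently skipped.

```python
# Toy simulation of offset-based paging over a mutating dataset.
# Illustrative only -- not SuccessFactors API code.

def page(dataset, skip, top):
    """Return one page, like an OData $skip/$top request."""
    return dataset[skip:skip + top]

data = [f"rec{i}" for i in range(6)]   # rec0 .. rec5

page1 = page(data, 0, 3)               # ['rec0', 'rec1', 'rec2']
data.remove("rec1")                    # a record is deleted mid-extraction
page2 = page(data, 3, 3)               # ['rec4', 'rec5']

# 'rec3' shifted into the already-read offset range and was never returned.
assert "rec3" not in page1 + page2
```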

Solution

Set the OData parameter paging=snapshot.
The paging=snapshot parameter ensures that the OData API captures a consistent snapshot of the dataset at the time of query execution. This prevents records from being missed while pages are retrieved.
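At the raw API level, snapshot paging is a query option on the initial request; subsequent pages are fetched by following the __next link of the OData v2 response. The sketch below shows the shape of that loop. The host, entity, and the fetch_page callable are placeholders (in practice fetch_page would issue an authenticated HTTP GET and parse the JSON body).

```python
# Hypothetical sketch of snapshot-based pagination against the SuccessFactors
# OData v2 API. Host and entity names are placeholders.

def build_initial_url(host, entity, select=None):
    """Build the first request URL with snapshot paging enabled."""
    url = f"https://{host}/odata/v2/{entity}?paging=snapshot&$format=json"
    if select:
        url += "&$select=" + ",".join(select)
    return url

def fetch_all(first_url, fetch_page):
    """Follow __next links until the snapshot is exhausted.

    fetch_page(url) -> parsed JSON body, shaped like
    {"d": {"results": [...], "__next": "<url of next page>"}}.
    """
    records = []
    url = first_url
    while url:
        data = fetch_page(url)["d"]
        records.extend(data["results"])
        url = data.get("__next")  # absent on the last page
    return records
```

Because fetch_page is injected, the loop can be exercised with stub pages before pointing it at a live tenant.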

Step-by-step guide to implement

Step 1: Access the Dataflow

  • Log in to SAP Datasphere.
  • Navigate to the Data Builder and open the Dataflow where the issue occurs.

Step 2: Modify the OData Query Parameters

  • In the Source Node of the dataflow, locate the OData Parameter for the SuccessFactors data source.


  • Click Edit OData Parameters to open the OData Parameters dialog, select the option “paging”, and enter the value “snapshot”.


Step 3: Save and Deploy the Dataflow

  • Save the changes to the dataflow.
  • Delete the data from the target table, re-run the dataflow, and monitor the logs to confirm successful data extraction.
  • Compare the source data in SuccessFactors with the data in the SAP Datasphere table to ensure no records are missing.
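The final comparison can be scripted. The helper below is a minimal, hypothetical sketch: given the source record count (e.g. from the OData $count endpoint), the target row count, and the key columns from both sides, it reports whether the counts match and which keys never arrived.

```python
# Hypothetical validation sketch: compare source vs. target after a dataflow
# run. How you obtain the counts and key lists depends on your landscape.

def find_missing_keys(source_keys, target_keys):
    """Return keys extracted from the source but absent in the target."""
    return sorted(set(source_keys) - set(target_keys))

def validate_load(source_count, target_count, source_keys, target_keys):
    """Summarize the reconciliation in one small report dict."""
    return {
        "counts_match": source_count == target_count,
        "missing_keys": find_missing_keys(source_keys, target_keys),
    }
```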

 

Integrating Picklist Attributes of SAP SuccessFactors in Datasphere

 When integrating SAP SuccessFactors with SAP Datasphere via the OData API, a common issue users face is that picklist attributes do not appear in the target table after creating a Data Flow. This can lead to incomplete or unreadable data in reports and dashboards, as fields like Job Title, Department, Business Unit, and Country might only display IDs instead of meaningful descriptions.

Root Cause

Picklist fields in SuccessFactors are exposed as navigation attributes by the SuccessFactors OData API, and by default these navigation attributes are not consumed by the OData connection in Datasphere.

Workaround

  • For source tables from OData remote connections, you can edit the depth properties of a selected source table in the OData Properties section.


  • Click Edit OData Properties to open the OData Properties dialog. You can set the depth of the source table to either 1 or 2.


The depth of an OData object refers to the level of related entities included in the response when querying the OData service. By default, the depth is set to 1, so only the properties of the requested entity are returned. You can change the depth to 2 to include a second level of related entities. Controlling the depth is useful when you want to optimize performance by limiting the amount of data returned in a single request.
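The depth setting is configured in the Datasphere dialog, but its effect on the request is conceptually similar to adding an OData $expand for the navigation properties one level down. The sketch below is illustrative only; the host, entity, and navigation-property names are placeholders.

```python
# Conceptual sketch: depth 1 vs. depth 2 expressed as OData v2 query options.
# Host, entity, and navigation-property names are placeholders.

def build_query(host, entity, nav_props=None):
    """depth 1: entity properties only; depth 2: expand one level of
    navigation properties alongside them."""
    url = f"https://{host}/odata/v2/{entity}?$format=json"
    if nav_props:  # roughly what raising the depth to 2 adds
        url += "&$expand=" + ",".join(nav_props)
    return url

# depth 1: just the entity
print(build_query("api.example.com", "EmpJob"))
# depth 2: also pull a picklist navigation attribute (placeholder name)
print(build_query("api.example.com", "EmpJob", ["jobTitleNav"]))
```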

Limitations

When the depth is set to 2, the Columns section shows the columns from the two levels and the following problems can occur:

  • The Data Preview cannot show more than one collection. The columns from the second collection cannot be previewed.
  • The Data Flow run can fail when the dataset is large. Retaining the columns from just one level and deleting the rest resolves this. To improve performance, store picklist values separately and join them in dataflows or as dimension associations.
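The "store picklist values separately and join them" pattern can be sketched in two small steps: build an id-to-label map once (e.g. from a picklist entity with its labels expanded), then resolve the ids in the fact records. All entity and field names below are assumptions to verify against your own instance's metadata.

```python
# Hedged sketch of the suggested pattern. The payload shape (picklistLabels
# with a results list, locale/label fields) is an assumption to check against
# your tenant's OData metadata.

def build_label_map(picklist_options, locale="en_US"):
    """{option id: localized label} from expanded picklist option payloads."""
    labels = {}
    for opt in picklist_options:
        for lbl in opt.get("picklistLabels", {}).get("results", []):
            if lbl.get("locale") == locale:
                labels[str(opt["id"])] = lbl.get("label")
    return labels

def resolve(records, labels, id_field, label_field):
    """Attach a readable label next to each raw picklist id."""
    out = []
    for rec in records:
        rec = dict(rec)  # copy; keep the raw id for joins
        rec[label_field] = labels.get(str(rec.get(id_field)))
        out.append(rec)
    return out
```

In Datasphere itself, the same join is better modeled once as a dimension association rather than re-done per dataflow.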

Documentation links relevant to integrating SAP SuccessFactors with SAP Datasphere:

  • SAP KBA 2735876: https://userapps.support.sap.com/sap/support/knowledge/en/2735876
  • SAP Note 3136568: https://me.sap.com/notes/0003136568
  • Acquiring, Preparing, and Modeling Data with SAP Datasphere | Help Guide (SAP Help Portal)
