Summary:
This article explains how to create an Express Data Loader (XDL) template to purge data from the stage tables in SAP SuccessFactors Incentive Management, formerly known as SAP Commissions. XDL was formerly known as Commissions Data Loader (CDL), and many of the links and references in this article still use the names SAP Commissions, Commissions Data Loader, or CDL.

Details:

When using the Express Data Loader (XDL) to load data into Incentive Management, all records are loaded into a series of staging tables before being imported into the application. As more data is imported into the environment over time, import performance can degrade due to the growing volume of stage data. Once stage data has been imported, there is usually no need to keep it in the stage tables, so it's a good idea to purge the stage tables regularly. This can be done in the user interface, but it's also possible to use XDL to perform the purge.

 

There are two advantages to cleaning up stage data from the database:

1) Increased performance in the XDL import process

2) Reduced storage

 

In prior versions of Incentive Management, the ODPU template was used to perform purges. This template has been replaced with the PIPELINE template, which is used to perform all the Pipeline processes.

The PIPELINE template can be downloaded from the CDL Templates link in the Incentive Management online help.

Format and Naming


Express Data Loader uses the same file naming conventions as its predecessor, ODI. File names are very specific and must conform to the following pattern:

cust_PIPELINE_<DEV or TST or PRD>_<yyyymmdd>_<hhmiss>_<xxxxxx>.<file-suffix>

Table 1 below shows a breakdown of the elements of the file name:

Element             Description
cust                The four-digit tenant identifier assigned to each customer
DEV or TST or PRD   The type of target environment
yyyymmdd            The date of the transmission, for example 20210701
hhmiss              The time stamp, for example 1:42:57 PM is "134257"
xxxxxx              A unique content tag used to identify the batch, such as TRANSACT01

For example, for a tenant ID "5555" in the production environment, the file name might be:

5555_PIPELINE_PRD_20210701_134257_MyPurge.txt
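
As an illustration, the following Python sketch builds a file name that follows this convention. It is not part of XDL; the tenant ID, environment, and content tag shown are placeholders that you would replace with your own values.

from datetime import datetime

def build_xdl_filename(tenant_id: str, environment: str, content_tag: str,
                       suffix: str = "txt") -> str:
    """Assemble an XDL PIPELINE file name following the
    <tenant>_PIPELINE_<DEV|TST|PRD>_<yyyymmdd>_<hhmiss>_<tag>.<suffix> convention."""
    if environment not in ("DEV", "TST", "PRD"):
        raise ValueError("environment must be DEV, TST, or PRD")
    now = datetime.now()
    return (f"{tenant_id}_PIPELINE_{environment}_"
            f"{now:%Y%m%d}_{now:%H%M%S}_{content_tag}.{suffix}")

# Example: a purge batch for tenant 5555 in the production environment
print(build_xdl_filename("5555", "PRD", "MyPurge"))
# e.g. 5555_PIPELINE_PRD_20210701_134257_MyPurge.txt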

Contents of the PIPELINE template for a Purge batch


Table 2 below shows how the PIPELINE template should be populated for a purge.

COLUMN               VALUE
Command              PipelineRun
StageTypeName        Purge
ProcessingUnitName   PU_Americas
PeriodName           {Leave Blank}
RunMode              {Leave Blank}
BatchName            Complete file name including .txt extension. See the example after Table 1.
Module               Refer to the following table
StageTables          Refer to the following table
RunStats             false
GenerateODSReports   false
Defer                false
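
As a sketch only, the snippet below shows how the values from Table 2 could be written out programmatically, assuming a comma-delimited layout with a header row in the column order of Table 2. The PIPELINE template downloaded from the online help remains the authoritative source for the exact column order, delimiter, and quoting rules; the processing unit name PU_Americas and the batch name are placeholders.

# Assemble a purge record using the values from Table 2.
DELIMITER = ","  # assumption; check the downloaded PIPELINE template

batch_name = "5555_PIPELINE_PRD_20210701_134257_MyPurge.txt"

purge_row = {
    "Command": "PipelineRun",
    "StageTypeName": "Purge",
    "ProcessingUnitName": "PU_Americas",    # replace with your processing unit
    "PeriodName": "",                       # leave blank
    "RunMode": "",                          # leave blank
    "BatchName": batch_name,                # complete file name incl. .txt
    "Module": "TransactionalData",
    "StageTables": '["TransactionAndCredit"]',
    "RunStats": "false",                    # Boolean fields in lower case
    "GenerateODSReports": "false",
    "Defer": "false",
}

with open(batch_name, "w", encoding="utf-8") as f:
    f.write(DELIMITER.join(purge_row.keys()) + "\n")    # header row
    f.write(DELIMITER.join(purge_row.values()) + "\n")  # purge record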

Modules and Stage Tables


Table 3 below contains a complete list of modules and stage tables.

Modules              Stage Tables
TransactionalData    ["TransactionAndCredit"]
                     ["Deposit"]
OrganizationData     ["Participant"]
                     ["Position"]
                     ["Title"]
                     ["Relationship"]
ClassificationData   ["Category"]
                     ["ClassifierAssignments"]
                     ["Customer"]
                     ["Product"]
                     ["PostalCode"]
                     ["GenericClassifier"]
PlanData             ["FixedValue"]
                     ["VariableAssignements"]
                     ["LookupTable"]
                     ["Quota"]


Below are some facts about the behavior of the Purge process.

  • The process purges records batch-by-batch.

  • Batches are never partially purged. Either the whole batch is purged or none of it is purged.

  • Any batch with unprocessed records will not be purged.

  • Archived records will be in the EXT schema with a _HIST suffix in the table name, for example XXXXEXT.CS_STAGEPARTICIPANT_HIST.


Comments:


All Boolean fields in the file should be in lower case.

If there is a large volume of data in the system, the purge process can take a significant amount of time. Benchmark the process in the lower environments (DEV or TST) to get an idea of how long it will take to clean up the data in production.

It is best to run this process on a regular basis to avoid excessive buildup of stage data.