This post shares a practical integration case on SuccessFactors MDF Payroll Result (Pay-slip) Integration. It covers how to design MDF objects to store pay-slip data from third-party payroll systems, the integration architecture, and global payroll result analysis design, along with key considerations and limitations. While it appears simple, complexity arises with large data volumes, multiple payroll systems, and diverse data structures.
In this case, the customer is a multinational enterprise using SuccessFactors as a global HRIS, but with different payroll systems per region. As payroll data resides in local systems, the headquarters lacked consolidated insight into workforce costs, limiting analytics and planning. Therefore, centralizing payroll data became essential.
Global HRIS: SAP SuccessFactors (Employee Master Data Source)
Third-party payroll systems:
The flow is bidirectional — employee master data moves from SAP SuccessFactors to payroll systems, and pay-slip data flows back into SuccessFactors.
Integration Center / SFTP acts as the middleware.
Pay results land in MDF objects (PayResultParent, PayResultItem) and feed into Story / People Analytics for HQ insights.
The table below summarizes the requirements, reasoning, implementation approach, and complexity level.
| Requirement | Category | Reasoning | How to Achieve | Difficulty (1–5) | Notes |
|---|---|---|---|---|---|
| UI must be user-friendly; HR and employees can view pay slips in Employee Profile | MDF Design | Multiple records per month, with history and audit trail | Use an effective-dated parent–child MDF entity (version history = completed) | 2 | |
| Non-eligible allowance/pay result items must be deleted in the relevant month | Integration | Prevent inheritance of obsolete records | Use the API deletion method: purgeType=record for the parent MDF; purgeType=full for the child MDF in Integration Center | 5 | Avoid pushing zero-amount or invalid records; they create excessive invalid data. |
| Analysis should support both local and global pay result items | MDF Design | Mapping between local and global pay result items is needed | Use MDF (not Picklist); create custom attributes for analysis | 3 | |
| Pay amounts should support analysis by month, BU, department, job grade, etc. | MDF Design | Requires linking MDF with employee master data | Parent MDF externalCode field type = User | 2 | |
| Integration should support re-updates (back pay or corrections) | MDF Design | externalCode should be meaningful for referencing | Use the local payroll record ID or a rule-generated code (e.g., userId+effectiveDate+allowance) | 4 | SuccessFactors External Code Design Best Practice |
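The externalCode rule in the last row can be sketched as follows. This is a hypothetical helper, not SuccessFactors configuration: the function name, the underscore separator, and the date format are assumptions for illustration.

```python
# Hypothetical sketch: build a deterministic externalCode for a pay result
# record. Prefer the local payroll system's record ID; otherwise concatenate
# userId + allowanceCode + payrollDate as suggested in the table above.
from datetime import date
from typing import Optional

def build_external_code(user_id: str, allowance_code: str, pay_date: date,
                        local_record_id: Optional[str] = None) -> str:
    if local_record_id:
        # Cross-system lookup stays possible via the local payroll record ID
        return local_record_id
    # Deterministic fallback: a re-sent correction for the same employee,
    # allowance, and pay date produces the same code, so the upsert
    # overwrites the earlier record instead of duplicating it.
    return f"{user_id}_{allowance_code}_{pay_date:%Y%m%d}"

code = build_external_code("Berg01", "OT", date(2025, 7, 25))
# code == "Berg01_OT_20250725"
```

Determinism is what makes re-updates (back pay, corrections) idempotent: the same logical record always maps to the same externalCode.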
| Table | Field Name | Field Type | Notes |
|---|---|---|---|
| cust_PayResultParent | applicant | User | Automatically populated |
| cust_PayResultParent | effectiveDate | Date | |
| cust_PayResultItem | externalCode | String | Use local payroll record ID for cross-system lookup; if null, auto-generate by concatenating UserId, AllowanceCode, PayrollDate. |
| cust_PayResultItem | currency | GO | Use standard SF currency MDF |
| cust_PayResultItem | localPayResultItem | GO | |
| cust_PayResultItem | amount | Decimal | Use decimal for payroll amounts |
| cust_PayResultItem | payEffectiveDate | Date | Supports back pay scenarios (e.g., overtime in July paid in September). |
| cust_PayResultItem | amountConversion | Decimal | Auto-calculated by business rule referencing exchange rate |
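The amountConversion calculation is done by a business rule in SuccessFactors; the sketch below only illustrates the arithmetic it performs. The rate table, its key format, and the rate values are made-up assumptions.

```python
# Sketch of the conversion the business rule performs: convert a local pay
# amount into the group reporting currency via a monthly exchange-rate table.
# RATES and its values are illustrative placeholders, not real rates.
from decimal import Decimal, ROUND_HALF_UP

# Assumed lookup: (local currency, pay month) -> rate to reporting currency
RATES = {
    ("SGD", "2025-07"): Decimal("0.74"),
    ("USD", "2025-07"): Decimal("1.00"),
}

def amount_conversion(amount: Decimal, currency: str, pay_month: str) -> Decimal:
    rate = RATES[(currency, pay_month)]
    # Round to 2 decimal places, matching a Decimal MDF field definition
    return (amount * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

converted = amount_conversion(Decimal("1000"), "SGD", "2025-07")
# converted == Decimal("740.00")
```

Using Decimal rather than float avoids binary rounding drift, which matters when converted amounts are summed across thousands of pay items in Story reports.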
Not Recommended Design: using a single flat MDF for a multi-country company with a large number of actual pay components is neither scalable nor analysis-friendly.
| Table | Field Name | Type | Notes |
|---|---|---|---|
| cust_PayResult | applicant | User | Automatically populated |
| cust_PayResult | effectiveDate | Date | |
| cust_PayResult | baseSalaryAmount | Decimal | |
| cust_PayResult | carAllowanceAmount | Decimal | Too many predefined fields required |
| cust_PayResult | currency | GO | |
Payroll data can be integrated via API or SFTP.
SFTP is generally recommended because it is auditable (via stored flat files), simple in structure, and suitable for low-frequency data transfers (weekly or monthly).
If API integration is required for security or other reasons, an API-based method is also covered here. In this method, avoid inserting parent and child MDF records in one request. Use two separate upsert requests, one for the parent and another for the child, for clarity, completeness, and broader parameter support. Below is an example of a single nested payload request:
```json
{
  "__metadata": {
    "uri": "https://api4preview.sapsf.com/odata/v2/PaymentInformationV3",
    "type": "SFOData.PaymentInformationV3"
  },
  "effectiveStartDate": "/Date(1753432694289)/",
  "worker": "Berg01",
  "toPaymentInformationDetailV3": {
    "results": [
      {
        "__metadata": {
          "uri": "https://api4preview.sapsf.com/odata/v2/PaymentInformationDetailV3",
          "type": "SFOData.PaymentInformationDetailV3"
        },
        "PaymentInformationV3_effectiveStartDate": "/Date(1753432694289)/",
        "PaymentInformationV3_worker": "Berg01",
        "paySequence": "0",
        "cust_bank": "BBVA"
      },
      {
        "__metadata": {
          "uri": "https://api4preview.sapsf.com/odata/v2/PaymentInformationDetailV3",
          "type": "SFOData.PaymentInformationDetailV3"
        },
        "PaymentInformationV3_effectiveStartDate": "/Date(1753432694289)/",
        "PaymentInformationV3_worker": "Berg01",
        "paySequence": "0",
        "cust_bank": "CHNN"
      }
    ]
  }
}
```

Parent–child MDF integration cannot be handled in a single job. The table below compares purge types across operating environments for reference:
| Import and Export Data | Integration Center | OData API | Result |
|---|---|---|---|
| Parent Object = Full Purge | Parent–Child payload in one job (Full Purge) | Parent–Child payload in one API call (Full Purge) | All existing effective records cleared and the payload data inserted |
| Parent = Incremental, Child = Incremental | Parent = Incremental, Child = Full | purgeType=record (parent MDF upsert API endpoint); purgeType=full (child MDF upsert API endpoint) | Old records cleared in the new effective record; new data inserted in the new effective record |
| Parent = Incremental, Child = Incremental | Parent = Incremental, Child = Incremental | purgeType=incremental | Previous records inherited in the new effective record |
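The recommended two-request pattern (parent with purgeType=record, child with purgeType=full) can be sketched as below. This is an illustrative helper, not a full client: it only builds the request descriptors, and the hostname, entity names, and field values are placeholders taken from the examples above. In practice you would send each descriptor with an HTTP client against the OData upsert endpoint.

```python
# Hedged sketch: two separate OData upsert calls instead of one nested
# payload. build_upsert() only assembles the URL, query parameter, and body;
# sending the request (with authentication) is out of scope here.
BASE = "https://api4preview.sapsf.com/odata/v2"  # placeholder tenant URL

def build_upsert(purge_type: str, payload: dict) -> dict:
    """Return the method, URL, purgeType parameter, and body for one upsert."""
    return {
        "method": "POST",
        "url": f"{BASE}/upsert",
        "params": {"purgeType": purge_type},
        "json": payload,
    }

# 1) Parent job: purgeType=record versions the effective-dated parent record
parent_call = build_upsert("record", {
    "__metadata": {"uri": "cust_PayResultParent"},
    "externalCode": "Berg01",
    "effectiveStartDate": "/Date(1753432694289)/",
})

# 2) Child job: purgeType=full clears obsolete items, so non-eligible
#    allowances do not carry over into the new effective record
child_call = build_upsert("full", {
    "__metadata": {"uri": "cust_PayResultItem"},
    "cust_PayResultParent_externalCode": "Berg01",
    "externalCode": "Berg01_OT_20250725",  # local record ID or generated code
    "amount": "1500.00",
})
```

Keeping the parent and child in separate calls mirrors the two-job setup described below and makes each call's purge behavior explicit.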
Although two jobs are required, the third-party system can still produce a single file: overlapping fields exist between the parent and child MDFs, and each SuccessFactors integration job can consume the same file.
In addition, it is recommended to set the source page size to 1000 for stability and efficiency.
Incorrect Setup (Single Job):
Correct Setup:
1. Parent Job:
2. Child Job:
Currency Exchange:
Avoid calculating via Story Concatenation Columns — large datasets exceed system limits.
Refer to Guardrails for People Analytics Story and SAP Note 3025688.
Each query supports:
Monthly Aggregation:
To analyze sums per month, use a calculated column (Payroll Month) to convert MM/DD/YYYY → YYYY-MM with Year() and Month() functions.
This handles varying pay frequencies (e.g., bi-weekly in the US vs. monthly in Singapore) efficiently via pivot aggregation.
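The Payroll Month logic is implemented as a Story calculated column; the sketch below only mirrors the transformation in plain code to show why it normalizes differing pay frequencies. The sample dates and amounts are made up.

```python
# Sketch of the calculated-column logic: MM/DD/YYYY -> YYYY-MM, so pay items
# at any frequency (bi-weekly US, monthly Singapore) fall into one monthly
# bucket when aggregated.
from collections import defaultdict
from datetime import datetime

def payroll_month(mmddyyyy: str) -> str:
    d = datetime.strptime(mmddyyyy, "%m/%d/%Y")
    return f"{d.year:04d}-{d.month:02d}"

# Two bi-weekly July pay dates collapse into the same 2025-07 bucket
totals = defaultdict(float)
for pay_date, amount in [("07/11/2025", 2000.0), ("07/25/2025", 2000.0),
                         ("08/08/2025", 2000.0)]:
    totals[payroll_month(pay_date)] += amount
# totals == {"2025-07": 4000.0, "2025-08": 2000.0}
```

Grouping on the derived YYYY-MM key is exactly what the pivot aggregation in the Story does once the calculated column exists.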
Historical Record Retrieval:
Use Time Filter for all historical MDF data but “Today” filters for navigation tables (Job Info, Personal Info, etc.) to avoid duplication and incorrect joins.
This project demonstrates a complex yet practical scenario integrating MDF, API, Integration Center, and Story Reporting. The challenges stemmed from data volume, multiple payroll systems, and data sensitivity concerns. The full implementation took me approximately two months, so I hope this guide shortens your learning curve for similar integration requirements.
For detailed questions or discussion, feel free to comment below or contact me on LinkedIn.