Large-scale SAP S/4HANA conversions and migrations often face a fundamental constraint: the business cannot tolerate long downtime. For global systems processing billions of transactions, extended downtime is simply not an option.
To address this challenge, SAP introduced the Downtime-Optimized Conversion (DoC) approach within the Software Update Manager (SUM). It moves time-consuming database migration and data model conversions to SUM uptime phases, leaving only "delta" processing for the downtime window.
This approach has since evolved further with the introduction of Uptime Customizing for FI Conversion.
This article shares some of the architecture decisions, achievements, challenges, and lessons learned from a project where we leveraged this technology. The project served as a pilot implementation for Uptime Customizing for FI Conversion, and its successful go-live contributed to the feature becoming generally available (GA) in later SUM releases.
Enterprise SAP landscapes often support mission-critical operations such as:
Global financial activities and consolidation
Payroll for hundreds of thousands of employees
Transaction processing across multiple business units and regions
For such environments, even short outages can disrupt business operations, cause monetary losses, and put regulatory obligations at risk.
A standard conversion using SUM requires large portions of the system migration and data model conversion activities to occur during the technical downtime window. Downtime-Optimized Conversion addresses this by moving large parts of the migration workload into the system uptime phase, significantly reducing the duration of technical downtime.
The traditional DoC approach shifts selected activities of a standard conversion into uptime processing, including:
Data migration
Field conversions (FIN and ML data model conversions)
Migration of selected large tables
The diagram below gives a refresher:
However, the earlier DoC approach still had one major drawback: it required a customizing freeze in the production system.
For organizations with a high volume of configuration changes driven by the nature of their business, this constraint significantly slowed project progress. It increased costs and effort through extra cycles, lengthened the overall project timeline, and reduced the flexibility of changes allowed in the production system for extended periods.
For simplicity, I will refer to the previous DoC as "DoC - Legacy" and the new DoC as "DoC - UCFC" in this article.
To address these limitations, a new enhancement was introduced: Uptime Customizing for FI Conversion (UCFC).
With this capability, the FI "delta" customizing can be executed during SUM uptime, in the temporary system instance (TMP) itself.
Remember that an initial standard cycle at the start of the project, run on a copy of the production system, is still required to capture the "Initial FI customizing for S/4HANA" transport. After that, any customizing changes made in production can be adjusted in the temporary instance in a new cycle: users can dynamically perform the missing customizing steps for new configuration (e.g., new plants, company codes) at a specific phase of the SUM uptime, directly in the TMP instance.
(Update: With the latest SUM 2.0 SP25, released in February 2026, you no longer even need the initial "standard" sandbox cycle to create the initial FI customizing transport; all FI customizing can be done in any DoC cycle in the TMP instance, and there is no need to carry old FI customizing transports in the CTI buffer.)
This approach can drastically reduce project timelines. The diagram below illustrates the split between the initial complete FI customizing and the delta FI customizing done in later cycles.
With every cycle, you can perform delta customizing on the TMP instance and include those delta customizing transport requests (TRs) in the CTI buffer.
With this project's go-live, DoC "Uptime Customizing for FI Conversion" was made generally available (GA) with SUM 2.0 SP23 and has been the default Downtime-Optimized Conversion technology since May 2025.
The key difference in the evolved technology, and what makes delta customizing possible in the SUM-created temporary instance (TMP), is the migration of ALL application tables in uptime. This is explained in detail below:
An example of tables processed during DoC - UCFC compared to DoC - Legacy:
A much higher count of tables handled in SUM's uptime processing means new challenges to manage as part of the project.
Many more tables now get the "read-only" flag, because the flag depends on a table's "Delivery Class". This increases the chance of an unplanned business restriction, especially where a table's delivery class is not strictly adhered to by the application: tables categorized as "Customizing" but with mixed use as transactional/application tables or master data tables may get a read-only flag during SUM uptime, restricting a business process during that window. A detailed impact analysis via the SUM Toolbox becomes more crucial than ever.
Example: processes like "Run regeneration programs for substitution/validation rules" may access customizing tables and fail if the underlying tables are set to read-only. Processes like these can usually be identified and run early, before SUM uptime even starts, to avoid any impact.
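To make that pre-screening concrete, here is a minimal sketch (not the SUM Toolbox logic) of how one might cross-check the tables used by a critical process against their delivery class from DD02L. The file names, CSV layout, and the class-to-risk mapping are illustrative assumptions only; the authoritative read-only classification comes from SUM itself.

```python
# Illustrative pre-screening only. Assumed inputs (hypothetical names/layouts):
#   dd02l_extract.csv  -> columns TABNAME, CONTFLAG (delivery class) from DD02L
#   process_tables.txt -> one table name per line, e.g. from an SQL trace
import csv

# Delivery classes denoting customizing/system content (DD02L-CONTFLAG);
# treating all of them as "read-only risk" is a simplification for this sketch.
CUSTOMIZING_CLASSES = {"C", "G", "E", "S", "W"}

def load_delivery_classes(dd02l_csv):
    with open(dd02l_csv, newline="") as f:
        return {row["TABNAME"]: row["CONTFLAG"] for row in csv.DictReader(f)}

def flag_risky_tables(process_tables, classes):
    # Tables used by the process whose delivery class marks them as customizing
    # are candidates for a read-only restriction during SUM uptime.
    return [t for t in process_tables if classes.get(t) in CUSTOMIZING_CLASSES]

if __name__ == "__main__":
    classes = load_delivery_classes("dd02l_extract.csv")
    with open("process_tables.txt") as f:
        used = [line.strip() for line in f if line.strip()]
    for tab in flag_risky_tables(used, classes):
        print(f"Review before uptime: {tab} (delivery class {classes[tab]})")
```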
"Load verification Cycle" -- A cycle where SUM "uptime" only is done entirely on production in advance to check all impacts. SUM is reset from downtime confirmation screen in actual production system while the downtime is executed on a clone. Running LV cycles during a downtime minimization project leveraging technologies like Downtime optimized conversions or ZDO (Zero downtime option) is crucial to a project's success.
Since DoC - UCFC handles many more tables with Change Recording and Replay than DoC - Legacy, there are more "delta recording" changes to replay before downtime.
During early validation cycles on production, several billion transactional changes were captured through change-recording triggers. Despite several days of replication, only about 94% of the recorded changes could be replayed before the downtime window began. The remaining delta had to be replicated during downtime, which added hours to the technical downtime.
Further analysis revealed an important pattern: many tables with very small data size but extremely high change rates were contributing disproportionately to the total volume of recorded changes. Although small, these tables generated hundreds of millions of changes, making them inefficient candidates for uptime migration.
Based on this observation, the migration strategy was refined by excluding certain high-change, low-size tables from uptime migration and migrating them during downtime instead. Because of their small size, they added only seconds to minutes to the downtime migration. This was achieved using the CRRTABLIST.LST file, as documented in SAP Note 3444013.
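As an illustration of the selection logic, here is a minimal sketch that derives exclusion candidates from change-recording statistics. The input file, the thresholds, and the assumption that CRRTABLIST.LST accepts one table name per line are all hypothetical; SAP Note 3444013 documents the actual file syntax and usage.

```python
# Illustrative selection logic only. The stats export and thresholds are
# hypothetical, and the one-table-per-line output format must be verified
# against SAP Note 3444013 before any real use.
import csv

MAX_SIZE_MB = 100          # "small" table: cheap to migrate during downtime
MIN_CHANGES = 10_000_000   # "very busy" table: expensive to replay in uptime

def exclusion_candidates(stats_csv):
    # stats_csv columns (hypothetical export): TABNAME, SIZE_MB, RECORDED_CHANGES
    with open(stats_csv, newline="") as f:
        return sorted(
            row["TABNAME"]
            for row in csv.DictReader(f)
            if float(row["SIZE_MB"]) <= MAX_SIZE_MB
            and int(row["RECORDED_CHANGES"]) >= MIN_CHANGES
        )

if __name__ == "__main__":
    tables = exclusion_candidates("crr_change_stats.csv")
    with open("CRRTABLIST.LST", "w") as out:
        out.write("\n".join(tables) + "\n")
    # Every candidate still needs a manual check against FI-conversion
    # dependencies (see the caution below) before it is excluded from uptime.
    print(f"{len(tables)} exclusion candidates written")
```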
This optimization delivered two key improvements:
The uptime window was shortened and better controlled.
The total volume of delta changes was reduced significantly.
As a result, during the production go-live cycle the system achieved 100% replication of recorded changes before the downtime phase, allowing the downtime window to proceed without additional replication delays.
So whereas with the previous DoC you had to think about "including big tables" in uptime, you should now think about "excluding tables" from uptime for systems with a very high transactional volume/change rate but comparatively modest table sizes, especially for customers with very busy systems.
However, be careful about what you move to downtime: it must not be a table that may be needed for the FI conversion in uptime. Whenever in doubt, submit an incident to BC-UPG-TLS-TLA or BC-UPG-DTM_TLA. Our product support experts can guide you.
With Uptime Customizing for FI Conversion, the FI configuration is handled through the CTI buffer, which contains the transports generated during earlier conversion cycles. While this enables the system to execute the FI conversion during uptime, it also introduces a new challenge: the CTI buffer may contain older customizing transports created months earlier in the project. In particular, the initial FI customizing transports from the standard cycle may be a year old or more, depending on the length of your project.
If production customizing changes occur later in the project, these older transports in the CTI buffer can unintentionally overwrite newer production configuration during the conversion. This risk becomes more significant in long-running transformation programs where the initial FI customizing snapshot may be several months old.
To address this, a strong retrofit strategy is required between the source system and the S/4HANA landscape. Ideally, all relevant retrofits—including simple changes that require no functional adjustment—should be incorporated into the CTI buffer for each conversion cycle so that the customizing state remains aligned with the current production system.
In practice, managing this balance requires careful governance. Including too many transports in the CTI buffer can introduce new classification changes or read-only table situations (especially with a lot of VDAT/CDAT objects), while excluding them may require post-conversion corrections.
During the project, we adopted a controlled CTI strategy: the buffer was stabilized at certain points in the project timeline, and only essential delta customizing changes were added afterward. Other transports were handled post-conversion when necessary. This approach helped maintain conversion stability while still allowing ongoing project development activities in the parallel (N+1) landscape.
We were able to include around 2,300 customer transports in the CTI buffer in SUM; had they not been included in SUM, importing them would have taken another 10+ hours.
During the project, a major operational challenge was observed related to the database log management behavior of older DB2 versions.
In this environment, both committed and uncommitted transactions share the same active log space, and advanced log space management available in later database versions was not present. This created a risk during DoC delta replication, where a large number of migration processes (R3loads) continuously generate commits.
Under normal conditions this replication is stable, but if even a single long-running uncommitted transaction is active, it can hold the oldest log entry and prevent log reuse. As replication commits continue to accumulate, the active log can rapidly fill up, leading to num_log_span violations that can cause transaction rollbacks or even system instability.
During early validation cycles, log usage increased sharply shortly after SUM delta replication using the CRR (Change Recording and Replay) framework started, highlighting the need for careful control of replication activity.
To mitigate this risk, the execution strategy included:
Identifying long-running jobs that could hold open transactions (e.g., BW extraction jobs).
Carefully limiting the number of replication processes running concurrently.
Implementing continuous log monitoring during uptime replication and pausing/reducing replication processes (R3loads) when log usage spikes were detected (see the monitoring sketch below).
These controls ensured stable delta replication throughout the migration cycles.
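To illustrate the monitoring control, here is a minimal sketch that polls DB2 LUW active log usage via the MON_GET_TRANSACTION_LOG table function and warns when a threshold is crossed. The connection string, threshold, and polling interval are placeholders, and the reaction (pausing or reducing R3loads) remained a manual operator step in the project rather than being automated as shown.

```python
# Illustrative monitoring only: polls DB2 LUW active log usage and warns when
# a threshold is hit. DSN, threshold, and interval are placeholder values.
import time
import ibm_db

DSN = "DATABASE=PRD;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;UID=monuser;PWD=secret;"
THRESHOLD = 0.80  # warn at 80% of active log space used
SQL = ("SELECT SUM(TOTAL_LOG_USED) AS USED, SUM(TOTAL_LOG_AVAILABLE) AS AVAIL "
       "FROM TABLE(MON_GET_TRANSACTION_LOG(-1)) AS T")

conn = ibm_db.connect(DSN, "", "")
try:
    while True:
        row = ibm_db.fetch_assoc(ibm_db.exec_immediate(conn, SQL))
        used, avail = int(row["USED"]), int(row["AVAIL"])
        # Note: AVAIL is reported as -1 per member under infinite logging,
        # in which case this ratio is meaningless; finite logging is assumed.
        ratio = used / (used + avail)
        print(f"active log usage: {ratio:.1%}")
        if ratio >= THRESHOLD:
            print("WARNING: log space critical - throttle R3load replication")
        time.sleep(60)
finally:
    ibm_db.close(conn)
```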
Another important issue identified during the project involved database lock contention caused by high update parallelism and DoC inline triggers. More tables in uptime means more triggers, and more triggers may cause unanticipated performance issues; these need to be identified by running load verification/load simulation cycles.
Financial postings triggered updates across multiple related ledger tables as part of a single logical unit of work. When many update processes attempted to modify the same row simultaneously, the system experienced database lock waits, causing update processes to queue behind each other.
Although lock waits appeared on the first table in the update sequence, the underlying cause was the cumulative update latency across several tables in the transaction flow. The presence of DoC inline triggers slightly increased update processing time, which amplified lock contention under very high parallelism.
Because update requests were distributed across many application servers, a large number of update work processes competed to update the same data rows. This resulted in:
rapidly increasing database lock waits
a growing backlog of pending update records
other business activities temporarily waiting for update processes to become available
Counterintuitively, reducing the number of update work processes instead of increasing them made the updates flow faster and released the contention. Understanding this behavior helped refine the system configuration and operational monitoring during the migration window, ensuring stable execution of business transactions during uptime processing.
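For context, the number of V1 update work processes per instance is controlled by the instance profile parameter rdisp/wp_no_upd. A minimal sketch of the kind of change involved, with purely illustrative values:

```text
# SAP instance profile (illustrative value only):
# rdisp/wp_no_upd sets the number of V1 update work processes per instance.
# Counterintuitively, lowering it reduced queueing behind database row locks.
rdisp/wp_no_upd = 4
```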
Blog: Simplify the downtime-optimized System Conversion ... - SAP Community
Support Portal page of the Software Update Manager: https://support.sap.com/en/tools/software-logistics-tools/software-update-manager.html
Support Portal page of the downtime-optimized conversion approach of the Software Update Manager: https://support.sap.com/en/tools/software-logistics-tools/software-update-manager/downtime-optimized...
SAP Note on Uptime Customizing for FI Conversion: SAP Note 3444013
Feel free to comment below.