Technology Blog Posts by SAP
ajay_kalra2
Product and Topic Expert

From Pilot to Global Standard: Uptime Customizing for FI Conversion in Downtime-Optimized S/4HANA Conversions

Large-scale SAP S/4HANA conversions and migrations often face a fundamental constraint: the business cannot tolerate long downtime. For global systems processing billions of transactions, extended downtime is simply not an option.

To address this challenge, SAP introduced the Downtime-Optimized Conversion (DoC) approach within SUM (Software Update Manager). It moves time-consuming database migration and data model conversions to SUM uptime phases, with only "delta" processing in downtime.

Over time, this approach has evolved further with the introduction of Uptime Customizing for FI Conversion.

This article shares some of the architecture decisions, achievements, challenges, and lessons learned from a project where we leveraged this technology. The project served as the pilot implementation for "Uptime Customizing for FI Conversion", and its successful go-live contributed to the feature becoming generally available (GA) in later SUM releases.


The Downtime Challenge in Large SAP Transformations

Enterprise SAP landscapes often support mission-critical operations such as:

  • Global financial activities and consolidation

  • Payroll for hundreds of thousands of employees

  • Transaction processing across multiple business units and regions

For such environments, even short outages can disrupt business operations, cause monetary losses, and jeopardize regulatory obligations.

A standard conversion using SUM requires large portions of the system migration and data model conversion activities to occur during the technical downtime window. Downtime-Optimized Conversion addresses this by moving large parts of the migration workload into the system uptime phase, significantly reducing the duration of technical downtime.


Downtime Optimized Conversion - A refresher

 

The traditional DoC approach shifts selected activities of a standard conversion into uptime processing, including:

  • Data migration

  • Field conversions (FIN and ML data model conversions)

  • Migration of selected large tables

The diagram below gives a refresher:

[Image: ajay_kalra2_5-1773415195540.png]

 

However, the earlier DoC approach still had a major drawback: a customizing freeze was required in the production system. This was because:

  1. FIN customizing (at the S/4HANA level) is set up during the standard conversion run and "stored" in a transport request. That transport request is included via the CTI (Customer Transport Integration) buffer in subsequent DoC cycles. Since DoC performs the FI conversion in uptime, it needs the FI customizing adjustments for S/4HANA to run the data model conversions.
  2. A new standard run is required whenever FIN customizing changes in the actual production system, because the previously prepared CTI buffer contains no adjustments for those changes. There is no possibility of adapting the changes directly in a new DoC cycle.
  3. A customizing soft freeze with many restrictions in the actual production system is required to keep the "stored" transport request viable.

This created several project challenges:

  • Customizing changes had to stop for extended periods
  • Multiple standard conversion cycles were required
  • Any new business configuration changes in production forced additional cycles

For organizations with a high number of configuration changes due to the nature of their business, this constraint significantly slowed project progress: the extra cycles increased cost and effort, lengthened the overall project timeline, and reduced the flexibility of changes allowed in the production system for extended periods.

For simplicity, I will refer to the previous DoC as "DoC-Legacy" and the new DoC as "DoC-UCFC" in this article.


Introducing Uptime Customizing for FI Conversion

 

To address these limitations, a new enhancement was introduced: Uptime Customizing for FI Conversion (UCFC).

With this capability, FI "delta" customizing can be executed during SUM uptime in the temporary system instance itself.

Remember that an initial standard cycle at the start of the project, on a copy of the production system, is still required to capture the "initial FI customizing for S/4HANA" transport. After that, any customizing changes in production can be adjusted in the temporary instance in a new cycle: users can dynamically perform the missing customizing steps for new configuration (e.g., new plants, company codes) at a specific phase during the SUM uptime, in the TMP instance itself.

(Update: with the latest SUM 2.0 SP25 released in February 2026, you no longer need the initial "standard" sandbox cycle to create the initial FI customizing transport; all FI customizing can be done in any DoC cycle in the TMP instance, with no need to carry old FI customizing transports in the CTI buffer.)

This approach can drastically reduce project timelines, as the diagram below illustrates:

[Image: ajay_kalra2_1-1773409934036.png]

[Image legend: Initial complete FI customizing / Delta FI customizing]

With every cycle, you can do delta customizing on the temp instance and include those delta customizing transport requests (TRs) in the CTI buffer.

 

Achievements made possible by DoC - Uptime Customizing for FI Conversion:

 

  • Saved resources, effort, cost, and more than 4-5 months of time by avoiding at least four "standard" conversion cycles that would have been required with the legacy DoC approach but were not needed with DoC-UCFC.
  • Eliminated the need for a change freeze of more than 19-20 weeks in the production system.
  • Migrated and converted an ECC instance with a database of over 40 TB, including billions of ACDOCA records and billions of generated DoC delta CRR records, with zero critical data loss.
  • Cut downtime from 95+ hours (standard conversion) to just 15 hours, an ~85% improvement, while integrating more than 2,300 customer transports in the same SUM technical downtime (these alone would have added ~10 hours).

With this project's go-live, DoC "Uptime Customizing for Finance Conversion" was made generally available (GA) with SUM 2.0 SP23 and has been the default Downtime-Optimized Conversion technology since May 2025.

 

Technology changes enabling UCFC:

 

The key difference in the evolved technology, which enables delta customizing in the SUM-created temporary instance (TMP), is the migration of ALL application tables in uptime. This is illustrated below:

[Image: ajay_kalra2_3-1773410470951.png]

An example of the tables processed during DoC-UCFC compared to DoC-Legacy:

[Image: ajay_kalra2_4-1773410591332.png]

A much higher count of tables handled in SUM uptime processing means new challenges to manage as part of the project.

 

Challenges/Potential pitfalls to manage:

 

More “Read-Only” Tables

Many more tables now receive the "read-only" flag, because the flag now depends on a table's "Delivery Class". This increases the chance of an unplanned business restriction, especially where a table's delivery class is not strictly adhered to by the application: tables categorized as "customizing" but used in practice both as transactional/application tables and as master data tables may get a read-only flag during SUM uptime, restricting a business process. Detailed impact analysis via the SUM Toolbox becomes more crucial than ever.

Example: processes such as "run regeneration programs for substitution/validation rules" may access customizing tables and fail if the underlying tables are set to read-only. Processes like these can usually be identified and run early, before SUM uptime even starts, to avoid any impact.

  • Impact analysis for read-only tables is now much more crucial. Gather production usage statistics per week for several months. Track every conflict (read-only table vs. number of changes in production) and, where possible, map it to the exact business process.
  • Convince the customer to do some business testing during SUM uptime in earlier cycles, to catch as many conflicts as early as possible.
  • Be prepared for surprises in the Load Verification (LV) cycle. Following LV, decisions can be taken regarding these tables for the actual go-live.
  • Very important: plan the SUM uptime at go-live in the same weeks of the month as the LV cycles. For example, if the LV cycle ran in the first two weeks of the month and the go-live cycle runs in the last two weeks, go-live may hit new business processes, such as month-end close, with conflicts never found in the LV cycle!

"Load verification Cycle" -- A cycle where SUM "uptime" only is done entirely on production in advance to check all impacts. SUM is reset from downtime confirmation screen in actual production system while the downtime is executed on a clone. Running LV cycles during a downtime minimization project leveraging technologies like Downtime optimized conversions or ZDO (Zero downtime option) is crucial to a project's success.

 

More uptime tables with Change Recording and Replay (CRR) via triggers

Since many more tables use Change Recording and Replay in DoC-UCFC than in DoC-Legacy, more delta-recording changes must be replayed before downtime.

During early validation cycles on production, several billion transactional changes were captured through change-recording triggers. Despite several days of replication, only about 94% of the recorded changes could be replayed before the downtime window began. The remaining delta had to be replicated during downtime, which added additional hours to the technical downtime.

Further analysis revealed an important pattern. Many tables with very small data size but extremely high change rates were contributing disproportionately to the total volume of recorded changes. Although these tables were small in size, they generated hundreds of millions of changes, making them inefficient candidates for uptime migration.

Based on this observation, the migration strategy was refined by excluding certain high-change, low-size tables from uptime migration and instead migrating them during downtime. Their small size meant they added only seconds to minutes to the downtime migration. This was achieved using the CRRTABLIST.LST file, as documented in SAP Note 3444013.
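The selection logic can be sketched as a simple filter over per-table statistics from a validation cycle. This is a hedged illustration with assumed thresholds and hypothetical table names; the actual exclusion is then configured via the CRRTABLIST.LST file per SAP Note 3444013.

```python
# Illustrative sketch (assumed thresholds, hypothetical data): pick small but
# change-heavy tables as candidates to exclude from uptime migration.

def exclusion_candidates(tables, max_size_mb=500, min_changes=10_000_000):
    """tables: list of dicts with keys name, size_mb, recorded_changes.
    Returns names of small, high-change tables, busiest first."""
    return [
        t["name"]
        for t in sorted(tables, key=lambda t: t["recorded_changes"], reverse=True)
        if t["size_mb"] <= max_size_mb and t["recorded_changes"] >= min_changes
    ]

stats = [  # hypothetical statistics collected during a validation cycle
    {"name": "ZQUEUE_SMALL", "size_mb": 40, "recorded_changes": 300_000_000},
    {"name": "ACDOCA", "size_mb": 4_000_000, "recorded_changes": 900_000_000},
    {"name": "ZLOG_TINY", "size_mb": 15, "recorded_changes": 120_000_000},
]
print(exclusion_candidates(stats))  # → ['ZQUEUE_SMALL', 'ZLOG_TINY']
```

Note how a huge table like ACDOCA stays in uptime migration regardless of its change rate: moving it to downtime would defeat the purpose of DoC.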

This optimization delivered two key improvements:

  • The uptime window was shortened and better controlled.

  • The total volume of delta changes was reduced significantly.

As a result, during the production go-live cycle the system achieved 100% replication of recorded changes before the downtime phase, allowing the downtime window to proceed without additional replication delays. 

So whereas with the previous DoC you had to think about including big tables in uptime, you should now also think about excluding tables from uptime, especially for very busy customer systems with a very high transactional volume/change rate but relatively small table sizes.

However, be careful about what you move to downtime: it must not be a table that may be needed for the FI conversion in uptime. Whenever in doubt, submit an incident to BC-UPG-TLS-TLA or BC-UPG-DTM_TLA; our product support experts can guide you.

 

CTI (Customer transport integration) Buffer and Retrofits

With Uptime Customizing for FI Conversion, FI configuration is handled through the CTI buffer, which contains the transports generated during earlier conversion cycles. While this enables the system to execute the FI conversion during uptime, it also introduces a new challenge: the CTI buffer may contain older customizing transports created months earlier in the project. In particular, the initial FI customizing transports from the standard cycle may be a year old or more, depending on the length of your project.

If production customizing changes occur later in the project, these older transports in the CTI buffer can unintentionally overwrite newer production configuration during the conversion. This risk becomes more significant in long-running transformation programs where the initial FI customizing snapshot may be several months old.

To address this, a strong retrofit strategy is required between the source system and the S/4HANA landscape. Ideally, all relevant retrofits—including simple changes that require no functional adjustment—should be incorporated into the CTI buffer for each conversion cycle so that the customizing state remains aligned with the current production system.

In practice, managing this balance requires careful governance. Including too many transports in the CTI buffer can introduce new classification changes or read-only table situations (especially with a lot of VDAT/CDAT objects), while excluding them may require post-conversion corrections.

During the project, we adopted a controlled CTI strategy, where the buffer was stabilized at certain points in the project timeline and only essential delta customizing changes were added afterward. Other transports were handled post-conversion when necessary. This approach helped maintain conversion stability while still allowing ongoing project development activities in the parallel (N+1) landscape.

We were able to include more than 2,300 customer transports in the CTI buffer in SUM, which would have taken another 10+ hours to import had they not been included in SUM.

 

Delta Replication Challenges – Database Log Management Limitations

During the project, a major operational challenge was observed related to the database log management behavior of older DB2 versions.

In this environment, both committed and uncommitted transactions share the same active log space, and advanced log space management available in later database versions was not present. This created a risk during DoC delta replication, where a large number of migration processes (R3loads) continuously generate commits.

Under normal conditions this replication is stable, but if even a single long-running uncommitted transaction is active, it can hold the oldest log entry and prevent log reuse. As replication commits continue to accumulate, the active log can rapidly fill up, causing num_log_span violations and potentially transaction rollbacks or even system instability.

During early validation cycles, log usage increased sharply shortly after SUM delta replication via the CRR (Change Recording and Replay) framework started, highlighting the need for careful control of replication activity.

To mitigate this risk, the execution strategy included:

  • Identifying long-running jobs that could hold open transactions (e.g., BW extraction jobs).

  • Carefully limiting the number of replication processes running concurrently.

  • Implementing continuous log monitoring during uptime replication and pausing/reducing replication processes (R3loads) when log usage spikes were detected.

These controls ensured stable delta replication throughout the migration cycles.
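The third control above, pausing or reducing replication when log usage spikes, is essentially a throttling loop. The sketch below illustrates the idea with assumed thresholds; it is not SAP or IBM tooling, and in practice the utilization figure would come from database monitoring (on Db2, for instance, the SYSIBMADM.LOG_UTILIZATION administrative view).

```python
# Illustrative throttling sketch (assumed thresholds): adjust the number of
# concurrent replication (R3load) processes based on active-log utilization.

def adjust_r3loads(current, log_used_pct, high=80, low=50, step=4, max_procs=32):
    """Back off when log usage spikes; ramp back up once it recovers."""
    if log_used_pct >= high:
        return max(1, current - step)   # reduce parallelism to let the log drain
    if log_used_pct <= low:
        return min(max_procs, current + step)  # safe to ramp back up
    return current                      # hold steady in the middle band

# Simulated utilization samples over time: a spike followed by recovery.
procs = 32
for used_pct in (40, 60, 85, 90, 55, 45):
    procs = adjust_r3loads(procs, used_pct)
print(procs)  # parallelism drops during the spike, then partially recovers
```

The hysteresis band between the low and high thresholds prevents the process count from oscillating on every sample.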

 

Performance Issues

Another important issue identified during the project involved database lock contention caused by high update parallelism and DoC inline triggers. More tables in uptime means more triggers, and more triggers may cause unanticipated performance issues; these need to be identified by running load verification/load simulation cycles.

Financial postings triggered updates across multiple related ledger tables as part of a single logical unit of work. When many update processes attempted to modify the same row simultaneously, the system experienced database lock waits, causing update processes to queue behind each other.

Although lock waits appeared on the first table in the update sequence, the underlying cause was the cumulative update latency across several tables in the transaction flow. The presence of DoC inline triggers slightly increased update processing time, which amplified lock contention under very high parallelism.

Because update requests were distributed across many application servers, a large number of update work processes competed to update the same data rows. This resulted in:

  • rapidly increasing database lock waits

  • a growing backlog of pending update records

  • other business activities temporarily waiting for update processes to become available

When we reduced the number of update work processes instead of increasing them, updates flowed faster and the contention was released. Understanding this behavior helped refine the system configuration and operational monitoring during the migration window, ensuring stable execution of business transactions during uptime processing.
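Why reducing parallelism helped can be seen with a back-of-envelope model (all numbers assumed, purely illustrative): when every posting must update the same hot ledger row, row updates serialize, so each additional update work process mostly adds queueing time rather than throughput.

```python
# Back-of-envelope model (assumed numbers): contention on a single hot row.
# Each contender queues behind the serialized updates of the others, and the
# trigger/queue management adds a small per-worker overhead.

def avg_lock_wait_ms(workers, row_update_ms=5.0, overhead_ms=0.2):
    """Average time a worker waits for the hot-row lock: with n contenders,
    a worker queues behind (n - 1) serialized row updates on average."""
    return (workers - 1) * (row_update_ms + overhead_ms)

for n in (4, 16, 64):
    print(n, avg_lock_wait_ms(n))
# Lock waits grow roughly linearly with parallelism on a hot row, so fewer
# update work processes mean a shorter queue, not lower throughput.
```

This matches the observed behavior: the bottleneck was the serialized hot row, not a shortage of update work processes.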

 

Additional Resources

Simplify the downtime-optimized System Conversion ... - SAP Community

Support Portal Page of Software Update Manager: https://support.sap.com/en/tools/software-logistics-tools/software-update-manager.html

Support Portal Page of downtime-optimized conversion approach of Software Update Manager: https://support.sap.com/en/tools/software-logistics-tools/software-update-manager/downtime-optimized...

SAP Note on Uptime Customizing for FI Conversion: SAP Note 3444013

Feel free to comment below.
