Enterprise Resource Planning Blogs by SAP
I work for SAP in Consulting and have over 20 years of experience. Lately I have been involved in a lot of SAP S/4HANA conversion projects, both with and without RISE with SAP. My core area has always been downtime optimization for migrations and the Software Update Manager process, but I also work on technical reviews, sizing, SAP S/4HANA greenfield implementations, and moves to hyperscalers, to name a few.

The present scenario is for a customer whom I am helping in the capacity of Technical Advisor for their SAP S/4HANA conversion, with the aim of reducing their technical downtime and thereby the overall business downtime of the SAP S/4HANA conversion project.

System – SAP ERP Central Component (ERP 6.0 EHP8)

Maintenance Event – SAP S/4HANA 2021 conversion, including migration from Microsoft SQL Server to SAP HANA, migration from SAP CRM to SAP S/4HANA CE (including data migration), SAP Fiori apps, and SAP Intelligent Robotic Process Automation.

Hyperscaler – AWS

Software Update Manager tool version – SUM 2.0 SP13

Source DB size – 4 TB

Business Downtime Window – 48 hrs

Like any other conversion project, the process began with a copy of PRD to SBX, and this was the first time the Software Update Manager tool was executed. Optimizations used for the first run: none, apart from the normal calculation of the number of R3load processes based on the number of CPUs.

The output from the “Technical Downtime Optimization” app is below.

System 1 – Conversion Timelines

Data Transfer: 2 days 9 hrs

Post Data Transfer: 9 hrs
Structure Change: 28 min 20 sec
Main Import: 3 hrs 24 min (R3trans = #CPU)
Data Conversion: 2 hrs 57 min
MM Migration (Inventory Management migration): 8 min 23 sec
SD Migration (Sales & Distribution shadow field migration): 3 min 23 sec
Other Phases: 1 hr 35 min

So if I calculated the downtime, I was looking at a technical downtime of 2 days 19 hrs 12 min (67 hrs 12 min) just for the Software Update Manager tool and the Finance conversion. This obviously did not look good against a business downtime window of 48 hrs.
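As a quick sanity check on the phase arithmetic, the individual "Post Data Transfer" durations reported by the TDO app for System 1 can be summed (an illustrative sketch; the values are taken from the timings above):

```python
# Sum the System 1 "Post Data Transfer" phase durations reported by the
# Technical Downtime Optimization app (values from the timings above).

def to_seconds(h=0, m=0, s=0):
    """Convert an h/m/s duration to seconds."""
    return h * 3600 + m * 60 + s

phases = {
    "Structure Change": to_seconds(m=28, s=20),
    "Main Import":      to_seconds(h=3, m=24),
    "Data Conversion":  to_seconds(h=2, m=57),
    "MM Migration":     to_seconds(m=8, s=23),
    "SD Migration":     to_seconds(m=3, s=23),
    "Other Phases":     to_seconds(h=1, m=35),
}

total = sum(phases.values())
h, rem = divmod(total, 3600)
m, s = divmod(rem, 60)
print(f"Post Data Transfer total: {h}h {m}m {s}s")  # ~8h 36m, i.e. roughly 9 hrs
```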

Armed with the above knowledge, further analysis was performed to select the optimal method for downtime reduction.

SAP provides the following options for downtime reduction:


Standard – Generally available. Normal approach.
nZDM (near Zero Downtime Maintenance) – Applicable with SAP HANA as the source DB. Moves table structure adaptations and content partly into uptime for the upgrade.
Downtime Optimized DMO (DO-DMO) – Applicable when the source DB is not SAP HANA. Migrates selected large tables partly in uptime.
Downtime Optimized Conversion (DO-DMO Conversion) – Requires an SAP consultant with advanced knowledge. Moves migration and data conversion partly to uptime.
NZDT – Service based; requires a tool license and a consulting service. High effort, specialized skills, and some business restrictions while the NZDT triggers are active. Based on a clone approach.

From the TDO app it was clear that the FI conversion, at 3 hrs 12 min, was not an issue, so Downtime Optimized Conversion (which caters for the FI conversion as well) was not required. Since the source DB was not SAP HANA, nZDM was not applicable. Finally, NZDT was out of the picture, as DO-DMO looked promising for further technical downtime reduction.

So for the next run, the following were considered in addition to DO-DMO.

The data transfer rate depends on the number of CPUs and the network bandwidth. System #1 had only 8 CPUs on the PAS, which was too few, as can be seen from the result above. For System #2 the PAS was bumped up to 48 CPUs and a 10 Gbit/s network (as recommended) was used.
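To see why CPU count matters more than raw bandwidth here, a back-of-envelope check (an illustrative sketch, not an SAP formula) shows that the wire time for the payload is tiny compared to the observed transfer times; the R3load export/import processing and the target DB load are the real bottleneck:

```python
# Theoretical floor for moving the DB payload over the network.
# Real DMO throughput is far lower, because R3load processing dominates,
# which is why the number of CPUs on the PAS matters so much.

db_size_bytes = 4e12      # ~4 TB source DB (decimal TB, an assumption)
link_bits_per_s = 10e9    # 10 Gbit/s network

seconds = db_size_bytes * 8 / link_bits_per_s
print(f"Wire-time floor: {seconds / 60:.0f} min")  # ~53 min
```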

Optimize table splitting and shuffling using migration duration files – see https://blogs.sap.com/2015/12/15/optimizing-dmo-performance/ and SAP Note 2383750.
Reduce the MAIN_IMPORT phase (part of the data conversion) by using a higher number of R3trans processes (provided the CPUs on the PAS are bumped up).
Make sure that the SAP HANA parameters are set as recommended in SAP Note 2186744 – execute the mini-checks to find the recommended parameter settings. This will be the baseline; the parameters can be tuned further during performance testing.
Make sure that the source database is configured as recommended, per SAP Note 2312935 (SQL Server 2016).
Use the Downtime Optimized DMO option for the top 20 large tables, so that their data is partially transferred during uptime. From the analysis, the top 11 tables cover 2.2 TB of the 4.5 TB system. During this time the workbench will be locked (transport route) and a code freeze will be in place; this is also standard practice when you use the standard SUM tool.
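A rough coverage estimate (illustrative only, using the table sizes from the analysis above) shows how much of the database the uptime pre-transfer takes care of, and how much is left for the downtime window:

```python
# How much of the DB the largest tables account for, and what remains
# to be migrated in downtime (numbers from the analysis above).

db_size_tb = 4.5
top_tables_tb = 2.2   # top 11 tables selected for uptime transfer

coverage = top_tables_tb / db_size_tb
remaining_tb = db_size_tb - top_tables_tb
print(f"Uptime pre-transfer covers {coverage:.0%}, "
      f"leaving {remaining_tb:.1f} TB for downtime")
```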

The specs for our next conversion (System 2) were as below.

System 2 – Conversion Timelines

Data Transfer: 12 hrs 10 min

Post Data Transfer: 4 hrs 34 min
Structure Change: 24 min 14 sec
Main Import: 25 min 57 sec
Data Conversion: 1 hr 30 min
MM Migration (Inventory Management migration): 5 min 14 sec
SD Migration (Sales & Distribution shadow field migration): 3 min 27 sec
Other Phases: 2 hrs 14 min


The SUM technical downtime, including the FI conversion, now comes in at roughly 19 hrs. What an achievement!
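Putting the two runs side by side (a small sketch using the figures reported above) makes the improvement concrete:

```python
# Compare the technical downtime of the two runs (figures from above).
before_h = 67 + 12 / 60   # System 1: 67 hrs 12 min
after_h = 19.0            # System 2: roughly 19 hrs

reduction = 1 - after_h / before_h
print(f"Technical downtime cut by {reduction:.0%}")  # ~72%
```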

For the next runs I further intend to shave off a couple of hours by ensuring data archiving is executed on the source system, thereby reducing the data transfer and migration times, and also by increasing the number of tables for uptime transfer from 20 to 30.

Point to note: the more data you transfer in uptime, the longer your SUM uptime will be. We are expecting an uptime of roughly 3 days. Always add some contingency to cover unforeseen circumstances.

Feel free to comment or reach out to me in case you need any guidance on this topic. This option is available for partners to use, and there is no license required for using this option of SUM DMO.