How to effectively use the durations file (MIGRATE_DT_DUR.XML) to reduce downtime.
An out-of-the-box solution to fast-forward the downtime stage.
SUM parallel mode execution
Let's start with how the MIGRATE_DT_DUR.XML file can contribute.
When you run one complete DMO cycle, SUM creates the MIGRATE_DT_DUR.XML file under the /usr/sap/<SID>/SUM/abap/analysis directory, and it remains available until you clean up SUM. The best way to use the durations file is to take the latest one from every successful run. For example, if you migrate the DEV system first in your landscape, the durations file generated from that run might not be very helpful for the QAS system; but if QAS was refreshed from production, then the durations file generated from the QAS migration can be used for your next system (the production copy).
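Since the durations file disappears once SUM is cleaned up, it is worth copying it aside right after a successful run. A minimal sketch of that step is below; the function name and directory arguments are my own illustration, not part of SUM, and you should substitute the paths for your SID and landscape.

```shell
#!/bin/sh
# Sketch: preserve MIGRATE_DT_DUR.XML from the last DMO run before SUM
# cleanup removes it. Paths are passed in as arguments (assumptions --
# adjust to your own SID and SUM download directory).
save_durations_file() {
  analysis_dir="$1"   # e.g. /usr/sap/<SID>/SUM/abap/analysis
  download_dir="$2"   # the download directory you hand to the next SUM run
  if [ -f "${analysis_dir}/MIGRATE_DT_DUR.XML" ]; then
    cp "${analysis_dir}/MIGRATE_DT_DUR.XML" "${download_dir}/"
    echo "durations file saved to ${download_dir}"
  else
    echo "no durations file found in ${analysis_dir}" >&2
    return 1
  fi
}
```

Placing the saved file in the download directory of the next run is enough for SUM to pick it up.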
A durations file from your full export/import benchmarking runs can also be used for your next SUM run. When SUM runs on a system for the first time, it estimates the export/import runtime and derives its table-split algorithm from table type, size, number of R3load processes, and many other factors; if this data is already provided, SUM optimizes the export and import process and runs much faster.
We were lucky that we could run many MOCKs, but if you cannot, two MOCK runs for DMO are enough, as described below.
Data cleansing and reorg, which were discussed in PART1, can also be done in this environment for business-critical tables, giving you some room to plan these activities for production.
For the second MOCK run, make sure that the SAP housekeeping activities for the identified tables and the database reorg are complete, DB statistics are up to date, and MIGRATE_DT_DUR.XML has been added to the SUM download directory. Now run several benchmarks and measure the downtime; you should already notice an improvement over MOCK1 because of the changes that were made.
Every time you run a benchmark, use the latest MIGRATE_DT_DUR.XML, because the table durations, splits, and sorting for your next run are based on the previous one. When you run the last benchmark, keep that durations file for MOCK 2. You'll see the following when the durations file is used by DMO.
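The "always feed the latest durations file forward" loop above can be sketched as a small pair of helpers. This is only an illustration under my own naming and directory conventions, not part of SUM: one function archives the durations file after each benchmark with a timestamp, the other picks the newest copy for the next run.

```shell
#!/bin/sh
# Sketch: keep a timestamped archive of the durations file after each
# benchmark run, so the newest one can feed the next SUM run.
# Directory arguments are assumptions -- adjust to your landscape.
archive_durations() {
  analysis_dir="$1"   # where SUM wrote MIGRATE_DT_DUR.XML
  archive_dir="$2"    # your own archive location
  stamp=$(date +%Y%m%d_%H%M%S)
  mkdir -p "${archive_dir}"
  cp "${analysis_dir}/MIGRATE_DT_DUR.XML" \
     "${archive_dir}/MIGRATE_DT_DUR_${stamp}.XML"
}

# Return the most recently modified archived copy.
latest_durations() {
  ls -1t "$1"/MIGRATE_DT_DUR_*.XML | head -n 1
}
```

Before the next benchmark, copy the file returned by `latest_durations` back into the SUM download directory as MIGRATE_DT_DUR.XML.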
We saw a 50% improvement in export duration and downtime by providing the durations file. The picture below speaks for itself.
If this MOCK run fulfills your downtime requirement, then you are all set.
We, however, were not happy with this downtime and wanted further improvement, so we looked for other options. I must admit this was one of the most stressful times, and we looked everywhere for a solution.
2. The magic solution
Most of the time when a hardware migration (system move) is involved, the source system is an older-generation server with low performance compared to the high-end, HANA-compatible target hardware. In many cases, source hardware performance is one of the reasons for long export durations during downtime. To overcome this, we must think of ways to either upgrade the hardware or increase the resources if possible. We had no room for either, and here's what we resorted to, which I call "MAGIC".
Here we leveraged the flexibility that SAP offers for heterogeneous installations. The source OS was HP-UX and the target was Linux. We used one of the target AAS hosts on Linux to connect to the source DB and CI and started SUM on this AAS. Our aim was to leverage the better-performing hardware of this AAS for the export.
The only prerequisites this arrangement requires are the ASCS split on the source and a matching kernel version. Mounting /trans and /sapmnt is not mandatory, since mounting a file system from a different OS is a complex process and we do not want any executable synchronization either.
An important point: create a batch server group pointing to this AAS and make sure that SUM runs all of its batch jobs on this server; otherwise you'll run into unnecessary batch-job-related errors.
Here's how this scenario would look:
Now that SUM runs on an AAS with upgraded hardware, you'll be able to increase the R3load, SQL, and background process parameters in SUM. The export duration, which was around 12 hours, dropped to 1 hour and 13 minutes. Isn't that unbelievable? The results are shown below; doesn't this look like a perfect R3load graph without any tail?
This was a major breakthrough, and we could finally meet the downtime requirement. A comparison between the final migration and MOCK1 is shown below.
Using the above solution not only reduced the export duration but also helped with PARALLEL execution and the file transfer to the target SUM. In a heterogeneous migration where the source and target OS differ, the rsync setup must be done manually, and it sometimes leads to performance issues if you are not an rsync expert. But once we could use the SAP-delivered dmosystemmove.sh (which is delivered for Linux-only environments), it worked like a charm.
The best part of SoH or S/4HANA migrations is the assurance that your target database is nothing less than a rocket. All you have to do is set the right parameters, test them in MOCK migrations, and analyze the impact. Please do go through the recommendations provided in SAP Note https://launchpad.support.sap.com/#/notes/2600030