The nZDM (near-Zero Downtime Maintenance) capability of SUM (Software Update Manager) uses the shadow instance for the main import.

Facts about nZDM for SUM:

- Introduces the “Record & Replay” technique for business transactions, based on database trigger technology
- Minimizes manual effort: all steps run automatically in the background
- Minimal additional hardware requirements thanks to the shadow technique; only additional database space is needed (80–350 GB)
- The additional database space is needed for the tables transferred to the shadow instance (30–150 GB) and for the logging tables (50–200 GB)
- If you want to use nZDM in combination with the import of customer transport requests, you also have to consider Z-tables for table conversions
- Available for all ABAP-based Business Suite products

Record & Replay means that database triggers record the changes that business transactions make during the uptime of the maintenance procedure. Recordings are only needed for tables that the update/upgrade uses in the shadow instance. With the help of these recordings, the shadow tables are brought up to date after the upgrade/update phases, still during uptime. The majority of the recorded changes are replayed during uptime; only the delta (roughly the last 10%) has to be applied just before the switch, during downtime.
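To make the principle more tangible, here is a small, purely illustrative Python sketch (not SAP code; the table, the trigger stand-in, and the 90/10 split are simplified assumptions of mine): changes are recorded during uptime, most of them are replayed onto the shadow copy while the system is still up, and only the remaining delta is applied during downtime.

```python
# Conceptual sketch of Record & Replay; not SAP code, only the principle.
uptime_log = []    # what the database triggers would record during uptime
shadow_table = {}  # simplified stand-in for one shadow table

def trigger_on_change(key, value):
    """Stands in for a database trigger: record the change for later replay."""
    uptime_log.append((key, value))

def replay(changes, target):
    """Apply recorded changes to the shadow copy."""
    for key, value in changes:
        target[key] = value

# Business transactions keep changing data during uptime ...
trigger_on_change("order_1", "created")
trigger_on_change("order_1", "shipped")
trigger_on_change("order_2", "created")

# ... the bulk of the recordings (roughly 90%) is replayed during uptime ...
split = int(len(uptime_log) * 0.9)
replay(uptime_log[:split], shadow_table)

# ... and only the remaining delta is applied just before the switch (downtime).
replay(uptime_log[split:], shadow_table)
print(shadow_table)  # {'order_1': 'shipped', 'order_2': 'created'}
```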


What are the consequences for CPU and memory consumption?


You maintain the number of processes used by the Record & Replay technique (CRR). The default value is three, which is the recommended value based on the experience gathered with customers. I am not aware of any impact on performance for the end user.

Let's discuss, process type by process type, the reasoning I used:

1. ABAP Processes:


Configure this according to the background (BGD) work processes available in the main system. For downtime, you can use the maximum available. As per the SUM Guide, the returns stagnate above a value of 8. Below is what I used for a system with 10 BGD processes available:

UPTIME : 6

DOWNTIME : 10
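Purely as an illustration of the reasoning (the helper below and the reserve of 4 BGD processes for ongoing business are my own assumptions, not SUM parameters), the two values can be derived like this:

```python
# Illustrative sizing sketch; the reserve of 4 BGD processes kept free for the
# productive system during uptime is an assumption for this example.
def abap_uptime_processes(bgd_total: int, reserve: int = 4) -> int:
    """Leave some background work processes free for normal operation."""
    return max(1, bgd_total - reserve)

def abap_downtime_processes(bgd_total: int) -> int:
    """During downtime nothing competes with SUM, so use all of them
    (keeping in mind that returns stagnate above roughly 8)."""
    return bgd_total

print(abap_uptime_processes(10))    # 6
print(abap_downtime_processes(10))  # 10
```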

2. SQL Processes:


Different databases may differ slightly in how they handle parallel SQL execution, but the core concept remains the same: more CPUs help. Once you have a number, like the 8 cores in my example, you next need to finalize the degree of parallelism (DOP, an Oracle term): the number of parallel threads each CPU will be executing. For example, if 16 SQL processes had been used in my case, 2 threads would have been executing per CPU; a choice I did not take, because I wanted minimal impact on the productive operation of the system during the uptime phases.

The recommended DOP is 1 to 2 times the number of online CPUs.

UPTIME : 8 (DOP = 1)

DOWNTIME : 12 (DOP = 1.5); I will make this 16 in the next system.
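The arithmetic behind these choices is simple; the sketch below (my own illustration, assuming the 8 online CPUs from the example) only makes the DOP relationship explicit:

```python
# DOP = SQL processes per online CPU; the guideline is to stay between 1 and 2.
def dop(sql_processes: int, online_cpus: int) -> float:
    return sql_processes / online_cpus

cpus = 8
print(dop(8, cpus))   # 1.0 -> uptime: minimal impact on productive operation
print(dop(12, cpus))  # 1.5 -> downtime value used here
print(dop(16, cpus))  # 2.0 -> upper end of the 1-2 times guideline
```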

3. R3trans Processes:


There is a parameter “Mainimp_Proc” that is used in the backend to control the number of packages imported in parallel; the KBAs below explain exactly that, the entire concept:

1616401 – Understanding parallelism during the Upgrades, EhPs and Support Packages implementations

1945399 – performance analysis for SHADOW_IMPORT_INC and TABIM_UPG phase

As per the SUM Guide, a value larger than 8 does not usually decrease the runtime further.

You also have to keep the memory in mind: 512 MB of RAM per R3trans process seems a good guideline. The end result for me was the same process count as for the SQL processes:

UPTIME : 8

DOWNTIME : 12
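As a quick sanity check on the memory side (using the 512 MB per process guideline from above; the helper itself is only an illustration):

```python
# Rough memory estimate for R3trans processes, based on ~512 MB per process.
def r3trans_memory_gb(processes: int, mb_per_process: int = 512) -> float:
    return processes * mb_per_process / 1024

print(r3trans_memory_gb(8))   # 4.0 GB for the uptime setting
print(r3trans_memory_gb(12))  # 6.0 GB for the downtime setting
```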

4. R3load Processes:

“There is no direct way to determine the optimal number of processes. A rule of thumb, though, is to use 3 times the number of available CPUs.” For the count I used: watch the CPU utilisation and increase the count accordingly, from 3 up to 5 times the number of CPUs.

UPTIME : 12

DOWNTIME : 24
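Expressed as a quick calculation (again only a sketch of the rule of thumb, with the 8 CPUs from my example as an assumed input):

```python
# Rule of thumb from the quote above: start around 3x the number of CPUs and,
# if CPU utilisation allows, scale up towards 5x.
def r3load_range(cpus):
    return 3 * cpus, 5 * cpus

low, high = r3load_range(8)
print(low, high)  # 24 40 -> the downtime value of 24 sits at the lower end
```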


Summary:

The blog above explained how to optimize downtime when using the nZDM option, as well as how to calculate and manually change the process counts during uptime and downtime.