
MDG Sizing issue regarding to sizing guide

former_member849042
Discoverer
0 Likes
1,292

We are implementing an MDG project and have now run into a performance issue.

The support team gave us a sizing guide, but we still have some questions that need clarification.
Sizing guide for SAP MDG 9.0: https://help.sap.com/doc/6d95f74117f746bdae96763d1c3347fe/SIZING/en-US/Sizing_Guide_MDG_90_v1_3_new....

From the sizing guide, we know there is a T-shirt sizing model. On page 10, chapter 3.1 (Assumptions), it states that the figures in chapter 3.2 refer to a single process per hour, with "A change request always contained only one material, supplier or account."

In chapter 3.2, Table 2 shows that the Large category, at 1,500 processes per hour for Material, requires 16,000 SAPS.

Then on page 11 there is an example: "You plan to use Multi-Records Processing for master data governance. Your estimated number of processes shall be 20 per hour and the number of objects 100 per process. Adjust your CPU consumption per object by 0.5 SAPS. 20 MDG-M processes * 100 objects * 0.5 SAPS = 1,000 SAPS. Add the additional required number of SAPS to your already estimated CPU consumption in order to meet the sizing requirements."
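Our reading of the example, assuming (per its last sentence) that the multi-record SAPS are added on top of the T-shirt baseline, would be a calculation like this sketch (the function name is ours; the figures are the guide's example values):

```python
# Sketch of the page-11 sizing example, assuming the multi-record SAPS
# are ADDITIVE to the T-shirt baseline (the guide's last sentence).
def multi_record_saps(processes_per_hour, objects_per_process, saps_per_object=0.5):
    """Additional SAPS for Multi-Records Processing, per the guide's formula."""
    return processes_per_hour * objects_per_process * saps_per_object

baseline_saps = 16000                 # "Large" T-shirt size for Material (Table 2)
extra = multi_record_saps(20, 100)    # guide's example: 20 processes, 100 objects
total = baseline_saps + extra
print(extra, total)                   # 1000.0 17000.0
```

Whether that addition is the intended interpretation is exactly what we want the guide's authors to confirm.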

We have doubts about the example:
1. Is the multi-record SAPS calculation an additional consumption on top of the T-shirt guideline, i.e. should it be added to the guideline SAPS, as the last sentence of the example says?
2. If the answer to question 1 is no, meaning the example works independently of the T-shirt table, can we skip the table and just follow the example formula to get a full sizing result?
At the same time, we have more than 460 end users who will process materials in MDG, and at month-end/year-end there will be parallel mass change/creation of materials.
So we need more information to check whether we need a new sizing.

I am not sure who can help clarify the sizing guide. Many thanks!

Accepted Solutions (0)

Answers (3)


studencp
Active Participant
0 Likes

Hi Yuan Li,

you can order a service from SAP to review your performance problems (but I guess it is not free). On my three projects we had performance issues and SAP was called in to investigate - in all three cases the improvement after their recommendations was at most 20-30% (but if users have to wait 1.5 min or 1.2 min at a screen, they don't even notice the improvement 😉).

I assume you are writing about multi-record processing in MDG-M Central Governance. In that case you can forget about CRs with 1,000 materials (the default limit in view V_USMD_MC_LIMIT is 100 records, and from my experience this is a reasonable maximum for small materials). For multiple records you should think about Mass Processing in MDG Consolidation (the Fiori applications and MDCIMG configuration) - however, it is "more technical": the workflow is just a simple sequence and UI adjustment is limited.

0 Likes
Hi, I am using SAP MDG for Customer. Can I know what value I should maintain in the view table V_USMD_MC_LIMIT? ...Thanks.
former_member849042
Discoverer
0 Likes

Dear Przemyslaw Studencki,

Many thanks for your kind clarification.

Yes, we created an incident and have been communicating with the product team for a long time; most of the performance notes have been tried.
We have also helped the customer understand that SAP MDG focuses on governance, not mass processing.

But the trouble is that mass processing is a normal operation for this customer. They use the SAP PS module, and all of their business is project-based.
For a new project or a new plant rollout, they may need to create or extend thousands of materials in a short time.
They also frequently make mass material changes, such as changing the purchase price, material group, safety stock, planning data, etc., as well as monthly and yearly finance data changes such as the planned price.

They have 500 data-maintenance end users located in 10 plants, with 10 points where the creation process runs.
So they expect a good user experience for mass creation of 1,000 materials and mass change of 1,000-2,000 materials.

During stress testing, we found that each user session consumed about 8% CPU, which means that about 12 users doing mass processing in parallel will exhaust the CPU, and all other users will hang waiting, with much longer operation times.
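The saturation point from our stress test can be sketched as follows (the 8% per session is our measured figure; the function name and the 100% CPU budget are assumptions for illustration):

```python
import math

# Rough saturation estimate from our stress test: each mass-processing
# session consumes ~8% CPU, so the box saturates well before our 460+ users.
def max_parallel_sessions(cpu_pct_per_session, cpu_budget_pct=100.0):
    """How many parallel sessions fit before the CPU budget is exhausted."""
    return math.floor(cpu_budget_pct / cpu_pct_per_session)

print(max_parallel_sessions(8.0))  # 12 sessions saturate the CPU
```

This is why we think the sizing needs to account for parallel mass processing, not only for processes per hour.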

So we raised an incident to seek the product team's support again, and they provided the MDG sizing guideline.

Following the sizing guideline may be a good way to determine how much CPU resource is needed.
But when we tried to follow the guideline, we found another issue:

The guideline contains a T-shirt sizing model. On page 10, chapter 3.1 (Assumptions), it states that the figures in chapter 3.2 refer to a single process per hour, with "A change request always contained only one material, supplier or account."

This customer may not have that many processes per hour, but each process covers many objects, for example 1,000 materials per process.

We also found an example in the guide explaining how to calculate the CPU requirement for mass processing.


Now the question is: does the total sizing result consist of the T-shirt base requirement plus the additional requirement according to the example on page 11?

The example reads: "You plan to use Multi-Records Processing for master data governance. Your estimated number of processes shall be 20 per hour and the number of objects 100 per process. Adjust your CPU consumption per object by 0.5 SAPS. 20 MDG-M processes * 100 objects * 0.5 SAPS = 1,000 SAPS. Add the additional required number of SAPS to your already estimated CPU consumption in order to meet the sizing requirements."
studencp
Active Participant
0 Likes

Hi Yuan Li,

SAP MDG Central Governance is slow because of its architecture, especially for materials. The speed mostly depends on the size of the processed materials - if you have tens of records in MARC and MVKE for a single material, you can be sure there will be lags. There are many notes "improving the performance" or instructing how to tune it (use a hierarchical layout instead of a flat one, don't implement "heavy code" in derivations/validations/enhancements, reduce the governance scope, etc.), but ultimately, if you have "big materials", there is a speed limit you can't cross (something like the speed of light - but muuuch slower ;-)).

If the response time does not differ much between one user working in the system and 100 users working simultaneously, then the problem is not the hardware but the software (MDG). This can't be improved by adding more processors/memory (even HANA DB doesn't help much), because the processing of a single CR can't be parallelized much.

I mean there is some (not very high) level of hardware resources above which you will not notice any improvement.

I might be wrong, but it was performance problems that drove SAP to implement MDG Consolidation and Mass Maintenance (with a totally different architecture).