2012 Dec 07 1:25 PM
Hi all,
We are performing migration tests under production-like conditions.
PARTNER object
50 parallel jobs
20.000 records/job
Commit interval: 70
We have achieved a throughput of 13.000 rec/h per job,
so in about 2 hours 1.000.000 partners will be migrated.
I know that this looks really good, but the number of partners for the go-live is 10.000.000, so it will take approximately 15 hours.
We need to achieve a better result.
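Just as a quick sanity check of the arithmetic, the figures above can be put into a small throwaway Python sketch (the job count, per-job throughput and go-live volume are the ones from this post; everything else is purely illustrative):

    # Rough check of the go-live runtime based on the measured throughput above.
    # Assumes perfectly linear scaling across jobs, which real runs won't show.
    jobs = 50
    rec_per_hour_per_job = 13_000          # measured per-job throughput
    go_live_partners = 10_000_000          # go-live volume

    total_rate = jobs * rec_per_hour_per_job        # 650.000 rec/h in total
    print(go_live_partners / total_rate)            # ~15.4 hours, matching the estimate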
So, after a Runtime Analysis evaluation (SE30):
ABAP is 81,1%
Database is 18,9%
and "Call Func. BUS_DI_DATA_COMBINE" is responsible for that.
Do you have anything to suggest?
Regards
Roula
2012 Dec 17 3:58 AM
Hi Roula
BP load performance
I assume you followed the Migration Cookbook, checked for standard database performance problems via execution plans, and optimised DB access as required (see Ricardo's reply above).
I do not think that function BUS_DI_DATA_COMBINE itself ran "slowly"; rather, the sheer number of executions added up to a high runtime. So there's not much you can do unless you reduce the amount of data/structures per customer.
Unfortunately, the Business Partner has gained more and more additional functionality over time, which you might not require but which is still executed. Check with the functional team that all unnecessary functionality is deactivated, for example in area menu BUPT (item applications) and transaction BF11 (Business Events). Maybe you don't need the SD customer, which might get created in parallel, either.
One thing you should certainly check is the decrease in throughput between the start and the end of a file (or the increase in time for creating a BP between the first and the last one within a file). We recently discovered a problem that reduced performance significantly over time within a single file. Such problems occur when internal tables are not properly cleared within the SAP code; in our case it was internal table GT_GLOBAL_LOCKS in function group BUPA_BUTX_DIALOG.
The problem can be reduced by putting fewer objects into a file (if your file number goes over 999, check the migration company set-up to allow 4-digit split file numbers) or by clearing the table within EMIGALL.
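To make the mechanism concrete, here is a small illustrative sketch (plain Python, not SAP code) of why a global table that is never cleared makes every record in a file more expensive than the one before it:

    # Illustration only: a "global" table that grows across records and is never cleared.
    # Record n has to scan the n-1 accumulated entries, so a file of n records costs O(n^2).
    global_locks = []

    def create_partner(partner_id):
        # Lookup against everything accumulated so far - this is what slows down over time.
        already_locked = any(lock == partner_id for lock in global_locks)
        if not already_locked:
            global_locks.append(partner_id)

    def load_file(partner_ids):
        global_locks.clear()    # the fix sketched above: clear per file (or keep files small)
        for pid in partner_ids:
            create_partner(pid)

Keeping files small keeps that accumulated table small, which is why splitting the load helps even without a correction on the SAP side.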
General note
I'm not sure which objects you have to migrate at go-live, but if you have 10 million customers and have to migrate all the required technical & business master data, meter reading history, financial history, device installations etc., then (although I don't know your cut-over window) it almost certainly doesn't fit within a (long) weekend, unless your server landscape can handle a lot more than 50 parallel streams.
As a comparison, the fastest I've ever done was around 3 million customers with all related migration objects/history in 16 to 17 hours... This included significant customer modifications to standard SAP - reducing the number of SQL statements by around 60% and enqueues by over 90% - and one of the fastest servers worldwide (at the time), allowing us to run nearly 250 parallel streams.
I suggest getting in direct contact with SAP and/or splitting the total volume into smaller chunks in case it doesn't fit into your cut-over window.
Yep
Jürgen
2012 Dec 13 11:42 PM
Hi Roula,
Have you been through the Migration Cookbook document from SAP? The one I used was version 1.6, three years ago. SAP may have a newer one.
Something I used to do during the IS-U migration via EMIGALL was to run statistics for tables like BUT000 and BUT020 using tcode DB20. Talk to the basis team and arrange to have the statistics recreated every 1 million records.
Check Note 168960 - Deactivate statistics update for migration
Specific for PARTNER:
Note 713101 - Performance problem during check of form of address
Note 720223 - Change documents are written when migrating from BP
Note 735229 - Inperformant accesses within ISU_Partner_Memory_Get
Note 752926 - Unnecessary accesses on table ECUS
Note 760709 - Change Document for Tax Number not suppressed
Don't worry about the "age" of the notes; they helped in 2003 and again in 2008, so they may help you as well.
For all objects:
Note 713659 - IS migration - Performance increase with access to KSM
Note 752943 - Performance - Update migration statistics only with commit
Note 759426 - Commit buffering does not perform with migration object
Try to balance the job distribution, the number of records per file, the files per job, and the processors. Working with the basis team is a must for IS-U.
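As a hedged example of that balancing arithmetic (the record, file and job counts below are just placeholders taken from this thread, not recommendations):

    # Hypothetical split of the total volume across files and parallel jobs.
    import math

    total_records = 10_000_000
    records_per_file = 20_000                  # as in the test runs above
    parallel_jobs = 50

    files_needed = math.ceil(total_records / records_per_file)    # 500 files
    files_per_job = math.ceil(files_needed / parallel_jobs)       # 10 files per job
    print(files_needed, files_per_job)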
Good luck.
RM.
2013 Jan 09 3:38 PM
Thank you, Ricardo, for your reply.
We've checked the Migration Performance Cookbook 1.6 and we have implemented all the relevant notes.
The notes that you've mentioned:
Already in the system: Note 168960 - Deactivate statistics update for migration
Already in the system: Note 713101 - Performance problem during check of form of address
Already in the system: Note 720223 - Change documents are written when migrating from BP
There is no correction for release 630: Note 735229 - Inperformant accesses within ISU_Partner_Memory_Get
There is no correction for release 630: Note 752926 - Unnecessary accesses on table ECUS
Already in the system: Note 760709 - Change Document for Tax Number not suppressed
Already in the system: Note 713659 - IS migration - Performance increase with access to KSM
Already in the system: Note 752943 - Performance - Update migration statistics only with commit
Already in the system: Note 759426 - Commit buffering does not perform with migration object
We will try to balance the job distribution, number of records per file, number of parallel jobs and commit buffering.
2013 Jan 09 3:48 PM
Hi Jürgen,
Thank you for the very detailed answer.
We will increase the number of parallel jobs and try to create smaller files.
Moreover, I have to check whether we can create files with as few structures as possible, and we have to see whether we really need the creation of the SD customer.
I hope these changes will help.
There is no way for us to make modifications to standard SAP, so it's better to contact SAP directly.
Regards
Roula
2012 Dec 17 1:39 PM
Hi Roula,
Also consider deactivating address validation if you have activated any. Additionally, please check whether business partner replication is also deactivated.
Regards,
Avinash