on 2007 Mar 07 1:24 PM
Hi All,
We were having some problems with the delta queue of 0CRM_SALES_CONTR_I; it was stuck in SYSFAIL for around 7 days. We managed to correct the queue and then restarted the delta upload with the option "PSA and subsequently into Data Targets". The push from CRM to BW completed in around 32 hrs with 3.12 lakh (~312,000) records (in 2,634 data pkts).
But the subsequent upload from PSA to the target has now been running for around 70 hrs and is about 80% complete.
The worrying part is that the job initially took less than a minute per data pkt, but it now takes more than 4 mins to update one.
Things we have already checked for:
1. All DB Statistics are current.
2. The PSA table partition for this request contains only around 4 lakh (~400,000) records. Indexes are current.
3. Around 40% of server memory is free. Moreover, no IO contention.
Can you please help me out with ideas to speed things up.
Regards,
Avijit.
We managed to complete the task in 5 days' time. SAP AG was also involved in the process. They suggested certain parameter changes, but those could not be made as they require a system restart.
Thanks all for all the ideas.
Couple of thoughts -
While your indexes were current at the beginning of the load, the data volumes you have loaded are large enough that the execution plans originally chosen by the DB are no longer optimal. Some DBs allow you to refresh statistics while a load is running, but that varies from DB to DB.
Do any of your update rules read other tables to enhance the data? Perhaps the stats on those tables are not current.
Best bet is to review <b>Note 892513 - Consulting: Performance: Loading data, no of pkg, req size</b>, which recommends the following:
When you are loading data to BW, bear the following in mind:
1. Ensure that you do not create more than 1000 data packages for each request.
2. Ensure that you do not load more than 10 million records for each request.
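The two limits above can be turned into a quick back-of-envelope sizing check. The helper below is illustrative only (not SAP code); the 312,000-record figure is the delta volume from this thread:

```python
import math

# Hedged sketch: package sizing per the limits quoted from SAP Note 892513
# (<= 1000 data packages and <= 10 million records per request).
def plan_request(total_records, max_packages=1000, max_records=10_000_000):
    """Return (number_of_requests, records_per_package) respecting both limits."""
    n_requests = max(1, math.ceil(total_records / max_records))
    per_request = math.ceil(total_records / n_requests)
    pkg_size = math.ceil(per_request / max_packages)
    return n_requests, pkg_size

# The delta in this thread: ~312,000 records were split into 2,634 packets
# (~118 records each) -- far more packages than the note recommends.
n_req, pkg = plan_request(312_000)  # one request, ~312 records per package
```

With a larger package size, the same volume fits well under the 1000-package ceiling.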
Hi:
Did you try dropping the indexes before the load and then re-building them after the load?
Since your delta record count is significant, this is worth trying.
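As a generic illustration of that drop/load/rebuild pattern (this is plain SQLite via Python's DB-API, not SAP-delivered code; the table and index names are placeholders, and in BW this is normally driven by InfoPackage or process chain settings rather than hand-written SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact_sales (doc_id INTEGER, amount REAL)")
con.execute("CREATE INDEX idx_fact_doc ON fact_sales (doc_id)")

# 1. Drop the secondary index so the mass insert does not maintain it row by row.
con.execute("DROP INDEX idx_fact_doc")

# 2. Mass load.
con.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(i, i * 1.5) for i in range(1000)])

# 3. Rebuild the index once, after the load.
con.execute("CREATE INDEX idx_fact_doc ON fact_sales (doc_id)")

rows = con.execute("SELECT COUNT(*) FROM fact_sales").fetchone()[0]
```

The one index you cannot drop is the primary-key index, which matches what Avijit reports below.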
Chamarthy.
Hi Chamarthy,
Thanks for the idea.
Actually, we have already dropped all the indexes on the target except the mandatory primary-key index.
I admit the problem is due to the whopping number of records in the delta, something to do with memory and IO management. I have monitored the process, and even though there is no IO contention, acquiring exclusive locks on the RSMON* tables is taking time.
Any ideas on memory management for the process?
Cheers!
Avijit.
Hi:
<i>See, our upload from CRM to the BW PSA is already through. The current job is pushing the data from the PSA to the target, so data packet sizes will have no effect as of now. FYI, we already have a packet size of max 1000.</i>
Even between the PSA and the data target, data package size does matter, because BW processes each data package through the update rules one after the other.
As far as memory management is concerned, the only options you have as a BW developer are to play with the different settings (max size, IDoc frequency).
Talk to BASIS and ask them to watch the DB while this load is running to see if it is a memory problem. If it is, you should <b>decrease</b> the max size of the data package; if memory is not the constraint, you can instead try <b>increasing</b> the data package size.
I would go through this checklist from the next delta load onwards.
1) Make sure your S-System ---> BW data transfer parameters are all in sync.
2) Play with different size and IDoc frequency parameters to find the optimum speed in your system (more is not always faster, as your system may not have enough resources to process a larger data package in the update rules).
If I think of anything else, I will let you know.
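The "more is not always faster" point in 2) can be shown with a toy model: each package carries a fixed scheduling overhead, but once a package outgrows the work process's memory budget the per-record cost jumps. All constants below are made up for illustration; this is not a model of any real SAP system:

```python
import math

# Toy cost model (illustrative constants only): total load time is
# per-package overhead plus a per-record cost that rises with paging
# once the package exceeds the assumed memory budget.
def load_time(total_records, pkg_size,
              overhead_s=2.0, per_record_s=0.001,
              memory_budget=50_000, paging_penalty=5.0):
    n_pkgs = math.ceil(total_records / pkg_size)
    cost = per_record_s * (paging_penalty if pkg_size > memory_budget else 1.0)
    return n_pkgs * overhead_s + total_records * cost

# Sweep a few package sizes for this thread's ~312k-record delta:
timings = {size: load_time(312_000, size)
           for size in (1_000, 20_000, 50_000, 100_000)}
```

In this model the mid-sized packages win: tiny packages pay overhead 312 times over, while an oversized package pays the paging penalty on every record.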
Chamarthy
Hi,
Yes, that is quite possible if you are manually updating the data from the PSA. Normally the system processes data packets every few seconds initially, but with a huge number of packets the processing slows down. This must be what happened in your case.
You can increase the number of processes to speed up this load by setting the parameters in ROOSPRMS. There you need to provide the data source name, the max size and the frequency.
Assign points if it helps.
Regards
Srinivas
Hi Srinivas,
Thanks for the response.
See, our upload from CRM to the BW PSA is already through. The current job is pushing the data from the PSA to the target, so data packet sizes will have no effect as of now. FYI, we already have a packet size of max 1000.
Can you share some ideas on how to tweak memory management to make the job run faster?
Cheers!
Avijit.