‎2006 Oct 27 10:42 AM
Hi Friends,
I am getting a performance problem with the select statement below: it runs for a long time even in debug mode and times out, and in table CATSDB the fields raufpl and raplzl are not key elements. For this select statement I am taking the data from the AFVC table fields aufpl and aplzl.
IF NOT it_afvc[] IS INITIAL.
  SELECT counter workdate raufpl raplzl status catshours
    FROM catsdb
    INTO TABLE it_catsdb
    FOR ALL ENTRIES IN it_afvc
    WHERE workdate < sy-datum
      AND raufpl = it_afvc-aufpl
      AND raplzl = it_afvc-aplzl
      AND status EQ '30'.
Can anyone please suggest how to optimize this statement?
Points will be awarded and your answers are highly appreciated.
regards,
Govind
‎2006 Oct 27 10:57 AM
In your case it seems that the number of records it is trying to fetch is causing the issue...
<b>The solution for the same is to either select the data using package size OR use Cursors.</b>
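For reference, a SELECT with PACKAGE SIZE processes the result set in chunks inside a SELECT ... ENDSELECT loop. A rough sketch based on the original field list (the chunk size of 10,000 and the helper table it_package are assumptions, not from the original post):

```abap
" it_package is assumed to have the same line type as it_catsdb.
SELECT counter workdate raufpl raplzl status catshours
       FROM catsdb
       INTO TABLE it_package
       PACKAGE SIZE 10000
       WHERE workdate < sy-datum
         AND status = '30'.
  " Each pass of this loop sees at most 10,000 rows;
  " filter or aggregate here and keep only what is needed.
  APPEND LINES OF it_package TO it_catsdb.
ENDSELECT.
```

This caps the memory used per round trip; as Rob notes later in the thread, it mainly helps with memory consumption rather than raw runtime.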
‎2006 Oct 27 11:18 AM
Generally, you need to compromise between performance and the number of records: fetch records in a fixed time and your problem would be solved. Another small but advisable point: use operator symbols wherever possible; don't use EQ, use =.
regards,
santhosh
‎2006 Oct 27 11:43 AM
Hi Govind
I would say your problem is that you are not using keys or an index. The table already has 5 indices (on our system anyway), so adding another might affect the performance of any updates to this table. There are 2 existing indices which might be of use:
1. Uses Pernr, Status and Workdate
2. Uses Pernr and Workdate.
Obviously these are of no use if you haven't got the Pernr. Is there any way you can re-write your earlier code to get a Pernr value? You could then make use of either of these indices. Given that you are not testing for equality against workdate (i.e. workdate < sy-datum rather than workdate = a_date), the first one would be my preference.
Other than that you might have to use Packages or Cursors - it would be better to try and use an index though.
Hope this helps
Andrew
‎2006 Oct 27 12:57 PM
Hi ,
In my code PERNR doesn't come into it; actually I am calculating, project-wise, the total number of CATSHOURS.
Can you please suggest how I would use the package or cursor concept?
thanks & Regards,
govind
‎2006 Oct 27 1:01 PM
DATA: c1 TYPE cursor.

OPEN CURSOR c1 FOR SELECT * FROM dbtable WHERE ... .
DO.
  FETCH NEXT CURSOR c1 INTO wa.
  IF sy-subrc <> 0.
    CLOSE CURSOR c1.
    EXIT.
  ENDIF.
  " ==> process the data
ENDDO.
‎2006 Oct 27 3:59 PM
Hi Govind
I'm not sure about packages or cursors having never had to use them. However, I wonder if there would be any benefit in creating a select-option style table based on the counter field. You could then populate it with ranges of data, for example, 1 to 30000 (depending on the contents of the field). You could then put the main select within a loop that increments the range on each iteration i.e. 2nd iteration would be 30001 to 60000 until the select returns no records. After each select you would then append the lines to a main internal table that will eventually hold all the relevant records. It would be using the PK field but as it would be selecting all records I don't know if there would be a performance gain - perhaps other people would like to comment on the principle? The thought has only just occurred to me so I suspect it is fatally flawed somewhere but might be worth a shot!
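Sketching Andy's range idea in code (the chunk size of 30,000 and the range-table declarations are assumptions; note the sketch would stop early if there were a gap of 30,000+ unused COUNTER values, which is part of why the idea may be flawed):

```abap
DATA: r_counter TYPE RANGE OF catsdb-counter,
      wa_range  LIKE LINE OF r_counter,
      it_chunk  LIKE it_catsdb[].

wa_range-sign   = 'I'.
wa_range-option = 'BT'.
wa_range-low    = 1.
wa_range-high   = 30000.

DO.
  REFRESH r_counter.
  APPEND wa_range TO r_counter.
  SELECT counter workdate raufpl raplzl status catshours
         FROM catsdb
         INTO TABLE it_chunk
         WHERE counter IN r_counter
           AND workdate < sy-datum
           AND status = '30'.
  IF sy-subrc <> 0.
    EXIT.  " no rows in this COUNTER range - assume we are done
  ENDIF.
  APPEND LINES OF it_chunk TO it_catsdb.
  " Shift the window to the next 30,000 COUNTER values.
  wa_range-low  = wa_range-high + 1.
  wa_range-high = wa_range-high + 30000.
ENDDO.
```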
I still suspect the poor performance is down to not using PK or index fields effectively as much as the number of records.
Kind regards
Andy
‎2006 Oct 27 12:57 PM
govind
One thing you can try: just comment out the condition 'workdate < sy-datum'. Once you have selected all the entries into table it_catsdb, delete the records where workdate >= sy-datum.
2. Just check how much data is in the table for your select query. If the volume of records is large, then check the size of your internal table, i.e. it_catsdb. If paging happens frequently, that will also hamper performance.
shane
‎2006 Oct 27 1:05 PM
Hi Shane,
Okay, actually it_catsdb is getting around 90,000 records. At the time of debugging, the cursor does not move out of this select statement and it times out. Are there any other possible ways I can do this?
Regards,
Govind
‎2006 Oct 27 1:57 PM
Is it happening even after removing the condition "workdate < sy-datum" from the select query?
And secondly, what is the size of your internal table? I mean, you would have declared something like it_catsdb INITIAL SIZE 100 / it_catsdb OCCURS 100; whatever you have declared, what is that size?
‎2006 Oct 27 2:04 PM
Hi Shane,
Yes, right now I removed workdate and it is still timing out because of the some 90,000 records, and I declared the internal table it_catsdb with size 0, i.e. OCCURS 0.
Regards,
Govind
‎2006 Oct 27 2:25 PM
Yes, I can understand it is getting tough on you. But one thing: you are selecting FOR ALL ENTRIES IN it_afvc. So is there any means by which you can select the COUNTER field into it_afvc before selecting data from the db table CATSDB? Do you see what I mean? First you fill the internal table IT_AFVC; if you can fill this table with the COUNTER field before selecting from CATSDB, it may solve your problem, because COUNTER is the key field there. Your code would then look as follows:
SORT it_afvc BY counter.

IF NOT it_afvc[] IS INITIAL.
  SELECT counter workdate raufpl raplzl status catshours
    FROM catsdb
    INTO TABLE it_catsdb
    FOR ALL ENTRIES IN it_afvc
    WHERE counter = it_afvc-counter
      AND raufpl = it_afvc-aufpl
      AND raplzl = it_afvc-aplzl
      AND status EQ '30'.
If there is no provision for doing so, then you probably have to use the PACKAGE SIZE technique.
Hope it makes sense and works.
shane
‎2006 Oct 27 3:08 PM
Hi Shane,
I checked the AFVC table: it doesn't have the COUNTER field; it exists only in CATSDB. But you mentioned <b>WHERE counter = it_afvc-counter</b>??
And can you please tell me how I would use PACKAGE SIZE?
Your answers are highly appreciable
Thanks in advance,
Govind
‎2006 Oct 27 3:12 PM
Hi Rob,
In CATSDB there are around 6 lakh (600,000) records in total, but when matching the data from the AFVC table to CATSDB, i.e. afvc-aufpl = catsdb-raufpl and afvc-aplzl = catsdb-raplzl, I get around 90,000 records. For these records, in debug mode the cursor does not move out of this select statement and it times out.
Thanks & Regards
Govind
‎2006 Oct 27 3:25 PM
OK - does this SELECT run more quickly or more slowly than yours:
SELECT counter workdate raufpl raplzl status catshours
FROM catsdb
INTO TABLE it_catsdb
WHERE workdate < sy-datum
AND status EQ '30'.
Rob
‎2006 Nov 05 10:11 PM
OK so you have a problem.
Looks like your query is really for something like year to date, or beginning of time to date or something like it.
This is much better suited for BW. Don't do it in R3. You are going against a growing target and what solves this today may not work tomorrow...
Or else go against a relatively small and fixed period of time...
Enjoy
Message was edited by: F.J. Brandelik
Note that the status=30 might have the same effect (if it means open)... or not...(if it means processed)
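A minimal sketch of the "small and fixed period" variant of the original select (the 30-day window is purely an assumption; pick whatever period the business actually needs):

```abap
DATA: lv_from TYPE sy-datum.

lv_from = sy-datum - 30.  " hypothetical cut-off: last 30 days only

SELECT counter workdate raufpl raplzl status catshours
       FROM catsdb
       INTO TABLE it_catsdb
       FOR ALL ENTRIES IN it_afvc
       WHERE workdate BETWEEN lv_from AND sy-datum
         AND raufpl = it_afvc-aufpl
         AND raplzl = it_afvc-aplzl
         AND status = '30'.
```

This keeps the read volume roughly constant as the table grows, which is the point Brandelik is making.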
‎2006 Nov 06 5:37 AM
Hi Brandelik, Rob,
I am facing a similar problem in BW extraction of CATSDB data. I am concerned that when the data volume increases to, say, 2 or 3 years of data in CATSDB, my function module would time out. Kindly suggest best practices to keep the runtime for data extraction to a bare minimum. Should I use the standard function module CATS_BIW_GET_DATA along with a BADI? If one of you can provide me with links/best practices for CATSDB extraction to BW, that would be highly appreciated.
Thanks for your precious time :-)
AbyJacob, Bangalore
===================
‎2006 Oct 30 5:51 AM
govind
I can see this chapter is getting hard on you, but finally I have two proposals for you. Firstly, if your selection criteria do not fit any of the existing indexes, then you probably have to create one more index matching your selection criteria, which in your case means the db index should consider the following fields:
1. raufpl
2. raplzl
3. workdate
But mind one thing: you should decide the order of the index fields based on which fields, and in what order, you most frequently use to retrieve data from CATSDB. Also look at how many indexes already exist in the database for CATSDB. If there are already more than 5-6 indexes, it is quite risky to build one more before deleting one.
The second and final proposal from my side: if none of these techniques works in your case, get back to your Basis guys regarding the timeout issue; they can probably increase the timeout duration for the test environment.
hope it helps
shane
‎2006 Oct 30 2:12 PM
Shane,
In my CATSDB table 5 database indexes are already created; I think only 5 database indexes are possible for a database table, right? And my Basis person is also not ready to increase the timeout period from 5 minutes.
Actually, as of now I have tried the package size and cursor techniques as well, and they are still timing out. The existing index on pernr, status and workdate does exist, but it still takes time (and pernr doesn't come into my select query).
Regards,
Govind
‎2006 Oct 30 2:48 PM
It's possible to have more than five indexes on a database table, but it's better to find a programming solution because of the added overhead of an index.
Using packages will help if you have memory problems, but won't help with performance.
Did you try the select I suggested last week? If so, was it any faster?
Rob
‎2006 Oct 31 5:43 AM
Hi Rob,
As you suggested last week, the select statement below on its own takes very little time, so this statement alone is not the problem (CATSDB has 6 lakh records).
But my problem is that, with FOR ALL ENTRIES, the AFVC fields aufpl and aplzl are considered when retrieving the data from CATSDB, i.e. raufpl = it_afvc-aufpl and raplzl = it_afvc-aplzl, and in that case it times out. Can you please suggest anything for my requirement?
SELECT counter workdate raufpl raplzl status catshours
FROM catsdb
INTO TABLE it_catsdb
WHERE workdate < sy-datum
AND status EQ '30'.
Thanks & Regards,
Govind
‎2006 Oct 31 6:23 AM
govind
your problem got solved. the solution i am goin to give is just an addition to Rob. as you are saying the select proposed by Rob is working quickly. now your concern is to comapre values from "it_afvc-aufpl" and "it_afvc-aplzl". so just write the select query proposed by Rob and then do the following quick manipulation. you got the result.
after select query has returned data set. do the following steps. also define another internal table with identical structure as IT_CATSDB. LET IT BE IT_CATSDB1
SORT it_afvc BY aufpl aplzl.
SORT it_catsdb BY raufpl raplzl.
LOOP AT it_afvc.
  READ TABLE it_catsdb INTO it_catsdb1
       WITH KEY raufpl = it_afvc-aufpl
                raplzl = it_afvc-aplzl.
ENDLOOP.
cheers
shane
‎2006 Oct 31 10:00 AM
Shane,
Sorry, it is still timing out with this procedure also.
The reason is that in the CATSDB table, since raufpl and raplzl are not key fields, the same set of values for raufpl and raplzl repeats more than once; I noticed the same set of values up to around 10 times. So I cannot use a single READ statement inside the LOOP AT it_afvc.
So I used it like below, but it is still <b>timing out</b>.
LOOP AT it_afvc.
  LOOP AT it_catsdb WHERE raufpl = it_afvc-aufpl
                      AND raplzl = it_afvc-aplzl.
    MOVE it_catsdb-counter   TO it_catsdb1-counter.
    MOVE it_catsdb-workdate  TO it_catsdb1-workdate.
    MOVE it_catsdb-raufpl    TO it_catsdb1-raufpl.
    MOVE it_catsdb-raplzl    TO it_catsdb1-raplzl.
    MOVE it_catsdb-status    TO it_catsdb1-status.
    MOVE it_catsdb-catshours TO it_catsdb1-catshours.
    APPEND it_catsdb1.
  ENDLOOP.
ENDLOOP.
‎2006 Oct 31 12:25 PM
hmm!
So you are saying that, as raufpl and raplzl are not key fields, you are getting a number of identical records. Why would you take all of those into account? You can say DELETE ADJACENT DUPLICATES COMPARING fld1 fld2...
And coming to the code you have given here, the nested loop structure is very inefficient performance-wise. READ statements have a proven track record for performance, so I would suggest you adopt the READ technique instead of the nested loop. Also declare your internal table as a SORTED internal table.
I had suggested that trick based on your previous posting, where you said the piece of code given by Rob works quite efficiently. So I do not understand how these internal table operations are causing a timeout for you. Can you give me a quick reply with the code you have currently adopted?
shane
‎2006 Oct 31 7:24 PM
Well, I just wanted to see if the select I proposed was quicker. Since it was, you can now go on and remove the records from the internal table that would have been rejected by the other terms of the select. See if this works:
DATA: cat_index LIKE sy-tabix.

SELECT counter workdate raufpl raplzl status catshours
  FROM catsdb
  INTO TABLE it_catsdb
  WHERE workdate < sy-datum
    AND status EQ '30'.

SORT: it_afvc BY aufpl aplzl,
      it_catsdb BY raufpl raplzl.

LOOP AT it_catsdb.
  cat_index = sy-tabix.
  READ TABLE it_afvc WITH KEY
       aufpl = it_catsdb-raufpl
       aplzl = it_catsdb-raplzl
       BINARY SEARCH.
  IF sy-subrc <> 0.
    DELETE it_catsdb INDEX cat_index.
  ENDIF.
ENDLOOP.

Shane - the problem with your code was that it wasn't using a binary search in the read.
Rob
The order of the new loop was incorrect. I changed it.
Message was edited by: Rob Burbank
I also added an index to the delete.
Message was edited by: Rob Burbank
‎2006 Nov 01 11:35 PM
Here is a quick addition to Awesome Rob's code
(That's what we call Rob at our work place. I am surprised that Fantastic Rich did not take a stab at this).
TYPES:
BEGIN OF ty_afvc,
aufpl TYPE afvc-aufpl,
aplzl TYPE afvc-aplzl,
.... Add any other
.... fields that
.... you need from table AFVC
END OF ty_afvc.
TYPES:
BEGIN OF ty_catsdb,
counter TYPE catsdb-counter,
workdate TYPE catsdb-workdate,
raufpl TYPE catsdb-raufpl,
raplzl TYPE catsdb-raplzl,
status TYPE catsdb-status,
catshours TYPE catsdb-catshours,
del_flag(1) TYPE c, "Add as last field
END OF ty_catsdb.
DATA:
it_afvc TYPE SORTED TABLE OF ty_afvc
INITIAL SIZE 0
WITH HEADER LINE
WITH UNIQUE KEY aufpl aplzl.
DATA:
it_catsdb TYPE STANDARD TABLE OF ty_catsdb
INITIAL SIZE 0
WITH HEADER LINE.
FIELD-SYMBOLS:
<fs_catsdb> TYPE ty_catsdb.
**DATA: cat_index LIKE sy-tabix.
SELECT counter workdate raufpl raplzl status catshours
FROM catsdb
INTO TABLE it_catsdb
WHERE workdate < sy-datum
AND status EQ '30'.
**SORT: it_afvc BY aufpl aplzl,
** it_catsdb BY raufpl raplzl.
<i>*it_afvc is already sorted since it is defined as a sorted table.
*Personally, I would not do the below SORT also, since I don't gain anything from it.</i>
SORT it_catsdb BY raufpl raplzl.
LOOP AT it_catsdb ASSIGNING <fs_catsdb>.
** cat_index = sy-tabix.
** READ TABLE it_afvc WITH KEY
** aufpl = it_catsdb-raufpl
** aplzl = it_catsdb-raplzl
** BINARY SEARCH.
** IF sy-subrc <> 0.
** DELETE it_catsdb INDEX cat_index.
** ENDIF.
READ TABLE it_afvc TRANSPORTING NO FIELDS
WITH KEY aufpl = <fs_catsdb>-raufpl
aplzl = <fs_catsdb>-raplzl
BINARY SEARCH.
IF sy-subrc <> 0.
<fs_catsdb>-del_flag = 'X'.
<i>*Note that there is no MODIFY statement. Not needed.
*The field-symbol is already pointing to the actual table row.</i>
ENDIF.
ENDLOOP.
DELETE it_catsdb WHERE del_flag EQ 'X'.
The three things I did are:
1.)
LOOP AT <itab> <b>ASSIGNING <field-symbol></b>.
With this, there is no transfer of data from table row to table header. Makes it faster
2.)
READ TABLE <b>TRANSPORTING NO FIELDS</b>
We just need to check if a record exists. There is no need to really move data from table row to table header.
3.)
<b><fs_catsdb>-del_flag = 'X' and I did the delete outside the loop.</b>
If we delete a record from an internal table while looping over the same table, the ABAP processing time increases, since the table gets re-indexed every time a delete takes place.
Instead, it is better to set a flag and do the delete outside (Learnt it the hard way).
Hope this helps.
Cheers,
-Ramesh
‎2006 Nov 02 3:48 AM
Interesting points Ramesh. I have a blog in the works on some of the points you are making. I thought about doing the delete outside of the loop, but not really for performance reasons. I think it's generally a bad idea to insert into or delete from a table that you are looping through.
On the other hand, I was just trying to address the main performance problem.
(I wonder if it ever worked.)
Rob
‎2006 Nov 02 4:37 PM
Thanks Rob.
Don't forget to send me a link to the blog after you finish it.
I don't think Govinda is looking at this any more. He must have moved on to the next program.
Looked like he was more worried about going through it in debug than the actual program execution itself.
Hope he got that fixed.
Cheers,
-Ramesh
‎2006 Nov 03 10:05 AM
Hi Shane,
Sorry, I was on leave and tested only today.
It is working fine as per your suggestion and Rob's as well. I very much appreciate your valuable suggestions.
Thanks a lot; points awarded as well.
Regards,
Govind
‎2006 Nov 03 10:14 AM
Hi Ramesh,
Thanks for your suggestion; points awarded as well. As per Rob's and Shane's initial suggestions I solved the problem, so I didn't try your way again, but I appreciate it.
Regards
Govind
‎2006 Nov 03 10:16 AM
Hi Rob,
I was on leave; I came back today, tested it, and now it is working fine. I appreciate your suggestions; points awarded as well.
Thanks & Regards,
Govind
‎2006 Nov 20 3:45 PM
Here it is:
/people/rob.burbank/blog/2006/11/16/performance--what-will-kill-you-and-what-will-leave-you-with-only-a-flesh-wound
Rob
‎2006 Nov 21 6:15 AM
Dear Rob,
That weblog was a Big Help !!
/people/rob.burbank/blog/2006/11/16/performance--what-will-kill-you-and-what-will-leave-you-with-only-a-flesh-wound
Unfortunately I am not able to award points;
But accept my heartfelt appreciation.
Thanks for your precious time,:-)
AbyJacob, Bangalore
===================
‎2006 Oct 30 6:00 AM
oops!
I would like to make a slight change to my recent posting.
Before creating a new index, first give a trial to the existing index, if any, with the fields:
1. PERNR
2. STATUS
3. WORKDATE
In your case you are using a combination of the fields STATUS and WORKDATE, so just give this index a little trial.
shane