‎2010 Jan 07 12:39 PM
Hi all,
I searched this forum but couldn't find a solution to my problem.
Can anyone tell me how to make this code more efficient performance-wise?
I put the SELECT MAX inside the loop because
I want to select the maximum value for each given sales order, sales item and schedule line.
Please look at the code:
LOOP AT itab_newcon.
  SELECT MAX( /bic/zpviewdt )
    FROM /bic/azvfosp1w00
    INTO variable_ord
    WHERE doc_number = itab_newcon-doc_number
      AND s_ord_item = itab_newcon-s_ord_item
      AND sched_line = itab_newcon-sched_line.
  wa_pdate-doc_number = itab_newcon-doc_number.
  wa_pdate-s_ord_item = itab_newcon-s_ord_item.
  wa_pdate-sched_line = itab_newcon-sched_line.
  wa_pdate-pdate      = variable_ord.
  APPEND wa_pdate TO it_pdate.
  CLEAR variable_ord.
ENDLOOP.
Regards
Path
‎2010 Jan 07 1:07 PM
You can avoid the SELECT in the loop
by selecting the data from the table related to itab_newcon into an internal table using FOR ALL ENTRIES IN itab_newcon and then filtering out the max values,
by using a JOIN between /bic/azvfosp1w00 and the table related to itab_newcon together with the MAX function,
or by creating a view over /bic/azvfosp1w00 and the table related to itab_newcon, then selecting the data using the MAX function.
Then use a READ statement in the loop.
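A hedged sketch of the join variant: the table that itab_newcon was filled from is never shown in this thread, so zsource_tab below is a hypothetical placeholder for it, and the field list assumes it carries the same three key fields.

```abap
* Sketch only: zsource_tab stands in for the (unknown) source table of itab_newcon.
SELECT s~doc_number s~s_ord_item s~sched_line
       MAX( a~/bic/zpviewdt ) AS pdate
  FROM zsource_tab AS s
  INNER JOIN /bic/azvfosp1w00 AS a
    ON  a~doc_number = s~doc_number
    AND a~s_ord_item = s~s_ord_item
    AND a~sched_line = s~sched_line
  INTO CORRESPONDING FIELDS OF TABLE it_pdate
  GROUP BY s~doc_number s~s_ord_item s~sched_line.
```

This pushes the aggregation down to the database and returns one row per key, so no SELECT inside the loop is needed at all.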
‎2010 Jan 07 1:10 PM
Hi, it should be something like:
SELECT doc_number, s_ord_item, sched_line, max( /bic/zpviewdt )
FROM /bic/azvfosp1w00
INTO TABLE it_pdate
FOR ALL ENTRIES IN itab_newcon
WHERE doc_number = itab_newcon-doc_number
AND s_ord_item = itab_newcon-s_ord_item
AND sched_line = itab_newcon-sched_line
GROUP BY doc_number, s_ord_item, sched_line.
Depending on how itab_newcon was populated, you might be able to avoid FOR ALL ENTRIES, but you haven't shown us that part of the code.
‎2010 Jan 07 1:11 PM
Hi Rui Pedro Dantas,
FOR ALL ENTRIES with an aggregate function is not possible.
‎2010 Jan 07 2:13 PM
>
> hi Rui Pedro Dantas ,
>
> For all entries with aggregate function is not possible.
Oops, my mistake. I rarely use FAE, if that's an excuse.
I thought I remembered that it couldn't be used, but after skimming the F1 help I only saw restrictions about ORDER BY and HAVING. But yes, you are right: if I remember correctly, GROUP BY can be used but not aggregate functions.
‎2010 Jan 07 1:15 PM
> by selecting the data to a internal table using for all entries and then filtering the max values.
No, you can't combine a FAE with an aggregate function.
And it cannot generally be recommended to SELECT everything instead of using an aggregate function. The aggregate can dramatically reduce the number of transferred records.
However, if the internal table itab_newcon comes from a previous selection, then a join could be an option.
How many lines are in the internal table?
How many records, on average, are related to one maximum line?
Siegfried
‎2010 Jan 07 1:20 PM
Hi Siegfried Boes,
What I actually meant was: FOR ALL ENTRIES without the MAX function, and then filtering out the max values with additional logic.
‎2010 Jan 07 1:19 PM
Hello,
As you rightly suspected, performance suffers because of the SELECT query inside the loop. You may try something like this to avoid the SELECT inside the loop.
" Declare an internal table itab with the fields
" doc_number, s_ord_item, sched_line and /bic/zpviewdt.
SELECT doc_number s_ord_item sched_line /bic/zpviewdt
  FROM /bic/azvfosp1w00
  INTO CORRESPONDING FIELDS OF TABLE itab
  FOR ALL ENTRIES IN itab_newcon
  WHERE doc_number = itab_newcon-doc_number
    AND s_ord_item = itab_newcon-s_ord_item
    AND sched_line = itab_newcon-sched_line.

" Sort so that the highest /bic/zpviewdt comes first within each key,
" then keep only that first row per key.
SORT itab BY doc_number s_ord_item sched_line /bic/zpviewdt DESCENDING.
DELETE ADJACENT DUPLICATES FROM itab
  COMPARING doc_number s_ord_item sched_line.

LOOP AT itab_newcon.
  READ TABLE itab WITH KEY doc_number = itab_newcon-doc_number
                           s_ord_item = itab_newcon-s_ord_item
                           sched_line = itab_newcon-sched_line.
  IF sy-subrc = 0.
    MOVE-CORRESPONDING itab TO wa_pdate.
    wa_pdate-pdate = itab-/bic/zpviewdt.
    APPEND wa_pdate TO it_pdate.
  ENDIF.
ENDLOOP.
Vikranth
Edited by: Rob Burbank on Jan 7, 2010 9:41 AM
‎2010 Jan 07 1:21 PM
Hi Vikranth,
Nice one... this didn't come to my mind.
‎2010 Jan 07 3:04 PM
Hi Vikranth.Reddy
Your code is fine. Can you explain the READ statement to me?
Regards
path
‎2010 Jan 07 2:12 PM
> And I can not generally be recommended to SELECT everything instead of use an aggregate function. The aggregate can
> dramatically reduce the number of transferred records.
Please read carefully! The aggregate can avoid the transfer of thousands of lines, so it cannot be recommended to omit it.
It may even be that this run of the program aggregates only a few lines, while all other runs aggregate a lot.
So be careful with performance recommendations. Never recommend anything that can turn into an even worse solution.
Siegfried
‎2010 Jan 07 2:20 PM
Hi Siegfried Boes,
So is it better to use a join together with an aggregate instead of selecting everything and processing it with a descending sort?
I know it can be tested in ST05; if you don't mind, please share your experience :).
I still remember this suggestion.
One more addition:
How would a binary search with standard internal tables reduce performance... any particular reason?
Edited by: Keshav.T on Jan 7, 2010 7:56 PM
‎2010 Jan 07 2:14 PM
> read table itab
> with key doc_number = itab_newcon-doc_number
> S_ORD_ITEM = itab_newcon-S_ORD_ITEM
> sched_line = itab_newcon-sched_line .
even worse, NEVER use standard tables inside a loop with BINARY SEARCH
> sort itab by docno s_ord_item sched_line descending .
Descending SORT is only used for display reasons !!!
‎2010 Jan 07 2:28 PM
Hi Siegfried Boes
Can you let me know how to tune it as i am new to ABAP
i am basically a SAP BW consultant
regards
path
‎2010 Jan 07 3:04 PM
Correction:
NEVER use withOUT BINARY SEARCH, as it is written in the example above.
Inside a loop you must facilitate a fast READ:
+ standard table with BINARY SEARCH
+ sorted table with the table key, or a leading and selective part of the key
+ hashed table with the full table key
(index accesses are also fast, but the index is usually unknown)
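A hedged sketch of the three options, using the key fields from this thread; the line type ty_pdate and its field lengths are hypothetical declarations for illustration, not taken from the original posts.

```abap
* ty_pdate is a hypothetical line type for illustration.
TYPES: BEGIN OF ty_pdate,
         doc_number TYPE c LENGTH 10,
         s_ord_item TYPE n LENGTH 6,
         sched_line TYPE n LENGTH 4,
         pdate      TYPE d,
       END OF ty_pdate.

* Option 1: standard table, sorted once before the loop,
* then every READ uses BINARY SEARCH.
DATA itab_std TYPE STANDARD TABLE OF ty_pdate WITH HEADER LINE.
SORT itab_std BY doc_number s_ord_item sched_line.
READ TABLE itab_std WITH KEY doc_number = itab_newcon-doc_number
                             s_ord_item = itab_newcon-s_ord_item
                             sched_line = itab_newcon-sched_line
                    BINARY SEARCH.

* Option 2: sorted table - a READ with the (leading part of the)
* table key is automatically a binary search.
DATA itab_sorted TYPE SORTED TABLE OF ty_pdate
     WITH UNIQUE KEY doc_number s_ord_item sched_line.

* Option 3: hashed table - constant-time READ with the full table key.
DATA itab_hashed TYPE HASHED TABLE OF ty_pdate
     WITH UNIQUE KEY doc_number s_ord_item sched_line.
```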
‎2010 Jan 07 3:23 PM
Hi Siegfried Boes ,
I got information from Basis that there were many deadlocks due to my program.
Is it necessary to use a COMMIT statement there?
Please look at the code:
" Read all records in the active table of the ODS into itab_ord
SELECT * FROM /BIC/AZVFOSDP200
  APPENDING CORRESPONDING FIELDS OF TABLE itab_ord
  WHERE /bic/zpviewdt > '19900101'.
COMMIT WORK.

" Insert changes
LOOP AT itab_omod WHERE sid EQ ' '.
  itab_omod-sid       = zarequest.
  itab_omod-datapakid = zadatapakid.
  itab_omod-record    = zarecord.
  MODIFY itab_omod.
  zarecord = zarecord + 1.
  INSERT INTO /BIC/AZVFOSDP240 VALUES itab_omod.
  COMMIT WORK.
ENDLOOP.
ENDIF.

" Change the contents of the new table with the values
MODIFY /BIC/AZVFOSDP240 FROM TABLE itab_neword.
COMMIT WORK.
Regards
Path
‎2010 Jan 07 4:57 PM
LOL, rightly hit by Siegfried as expected. The missing docno in the sort statement was a typo, and thanks to Rob for correcting it :-). I think using a sorted internal table, or adding BINARY SEARCH to the READ statement, should make the code decent.
Vikranth
‎2010 Jan 07 3:15 PM
coming back to the original question:
+ How many records are in itab_newcon ?
+ You can test the usefulness of the aggregate by testing this one:
SELECT *
  FROM /bic/azvfosp1w00
  WHERE doc_number = itab_newcon-doc_number
    AND s_ord_item = itab_newcon-s_ord_item
    AND sched_line = itab_newcon-sched_line.
directly with SE16 (display of table contents); you must type in example values for the 3 fields.
How many records come back?
You can run an SQL trace in parallel just for this statement. How long does it take? Maybe your problem is
connected to the selection and not to the MAX function at all. Which index is used?
Using the SQL trace is simple in this case: just start it, execute the selection, stop it and display the result.
Times are in microseconds.
Siegfried
‎2010 Jan 08 8:31 AM
Funny, but everything off-topic ...
1. The reason for the problem is still not clear; it is not even clear whether there is a problem at all. Maybe there is simply
a lot to be done:
The MAX aggregate can give you the max of 10 or of 1,000 ... 10,000 lines.
The change of the SELECT in LOOP into a FAE can make sense if the LOOP is large.
Two parameters, and nobody knows how large they are. => Not clear which one is better.
2. A SELECTION on an unknown table: what is the index? Maybe this is the reason for the problem.
... so analyse first before you advise ...