2006 Jul 12 3:08 PM
I am running a job in PROD. It took 21 hours to process an input file of 300,000 records.
The processing is quite simple. The main logic is attached.
Any thoughts on why this would be running so slow?
I know that the BAPI does a significant amount of processing and database updates. It looks like this is where the bottleneck is.
Can any gains be made by using smaller input files? I am wondering whether there are memory management issues with a large file and all of the processing for each record.
Any thoughts...J.J
Here is the main logic:
DATA: l_equipment    LIKE bapi_itob_parms-equipment,
      l_data_install LIKE bapi_itob_eq_install_ext,
      l_bapiret2     LIKE bapiret2.

LOOP AT it_intab.

* Get Child Equipment Number
  SELECT SINGLE equnr INTO l_equipment
         FROM equi
         WHERE sernr = it_intab-child_sernr
           AND matnr = it_intab-child_matnr.
  IF sy-subrc <> 0.
    CLEAR it_errtab.
    MOVE-CORRESPONDING it_intab TO it_errtab.
    MOVE 'Error selecting child from EQUI'
      TO it_errtab-error.
    APPEND it_errtab.
    ADD 1 TO g_num_errors.
    CONTINUE.
  ENDIF.

* Get Parent Equipment Number
  SELECT SINGLE equnr INTO l_data_install-supequi
         FROM equi
         WHERE sernr = it_intab-parent_sernr
           AND matnr = it_intab-parent_matnr.
  IF sy-subrc <> 0.
    CLEAR it_errtab.
    MOVE-CORRESPONDING it_intab TO it_errtab.
    MOVE 'Error selecting parent from EQUI'
      TO it_errtab-error.
    APPEND it_errtab.
    ADD 1 TO g_num_errors.
    CONTINUE.
  ENDIF.

  CALL FUNCTION 'BAPI_EQUI_INSTALL'
    EXPORTING
      equipment    = l_equipment
      data_install = l_data_install
    IMPORTING
      return       = l_bapiret2.

* Check l_bapiret2 to see if everything worked OK
  IF l_bapiret2-type = 'S' OR l_bapiret2-type = ' '.
    ADD 1 TO g_num_recs_proc.
*   Commit Work
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.
  ELSE.
    CLEAR it_errtab.
    MOVE-CORRESPONDING it_intab TO it_errtab.
    MOVE l_bapiret2-message(80) TO it_errtab-error.
    APPEND it_errtab.
    ADD 1 TO g_num_errors.
  ENDIF.

ENDLOOP.
2006 Jul 12 3:15 PM
Remove your SELECTs inside the loop and use FOR ALL ENTRIES instead.
Use individual MOVE statements in place of MOVE-CORRESPONDING.
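(A minimal sketch of the second point; since the structure of it_errtab isn't shown in the thread, these field names are assumptions:)

* Hypothetical it_errtab field names - adjust to the real structure.
MOVE: it_intab-child_sernr  TO it_errtab-child_sernr,
      it_intab-child_matnr  TO it_errtab-child_matnr,
      it_intab-parent_sernr TO it_errtab-parent_sernr,
      it_intab-parent_matnr TO it_errtab-parent_matnr.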
Regards,
ravi
2006 Jul 12 3:27 PM
Hi,
try to avoid SELECTs inside the loop; use FOR ALL ENTRIES outside the loop and then a READ statement to get the data.
Regards
vijay
2006 Jul 12 3:24 PM
The main performance problem is the SELECT statements inside the loop, so as you increase the number of records in the input file, the processing time increases.
So write all SELECTs before the loop and use READ TABLE to read the particular contents from an internal table.
For that, define an internal table with the required fields.
For example:
data: begin of it_equi occurs 0,
        sernr type equi-sernr,
        matnr type equi-matnr,
        equnr type equi-equnr,
      end of it_equi.
Above, I defined the internal table IT_EQUI, assuming SERNR and MATNR are the key. If you have any other key fields, add them to the definition.
*--This SELECT below is for the child equipment number
if it_intab[] is not initial.
  SELECT sernr
         matnr
         equnr
         FROM equi
         INTO TABLE it_equi
         FOR ALL ENTRIES IN it_intab
         WHERE sernr = it_intab-child_sernr
           AND matnr = it_intab-child_matnr.
*--This SELECT below is for the parent equipment number
  SELECT sernr
         matnr
         equnr
         FROM equi
         APPENDING TABLE it_equi
         FOR ALL ENTRIES IN it_intab
         WHERE sernr = it_intab-parent_sernr
           AND matnr = it_intab-parent_matnr.
* Sort so that the BINARY SEARCH reads below work.
  sort it_equi by sernr matnr.
endif.
LOOP AT it_intab.
* Get Child Equipment Number
*SELECT SINGLE equnr INTO l_equipment
*FROM equi
*WHERE sernr = it_intab-child_sernr
*AND matnr = it_intab-child_matnr.
*--Instead of the above SELECT SINGLE, use READ TABLE IT_EQUI to get the
*--equipment number (or detect that the record is missing).
<b> READ TABLE IT_EQUI WITH KEY
sernr = it_intab-child_sernr
matnr = it_intab-child_matnr
BINARY SEARCH
TRANSPORTING NO FIELDS.</b>
IF sy-subrc <> 0.
CLEAR it_errtab.
MOVE-CORRESPONDING it_intab TO it_errtab.
MOVE 'Error selecting child from EQUI'
TO it_errtab-error.
APPEND it_errtab.
ADD 1 TO g_num_errors.
CONTINUE.
ENDIF.
*--The READ succeeded, so take the child equipment number from the header line.
  l_equipment = it_equi-equnr.
*--Replace the second SELECT in the loop the same way: use the same READ TABLE,
*--but with the parent serial number and material as the key.
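* A minimal sketch of that parent lookup (assumption: the error handling
* mirrors the child branch above).
  READ TABLE it_equi WITH KEY
       sernr = it_intab-parent_sernr
       matnr = it_intab-parent_matnr
       BINARY SEARCH.
  IF sy-subrc <> 0.
*   Append to it_errtab as in the child branch, then skip this record.
    CONTINUE.
  ENDIF.
  l_data_install-supequi = it_equi-equnr.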
ENDLOOP.
Now you will see the difference in execution time.
Regards,
Srikanth.
Added internal table definition & BINARY SEARCH in READ TABLE.
Message was edited by: Srikanth Kidambi
<b></b>
Message was edited by: Srikanth Kidambi
2006 Jul 12 3:27 PM
NEVER NEVER NEVER NEVER
code a SELECT statement inside a loop .....
2006 Jul 12 3:37 PM
Hey,
One more thing you could do is COMMIT after every 100 records instead of calling the COMMIT BAPI for each record in the internal table.
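(A minimal sketch of that change, assuming BAPI_EQUI_INSTALL tolerates batched commits - worth testing, since some BAPIs expect a commit per call; the counter name here is made up:)

DATA: l_uncommitted TYPE i.

LOOP AT it_intab.
* ... lookups and CALL FUNCTION 'BAPI_EQUI_INSTALL' as before ...
  IF l_bapiret2-type = 'S' OR l_bapiret2-type = ' '.
    ADD 1 TO g_num_recs_proc.
    ADD 1 TO l_uncommitted.
    IF l_uncommitted >= 100.
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
        EXPORTING
          wait = 'X'.
      CLEAR l_uncommitted.
    ENDIF.
  ENDIF.
ENDLOOP.

* Commit whatever is left over after the loop.
IF l_uncommitted > 0.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
ENDIF.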
-Kiran
2006 Jul 12 3:54 PM
Hi JJ,
you might avoid some SELECT SINGLEs, but I guess the main runtime is in the BAPI - which needs a COMMIT WORK every time.
But in general: 300,000 entries in 75,600 seconds (21 hours) is about 0.25 seconds per booking (if you don't have too many errors) - that's not so slow. Maybe you gain a factor of 2, but I wouldn't expect too much.
You can still use parallel execution - but that just spreads the same runtime over several processes, so that you don't have to wait as long.
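(A minimal sketch of parallel packets via asynchronous RFC; Z_INSTALL_PACKET is a hypothetical remote-enabled function module that would run the install loop for one packet, and the packet size of 5,000 is arbitrary:)

DATA: lt_packet     LIKE it_intab OCCURS 0 WITH HEADER LINE,
      l_lines       TYPE i,
      l_taskno      TYPE i,
      l_taskname(8) TYPE c.

LOOP AT it_intab.
  APPEND it_intab TO lt_packet.
  DESCRIBE TABLE lt_packet LINES l_lines.
  IF l_lines >= 5000.                    " arbitrary packet size
    ADD 1 TO l_taskno.
    l_taskname = l_taskno.
    CALL FUNCTION 'Z_INSTALL_PACKET'     " hypothetical RFC-enabled FM
      STARTING NEW TASK l_taskname
      DESTINATION IN GROUP DEFAULT
      TABLES
        it_packet = lt_packet.
    REFRESH lt_packet.
  ENDIF.
ENDLOOP.

* Dispatch the last partial packet.
DESCRIBE TABLE lt_packet LINES l_lines.
IF l_lines > 0.
  ADD 1 TO l_taskno.
  l_taskname = l_taskno.
  CALL FUNCTION 'Z_INSTALL_PACKET'
    STARTING NEW TASK l_taskname
    DESTINATION IN GROUP DEFAULT
    TABLES
      it_packet = lt_packet.
ENDIF.

* Real code would also handle the RESOURCE_FAILURE / SYSTEM_FAILURE
* exceptions and wait for all tasks to finish (PERFORMING ... ON END OF TASK)
* before reporting totals - and note the warning below about locking.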
Regards,
Christian
2006 Jul 12 5:37 PM
I agree with Christian. I don't think there's much you can do to improve the performance. I would add that if you do submit this multiple times in parallel, you will likely run into locking problems.
Rob
2006 Jul 12 7:01 PM
I am sure that all of the time is being consumed by the BAPI, and for a BAPI you must commit after every record.
I am moving the SELECT SINGLEs out of the loop as suggested, but I am not expecting great things.
thanks for all the input
J.J
2024 Mar 21 3:43 AM
Can anyone suggest whether we can overcome this scenario using OData, and how to achieve that?