2006 Apr 24 9:04 PM
Hi friends,
I'm going to read and process data in an interface coded in ABAP and Open SQL. To improve efficiency and reliability I'm processing the data in packets of a fixed number of rows: I read rows up to a predetermined count into an internal table, process them, write the result back to the database, and finish each packet with a COMMIT WORK. Then the process continues with the next fixed number of rows, and so on.
The general question is, which is the most efficient way to implement this scenario?
I think of two basic approaches:
1.1) Loop over the results of a cursor with OPEN CURSOR and FETCH NEXT CURSOR, appending the fetched lines to the internal table.
2.1) Execute SELECT ... INTO TABLE <itab> FROM <table> UP TO <data packet size> ROWS.
My assumption is that approach 2 would be the more efficient one; is that correct?
The processed data will be written back to the database in one single statement:
2.2) INSERT <table> FROM TABLE <itab>
Which I assume is more efficient than doing the same with multiple single-row inserts inside a loop?
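For illustration, the read/process/write cycle of variant 2.1/2.2 might be sketched like this (ztab, ztab_target, and the status field are placeholder names, not from the original question; note that on the next pass the WHERE clause would have to exclude the rows already processed, otherwise UP TO <n> ROWS returns the same packet again):

```abap
* One processing cycle: read a packet, process it, write it back
* in a single array operation, then commit.
DATA lt_data TYPE STANDARD TABLE OF ztab.

SELECT * FROM ztab
  INTO TABLE lt_data
  UP TO 10000 ROWS
  WHERE status = 'NEW'.          "placeholder condition

* ... process lt_data here ...

INSERT ztab_target FROM TABLE lt_data.   "one array insert
COMMIT WORK.
```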
Regards,
Christian
2006 Apr 24 9:13 PM
Hi Christian ,
The second one is more efficient.
Let's look at why:
with the second SELECT you make a single call to the database, which saves time and reduces the load on the database.
The same applies to the array INSERT.
You should always try to avoid unnecessary round trips to the database, as each one costs time and puts load on the database.
Regards,
Tanveer.
Please mark helpful answers.
2006 Apr 24 9:16 PM
In Open SQL you can also use the package option:
SELECT <field names>
       FROM <database table name>
       APPENDING CORRESPONDING FIELDS OF TABLE <internal table>
       PACKAGE SIZE 20000
       WHERE <condition>.
  ...
ENDSELECT.
With this, the system fetches the records from the database table in packages of 20,000 records, and the block between SELECT and ENDSELECT is executed once per package.
Regards
Aman
2006 Apr 24 9:22 PM
Inserting from an internal table is more efficient, but you have to make sure that the table is small enough so that the program doesn't exceed the rollback area(s).
Rob
2006 Apr 24 9:25 PM
Hi Aman,
I think SELECT ... ENDSELECT and especially APPENDING CORRESPONDING FIELDS carry a noticeable performance penalty.
I would go with Christian's second option.
Regards,
Tanveer.
2006 Apr 24 9:35 PM
Thanks!
So what would you finally recommend? To use "package size <n>" or "up to <n> rows" to restrict the number of read records in each processing step?
Regards,
/ C
2006 Apr 24 9:43 PM
If you use UP TO <n> ROWS, you will always get the same rows of data on each pass (unless your WHERE clause excludes the rows you have already processed). PACKAGE SIZE <n> will get you the next group of records each time.
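This package behaviour can be sketched with an explicit cursor (ztab and the WHERE condition are placeholders). WITH HOLD matters here, because an intermittent COMMIT WORK would otherwise close the open database cursor:

```abap
* Cursor-based packet read: each FETCH returns the next package.
DATA: lt_data   TYPE STANDARD TABLE OF ztab,
      lv_cursor TYPE cursor.

* WITH HOLD keeps the cursor open across COMMIT WORK.
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM ztab WHERE status = 'NEW'.   "placeholder condition

DO.
  FETCH NEXT CURSOR lv_cursor
        INTO TABLE lt_data
        PACKAGE SIZE 10000.
  IF sy-subrc <> 0.                          "no more packages
    EXIT.
  ENDIF.
* ... process and write back lt_data here ...
  COMMIT WORK.
ENDDO.

CLOSE CURSOR lv_cursor.
```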
Rob
2006 Apr 24 9:48 PM
But what I would do is select all of the entries into an internal table at one time, then move them in packets of, say, 5,000 records to a second table and do the insert from that second table.
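A rough sketch of that select-once, insert-in-slices idea (ztab, ztab_target, and the slicing helper variables are placeholders I've made up for illustration):

```abap
* Read everything once, then insert in slices of 5,000 rows,
* committing after each slice to keep the rollback area small.
DATA: lt_all    TYPE STANDARD TABLE OF ztab,
      lt_packet TYPE STANDARD TABLE OF ztab,
      lv_from   TYPE i VALUE 1,
      lv_to     TYPE i.

SELECT * FROM ztab INTO TABLE lt_all
  WHERE status = 'NEW'.                      "placeholder condition

DO.
  CLEAR lt_packet.
  lv_to = lv_from + 4999.
  APPEND LINES OF lt_all FROM lv_from TO lv_to TO lt_packet.
  IF lt_packet IS INITIAL.                   "no rows left
    EXIT.
  ENDIF.
  INSERT ztab_target FROM TABLE lt_packet.
  COMMIT WORK.
  lv_from = lv_from + 5000.
ENDDO.
```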
Rob