2010 Apr 28 12:30 PM
Hi Friends,
I'm using OPEN CURSOR for performance purposes, as I have to pull a large number of records. Inside the DO-ENDDO, I have to update a Z-table. So, after the INSERT statement I put COMMIT WORK, but it started giving a dump because it resets the cursor. To avoid that I used the function module DB_COMMIT, but it's not updating the records in the Z-table. The code is like this:
OPEN CURSOR WITH HOLD lv_bsak_dbcur1 FOR
SELECT bukrs augbl auggj kostl prctr gsber aufnr
lifnr umskz augdt gjahr belnr buzei budat bldat cpudt
waers bschl shkzg mwskz dmbtr wrbtr sgtxt saknr
zfbdt zterm zbd3t zlsch hbkid rebzg rebzj rebzz fistl
geber bstat FROM bsak
WHERE gjahr IN r_gjahr
ORDER BY bukrs augbl auggj kostl prctr gsber aufnr.
DO.
  REFRESH it_bsak.
  FETCH NEXT CURSOR lv_bsak_dbcur1
    APPENDING CORRESPONDING FIELDS OF TABLE it_bsak
    PACKAGE SIZE iv_fetch_batch_size.
  IF NOT sy-subrc IS INITIAL.
    CLOSE CURSOR lv_bsak_dbcur1.
    EXIT.
  ENDIF.
  IF NOT it_bsak IS INITIAL.
*   ... some processing to form the final table ...
    INSERT ztable FROM TABLE it_final.
    IF sy-subrc IS INITIAL.
      CALL FUNCTION 'DB_COMMIT'.
    ENDIF.
  ENDIF.
ENDDO.
If I use COMMIT WORK, it gives a dump. Please advise what to do so that the dump can be avoided and the records can be updated as well.
Thanks:
Gaurav
2010 Apr 28 12:54 PM
As you have realized, a COMMIT WORK before the CLOSE CURSOR closes the DB cursor, so you cannot use it in one shot.
Can you not commit the DB changes outside the DO ... FETCH ... ENDDO block?
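A minimal sketch of that suggestion, assuming the final rows fit in memory (the names it_all_final and build_final_table are illustrative, not from the original post): collect each package's results inside the loop and do the INSERT plus COMMIT WORK only after the cursor has been closed.

```abap
DATA: it_all_final TYPE STANDARD TABLE OF ztable.

OPEN CURSOR lv_bsak_dbcur1 FOR
  SELECT bukrs augbl auggj kostl prctr gsber aufnr
    FROM bsak
    WHERE gjahr IN r_gjahr.

DO.
  REFRESH it_bsak.
  FETCH NEXT CURSOR lv_bsak_dbcur1
    APPENDING CORRESPONDING FIELDS OF TABLE it_bsak
    PACKAGE SIZE iv_fetch_batch_size.
  IF sy-subrc <> 0.
    CLOSE CURSOR lv_bsak_dbcur1.
    EXIT.
  ENDIF.
* Build it_final from it_bsak (processing as in the original post).
  PERFORM build_final_table TABLES it_bsak it_final.
  APPEND LINES OF it_final TO it_all_final.
ENDDO.

* The cursor is closed now, so COMMIT WORK cannot invalidate it.
INSERT ztable FROM TABLE it_all_final.
IF sy-subrc = 0.
  COMMIT WORK.
ENDIF.
```

The trade-off is memory: all result packages are held in it_all_final until the end, which may not be acceptable for very large volumes.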
2010 Apr 28 2:27 PM
>
> I'm using open cursor for performance purpose as I'be to pull a large no. of records. Inside the do -enddo,
OPEN CURSOR will not help performance.
Rob
2010 Apr 28 2:32 PM
You are right about the DB commit and the COMMIT WORK. The DB commit can be used with OPEN CURSOR ... WITH HOLD.
> but it's not updating the records in Ztable.
How did you check that? If you checked in parallel to the running process, then you have to check how your database handles committed reads.
Add a counter for the packages for testing:
+ Test whether it works at all with COMMIT WORK after the first package
+ Change to 2 packages and DB_COMMIT, and check after it has finished whether the INSERTs are there.
+ Process all packages
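The test plan above can be sketched with a package counter (the counter and WRITE output are illustrative additions; note the WITH HOLD, which is what allows a database commit to run without closing the cursor):

```abap
DATA: lv_package TYPE i.

OPEN CURSOR WITH HOLD lv_bsak_dbcur1 FOR
  SELECT bukrs augbl auggj kostl prctr gsber aufnr
    FROM bsak
    WHERE gjahr IN r_gjahr.

DO.
  REFRESH it_bsak.
  FETCH NEXT CURSOR lv_bsak_dbcur1
    APPENDING CORRESPONDING FIELDS OF TABLE it_bsak
    PACKAGE SIZE iv_fetch_batch_size.
  IF sy-subrc <> 0.
    CLOSE CURSOR lv_bsak_dbcur1.
    EXIT.
  ENDIF.
  lv_package = lv_package + 1.
* Build it_final from it_bsak, then persist this package.
  INSERT ztable FROM TABLE it_final.
  IF sy-subrc = 0.
*   Database commit only; the WITH HOLD cursor survives it.
    CALL FUNCTION 'DB_COMMIT'.
    WRITE: / 'Committed package', lv_package.
  ENDIF.
ENDDO.
```

Start with iv_fetch_batch_size large enough that only one or two packages are produced, verify the rows in ztable, then run the full volume.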
Siegfried
2010 Apr 28 2:36 PM
@Rob
Sorry, he is absolutely right. Large amounts of data should be processed in packages (e.g. 50,000 records) and the resulting changes should be DB-committed to get rid of the redo-log load etc. on the database. This has positive performance impacts. It is also better if problems occur, which can require a repeat of the execution.
Siegfried
2016 Sep 09 12:33 PM
Hi,
* Commit on a secondary DB connection so the cursor on the default connection stays open.
DATA(lr_db_connection) = cl_sql_connection=>get_connection( cl_sadl_dbcon=>get_default_dbcon( ) ).

* <dyn_tab> must be a field symbol assigned to an internal table created
* dynamically for the structure of p_table (declaration not shown in the original post).
OPEN CURSOR lv_cursor FOR SELECT * FROM (p_table).
DO.
  FETCH NEXT CURSOR lv_cursor INTO TABLE <dyn_tab> PACKAGE SIZE p_commit.
  IF sy-subrc EQ 0.
    INSERT (p_table) FROM TABLE <dyn_tab>.
    lr_db_connection->commit( ).
  ELSE.
    CLOSE CURSOR lv_cursor.
    EXIT.
  ENDIF.
ENDDO.