2010 Apr 28 12:11 PM
Hi Gurus,
I have a strange problem. I am deleting records from a custom database table, based on the records present in a file. For performance, I issue a commit to the table for every 100 records.
Now my problem is that there are two files with different records, and I am deleting records from the database table simultaneously, i.e. in parallel. But in this case some of the records are not getting deleted.
I am using COMMIT WORK AND WAIT.
When I execute the two files separately, i.e. one after the other, all the records are deleted. But for performance reasons I want to delete records from the table in parallel, based on the different files with different records.
Kindly suggest the right solution.
Regards,
Nikesh Kumar
2010 Apr 28 12:43 PM
You would need to enqueue and dequeue the table, IMHO. Since you're doing 100 records at a time, in parallel, I would expect this to run longer than the typical array statement DELETE dbtab FROM TABLE itab. Executing as you are doing might create database deadlocks in some situations, so I'd use the table ENQUEUE/DEQUEUE for the table whose rows are being deleted.
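A minimal sketch of that approach, using the generic table-lock function modules ENQUEUE_E_TABLE / DEQUEUE_E_TABLE; the table name ZDEL_TAB and the key table lt_keys are placeholders for your own objects:

```abap
* Lock the whole table, delete in one array operation, commit,
* then release the lock. _wait = 'X' makes the second process
* wait for the lock instead of failing immediately.
CALL FUNCTION 'ENQUEUE_E_TABLE'
  EXPORTING
    mode_rstable   = 'E'
    tabname        = 'ZDEL_TAB'
    _wait          = 'X'
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.

IF sy-subrc = 0.
  DELETE zdel_tab FROM TABLE lt_keys.
  IF sy-subrc = 0.
    COMMIT WORK AND WAIT.
  ELSE.
    ROLLBACK WORK.
  ENDIF.

  CALL FUNCTION 'DEQUEUE_E_TABLE'
    EXPORTING
      mode_rstable = 'E'
      tabname      = 'ZDEL_TAB'.
ENDIF.
```

Note that this serializes the two jobs on the table, so much of the benefit of running them in parallel is lost.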
2010 Apr 28 12:51 PM
Do both the files share some common key fields? Maybe it's a locking issue. Could you share the table name and which particular field values you are trying to delete?
Otherwise you can collate the data from both files and do the database update in a single run. That will be more efficient than accessing the database simultaneously.
2010 Apr 28 1:05 PM
Hi,
Yes, the records have the same key fields in both files, and since this is related to archiving I cannot collate the two files. If the records were not the same, would this still create locking issues?
Can COMMIT WORK AND WAIT create the issue? Should I use enqueue and dequeue?
Regards,
Nikesh Kumar
2010 Apr 28 1:26 PM
You must be reading the files into internal tables. You can collate the contents of the internal tables from the two files into one, based on the key fields, in the ABAP program.
You can use enqueue/dequeue, but then one process must wait to update an entry if it is locked by the other. It is more efficient to access a single database entry once than to have two redundant records hit the same entry simultaneously.
So it will be better to run your files individually, as no locking issue will be encountered and no wait period is required to update all the entries. Otherwise, collate both sets of entries based on their key fields. If you are deleting whole rows from the database, only the key fields are needed: merge both internal tables and delete duplicates comparing the key fields.
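The merge-and-deduplicate step could look like this; zdel_tab, its key fields key1/key2, and the internal tables lt_file1/lt_file2 (already filled from the two files) are placeholder names:

```abap
* Combine the key tables read from both files, remove duplicate
* keys, then delete all rows in a single array operation.
DATA: lt_all TYPE STANDARD TABLE OF zdel_tab.

APPEND LINES OF lt_file1 TO lt_all.
APPEND LINES OF lt_file2 TO lt_all.

SORT lt_all BY key1 key2.
DELETE ADJACENT DUPLICATES FROM lt_all COMPARING key1 key2.

DELETE zdel_tab FROM TABLE lt_all.
IF sy-subrc = 0.
  COMMIT WORK AND WAIT.
ELSE.
  ROLLBACK WORK.
ENDIF.
```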
2010 Apr 28 3:55 PM
If there were a lock issue, you would get a dump or a caught exception. So first make sure that your update really raises an exception. If yes, then it may be a lock issue. If not, then it's not a lock issue and your algorithm is faulty.
2010 Apr 28 2:18 PM
Hi,
If you have two files and both have the same keys, then declare one internal table and append all the records to be deleted into it. Then lock the table with the function module ENQUEUE_E_TABLE and, after completion, unlock it with DEQUEUE_E_TABLE.
Regards,
Bharani
2010 Apr 28 4:21 PM
Hi Nikesh,
From the ABAP performance aspect, a database ARRAY operation is always more efficient than single-row updates.
After reading the whole discussion, I would suggest that you prepare an internal table of the same type as your database table and execute the database operation in a single shot. Based on the sy-subrc value you can either call a COMMIT WORK or, if something goes wrong, a ROLLBACK WORK. Try to keep most of your programming logic in the ABAP layer, and make use of performance-efficient constructs like FIELD-SYMBOLS and array operations to transfer the data as fast as possible. As you have the same key fields in both files, you can integrate them into a single table.
So don't forget the ground rules of performance: 1. keep the amount of data transferred between ABAP and the database small, and 2. keep the number of database accesses small.
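Put together, that advice might be sketched as follows; zdel_tab, lt_file and the record structure are placeholder names, and whether a ROLLBACK is right when some rows were not found depends on your archiving logic:

```abap
* Build the key table with a FIELD-SYMBOL loop, then one array
* DELETE followed by COMMIT or ROLLBACK depending on sy-subrc.
DATA: lt_keys TYPE STANDARD TABLE OF zdel_tab.
FIELD-SYMBOLS: <ls_rec> TYPE zdel_tab.

LOOP AT lt_file ASSIGNING <ls_rec>.
  APPEND <ls_rec> TO lt_keys.
ENDLOOP.

DELETE zdel_tab FROM TABLE lt_keys.
IF sy-subrc = 0.
  COMMIT WORK.
ELSE.
  " sy-subrc <> 0 means at least one row was not deleted
  ROLLBACK WORK.
ENDIF.
```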
Hope this helps.
Thanks,
Samantak.