2009 Jan 05 3:17 PM
Hi All,
I am working on a program that gets terminated every time it comes across this statement: "INSERT ztable_data FROM TABLE itab_data.". The reason is that itab_data contains a huge number of records, and when execution reaches this point the logs get saturated and the program is finally terminated.
Can you please tell me how I can take care of this?
Thanks,
Rajeev
2009 Jan 05 3:22 PM
You cannot insert into an internal table from another internal table. Loop over the internal table into a work area and insert the work area.
Press F1 on the INSERT statement and you will get help.
2009 Jan 05 3:19 PM
Hi Rajeev,
The INSERT statement always checks the key fields of the DB table.
If the record doesn't exist, it inserts the record.
If we try to insert an already existing record, the program dumps.
So for an already existing record, use the UPDATE statement.
Thanks.
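A minimal sketch of that insert-or-update pattern for a single record (ztable_data is from the thread; wa_data is an illustrative work area with the line type of ztable_data):

```abap
* Try to INSERT; a single-record INSERT does not dump on a duplicate
* key but sets sy-subrc = 4, so we can fall back to UPDATE.
INSERT ztable_data FROM wa_data.
IF sy-subrc <> 0.            "a row with the same key already exists
  UPDATE ztable_data FROM wa_data.
ENDIF.
```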
2009 Jan 05 3:23 PM
Thanks for your reply. In my case we first delete the whole ztable using a TRUNCATE statement (when I tried DELETE FROM, the program terminated and the logs were getting saturated, but using TRUNCATE solved that problem), and then, after some validations, we insert the records back into the table.
Thanks,
Rajeev
2009 Jan 05 3:21 PM
Make sure the internal table does not contain records with the same key.
Also, if there is huge data in the internal table, try executing in background,
as foreground execution is limited to a certain runtime, depending on the Basis configuration.
Thanks,
Adi.
2009 Jan 05 3:25 PM
Thanks for your reply, but we are not inserting the records from an internal table into another internal table; we are inserting from an internal table into a custom database table.
Thanks,
Rajeev
2009 Jan 05 3:35 PM
Sounds like too many DB changes without a COMMIT WORK in between. Try splitting the data into several blocks of e.g. 10,000 records and insert those with a COMMIT WORK statement after each insert.
Thomas
P.S. Is this the same problem?
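A rough sketch of that package-wise approach, assuming itab_data is a standard table without a header line (the lt_*/lv_* names are illustrative):

```abap
* Insert in packages of 10,000 records, with a COMMIT WORK after
* each package so the DB LUW ends and log space is released.
DATA: lt_package LIKE itab_data,
      lv_lines   TYPE i,
      lv_from    TYPE i VALUE 1,
      lv_to      TYPE i.

DESCRIBE TABLE itab_data LINES lv_lines.

WHILE lv_from <= lv_lines.
  lv_to = lv_from + 9999.
  REFRESH lt_package.
  APPEND LINES OF itab_data FROM lv_from TO lv_to TO lt_package.
  INSERT ztable_data FROM TABLE lt_package.
  COMMIT WORK.                       "end the DB LUW after each block
  lv_from = lv_from + 10000.
ENDWHILE.
```

Note that after a COMMIT WORK the already inserted packages stay in the database even if a later package fails, so any error handling has to take that into account.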
2009 Jan 05 3:31 PM
"INSERT ztable_data FROM TABLE itab_data."
Make sure the structure of itab_data and ztable_data is exactly the same; a MODIFY statement might also help you.
I did a similar thing to delete entries and re-upload them again. Press F1 on MODIFY for more info; it should solve your problem.
2009 Jan 05 6:55 PM
Rajeev,
Did you solve your problem?
I am in the same boat too.
Please let me know if you have a solution for it.
W.r.t. your other post:
Delete: the system keeps the resources/memory allocated during this statement.
Truncate: this does not, so it is good to use this native SQL statement.
I handled it the same way in my program, by commenting out the DELETE statement.
Now my program dumps at the INSERT statement when inserting 700K records.
Any clues?
Thanks
Kiran
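Since Open SQL has no TRUNCATE, it has to go through native SQL. A sketch, assuming the database table name matches the transparent table name (it may carry a schema prefix depending on the database):

```abap
* Native SQL: TRUNCATE bypasses the Open SQL layer, writes almost
* nothing to the log, and cannot be rolled back.
EXEC SQL.
  TRUNCATE TABLE ztable_data
ENDEXEC.
```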
2009 Jan 05 7:49 PM
Hi,
Try using "ACCEPTING DUPLICATE KEYS" with the INSERT statement.
If there are any records with the same key, they will not be overwritten; only the new records will be inserted, with no dumps.
But if you want to insert new records and also modify the existing records, use the MODIFY statement instead.
Hope this helps.
Regards,
Bharati
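A sketch of the two variants, using the table names from the thread:

```abap
* Variant 1: duplicates in itab_data are skipped instead of causing
* a dump; sy-subrc = 4 afterwards signals that rows were rejected.
INSERT ztable_data FROM TABLE itab_data ACCEPTING DUPLICATE KEYS.

* Variant 2: MODIFY inserts new rows and overwrites existing ones.
MODIFY ztable_data FROM TABLE itab_data.
```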