2013 Jul 23 8:26 AM
Hi, I have a table with more than 100,000 entries containing XML, and I want to reduce its size by re-serializing the XML content (using CALL TRANSFORMATION id with OPTIONS initial_components = 'suppress'). My code is below.
I am processing 50 entries at a time:
lv_start_idx = 1.
lv_end_idx   = lv_range.                          " 50 entries per packet

SELECT arnum FROM zarxml INTO TABLE lt_arnum.
SORT lt_arnum BY arnum.

* With two integer operands the division is rounded, not truncated, so use a
* floating-point intermediate (lv_lines TYPE f) to get a true ceiling
lv_lines      = lines( lt_arnum ).
lv_iterations = ceil( lv_lines / lv_range ).

DO lv_iterations TIMES.
  CLEAR: lt_process_ar,
         lt_aragv_del.

* Collect the keys of the current packet
  LOOP AT lt_arnum FROM lv_start_idx TO lv_end_idx INTO lv_arnum.
    APPEND lv_arnum TO lt_process_ar.
  ENDLOOP.
  IF lt_process_ar IS INITIAL.
    EXIT.
  ENDIF.

  SELECT * FROM zarxml INTO TABLE lt_aragv
    FOR ALL ENTRIES IN lt_process_ar
    WHERE arnum = lt_process_ar-arnum.

  LOOP AT lt_aragv ASSIGNING <ls_aragv>.
    CLEAR: ls_ipar_data,
           lt_gkonh,
           lt_gkonp,
           lt_gpraxfu,
           lt_scales,
           lt_texts.

*   Old string: deserialize the existing XML into ABAP data
    TRY.
        CALL TRANSFORMATION id
          SOURCE XML <ls_aragv>-agdat
          RESULT source_ar     = ls_ipar_data
                 source_konh   = lt_gkonh
                 source_konp   = lt_gkonp
                 source_praxfu = lt_gpraxfu
                 source_scales = lt_scales
                 source_texts  = lt_texts.
      CATCH cx_xslt_exception INTO lref_xslt_err.
    ENDTRY.

*   New string: serialize again, suppressing initial components to shrink the XML
    TRY.
        CALL TRANSFORMATION id
          OPTIONS initial_components = 'suppress'
          SOURCE source_ar     = ls_ipar_data
                 source_konh   = lt_gkonh
                 source_konp   = lt_gkonp
                 source_praxfu = lt_gpraxfu
                 source_scales = lt_scales
                 source_texts  = lt_texts
          RESULT XML ls_aragv_new-agdat.
      CATCH cx_xslt_exception INTO lref_xslt_err.
    ENDTRY.

    IF <ls_aragv>-agdat NE ls_aragv_new-agdat.
      APPEND <ls_aragv> TO lt_aragv_del.          " data changed, so delete the old record
      <ls_aragv>-agdat = ls_aragv_new-agdat.      " and insert the new one
    ELSE.
      DELETE lt_aragv.                            " unchanged, no need to re-insert
    ENDIF.
  ENDLOOP.

* Prepare the next packet
  lv_start_idx = lv_end_idx + 1.
  lv_end_idx   = lv_end_idx + lv_range.

* Delete the old entries
  IF lt_aragv_del IS NOT INITIAL.
    DELETE zarxml FROM TABLE lt_aragv_del.
  ENDIF.

* Insert the new entries
  IF lt_aragv IS NOT INITIAL.
    INSERT zarxml FROM TABLE lt_aragv.
  ENDIF.
ENDDO.
This report runs for about 40 minutes on 100,000 entries and then terminates with a TIME_OUT dump. Is there any way to avoid this dump?
2013 Jul 23 3:54 PM
Ask your Basis administrator to increase the time limit.
The maximum dialog runtime is controlled by a profile parameter, which you can check in transaction RZ11.
Regards,
Bastin.G
2013 Jul 23 4:09 PM
At which statement does the TIME_OUT occur?
At the DELETE, the INSERT, or somewhere else?
It looks like a one-time activity to me.
First, you can run the program in the background to get around the dialog time-out limit.
Second, you can divide the total volume into packets of 1,000 each (20 x 50), so that a single program execution processes only 1,000 entries.
Schedule a job at a regular interval that processes one packet at a time.
After 100 job runs (100 hours), the table size will be reduced; a rough sketch of such a run follows.
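For illustration, a minimal sketch of one such job run. It assumes a hypothetical single-row checkpoint table ZARXML_CKPT (field LAST_ARNUM) that remembers how far the previous run got, and a packet-size parameter P_PACKET; the middle part would be your existing logic.

* Sketch only: ZARXML_CKPT, LAST_ARNUM and P_PACKET are assumptions, not existing objects
PARAMETERS p_packet TYPE i DEFAULT 1000.

DATA: lv_last_arnum TYPE zarxml-arnum,
      lv_lines      TYPE i,
      lt_arnum      TYPE STANDARD TABLE OF zarxml-arnum.

* Where did the previous job run stop?
SELECT SINGLE last_arnum FROM zarxml_ckpt INTO lv_last_arnum.

* Fetch only the next packet of keys
SELECT arnum FROM zarxml
  INTO TABLE lt_arnum
  UP TO p_packet ROWS
  WHERE arnum > lv_last_arnum
  ORDER BY arnum.

IF lt_arnum IS INITIAL.
  RETURN.                                  " nothing left to process
ENDIF.

* ... run the existing transformation / delete / insert logic on lt_arnum ...

* Remember the progress for the next scheduled run
lv_lines = lines( lt_arnum ).
READ TABLE lt_arnum INDEX lv_lines INTO lv_last_arnum.
UPDATE zarxml_ckpt SET last_arnum = lv_last_arnum.
COMMIT WORK AND WAIT.

Each scheduled run then picks up where the last one stopped, so a failed run only costs you one packet.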
Third, the transformation part is largely independent of the DELETE and INSERT statements.
If the transformation is the slowest part, it can be handled with parallel processing, and the results collated into a final internal table that is later used for the DELETE and INSERT (see the sketch below).
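Purely as an illustration of that idea (not tested against your system): Z_ARXML_CONVERT_PACKET below is a hypothetical RFC-enabled function module that would run both CALL TRANSFORMATION steps for one packet of keys and return the changed rows via a (likewise hypothetical) table parameter ET_MODIFIED; GV_OPEN_TASKS and GT_MODIFIED are new global variables for the sketch.

* Sketch only: Z_ARXML_CONVERT_PACKET and its parameters are assumptions
DATA: gv_open_tasks TYPE i,
      gt_modified   TYPE STANDARD TABLE OF zarxml,
      lv_task       TYPE c LENGTH 32.

lv_task = sy-index.                        " packet number from the surrounding DO loop
CONDENSE lv_task.

CALL FUNCTION 'Z_ARXML_CONVERT_PACKET'
  STARTING NEW TASK lv_task
  DESTINATION IN GROUP DEFAULT
  PERFORMING collect_result ON END OF TASK
  TABLES
    it_keys               = lt_process_ar
  EXCEPTIONS
    communication_failure = 1
    system_failure        = 2
    resource_failure      = 3.

IF sy-subrc = 0.
  gv_open_tasks = gv_open_tasks + 1.
ELSE.
  WAIT UNTIL gv_open_tasks < 5 UP TO 60 SECONDS.   " no free work process, try again later
ENDIF.

* Callback: collate each packet's result for the final database update
FORM collect_result USING p_task.          " task name passed by the RFC framework
  DATA lt_packet TYPE STANDARD TABLE OF zarxml.

  RECEIVE RESULTS FROM FUNCTION 'Z_ARXML_CONVERT_PACKET'
    TABLES
      et_modified = lt_packet.

  APPEND LINES OF lt_packet TO gt_modified.
  gv_open_tasks = gv_open_tasks - 1.
ENDFORM.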
This assumes there is nothing wrong with the logic. For instance, you clear lt_aragv_del at the beginning of the DO loop, but not lt_aragv. Any specific reason?
It also looks like the table key does not change, only the XML content. So instead of DELETE and INSERT you could run a MODIFY on the list of changed records, and perhaps move that MODIFY out of the loop, roughly as sketched below.
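Rough sketch of that idea, reusing the names from the snippet above (lt_aragv_mod would be a new internal table with the same line type as lt_aragv):

* Inside the loop over lt_aragv: keep only rows whose XML actually changed
IF <ls_aragv>-agdat NE ls_aragv_new-agdat.
  <ls_aragv>-agdat = ls_aragv_new-agdat.
  APPEND <ls_aragv> TO lt_aragv_mod.
ENDIF.

* After the loop: one array operation per packet instead of DELETE + INSERT,
* possible because the key (ARNUM) stays the same and only AGDAT changes
IF lt_aragv_mod IS NOT INITIAL.
  MODIFY zarxml FROM TABLE lt_aragv_mod.
ENDIF.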
2013 Jul 24 2:18 PM
Hi Manish,
Thanks for your reply. In the DO loop the SELECT refreshes lt_aragv on every iteration, so I did not clear that table. Is MODIFY on the database table better than DELETE and INSERT in this case? And if I move the MODIFY out of the DO loop, I have to keep all modified records in one internal table; can an internal table handle such a huge amount of data?
2013 Jul 24 4:10 PM
I understand the logic behind not clearing lt_aragv.
MODIFY should be better than a combination of DELETE and INSERT.
You can run the program in the background, starting with a smaller subset and gradually increasing the data volume, and check whether the runtime grows linearly.
Since I can't run your scenario on my system, it is hard to give an exact answer.
2013 Jul 24 5:25 AM
Hi Praveen,
Go to transaction RZ11, enter the parameter name rdisp/max_wprun_time and choose Display. Check the Current value field; if it is 600, set the current value to 0 via the Change value button, save, and then run your program again.
Regards,
Chinni
2013 Sep 11 6:18 PM
Hi,
The table is around 250 GB with about 189,000 entries, and we are running this report in the background. It took nearly 7 hours to process 76,000 entries and then the background job failed. We have since changed the report to do a COMMIT WORK AND WAIT after processing each set (50 in the code above, 500 entries per set in the latest version; see the sketch after the trace). Here is the trace file:
| Call                      | Number |         Gross | = |           Net | Gross (%) | Net (%) | Type |
| Call Transformation       |  9,901 | 2,479,930,302 |   | 2,479,858,023 |      69.2 |    69.2 |      |
| Select /IRM/IPARAGV       |    189 |   649,914,783 | = |   649,914,783 |      18.1 |    18.1 | DB-> |
| Array Insert /IRM/IPARAGV |     20 |   385,840,293 | = |   385,840,293 |      10.8 |    10.8 | DB-> |
| Call Transformation       |  9,901 |    53,776,144 | = |    53,776,144 |       1.5 |     1.5 |      |
| Commit Work               |    188 |     7,651,109 | = |     7,651,109 |       0.2 |     0.2 | DB   |
| Array Delete /IRM/IPARAGV |     20 |     2,481,286 | = |     2,481,286 |       0.1 |     0.1 | DB-> |
| Runtime Analysis On       |      0 | 3,584,274,827 |   |             0 |       100 |       0 |      |
(Gross and Net times are in microseconds.)
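For reference, the per-set commit in the reworked version sits at the end of each DO iteration, roughly like this (sketch only, table and variable names as in the first snippet above):

* End of one packet inside the DO loop
  IF lt_aragv_del IS NOT INITIAL.
    DELETE zarxml FROM TABLE lt_aragv_del.
  ENDIF.
  IF lt_aragv IS NOT INITIAL.
    INSERT zarxml FROM TABLE lt_aragv.
  ENDIF.
  COMMIT WORK AND WAIT.                    " keep each database LUW short
ENDDO.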
It is not good to have a program that fails after processing 76,000 entries / running for 7 hours. Is there any way to avoid this background job failure?
Regards,
Praveen
2013 Sep 12 5:37 AM
Hi,
Please check whether the internal tables lt_process_ar and lt_aragv_del have header lines.
If they do, use REFRESH instead of CLEAR on those tables at the beginning of the DO loop.
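To illustrate the difference (only relevant for the obsolete header-line tables):

DATA lt_demo TYPE STANDARD TABLE OF i WITH HEADER LINE.

CLEAR   lt_demo.      " clears only the header line (work area); the table body keeps its rows
REFRESH lt_demo.      " deletes all rows of the table body
CLEAR   lt_demo[].    " same effect as REFRESH, addressing the body explicitly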
Regards,
Klaus
2013 Sep 12 12:58 PM
Hi Klaus,
I am not using internal tables with header lines, so the CLEAR statement clears the entire internal table.
Regards,
Praveen