Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

time out error

Former Member

Hi, I have a table with more than 100,000 entries containing XML. I want to reduce the table size by modifying the XML content (using CALL TRANSFORMATION id with OPTIONS initial_components = 'suppress'). My code is below.

I am processing 50 entries at a time:

lv_start_idx = 1.
lv_end_idx   = lv_range.  " 50

SELECT arnum FROM Zarxml INTO TABLE lt_arnum.

lv_iterations = ceil( lines( lt_arnum ) / lv_range ).

SORT lt_arnum BY arnum.

DO lv_iterations TIMES.

  CLEAR: lt_process_ar,
         lt_aragv_del.

  LOOP AT lt_arnum FROM lv_start_idx TO lv_end_idx INTO lv_arnum.
    APPEND lv_arnum TO lt_process_ar.
  ENDLOOP.

  IF lt_process_ar IS INITIAL.
    EXIT.
  ENDIF.

  SELECT * FROM Zarxml INTO TABLE lt_aragv
    FOR ALL ENTRIES IN lt_process_ar
    WHERE arnum = lt_process_ar-arnum.

  LOOP AT lt_aragv ASSIGNING <ls_aragv>.

    CLEAR: ls_ipar_data,
           lt_gkonh,
           lt_gkonp,
           lt_gpraxfu,
           lt_scales,
           lt_texts.

***** Old string: deserialize the stored XML
    TRY.
        CALL TRANSFORMATION id
          SOURCE XML <ls_aragv>-agdat
          RESULT source_ar     = ls_ipar_data
                 source_konh   = lt_gkonh
                 source_konp   = lt_gkonp
                 source_praxfu = lt_gpraxfu
                 source_scales = lt_scales
                 source_texts  = lt_texts.
      CATCH cx_xslt_exception INTO lref_xslt_err.
    ENDTRY.

**** New string: re-serialize with initial components suppressed
    TRY.
        CALL TRANSFORMATION id
          OPTIONS initial_components = 'suppress'
          SOURCE source_ar     = ls_ipar_data
                 source_konh   = lt_gkonh
                 source_konp   = lt_gkonp
                 source_praxfu = lt_gpraxfu
                 source_scales = lt_scales
                 source_texts  = lt_texts
          RESULT XML ls_aragv_new-agdat.
      CATCH cx_xslt_exception INTO lref_xslt_err.
    ENDTRY.

    IF <ls_aragv>-agdat NE ls_aragv_new-agdat.
      APPEND <ls_aragv> TO lt_aragv_del.     " Data changed, so delete the old row
**** Override
      <ls_aragv>-agdat = ls_aragv_new-agdat. " Insert the new one
    ELSE.
      DELETE lt_aragv.                       " No need to insert if data is the same
    ENDIF.

  ENDLOOP.

*** Process the next set as a batch
  lv_start_idx = lv_end_idx + 1.
  lv_end_idx   = lv_end_idx + lv_range.

*** Deleting old entries
  IF lt_aragv_del IS NOT INITIAL.
    DELETE Zarxml FROM TABLE lt_aragv_del.
  ENDIF.

*** Inserting new entries
  IF lt_aragv IS NOT INITIAL.
    INSERT Zarxml FROM TABLE lt_aragv.
  ENDIF.

ENDDO.

This report runs for 40 minutes on the 100,000 entries and I get a time-out dump. Is there any way to avoid this dump?

8 REPLIES

bastinvinoth
Contributor

Hi Praveen Kumar,

Ask your Basis administrator to increase the time limit.

The runtime limit is determined by a profile parameter, which can be checked in transaction RZ11.

Regards,

Bastin.G


Former Member

At which statement is your timeout occurring?

DELETE, INSERT, or something else?

It looks like a one-time activity to me.

First, you can run this program in the background to get around the dialog timeout limit.

Second, you can divide the total volume into packets of 1,000 each (20 × 50), so that a single program execution processes only 1,000 entries.

Schedule a job at a regular interval that processes one packet at a time.

After 100 job runs (100 hours), the table size will be reduced.
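As a rough sketch of the packet idea within a single run (variable and table names taken from the question; the packet size of 1,000 is illustrative), a database cursor with PACKAGE SIZE avoids the manual index bookkeeping, and WITH HOLD keeps the cursor valid across the commit after each packet:

```abap
DATA lv_cursor TYPE cursor.

" WITH HOLD keeps the database cursor open across COMMIT WORK
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM Zarxml.

DO.
  FETCH NEXT CURSOR lv_cursor
    INTO TABLE lt_aragv
    PACKAGE SIZE 1000.
  IF sy-subrc <> 0.
    EXIT.                " no more packets
  ENDIF.

  " ... run the transformation and update this packet here ...

  COMMIT WORK AND WAIT.  " release resources after each packet
ENDDO.

CLOSE CURSOR lv_cursor.
```

This way each iteration touches only one packet, and memory stays bounded regardless of the table's total size.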

Third, the transformation part is largely independent of the DELETE and INSERT statements.

The transformation can be handled with parallel processing (if it is the slowest part), and the results collated into a final internal table that is later used for the DELETE and INSERT.
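A rough sketch of that parallel-processing idea using asynchronous RFC, assuming a hypothetical RFC-enabled function module Z_TRANSFORM_PACKET and a callback form TRANSFORM_DONE that collects the results (both names are illustrative, not from this thread):

```abap
DATA: lv_task     TYPE char8,
      lv_snd_jobs TYPE i,
      lv_rcv_jobs TYPE i.

LOOP AT lt_packets INTO ls_packet.           " one packet of arnum values
  lv_task = sy-tabix.
  CALL FUNCTION 'Z_TRANSFORM_PACKET'         " hypothetical RFC-enabled FM
    STARTING NEW TASK lv_task
    PERFORMING transform_done ON END OF TASK
    EXPORTING
      it_arnum = ls_packet-arnum_tab.
  lv_snd_jobs = lv_snd_jobs + 1.
ENDLOOP.

" Wait until every task has returned; TRANSFORM_DONE receives the
" results and increments lv_rcv_jobs as each task finishes.
WAIT UNTIL lv_rcv_jobs >= lv_snd_jobs.
```

In practice you would also cap the number of simultaneous tasks against the free dialog work processes of the RFC server group.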

This is assuming there is nothing wrong with the logic. For instance, you are clearing lt_aragv_del at the beginning of the DO loop, but not lt_aragv. Any specific reason?

It also looks like the table key is not changing, only the XML content. So instead of DELETE and INSERT, you could do a MODIFY on the list of modified records, and perhaps move the MODIFY statement out of the loop.
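Since arnum stays the same and only agdat changes, the DELETE + INSERT pair can be collapsed into a single array MODIFY on the changed rows. A minimal sketch based on the variable names in the question (lt_aragv_mod is an assumed new table holding only the changed rows):

```abap
" Collect only the rows whose XML actually changed
LOOP AT lt_aragv ASSIGNING <ls_aragv>.
  " ... CALL TRANSFORMATION as before, producing ls_aragv_new-agdat ...
  IF <ls_aragv>-agdat NE ls_aragv_new-agdat.
    <ls_aragv>-agdat = ls_aragv_new-agdat.
    APPEND <ls_aragv> TO lt_aragv_mod.  " changed rows only
  ENDIF.
ENDLOOP.

" One array MODIFY per batch replaces the DELETE/INSERT pair;
" the primary key (arnum) is unchanged, so rows are updated in place.
IF lt_aragv_mod IS NOT INITIAL.
  MODIFY Zarxml FROM TABLE lt_aragv_mod.
  COMMIT WORK AND WAIT.                 " release locks after each batch
ENDIF.
```

This also halves the database round trips per batch, since unchanged rows are never touched.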


Hi Manish,

Thanks for your reply. In the DO loop, the SELECT query refreshes the lt_aragv table on every iteration, so I didn't clear it. Is a MODIFY on the DB table better than DELETE and INSERT in this case? And if I want to do the MODIFY outside the DO loop, I have to keep all modified records in an internal table; can an internal table handle such a huge amount of data?


I understand the logic behind not clearing lt_aragv.

MODIFY should be better than a combination of DELETE and INSERT.

You can run the program in the background, starting with a smaller subset, gradually increasing the data volume, and seeing whether the time taken increases linearly.

Since I can't run your scenario on my machine, it is hard to give an exact answer.

chinni_adapa
Participant

Hi Praveen,

Navigate to transaction RZ11, enter the parameter name 'rdisp/max_wprun_time', and click the Display button. Check the Current value field. If it is 600, set the current value to '0' using the 'Change value' button, save, and execute your program again.

Regards,

Chinni


Former Member

Hi,

This table is around 250 GB with about 189,000 entries. We are running this report in the background; it took nearly 7 hours to process 76,000 entries and then the background job failed. We improved the report to do a COMMIT WORK AND WAIT after processing each set (50 in the code above, but 500 entries per set in the latest version). Here is the trace file:

Call                       Number  Gross          Net            Gross (%)  Net (%)  Type
Call Transformation        9,901   2,479,930,302  2,479,858,023  69.2       69.2
Select /IRM/IPARAGV        189     649,914,783    649,914,783    18.1       18.1     DB
Array Insert /IRM/IPARAGV  20      385,840,293    385,840,293    10.8       10.8     DB
Call Transformation        9,901   53,776,144     53,776,144     1.5        1.5
Commit Work                188     7,651,109      7,651,109      0.2        0.2      DB
Array Delete /IRM/IPARAGV  20      2,481,286      2,481,286      0.1        0.1      DB
Runtime Analysis On        0       3,584,274,827  0              100        0

It is not good to have a program that fails after processing 76,000 entries / after running for 7 hours. Is there any way to avoid this background job failure?

Regards,

Praveen


Former Member

Hi,

please check that the internal tables lt_process_ar and lt_aragv_del have no header lines.

If they have header lines, please use REFRESH instead of CLEAR on those tables at the beginning of the DO loop.
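For illustration, the difference only matters for tables declared WITH HEADER LINE (an obsolete construct): there, CLEAR clears only the header work area, while REFRESH empties the table body. The names below are illustrative:

```abap
DATA lt_old TYPE TABLE OF i WITH HEADER LINE.  " obsolete style

APPEND 1 TO lt_old.
APPEND 2 TO lt_old.

CLEAR lt_old.      " clears only the header line; the body still has 2 rows
REFRESH lt_old.    " empties the table body itself

" For tables without header lines, CLEAR empties the body too:
DATA lt_new TYPE TABLE OF i.
APPEND 3 TO lt_new.
CLEAR lt_new.      " table body is now empty
```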

Regards,

Klaus


Hi Klaus,

I am not using internal tables with header lines, so the CLEAR statement clears the entire internal table.

Regards,

Praveen