Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

TIME_OUT


Hi Team

Can you please check the code below and suggest how to resolve the TIME_OUT error?

The code works like this: it loops 4 times.

On the 1st pass it takes it_bsis (500 records), cross-checks against table FAGLFLEXA, and appends to itab.

On the 2nd pass it takes it_bsas1 (3,500,000 records) and cross-checks against table FAGLFLEXA; here the TIME_OUT error fires.

Please find the code below for your reference:

DO c_4 TIMES.

  CLEAR gwa_whr_line.
  REFRESH git_whr.
  UNASSIGN <gfs_bsis_bsas>.

  CASE sy-index.
    WHEN c_1.
      CHECK NOT it_bsis IS INITIAL.
      ASSIGN it_bsis TO <gfs_bsis_bsas>.
      lv_flag = c_x.
    WHEN c_2.
      CHECK NOT it_bsas IS INITIAL.
      ASSIGN it_bsas TO <gfs_bsis_bsas>.
      lv_flag = c_x.
    WHEN c_3.
      CHECK NOT it_bsak IS INITIAL.
      ASSIGN it_bsak TO <gfs_bsis_bsas>.
      CONCATENATE 'BUZEI =' '<GFS_BSIS_BSAS>-BUZEI' INTO gwa_whr_line SEPARATED BY space.
      lv_flag = c_x.
    WHEN c_4.
      CHECK NOT it_bsad IS INITIAL.
      ASSIGN it_bsad TO <gfs_bsis_bsas>.
      lv_flag = c_x.
  ENDCASE.

  CHECK lv_flag = c_x.

  IF NOT s_prctr[] IS INITIAL.
    SELECT ryear docnr rbukrs rtcur racct prctr wsl drcrk budat buzei
      FROM faglflexa
      APPENDING TABLE git_faglflexa
      FOR ALL ENTRIES IN <gfs_bsis_bsas>
      WHERE ryear  = <gfs_bsis_bsas>-gjahr
        AND rbukrs = <gfs_bsis_bsas>-bukrs
        AND docnr  = <gfs_bsis_bsas>-belnr
        AND (gwa_whr_line)
        AND prctr IN s_prctr.
  ELSE.
    SELECT ryear docnr rbukrs rtcur racct prctr wsl drcrk budat buzei
      FROM faglflexa
      APPENDING TABLE git_faglflexa
      FOR ALL ENTRIES IN <gfs_bsis_bsas>
      WHERE ryear  = <gfs_bsis_bsas>-gjahr
        AND docnr  = <gfs_bsis_bsas>-belnr
        AND rbukrs = <gfs_bsis_bsas>-bukrs
        AND (gwa_whr_line).
  ENDIF.

  CLEAR lv_flag.
ENDDO.

Thanks in Advance

Sekhar

15 Replies

If docnr = belnr in the table, then you should use belnr rather than docnr in the select. Failing that, add RLDNR to the SELECT if you can determine what value(s) to use.
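For illustration, adding the ledger might look like this (a sketch only; '0L' as the leading ledger is an assumption, as is restricting to a single ledger at all):

```abap
* Sketch: supplying RLDNR as well lets the database use the primary
* key of FAGLFLEXA (RCLNT, RLDNR, RBUKRS, RYEAR, DOCNR, DOCLN)
* without a gap. '0L' (leading ledger) is an assumption.
SELECT ryear docnr rbukrs rtcur racct prctr wsl drcrk budat buzei
  FROM faglflexa
  APPENDING TABLE git_faglflexa
  FOR ALL ENTRIES IN <gfs_bsis_bsas>
  WHERE rldnr  = '0L'
    AND ryear  = <gfs_bsis_bsas>-gjahr
    AND rbukrs = <gfs_bsis_bsas>-bukrs
    AND docnr  = <gfs_bsis_bsas>-belnr.
```

Check which ledgers are actually customized in your system before hard-coding a value.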

Rob


Hi Rob,

Thank you for your reply

Table FAGLFLEXA only has DOCNR (there is no BELNR), and all the key fields RYEAR, RBUKRS, and DOCNR are used in the WHERE condition.

Could you please suggest a solution for the TIME_OUT issue?

Thanks in Advance

Sekhar


Hi Sekhar,

I think you posted two threads on the same issue and one is locked. Can you check the second one?

Regards,

Madhu.


Hi Madhu,

I have declared types as per table faglflexa field sequence

Before the FOR ALL ENTRIES check against table FAGLFLEXA, I used DELETE ADJACENT DUPLICATES FROM it_bsas.

Even after deleting the duplicate records, the BSAS table still contains 3,500,000 records, and because of this it runs into the TIME_OUT error.

Can you please suggest the solution to resolve this TIME_OUT error

Thanks in Advance

Sekhar


Hi,

I do not know your table declarations. Declare hashed tables with key fields and see. You need to check all the settings to increase performance; similarly, check the selection criteria.
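As a sketch of what such a declaration could look like (the field list is taken from the SELECT in the original post; a sorted table with a non-unique key is shown instead of a hashed one, because APPENDING TABLE would dump on a duplicate key in a hashed table):

```abap
* Sketch: a keyed result table makes later lookups binary-search
* instead of linear scans. Line type derived from FAGLFLEXA fields.
TYPES: BEGIN OF ty_faglflexa,
         ryear  TYPE faglflexa-ryear,
         docnr  TYPE faglflexa-docnr,
         rbukrs TYPE faglflexa-rbukrs,
         rtcur  TYPE faglflexa-rtcur,
         racct  TYPE faglflexa-racct,
         prctr  TYPE faglflexa-prctr,
         wsl    TYPE faglflexa-wsl,
         drcrk  TYPE faglflexa-drcrk,
         budat  TYPE faglflexa-budat,
         buzei  TYPE faglflexa-buzei,
       END OF ty_faglflexa.

DATA git_faglflexa TYPE SORTED TABLE OF ty_faglflexa
     WITH NON-UNIQUE KEY ryear rbukrs docnr.
```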

Regards,

Madhu.


Hi Sekhar

I see that the first field of table FAGLFLEXA is not MANDT but RCLNT.

Have you tried this SELECT with the CLIENT SPECIFIED addition, giving RCLNT = sy-mandt?

I'm thinking the optimizer might have a problem using the primary index (the table key), and this might help.
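A sketch of that variant (field list and conditions copied from the original SELECT; whether CLIENT SPECIFIED actually changes the access path should be verified in the SQL trace before adopting it):

```abap
* Sketch: name the client explicitly. The first key field of
* FAGLFLEXA is RCLNT, not MANDT, so spell it out for the optimizer.
SELECT ryear docnr rbukrs rtcur racct prctr wsl drcrk budat buzei
  FROM faglflexa CLIENT SPECIFIED
  APPENDING TABLE git_faglflexa
  FOR ALL ENTRIES IN <gfs_bsis_bsas>
  WHERE rclnt  = sy-mandt
    AND ryear  = <gfs_bsis_bsas>-gjahr
    AND rbukrs = <gfs_bsis_bsas>-bukrs
    AND docnr  = <gfs_bsis_bsas>-belnr.
```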

Cheers


Hi Sekhar,

The WHERE clause is not in key field sequence; correct that.

APPENDING TABLE will take more time.

Also, the query is inside a DO...ENDDO loop, which is likewise not recommended. If you can think of some other logic, I believe you can avoid this dump.

The performance also depends on the number of entries in the FAGLFLEXA table.

Regards,

Prathvi


Hi Sekhar,

Also, after rearranging the sequence of the WHERE clause, create a secondary index with the following fields:

RCLNT

RYEAR

DOCNR

RBUKRS

PRCTR

BUZEI

This will improve the performance of the program.

Regards,

Prathvi


Hi Sekhar

Something I forgot to mention: the FOR ALL ENTRIES problem.

Please refer to SAP Note 48230, which explains why you might be better off with an INNER JOIN...
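A join along those lines might be sketched as follows for the BSAS pass (s_bukrs is an assumed select-option; adapt the driving table and the restrictions to the actual report):

```abap
* Sketch: let the database match BSAS against FAGLFLEXA itself,
* instead of shipping 3.5 million keys via FOR ALL ENTRIES.
* s_bukrs is an assumed select-option; restrict as in the real report.
SELECT f~ryear f~docnr f~rbukrs f~rtcur f~racct
       f~prctr f~wsl f~drcrk f~budat f~buzei
  INTO TABLE git_faglflexa
  FROM bsas AS b
  INNER JOIN faglflexa AS f
          ON f~ryear  = b~gjahr
         AND f~rbukrs = b~bukrs
         AND f~docnr  = b~belnr
  WHERE b~bukrs IN s_bukrs
    AND f~prctr IN s_prctr.
```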

All the best,

Curt


bsas1 - 3,500,000 records

What do you expect? FOR ALL ENTRIES with 3.5 million records is nonsense.

And is the result supposed to be displayed in an ALV, or what?

What is the result good for? Will it be stored to the database? Then block processing is necessary anyway.

Is it displayed? Who will view it?

If the result is much smaller (3,500,000 go in but far fewer come back), then do a join.

Is it a database change? Then do not transfer the data to ABAP at all.
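The block processing mentioned above could be sketched like this for the BSAS pass (the block size, the line type ty_bsas, and the field names are assumptions):

```abap
* Sketch: run FOR ALL ENTRIES over slices of the driver table so the
* database never receives millions of key values in one statement.
CONSTANTS lc_block TYPE i VALUE 5000.          "block size: assumption

DATA: lt_block TYPE STANDARD TABLE OF ty_bsas, "ty_bsas: assumed line type
      lv_total TYPE i,
      lv_from  TYPE i VALUE 1,
      lv_to    TYPE i.

DESCRIBE TABLE it_bsas LINES lv_total.

WHILE lv_from <= lv_total.
  lv_to = lv_from + lc_block - 1.
  IF lv_to > lv_total.
    lv_to = lv_total.
  ENDIF.

  CLEAR lt_block.
  APPEND LINES OF it_bsas FROM lv_from TO lv_to TO lt_block.

  SELECT ryear docnr rbukrs rtcur racct prctr wsl drcrk budat buzei
    FROM faglflexa
    APPENDING TABLE git_faglflexa
    FOR ALL ENTRIES IN lt_block
    WHERE ryear  = lt_block-gjahr
      AND rbukrs = lt_block-bukrs
      AND docnr  = lt_block-belnr.

  lv_from = lv_to + 1.
ENDWHILE.
```

Each SELECT then stays well under the timeout, and the loop can also report progress per block.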


> bsas1 - 3,500,000 records
>
> What do you expect? FOR ALL ENTRIES with 3.5 million records is nonsense.
>
> And is the result supposed to be displayed in an ALV, or what?
>
> What is the result good for? Will it be stored to the database? Then block processing is necessary anyway.
>
> Is it displayed? Who will view it?

Siegfried,

In real life, the next request at this stage usually comes when the requester recognizes that the ALV is not working: whether it might be possible to implement a download to a flat file, to be uploaded locally into an Access database for further processing...

I am not kidding.

I came across such a thing when two guys were swamping my file systems with GBs of CSV files...

Volker


> In real life, the next request at this stage usually comes when the requester recognizes that the ALV is not working

Bear in mind that it was Siegfried who first mentioned ALV, not the OP.

Who knows how this will be used?

Rob


Yes Rob,

I'd got that, but I think Siegfried's question is a very valid one.

I ask the same thing quite frequently when people select tons of data, and to my surprise people rather often do like to extract the data and process it in the described way.

Volker


Let me reply based on the assumption that table FAGLFLEXA is basically being used to get the profit centre information.

The given code will take too much time, because the FOR ALL ENTRIES has to be executed for every entry of each FI document. So, if your requirement is just to get the profit centre info, it is better to SORT, then DELETE ADJACENT DUPLICATES, and only then fetch the required PRCTR info.

It would also help if you could state the aim of the FOR ALL ENTRIES query you are using now, as I believe all the fields in the query other than PRCTR are already available in BSIS and BSAS.

Hope this helps..
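The suggestion above could be sketched like this (the key fields are assumed from the SELECT; work on a copy if the other fields of it_bsas are still needed afterwards):

```abap
* Sketch: shrink the driver table to unique document keys before the
* FOR ALL ENTRIES access. SORT must come first, since DELETE ADJACENT
* DUPLICATES only removes neighbouring rows.
SORT it_bsas BY gjahr bukrs belnr.
DELETE ADJACENT DUPLICATES FROM it_bsas
       COMPARING gjahr bukrs belnr.
```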


A few things I notice...

1) 3.5 million records!!!

I notice that certain programmers tend to fetch all the records first and worry about processing them later. I hope this isn't the case and all 3.5 million really are required. Even so, I wouldn't suggest FOR ALL ENTRIES for that much data; maybe inner joins.

2) APPENDING TABLE and the use of (gwa_whr_line)

As your record count increases, this becomes more taxing. Based on a quick test program with 1 million records on my system, the time taken using APPENDING TABLE was almost twice that of INTO TABLE plus appending to a final table.

The dynamic condition (gwa_whr_line) is a similar case: it is empty until sy-index = 3, yet it bogs down the statement on every pass until then. These times would get a lot worse again if we were talking about SORTED TABLES.

3) The WHERE condition needs to be sequenced according to the key

I'm not sure about suggesting the use of views, as my knowledge in that area is limited...
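The measured alternative from point 2 could be sketched as follows (names reuse the original post; the empty-table check guards FOR ALL ENTRIES, which would otherwise select the whole table for an empty driver):

```abap
* Sketch: select INTO a local table, then append that block to the
* final table, instead of APPENDING TABLE on the final table directly.
DATA lt_part LIKE git_faglflexa.

IF NOT <gfs_bsis_bsas> IS INITIAL.  "FAE with an empty driver reads everything
  SELECT ryear docnr rbukrs rtcur racct prctr wsl drcrk budat buzei
    FROM faglflexa
    INTO TABLE lt_part
    FOR ALL ENTRIES IN <gfs_bsis_bsas>
    WHERE ryear  = <gfs_bsis_bsas>-gjahr
      AND rbukrs = <gfs_bsis_bsas>-bukrs
      AND docnr  = <gfs_bsis_bsas>-belnr.
  APPEND LINES OF lt_part TO git_faglflexa.
ENDIF.
```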