Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Select query taking Much time

Former Member

Dear all ,

I am fetching data from the pool table A005. The select query is mentioned below.

select * from a005 into table i_a005
  for all entries in it_table
  where kappl = 'V'
    and kschl in s_kschl
    and vkorg in s_vkorg
    and vtweg in s_vtgew
    and matnr in s_matnr
    and knumh = it_table-knumh.

Here every field is a primary key field except KNUMH, which is compared against the internal table it_table. Because of this field the query is taking too much time, as KNUMH is not part of the primary key. And since A005 is a pool table, I cannot create an index on it. If there is an alternative solution, please let me know.

Thank You ,

Also, in the technical settings the table is marked as fully buffered with size category 0, but it contains around 9,000,000 records. Is that the issue, or something else? Can somebody give a genuine reason, or suggest an improvement to my select query?

Edited by: TVC6784 on Jun 30, 2011 3:31 PM

5 REPLIES

yuri_ziryukin
Product and Topic Expert

Hello TVC,

The AXXX tables are condition record tables, and buffering these tables is a tricky thing.

The table should be buffered if:

1) Table is small (2-3 Mb or less)

2) Table is RARELY changed.

Otherwise SAP recommends unbuffering such a table.

If the table is large and, moreover, frequently changed, the overhead of reloading it into the buffer will be really high.

What can happen is that your table is large and is frequently displaced from the table buffer, either due to the buffer size limit or due to changes to the table contents. As a result the complete table has to be buffered again, and on the next access its full contents are loaded into the buffer, which may be causing your performance problem.

Please check the ST10 statistics for this table. If you see many invalidations, or the buffer state is not "valid", that may be an indicator of the problem described above.

Regards,

Yuri


Hi Yuri ,

Thanks for your reply....I will check as per your comment...

But if I remove the field KNUMH from the selection condition, and with it the FOR ALL ENTRIES on it_table, the data is fetched very fast, since KNUMH is not a primary key.

The example is below:

select * from a005 into table i_a005
  where kappl = 'V'
    and kschl in s_kschl
    and vkorg in s_vkorg
    and vtweg in s_vtgew
    and matnr in s_matnr.

Can you comment on that?

And can you please say how I can check its size, as you mentioned the 2-3 MB limit?

Edited by: TVC6784 on Jun 30, 2011 7:37 PM

yuri_ziryukin
Product and Topic Expert

> Hi Yuri,
>
> Thanks for your reply... I will check as per your comment...
>
> But if I remove the field KNUMH from the selection condition, and with it the FOR ALL ENTRIES on it_table, the data is fetched very fast, since KNUMH is not a primary key. The example is below:
>
> select * from a005 into table i_a005
>   where kappl = 'V'
>     and kschl in s_kschl
>     and vkorg in s_vkorg
>     and vtweg in s_vtgew
>     and matnr in s_matnr.
>
> Can you comment on that?
>
> And can you please say how I can check its size, as you mentioned the 2-3 MB limit?
>
> Edited by: TVC6784 on Jun 30, 2011 7:37 PM

I cannot see the trace or other information about the table, so I cannot judge why the select without KNUMH is faster.

Basically, if the table is buffered and its contents are in the SAP application server memory, the access should be really fast, with or without KNUMH.

I would really like to see at least an ST05 trace of the report that is doing this select. This would clarify many things.

You can check the size by multiplying the number of entries in A005 by 138 bytes. This is (in my test system) the ABAP width of the structure.

If you have 9,000,000 records in A005, then it would take about 1.24 GB in the buffer (which is a clear sign to unbuffer).
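As a quick sanity check on that arithmetic (just a sketch; the 138-byte width is the value measured above, and the variable name is illustrative):

```abap
* Rough buffer footprint: number of entries * ABAP structure width
DATA: lv_bytes TYPE p.

lv_bytes = 9000000 * 138.   "1,242,000,000 bytes, i.e. roughly 1.24 GB

WRITE: / 'Estimated footprint in bytes:', lv_bytes.
```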

1MoreDev
Explorer

Hi,

1 - Remove KNUMH and the FOR ALL ENTRIES from the SELECT statement.

2 - Build it_table without duplicate KNUMH values and make it a hashed table with KNUMH as the key.

3 - After the SELECT, loop over i_a005 and, inside the loop, READ the hashed it_table. If the READ fails, DELETE i_a005 INDEX sy-tabix.

Should improve a bit 😃
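A rough sketch of steps 2 and 3 in ABAP (the type and variable names are illustrative, and it assumes it_table has a component KNUMH):

```abap
TYPES: BEGIN OF ty_knumh,
         knumh TYPE a005-knumh,
       END OF ty_knumh.

DATA: lt_knumh TYPE HASHED TABLE OF ty_knumh WITH UNIQUE KEY knumh,
      ls_knumh TYPE ty_knumh.

FIELD-SYMBOLS: <fs_itab> LIKE LINE OF it_table,
               <fs_a005> LIKE LINE OF i_a005.

* Step 2: collect the distinct KNUMH values in a hashed lookup table.
LOOP AT it_table ASSIGNING <fs_itab>.
  ls_knumh-knumh = <fs_itab>-knumh.
  INSERT ls_knumh INTO TABLE lt_knumh.   "sy-subrc = 4 for duplicates - fine
ENDLOOP.

* Step 3: after the SELECT without KNUMH, drop every row whose KNUMH
* does not appear in it_table.
LOOP AT i_a005 ASSIGNING <fs_a005>.
  READ TABLE lt_knumh
       WITH TABLE KEY knumh = <fs_a005>-knumh
       TRANSPORTING NO FIELDS.
  IF sy-subrc <> 0.
    DELETE i_a005.   "deletes the current loop line
  ENDIF.
ENDLOOP.
```

The hashed READ is a constant-time lookup, so the filtering cost stays linear in the size of i_a005.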

Former Member

Hi,

Please adopt the following approach for your select query.

Remove the FOR ALL ENTRIES.

DATA: gv_cursor TYPE cursor.

PARAMETERS: p_size TYPE i DEFAULT 100.   "package size can be of your choice

FIELD-SYMBOLS: <a005> LIKE LINE OF i_a005.

OPEN CURSOR gv_cursor FOR
  SELECT * FROM a005
    WHERE kappl = 'V'
      AND kschl IN s_kschl
      AND vkorg IN s_vkorg
      AND vtweg IN s_vtgew
      AND matnr IN s_matnr.

DO.
* APPENDING is needed here; a plain INTO TABLE would overwrite the
* previously fetched packages on every pass of the loop.
  FETCH NEXT CURSOR gv_cursor
    APPENDING CORRESPONDING FIELDS OF TABLE i_a005
    PACKAGE SIZE p_size.
  IF sy-subrc NE 0.
    EXIT.
  ENDIF.
ENDDO.

CLOSE CURSOR gv_cursor.

After all the data has been fetched, use a field symbol instead of a conventional work area for the assignment against the field on which you were using FOR ALL ENTRIES:

LOOP AT i_a005 ASSIGNING <a005>.
* do your assignment here
ENDLOOP.

Hope it helps.