Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Sequential Read on table BSAD

Former Member
0 Likes
1,267

Hello All

Our custom program's job, when observed through SM51, spends over 4 hours at a single point showing SEQUENTIAL READ on table BSAD. The DB2 team provided us with the queries that were executed at runtime to fetch data from table BSAD:

SELECT * FROM BSAD

WHERE XBLNR EQ <value>

AND BUKRS EQ <value>

AND KUNNR IN L_KUNNR[] (which has around 400 KUNNR values)

SELECT bukrs belnr xblnr blart zuonr kidno

FROM bsad

UP TO 1 ROWS

INTO CORRESPONDING FIELDS OF <name>

WHERE kunnr IN L_kunnr[] (which has more than 400 KUNNR values)

AND ( zuonr EQ <value> OR kidno EQ <value> ).

We have indexes for both field combinations specified above, yet during execution of the program the above query is observed to run for over 5 hours. This program had not been giving issues all along; we have only started seeing issues lately. The DB2 team mentions that the IN statement is expensive, and when they observe at runtime they see a lot of values being used in the L_KUNNR[] range. A table reorg or index rebuild has not been done for a while either. We are trying to interpret the possibilities.
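One mitigation we are evaluating is to issue the statement in smaller packages so each IN list stays short. This is only a sketch: lc_package, lv_xblnr, and lv_bukrs are illustrative names, and L_KUNNR is assumed to be a standard SELECT-OPTIONS range table.

" Sketch: run the SELECT in packages of 50 customers instead of one
" statement carrying ~400 IN values. All names are illustrative.
CONSTANTS lc_package TYPE i VALUE 50.

DATA lt_chunk  LIKE l_kunnr.
DATA lt_result TYPE STANDARD TABLE OF bsad.

WHILE lines( l_kunnr ) > 0.
  CLEAR lt_chunk.
  APPEND LINES OF l_kunnr FROM 1 TO lc_package TO lt_chunk.
  DELETE l_kunnr FROM 1 TO lc_package.

  SELECT * FROM bsad
    APPENDING TABLE lt_result
    WHERE xblnr = lv_xblnr
      AND bukrs = lv_bukrs
      AND kunnr IN lt_chunk.
ENDWHILE.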

Edited by: Vasantharaman Viswanathan on Jan 29, 2011 5:49 PM

9 REPLIES

brad_bohn
Active Contributor
0 Likes
1,067

Did you trace the statements yourself and check the index usage? 400 entries isn't that many rows, especially compared to statements that use FOR ALL ENTRIES against internal tables with thousands of entries. When a program just starts behaving poorly in an environment for the same criteria, it's usually one of three things: a statement/code change, an index change/addition, or stale statistics. If you haven't updated the stats in a while, that could be it. BSAD is a heavily used table and the index stats need to be kept up to date.
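For reference, refreshing the statistics on DB2 for LUW could look like the following (a sketch only; the SAPR3 schema name is an assumption, adjust to your installation):

-- Refresh table and index statistics for BSAD, including value
-- distribution, so the optimizer can cost the IN list properly.
RUNSTATS ON TABLE SAPR3.BSAD
  WITH DISTRIBUTION AND DETAILED INDEXES ALL;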


Former Member
0 Likes
1,067

Thanks for your inputs John

Whenever we tried putting a trace, we did it during the execution of the statement, and only a few times have we been able to capture one; sometimes the Basis team says they did not capture anything. In the traces we do have, I was able to see the statement being executed for 10 minutes or so, which may not make sense if the program is stuck for hours.

Also, regarding the trace: I believe you are asking whether we can deduce if it would use an index based on the values provided? Or is there a field/parameter in the trace that provides this info? Please clarify.

Also, when I tried having around 400 values in the IN statement and executed the query in non-prod, it took at least 600 seconds (since that's the maximum time I could keep it active as a dialog process in non-prod). Also, do you think the above SELECT statement warrants a change?


Former Member
0 Likes
1,067

Can you post the results of your trace?


brad_bohn
Active Contributor
0 Likes
1,067

You can trace background processes as well as dialog processes and capture the trace results - you simply need to specify the process ID. As for the development query, keep in mind that time is fairly irrelevant. As for the index, I wouldn't necessarily create it, given that there's a standard index for XBLNR on BKPF, but it depends on the uniqueness of the value. What is the exact order of the fields in there? It doesn't sound like your XBLNR value is very specific if 400 customers have the same value. No one is really going to be able to give you specific, good advice here without the trace results or the statistical data spread. Again, it really sounds like your stats may need to be updated unless your indexes aren't very good.

Edited by: Brad Bohn on Jan 31, 2011 10:18 AM


Former Member
0 Likes
1,067

>

> 400 entries isn't that many rows, ....

I've seen SELECTs with just three or four entries that take a long time to run.

If there are many cleared entries for some of these customers, that could cause the problem. Try it with fewer entries in the SELECT-OPTIONS, or try to find the customers with the most entries in BSAD and handle them separately.

Rob


Former Member
0 Likes
1,067

I may not be able to share the specifics of the trace since they show prod data. Please advise.

DBSTATS has been performed recently, but an index rebuild / table reorg has not been done for a year.
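If a reorg is due, the DB2 side could run something along these lines (a sketch only; the SAPR3 schema name is an assumption, adjust to your installation):

-- Reorganize the table and all of its indexes to remove the
-- fragmentation accumulated since the last reorg.
REORG TABLE SAPR3.BSAD;
REORG INDEXES ALL FOR TABLE SAPR3.BSAD;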


0 Likes
1,067

The trace results won't compromise any data, but you need to focus on the index analysis and the relative costs of the indexes. You can also run the trace in your DEV environment, since it seems the issue exists there as well. Surely you have someone onsite, whether a development lead or a DB administrator, who can read the results and provide some advice? Without knowing your indexes, cost analysis, and data distribution, there's really no way to help here other than general tips.

@Rob - Yep, you're exactly right. Seems strange in this case to have 400 customers with the same reference field value. That would be one business process to address as well, not to mention trying to get the data another way.

Edited by: Brad Bohn on Jan 31, 2011 10:44 AM


former_member186741
Active Contributor
0 Likes
1,067

You could reorganise your SELECT to match the primary index.

Also, I have found that FOR ALL ENTRIES is often more efficient than IN, which seems to generate large single SQL statements.

If your L_KUNNR is only using the 'low' component, you may be able to try something like this (note that the loop must read the original range table, not the target table):

DATA ls_kunnr LIKE LINE OF l_kunnr.
DATA lt_kunnr TYPE SORTED TABLE OF bsad-kunnr WITH UNIQUE KEY table_line.

" Build a duplicate-free table of customer numbers from the range.
LOOP AT l_kunnr INTO ls_kunnr.
  INSERT ls_kunnr-low INTO TABLE lt_kunnr.
ENDLOOP.

SELECT * FROM bsad
  FOR ALL ENTRIES IN lt_kunnr
  WHERE bukrs = <value>
    AND kunnr = lt_kunnr-table_line
    AND xblnr = <value>.


0 Likes
1,067

How do I share the trace file? Please help.