Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.
Read only

Large time difference spent for the same report with same data

Former Member

Hi Experts,

We have a report. The first time it runs, it takes so long that it terminates with a TIME_OUT dump, caused by two expensive SELECT statements (on tables S031 and MSEG).

But when I execute the same report again afterwards, it is very fast and no dump occurs. I have no idea why. Could you please give me any pointers on this? Thanks in advance.

Note: no buffered tables are used in my report.

Regards..

10 REPLIES

Former Member

Buffering can happen in different areas. For example, some SAP tables are buffered on the application server, but this kind of buffering is not applicable to transactional data (like MSEG), because the data is too volatile. Buffering at the application server level is usually done for tables whose data seldom changes.

The database always uses buffering (to avoid expensive disk reads), and that's probably the buffering that makes your report work the second time. You're talking about two expensive statements, so possibly the first timeout occurs on the second SELECT; when you execute the report again, the first SELECT is fast (its result is buffered) and thus more time is available for completing the second SELECT.

You should be able to see this when looking at the short dumps and doing a runtime analysis/SQL trace.


Thanks for your reply.

The problem is that S031 does not seem to be a 'Buffering allowed' table according to its technical settings. Is there another way to check whether it's buffered or not?

Below is the info on the relevant internal tables from the runtime log.

IT_S031

Table IT_46[768x80]

\PROGRAM=Z_L_CN_OPS_REP_INVENTORY\DATA=IT_S

Table reference: 31

TABH+ 0(20) = C000000165FC0A70000000000000

TABH+ 20(20) = 0000001F0000002E000003000000

TABH+ 40(16) = 0400000000000A7000102C942180

store = 0xC000000165FC0A70

ext1 = 0x0000000000000000

shmId = 0 (0x00000000)

id = 31 (0x0000001F)

label = 46 (0x0000002E)

fill = 768 (0x00000300)

leng = 80 (0x00000050)

loop = -1 (0xFFFFFFFF)

xtyp = TYPE#000041

occu = 16 (0x00000010)

access = 1 (ItAccessStandard)

idxKind = 1 (ItIndexLinear)

uniKind = 2 (ItUniqueNon)

keyKind = 1 (default)

cmpMode = 2 (cmpSingleMcmpR)

occu0 = 1

groupCntl = 0

rfc = 0

unShareable = 0

mightBeShared = 1

sharedWithShmTab = 0

isShmLockId = 0

gcKind = 0

isUsed = 1

isCtfyAble = 1

>>>>> Shareable Table Header Data <<<<<

tabi = 0xC000000165CFF2E0

pgHook = 0xC000000165FC0AD0

idxPtr = 0xC000000165C2E590

shmTabhSet = 0x0000000000000000

id = 33 (0x00000021)

refCount = 0 (0x00000000)

tstRefCount = 0 (0x00000000)

lineAdmin = 944 (0x000003B0)

lineAlloc = 816 (0x00000330)

shmVersId = 0 (0x00000000)

shmRefCount = 1 (0x00000001)

>>>>> 1st level extension part <<<<<


regHook = Not allocated

collHook = Not allocated

ext2 = Not allocated

>>>>> 2nd level extension part <<<<<

tabhBack = Not allocated

delta_head = Not allocated

pb_func = Not allocated

pb_handle = Not allocated

FP_IT_MSEG

Table IT_54[205206x92]

\PROGRAM=Z_L_CN_OPS_REP_INVENTORY\DATA=IT_MSEG

Table reference: 36

TABH+ 0(20) = C000000165FC0640000000000000000000000000

TABH+ 20(20) = 0000002400000036000321960000005CFFFFFFFF

TABH+ 40(16) = 0400000000000BC0001024C421800000

store = 0xC000000165FC0640

ext1 = 0x0000000000000000

shmId = 0 (0x00000000)

id = 36 (0x00000024)

label = 54 (0x00000036)

fill = 205206 (0x00032196)

leng = 92 (0x0000005C)

loop = -1 (0xFFFFFFFF)

xtyp = TYPE#000048

occu = 16 (0x00000010)

access = 1 (ItAccessStandard)

idxKind = 0 (ItIndexNone)

uniKind = 2 (ItUniqueNon)

keyKind = 1 (default)

cmpMode = 8 (cmpManyEq)

occu0 = 1

groupCntl = 0

rfc = 0

unShareable = 0

mightBeShared = 1

sharedWithShmTab = 0

isShmLockId = 0

gcKind = 0

isUsed = 1

isCtfyAble = 1

>>>>> Shareable Table Header Data <<<<<

tabi = 0xC000000165F08150

pgHook = 0xC0000001D6177000

idxPtr = 0x0000000000000000

shmTabhSet = 0x0000000000000000

id = 37 (0x00000025)

refCount = 0 (0x00000000)

tstRefCount = 0 (0x00000000)

lineAdmin = 212912 (0x00033FB0)

lineAlloc = 205232 (0x000321B0)

shmVersId = 0 (0x00000000)

shmRefCount = 2 (0x00000002)

>>>>> 1st level extension part <<<<<

regHook = Not allocated

collHook = Not allocated

ext2 = Not allocated

>>>>> 2nd level extension part <<<<<

tabhBack = Not allocated

delta_head = Not allocated

pb_func = Not allocated

pb_handle = Not allocated


What I was trying to say is that in your case it cannot be the application server buffering (i.e. the one you enable in the Data Dictionary), but only the lower-level database buffering. If you create an SQL trace, you should be able to see the effects of the buffering at the database level...


OK, got it, thanks. Let me try the trace. Could you please point out how to check the buffering in the trace? Do I only need to tick the 'Table buffer trace' option in ST05?

Regards


Hold on, I think I led you down the wrong path. First of all, the table buffer trace you see in ST05 is for the SAP table buffering on the application server (i.e. the table buffering defined in the Data Dictionary). So this should not be applicable in your case.

The SQL trace would show you the execution times, which should differ substantially between subsequent executions and thus be an indicator of the buffering happening at the database level. I don't have access to an SAP system right now, but I think there's actually no information in the SQL trace that shows whether the data was read from database buffers instead of the file system. One important step in the SQL trace is to check the execution plan of the SQL statements; in your case, though, I'd expect it to be the same for the consecutive calls...

You can probably see more by looking at the database level in transaction ST04, in the shared cursor cache. There you can find, for the cached SQL statements, the IO reads, buffer reads, etc. So while executing the report you can monitor the shared cursor cache and keep refreshing the list.


Great! I checked ST04 and found some buffer details for the SQL statements of my program.

So what should I do next about the performance issue? How can I speed up my program?

Regards..


Start with the basics, i.e. ABAP runtime analysis and SQL trace. Find out where the program spends most of its time. If it's the two SQL statements accessing the tables you mentioned, check what can be done to improve the queries. Maybe the queries don't use the best execution plan (i.e. a good index is available, but the database doesn't pick it), or you can modify the statements to make them faster (it's kind of pointless to go into details here; lots of information is out there in the forums and blogs).

As a last resort it might help to define a secondary index to make your queries run faster, but keep in mind that we're talking about huge tables, and adding indexes sometimes creates more trouble than it does good. So what should be done really depends on your situation...


Former Member

> It is very fast and no dump happens

I doubt that it is "very fast"; it is simply always faster on the second access, because the data comes from the database cache and not from the hard disks, and the index is usually in memory as well.

The factor between the two executions is maybe 10.

If it is really more than 100, on several days, then you should check your database performance.

Siegfried


Hi All;

It is caused by the SQL below. From the trace I can see it takes nearly 9 minutes to run the first time.

SELECT MBLNR
       MJAHR
       ZEILE
  INTO TABLE FP_IT_MSEG
  FROM MSEG
  FOR ALL ENTRIES IN FP_IT_S031
  WHERE MATNR EQ FP_IT_S031-MATNR
    AND WERKS EQ FP_IT_S031-WERKS
    AND LGORT EQ FP_IT_S031-LGORT.

About 205,206 of the 7,838,904 records are selected into table FP_IT_MSEG, which is declared as a standard table. Although the statement matched a secondary index, it still took that much time.

How should I optimize it? Will it speed up if I use a package size (such as 500)? Should I declare table FP_IT_S031 with a larger initial size (like 300,000)? Or do you think I should split it into several identical SELECT statements, each with a smaller portion of FP_IT_S031?
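As an aside, a common first step before a FOR ALL ENTRIES select like this is to make sure the driver table is non-empty and contains no duplicate key combinations. A minimal sketch follows; the helper table LT_KEYS is hypothetical and not code from the report:

```abap
* Sketch only: LT_KEYS is an assumed helper table, not part of the original report.
* Two standard precautions for FOR ALL ENTRIES:
* 1) An EMPTY driver table would make the SELECT read ALL rows of MSEG.
* 2) Duplicate key combinations inflate the generated WHERE clause for no benefit.
DATA LT_KEYS LIKE FP_IT_S031.

IF FP_IT_S031[] IS NOT INITIAL.
  LT_KEYS[] = FP_IT_S031[].
  SORT LT_KEYS BY MATNR WERKS LGORT.
  DELETE ADJACENT DUPLICATES FROM LT_KEYS COMPARING MATNR WERKS LGORT.

  SELECT MBLNR MJAHR ZEILE
    INTO TABLE FP_IT_MSEG
    FROM MSEG
    FOR ALL ENTRIES IN LT_KEYS
    WHERE MATNR EQ LT_KEYS-MATNR
      AND WERKS EQ LT_KEYS-WERKS
      AND LGORT EQ LT_KEYS-LGORT.
ENDIF.
```

Note that the database interface already splits a FOR ALL ENTRIES select into portions (governed by the blocking-factor profile parameters), so manually splitting the driver table mainly helps with memory, not necessarily with runtime.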

If you have faced similar issues, do let me know. I can only test it after moving to Q.

Thanks and Regards


Former Member

Moderator message - you have asked the same question in another thread, so this one is locked.

Rob