Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Run Time Analysis And Performance Tuning.

Former Member

Hi ABAPers,

What is the difference between runtime analysis and performance tuning?

Please explain in detail.

Thanks in Advance.

Regards,

Ramana Prasad.T

4 Replies

former_member223537
Active Contributor

Hi,

<b>Runtime analysis</b> measures how much time a particular piece of code takes, e.g. how long a database hit for retrieving records takes. It takes into account the database access time plus network load and bandwidth.

Transaction SE30.

<b>Performance tuning</b> with the SQL trace checks only the database access time; it does not take network load/bandwidth into consideration.

Optimizing a SELECT query with a proper WHERE clause will have a considerable impact in the SQL trace, but if the network bandwidth is not good, the improvement won't be reflected in the runtime analysis.

Transaction ST05.

Best regards,

Prashant

Former Member

<b>Runtime Analysis </b>

Tool for analyzing the execution of program parts or individual statements and for measuring their runtime. Call using transaction code SE30.

<b>Tools provided for Performance Analysis</b>

Following are the different tools provided by SAP for performance analysis of an ABAP object

1. Run time analysis transaction SE30

This transaction gives a complete analysis of an ABAP program, broken down into database and non-database processing.

2. SQL Trace transaction ST05

The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter when displaying the trace list.

The trace list contains different SQL statements that all relate to the one SELECT statement in the ABAP program. This is because the R/3 Database Interface - a sophisticated component of the R/3 Application Server - maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3's performance, depends on the particular call and database system. For example, the SELECT-ENDSELECT loop on the SPFLI table in our test program is mapped to a PREPARE-OPEN-FETCH sequence of physical calls in an Oracle environment.

The WHERE clause in the trace list's SQL statement is different from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.

To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.

In addition check these links...

http://www.sapdevelopment.co.uk/perform/performhome.htm

http://www.sapdevelopment.co.uk/perform/perform_pcursor.htm

Regards,

Pavan

Former Member

Hi,

***** Please close the duplicate threads *****

You can see a report's performance in SE30 (runtime analysis) and in the SQL trace (ST05).

ST05 shows you the list of executed database statements.

You should remember some points when tuning your code:

- Use the GET RUN TIME statement to help evaluate performance. It is hard to know whether an optimization technique really helps unless you test it. Using this tool helps you learn what is effective under what kinds of conditions. GET RUN TIME has problems under multiple CPUs, so use it to test small pieces of your program rather than the whole program.

- Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which in turn increases your I/O reads/writes to disk. CPU activity can be reduced by careful program design and by using statements such as SUM (SQL) and COLLECT (ABAP/4).

- Avoid SELECT *, especially on tables that have many fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference.

- Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, work with the system administrator to decide the maximum amount of RAM your program should use, and from that calculate how much space your lists will use. Then you can decide whether to write the data to memory or to swap space.

- Use as many table key fields as possible in the WHERE clause of your SELECT statements.

- Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the transactions for one month, there will probably be a reasonable range, such as 1200-1800, for the number of transactions entered within that month). Then use a SELECT A B C INTO TABLE ITAB statement.

- Get a good idea of how many records you will be accessing. Log on to your productive system, use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Go to Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful when optimizing a program's memory allocation.

- Try to design the user interface so that the program gradually unfolds more information to the user, rather than presenting a huge list all at once.

- Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access. If the number of records exceeds NUM_RECS, the data is kept in swap space (not memory).

- Use SELECT A B C INTO TABLE ITAB whenever possible. This reads all of the records into the internal table in one operation, rather than the repeated operations that result from a SELECT A B C INTO ITAB ... ENDSELECT statement. Make sure ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.

- If the number of records you are reading is constantly growing, you may be able to break the read into chunks of relatively constant size. For instance, if you have to read all records from 1991 to the present, you can break it into quarters and read all records one quarter at a time. This reduces I/O operations. Test extensively with GET RUN TIME when using this method.

- Know how to use the COLLECT statement. It can be very efficient.

- Use the SELECT SINGLE statement whenever possible.

- Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by recalculating a total that has already been calculated and stored.

- Try to avoid joining more than two tables.

For all entries

FOR ALL ENTRIES generates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.

The plus:

Handles large amounts of data

Allows mixing processing and reading of data

Fast internal reprocessing of data

Fast

The minus:

Difficult to program/understand

Memory can become critical (use FREE or PACKAGE SIZE)

Some steps that can make FOR ALL ENTRIES more efficient:

Remove duplicates from the driver table

Sort the driver table

If possible, convert the data in the driver table to ranges so that a BETWEEN condition is used instead of an OR condition:

FOR ALL ENTRIES IN i_tab
  WHERE mykey >= i_tab-low AND
        mykey <= i_tab-high.
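To make the fragment above concrete, here is a self-contained sketch of a typical FOR ALL ENTRIES statement, using the standard demo tables SPFLI and SFLIGHT purely as an example:

```abap
DATA: lt_spfli   TYPE STANDARD TABLE OF spfli,
      lt_sflight TYPE STANDARD TABLE OF sflight.

" Fill the driver table first
SELECT * FROM spfli INTO TABLE lt_spfli
  WHERE carrid = 'LH'.

" Caution: with an EMPTY driver table, FOR ALL ENTRIES selects ALL rows,
" so always guard against the empty case.
IF lt_spfli IS NOT INITIAL.
  SELECT * FROM sflight
    INTO TABLE lt_sflight
    FOR ALL ENTRIES IN lt_spfli
    WHERE carrid = lt_spfli-carrid
      AND connid = lt_spfli-connid.
ENDIF.
```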

Nested selects

The plus:

Small amounts of data

Mixing processing and reading of data

Easy to code and understand

The minus:

Large amounts of data

When mixed processing isn't needed

Performance killer no. 1

Select using JOINs

The plus:

Very large amounts of data

Similar to nested selects, when the accesses are planned by the programmer

In some cases the fastest option

Not so memory-critical

The minus:

Very difficult to program/understand

Mixing processing and reading of data is not possible

Use the selection criteria

Not recommended:

SELECT * FROM SBOOK.
  CHECK: SBOOK-CARRID = 'LH' AND
         SBOOK-CONNID = '0400'.
ENDSELECT.

Recommended:

SELECT * FROM SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
ENDSELECT.

Use the aggregate functions

Not recommended:

C4A = '000'.
SELECT * FROM T100
  WHERE SPRSL = 'D' AND
        ARBGB = '00'.
  CHECK: T100-MSGNR > C4A.
  C4A = T100-MSGNR.
ENDSELECT.

Recommended:

SELECT MAX( MSGNR ) FROM T100 INTO C4A
  WHERE SPRSL = 'D' AND
        ARBGB = '00'.

Select with view

Not recommended:

SELECT * FROM DD01L
  WHERE DOMNAME LIKE 'CHAR%'
    AND AS4LOCAL = 'A'.
  SELECT SINGLE * FROM DD01T
    WHERE DOMNAME = DD01L-DOMNAME
      AND AS4LOCAL = 'A'
      AND AS4VERS = DD01L-AS4VERS
      AND DDLANGUAGE = SY-LANGU.
ENDSELECT.

Recommended:

SELECT * FROM DD01V
  WHERE DOMNAME LIKE 'CHAR%'
    AND DDLANGUAGE = SY-LANGU.
ENDSELECT.

Select with index support

Not recommended:

SELECT * FROM T100
  WHERE ARBGB = '00'
    AND MSGNR = '999'.
ENDSELECT.

Recommended:

SELECT * FROM T002.
  SELECT * FROM T100
    WHERE SPRSL = T002-SPRAS
      AND ARBGB = '00'
      AND MSGNR = '999'.
  ENDSELECT.
ENDSELECT.

Select ... Into table

Not recommended:

REFRESH X006.
SELECT * FROM T006 INTO X006.
  APPEND X006.
ENDSELECT.

Recommended:

SELECT * FROM T006 INTO TABLE X006.

Select with selection list

Not recommended:

SELECT * FROM DD01L
  WHERE DOMNAME LIKE 'CHAR%'
    AND AS4LOCAL = 'A'.
ENDSELECT.

Recommended:

SELECT DOMNAME FROM DD01L
  INTO DD01L-DOMNAME
  WHERE DOMNAME LIKE 'CHAR%'
    AND AS4LOCAL = 'A'.
ENDSELECT.

Key access to multiple lines

Not recommended:

LOOP AT TAB.
  CHECK TAB-K = KVAL.
  " ...
ENDLOOP.

Recommended:

LOOP AT TAB WHERE K = KVAL.
  " ...
ENDLOOP.

Copying internal tables

Not recommended:

REFRESH TAB_DEST.
LOOP AT TAB_SRC INTO TAB_DEST.
  APPEND TAB_DEST.
ENDLOOP.

Recommended:

TAB_DEST[] = TAB_SRC[].

Modifying a set of lines

Not recommended:

LOOP AT TAB.
  IF TAB-FLAG IS INITIAL.
    TAB-FLAG = 'X'.
  ENDIF.
  MODIFY TAB.
ENDLOOP.

Recommended:

TAB-FLAG = 'X'.
MODIFY TAB TRANSPORTING FLAG
  WHERE FLAG IS INITIAL.

Deleting a sequence of lines

Not recommended:

DO 101 TIMES.
  DELETE TAB_DEST INDEX 450.
ENDDO.

Recommended:

DELETE TAB_DEST FROM 450 TO 550.

Linear search vs. binary search

Not recommended:

READ TABLE TAB WITH KEY K = 'X'.

Recommended (TAB must be sorted by K first):

READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.

Comparison of internal tables

Not recommended:

DESCRIBE TABLE: TAB1 LINES L1,
                TAB2 LINES L2.
IF L1 <> L2.
  TAB_DIFFERENT = 'X'.
ELSE.
  TAB_DIFFERENT = SPACE.
  LOOP AT TAB1.
    READ TABLE TAB2 INDEX SY-TABIX.
    IF TAB1 <> TAB2.
      TAB_DIFFERENT = 'X'. EXIT.
    ENDIF.
  ENDLOOP.
ENDIF.
IF TAB_DIFFERENT = SPACE.
  " ...
ENDIF.

Recommended:

IF TAB1[] = TAB2[].
  " ...
ENDIF.

Modify selected components

Not recommended:

LOOP AT TAB.
  TAB-DATE = SY-DATUM.
  MODIFY TAB.
ENDLOOP.

Recommended:

WA-DATE = SY-DATUM.
LOOP AT TAB.
  MODIFY TAB FROM WA TRANSPORTING DATE.
ENDLOOP.

Appending two internal tables

Not recommended:

LOOP AT TAB_SRC.
  APPEND TAB_SRC TO TAB_DEST.
ENDLOOP.

Recommended:

APPEND LINES OF TAB_SRC TO TAB_DEST.

Deleting a set of lines

Not recommended:

LOOP AT TAB_DEST WHERE K = KVAL.
  DELETE TAB_DEST.
ENDLOOP.

Recommended:

DELETE TAB_DEST WHERE K = KVAL.

Tools available in SAP to pinpoint a performance problem:

The runtime analysis (SE30)

SQL Trace (ST05)

Tips and Tricks tool

The performance database

Optimizing the load on the database

Using table buffering

Using buffered tables improves performance considerably. Note that in some cases a statement cannot be served from a buffered table, so when these statements are used the buffer is bypassed. These statements are:

SELECT DISTINCT

ORDER BY / GROUP BY / HAVING clauses

Any WHERE clause that contains a subquery or an IS NULL expression

JOINs

SELECT ... FOR UPDATE

If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition in the SELECT statement.
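As a short sketch of that addition (the table T001 and the selection on LAND1 are chosen purely for illustration):

```abap
DATA lt_t001 TYPE STANDARD TABLE OF t001.

" Reads directly from the database, ignoring the table buffer
SELECT * FROM t001 BYPASSING BUFFER
  INTO TABLE lt_t001
  WHERE land1 = 'DE'.
```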

Use the ABAP SORT Statement Instead of ORDER BY

The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.

If you are not sorting by the primary key (i.e. not using the ORDER BY PRIMARY KEY statement) but by another key, it can be better to use the ABAP SORT statement to sort the data in an internal table. Note, however, that for very large result sets this might not be feasible and you may want to let the database server do the sort.
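Moving the sort to the application server can be sketched like this (the table VBAK and its fields are an assumed example, not prescribed by the original post):

```abap
TYPES: BEGIN OF ty_vbak,
         vbeln TYPE vbak-vbeln,
         ernam TYPE vbak-ernam,
       END OF ty_vbak.
DATA lt_vbak TYPE STANDARD TABLE OF ty_vbak.

" No ORDER BY ERNAM here: read unsorted from the database server...
SELECT vbeln ernam FROM vbak
  INTO TABLE lt_vbak
  WHERE erdat = sy-datum.

" ...and sort on the application server instead
SORT lt_vbak BY ernam.
```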

Avoid the SELECT DISTINCT Statement

As with the ORDER BY clause, it can be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT plus DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.
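A minimal sketch of that replacement, again using VBAK as an assumed example table:

```abap
DATA lt_kunnr TYPE STANDARD TABLE OF kunnr.

" Instead of SELECT DISTINCT KUNNR: read all matching rows first...
SELECT kunnr FROM vbak
  INTO TABLE lt_kunnr
  WHERE erdat = sy-datum.

" ...then sort and drop the duplicates in ABAP
SORT lt_kunnr.
DELETE ADJACENT DUPLICATES FROM lt_kunnr.
```

Note that DELETE ADJACENT DUPLICATES only removes duplicates that are next to each other, which is why the SORT must come first.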

Regards

Sudheer

Former Member

Runtime analysis is one of the tools used for performance tuning.

The other tool is the SQL trace.

Runtime Analysis


The following documentation describes the runtime analysis application in the ABAP Workbench. The runtime analysis provides an overview of the runtime of your source code, from individual statements up to complete transactions.

Choose Test -> Runtime Analysis from the menu, or transaction SE30, to start the runtime analysis. On the initial screen you will find the four main functions of this tool, each of which can be activated using the appropriate pushbutton.

SQL Trace


The Performance Trace allows you to record database accesses, locking activity, remote calls of reports and transactions, and table buffer calls from the SAP system in a trace file, and to display the performance log as a list. The Performance Trace additionally offers extensive support for analyzing individual trace records in detail.

1. SQL Trace: This allows you to monitor the database access of reports and transactions.
See also SQL Trace Analysis.

2. Enqueue Trace: This allows you to monitor the locking system.
See also Enqueue Trace Analysis.

3. RFC Trace: This provides information about Remote Function Calls between instances.
See also RFC Trace Analysis.

4. Table buffer trace: You can use this to monitor database calls of reports and transactions made via the table buffer. See also, Table Buffer Trace Analysis.

Hope this answers your question.
Award points if useful, otherwise get back to me.
Aleem.


The measurement options on the SE30 initial screen include:

· Measurement in dialog status

· Measurement of an external session

· Planning a measurement

· Selection of measurement restrictions

· Analyzing measurement results

For large applications, it is recommended that you first analyze the entire application and then examine the hit list.