‎2008 May 31 3:41 PM
hello,
can you tell me the steps to improve performance when my program is running slowly?
thank you very much.
Regards,
vijaya.
‎2008 May 31 3:52 PM
Hi,
INTERNAL TABLES
  Read Table
  Sort Statement
  INITIAL SIZE <n>
  Append Statement
  Collect Statement
  DELETE
  MOVE
  Explicit Work area
  Exit
  Where
  Nested Loops
  Check
  Control Break Statements
Other ABAP Statements
  If Condition
  Case Statement
  Describe Statement
  Type Conversions
  Type 'P' fields
  Field Length
  Arithmetic Operators
  Use of tables
  ASSIGN
  AT PFnn
  CHECK
  User Interface (GUI)
  CHECK, EXIT, REJECT, STOP & CONTINUE
  Subroutines
  Field-Symbols
  Miscellaneous Methods to Enhance Performance
DATABASE SELECTION
  Techniques for efficient Database selection
  Reduce Information transfer to Application server
  Keep the number of data records small
  Some Guidelines while creating Indexes
  Reduce the load on the database engine
  Avoid unnecessary database accesses
  SQL Tuning Checklist
  Scenario
INDEXES IN DATABASE TABLES
  Overview
  Table Access Strategies
  Index Unique Scan
  Index Range Scan
  Full Table Scan
  Concatenation
TABLE BUFFER
  Introduction
  Types of Buffering
  Buffer Synchronization
  Degree of Invalidation
  Bypassing Buffer
  Buffering Strategy - Technical Criteria
  Buffering Strategy - Semantic Criteria
  Buffering Roadmap
  Buffering in the Program
NULL VALUES
  Definition
  Null Values & SELECT
  Null values - Aggregate functions
  COMPUTING with Null Values
  Null values - Indexes
  Null values - Sort
  Null values - Buffer
TESTING TIPS
  GENERAL
  DATA POPULATION
  SELECT STATEMENTS
  ARITHMETIC ERRORS
  SELECTION-SCREEN
  OUTPUT LAYOUT
  FUNCTIONALITY
  MESSAGES
INTERNAL TABLES
The following are guidelines for working with internal tables:
Read Table
The READ TABLE statement fetches a record from an internal table using either an implicit or an explicit key. When no key or index is specified as the search criterion, the key fields in the table's header line are used implicitly for the search. SAP recommends explicit search criteria, where the key or index used to locate the record is specified in the statement; this is faster than an implicit search.
When reading a single record from an internal table, READ TABLE WITH KEY is not a direct read. Therefore, SORT the table first and use READ TABLE WITH KEY ... BINARY SEARCH.
Do not use the following statement:-
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
Select matnr werks from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Loop at I_mara.
-
Read table i_marc with key matnr = I_mara-matnr.
-
Endloop.
Instead use the following statement:-
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
Select matnr werks from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Sort I_marc by matnr.
Loop at I_mara.
-
Read table i_marc with key
matnr = I_mara-matnr
binary search.
-
Endloop.
Always read only the required fields by using TRANSPORTING f1 f2 ...
Do not use the following statement:-
data: i_vbak like vbak occurs 0 with header line.
data: i_vbap like vbap occurs 0 with header line.
Sort I_vbap by vbeln.
Loop at i_vbak.
-
read table i_vbap with key
vbeln = i_vbak-vbeln binary search.
If sy-subrc = 0 and i_vbap-posnr = '00010'.
-
-
endif.
-
Endloop.
Instead use the following statement:-
data: i_vbak like vbak occurs 0 with header line.
data: i_vbap like vbap occurs 0 with header line.
Sort i_vbap by vbeln.
Loop at i_vbak.
-
read table i_vbap transporting posnr with key
vbeln = i_vbak-vbeln binary search.
If sy-subrc = 0 and i_vbap-posnr = '00010'.
-
-
endif.
-
Endloop.
Use WITH KEY when reading from an internal table.
Do not use the following statement:-
Loop at i_mara.
-
-
i_marc-matnr = i_mara-matnr.
read table i_marc.
-
-
Endloop.
Instead use the following statement:-
Loop at i_mara.
-
-
read table i_marc with key
matnr = i_mara-matnr.
-
-
Endloop.
Sort Statement
SORT <itab> [ASCENDING|DESCENDING] [AS TEXT] [STABLE]
BY <f1> [ASCENDING|DESCENDING] [AS TEXT]
...
<fn> [ASCENDING|DESCENDING] [AS TEXT].
[AS TEXT] sorts character fields alphabetically, according to the current text environment.
A linear search on an internal table with a large record size or a large number of records is time-consuming.
It is good practice to sort the table and then locate records with a binary search. The difference is felt especially in a production environment, where the live data volume is usually huge.
When an internal table is sorted without specifying key fields, the default key is used. Specify the fields to sort by; this is more efficient than a plain SORT itab.
Sorting an internal table, though done in memory, is an unnecessary load on the processor. Try to fill the table with values that are already sorted; if the table cannot be filled in sorted order (for example with APPEND ... SORTED BY), then sorting the internal table is inevitable.
APPEND <wa> to <itab> SORTED BY <f>
Use the SORTED BY clause when:
There are fewer than 100 rows, or
the rows already arrive in sorted order.
If an internal table must be sorted and the above cannot be satisfied, fill the table in unsorted order and then sort specifying key fields.
To remove duplicates, if required, collect the rows from the sorted table into an auxiliary table.
As of release 4.0 there are new types of internal tables like SORTED and HASHED which can be effectively used to further reduce the search time and processing time.
Specify key fields when sorting an internal table: SORT int_tab BY col1 col2.
Do not use the following statement:-
Data : begin of i_mara occurs 0,
matnr like mara-matnr,
ersda like mara-ersda,
ernam like mara-ernam,
laeda like mara-laeda,
aenam like mara-aenam,
vpsta like mara-vpsta,
End of i_mara.
-
-
Sort i_mara.
Loop at i_mara.
-
Endloop.
Instead use the following statement:-
Data : begin of i_mara occurs 0,
matnr like mara-matnr,
ersda like mara-ersda,
ernam like mara-ernam,
laeda like mara-laeda,
aenam like mara-aenam,
vpsta like mara-vpsta,
End of i_mara.
-
-
Sort i_mara by matnr aenam.
Loop at i_mara.
-
Endloop.
INITIAL SIZE <n>
This size is not part of the data type of the internal table and does not affect type checking. Use this addition to reserve memory space for <n> table lines when you declare the table object.
When this initial area is full, the system allocates twice as much additional space, up to a limit of 8 KB. Further memory areas of 12 KB each are then allocated.
You can usually leave it to the system to work out the initial memory requirement; the first time you fill the table, little memory is used. A sensible value, depending on the line width, lies in the range 16 <= <n> <= 100.
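For example, an internal table expected to hold around 100 lines might be declared as follows (a sketch; the tables are the standard MARA/MARC only for illustration):

```abap
* Old-style declaration: OCCURS reserves the initial block of lines.
Data: i_mara like mara occurs 100 with header line.

* Release 4.x table-type syntax uses the INITIAL SIZE addition.
Data: i_marc type table of marc initial size 100.
```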
Append Statement
Use APPEND LINES OF <itab1> [FROM <n1>] [TO <n2>] TO <itab2>
when appending a contiguous block of lines from one internal table to another.
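A minimal sketch (table names illustrative): copying a block of lines in one statement avoids a row-by-row loop.

```abap
* Copy lines 5 to 10 of i_mara_all into i_mara_subset in one
* kernel operation instead of LOOP ... APPEND ... ENDLOOP.
Append lines of i_mara_all from 5 to 10 to i_mara_subset.
```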
Collect Statement
When working with internal tables, remember that the COLLECT statement can be very CPU-intensive. When collecting into an internal table with more than 50-60 entries, use the following alternative code.
READ TABLE ITAB WITH KEY NEW-KEY BINARY SEARCH.
CASE SY-SUBRC.
WHEN 0. NEW-AMT = NEW-AMT + ITAB-AMT.
MOVE ...
MODIFY ITAB INDEX SY-TABIX.
WHEN 4. MOVE ...
INSERT ITAB INDEX SY-TABIX.
WHEN 8. MOVE ...
APPEND ITAB.
ENDCASE.
For internal tables with fewer than 50 entries, COLLECT is more efficient.
Do not perform any of the following operations on an internal table that is populated using the COLLECT statement:
APPEND, INSERT, MODIFY, DELETE, SORT, REFRESH, IMPORT, or SELECT INTO TABLE/APPENDING TABLE. They invalidate the COLLECT logic.
DELETE
Where appropriate, use the following syntax so that the task of deleting a set of lines is transferred to the kernel:
DELETE itable [FROM ...] [TO ...] WHERE ...
If possible, combine WHERE with FROM ... and/or TO ... to enhance performance.
The performance gain of
DELETE itable WHERE ...
over
LOOP AT itable WHERE ...
DELETE itable.
ENDLOOP.
increases with the number of entries the internal table contains and the number of lines to be deleted.
When deleting adjacent duplicates, use COMPARING with explicit fields as far as possible.
E.g. DELETE ADJACENT DUPLICATES FROM <itable>
COMPARING f1 f2 ...
MOVE
When records a and b have exactly the same structure, MOVE a TO b is more efficient than MOVE-CORRESPONDING a TO b.
MOVE BSEG TO *BSEG.
Is better than
MOVE-CORRESPONDING BSEG TO *BSEG.
Explicit Work area
Avoid unnecessary assignments to the header line when using internal tables with a header line.
For example,
"APPEND wa TO itab."
is approximately twice as fast as
"itab = wa. APPEND itab.u201D
The same applies to COLLECT and INSERT.
Whenever possible, use statements that have an explicit work area.
APPEND workarea TO itable.
INSERT workarea INTO itable.
COLLECT workarea INTO itable.
MODIFY itable FROM workarea.
READ TABLE itable INTO workarea.
LOOP AT itable INTO workarea.
Exit
Use the EXIT statement to leave the loop as soon as all required processing is done.
Where
Use a WHERE clause when processing internal tables with the LOOP statement.
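For example, the WHERE condition lets the kernel skip non-matching lines instead of testing every row in ABAP code (the field value is illustrative):

```abap
* Process only the orders of one sales organization.
Loop at i_vbak where vkorg = 'IJI1'.
  " process matching lines only
Endloop.
```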
Nested Loops
Avoid nested loops over internal tables; they consume processor time.
If nested loops are unavoidable, then:
Avoid the following:
SORT: I_vbak by vbeln, I_vbap by vbeln.
Loop at I_vbak.
Loop at I_vbap where vbeln = I_vbak-vbeln.
u2026
Endloop.
Endloop.
Instead use:
SORT: I_vbak by vbeln, I_vbap by vbeln.
Loop at I_vbak.
Read table I_vbap with key vbeln = I_vbak-vbeln
binary search transporting no fields.
Loop at I_vbap from sy-tabix.
If I_vbap-vbeln <> I_vbak-vbeln.
Exit.
EndIf.
Endloop.
Endloop.
Check
Do not use the CHECK construct within LOOP ... ENDLOOP. The effect of a failed CHECK varies with the enclosing structure: within LOOP ... ENDLOOP it moves on to the next loop pass, whereas in FORM ... ENDFORM it terminates the subroutine. The outcome may therefore not be what was intended. It is always safe to use IF ... ENDIF.
Do not use the following statement:-
Loop at i_mara.
Check i_mara-matnr = '121212'.
-
-
Endloop.
Instead use the following statement:-
Loop at i_mara.
If i_mara-matnr = '121212'.
-
-
Endif.
Endloop.
Control Break Statements
SAP recommends not using control break events inside loop statements that have a WHERE clause, because the results cannot be predicted reliably. Nevertheless it is common practice to do so; it is better to avoid it. If the logic is too complicated without the events, populate a new internal table containing only the required records and then use the events with that table.
Do not use AT events in a LOOP ... ENDLOOP that has a FROM, TO or WHERE condition.
Do not use the following statement:-
Loop at i_vbak where
vbeln > '9000123' and vbeln < '9000132'.
At new vbeln.
-
-
EndAt.
-
-
Endloop.
Instead use the following statement:-
Loop at i_vbak where
vbeln > '9000123' and vbeln < '9000132'.
Append i_vbak to i_vbak_new.
Endloop.
Loop at i_vbak_new.
At new vbeln.
-
-
EndAt.
-
-
Endloop.
Other ABAP Statements
The following should be avoided in the ABAP/4 statements to improve efficiency:
If Condition
When coding IF tests, nest the conditions so that the outer conditions are those most frequently true. This ensures minimal code execution. Similarly, for logical expressions with AND, place the condition most likely to be false first; with OR, place the condition most likely to be true first.
Case Statement
When testing a field "equal to" something, one can use either nested IFs or a CASE statement. CASE is better for two reasons: it is easier to read, and beyond about five alternatives it is also faster.
This is also documented in SAP Tips & Tricks: the time measured for the same logical unit of code using IF ... ELSEIF ... ENDIF is almost twice that of CASE ... ENDCASE.
It is therefore advisable to use CASE ... ENDCASE wherever possible.
Note: use CASE in place of IF whenever a field is checked against more than two values.
Describe Statement
DESCRIBE TABLE <itable> [LINES <l>] [OCCURS <n>] [KIND <k>]
To find out how many entries are in an internal table use DESCRIBE.
DESCRIBE TABLE ITAB LINES CNTLNS.
Is more efficient than
LOOP AT ITAB.
CNTLNS = CNTLNS + 1.
ENDLOOP.
Type Conversions
The processor takes additional time to convert from one data type to another, so avoid unnecessary type conversions.
Type u2018Pu2019 fields
Do not use packed ('P') data variables unless they are needed to avoid rounding errors.
Field Length
To find out the length of a field use the string length function.
FLDLEN = STRLEN (FLD).
Is more efficient than
IF FLD CP '* #'.
ENDIF.
FLDLEN = SY-FDPOS.
Arithmetic Operators
Use symbols for arithmetic operators instead of characters for better performance.
Example: use <> instead of NE.
Use of tables
Internal tables vs. field groups:
Using internal tables is generally more efficient than using field groups and should be preferred when possible.
However, when the data volume is very large, field groups manage memory more efficiently than internal tables.
ASSIGN
Use of field symbols is discouraged unless necessity dictates. Field symbols, when used, should be documented in program comments when defined and whenever used. Always have a type declaration for field symbols.
AT PFnn
Use AT USER-COMMAND instead of AT PFnn. This ensures a proper response to the user command and is more legible.
CHECK
Use CHECK statements whenever possible instead of nested IFs.
User Interface (GUI)
GUI statuses should be used for interactive report programs and online programs. Use menu bar linking whenever possible to build consistent GUI statuses for screens within a module pool.
CHECK, EXIT, REJECT, STOP & CONTINUE
Use these statements to suspend processing and/or skip remaining unnecessary processing for improved performance.
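As a sketch of the difference in scope (the field values are illustrative): CHECK and CONTINUE skip the rest of the current loop pass, while EXIT leaves the loop entirely.

```abap
Loop at i_vbap.
  Check i_vbap-posnr > '000010'.   " skip passes that fail the condition
  If i_vbap-vbeln is initial.
    Continue.                      " skip the rest of this pass
  Endif.
  If i_vbap-vbeln > '9000132'.
    Exit.                          " leave the loop entirely
  Endif.
  " process the line
Endloop.
```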
Subroutines
Whenever values are passed to a subroutine, assign type declarations to the formal parameters. If no specific type is possible, use TYPE ANY. This improves performance, is recommended by SAP, and is checked by the extended program check (EPC).
E.g. Do not use the following statement:-
perform type_test using p_var1 p_var2 p_var3.
-
form type_test using p_var1 p_var2 p_var3.
-
endform.
Instead use the following statement:-
perform type_test using p_var1 p_var2 p_var3.
-
form type_test using p_var1 type c
p_var2 type any
p_var3 like mara-matnr.
-
endform.
When modularizing your program, use FORMS rather than FUNCTIONS whenever practical. A statement of PERFORM FORM <..> requires significantly less resources and time than does a CALL FUNCTION <..>.
Field-Symbols
This is similar to type declarations for subroutine parameters: field symbols should likewise always be declared with a type.
E.g. Do not use the following statement:-
field-symbols: <fs1>, <fs2>, <fs3>.
Instead use the following statement:-
field-symbols: <fs1> type c,
<fs2> like mara-matnr,
<fs3> like marc-werks.
Miscellaneous Methods to Enhance Performance
1. Always declare variables with explicit types so that SAP does not have to perform type conversions.
2. Use a WHILE loop rather than DO/ENDDO.
The DO ... ENDDO loop has no terminating condition; it must be handled explicitly within the loop construct, which has some effect on performance. A WHILE ... ENDWHILE loop, on the other hand, checks its condition before entering the loop, so it performs better and is also safer to use.
Do not use the following statement:-
Do.
If count > 20.
Exit.
Endif.
-
-
Count = count + 1.
Enddo.
Instead use the following statement:-
While ( count < 20 ).
-
-
Endwhile.
3. Remove break-points from the code.
Break-points are often hard-coded in a program during testing; some are soft, some hard-coded, and some user-specific. When left in the program, they are transported and cause production problems later.
DATABASE SELECTION
Techniques for efficient Database selection
The following basic guidelines need to be considered while retrieving data from the database:
Avoid unnecessary database accesses
Keep the number of data records small
Reduce information transfer to application server
Reduce the load on the database engine
Use database locks and SAP enqueue.
The basic thing to keep in mind is:
Get the data in a single access wherever possible, thereby reducing network traffic.
Reduce Information transfer to Application server
Specify the individual column (field) names of the data you want to retrieve in the Select/ Get statement.
E.g. Use: SELECT single MATNR ERSDA
INTO (V_MATNR, V_ERSDA)
FROM MARA
WHERE MATNR EQ '000101234567890123'.
Instead of: SELECT single * FROM MARA
WHERE MATNR EQ '000101234567890123'.
In this example, only 2 fields are transported back to the work area for each matching record. This greatly reduces the amount of data transfer (network traffic), especially for tables that have a large number of fields or a large record size.
Similarly, Use: Get <dbtab> fields f1 f2 ... fn.
Instead of: Get <dbtab>.
Use the aggregate functions provided in the SELECT statement.
With these, the DBMS performs the computations instead of transferring all the data to the application server.
E.g.1 Use: SELECT COUNT (*)
FROM MARC
WHERE MATNR EQ '000101234567890123'.
SY-DBCNT will hold the number of matching records found.
Instead of: SELECT * FROM MARC
WHERE MATNR EQ '000101234567890123'.
ADD 1 TO V_COUNT.
ENDSELECT.
E.g.2 Use: SELECT SUM (TRAME)
INTO V_TOTAL_IN_TRANS
FROM MARC
WHERE MATNR EQ '000101234567890123'.
Instead of: SELECT * FROM MARC
WHERE MATNR EQ '000101234567890123'.
V_TOTAL_IN_TRANS = V_TOTAL_IN_TRANS + MARC-TRAME.
ENDSELECT.
Use the UPDATE statement sparingly.
Update only the columns that have changed; do not overwrite the entire line.
Keep the number of data records small
Always use a WHERE clause in the corresponding SQL statement.
An application should read only those lines of the table that are necessary for its processing. Therefore formulate filter conditions as part of the WHERE clause rather than through CHECK statements.
E.g. Always use option 1 instead of option 2.
Option 1: SELECT MATNR
INTO TABLE IT_MARA
FROM MARA
WHERE MATNR LIKE '23%'.
Option 2: SELECT MATNR
INTO MARA-MATNR
FROM MARA.
CHECK MARA-MATNR+0(2) NE '23'.
MOVE MARA-MATNR TO IT_MARA-MATNR.
APPEND IT_MARA.
CLEAR IT_MARA.
ENDSELECT.
Use the indexes of the relevant database tables to make your WHERE clause more efficient, by checking all index fields for equality (EQ or =) and combining the conditions with the AND operator.
E.g.1 Use: Select * From dbtab
Where field1 = value1 and
field2 = value2.
...
EndSelect.
Note: The fields field1, field2 etc. are the primary key fields of the table; the primary index is used for the search.
E.g.2 Assume an index on the table EXTAB containing the fields FIELD1, FIELD2,
FIELD3 and FIELD4, in this sequence. The table is accessed using the SELECT statement:
SELECT * FROM EXTAB
WHERE FIELD1 = X1 AND
FIELD2 = X2 AND
FIELD4= X4.
Since FIELD3 is not specified, the sorting of the index helps only up to FIELD2. If the database system accesses the data using this index, it can quickly find all records for which FIELD1 = X1 and FIELD2 = X2 hold, but it must then filter out the records with FIELD4 = X4 from that set.
If values for FIELD3 can be supplied, the index can be used to its full extent:
SELECT FIELD3 FROM BSPTAB
INTO TABLE ITAB
WHERE FIELD1 = VALUE1.
SELECT * FROM EXTAB
FOR ALL ENTRIES IN ITAB
WHERE FIELD1 = X1 AND
FIELD2 = X2 AND
FIELD3 = ITAB-FIELD3 AND
FIELD4= X4.
...
ENDSELECT.
Define a database view via the ABAP/4 Dictionary, which identifies the tables and fields that you are interested in. Then SELECT from the view rather than the database table(s).
E.g. SELECT *
FROM Z_M_MARC
WHERE MATNR EQ '000101234567890123'.
This is preferable when multiple programs require the same data from the same table(s). If the data set required by these programs changes, the developer need only change the definition of the database view rather than modify every SELECT statement, i.e. greater reusability and maintainability.
Avoid complex WHERE clauses, since the system has to break them down into several individual statements for the database system.
If all you are interested in is retrieving all possible values for a given table field (i.e. you do not want duplicates), then SELECT DISTINCT will provide that capability and will also restrict the amount of data passed back to the application to only the set of unique values.
Instead of using DISTINCT, the application might remove the duplicates by itself, e.g. using DELETE ADJACENT DUPLICATES. This should only be used if a small number of duplicates are expected.
E.g. SELECT DISTINCT WERKS INTO V_PLANT FROM MARC.
<Process plant>
ENDSELECT.
In this example, a value will be returned for each unique value of the field WERKS (Plant). If there are 3 unique plants in table MARC, <process plant> will be performed 3 times only, with each value of WERKS being placed into the work area V_PLANT. Each plant may, in reality, be present on multiple records, but this method returns only unique values.
Be careful while restricting data on NULL values.
You will not find a record in a database table if the field contains a null value and you use any of the following WHERE clauses:
WHERE fieldn = 0.
WHERE NOT fieldn = 0.
WHERE fieldn < 5.
WHERE fieldn > 5.
WHERE fieldn = SPACE.
WHERE fieldn <> SPACE.
If you want to read records with null values, you must add IS NULL to your WHERE clause:
WHERE fieldn = 0 OR fieldn IS NULL.
WHERE fieldn < 5 OR fieldn IS NULL.
WHERE fieldn = SPACE OR fieldn IS NULL.
Where possible, avoid accessing the same data more than once (for example, by using SELECT before an UPDATE or DELETE statement).
Never use nested SELECT loops. Instead consider views, joins, or a SELECT statement with the FOR ALL ENTRIES addition.
E.g. Use: Select * From dbtab1
Into table itab
Where field1 = value1.
Select * from dbtab2
Into table itab2
For all entries in itab
Where field2 = itab-field2.
Instead of: Select * From dbtab1 where field1 = value1.
Select * from dbtab2 where field2 = dbtab1-field2.
...
EndSelect.
EndSelect.
Some Guidelines while creating Indexes
The speed of access provided by this index is heavily dependent on how closely the dataset selected via the index corresponds to the dataset to be finally selected.
Consequently, the sequence of the fields in the index is decisive in determining the speed with which data records are accessed.
Fields that frequently contain constant values in selections should be located at the beginning of the index. An index assists a selection only up to the first unspecified field. Include a field in an index only if it significantly restricts the set of data selected.
E.g.: The following selection is often made from an address file ADRTAB:
SELECT * FROM ADRTAB WHERE TITLE = 'Prof.' AND
NAME = X AND
FIRSTNAME = Y.
In an index containing the fields NAME, FIRSTNAME and TITLE, the field TITLE would rarely narrow down further the records specified via name and first name. It would not be useful, therefore, to include this field in such an index. An index containing only the field TITLE might be useful, for example, if it is often necessary to select all the professors.
The primary key of a database table is automatically its primary index. If selections are frequently made involving attributes that are not contained in the primary index, you can also create secondary indexes for a database table in the ABAP Dictionary. However, certain considerations need to be made while creating secondary indexes.
Please note that any additional indexes you create may place an additional load on the system, since they have to be adjusted each time a change is made to the table contents. Every additional index therefore slows down the insertion of records in the table. Tables in which entries are frequently written should generally therefore have only a few indexes.
Even if a suitable index exists for a selection, it may not always be used. The index selected depends on the database system optimizer in use. You should usually therefore check whether an index is in use.
The indexes for a table should therefore be as distinct as possible, i.e. they should have as few fields in common as possible. If two indexes for a table have many fields in common, this can complicate the optimizer's choice of the most selective index.
As of release 4.5 it is possible to specify, in the SELECT statement itself, the index that should be used; the addition %_HINTS is used for this. This can improve performance because the index is known beforehand and need not be determined by the optimizer. It also helps in cases where none of the indexes exactly matches the fields in the WHERE clause of the SELECT statement.
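As a sketch only (the hint text is database-specific, and the secondary index name MARA~Z01 here is hypothetical), an Oracle index hint might look like this:

```abap
* Force the (hypothetical) secondary index MARA~Z01 on Oracle.
Select * from mara
  Into table i_mara
  Where ernam = 'SMITH'
  %_HINTS ORACLE 'INDEX("MARA" "MARA~Z01")'.
```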
If possible, avoid the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead. Use NOT only where it avoids useless transfers between the database and the application.
Reduce the load on the database engine
If all the primary key fields of a table can be supplied in the WHERE clause, it is good practice to use SELECT SINGLE rather than SELECT ... UP TO 1 ROWS.
E.g. Do not use the following statement:-
Select matnr up to 1 rows from mara
into mara-matnr
where matnr = itab-matnr.
Instead use the following statement:-
Select single matnr from mara
into mara-matnr
where matnr = itab-matnr.
Cluster and pooled tables store data for several data objects, which makes them convenient access points, but reading them directly has an adverse effect on performance. It is good practice to avoid reading cluster and pooled tables directly.
Instead, identify the transparent tables that hold the required data. Even if the data must be gathered from more than one table, this is much faster than retrieving it from a cluster or pooled table.
E.g. Do not use the following statement:-
Select belnr buzei audat
From bseg
Into ( bseg-belnr,
bseg-buzei,
bseg-audat )
Where belnr in s_cust.
Endselect.
Instead use the following statement:-
Select belnr buzei audat
From bsak
Into ( bsak-belnr,
bsak-buzei,
bsak-audat )
Where belnr in s_cust.
Endselect.
Use the INTO TABLE syntax and push data directly into internal tables rather than using the row-by-row APPEND approach. Again, this is an example of thinking in terms of 'sets' rather than 'rows'. Old-fashioned databases only knew about rows; modern relational databases work better when dealing with sets of data. Always try to work in sets.
The order of the fields in the where clause of the select must be in the same order as the order in the Index. (Oracle limitation but routinely should be used against all databases).
The addition INTO CORRESPONDING FIELDS OF in the INTO clause of the SELECT statement is worthwhile only for large amounts of data; otherwise the time required to compare the field names is too great.
There are two ways to get ordered data. The data can be selected using the addition ORDER BY from the database. This is advisable for large amounts of data (>10 MB) because the database system is the only component in R/3 with large resources. Ensure that the ORDER BY can use an index. When a small amount of data is to be sorted, it can be done with ABAP statement SORT.
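Both options side by side (a sketch; note that ORDER BY PRIMARY KEY requires SELECT *):

```abap
* Large result set: let the database sort, here along the primary index.
Select * from vbak
  Into table i_vbak
  Where vbeln in s_vbeln
  Order by primary key.

* Small result set: fetch unsorted and sort in ABAP.
Select vbeln vkorg from vbak
  Into table i_vbak2
  Where vbeln in s_vbeln.
Sort i_vbak2 by vbeln.
```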
Do not use a SELECT statement to confirm existence before UPDATE or DELETE is used. Directly use UPDATE or DELETE and then check the result with SY-SUBRC.
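For example (field values illustrative):

```abap
* Update directly; SY-SUBRC shows whether a matching record existed,
* so no prior SELECT is needed.
Update mara set laeda = sy-datum
  Where matnr = '000101234567890123'.
If sy-subrc <> 0.
  " no matching record - handle the error here
Endif.
```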
Always use collective Updates. Do not use single line updates.
E.g. Use: Update dbtab
Set field = field + delta
Where field1=value1.
Instead of: Select * from <dbtab>
Where field1=value1.
dbtab-field = dbtab-field + delta
update dbtab.
EndSelect.
As far as possible, avoid using INSERT, APPEND, MODIFY, UPDATE or DELETE inside a LOOP ... ENDLOOP. Use the array-operation variants instead and perform the INSERT, APPEND, MODIFY, UPDATE or DELETE in one go.
E.g. Use: INSERT dbtable FROM TABLE itable
ACCEPTING DUPLICATE KEYS.
IF NOT SY-SUBRC IS INITIAL.
... Error handling
ENDIF.
Instead of: LOOP AT itable.
INSERT INTO dbtable VALUES itable.
ENDLOOP.
When multiple records are to be inserted into a table, then use the first form as shown above rather than loop into the internal table and inserting/updating single records.
Be careful with range tables: the size of the generated SQL statement might exceed the database limit, depending on the number of rows in the range table!
E.g.: Ranges: ran_field1 for dbtable1-field1.
Select * from dbtable1 into table itab1.
ran_field1-sign = 'I'.
ran_field1-option = 'EQ'.
Loop at itab1.
Move itab1-field1 to ran_field1-low.
Append ran_field1.
Endloop.
Select * from dbtable2 where field2 in ran_field1.
...
EndSelect.
Avoid unnecessary database accesses
Don't execute identical SELECT statements repeatedly.
E.g.1 To get Material Descriptions
Use: LOOP AT IT_VBAP.
READ TABLE IT_MAKT WITH KEY MATNR = IT_VBAP-MATNR.
IF SY-SUBRC NE 0.
SELECT MATNR MAKTX FROM MAKT
APPENDING TABLE IT_MAKT
WHERE MATNR EQ IT_VBAP-MATNR.
READ TABLE IT_MAKT WITH KEY MATNR = IT_VBAP-MATNR.
ENDIF.
MOVE IT_MAKT-MAKTX TO IT_VBAP-MAKTX.
MODIFY IT_VBAP.
ENDLOOP.
Instead of: LOOP AT IT_VBAP.
SELECT SINGLE MAKTX INTO IT_VBAP-MAKTX
FROM MAKT
WHERE MATNR EQ IT_VBAP-MATNR.
ENDLOOP.
E.g.2 Use: Select vbeln vkorg from vbak
Into table i_vbak
Where vbeln in s_vbeln.
Select hkunnr from kunh
Into table i_kunh
Where vkorg in ('IJI1', 'IJI2', 'IJI3', 'IJI4').
Loop at i_vbak.
Case i_vbak-vkorg.
When 'IJI1'.
Read table i_kunh with key vkorg = i_vbak-vkorg.
-
-
When 'IJI2'.
Read table i_kunh with key vkorg = i_vbak-vkorg.
-
-
When 'IJI3'.
Read table i_kunh with key vkorg = i_vbak-vkorg.
-
-
When 'IJI4'.
Read table i_kunh with key vkorg = i_vbak-vkorg.
-
-
Endcase.
Endloop.
Instead of: Select vbeln vkorg from vbak
Into table i_vbak
Where vbeln in s_vbeln.
Loop at i_vbak.
Case i_vbak-vkorg.
When 'IJI1'.
Select hkunnr from kunh
Into table i_kunh
Where vkorg = 'IJI1'.
-
-
When 'IJI2'.
Select hkunnr from kunh
Into table i_kunh
Where vkorg = 'IJI2'.
-
-
When 'IJI3'.
Select hkunnr from kunh
Into table i_kunh
Where vkorg = 'IJI3'.
-
-
When 'IJI4'.
Select hkunnr from kunh
Into table i_kunh
Where vkorg = 'IJI4'.
-
-
Endcase.
Endloop.
Using Database Buffering:
Saving database tables in local buffers can save a considerable amount of time. Wherever possible, use buffered data, and use the BYPASSING BUFFER addition only where absolutely necessary, e.g. for a table that is updated frequently or one that is read only once.
Note that the following additions automatically bypass the buffer:
SELECT ... BYPASSING BUFFER
Any SELECT from a view (except a projection view)
SELECT FOR UPDATE...
Any aggregate function (COUNT, MIN, MAX, SUM, AVG)
Example: SELECT MIN (field1) FROM dbtable1 WHERE ...
SELECT DISTINCT...
WHERE-clause contains ... IS (NOT) NULL
ORDER BY (other than PRIMARY KEY)
Any Native SQL statement.
Inner Joins:
An INNER join is a join where you expect that there will always be a relationship between all of the tables in the set; if any of the relationships is missing, the row is not returned.
E.g. Consider 3 imaginary tables.
Customer
MANDT (Primary Key)
CUST_CODE. (Primary Key)
Invoice
MANDT (Primary key)
INV_NUM (Primary key)
CUST_CODE (Foreign key to Customer)
Invoice_line
MANDT (PK)
INV_NUM (PK)
INV_LINE_NUM (PK)
You could construct an INNER join over these tables as follows
Customer          Invoice           Invoice_line
MANDT             MANDT             MANDT
                  INV_NUM    ===    INV_NUM
CUST_CODE  ===    CUST_CODE
Say you then have to report off this information.
Your report would contain ONLY those customers that have invoices that have at least one line. Customers without invoices would not be included, nor would customers whose invoices have no lines.
This is the principle of the INNER join. All the data has to be there and be related.
Database views support ONLY the inner join concept.
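In ABAP Open SQL, the inner join over these tables could be sketched as follows. Note that Customer, Invoice and Invoice_line are the imaginary tables from the example, not real Dictionary objects:

```abap
* Sketch over the hypothetical tables above.
Data: Begin of i_result occurs 0,
        cust_code    like customer-cust_code,
        inv_num      like invoice-inv_num,
        inv_line_num like invoice_line-inv_line_num,
      End of i_result.

* Only customers with invoices that have at least one line are returned
Select c~cust_code i~inv_num l~inv_line_num
  Into table i_result
  From customer as c
  Inner join invoice as i
    On i~cust_code = c~cust_code
  Inner join invoice_line as l
    On l~inv_num = i~inv_num.
```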
Outer Joins:
In the majority of cases inner joins will actually be the requirement, but in the minority of cases you will want to use an outer join. Outer joins are useful where you may need all records from a certain table, that meet the selection criteria, and all data from another table(s), IF it exists. But if the latter data does not exist, you still want to get the first table data.
E.g. Using our imaginary tables again.
Suppose the requirement is not for an invoice report, but for a customer report showing outstanding invoices. You want to see ALL customers, so you can't use an inner join, because then you would not see customers that had no invoices. (Incidentally, this also rules out a database view, since database views only support inner joins.) So you would construct an outer join relationship as under:
Customer          Invoice           Invoice_line
MANDT             MANDT             MANDT
                  inv_num    ===    inv_num
cust_code ---->   cust_code
In this situation you would get all customers regardless, together with their invoices. You would only see the invoices that had at least one line, as the join between invoice and invoice_line is still an inner join. If you wanted to be absolutely sure, you could code a double outer join as under:
Customer          Invoice           Invoice_line
MANDT             MANDT             MANDT
                  inv_num   ---->   inv_num
cust_code ---->   cust_code
In this situation you would see all the invoices, even if they had no lines.
In reality in the above situation you would seldom code such a well defined relationship as invoice - invoice_line using an outer join. If an invoice doesn't have a line then it is usually corrupt data.
Using outer joins places a heavy load on the database engine (although no heavier than if you coded a nested select), so be very parsimonious about the use of outer joins.
The performance of the join depends on the database optimizer used especially if there are more than two tables used for joins.
Try to give maximum number of conditions in the ON clause. This is because the ON conditions are evaluated first and the virtual table created as a result is the one on which the WHERE clause applies.
Use subqueries if possible. However, be careful while choosing the type of subquery to be written.
Example: Selects the flights for which at least one booking exists:
TABLES SFLIGHT.
SELECT * FROM SFLIGHT AS F
WHERE EXISTS
( SELECT * FROM SBOOK
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND FLDATE = F~FLDATE ) .
WRITE:/ SFLIGHT-CARRID,
SFLIGHT-CONNID,
SFLIGHT-FLDATE.
ENDSELECT.
This is most assuredly not a Good Thing. What this is actually doing is forcing the database to do a separate selection from SBOOK for each row in SFLIGHT. For a small number of records this would probably work OK, but once again, on larger datasets it would hang. For every record in SFLIGHT a separate database operation occurs on SBOOK. You might as well have coded the selections inside a loop.
This query is better written as follows;
Select distinct f~*
into table it_sflight
From sflight as f
inner join sbook as s on
s~carrid = f~carrid and
s~connid = f~connid and
s~fldate = f~fldate.
Following the above guidelines does not guarantee the most optimal selection from the database. Ultimately, it depends on the nature of the data and other associated factors.
SQL Tuning Checklist
Consolidated selections
No row by row processing
No check statements
No selections within loops
Selections are 'into' internal tables - no appends
SQL trace check completed
All programs checked to make sure that they are using the full index in the correct order.
Minimum or zero number of identical selects.
Use of appropriate Indexes
Scenario
Pick details from Sales Header table for the chosen parameters, pick the subsequent matching records from line items and write the line item details in the report. While writing the line items pick up the material description from material master.
This is how it is normally written:
Data: I_vbak like vbak occurs 0 with header line.
Data: I_vbap like vbap occurs 0 with header line.
Data: d_lines like sy-subrc.
Start-of-selection.
Refresh : I_vbak , I_vbap.
Select * from VBAK into table I_vbak
Where erdat eq p_erdat And
vkorg = p_vkorg And
vtweg = p_vtweg And
spart = p_spart.
Describe table I_vbak lines d_lines.
Check d_lines <> 0.
Select * from VBAP into table I_vbap
For all entries in I_vbak
Where vbeln = I_vbak-vbeln.
Describe table I_vbap lines d_lines.
Check d_lines <> 0.
End-of-selection.
Loop at I_vbap.
Clear makt.
Select single * from makt
where spras = sy-langu and
matnr = I_vbap-matnr.
Write :/ I_vbap-vbeln,
I_vbap-posnr,
I_vbap-matnr,
makt-maktx,
I_vbap-kwmeng.
Endloop.
Better Way :
Data : Begin of I_vbak occurs 0,
Vbeln like vbak-vbeln,
End of I_vbak.
Data : Begin of I_vbap occurs 0,
Vbeln like vbap-vbeln,
Posnr like vbap-posnr,
Matnr like vbap-matnr,
Kwmeng like vbap-kwmeng,
End of I_vbap.
Data : Begin of I_makt occurs 0,
Matnr like makt-matnr,
Maktx like makt-maktx,
End of I_makt.
Data : d_lines like sy-subrc.
Start-of-selection.
Refresh : I_vbak , I_vbap , I_makt.
Select vbeln from VBAK into table I_vbak
Where erdat = p_erdat And
vkorg = p_vkorg And
vtweg = p_vtweg And
spart = p_spart.
Clear d_lines.
Describe table I_vbak lines d_lines.
Check d_lines <> 0.
Select vbeln posnr matnr kwmeng from VBAP into table I_vbap
For all entries in I_vbak
Where vbeln = I_vbak-vbeln.
Clear d_lines.
Describe table I_vbap lines d_lines.
Check d_lines <> 0.
End-of-selection.
Sort I_vbap by vbeln matnr.
Select matnr maktx from makt into table I_makt
For all entries in I_vbap
Where spras = sy-langu And
matnr = I_vbap-matnr.
Sort I_makt by matnr.
Loop at I_vbap.
Clear I_makt.
Read table I_makt with key matnr = I_vbap-matnr binary search.
Write :/ I_vbap-vbeln,
I_vbap-posnr,
I_vbap-matnr,
I_makt-maktx,
I_vbap-kwmeng.
Endloop.
The above is a real-life scenario that has been optimized.
INDEXES IN DATABASE TABLES
Overview
When you create a database table in the ABAP Dictionary, you must specify the combination of fields that enable an entry within the table to be clearly identified. Position these fields at the top of the table field list, and define them as key fields.
After activating the table, an index is created (for Oracle, Informix, DB2) that consists of all key fields. This index is called a primary index. The primary index is unique by definition.
As well as the primary index, you can define one or more secondary indexes for a table in the ABAP Dictionary, and create them on the database. Secondary indexes can be unique or non-unique.
The example above shows two unique indexes (Index A and Index B) pointing to table X. Index records and table records are organized in data blocks.
If you dispatch an SQL statement from an ABAP program to the database, the program searches for the data records requested either in the database table itself (full table scan) or by using an index (index unique scan or index range scan). If all fields requested are found in the index using an index scan, the table records do not need to be accessed.
A data block shows the level of detail in which data is written to the hard disk or read from the hard disk. Data blocks may contain multiple data records, but a single data record may be spread across several data blocks.
Data blocks can be index blocks or table blocks. The database organizes the index blocks in the form of a multi-level B* tree. There is a single index block at root level, which contains pointers to the index blocks at branch level. The branch blocks contain either some of the index fields and pointers to index blocks at leaf level, or all index fields and a pointer to the table records organized in table blocks. The index blocks at leaf level contain all index fields and pointers to the table records from the table blocks.
The pointer that identifies one or more table records has a specific name. It is called, for example, ROWID for Oracle databases. The ROWID consists of the number of the database file, the number of the table block, and the row number within the table block.
The index records are stored in the index tree and sorted according to index field. This enables accelerated access using the index. The table records in the table blocks are not sorted.
An index should not consist of too many fields. Having a few very selective fields increases the chance of reusability, and reduces the chance of the database optimizer selecting an unsuitable access path.
Table Access Strategies
Index unique scan: The index selected is unique (primary index or unique secondary index) and fully specified. One or no table record is returned. This type of access is very effective, because a maximum of four data blocks needs to be read.
Index range scan: The index selected is unique or non-unique. For a non-unique index, this means that not all index fields are specified in the WHERE clause. A range of the index is read and checked. An index range scan may not be as effective as a full table scan. The table records returned can range from none to all.
Full table scan: The whole table is read sequentially. Each table block is read once. Since no index is used, no index blocks are read. The table records returned can range from none to all.
Concatenation: An index is used more than once. Various areas of the index are read and checked. To ensure that the application receives each table record only once, the search results are concatenated to eliminate duplicate entries. The table records returned can range from none to all.
Index Unique Scan
If, for all fields in a unique index (primary index or unique secondary index), WHERE conditions are specified with '=' in the WHERE clause, the database optimizer selects the access strategy index unique scan.
For the index unique scan access strategy, the database usually needs to read a maximum of four data blocks (three index blocks and one table block) to access the table record.
In the SELECT statement shown above, the table VVBAK is accessed. The fields MANDT and VBELN form the primary key, and are specified with '=' in the WHERE clause. The database optimizer therefore selects the index unique scan access strategy, and only needs to read four data blocks to find the table record requested.
Index Range Scan
In the example above, not all fields in the primary index of the table VVBAP (key fields MANDT, VBELN, POSNR) are specified with '=' in the WHERE clause. The database optimizer checks a range of index records and deduces the table records from these index records. This access strategy is called an index range scan.
To execute the SQL statement, the DBMS first reads a root block (1) and a branch block (2). The branch block contains pointers to two leaf blocks (3 and 4). In order to find the index records that fulfill the criteria in the WHERE clause of the SQL statement, the DBMS searches through these leaf blocks sequentially. The index records found point to the table records within the table blocks (5 and 6).
If index records from different index blocks point to the same table block, this table block must be read more than once. In the example above, an index record from index block 3 and an index record from index block 4 point to table records in table block 5. This table block must therefore be read twice. In total, seven data blocks (four index blocks and three table blocks) are read.
The index search string is determined by the concatenation of the WHERE conditions for the fields contained in the index. To ensure that as few index blocks as possible are checked, the index search string should be specified starting from the left, without placeholders ('_' or %). Because the index is stored and sorted according to the index fields, a connected range of index records can be checked, and fewer index blocks need to be read.
In the example above, an index range scan is performed on table VVBAP using the primary index. Since no WHERE condition is specified for the field VBELN in the SQL statement, the corresponding places for VBELN are filled with '_' in the index search string.
The range that can be used to select the index blocks (that is, the fully specified range starting from the left), therefore only consists of the field MANDT. Since many index records fulfill the condition MANDT = '001', a large number of index blocks are read, and their index records checked. All the index records are then filtered out that fulfill the condition POSNR = '0001'. The relevant index records point to the table records.
If a further WHERE condition had been specified in the SQL statement for a field that was not in the index, this WHERE condition would only have been evaluated after the table records had been read from the table blocks.
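The point about specifying the index search string from the left can be sketched against the VVBAP example used in this section (a training table with primary key MANDT, VBELN, POSNR; the literals are illustrative):

```abap
Data: it_vvbap type standard table of vvbap.

* Narrow index range scan: the leading index field after the client
* is fully specified, so a connected range of index records is read.
Select * from vvbap into table it_vvbap
  Where vbeln = '0000004711'.        "client handled implicitly

* Wide index range scan: VBELN is missing, so the index search string
* contains placeholders after MANDT and many index blocks must be
* read and checked before POSNR can filter the records.
Select * from vvbap into table it_vvbap
  Where posnr = '0001'.
```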
Due to outdated update statistics, database optimizer errors, missing indexes, or inappropriate ABAP coding, the database optimizer may select a completely unsuitable index, and then perform an unselective index range scan.
In the example above, WHERE conditions are only specified for MANDT and MATNR. In spite of this, the database optimizer chooses to perform an index range scan over the primary index (MANDT, VBELN, POSNR). Since there is only one unselective WHERE condition (MANDT = '001') to select the index blocks (due to the WHERE clause), a large number of index blocks are read. There is no further WHERE condition to filter the index records. The table records corresponding to all suitable index records are therefore read.
More data blocks are read if an unselective index range scan is performed than if the table is read sequentially. This is because a table block is read more than once if index records in different index blocks point to the same table record. The maximum number of data blocks to be read per SQL statement execution is thus calculated from the product of the number of table blocks and the number of index blocks.
All index blocks and table blocks read during an index range scan are stored in the data buffer at the top of an LRU (least recently used) list. This can lead to many other data blocks being forced out of the data buffer. Consequently, more physical read accesses become necessary when other SQL statements are executed.
Full Table Scan
If the database optimizer selects the full table scan access strategy, the table is read sequentially. Index blocks do not need to be read.
For a full table scan, the read table blocks are added to the end of an LRU list. Therefore, no data blocks are forced out of the data buffer. As a result, in order to process a full table scan, comparatively little memory space is required within the data buffer.
The full table scan access strategy is very effective if a large part of a table (for example, 5% of all table records) needs to be read. In the example above, a full table scan is more efficient than access using the primary index.
Concatenation
In the concatenation access strategy, one index is reused. Therefore, various index search strings also exist. An index unique scan or an index range scan can be performed for the various index search strings. Duplicate entries in the results set are filtered out when the search results are concatenated.
In the SQL statement above, a WHERE condition with an IN operation is specified over field VBELN. The fields MANDT and VBELN are shown on the left of the primary index. Various index search strings are created, and an index range scan is performed over the primary index for each index search string. Finally, the result is concatenated.
TABLE BUFFER
Introduction
The SAP System is scalable at the application server level. This means that you can add more application servers to the SAP System. The database, as the central data repository, is not scalable; that is, there can be only one database for each SAP System.
Table contents, repository objects, ABAP Dictionary objects, and so on, are data records on the database. Every time these objects are called, they must be read from the database and transferred to the application server. To avoid this, the objects are stored temporarily in the shared memory of the application servers in various buffer areas, for example:
 Table buffer (single-record buffer, generic buffer)
 Repository buffer
 Dictionary buffer
 Number range buffer
 Roll and paging buffers
SAP table buffering buffers table contents on the application server to eliminate the need for database accesses and thereby reduce the load on database resources (for example, CPU load and main memory).
There are two types of buffer: single-record buffer (technical name TABLP) and generic table buffer (technical name TABL). The single-record buffer contains data records from tables with single-record buffering. The generic table buffer contains data records from generically buffered or fully buffered tables.
Accesses to buffered data records take approximately 0.2 - 6 ms, whereas between 8 and 600 ms are required to read the data from the database. Using buffers reduces the waiting times for SAP work processes on the application server. Thus, the total system load is reduced.
Data records are sorted according to the primary index and stored in the table buffers. You cannot access buffered tables with secondary indexes. You cannot use SAP table buffering and index support of SQL statements in parallel. You can only use SAP table buffering as an alternative.
Types of Buffering
Full buffering:
The first time the table is accessed, the contents of the entire table are loaded into the table buffer (generic table buffer). Any further accesses to the table can be made through the table buffer.
Generic buffering:
To generically buffer a table, you must first define the generic area. The generic area is the first n key fields in the table. If an SQL statement with a specific instance of the generic area (for example, SELECT * FROM TAB1 WHERE KEY1 = '002' or SELECT * FROM TAB2 WHERE KEY1 = '002' AND KEY2 IN ('A','C')) is executed for a generically buffered table, the corresponding data records are loaded into the table buffer. Any further accesses with the same specific instances can be made through the table buffer.
Single record buffering:
Only single data records are read from the database and loaded into the table buffer (single-record buffer).
All types of buffering can be understood as variants of generic buffering with n key fields:
 Full buffering: n = 0
 Generic buffering: 0 < n < number of key fields
 Single-record buffering: n = number of key fields
Buffer Synchronization
In the first step of a change access to a buffered table, the relevant data records are changed on the database. The data records are then changed (work area mode) or invalidated (set mode) in the table buffer of the local SAP System instance (in the example, application server A). The table buffers of all other SAP System instances (in the example, application server B) are not changed.
In the second step, the local database interface (in the example, application server A) passes on the changes to the other application server by making the appropriate entry in database table DDLOG. The table buffers of all other SAP System instances have not yet been updated.
In the third step, the database interfaces of the SAP System instances call the buffer synchronization (every 1 to 2 minutes, controlled by profile parameters). This is done by sending a SELECT statement to table DDLOG. If the data records returned from DDLOG indicate that changes have been made to buffered tables, the data in the table buffer of the non-local SAP System instances are invalidated accordingly. SQL statements on invalidated data are then provided with data directly from the database. The table buffers of non-local SAP System instances are reloaded.
Note that, in the time between two buffer synchronizations, SQL statements may be supplied with obsolete data from the table buffers.
Degree of Invalidation
Work area mode means that changes on the database are made by single record accessing. Single record accessing is performed using the ABAP statements UPDATE/INSERT/MODIFY/DELETE dbtab (FROM wa) or INSERT dbtab VALUES wa.
You can also change a database table in set mode, which means that you use mass processing. To make database changes using mass processing, formulate the ABAP statements UPDATE/INSERT/MODIFY/DELETE dbtab FROM itab, UPDATE dbtab SET field = value WHERE field = condition or DELETE dbtab WHERE field = condition.
With fully buffered tables, all records are invalidated by database changes.
With generically buffered tables in work area mode, the records are invalidated that have the same specific instance in the generic area as the fields in the work area of the SQL statement executed. If a generically buffered table is accessed in set mode, all data records are invalidated.
When a data record of a table with single record buffering is changed in work area mode, only the changed single record is invalidated. In set mode, the whole table with single record buffering is invalidated by database changes.
The degree of invalidation corresponds to the degree to which the table buffers are filled.
Bypassing Buffer
Some SQL statements cannot be evaluated on table buffers. Instead, they are transferred directly to the database from the database interface.
If you want an SQL statement to be executed at database level (for example, to ensure that current data is read), use the addition BYPASSING BUFFER. SQL statements on database joins (database views with more than one table, ABAP JOINs, subqueries) are always executed at database level, even if all tables involved are buffered.
Database locks are set by SELECT FOR UPDATE. They must therefore be performed at database level. Other statements that can only be executed at database level include: SELECT statements with aggregate functions, SELECT DISTINCT, SELECT statements with WHERE conditions containing the operator IS NULL, ORDER BY (except ORDER BY PRIMARY KEY), and GROUP BY (with HAVING, if applicable).
A table with single record buffering is filled and invalidated record by record. You can only read data records of a table with single record buffering from the table buffer by using single record accesses (SELECT SINGLE with all key field entries).
The degree to which a generically buffered table is filled and invalidated depends on the generic area. If you want to read records of generically buffered tables from the table buffer, you must specify WHERE conditions with '=' for all fields in the generic area.
In the first example, table TCURR is generically buffered with one key field. The generic area for the client-dependent table TCURR consists of the key field MANDT.
In the example of the SQL statement on the left, the incorporation of a WHERE condition is suppressed for the client by the addition CLIENT SPECIFIED. The generic area is thus not specified, and the access bypasses the buffer. However, the SQL statement in the example on the right is read on the table buffer.
In the second example, table T100 has single record buffering. In the SQL statements on the left and on the right, all key fields in the WHERE clause are specified with '='. In the example on the left, T100 is accessed using SELECT. The table contents are therefore returned from the database, not from the table buffers. In the example on the right, T100 is accessed using SELECT SINGLE. The table contents are therefore returned from the table buffers.
Buffering Strategy – Technical Criteria
The records of the buffered tables are retained in the shared memory of the relevant application server, and are redundant. You should therefore only buffer small tables. Only buffer tables larger than 10 MB in exceptional cases.
To keep the number of invalidations as low as possible, and thus keep the buffer loading process as short as possible, only buffer tables that have predominantly read accesses (changes < 1% of read accesses).
Directly after changes to buffered tables, the table buffers of the non-local R/3 instances are not up-to-date (see Buffer Synchronization slide in this unit). During this time, data may be read inconsistently.
Data records from buffered tables are stored and sorted according to primary key. You should therefore access them via key fields. Secondary indexes cannot be used.
Before you decide to buffer tables, contact your R/3 System administrator to ensure that there is enough space in the shared memory (single record buffer: approximately 40 MB, generic table buffer: approximately 80 MB). If there is not enough space, displacement occurs in the table buffers, making table buffering counterproductive.
Before changing buffering settings for SAP standard tables, look for relevant R/3 Notes in SAPNet. To perform the changes, you need a SAP modification key.
Buffering Strategy – Semantic Criteria
When you determine the buffering settings for a table, you must consider semantic criteria as well as technical criteria.
Never buffer transaction data (for example, VBAK, LIKP, MKPF or RESB). These tables can grow to several gigabytes. In addition, transaction data is changed frequently.
Master data (for example, MARA, KNA1 or LFA1) does not grow as fast as transaction data. However, these tables can grow to several hundred megabytes. Also avoid buffering master data because it can be accessed by a number of different access paths (for example, using search help). Access paths are usually supported by secondary indexes. The technical criteria for table buffering are therefore not met.
Customizing tables (for example, T000, T001, T001W or TVKO) are usually small and are seldom changed after production startup. They are therefore suitable for buffering.
Buffering Roadmap
Buffering in the Program
The best way to reduce the load on the database is to avoid database accesses. You can do this by buffering read table contents in an internal table (preferably of the type SORTED or HASHED).
In the example above, the goal is to read data from database table KKNA1 (customer master data) in a SELECT … ENDSELECT over table VVBAK (sales document headers). This can be achieved by a SELECT SINGLE or by a read routine.
Using SELECT SINGLE would mean executing identical SQL statements. You should therefore read the KKNA1 records in a read routine. In the example, the read table contents from KKNA1 are buffered in an internal table. Before the database is accessed, the relevant table entry is checked to establish whether it has already been read. If no corresponding entry exists on the database, this information is also buffered in the internal table (EXIST field).
You can use read routines if data is buffered in the local program buffer and is not reused. If you want to buffer data across programs in an internal session (call chain), use a function module. Table contents are buffered in various locations in the SAP standard (for example, FB STATUS_CHECK).
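A minimal sketch of such a read routine, assuming the training tables mentioned above (KKNA1 with key KUNNR and a name field NAME1, which are illustrative) and a program-local buffer table with an EXIST flag:

```abap
* Program-local buffer, kept sorted by kunnr for binary search.
Data: Begin of i_kna1 occurs 0,
        kunnr like kkna1-kunnr,
        name1 like kkna1-name1,
        exist type c,                 "'X' = record exists on the DB
      End of i_kna1.

Form read_kna1 using p_kunnr changing p_name1.
  Clear p_name1.
  Read table i_kna1 with key kunnr = p_kunnr binary search.
  If sy-subrc <> 0.
    "Not buffered yet: one database access, then remember the result.
    i_kna1-kunnr = p_kunnr.
    Select single name1 from kkna1 into i_kna1-name1
      Where kunnr = p_kunnr.
    If sy-subrc = 0.
      i_kna1-exist = 'X'.
    Else.
      Clear: i_kna1-name1, i_kna1-exist.  "buffer the miss as well
    Endif.
    Insert i_kna1 index sy-tabix.   "sy-tabix = insertion point
  Endif.
  If i_kna1-exist = 'X'.
    p_name1 = i_kna1-name1.
  Endif.
Endform.
```

Buffering the misses (the EXIST field) is what prevents repeated identical selects for keys that do not exist on the database.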
NULL VALUES
With every new release you should pay more attention to Null Values.
Definition
A Null Value is an undefined value of a field in a database table.
You get a Null Value:
 After ADD FIELD fieldn in the database tables i.e. a new field is added to a database table. All existing entries get a Null Value for fieldn.
 After INSERT with a database view for the fields which are not defined in the view.
When you read a field with a Null Value from the database, you will get:
 SPACE for fields with type CHAR, RAW, LANG.
 0 for fields with type INT4, FLTP and DEC.
Null Values & SELECT
You will not find a record in a database table, if fieldn has a Null Value and you are using the following WHERE-clauses:
WHERE fieldn = 0.
WHERE NOT fieldn = 0.
WHERE fieldn < 5.
WHERE fieldn > 5.
WHERE fieldn = SPACE.
WHERE fieldn <> SPACE.
If you want to read records with Null Values, you have to use IS NULL with your WHERE-clause:
WHERE fieldn = 0 OR fieldn IS NULL.
WHERE fieldn < 5 OR fieldn IS NULL.
WHERE fieldn = SPACE OR fieldn IS NULL.
Null values - Aggregate functions
Null Values will be ignored when you are using aggregate functions.
Field1 10, NULL, 20
MIN (field1) = 10
Field2 -2, -3, NULL
MAX (field2) = -2
Field3 40, NULL, 80
AVG (field3) = 60
COMPUTING with Null Values
Be careful with Null Values when computing.
Field1 Null
UPDATE dbtable SET field1 = field1 + 1.
Field1 Null
Null values - Indexes
Some DBMS do not store Null Values in an index.
So when you SELECT with index, depending on the Database system, Null values will be ignored in indexes.
Index dbtable__1: field1, field2, and field3.
SELECT * FROM dbtable
WHERE field1 = value1
AND field2 IS NULL
AND field3 = value3.
ORACLE uses only field1
INFORMIX can use field1, field2 and field3
Null values – Sort
Depending on the DBMS, you will find Null Values at the beginning or at the end of the sorted table records. Null Values are not sorted like SPACE.
Null values – Buffer
The SAP table buffers do not know Null Values. In buffered records, Null Values are replaced with the initial value, so the same record can have a different field value in the buffer and on the database.
TESTING TIPS
The bugs that occur frequently while testing REPORTS, INTERFACES, TRANSACTIONS, SAP SCRIPTS and USER EXITS in STVG have been put under the following sub-headings.
 GENERAL
 DATA POPULATION
 SELECT STATEMENTS
 ARITHMETIC ERRORS
 SELECTION-SCREEN
 OUTPUT LAYOUT
 FUNCTIONALITY
 MESSAGES.
GENERAL
The following are the general bugs.
Variables and tables are declared but nowhere used in the program. This is unnecessary code in the program.
Dead code in the program. This should be removed.
No documentation for variables, tables and internal table fields when they are declared.
E.g. TABLES: PERNR, "Standard Selections for HR Master
"Data Reporting
T5UBA, "Benefit Plan
T512W, "Wage Type Valuation
T503T,
ZCUMUL,
PCL1, "HR Cluster 1
PCL2. "HR Cluster 2
 Wrong declaration of fields. E.g. Packed instead of character.
 No proper indentation.
 Standard naming conventions are not followed.
DATA POPULATION
The following are the data population bugs.
 Variables and internal table work areas are not cleared.
 Incorrect internal table population.
 Incorrect string manipulation.
E.g. Data: l_salesno(16).
Data: Begin of itab_final Occurs 0,
vbeln like vbak-vbeln, "length is 10
posnr like vbap-posnr, "length is 6
End of itab_final.
Concatenate itab_final-vbeln
itab_final-posnr
Into
l_salesno
Separated by '/'.
Write:/1 l_salesno.
Since the concatenated length is 17 and declared field length is 16, one character will be truncated and then displayed.
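A corrected sketch of the same fragment simply sizes the target field for the full concatenated length (10 + 6 + 1 for the separator = 17):

```abap
Data: l_salesno(17).            "10 + 6 + 1 for the separator

Concatenate itab_final-vbeln
            itab_final-posnr
       Into l_salesno
  Separated by '/'.
Write:/1 l_salesno.             "no truncation now
```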
SELECT STATEMENTS
No handling of sy-subrc after SELECT ... ENDSELECT statements.
Not considering all matching conditions in a JOIN.
E.g. Select statement before not considering all matching conditions:
SELECT EKPO~WERKS " plant
EKPO~EBELP " item no
EKPO~LOEKZ " deletion indicator
ESSR~EBELN " purchase order no
ESSR~LBLNI " service entry sheet no
ESSR~BUDAT " posting date
ESSR~KZABN " service accepted indicator
EKKO~LIFNR " vendor number
B~SRVPOS " service no
B~KTEXT1 " service description
B~MENGE " quantity
B~MEINS " unit of measurement
B~BRTWR " cost
B~USERF1_NUM " no. of loads
B~USERF2_NUM " no. of pallets carried
B~USERF1_TXT " no. of drops
INTO TABLE IT_MAIN
FROM EKPO
INNER JOIN ESSR
ON ESSR~EBELN = EKPO~EBELN
AND ESSR~EBELP = EKPO~EBELP
INNER JOIN EKKO
ON EKKO~EBELN = ESSR~EBELN
INNER JOIN ESLL AS A
ON A~PACKNO = ESSR~PACKNO
WHERE EKPO~WERKS = P_WERKS AND
ESSR~EBELN IN S_EBELN AND
EKKO~LIFNR IN S_LIFNR AND
ESSR~BUDAT IN S_BUDAT AND
A~MATKL IN S_MATKL AND
ESSR~LBLNI IN S_LBLNI AND
ESSR~KZABN IN R_KZABN.
Select statement after considering all the matching conditions:
SELECT EKPO~WERKS " plant
EKPO~EBELP " item no
EKPO~LOEKZ " deletion indicator
ESSR~EBELN " purchase order no
ESSR~LBLNI " service entry sheet no
ESSR~BUDAT " posting date
ESSR~KZABN " service accepted indicator
EKKO~LIFNR " vendor number
B~SRVPOS " service no
B~KTEXT1 " service description
B~MENGE " quantity
B~MEINS " unit of measurement
B~BRTWR " cost
B~USERF1_NUM " no. of loads
B~USERF2_NUM " no. of pallets carried
B~USERF1_TXT " no. of drops
INTO TABLE IT_MAIN
FROM EKPO
INNER JOIN ESSR
ON ESSR~EBELN = EKPO~EBELN AND
ESSR~EBELP = EKPO~EBELP
INNER JOIN EKKO
ON EKKO~EBELN = ESSR~EBELN
INNER JOIN ESLL AS A
ON A~PACKNO = ESSR~PACKNO
INNER JOIN ESLL AS B
ON B~PACKNO = A~SUB_PACKNO
WHERE EKPO~WERKS = P_WERKS AND
ESSR~EBELN IN S_EBELN AND
EKKO~LIFNR IN S_LIFNR AND
ESSR~BUDAT IN S_BUDAT AND
A~MATKL IN S_MATKL AND
ESSR~LBLNI IN S_LBLNI AND
ESSR~KZABN IN R_KZABN AND
ESSR~LOEKZ IN R_LOEKZ.
 The order of fields in the SELECT statement does not match the field sequence in the Data Dictionary.
ARITHMETIC ERRORS
 Division by zero is not handled.
The code before handling division by zero:
I_FOOTER-AVG_HCE = ( I_FOOTER-RATIO_HCE / I_FOOTER-TOT_HCE ).
The code after handling division by zero:
IF I_FOOTER-TOT_HCE NE 0.
I_FOOTER-AVG_HCE = ( I_FOOTER-RATIO_HCE / I_FOOTER-TOT_HCE ).
ENDIF.
 Assigning quantity fields to CHAR fields.
 Comparison operators GE, GT, LT, LE should not be used on 'CHAR' type fields.
SELECTION-SCREEN
 Incorrect titles in the selection-screen.
 No F4 help functionality.
 Incorrect input validation. E.g. no error message is issued when the 'TO' date is earlier than the 'FROM' date.
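E.g. a sketch of the date validation (select-option name, message class and number are illustrative):
AT SELECTION-SCREEN ON S_DATE.
IF NOT S_DATE-HIGH IS INITIAL.
IF S_DATE-HIGH LT S_DATE-LOW.
MESSAGE E010(ZMSG). " 'TO' date is earlier than 'FROM' date
ENDIF.
ENDIF.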
OUTPUT LAYOUT
 Field is missing in the report layout.
 Incorrect titles for report/fields.
 Incorrect positioning of fields.
FUNCTIONALITY
 Push buttons in toolbars are not functioning.
 The 'BACK' button does not function.
 Incorrect navigation of screens.
MESSAGES
 Message long text not maintained.
E.g. the long text of the message is not defined.
 Incorrect usage of placeholders.
E.g. SELECT SINGLE * FROM SPFLI
WHERE CARRID = SPFLI-CARRID
AND CONNID = SPFLI-CONNID.
IF SY-SUBRC NE 0.
MESSAGE E005 WITH SPFLI-CARRID SPFLI-CONNID.
ENDIF.
The message should then be created as below:
005: Flight no &1 &2 not created.
Thanks
Sarada
‎2008 May 31 3:57 PM
hi, check this:
1. Don't use a loop within a loop (nested loops).
2. Use FOR ALL ENTRIES rather than joins.
3. Use all the key fields in the SELECT (primary and secondary).
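E.g. a sketch of point 2 (table and field names are illustrative). Note that the driver table must be checked for emptiness first: if it is empty, the FOR ALL ENTRIES condition is dropped and the whole database table is read.
IF NOT IT_VBAK[] IS INITIAL.
SELECT VBELN POSNR MATNR
INTO TABLE IT_VBAP
FROM VBAP
FOR ALL ENTRIES IN IT_VBAK
WHERE VBELN = IT_VBAK-VBELN.
ENDIF.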
regards,
venkat