Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

For all entries - Performance Issue

Former Member

Hi All,

I developed a program in which I use FOR ALL ENTRIES on various tables (in about 40 different SELECT queries).

When I reviewed the code with our company's standard check tool, it shows the warning message:

DELETE ADJACENT DUPLICATES before using FOR ALL ENTRIES.

But I want to keep the duplicate entries in that table.

To work around this, I copied the table into another internal table, sorted it, deleted adjacent duplicates, and used that copy in FOR ALL ENTRIES.

*But doing this for around 40 internal tables (copying, sorting and deleting adjacent duplicates), performance has really gone down.*

Is there any alternative?
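For reference, the workaround described above looks roughly like this. This is a minimal sketch; the names lt_orders, lt_keys, ty_order and lt_items are illustrative, not from the original program (vbap/vbeln are standard SAP sales-item table and field names used only as an example):

```abap
* Keep lt_orders intact (with its duplicates); de-duplicate a copy for FAE.
DATA lt_keys TYPE STANDARD TABLE OF ty_order.

lt_keys = lt_orders.                  " copy - the original keeps its duplicates
SORT lt_keys BY vbeln.
DELETE ADJACENT DUPLICATES FROM lt_keys COMPARING vbeln.

IF lt_keys IS NOT INITIAL.            " guard against an empty driver table
  SELECT vbeln posnr matnr
    FROM vbap
    INTO TABLE lt_items
    FOR ALL ENTRIES IN lt_keys
    WHERE vbeln = lt_keys-vbeln.
ENDIF.
```

Repeating this copy/sort/delete sequence for 40 tables is exactly the overhead the question is about.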

1 ACCEPTED SOLUTION

Former Member

FIRST

As you may have observed, the result table that you fetch from the database does not contain duplicate entries when you use FOR ALL ENTRIES.

Eg. if the code is:

IF itab1[] IS NOT INITIAL.
  SELECT * FROM db_tab2
    INTO TABLE itab2
    FOR ALL ENTRIES IN itab1
    WHERE field1 EQ itab1-field1.
ENDIF.

-> then there won't be any duplicate entries in itab2, because FOR ALL ENTRIES automatically removes duplicate rows from the result set.

-> beware: this implicit de-duplication means you can lose rows. If the field list does not include the key fields, distinct database rows can look identical and are silently merged, so always include the key fields in the structure of itab2.
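The key-field point can be illustrated with a minimal sketch. It reuses db_tab2/itab1/field1 from the example above; keyfld, matnr and lt_matnr are hypothetical names, and keyfld is assumed to be a key field of db_tab2:

```abap
* Pitfall sketch: key fields omitted from the field list.
SELECT matnr                          " keyfld is NOT selected
  FROM db_tab2
  INTO TABLE lt_matnr
  FOR ALL ENTRIES IN itab1
  WHERE field1 = itab1-field1.
* Two different rows of db_tab2 with the same matnr now look identical,
* so FOR ALL ENTRIES keeps only one of them - rows are silently lost.
```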

SECOND

Also, if there are duplicate entries in itab1, the database does duplicate work to fetch the same data.

-> you can verify this with an SQL trace (transaction ST05) on that query.

-> when you use FOR ALL ENTRIES, the system splits the statement into multiple queries (batches of the driver table's entries), fetches the data, and then deletes any duplicates from the result table.

CONCLUSION

These are the only two things related to duplication and FOR ALL ENTRIES.

The tool you are using for evaluation must be a custom tool. Don't use it blindly; understand what the warning is for. There should be a user manual with someone in your team.

It is up to your judgement whether to accept the suggestion (it is only a warning); you can choose to ignore it.

Anyway, using 40 FOR ALL ENTRIES selects in one program is also a sign of bad design. Please do a peer review of your code to avoid performance issues in production.

5 REPLIES

Former Member

After each DELETE ADJACENT DUPLICATES statement and before the SELECT, did you add a check that the internal table still has entries? If the driver table is empty, FOR ALL ENTRIES ignores the WHERE condition and the SELECT reads the whole database table.

Ensure your code looks like the following (if it does not already). Note that DELETE ADJACENT DUPLICATES only removes all duplicates reliably when the table is sorted by the compared fields first:

SORT itab BY key1 key2.
DELETE ADJACENT DUPLICATES FROM itab COMPARING key1 key2.

IF NOT itab[] IS INITIAL.
  SELECT xxxx
    FROM dbtable
    INTO TABLE result_tab
    FOR ALL ENTRIES IN itab
    WHERE xxx = itab-xxx.
ENDIF.


Former Member

Hi,

Along with checking that the internal table is not initial, note that if your WHERE clause combines the FOR ALL ENTRIES comparison with another internal table, or with select-options, the performance of the program will be degraded.

Ex:

DATA gv_matnr TYPE matnr.
SELECT-OPTIONS s_sel FOR gv_matnr.
...
IF it_tab1[] IS NOT INITIAL.
  SELECT field1 field2
    FROM <db>
    INTO TABLE it_tab2
    FOR ALL ENTRIES IN it_tab1
    WHERE <field>  = it_tab1-<field>
      AND <field1> IN s_sel.   " --> this will degrade the performance of the program
ENDIF.

In this case it is better to write a join or create a database view.

Thanks,

Phani.
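The join suggestion above can be sketched as follows. This is a minimal illustration, not code from the thread: vbak/vbap are standard SAP sales-order tables used as stand-ins for the two tables, lt_result is a hypothetical target table, and s_sel is the select-option from the example above:

```abap
* Instead of FOR ALL ENTRIES plus select-options, read both tables in one
* inner join and apply the select-option restriction directly in the WHERE.
SELECT h~vbeln h~erdat i~posnr i~matnr
  INTO TABLE lt_result
  FROM vbak AS h
  INNER JOIN vbap AS i
    ON i~vbeln = h~vbeln
  WHERE i~matnr IN s_sel.
```

The database can then use its own indexes and optimizer for the restriction, instead of the application server batching driver-table entries.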




Hi Chinmay..

Very helpful reply..

Thanks a ton!! Very informative and well written...:)


Former Member

DELETE ADJACENT DUPLICATES is actually not the best solution for this task, especially if you need the table including the duplicates again later.

Much more elegant is to define a hashed table with a unique key consisting of the fields needed in the FAE:

DATA hash TYPE HASHED TABLE OF st_fields WITH UNIQUE KEY field1 field2 field3.

and to fill it with COLLECT:


LOOP AT itab ASSIGNING <fs>.
  wa-field1 = <fs>-field1.
  wa-field2 = <fs>-field2.
  wa-field3 = <fs>-field3.
  COLLECT wa INTO hash.
ENDLOOP.

SELECT ...
  FOR ALL ENTRIES IN hash
  WHERE field1 = hash-field1
    AND field2 = hash-field2
    AND field3 = hash-field3.

The COLLECT is faster, it copies only the columns that are really needed (not the large data columns), and the hashed table is unique by construction.

I doubt you could even measure the overhead; it will be smaller than the usual run-to-run variation of the SELECT itself.

Siegfried