Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

How to improve program performance

Former Member

Hi All,

Due to the coding below, my program has low performance. When I checked it with the Code Inspector, it showed the following error:

(Program /AMS/OBRFCR_RECONCIL_COPY Include ZOBRFCN_RECONCIL_FORM_COPY Row 215 Column 6

Sequential read access possible for a hashed table.)



The coding in my program is as below. Can anyone please tell me how to improve the performance of the program?

TYPES : l_ty_t_bukrs TYPE HASHED TABLE OF l_ty_bukrs WITH UNIQUE KEY bukrs.

DATA : l_ih_bukrs TYPE l_ty_t_bukrs.

Line 215: DELETE l_ih_bukrs WHERE flag EQ c_x.

Thanks,

3 REPLIES

Former Member

Hi,

please check whether the performance improves when you change your table type from hashed to sorted.
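A minimal sketch of that change, reusing the declarations from the original post (the line type l_ty_bukrs is assumed to exist as posted). Note that a sorted table only helps DELETE ... WHERE if the condition uses leading key fields:

```abap
* Sketch: same table, declared SORTED instead of HASHED.
* Key access stays fast (binary search), and the Code Inspector
* warning about sequential access on a hashed table goes away.
TYPES : l_ty_t_bukrs TYPE SORTED TABLE OF l_ty_bukrs
                     WITH UNIQUE KEY bukrs.

DATA : l_ih_bukrs TYPE l_ty_t_bukrs.
```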

Regards,

Klaus


HermannGahm
Product and Topic Expert

Hi,

> (Program /AMS/OBRFCR_RECONCIL_COPY Include ZOBRFCN_RECONCIL_FORM_COPY Row 215 Column 6

> Sequential read access possible for a hashed table.)

> -


> In my program coding is like below please tell me any one how improve performance of the program.

>

> TYPES : l_ty_t_bukrs TYPE HASHED TABLE OF l_ty_bukrs WITH UNIQUE KEY bukrs.

> DATA : l_ih_bukrs TYPE l_ty_t_bukrs.

>

> line 215 DELETE l_ih_bukrs WHERE flag EQ c_x.

first of all, check whether the DELETE is a real performance problem. The Code Inspector reports potential problems; it cannot know how big the table is or how often the DELETE statement is executed. Only an ABAP trace (transaction SAT, formerly SE30) can show you the real problems.

Your unique key is the company code (bukrs). How many rows can the table have with that key?

Only a DELETE with bukrs = ... can be optimized in a hashed table; the DELETE ... WHERE flag = ... cannot, so a full table scan is executed on the table. But this is only a problem if the table is big and/or the DELETE statement is executed often.
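The contrast above can be sketched as follows (c_x and the table content are assumed from the original post; the bukrs value is only illustrative):

```abap
* Optimized: the full hash key is supplied, so the row is located
* directly via the hash, independent of the table size.
DELETE TABLE l_ih_bukrs WITH TABLE KEY bukrs = '1000'.

* Not optimized: flag is not part of the key, so every row must be
* inspected (full table scan). Costly only if the table is large
* and/or this statement runs frequently, e.g. inside a loop.
DELETE l_ih_bukrs WHERE flag EQ c_x.
```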

Kind regards,

Hermann


Former Member

Hi,

You can

1. Try changing the table type. For hashed tables the access time using the key is constant, regardless of the number of table entries; if your records have unique keys, a hashed table improves performance when dealing with large datasets, but only for key-based access.

2. Instead of deleting the records with flag 'X', code in such a way that you do not add those records to your table l_ih_bukrs in the first place.

3. Also check your whole program; there might be something else affecting the overall performance.
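Point 2 could look like this sketch, which skips flagged rows while filling the table instead of deleting them afterwards (the work area and the source table lt_source are hypothetical names for illustration):

```abap
DATA : ls_bukrs TYPE l_ty_bukrs.

LOOP AT lt_source INTO ls_bukrs.
  IF ls_bukrs-flag <> c_x.     " keep only unflagged rows
    INSERT ls_bukrs INTO TABLE l_ih_bukrs.
  ENDIF.
ENDLOOP.

* The later "DELETE l_ih_bukrs WHERE flag EQ c_x." is then unnecessary.
```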

Regards,

Gopal