Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

ABAP Performance Problem.

Former Member

Hi Gurus

I have an ABAP program which reads data from one table and loads it into a Z table after some transformations. It runs smoothly until 5 million records are loaded into this Z table; after that it slows down. It takes 30 minutes to load the first 5 million records and another 30 minutes to load the remaining 1.5 million.

Any ideas about parameters, table settings, basis settings, etc.?

Thanks

Rajesh

Moved to correct forum

Edited by: Rob Burbank on Mar 10, 2009 4:38 PM

17 REPLIES

Former Member

Are there any secondary indexes on the table?

Rob


Bob

The table is stand-alone. No secondary or primary indexes. The ABAP program deletes the contents of the table and reloads it.

Thanks

Rajesh


Well, there has to be a primary index.

And what is the size category?

Rob

Edited by: Rob Burbank on Mar 10, 2009 5:12 PM

Former Member

Hi,

Try to refresh the table periodically so that old data is deleted.

Also check the size category.

Former Member

Hi Rajesh,

Try to refresh the table after deleting the records, and also specify the primary index in this case.

Former Member

Please be more specific: what are you actually doing?

You read from table 1, in blocks or all at once?

Into an internal table, and what kind of transformations?

And then do you insert into, or update, a Z table?

You should do it in blocks of 10,000 to 50,000. If you do so, you can run an SQL Trace in parallel and tell us the time for the SELECT and for the INSERT of one or two blocks.

Siegfried
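A minimal sketch of the block processing Siegfried describes, using SELECT ... PACKAGE SIZE (zsource and ztarget are placeholder names, not the tables from the original program):

```abap
* Sketch only: zsource and ztarget are illustrative table names.
DATA: lt_in  TYPE STANDARD TABLE OF zsource,
      lt_out TYPE STANDARD TABLE OF ztarget.

SELECT * FROM zsource
       INTO TABLE lt_in
       PACKAGE SIZE 50000.

* ... transform lt_in into lt_out here ...

  INSERT ztarget FROM TABLE lt_out.
  CLEAR lt_out.
ENDSELECT.

* A database commit inside SELECT ... ENDSELECT would invalidate the
* open cursor, so commit after ENDSELECT (or use OPEN CURSOR WITH HOLD
* if a commit per block is needed).
COMMIT WORK.
```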


Thanks, guys, for the prompt responses.

I am trying to respond to everyone through this note.

All I am doing is:

1. Select all records from a table (the active ODS table in BW) into an internal table (say IT_A).

2. Delete the entire content of the Z table.

3. Populate another internal table (say IT_B) which corresponds to the Z table.

4. Insert into the Z table from internal table IT_B.

The above process is carried out after every 50K lines in internal table IT_B, which is then cleared and refreshed.

Size category (The Z Table is on BW) is correctly specified as 8 (5,900,000 to 11,000,000).

However, I find that the Data Class is specified as DODS, which is assigned by the system when an ODS (BW object) is created and is never available as an option for Z tables. I think that when the Z table was copied from the active table of the ODS, the data class was inherited. I am not sure whether it makes any difference.

As a side note, this loading process works fine in the QA environment.

All suggestions are welcome.

Thanks

Rajesh


Since you are deleting the entire contents of the Z table at the beginning, try this:

After the delete, do a commit.

After each array insert, do another commit.

I'm thinking there may be issues with the rollback segment.

Rob
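A minimal sketch of Rob's suggestion, with placeholder names (ztarget for the Z table, it_b for the internal table being inserted):

```abap
* Sketch only: ztarget and it_b are illustrative names.
DELETE FROM ztarget.
COMMIT WORK.            "release the delete's rollback data early

* ... then, once per 50K block ...
INSERT ztarget FROM TABLE it_b.
COMMIT WORK.            "commit each array insert separately
```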


Thanks Rob

I checked the program again, and it is doing a COMMIT after both actions you referred to.

PS - I am sorry I referred to you earlier as Bob in haste. My apologies.

Thanks

Rajesh


If it's not too big, can you post the code for the building of the second itab and the insert?

Rob

PS - don't worry about the Bob thing. You're not the first


Here it is, Rob...

* Loop over actual data.
LOOP AT zco_co06.

  CLEAR: zprofitability_table.
  MOVE-CORRESPONDING zco_co06 TO zprofitability_table.
  zprofitability_table-costcenter = zco_co06-piovalue+4(10).
  zprofitability_table-acttype    = zco_co06-piovalue+14(2).
  zprofitability_table-comp_code  = zco_co06-comp_code.

* Determine Project for WBS Element
  READ TABLE it_wbs_elemt INTO ws_wbs_elemt
       WITH KEY /bi0/oiwbs_elemt = zco_co06-wbs_elemt.
  IF sy-subrc = 0.
    zprofitability_table-project = ws_wbs_elemt-/bi0/oiproject.
  ENDIF.

  zprofitability_table-zzproc_year = p_year.
  zprofitability_table-metype      = '900000'.
  zprofitability_table-pstgdate    = sy-datum(8).
  zprofitability_table-rateversion = zco_co06-version.
  APPEND zprofitability_table.              "Record for actual data

  IF zco_co06-costelmnt = '0000910010'.

*   Loop over standard rates to create a record for each version.
    LOOP AT zstd_rates
         WHERE costcenter = zprofitability_table-costcenter
           AND costelmnt  = zprofitability_table-costelmnt
           AND acttype    = zprofitability_table-acttype
           AND curtype    = zprofitability_table-curtype
           AND co_area    = zprofitability_table-co_area
           AND fiscper(4) = p_year
           AND fiscvarnt  = zprofitability_table-fiscvarnt.

*     Data is loaded in the standard rate table for only one period.
*     Create a standard rate record for each version.
      zprofitability_table-rateversion = zstd_rates-version.
      zprofitability_table-amount      = zstd_rates-pricecom *
                                         zprofitability_table-quantity.
      zprofitability_table-vtype       = zstd_rates-vtype.
      zprofitability_table-zzproc_year = p_year.
      zprofitability_table-metype      = '900001'.
      APPEND zprofitability_table.
*     PERFORM database_update.
    ENDLOOP.

  ENDIF.

* To avoid memory problems (frequent update and itab memory clean-up)
  DESCRIBE TABLE zprofitability_table LINES zlines.
  IF zlines > 50000.
    PERFORM database_update.
  ENDIF.

ENDLOOP.

PERFORM database_update.

*&---------------------------------------------------------------------*
*&      Form  database_update
*&---------------------------------------------------------------------*
FORM database_update.

  IF zprofitability_table[] IS INITIAL.
    WRITE: 'NO DATA FOR UPDATE'.
  ELSE.
    WRITE: / 'Count before deletion is:', zlines.
    SORT zprofitability_table ASCENDING
         BY wbs_elemt costcenter costelmnt acttype employee curtype
            metype cjr2version rateversion fiscper fiscyear co_doc_no
            co_item_no co_area zzproc_year comp_code.
    DELETE ADJACENT DUPLICATES FROM zprofitability_table
         COMPARING wbs_elemt costcenter costelmnt acttype employee
                   curtype metype cjr2version rateversion fiscper
                   fiscyear co_doc_no co_item_no co_area zzproc_year
                   comp_code.
    DESCRIBE TABLE zprofitability_table LINES zlines.
    WRITE: / 'Count after deletion is :', zlines.

    INSERT zprofitability FROM TABLE zprofitability_table.
    IF sy-subrc EQ 0.            "check the INSERT before committing
      COMMIT WORK.
      WRITE: 'UPDATE SUCCESS'.
      UPDATE zdeltaload SET lastsuccessdate = sy-datum.
      COMMIT WORK.
    ELSE.
      WRITE: 'UPDATE FAILED'.
    ENDIF.

    CLEAR: zlines.
    REFRESH: zprofitability_table.
  ENDIF.

ENDFORM.                         " database_update


Rajesh - I've had a quick look at your code and can see that if internal tables zco_co06, it_wbs_elemt, zstd_rates (and possibly one other that I don't see) are standard tables rather than sorted or hashed, you will have performance issues because you are calling your update routine inside doubly or triply nested loops. Have a look at:[Performance of Nested Loops|/people/rob.burbank/blog/2006/02/07/performance-of-nested-loops]. In addition, Siegfried also has some blogs that will help you out.

Rob


Hi Rajesh,

Thanks for your code. Is this code used in an extractor?

Some comments on your code:

1. Reduce the copy-to-work-area time:

Outside the first loop, put:

FIELD-SYMBOLS: <fs_zco_co06> LIKE LINE OF zco_co06.

Change the loop to:

LOOP AT zco_co06 ASSIGNING <fs_zco_co06>.

and replace each reference to the work area zco_co06 with <fs_zco_co06>.

2. READ TABLE without binary search:

Outside the first loop, put:

SORT it_wbs_elemt BY /bi0/oiwbs_elemt.
FIELD-SYMBOLS: <fs_wbs_elemt> LIKE LINE OF it_wbs_elemt.

Change the read code to:

READ TABLE it_wbs_elemt ASSIGNING <fs_wbs_elemt>
     WITH KEY /bi0/oiwbs_elemt = zco_co06-wbs_elemt
     BINARY SEARCH.
IF sy-subrc = 0.
  zprofitability_table-project = <fs_wbs_elemt>-/bi0/oiproject.
ENDIF.

3. The code may lead to excessive COMMIT WORKs; delete the line PERFORM database_update inside the LOOP AT zstd_rates.

4. Confirm that there is only one record in table ZDELTALOAD.

5. Perform indexed access to table zstd_rates (you can change the access from work area to field symbols):

Outside the first loop, put:

SORT zstd_rates BY costcenter costelmnt acttype curtype co_area fiscvarnt fiscper.
FIELD-SYMBOLS: <fs_rate> LIKE LINE OF zstd_rates.

Change the inner loop to:

READ TABLE zstd_rates TRANSPORTING NO FIELDS
     WITH KEY costcenter = zprofitability_table-costcenter
              costelmnt  = zprofitability_table-costelmnt
              acttype    = zprofitability_table-acttype
              curtype    = zprofitability_table-curtype
              co_area    = zprofitability_table-co_area
              fiscvarnt  = zprofitability_table-fiscvarnt
     BINARY SEARCH.

IF sy-subrc EQ 0.
  lv_tabix = sy-tabix.
* Start reading directly at the first desired record
  LOOP AT zstd_rates ASSIGNING <fs_rate> FROM lv_tabix.
    IF NOT ( <fs_rate>-costcenter = zprofitability_table-costcenter AND
             <fs_rate>-costelmnt  = zprofitability_table-costelmnt  AND
             <fs_rate>-acttype    = zprofitability_table-acttype    AND
             <fs_rate>-curtype    = zprofitability_table-curtype    AND
             <fs_rate>-co_area    = zprofitability_table-co_area    AND
             <fs_rate>-fiscvarnt  = zprofitability_table-fiscvarnt ).
*     Stop the loop when the first undesired record is found
      EXIT.
    ENDIF.
    CHECK <fs_rate>-fiscper(4) = p_year. "This partial field can't be tuned like the others
    ....
  ENDLOOP.
ENDIF.

6. To improve on these suggestions:

6.1. Remove all references to work-area fields (use field symbols, since you need greater extractor performance).

6.2. Don't use LIKE LINE OF in the declarations; it was used here only because the line type isn't shown. Use TYPE.

I hope it can help you.

Fernando Da Ró

Former Member

> 1. Select all records from a table (Active ODS table in BW) into Internal Table (say IT_A).

> 2. Deleting entire content of Z table

> 3. Populate another Internal Table (say IT_B) which is corresponding to Z table.

> 4. Insert into Z table from Internal Table IT_B.

Maybe you should ask your question in a BW forum; for general OLTP processing I find your example very strange.

You cannot move all records from a DB table into an internal table and delete the DB table before you have saved the data successfully somewhere. If you get a dump, a system shutdown, or a time-out, your data will be gone.

You must process in blocks; sooner or later the memory will not be sufficient.

SORT

DELETE ADJACENT DUPLICATES ....

This is also strange: you create your data, so you should take care that there are no duplicates.

Use sorted or hashed tables if you process large internal tables.

Siegfried
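A minimal sketch of the hashed-table approach Siegfried mentions; the structure ty_rate and its fields are illustrative, not taken from the original program:

```abap
* Sketch only: ty_rate and the key values are placeholders.
TYPES: BEGIN OF ty_rate,
         costcenter TYPE c LENGTH 10,
         acttype    TYPE c LENGTH 6,
         price      TYPE p DECIMALS 2,
       END OF ty_rate.

DATA: lt_rates TYPE HASHED TABLE OF ty_rate
               WITH UNIQUE KEY costcenter acttype.

FIELD-SYMBOLS: <fs_rate> TYPE ty_rate.

READ TABLE lt_rates ASSIGNING <fs_rate>
     WITH TABLE KEY costcenter = '0000001000'
                    acttype    = 'HRS'.
IF sy-subrc = 0.
* Key access to a hashed table costs constant time regardless of the
* number of lines - no SORT or BINARY SEARCH needed.
ENDIF.
```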

Former Member

You are doing the database update for a large volume of data at a time, so reduce the count per database update to approximately 10,000 records.

Please see if this helps to improve performance.

Edited by: Sunil Sawaikar on Mar 12, 2009 11:54 AM

Former Member

Hi,

Use OPEN CURSOR; for the details, see:

[Link|http://help.sap.com/saphelp_nw04/helpdata/EN/fc/eb3b23358411d1829f0000e829fbfe/content.htm]

Regards

Shashi
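A minimal sketch of the cursor technique from the linked help page; zsource is a placeholder table name, not from the original program:

```abap
* Sketch only: zsource is an illustrative source table.
DATA: lv_cursor TYPE cursor,
      lt_block  TYPE STANDARD TABLE OF zsource.

* WITH HOLD keeps the cursor open across the COMMIT WORK below.
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM zsource.

DO.
  FETCH NEXT CURSOR lv_cursor
        INTO TABLE lt_block
        PACKAGE SIZE 10000.
  IF sy-subrc <> 0.
    EXIT.                       "no more blocks to fetch
  ENDIF.
* ... transform lt_block, insert it into the target table ...
  COMMIT WORK.
ENDDO.

CLOSE CURSOR lv_cursor.
```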

Former Member

It might shave almost nothing off the runtime, but instead of the MOVE-CORRESPONDING you might want to assign each field manually.