Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Optimisation question, internal table vs multiple DB calls

Former Member

Hi,

I have to optimise a program. The program is a loans report for a bank. The main tables used are VDARL, VDBEKI and VDBEPI. VDARL (loans master) has about 16,000 records; VDBEKI and VDBEPI (flows/cash flows) have about 1,000,000 records each. Currently the program loops through each loan (16,000 loans) and then calls the standard function module LOAN_FLOWS_SUPPLY (which retrieves cash flows from VDBEKI and VDBEPI). LOAN_FLOWS_SUPPLY has the following join in it:

    SELECT * INTO TABLE wrk_t_beki_bepi
      FROM (    vdbeki AS vdbeki INNER JOIN vdbepi AS vdbepi
            ON  vdbeki~mandt    = vdbepi~mandt
            AND vdbeki~bukrs    = vdbepi~bukrs
            AND vdbeki~rbelkpfd = vdbepi~rbelkpfd )
      WHERE      vdbeki~bukrs    EQ  i_bukrs
        AND      vdbeki~ranl     EQ  i_ranl
        AND      vdbeki~sstorno  IN wrk_sstorno
        AND      vdbepi~dvalut   LE i_dvalut_until
        AND      vdbepi~dfaell   LE i_dfaell_until
        AND      vdbepi~ddispo   LE i_ddispo_until
        AND      vdbepi~dfaell   GE i_due_date_from
        AND      vdbepi~dfaell   LE i_due_date_until
        AND (    vdbeki~s_compr  IN wrk_compr      "       NOTE 580603
              OR vdbeki~s_compr  IS NULL        ). "   

The real problem is that it does this for every loan, so there is a lot of back and forth between the DB server and the application server. The other alternative is for me to create an internal table for VDBEKI, but since VDBEKI has about a million records, won't this require a lot of memory?

What should I do? Should I create an internal table for this, or find another alternative?

1 ACCEPTED SOLUTION

Former Member

Please check in all your new developments that you use suitable internal tables, either sorted or hashed, for every table that can have more than 50 records:

Measurements on internal tables: Reads and Loops:

/people/siegfried.boes/blog/2007/09/12/runtimes-of-reads-and-loops-on-internal-tables
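
As a rough illustration of the difference (a sketch only; `p_bukrs`/`p_ranl` and the work areas are hypothetical, and the key fields are taken from the WHERE clauses shown in this thread):

```abap
* Hashed table: READ ... WITH TABLE KEY is constant time,
* but the full unique key must always be specified.
DATA: lt_loans TYPE HASHED TABLE OF vdarl
        WITH UNIQUE KEY bukrs ranl,
      ls_loan  TYPE vdarl.

READ TABLE lt_loans INTO ls_loan
     WITH TABLE KEY bukrs = p_bukrs ranl = p_ranl.

* Sorted table: READ/LOOP restricted to the leading key fields is a
* binary search; NON-UNIQUE allows duplicate keys.
DATA: lt_beki TYPE SORTED TABLE OF vdbeki
        WITH NON-UNIQUE KEY bukrs ranl,
      ls_beki TYPE vdbeki.

LOOP AT lt_beki INTO ls_beki
     WHERE bukrs = p_bukrs AND ranl = p_ranl.
*  process one document header
ENDLOOP.
```

On a standard table, the same READ or LOOP degenerates into a linear scan, which is where the "more than 50 records" rule of thumb comes from.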

18 REPLIES

Former Member

The other option is to rewrite LOAN_FLOWS_SUPPLY with logic relevant only to your requirements.


ThomasZloch
Active Contributor

This function is also used in many SAP standard programs; did you check for related SAP notes dealing with performance problems in this area?

That would be my first investigation. If there is nothing, you could open a call with SAP; however, if I understand correctly, the calling program is your own development, so you will probably be turned down.

After that we can check additional options.

Thomas


Hi Thomas,

No, I have not checked for any SAP notes on it. The calling program was developed by someone else; I have been tasked to optimise it. After looking at the program, I don't see much that I can optimise myself. The majority of the time is consumed by this function module.

Here is the main logic where it loops through an internal table for vdarl

http://codeviewer.org/view/code:1117

here is a breakdown of the runtime

http://farm5.static.flickr.com/4099/4931329631_163d4fa70c_b.jpg


Could you check ZCL_DBSA_REPORTS=>GET_UNIV_TRANS_REP ?

This takes 58.4% net time; that is where you should start to optimize...


Hi, I can't see much room for improvement in "ZCL_DBSA_REPORTS=>GET_UNIV_TRANS_REP": that section of the code makes use of internal tables where possible, and transfers between the DB and application server are kept to a minimum. So far, I have eliminated a few unnecessary layers.

What I really need advice on is this: is it better to create internal tables for VDBEKI and VDBEPI (cash flows), each holding up to a million records, or should I stick to using LOAN_FLOWS_SUPPLY, which retrieves cash flows on a per-loan basis but incurs the cost of fetching flows from the DB for every loan?

Which is more expensive in terms of time: #1 loading two massive internal tables and using READ; #2 fetching flows for each loan by going to the DB for each one and selecting via a join from VDBEKI & VDBEPI; or #3 going the middle route, creating an internal table for VDBEKI and selecting the VDBEPI flows from the DB using a SELECT * FOR ALL ENTRIES IN it_vdbeki for that specific loan?


> Which is more expensive in terms of time: #1 loading 2 massive internal tables and using READ, #2 fetching flows for each loan by going to the DB for each and selecting via a join from VDBEKI & VDBEPI, or #3 going the middle route and creating an internal table for VDBEKI and selecting VDBEPI flows from the DB using a select * for all entries in IT_VDBEKI for that specific loan?

Usually you should go for #3, but it depends (for example, on how many entries will be in your internal table for VDBEKI compared to the total number of rows in the DB). You will end up with big internal tables, so you must make sure you use a SORTED TABLE (probably for VDBEPI; I haven't checked all your code); that is crucial, otherwise you will end up with worse performance.
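
The "crucial" part could be sketched like this (an illustration only; the key fields are taken from the join shown earlier in the thread, and `ls_beki` is an assumed work area for the current VDBEKI header):

```abap
* Keep the big flow buffer SORTED so each per-loan access is a
* binary search instead of a full table scan.
DATA: lt_bepi TYPE SORTED TABLE OF vdbepi
        WITH NON-UNIQUE KEY bukrs rbelkpfd,
      ls_bepi TYPE vdbepi.

* Inside the loan loop: a LOOP restricted on the leading key fields
* of a sorted table is optimized automatically by the kernel.
LOOP AT lt_bepi INTO ls_bepi
     WHERE bukrs    = ls_beki-bukrs
       AND rbelkpfd = ls_beki-rbelkpfd.
*  collect the flows belonging to this document header
ENDLOOP.
```

With a STANDARD table the same loop would scan all million rows for every loan, which is exactly the worse-performance trap mentioned above.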

But why don't you try yourself and post your results?

Anyway, Vincent Zhao is right: in your SE30 trace, LOAN_FLOWS_SUPPLY is "only" 25% (gross), while GET_UNIV_TRANS_REP has 58% net. Sticking with "but my problem really is..." is a common mistake; if you ask questions in the forum, you have to be prepared to accept the answers and at least consider what people are suggesting.

Calling a method is not in itself heavy, so 58% net time means the time is being spent on something that is not shown in the analysis; usually that is internal table operations, since they are not measured by the default SE30 variant (if that is the case, you probably see a big percentage for ABAP on the first screen of the SE30 analysis). Expensive internal table operations usually mean READs or LOOPs over big non-sorted/non-hashed tables. If I were you, I would run SE30 with a variant that includes internal table operations, and then check where your biggest net time is.

Hope this helps,

Rui Dantas


Former Member

This is your main issue: one execution of ZCL_DBSA_REPORTS=>GET_UNIV_TRANS_REP requires 700 sec.

17,000 executions of LOAN_FLOWS_SUPPLY require the rest, the 360 sec.

I would assume that your internal table handling is suboptimal. You could show the full code of ZCL_DBSA_REPORTS=>GET_UNIV_TRANS_REP.

But to make it easier, run the ABAP trace again and switch on internal table tracing; that will break down the 700 sec.

Later you should provide the SQL trace for the test case; maybe it is possible to optimize the VDBEKI fetch with a self-written buffer.


Hi Siegfried, here is the code for the zcl_dbsa_reports=>get_univ_trans_rep method

http://codeviewer.org/view/code:119a

here is trace with internal table tracing switched on

http://farm5.static.flickr.com/4149/4963227226_3ecb66939e_b_d.jpg


You should check the lines with more net time.

In the runtime analysis, if you select "Read Table IT_854" and press "Display Source Code" (Ctrl+F7), what line does it take you to?


Hi Rui

It takes me to the first line in the code below:

*   loan dates
    read table lt_zcml_loan_dates into ls_zcml_loan_dates
    with key bukrs = ls_output-bukrs
             ranl = ls_output-ranl
             dateno = gc_51.
    if sy-subrc eq 0.
      ls_output-actdate_51 = ls_zcml_loan_dates-actdate.
      ls_output-repyear_51 = ls_zcml_loan_dates-repyear.
      ls_output-repper_51  = ls_zcml_loan_dates-repper.
    endif.
    read table lt_zcml_loan_dates into ls_zcml_loan_dates with key dateno = gc_53.
    if sy-subrc eq 0.
      ls_output-actdate_53 = ls_zcml_loan_dates-actdate.
      ls_output-repyear_53 = ls_zcml_loan_dates-repyear.
      ls_output-repper_53  = ls_zcml_loan_dates-repper.
    endif.


Hi Bijo,

Then that means the READ on lt_zcml_loan_dates alone is taking 35% of the execution time. You have two of those READs, though the runtime analysis only shows one (probably because you selected FULL aggregation).

In the second READ, do you really want to read with dateno only, or should it also use bukrs and ranl?

Anyway, change it to a SORTED TABLE keyed by dateno, bukrs, ranl, and you should be OK. Then measure again to confirm the improvement, and to check where you should attack next.
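
The suggested declaration might look like this (a sketch; the line type `zcml_loan_dates` is assumed from the variable names in the code shown above):

```abap
* Sorted table with dateno as the leading key field, so that both
* READs below use a binary search.
DATA: lt_zcml_loan_dates TYPE SORTED TABLE OF zcml_loan_dates
        WITH NON-UNIQUE KEY dateno bukrs ranl,
      ls_zcml_loan_dates TYPE zcml_loan_dates.

* First read: the full key is specified -> binary search.
READ TABLE lt_zcml_loan_dates INTO ls_zcml_loan_dates
     WITH KEY dateno = gc_51
              bukrs  = ls_output-bukrs
              ranl   = ls_output-ranl.

* Second read: dateno alone is the left-most key field -> still
* a binary search rather than a linear scan.
READ TABLE lt_zcml_loan_dates INTO ls_zcml_loan_dates
     WITH KEY dateno = gc_53.
```

Putting dateno first in the key is what lets the second, dateno-only READ stay fast; a key ordered bukrs, ranl, dateno would optimize only the first one.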


Hi Rui,

Changing the table to a sorted table has had a substantial effect. I changed the internal table lt_zcml_loan_dates to a sorted table with a unique key on bukrs, ranl and dateno, and created a second sorted table with a non-unique key on dateno (for the second READ statement). As you can see from the before and after images, using the same set of data, the table read time has decreased substantially.

One more thing: the most costly SQL statement in this application is the following (within the LOAN_FLOWS_SUPPLY function module):

SELECT * INTO TABLE wrk_t_beki_bepi
      FROM (    vdbeki AS vdbeki INNER JOIN vdbepi AS vdbepi
            ON  vdbeki~mandt    = vdbepi~mandt
            AND vdbeki~bukrs    = vdbepi~bukrs
            AND vdbeki~rbelkpfd = vdbepi~rbelkpfd )
      WHERE      vdbeki~bukrs    EQ  i_bukrs
        AND      vdbeki~ranl     EQ  i_ranl
        AND      vdbeki~sstorno  IN wrk_sstorno
        AND      vdbepi~dvalut   LE i_dvalut_until
        AND      vdbepi~dfaell   LE i_dfaell_until
        AND      vdbepi~ddispo   LE i_ddispo_until
        AND      vdbepi~dfaell   GE i_due_date_from
        AND      vdbepi~dfaell   LE i_due_date_until
        AND (    vdbeki~s_compr  IN wrk_compr      "       NOTE 580603
              OR vdbeki~s_compr  IS NULL        ). "  

Will creating indexes on vdbeki and vdbepi accommodating the fields in the SELECT above have a substantial effect on the runtime of this program? (sstorno and s_compr are not key fields of vdbeki; dvalut, dfaell and ddispo are not key fields of vdbepi.)

Thank you for your advice so far; you have been a great help.

before

http://farm5.static.flickr.com/4127/4990770194_469d01a62f_b.jpg

after

http://farm5.static.flickr.com/4089/4990165823_a1ac695d6e_b.jpg

Edited by: Bijo Samuel on Sep 14, 2010 7:50 PM


Hi Bijo,

In my system those tables have no entries, so I can't really check, but as you said at the very beginning, it seems that the access is good but is done many times. You should make a trace and check it in ST05, but I suppose the query is using index VDBEKI~4 (on BUKRS and RANL). If you check in SE16, how many entries are there on average for a given BUKRS and RANL? If it is a small number, then your access is already good and you can't improve much with additional indexes.

The solution would be to read the data directly with FOR ALL ENTRIES (or maybe directly with a join with VDARL), instead of making a loop and calling LOAN_FLOWS_SUPPLY. This, however, means you have to replicate the logic inside the standard function; if all you want from the function is O_VDBEKI and O_VDBEPI, then that should be pretty straightforward.
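
A sketch of what that bulk read could look like (hedged: `lt_vdarl` is assumed to be the already-selected loans table, the key fields come from the join shown in this thread, and the date restrictions from the original WHERE clause are omitted here and would need to be re-applied):

```abap
DATA: lt_beki TYPE SORTED TABLE OF vdbeki
        WITH NON-UNIQUE KEY bukrs ranl,
      lt_bepi TYPE SORTED TABLE OF vdbepi
        WITH NON-UNIQUE KEY bukrs rbelkpfd.

* One round trip for all loan headers instead of one per loan.
IF lt_vdarl IS NOT INITIAL.
  SELECT * FROM vdbeki INTO TABLE lt_beki
    FOR ALL ENTRIES IN lt_vdarl
    WHERE bukrs = lt_vdarl-bukrs
      AND ranl  = lt_vdarl-ranl.
ENDIF.

* One round trip for all line items, keyed by the headers found.
IF lt_beki IS NOT INITIAL.
  SELECT * FROM vdbepi INTO TABLE lt_bepi
    FOR ALL ENTRIES IN lt_beki
    WHERE bukrs    = lt_beki-bukrs
      AND rbelkpfd = lt_beki-rbelkpfd.
ENDIF.

* The per-loan loop then reads these sorted tables in memory
* instead of calling LOAN_FLOWS_SUPPLY for every single loan.
```

The IS NOT INITIAL checks matter: a FOR ALL ENTRIES select with an empty driver table would ignore that condition and fetch the whole database table.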

Hope this helps,

Rui


One more thing: you might also want to check whether these two related OSS notes apply to you.

They are a modification to the standard, so consider whether you really want to do it. They promise a 30% improvement.

Note 947139 - Poor performance when system reads documents

Note 969396 - Convert table VDBEPI to Oracle Index-Organized Table (IOT)

Regards,

Rui Dantas

HermannGahm
Product and Topic Expert

Hi Bijo,

In order to help you with the SELECT, we would need all the execution details from ST04 / ST05.

If you try the notes Rui mentioned, please let us know the results. I have heard from different people who applied the suggestions with great and not-so-great success, so I am interested in other experiences.

Kind regards,

Hermann


Hi Rui, Hermann

Sorry for the delayed reply. The average number of records in vdbeki and vdbepi for a typical loan (RANL & BUKRS combination) is around 200.

Some background info:

I am currently testing this on our QA system, which runs on a virtual machine. The QA system has about 8 clients, 4 of which are used for data migration. There is a database view called VDBEVI which is almost a combination of VDBEKI and VDBEPI. The project has not yet gone live; the current data in vdarl, vdbeki and vdbepi is just historical data.

VDARL (Loans Master): 17,925 records

VDBEKI (Flow Data: Document Header for Actual Record): 1,074,417 records

VDBEPI (Posted line items for document header): 1,074,417 records

There is a 1-to-1 relationship between VDBEKI and VDBEPI.

I have attached a pic of the linkage between the 3 tables.

http://farm5.static.flickr.com/4126/4998435206_49630dde67.jpg

This file contains the SQL trace of the report for a single loan:

http://www.filedropper.com/summarytracefor1loan

This file contains the summary by SQL statement of the trace above:

http://www.filedropper.com/tracelistfor1loan

This file contains the SQL trace of the report for 200 loans:

http://www.filedropper.com/summarytracefor200loans

A zip file with the 3 files above:

http://www.filedropper.com/traces

Thank you for finding those notes. I read through the information in the notes you specified. I will need to sit with all the involved parties, discuss the implications and get the go-ahead before I can implement them. For the time being, I am waiting for a copy of the loans data to be made in one of the dev testing clients; then I can try the note in the dev environment. I will provide feedback once it is implemented. As for the second note, we are using a SQL Server DB as far as I'm aware, so I'm not sure whether it is applicable?

Actions taken so far:

- changing suitable internal tables to sorted tables

- removing additional layers/wrappers which incurred unnecessary copy/time overheads

Alternatives to consider:

- adding indexes to both vdbeki and vdbepi to accommodate the join specified in my previous post

- rewriting the standard LOAN_FLOWS_SUPPLY, using internal tables for vdarl and vdbepi

- applying the notes

Thanks

Bijo

Edited by: Bijo Samuel on Sep 20, 2010 4:37 PM
