Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.
Read only

Performance issue on commit work after BAPI_SALESORDER_SIMULATE

nicholas_archer3
Explorer
0 Likes
2,446

Hello,

We use FM BAPI_SALESORDER_SIMULATE to gather the prices of all our active materials per active customer, to be used by a 3rd-party program. The results are inserted into, or used to modify, a custom DB table, depending on whether the customer already exists in the table.

The program usually takes 7+ hours to run, whether or not there was an insertion or modification to the custom table. If we clear the table first, the program takes 4 hours to run and reinsert all of the customer/material prices.

I found that it takes approximately an hour and a half to perform the COMMIT WORK, even if no modifications are made to the DB table.

Can you suggest a reason why the commit takes so long and if there are any known ways of decreasing the time?

Nicholas

Edited by: Nicholas Archer on Jan 13, 2012 9:00 PM

1 ACCEPTED SOLUTION
Read only

volker_borowski2
Active Contributor
0 Likes
1,925

Hello,

All in all this sounds like a combination of several problems, possibly including faulty hardware.

First assumptions, regarding the 7 hours of normal operation vs. 4 hours when the customer table is "zeroed":

a) you have no DB statistics on these tables when they are filled from scratch

b) your "customer exists" check is either not supported by an index, or the index is not used because of a). That is why it is faster when there is no data in the table: the first checks are validated by short full table scans (FTS), which get longer and longer as the table grows.

I'd start an SE30 runtime analysis, try to find the offending DB statements, and check whether the statistics are up to date.
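
For illustration, a minimal sketch of what an index-supported existence check could look like (table, field and parameter names are borrowed from the code posted later in this thread; the assumption that these five fields form the primary key of the custom table is mine):

DATA lv_exists TYPE zsky_custprice-salesorg.

* full primary key in the WHERE clause, so the optimizer can use the
* primary index instead of a full table scan (FTS)
SELECT SINGLE salesorg INTO lv_exists
  FROM zsky_custprice
  WHERE salesorg = p_vkorg
    AND distchan = p_vtweg
    AND division = p_spart
    AND customer = custtab-kunnr
    AND material = mattab-matnr.

IF sy-subrc = 0.
* row exists -> candidate for UPDATE/MODIFY
ELSE.
* row does not exist -> candidate for INSERT
ENDIF.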

Volker

12 REPLIES 12
Read only

volker_borowski2
Active Contributor
0 Likes
1,926

Hello,

All in all this sounds like a combination of several problems, possibly including faulty hardware.

First assumptions, regarding the 7 hours of normal operation vs. 4 hours when the customer table is "zeroed":

a) you have no DB statistics on these tables when they are filled from scratch

b) your "customer exists" check is either not supported by an index, or the index is not used because of a). That is why it is faster when there is no data in the table: the first checks are validated by short full table scans (FTS), which get longer and longer as the table grows.

I'd start an SE30 runtime analysis, try to find the offending DB statements, and check whether the statistics are up to date.

Volker

Read only

0 Likes
1,925

Volker, thanks for your response. I need to provide additional information about the program.

First, when the program starts, all of the custom table data is stored in an internal sorted table, A.

The BAPI_SALESORDER_SIMULATE function is executed for every active customer and all active material prices are stored for each customer in an internal table, B.

For each line in table B, a read is done to check whether the customer/material combination exists in table A. If it does not exist, it is inserted; otherwise it is modified only if there is a change.

Given the current size of the custom DB table, it takes an additional hour and a half to read table A (approx. 0.02 seconds per read).

The remaining hour and 30 minutes is spent waiting for the commit to complete, whether or not there was a change to the DB table.

Is there any way for me to cut the commit time?

Nicholas

Read only

0 Likes
1,925

Hi,

No idea why a commit should take 30 minutes. What DB is this? Oracle?

What is your hit ratio on these checks, on average?

If you have on average 10 materials per customer in table B, and only 8 out of 10 customers from B exist in A, it might be worth splitting the check on table B into a group-oriented one.

Like:

Check if the customer exists first.

NO: you save 10 material checks; you need to insert all 10 rows anyway.

YES: process as normal.

With a ratio of about 1:10 and 8 out of 10, both ways of processing should perform similarly. But with more than 10 materials per customer, or fewer than 8 out of 10 existing customers, or especially if both ratios improve, processing group-wise should save you quite a few checks.
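
As a sketch of the group-oriented variant (the internal tables and their sorted key are taken from the code posted later in this thread; where exactly to hook this in is up to you):

* one existence check per customer instead of one per customer/material line;
* t_custprice is a sorted table with key SALESORG, DISTCHAN, DIVISION,
* CUSTOMER, MATERIAL, so a read on the leading key fields is served by a
* binary search
READ TABLE t_custprice TRANSPORTING NO FIELDS
     WITH KEY salesorg = custpricetab-salesorg
              distchan = custpricetab-distchan
              division = custpricetab-division
              customer = custpricetab-customer.

IF sy-subrc <> 0.
* customer not in the custom table at all:
* insert all of this customer's lines without any per-material check
ELSE.
* customer exists: keep the per-material check / update logic as before
ENDIF.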

The other option might be to skip the existence check completely.

Simply do the INSERT (which does the check internally anyway, given a unique key) and react with an UPDATE if SY-SUBRC reports a duplicate key on the INSERT.

Depending on which of the two succeeds more often on the first try, you might prefer to go the other way round: try the UPDATE and react with an INSERT when no row is found.

Doing any check by SELECTing first (or reading, in the case of internal tables) only makes sense if you need to validate conditions on non-key fields of the existing row for your further logic.

If you already have a unique row that needs to be stored, there is no need to select first only to decide whether to insert or to update (remember: this strategy requires a UNIQUE key).
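
As a sketch, using the custom table and work area that appear later in this thread (the unique key is assumed to be the primary key of ZSKY_CUSTPRICE):

* variant 1: INSERT first, fall back to UPDATE on a duplicate key
INSERT zsky_custprice FROM wa_custprice.
IF sy-subrc <> 0.                     "4 = a row with this key already exists
  UPDATE zsky_custprice FROM wa_custprice.
ENDIF.

* variant 2: UPDATE first, fall back to INSERT when nothing was updated
UPDATE zsky_custprice FROM wa_custprice.
IF sy-subrc <> 0.                     "4 = no row with this key was found
  INSERT zsky_custprice FROM wa_custprice.
ENDIF.

Pick whichever variant succeeds more often on the first statement.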

Volker

Another idea for the commit: check transaction DB01 for DB locks while the commit is running.

(Although I think a lock-based wait would already show up BEFORE the commit.)

Is that 30-minute commit a value from SE30?

Edited by: Volker Borowski on Jan 14, 2012 12:21 PM

Read only

0 Likes
1,925

Are you using COMMIT WORK AND WAIT?

Take out the WAIT; then the program won't wait for the commit to finish its work.

Moreover, check with the Basis team for database consistency; ask them to regenerate the indexes for the custom table, etc.

It should not take this much time just for a commit.
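
For reference, the two statements being discussed (not taken from the poster's program):

* synchronous: returns only after the update task has processed the registered updates
COMMIT WORK AND WAIT.

* asynchronous: triggers the commit and returns immediately
COMMIT WORK.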

Read only

0 Likes
1,925

Hello Nicholas,

could you please post the source code that makes the changes to the custom table (incl. the COMMITs)?

Then we can probably give you better advice.

Yuri

Read only

0 Likes
1,925

Hey,

I greatly appreciate all of your responses. I modified the code to allow parallel processing and removed the wait, and the run time has dropped from 7 to 4 hours. The commit, however, is still taking 1 hour and 30 minutes, even though no DB changes occurred in our QA system.

Volker, I'm going to try updating the DB without checking whether the value already exists, but if it works I will be completely lost as to why the commit takes so long when the DB isn't modified. At no point have I seen a DB lock (DB01) while the program is running. We're currently using Oracle 11.2.0.2.0.

Edited by: Nicholas Archer on Jan 16, 2012 6:04 PM

Read only

0 Likes
1,925

See code below.

data: custpricetab like zsky_custprice occurs 0 with header line.
data: t_custprice type sorted table of zsky_custprice
      with non-UNIQUE key SALESORG, DISTCHAN, DIVISION, CUSTOMER, MATERIAL.
data: wa_custprice like zsky_custprice.


form return_custom using taskname.

  rcv_jobs = rcv_jobs + 1.

  read TABLE t_task WITH TABLE KEY taskname = taskname.
  custtab-kunnr = t_task-kunnr.

  receive results from function 'BAPI_SALESORDER_SIMULATE'
    TABLES
      ORDER_ITEMS_OUT           = order_items_out
      ORDER_CONDITION_EX        = order_condition_ex
      MESSAGETABLE              = messagetable.

  functioncall1 = done.
  read table messagetable with key type = 'E'.
  if sy-subrc = 0.
    loop at messagetable where type = 'E'.
      write error msg.
    endloop.
    
  Else.
	...
	Store customer-material prices in custpricetab from order_items_out and order_header
	...
  Endif.
endform.                    "Set_custom



form custom.
  select * into table t_custprice
       from zsky_custprice.

  select * into table t_ZSKY_ORDER_TYPES
       from ZSKY_ORDER_TYPES.

  select * into table t_cust
       from zsky_cust.

  CLEAR: taskname, index, snd_jobs, rcv_jobs, exc_flag, mess, functioncall1.
free t_task.

...
Store active customers in internal table custtab
...

   loop at mattab.
        clear order_items_in.
        order_items_in-itm_number = ln.
        order_items_in-material = mattab-matnr.
        order_items_in-req_qty = 1000.
        clear work_meins.
        select single meins into work_meins
        from mara
        where matnr = mattab-matnr.
       order_items_in-sales_unit = work_meins.

        append order_items_in.
        ln = ln + 1.
  endloop.

  loop at custtab.
    index = sy-tabix.
    exc_flag = 0.


      clear: order_header_in.
      refresh:  order_partners, order_schedule_in, order_items_out, order_condition_ex.

      order_header_in-doc_type = 'ZOR'.

      order_header_in-sales_org = p_vkorg.
      order_header_in-distr_chan = p_vtweg.
      order_header_in-division = p_spart.
      order_header_in-req_date_h = sy-datum.

      clear order_partners.
      order_partners-partn_role = 'AG'.
      order_partners-partn_numb = custtab-kunnr.
      append order_partners.

      free: messagetable.

      wait_count = 0.
      CONCATENATE 'Task_' index into taskname. 
      CONDENSE taskname NO-GAPS.
      t_task-taskname = taskname.
      t_task-kunnr = custtab-kunnr.
      insert TABLE t_task.
      DO.
        CALL FUNCTION 'BAPI_SALESORDER_SIMULATE' STARTING NEW TASK taskname
          performing return_custom on end of task
            EXPORTING
              ORDER_HEADER_IN           = order_header_in
            TABLES
              ORDER_ITEMS_IN            = order_items_in
              ORDER_PARTNERS            = order_partners
              ORDER_SCHEDULE_IN         = order_schedule_in
          EXCEPTIONS
            system_failure = 1 MESSAGE mess
            communication_failure = 2 MESSAGE mess
            resource_failure = 3.

        CASE sy-subrc.
          WHEN 0.
            snd_jobs = snd_jobs + 1.
            exit.

          WHEN 1 OR 2.
*            Error.
          WHEN 3.
            wait_count = 1.
            IF snd_jobs >= 1 AND exc_flag = 0.
              exc_flag = 1.

              WAIT UNTIL rcv_jobs >= snd_jobs UP TO 5 SECONDS.
            ENDIF.

          WHEN OTHERS.
            MESSAGE 'Other error' TYPE 'I'.
        ENDCASE.

        if exc_flag = 1.
          exc_flag = 0.
        else.
          if wait_count = 1.
            error
          endif.
          wait_count = 0.
          exit.
        ENDIF.

      ENDDO.
    endif.

  endloop.

WAIT UNTIL rcv_jobs >= snd_jobs.

Read only

0 Likes
1,925
loop at custpricetab.
    index = sy-tabix.
    clear deleted.
    read table mattab with key matnr = custpricetab-material.
    if sy-subrc = 0 and mattab-lvorm = 'X'.
      deleted = 'X'.
    endif.

    clear: wa_custprice.
    read table t_custprice into wa_custprice with key salesorg = custpricetab-salesorg
                                                      distchan = custpricetab-distchan
                                                      division = custpricetab-division
                                                      customer = custpricetab-customer
                                                      material = custpricetab-material.

    if sy-subrc <> 0.
      if deleted <> 'X' and
      custpricetab-amount <> 0.
        clear wa_custprice.

        clear wa_custprice.
        wa_custprice-salesorg = custpricetab-salesorg.
        ...
        wa_custprice-del = ' '.
        insert zsky_custprice from wa_custprice.
      endif.
    else.
      if wa_custprice-amount <> custpricetab-amount or
         ...
         wa_custprice-grouproutine5 <> custpricetab-grouproutine5.

        move custpricetab-amount to wa_custprice-amount.
        ...
        move w_timestamp to wa_custprice-ldbtstamp.
        modify zsky_custprice from wa_custprice.

      endif.
    endif.
  endloop.

*   delete NJMA
    WRITE: / 'Start of customer price commit ', sy-UZEIT.
******************
  commit work.
*   delete NJMA
    WRITE: / 'End of customer price commit ', sy-UZEIT.
******************

endform.
Read only

0 Likes
1,925

Hello Nicholas,

thanks for posting the code.

If I understand correctly, you do the final processing in one loop.

You loop over all customer/material records and either insert or modify the table entries one by one.

When the complete loop is over, you do COMMIT WORK once. Right?

Our investigations show that single MODIFY or INSERT statements perform significantly worse than array inserts and updates. Please try to implement a packaging mechanism for your insert/update loop.

Build one internal table for the rows to be inserted and one for the rows to be updated. When one of them reaches the package size (e.g. 10,000 entries), do the INSERT/UPDATE ... FROM TABLE. Afterwards, do a COMMIT (see also below).
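
A minimal sketch of such packaging, reusing the names from the posted code; the package size and the two collector tables it_insert / it_update are placeholders of mine:

DATA: it_insert TYPE STANDARD TABLE OF zsky_custprice,
      it_update TYPE STANDARD TABLE OF zsky_custprice,
      lv_lines  TYPE i.
CONSTANTS: c_pack_size TYPE i VALUE 10000.

LOOP AT custpricetab.
  READ TABLE t_custprice INTO wa_custprice
       WITH KEY salesorg = custpricetab-salesorg
                distchan = custpricetab-distchan
                division = custpricetab-division
                customer = custpricetab-customer
                material = custpricetab-material.
  IF sy-subrc <> 0.
*   new row: fill wa_custprice from custpricetab as in the original code,
*   then collect it instead of inserting it one by one
    APPEND wa_custprice TO it_insert.
  ELSE.
*   existing row: after comparing the fields as in the original code,
*   collect it for the array UPDATE
    APPEND wa_custprice TO it_update.
  ENDIF.

* flush full packages with array operations and one commit per package
  DESCRIBE TABLE it_insert LINES lv_lines.
  IF lv_lines >= c_pack_size.
    INSERT zsky_custprice FROM TABLE it_insert.
    COMMIT WORK.
    CLEAR it_insert.
  ENDIF.
  DESCRIBE TABLE it_update LINES lv_lines.
  IF lv_lines >= c_pack_size.
    UPDATE zsky_custprice FROM TABLE it_update.
    COMMIT WORK.
    CLEAR it_update.
  ENDIF.
ENDLOOP.

* flush whatever is left over
INSERT zsky_custprice FROM TABLE it_insert.
UPDATE zsky_custprice FROM TABLE it_update.
COMMIT WORK.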

In your particular case I would also try calling function DB_COMMIT rather than COMMIT WORK, as the standard COMMIT WORK does more than simply issue a DB commit. It may be that BAPI_SALESORDER_SIMULATE registered subroutines or function modules to be executed ON COMMIT, and these cause the longer COMMIT runtimes.
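
The DB-commit-only variant is simply a call to the standard function module DB_COMMIT:

* database commit only - no ON COMMIT routines, no update-task processing
CALL FUNCTION 'DB_COMMIT'.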

Regards,

Yuri

Read only

0 Likes
1,925

Just want to say thanks to everyone who provided assistance; points will be assigned. I get the best performance (3 hrs) when the table is updated/inserted without checking whether the row already exists (thanks Volker).

I've tested all of your suggestions, and I'm still not sure what was causing the delay with the COMMIT WORK or the DB_COMMIT function call.

Anyway thanks again.

Read only

0 Likes
1,925


 data: t_custprice type sorted table of zsky_custprice
       with non-UNIQUE key SALESORG, DISTCHAN, DIVISION, CUSTOMER, MATERIAL.

Hi Nick,

I hope the table in question is not this "t_custprice", because, as I said, using the INSERT/UPDATE return codes in the described way only works correctly with a UNIQUE key.

If you adjusted this, or the table in question was a different one, I'm happy that it worked well.

Volker