Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Parallel processing for account posting skipping records

Former Member
0 Kudos

Hello All,

I am using the parallel processing method for account posting. I have created a Z-BAPI wrapping BAPI_ACC_DOCUMENT_POST, and I commit inside the same Z-BAPI.

   CALL FUNCTION 'ZBAPI_ACC_DOCUMENT_POST'
     STARTING NEW TASK 'POST'
     DESTINATION 'NONE'
     EXPORTING
       documentheader    = gs_header
     TABLES
       accountgl         = gt_item
       accountreceivable = gt_acctr
       accountpayable    = gt_acctpy
       currencyamount    = gt_curr
       criteria          = gt_crit
       return            = gt_return
       it_bapiret        = gt_bapiret2
     EXCEPTIONS
       communication_failure = 1 MESSAGE msg
       system_failure        = 2 MESSAGE msg
       resource_failure      = 3.
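One detail worth checking with this call (a sketch under assumptions, not something confirmed in this thread): when no dialog work process is free, the call raises resource_failure and sets sy-subrc = 3, and unless the caller retries, that record is silently dropped. A minimal retry loop, reusing the variables above; the retry limit and wait time are assumptions:

```abap
DATA: msg TYPE c LENGTH 80.        " message text from the RFC exceptions

DO 10 TIMES.                       " retry limit is an assumption
  CALL FUNCTION 'ZBAPI_ACC_DOCUMENT_POST'
    STARTING NEW TASK 'POST'
    DESTINATION 'NONE'
    EXPORTING
      documentheader    = gs_header
    TABLES
      accountgl         = gt_item
      accountreceivable = gt_acctr
      accountpayable    = gt_acctpy
      currencyamount    = gt_curr
      criteria          = gt_crit
      return            = gt_return
      it_bapiret        = gt_bapiret2
    EXCEPTIONS
      communication_failure = 1 MESSAGE msg
      system_failure        = 2 MESSAGE msg
      resource_failure      = 3.

  CASE sy-subrc.
    WHEN 0.
      EXIT.                        " task dispatched successfully
    WHEN 3.
      WAIT UP TO 1 SECONDS.        " wait for a free work process, then retry
    WHEN OTHERS.
      " communication/system failure: log msg and handle the record
      EXIT.
  ENDCASE.
ENDDO.
```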

When I schedule the program in the background to upload, say, 150,000 (1.5 lakh) records at a time, some records are getting skipped (sometimes 7,000 records, sometimes 100). We are not getting the full count in table BKPF. I also tried WAIT UP TO '0.5' SECONDS in the Z-BAPI after the commit. Please tell me what the issue can be.


9 REPLIES

arindam_m
Active Contributor
0 Kudos

Hi,

It could be a locking conflict, as you are running parallel tasks on the same objects, and they work on the same number ranges and the same DB tables. Is that what you want to do? Make sure you lock and unlock properly in your code.

Cheers,

Arindam

Former Member
0 Kudos

Hello Arindam,

Thanks for your response! I have copied the standard BAPI into the Z-BAPI, so the locking and unlocking of the document number is taken care of by the BAPI itself. Do I need to add additional code for that?

0 Kudos

Hi,

Well, even though it's standard, it might still happen. Try introducing WAIT statements. Best would be to trigger multiple jobs so that each uses one background work process (usually many are available); you still end up with parallel processing and can also avoid the missing documents.

Cheers,

Arindam
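Arindam's suggestion can be sketched roughly like this (the packet table gt_packets, the report zpost_packet, and the job name are hypothetical; JOB_OPEN / SUBMIT ... VIA JOB / JOB_CLOSE is the standard way to schedule background jobs):

```abap
DATA: lv_jobname  TYPE tbtcjob-jobname,
      lv_jobcount TYPE tbtcjob-jobcount,
      lv_idx      TYPE n LENGTH 3.

" gt_packets: hypothetical table, one row per packet of documents
LOOP AT gt_packets INTO gs_packet.
  lv_idx = sy-tabix.
  CONCATENATE 'ZPOST_' lv_idx INTO lv_jobname.

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  " zpost_packet: hypothetical report that posts one packet
  SUBMIT zpost_packet
    WITH p_idx = lv_idx
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobcount  = lv_jobcount
      jobname   = lv_jobname
      strtimmed = 'X'.             " start the job immediately
ENDLOOP.
```

Each job then runs in its own background work process, so the packets post in parallel without all of them competing for dialog work processes.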

Former Member
0 Kudos

Hello... I have already put a WAIT UP TO statement in the Z-BAPI. Please let me know if you have any piece of code to "trigger multiple jobs and each will use one background process". Right now the job takes the available work processes. The program will be scheduled in background only.

tolga_polat
Active Participant
0 Kudos

Hi,

WAIT is not the solution for this. I used an importing parameter once.

Here is the example code : http://scn.sap.com/message/13993540#13993540

Regards

Tolga

Former Member
0 Kudos

Hello... Does PERFORMING ... ON END OF TASK slow down the program's performance? I have gone through your code. Is it that the calling program waits until all the tasks created while posting pass this flag finish = 'X'? Also, will this create a resource bottleneck?

0 Kudos

Yes, it can cause slowing. But in my scenario I needed to know first that a document had finished its work. You might ask why I needed parallel processing at all: I did this in an exit while posting a document, and writing without a parallel process caused a buffer overwrite anyway.

In your situation, I think you are posting independent documents and want to call the BAPI at the same time.

I think the data loss is caused by the update process. You used the same task name for all documents; try changing it like this:

CONCATENATE lv_count 'POST' INTO lv_task.

then

CALL FUNCTION 'Z....' STARTING NEW TASK lv_task etc.

With finish = 'X' you only get a time increase, no resource problem, because you force the program to wait until the function finishes its job (like WAIT, but with a variable wait time instead of a fixed one). For the time issue, I usually use another background job: first I pass all the data to a background job with a task named SAVE_DATA, for example, and inside it I call the posting with a new task POST_DATA, so the user does not wait until the jobs are done. And with finish = 'X', only two jobs are processing at the same time.
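Putting the two points above together, a rough sketch (the loop table gt_documents, the form name, and the reduced TABLES list are hypothetical): a unique task name per document, plus PERFORMING ... ON END OF TASK setting a finish flag that the caller waits on:

```abap
DATA: lv_count  TYPE n LENGTH 6,
      lv_task   TYPE c LENGTH 12,
      gv_finish TYPE c LENGTH 1.

" gt_documents: hypothetical table, one header per document to post
LOOP AT gt_documents INTO gs_header.
  lv_count = sy-tabix.
  CONCATENATE lv_count 'POST' INTO lv_task.   " e.g. 000001POST

  CLEAR gv_finish.
  CALL FUNCTION 'ZBAPI_ACC_DOCUMENT_POST'
    STARTING NEW TASK lv_task
    DESTINATION 'NONE'
    PERFORMING receive_result ON END OF TASK
    EXPORTING
      documentheader = gs_header
    TABLES
      accountgl      = gt_item
      currencyamount = gt_curr
      return         = gt_return
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.

  WAIT UNTIL gv_finish = 'X'.   " block until this task reports back
ENDLOOP.

FORM receive_result USING p_task TYPE clike.
  RECEIVE RESULTS FROM FUNCTION 'ZBAPI_ACC_DOCUMENT_POST'
    TABLES
      return = gt_return
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2.
  gv_finish = 'X'.              " signal the waiting caller
ENDFORM.
```

Note that WAIT UNTIL inside the loop serializes the postings, which matches the description above: you trade throughput for knowing that each document finished before the next one starts.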

Former Member
0 Kudos

Thanks, Tolga Polat... your solution seems to be working fine for my requirement.

0 Kudos

You are welcome.