2016 Aug 01 11:12 AM
Hi Experts,
I have an internal table with 20,000 records.
This internal table is looped over, and for every record the following BAPI is called:
CALL FUNCTION 'BAPI_USER_GET_DETAIL' DESTINATION lv_dest
Since this BAPI is called for every record, it takes 30 minutes to get through the loop.
If the BAPI call is commented out, the loop takes only 2 seconds, but I need the user details, so I cannot skip this BAPI.
Please let me know a solution to overcome this performance issue.
Regards,
Sree
2016 Aug 24 11:32 AM
Improved the performance by implementing the parallel processing technique.
Earlier it took 40 minutes to execute; after the changes it took only 2 minutes.
2016 Aug 01 1:31 PM
You can increase the performance using these 2 methods (can be combined) :
1) Since you use RFC, avoid doing 20,000 RFC calls; instead, do only 1 call to a custom RFC-enabled function module, which calls the BAPI 20,000 times locally.
EDIT :
As I read my answer a few days later, I realize that proposition "1" above is incorrect, because mass processing is not what RFC is meant for: its execution is time-limited and should be short. So I propose another way:
1) Avoid doing 20,000 RFC calls (RFC adds non-negligible overhead); instead, you may run a background job on the remote server and query it from time to time until it's over.
2) BAPIs are great, except when it comes to mass processing and the performance is bad. In this case, there are 2 ways to improve the performance :
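The background-job variant of suggestion 1) could look roughly like this on the remote system. This is only a sketch: the function module Z_START_USER_EXTRACT and the report ZUSER_EXTRACT are made-up placeholders, and how the results are stored and picked up is left open.

```abap
" Remote-enabled wrapper (made-up name): schedules the mass extraction
" as a batch job on the remote system instead of 20,000 synchronous RFC calls.
FUNCTION z_start_user_extract.
  DATA: lv_jobcount TYPE tbtcjob-jobcount.

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = 'ZUSER_EXTRACT'
    IMPORTING
      jobcount = lv_jobcount.

  " ZUSER_EXTRACT (placeholder) loops over the users locally and stores
  " the results, e.g. in a Z-table, for the caller to fetch later
  SUBMIT zuser_extract VIA JOB 'ZUSER_EXTRACT' NUMBER lv_jobcount
         AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = 'ZUSER_EXTRACT'
      jobcount  = lv_jobcount
      strtimmed = abap_true.   " start the job immediately
ENDFUNCTION.
```

The calling system would then poll the job status (or a status flag in the Z-table) from time to time and fetch the results once the job is finished.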
2016 Aug 01 5:41 PM
Hi Sreekanth N,
BAPI_USER_GET_DETAIL is slow, like every SAP function module, but not too slow.
As you do an RFC call, is it a remote system? If so, make sure the RFC connection is not established for each and every call. SAP will keep the RFC session in memory if the next call to and from the same system comes within a certain time (I don't know exactly how long, but probably enough in your case).
Check your program for any calls to the function modules RFC_CONNECTION_CANCEL or
RFC_CONNECTION_CLOSE. These should be avoided, as they enforce a reload of the program(s), which is performance-robbing.
Also you can always try to make use of parallel processing to speed up the process.
Regards
Clemens
2016 Aug 01 7:56 PM
Hi Sreekanth,
Based on the large number of records and the information in the ABAP Help, I suspect that 'STARTING NEW TASK ... DESTINATION IN GROUP' for the 20,000 records would be a better solution for parallel processing, as Clemens suggested. Make sure to supply all the parameters required.
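A minimal sketch of that asynchronous-RFC pattern is below. It assumes a user table lt_users with a field bname, a callback form receive_result, and counters gv_sent/gv_received; all of these names are hypothetical, and real code would also need to re-dispatch the record that failed with RESOURCE_FAILURE.

```abap
DATA: gv_sent     TYPE i,
      gv_received TYPE i,
      lv_task     TYPE string.

LOOP AT lt_users INTO ls_user.
  lv_task = |USR{ sy-tabix }|.
  CALL FUNCTION 'BAPI_USER_GET_DETAIL'
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    PERFORMING receive_result ON END OF TASK
    EXPORTING
      username              = ls_user-bname
    EXCEPTIONS
      system_failure        = 1
      communication_failure = 2
      resource_failure      = 3.
  IF sy-subrc = 0.
    gv_sent = gv_sent + 1.
  ELSEIF sy-subrc = 3.
    " No free work process: wait for some results before continuing
    WAIT UNTIL gv_received > 0.
  ENDIF.
ENDLOOP.

" Wait until all dispatched tasks have delivered their results
WAIT UNTIL gv_received >= gv_sent.

FORM receive_result USING p_task TYPE clike.
  " Collect the BAPI result for this task and count it as received
  RECEIVE RESULTS FROM FUNCTION 'BAPI_USER_GET_DETAIL'
    IMPORTING
      address = ls_address
    TABLES
      return  = lt_return.
  gv_received = gv_received + 1.
ENDFORM.
```

DESTINATION IN GROUP uses an RFC server group maintained in transaction RZ12; DEFAULT takes all available servers.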
https://scn.sap.com/thread/3518253
Regards,
Hiran
2016 Aug 02 1:02 PM
Hi,
This FM may ultimately get its values from tables, i.e. USR* or US*. So you may also think of calling RFC_READ_TABLE in a single call, for 1 table at a time.
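A rough sketch of such a call against USR02 (logon data) is below. This is an assumption-laden example, not a drop-in answer: the fields chosen are illustrative, RFC_READ_TABLE rows are capped at 512 characters, and each OPTIONS line may be at most 72 characters, so a WHERE clause covering 20,000 users would have to be split over many lines or several calls.

```abap
DATA: lt_options TYPE STANDARD TABLE OF rfc_db_opt,
      lt_fields  TYPE STANDARD TABLE OF rfc_db_fld,
      lt_data    TYPE STANDARD TABLE OF tab512,
      ls_option  TYPE rfc_db_opt,
      ls_field   TYPE rfc_db_fld.

" Restrict the columns to stay under the 512-character row limit
ls_field-fieldname = 'BNAME'. APPEND ls_field TO lt_fields.
ls_field-fieldname = 'USTYP'. APPEND ls_field TO lt_fields.

" WHERE clause: max 72 characters per OPTIONS line, so a large
" IN-list must be chunked across lines (or across calls)
ls_option-text = `BNAME IN ('USER1', 'USER2')`.
APPEND ls_option TO lt_options.

CALL FUNCTION 'RFC_READ_TABLE' DESTINATION lv_dest
  EXPORTING
    query_table = 'USR02'    " one table per call; USR21 etc. analogous
    delimiter   = '|'
  TABLES
    options     = lt_options
    fields      = lt_fields
    data        = lt_data.   " rows come back as '|'-delimited strings
```

Note that this only returns raw table fields; anything BAPI_USER_GET_DETAIL derives or combines from several tables would have to be reassembled by hand.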
Thanks,
Ashok
2016 Aug 03 2:38 PM
Hi Ashok, can you share sample code on how to fetch user data for 20,000 users in a single stretch with the FM you suggested?