Application Development Blog Posts
bruno_esperanca
Contributor

Intro


 

Throughout my career as an ABAP developer I have found very few real-world cases for parallel processing in ABAP. The truth is, 90% of the time poor performance comes from an unreasonable number of database accesses, often in the order of several thousand, which is easily fixed by changing the selection strategy; another 9.9% of the time it comes from not defining internal tables properly (as sorted or hashed tables where suitable), and so on. Only very rarely does a situation arise where performance is best improved by parallel processing, and when such a situation finally comes around, the developer either forgets that this is possible, it simply doesn't come to mind, or they can't be bothered to learn how it works.

So, this blog post serves mainly to raise awareness that parallel processing is possible in SAP ABAP, and actually quite easy, and maybe it will help someone get it working, even if I think the technical part is not very challenging. The "inspiration" for this came from the following link:

 

Coming to the topic at hand: at my company we bought and installed a piece of software called "Transport Profiler" (hopefully this is not confidential information), which checks all transport requests planned for import into production for integrity, missing dependencies, or any other issues that might prevent a smooth import. The software is sometimes a bit unstable, but it has become quite helpful; I can recommend it.

Anyway, since we have several productive systems, to make it practical for the operator checking the results we created a program that can be executed from a central system (Solution Manager in our case), trigger the Transport Profiler API for each productive system, gather the results, and display them to the operator. Typically this would be done sequentially, but it's an excellent opportunity for parallel processing, so I took it.

 

Implementation


 

The first step is creating a remote-enabled function module that will be called for each system. The same applies to other cases, for example if you need to perform some expensive calculations that you'd like to run in parallel: you need a remote-enabled function module either way, because that's simply how this mechanism works.

 


Figure 1 - The function that will be called in parallel
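In case the figure doesn't render, here is a rough sketch of what such a remote-enabled function module looks like. The parameter names are taken from the calls in the snippets below; the type of EV_INSPECTION_ID is an assumption on my part. The crucial part is the "Remote-Enabled Module" processing type in SE37 (which also forces all parameters to be passed by value):

```abap
FUNCTION zsol_run_tp_api.
*"----------------------------------------------------------------------
*"*"Local Interface (processing type: Remote-Enabled Module):
*"  IMPORTING
*"     VALUE(IT_TRKORRS) TYPE  TRKORRS
*"     VALUE(IV_RFCDEST) TYPE  RFCDEST
*"  EXPORTING
*"     VALUE(EV_CONTAINS_ERROR) TYPE  FLAG
*"     VALUE(EV_INSPECTION_ID)  TYPE  STRING   "assumed type
*"----------------------------------------------------------------------

* Call the Transport Profiler API in the target system via the
* trusted RFC destination and return the results
* ...

ENDFUNCTION.
```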


 

I will simply copy/paste the code I wrote to call this in parallel. As in my situation we only need 3 processes, and we pretty much always have enough processes available, my resource handling is atrocious. In another case, I created a proper loop that waits a few seconds and retries several times to get a free process; if it keeps failing for too long, it gives up. You can find an "inspiration source" in the link above.
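For completeness, here is a sketch of what such a retry loop could look like. This is my own variation, not code from the link; it reuses the names from the snippets below (gc_server_group, retrieve_info, gv_snd_jobs, gv_rcv_jobs), and the retry count and wait time are arbitrary:

```abap
DO 10 TIMES.  "give up after 10 attempts

  CALL FUNCTION 'ZSOL_RUN_TP_API'
    STARTING NEW TASK lv_taskname
    DESTINATION IN GROUP gc_server_group
    PERFORMING retrieve_info ON END OF TASK
    EXPORTING
      it_trkorrs = it_trkorrs
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.

  IF sy-subrc <> 3.  "task started, or a hard error: stop retrying
    EXIT.
  ENDIF.

* No free dialog work process. Wait for replies to earlier
* asynchronous calls, which should free up processes, then retry
  WAIT UNTIL gv_rcv_jobs >= gv_snd_jobs UP TO 5 SECONDS.

ENDDO.
```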

 

The idea is simple... call the function module that initializes the environment, whatever that means... and from this initialization you can also get the number of available processes. This can be helpful if you want to guarantee, for example, that you don't completely drain the system's resources and always leave enough margin for other activities. Next, you call the remote-enabled function module that performs the calculations (or whatever you want to do in parallel), specifying the subroutine responsible for gathering the results. You can use the "taskname" counter to identify each call, or simply include some identification in the results of the function module; it's a design decision, whichever you feel most comfortable with. The code snippets should be enough for you to understand how this works.
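As a sketch of that first step: SPBT_INITIALIZE can also return the number of work processes in the server group (if I remember correctly, via the MAX_PBT_WPS and FREE_PBT_WPS parameters), which you can use to cap the number of parallel tasks. The halving factor below is just an example:

```abap
DATA lv_max_wps  TYPE i.
DATA lv_free_wps TYPE i.

CALL FUNCTION 'SPBT_INITIALIZE'
  EXPORTING
    group_name                  = gc_server_group
  IMPORTING
    max_pbt_wps                 = lv_max_wps   "processes in the group
    free_pbt_wps                = lv_free_wps  "currently free
  EXCEPTIONS
    pbt_env_already_initialized = 1
    OTHERS                      = 4.

* Leave some margin for other activities, e.g. use at most
* half of the currently free processes
DATA(lv_max_tasks) = lv_free_wps / 2.
```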

If you need more information or something is not clear, feel free to ask.

 
FUNCTION ZSOL_CHECK_TP_PROD.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(IT_TRKORRS) TYPE  TRKORRS
*"  EXPORTING
*"     VALUE(EV_ERROR_OCCURRED) TYPE  FLAG
*"     VALUE(ET_TP_RESULTS) TYPE  ZSOL_TP_PROD_RESULTS_TT
*"----------------------------------------------------------------------

  DATA lt_systems       TYPE tt_system.
  DATA lv_taskname(4)   TYPE n VALUE '0001'.
  DATA ls_check_results LIKE LINE OF gt_check_results.
  DATA lv_progress_text TYPE syst_ucomm.
  DATA lv_percentage    TYPE i.

  PERFORM clear_global_variables.
  CLEAR:
    ev_error_occurred,
    et_tp_results.

* Something to check?!
  CHECK it_trkorrs IS NOT INITIAL.

  PERFORM get_systems
    CHANGING
      lt_systems.

* Any system to check?!
  CHECK lt_systems IS NOT INITIAL.

  PERFORM initialize_pbt_environment
    CHANGING ev_error_occurred.
  CHECK ev_error_occurred IS INITIAL.

  LOOP AT lt_systems INTO DATA(ls_system).

*   Save information of taskname and system
    CLEAR ls_check_results.
    ls_check_results-taskname     = lv_taskname.
    ls_check_results-check_system = ls_system-system.
    ls_check_results-check_client = ls_system-client.
    INSERT ls_check_results INTO TABLE gt_check_results.

    PERFORM start_trans_profiler_remote
      USING
        lv_taskname
        ls_system
        it_trkorrs.

    ADD 1 TO lv_taskname.
    ADD 1 TO gv_snd_jobs.

  ENDLOOP.

  DO.

    IF gv_rcv_jobs >= gv_snd_jobs.
      EXIT.
    ENDIF.

    WAIT UP TO 1 SECONDS.

    CHECK gv_number_of_systems IS NOT INITIAL "CYA
      AND gv_rcv_jobs > 0.

    lv_percentage    = gv_rcv_jobs * 100 / gv_number_of_systems.
    lv_progress_text = |Results received from { gv_rcv_jobs } servers.|.
    CALL FUNCTION 'SAPGUI_PROGRESS_INDICATOR'
      EXPORTING
        percentage = lv_percentage
        text       = lv_progress_text.

  ENDDO.

  et_tp_results = gt_check_results.

ENDFUNCTION.

Code snippet 1 - Code of the main function module


 
*----------------------------------------------------------------------*
***INCLUDE LZSOL_TPF01.
*----------------------------------------------------------------------*
*&---------------------------------------------------------------------*
*& Form GET_SYSTEMS
*&---------------------------------------------------------------------*
*  Get systems from customizing
*----------------------------------------------------------------------*
FORM get_systems
  CHANGING
    ct_systems TYPE tt_system.

  DATA lt_values TYPE TABLE OF zsol_attr_value.
  DATA ls_system LIKE LINE OF ct_systems.

  CLEAR ct_systems.

  SELECT attrval FROM zsol_setting
    INTO TABLE lt_values
    WHERE area     = 'TRANSPORT_PROFILER'
      AND attrname = 'SYSTEMS_FOR_PROD_CHECK'.
  CHECK sy-subrc = 0.

  LOOP AT lt_values INTO DATA(value).

    CLEAR ls_system.

    SPLIT value AT '.' INTO ls_system-system ls_system-client.

    INSERT ls_system INTO TABLE ct_systems.

  ENDLOOP.

  gv_number_of_systems = lines( ct_systems ).

ENDFORM.
*&---------------------------------------------------------------------*
*& Form RETRIEVE_INFO
*&---------------------------------------------------------------------*
*  Callback form: receives the results of one asynchronous RFC task
*----------------------------------------------------------------------*
FORM retrieve_info USING iv_taskname.

  DATA lv_contains_error TYPE flag.

* Count the reply in any case, even on failure, otherwise the
* caller's DO loop waits forever
  ADD 1 TO gv_rcv_jobs.

  READ TABLE gt_check_results ASSIGNING FIELD-SYMBOL(<lf_results>)
    WITH KEY taskname = iv_taskname.
  CHECK sy-subrc = 0.

  RECEIVE RESULTS FROM FUNCTION 'ZSOL_RUN_TP_API'
    IMPORTING
      ev_contains_error = lv_contains_error
      ev_inspection_id  = <lf_results>-inspection_id
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2.
  IF sy-subrc <> 0.
    <lf_results>-check_result = 'E'.
    RETURN.
  ENDIF.

  IF lv_contains_error = abap_true.
    <lf_results>-check_result = 'E'.
  ELSE.
    <lf_results>-check_result = abap_false.
  ENDIF.

ENDFORM.
*&---------------------------------------------------------------------*
*& Form INITIALIZE_PBT_ENVIRONMENT
*&---------------------------------------------------------------------*
*  Initialize the PBT environment, whatever it is
*----------------------------------------------------------------------*
FORM initialize_pbt_environment
  CHANGING
    pv_error_occurred TYPE flag.

  CALL FUNCTION 'SPBT_INITIALIZE'
    EXPORTING
      group_name = gc_server_group
    EXCEPTIONS
      pbt_env_already_initialized = 1
      OTHERS                      = 4.
  IF sy-subrc > 1.
    pv_error_occurred = abap_true.
  ENDIF.

ENDFORM.
*&---------------------------------------------------------------------*
*& Form START_TRANS_PROFILER_REMOTE
*&---------------------------------------------------------------------*
*  Start Transport Profiler in the remote system
*----------------------------------------------------------------------*
FORM start_trans_profiler_remote
  USING
    iv_taskname TYPE n
    is_system   TYPE ty_system
    it_trkorrs  TYPE trkorrs.

  DATA lv_excp_flag.                     "Number of RESOURCE_FAILUREs
  DATA lv_progress_text TYPE syst_ucomm.

  zcl_utilities=>get_rfc_destination(
    EXPORTING
      im_tech_sys_name = is_system-system                   " Extended System ID
      im_client        = is_system-client                   " The ABAP Client
      im_rfc_type      = zcl_utilities=>co_rfc_type-trusted " RFC Destination Type
    IMPORTING
      ex_rfc_dest      = DATA(ls_trusted_rfc_dest)          " Logical Destination
  ).

  PERFORM filter_irrelevant_trkorrs
    USING
      is_system
    CHANGING
      it_trkorrs.

  CALL FUNCTION 'ZSOL_RUN_TP_API'
    STARTING NEW TASK iv_taskname
    DESTINATION IN GROUP gc_server_group
    PERFORMING retrieve_info ON END OF TASK
    EXPORTING
      it_trkorrs = it_trkorrs
      iv_rfcdest = ls_trusted_rfc_dest
    EXCEPTIONS
      communication_failure = 1  "MESSAGE lv_msg
      system_failure        = 2  "MESSAGE lv_msg
      resource_failure      = 3. "No work processes are free
  IF sy-subrc <> 0.
    MESSAGE 'All servers currently busy' TYPE 'S'. "i837
    "Wait for replies to asynchronous RFC calls. Each
    "reply should make a dialog work process available again.
    IF lv_excp_flag = space.
      lv_excp_flag = 'X'.
      WAIT UNTIL gv_rcv_jobs >= gv_snd_jobs UP TO '1' SECONDS.
    ELSE.
      WAIT UNTIL gv_rcv_jobs >= gv_snd_jobs UP TO '5' SECONDS.
      IF sy-subrc = 0.
        CLEAR lv_excp_flag.
      ELSE. "No replies
        "Endless loop handling
        ...
      ENDIF.
    ENDIF.
  ENDIF.

  lv_progress_text = |Transport Profiler started on { gv_snd_jobs + 1 } servers.|.
  CALL FUNCTION 'SAPGUI_PROGRESS_INDICATOR'
    EXPORTING
      percentage = 1
      text       = lv_progress_text.

ENDFORM.
*&---------------------------------------------------------------------*
*& Form CLEAR_GLOBAL_VARIABLES
*&---------------------------------------------------------------------*
*  Reset the global counters and the results table
*----------------------------------------------------------------------*
FORM clear_global_variables.

  CLEAR:
    gv_snd_jobs,
    gv_rcv_jobs,
    gt_check_results,
    gv_number_of_systems.

ENDFORM.
*&---------------------------------------------------------------------*
*& Form FILTER_IRRELEVANT_TRKORRS
*&---------------------------------------------------------------------*
*  It could be that some transport requests were already imported
*  or don't exist. Let's not run Transport Profiler on them
*----------------------------------------------------------------------*
FORM filter_irrelevant_trkorrs
  USING
    is_system  TYPE ty_system
  CHANGING
    ct_trkorrs TYPE trkorrs.

  DATA lt_trkorrs LIKE ct_trkorrs.

  LOOP AT ct_trkorrs INTO DATA(lv_trkorr).

    TRY.

        DATA(tr_order) = NEW zcl_sol_trorder( lv_trkorr ).

        DATA(cofile) = tr_order->read_cofile( ).

      CATCH zcx_bobj.
        CONTINUE.
    ENDTRY.

    DATA system LIKE LINE OF cofile-systems.
*   This is NOT very elegant
    CASE is_system-system.
      WHEN 'QG1'.
        READ TABLE cofile-systems INTO system
          WITH KEY systemid = 'PG1'.
      WHEN 'T19'.
        READ TABLE cofile-systems INTO system
          WITH KEY systemid = 'P19'.
      WHEN 'T33'.
        READ TABLE cofile-systems INTO system
          WITH KEY systemid = 'P33'.
    ENDCASE.
    CHECK sy-subrc = 0.
    "Has the transport request already been imported?
    READ TABLE system-steps TRANSPORTING NO FIELDS
      WITH KEY stepid = 'I'.
    CHECK sy-subrc <> 0.
*   No, so add it
    APPEND lv_trkorr TO lt_trkorrs.

  ENDLOOP.

  ct_trkorrs[] = lt_trkorrs[].

ENDFORM.

Code Snippet 2 - Subroutines


 

Conclusions


 

Parallel processing is not so hard to implement. In situations where users wait a long time and there isn't much room for improvement otherwise, for example when contacting several systems via RFC, or processing many sales orders that each have to execute the pricing procedure, parallel processing is an effective way of reducing waiting times for the user, thus giving the impression of improved performance. It should *not*, however, be used as a replacement for proper database access strategies and proper coding.