Application Development Blog Posts
Learn and share on deeper, cross technology development topics such as integration and connectivity, automation, cloud extensibility, developing at scale, and security.
TimoStark
This is a direct answer to my own question.

You have the requirement to call an external, scalable REST microservice. Unfortunately, the service needs a few hundred milliseconds to respond, and you have to call it multiple times with different parameters. As the code is triggered by end users, you want to parallelize the REST calls.

So what is the "normal" solution for that problem? The standard ABAP answer is, of course: parallelization using work processes (e.g. via that wonderful library ZTHREAD). This approach has multiple problems though:

  1. You are the old, limiting SAP system. The other system is a nice, fancy, scalable microservice that can handle 100k requests per second. You only have around 100 work processes.

  2. Work processes are a scarce resource. If you parallelize the request above across 10 work processes and you have, say, 1,000 active users on your ERP system, the work processes will severely limit your possible parallelization. In the worst case you run out of work processes entirely, causing wait times everywhere.

  3. Work processes are a "one size fits all" approach. IMO they are heavily oversized for the simple requirement of firing multiple HTTP requests.

  4. Work processes are exhausting to write and especially to debug. Even with nice libraries like ZTHREAD, everyone knows the situation when 100 SAP GUI windows open because you accidentally set a breakpoint in code that is executed in parallel.


So what is the solution? It is actually extremely simple and even documented in the SAP Help (with somewhat dated code examples): you can simply call the send method of cl_http_client multiple times before receiving.

How does this work? Let's start with a very simple example of how to trigger an HTTP call sequentially:
DO 20 TIMES.
  cl_http_client=>create_by_url(
    EXPORTING
      url    = |{ base_part_of_url }{ sy-index }{ parameters_of_url }|
    IMPORTING
      client = DATA(client) ).
  client->request->set_method( if_rest_message=>gc_method_get ).
  client->send( ).
  client->receive( ).
  client->close( ).
ENDDO.

In my case each request takes around 100 ms and I have to call it 20 times, so without parallelization the whole loop takes around 2 seconds.

Solution: Do not call receive immediately after each send (receive is the part that blocks). Instead, push the cl_http_client instances into an internal table and, after all requests are sent, call receive on them one by one.
DATA clients TYPE STANDARD TABLE OF REF TO if_http_client.

DO 20 TIMES.
  cl_http_client=>create_by_url(
    EXPORTING
      url    = |{ base_part_of_url }{ sy-index }{ parameters_of_url }|
    IMPORTING
      client = DATA(client) ).

  APPEND client TO clients.

  client->request->set_method( if_rest_message=>gc_method_get ).
  client->send( ).
ENDDO.

LOOP AT clients INTO client.
  client->receive( ).
  client->close( ).
ENDLOOP.
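The minimal loop above ignores error handling entirely. In practice, create_by_url, send, and receive can all fail (e.g. on a communication error), and a failed client should be closed and skipped rather than left open. Below is a hedged sketch of the same pattern with classic exception handling via sy-subrc; the exact exception list may differ slightly per release, so check the class documentation on your system:

```abap
DATA clients   TYPE STANDARD TABLE OF REF TO if_http_client.
DATA responses TYPE STANDARD TABLE OF string.

DO 20 TIMES.
  cl_http_client=>create_by_url(
    EXPORTING
      url    = |{ base_part_of_url }{ sy-index }{ parameters_of_url }|
    IMPORTING
      client = DATA(client)
    EXCEPTIONS
      argument_not_found = 1
      plugin_not_active  = 2
      internal_error     = 3
      OTHERS             = 4 ).
  IF sy-subrc <> 0.
    " Client creation failed - log and skip this request
    CONTINUE.
  ENDIF.

  client->request->set_method( if_rest_message=>gc_method_get ).
  client->send(
    EXCEPTIONS
      http_communication_failure = 1
      http_invalid_state         = 2
      http_processing_failed     = 3
      OTHERS                     = 4 ).
  IF sy-subrc <> 0.
    client->close( ).
    CONTINUE.
  ENDIF.

  APPEND client TO clients.
ENDDO.

LOOP AT clients INTO client.
  client->receive(
    EXCEPTIONS
      http_communication_failure = 1
      http_invalid_state         = 2
      http_processing_failed     = 3
      OTHERS                     = 4 ).
  IF sy-subrc = 0.
    " Collect the response body; one entry per successful call
    APPEND client->response->get_cdata( ) TO responses.
  ENDIF.
  client->close( ).
ENDLOOP.
```

Note that a request that failed during send is closed immediately and never appended to the table, so the receive loop only touches clients with a pending response.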

The following chart compares the runtime of calling the requests sequentially vs. in parallel.

It is clearly visible that there is some overhead as well (probably on my receiving end), but the parallel variant still performs drastically better without a big increase in complexity.

 

 


Runtime Comparison


Additional remarks:

  • The documentation suggests using the static listen method instead of calling "receive" on each cl_http_client. This should, in theory, return whichever cl_http_client instance received data first. In my experiments, however, it was not really faster (at least when I have to wait for all queries to return anyway) and at the same time made the code more complex, because you have to match each returned response to the parameters you sent. With "hardcore" parallelization (i.e. making more than 1,000 parallel calls) this approach even crashed, while simply calling receive one by one remained stable.

  • Nothing comes for free. Ensure that the profile parameters icf/max_handle_key and icm/max_threads (check them in transaction RZ11) are set high enough in case you really use massive parallelization.
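For completeness, the listen-based variant mentioned in the first remark could look roughly like the sketch below. This is an assumption, not tested code: the exact parameter and exception names of cl_http_client=>listen vary by release, so verify them against the class documentation on your system before relying on it.

```abap
" Assumption: after all sends, listen returns whichever client has a
" complete response, in arrival order rather than send order.
DATA(remaining) = lines( clients ).
WHILE remaining > 0.
  cl_http_client=>listen(
    IMPORTING
      client = DATA(finished_client)
    EXCEPTIONS
      http_communication_failure = 1
      OTHERS                     = 2 ).
  IF sy-subrc <> 0.
    EXIT. " no more pending responses, or a communication error
  ENDIF.
  " finished_client is one of the clients sent earlier; matching it
  " back to the original request parameters is up to you.
  DATA(payload) = finished_client->response->get_cdata( ).
  finished_client->close( ).
  remaining = remaining - 1.
ENDWHILE.
```

As noted above, this buys you first-come-first-served processing at the cost of having to correlate responses to requests yourself.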


 