Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
Hello everyone,

I am from the SAP Gateway team. This post will give you a glimpse of what a $batch request really means in OData.

What questions does this post answer?

How do you implement the $batch concept in an existing OData service?

How do you implement changeset processing so that data modification operations stay in sync with each other?

What is new in the $batch concept as such?

How can read operations also be kept in sync with one another?


Prerequisites: the basics of how to create a service in SEGW or via an OData-enabled CDS view, and how to register the service and extract the relevant data from it.

What will be your takeaway after reading this post?

A technical understanding of $batch implementation. Skim the post roughly first; then you can take a deep dive into the technical details.

Concepts are explained in layman's terms at the beginning, to make it easier, and technically in the later part.

Let's get started.

Batch processing enables multiple retrieval operations in a single HTTP request, executed in parallel.
A changeset request can be posted along with GET requests, but it will not be processed in parallel.
Example: if a batch contains two GET requests and one changeset, the two GET requests are processed in parallel first, and then the changeset is processed.
To activate batch processing, the configuration below needs to be activated in SPRO.
Maximum number: the number of batch queries that can be executed in parallel.
It is recommended to choose the number of parallel processes with system performance in mind, because the existing work processes are allocated to the job based on this number.

Example: say a batch contains 3 GET requests, 2 POST requests, 1 change request and 1 delete request, with the maximum number configured as 3 as depicted above:
1st step: 3 GET requests (in parallel)
2nd step: 1 POST request
3rd step: 1 POST request
4th step: 1 change request
5th step: 1 delete request
However, this configured maximum of parallel requests is overridden if only a few work processes are available in the system for parallel processing.
This configuration can also be disabled for a particular service, if required, with the transaction below.

Enable deactivation via the checkbox below.

Only the GET requests will be executed in parallel.
A batch request must always carry the HTTP header 'Content-Type' with the value 'multipart/mixed;boundary=<boundary value>'.


Batch request:

The body of a batch request is made up of an ordered series of retrieve operations and/or change sets.
A change set is an atomic unit of work that is made up of an unordered group of one or more of the insert, update or delete operations.
Change sets cannot contain retrieve requests and cannot be nested, that is, a change set cannot contain a change set.
The batch boundary in the HTTP header "Content-Type" specified in the Gateway client is valid only for retrieve operations.
For update/delete/create requests, a separate boundary for the changeset must be specified in addition to the batch boundary in the Content-Type header.

Basic Rule before firing a Gateway Batch call:

After each GET, PUT, POST or DELETE statement line, there must be a blank line before the next "batch" or "changeset" boundary line, as depicted below; otherwise the request results in an error.
In the screenshot, the GET statement is in line 5 and the closing batch boundary in line 8.
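To make the boundary and blank-line rules concrete, here is a minimal sketch of an OData V2 $batch body with one GET request followed by a changeset containing one POST. The service path reuses the one from this post; the entity set name SalesOrderSet, the payload fields and the boundary values are illustrative only:

```http
POST /sap/opu/odata/SAP/ZRM_BATCH_LEARNING_SRV/$batch HTTP/1.1
Content-Type: multipart/mixed; boundary=batch_abc

--batch_abc
Content-Type: application/http
Content-Transfer-Encoding: binary

GET SalesOrderSet('500000001') HTTP/1.1

--batch_abc
Content-Type: multipart/mixed; boundary=changeset_xyz

--changeset_xyz
Content-Type: application/http
Content-Transfer-Encoding: binary

POST SalesOrderSet HTTP/1.1
Content-Type: application/json

{ "SalesOrderID": "500000002" }

--changeset_xyz--

--batch_abc--
```

Note the blank line between each GET/POST statement block and the next boundary line, as required by the rule above, and how the changeset part declares its own boundary in its Content-Type header.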

Example of a batch request containing only entity reads:

Some basic framework rules for the execution of a $batch request:

1. Before each GET, POST, DELETE or PUT statement, the following headers are always present by default:
Content-Type: application/http
Content-Transfer-Encoding: binary

2. Within a batch request, each individual request (GET/changeset) is separated by the batch boundary value as a prefix.

Within a changeset, each operation is in turn separated by the changeset boundary value, which is defined at the beginning of the changeset's Content-Type header.

SAP processes CREATE/UPDATE/DELETE operations in exactly the order in which they are defined in the input.
So it is the business's responsibility to take care of the sequence in which the changeset calls of a batch are defined.

The responses in a batch response correspond exactly to the order of the retrieval/change operations in the batch request.

Each response includes a Content-Type header with a value of application/http, and a Content-Transfer-Encoding MIME header with a value of binary.

One or more update/delete/insert operations can be used within a changeset, but with such a template you must make sure there is no COMMIT WORK statement within any of the UPDATE/CREATE/DELETE entity implementations.

If one exists, the system will short-dump the request and no further processing will happen.

Each changeset is processed as a single LUW (logical unit of work), so ideally no COMMIT WORK statement is required.

So each changeset is either fully processed or fails completely.

So why use batch processing at all, and why must update/delete/create operations go into a changeset?

Performance Improvement -the main reason behind batch processing.

This parallel query processing is executed only for local and hub (frontend) systems that have exactly one registered backend system.

When you use Multi-Origin Composition with multiple backend systems, parallel query processing is not triggered; each and every request is processed separately.

This is because in class /IWFND/CL_TRANSACTION_HANDLER the method SET_IS_MDC sets the flag to ABAP_TRUE.

In /IWFND/CL_MGW_RUNT_RCLNT_PRXY, method /IWFND/IF_MGW_CORE_RUNTIME~READ_ENTITY checks whether it is a multi-origin composition request (the flag set in the transaction handler). If so, method CHECK_USE_CENTRAL_RFC of class /IWFND/CL_MGW_RUNT_RCLNT_PRXY performs a further check, and each request is then processed separately.

When Multi-Origin Composition is used separately for an entity set, parallelization is enabled automatically within each system alias.

The other way of issuing a batch request in a multi-origin scenario, if you want data to be retrieved from a single backend only, is to use the origin option in the URI.

Example: /sap/opu/odata/SAP/ZRM_BATCH_LEARNING_SRV;o=GXX_000/$batch

This enables parallelization, provided the required configuration is done in the backend.

As of SAP NetWeaver 7.40 SP09, batch processing performance has been improved by introducing a new API for changeset processing in defer mode.

The interface /IWBEP/IF_MGW_APPL_SRV_RUNTIME provides the methods CHANGESET_BEGIN, CHANGESET_PROCESS and CHANGESET_END, which can be implemented in each DPC_EXT class.


Deferred mode: For Performance improvement

Changeset processing can be improved further if the data provider can handle the whole changeset at once.

That means the provider must implement the new changeset handling API and process all changeset operations within the new method CHANGESET_PROCESS.

In this case the data provider must return the results of all operations back to the Gateway framework.
The difference in the call stack can be seen clearly below.

To use defer mode, it is mandatory to redefine /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN of the service; this is where CV_DEFER_MODE must be set to ABAP_TRUE.

There is also an option to disable defer mode for certain entity types or operations via the operation information, so that those requests follow the normal process instead of going through CHANGESET_PROCESS.

The CHANGESET_BEGIN method has the importing parameter IT_OPERATION_INFO with the fields ENTITY_TYPE, ENTITY_SET, OPERATION_TYPE, CONTENT_ID and CONTENT_ID_REF, which gives us a chance to decide whether to switch CV_DEFER_MODE on or off.
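As a minimal, hedged sketch (assuming the standard signature of the interface method), such a redefinition could look like this:

```abap
METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_begin.
  " Process the whole changeset at once in CHANGESET_PROCESS.
  cv_defer_mode = abap_true.
  " If required, inspect IT_OPERATION_INFO here (fields ENTITY_TYPE,
  " ENTITY_SET, OPERATION_TYPE, CONTENT_ID, CONTENT_ID_REF) and set
  " cv_defer_mode = abap_false for combinations that should follow
  " the classic per-operation processing instead.
ENDMETHOD.
```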

Put simply, in defer mode the response structures for all entities are set together via the changing parameter table CT_CHANGESET_RESPONSE.

Either a full response or no response.


In the internal table IT_CHANGESET_REQUEST there is a field REQUEST_CONTEXT which carries entity-specific details. It must be down-cast to interface /IWBEP/IF_MGW_REQ_ENTITY_C, whose method GET_ENTITY_TYPE_NAME returns the entity type, so that we can perform the required action according to the field OPERATION_TYPE.

The field ENTRY_PROVIDER in the internal table holds the data of the entity. It references interface /IWBEP/IF_MGW_ENTRY_PROVIDER, whose method READ_ENTRY_DATA reads the payload into a result structure typed dynamically according to the entity structure.

The operation number of each changeset request must be copied into the operation number field of the corresponding changeset response structure, because the operation number is the unique field that maps each response to its request in the input order.
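Putting the last three points together, a skeleton CHANGESET_PROCESS might look like the sketch below. The entity type name, the MPC structure type and the response field names are assumptions to be adapted to your own service, and exact parameter structures may vary by release:

```abap
METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_process.
  DATA ls_response LIKE LINE OF ct_changeset_response.
  DATA ls_header   TYPE zcl_zrm_batch_mpc=>ts_salesorderheader.  " hypothetical type

  LOOP AT it_changeset_request ASSIGNING FIELD-SYMBOL(<ls_request>).
    " Down-cast the generic request context to read entity information
    DATA(lo_context) = CAST /iwbep/if_mgw_req_entity_c( <ls_request>-request_context ).

    CASE lo_context->get_entity_type_name( ).
      WHEN 'SalesOrderHeader'.  " hypothetical entity type
        " Read the payload of this operation into the entity structure
        <ls_request>-entry_provider->read_entry_data( IMPORTING es_data = ls_header ).
        " ... perform the create/update/delete here - no COMMIT WORK ...
        " Map the response back to the request via the operation number
        ls_response-operation_no = <ls_request>-operation_no.
        copy_data_to_ref( EXPORTING is_data = ls_header
                          CHANGING  cr_data = ls_response-entity_data ).
        INSERT ls_response INTO TABLE ct_changeset_response.
    ENDCASE.
  ENDLOOP.
ENDMETHOD.
```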

'Defer mode' is most useful for CREATE requests within a $batch request; this use case can be identified via the CONTENT_ID and CONTENT_ID_REF fields of the IT_CHANGESET_REQUEST parameter in CHANGESET_PROCESS.

If a hierarchical CREATE request is to be performed via batch processing in a single changeset, this Content-ID concept can be used.

Example: I have a changeset request in which I must create a 'Sales Order Header' entity and a 'Sales Order Item' entity.

In this case I want to either create both the sales order header and item entities using the request information, or nothing at all.

In this case the CHANGESET_BEGIN and CHANGESET_PROCESS methods can be used to ensure this.
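In the batch payload, such a hierarchical create can be sketched as below: the item POST addresses the header via its Content-ID ($1), and in CHANGESET_PROCESS the second operation then arrives with CONTENT_ID_REF = '1'. The entity set names, the navigation property ToItems and the payload fields are illustrative only:

```http
--changeset_xyz
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 1

POST SalesOrderHeaderSet HTTP/1.1
Content-Type: application/json

{ "SalesOrderID": "500000002" }

--changeset_xyz
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 2

POST $1/ToItems HTTP/1.1
Content-Type: application/json

{ "ItemPosition": "10" }

--changeset_xyz--
```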

The $batch navigation and framework expand work just like in non-batch processing.

Framework expand: expands both the principal entity and the dependent entity given in the URI.

Two entity results will be displayed


Navigation: expands only the dependent entity, based on the key fields of the principal entity.

Data provider expand: works like framework expand, but here we can alter the returned entity structure so that both the principal and the dependent entity come back in one result structure (ER_ENTITY).

Deferred response creation for batch: GET requests only

It will deactivate BATCH parallelization and CRP handling.


This feature has to be enabled in the MPC_EXT class of the service.

Sample implementation:

Two main methods have been introduced in the DPC generator class.


The logic in the BATCH_BEGIN method above checks whether there is any request other than a READ request.

It will allow only GET_ENTITY

If any other operations (changesets) are present, the deferred response creation mode is disabled.

The implementation to enable Deferred response creation must be done in method /IWBEP/IF_MGW_APPL_SRV_RUNTIME~BATCH_BEGIN.

When a batch request is executed in a hub deployment of Gateway, the method PROCESS_BATCH of class /IWBEP/CL_MGW_REMOTE_HANDLER is executed; in the co-deployed case it is the method of the same name in /IWBEP/CL_MGW_LOCAL_HANDLER.

Once BATCH_BEGIN and BATCH_END are implemented for a service, IT_REQUEST_HEADER will contain the value 'SAP-IW-BATCH_DEF_RESP_CREA'.

So when the class /IWBEP/CL_MGW_REMOTE_HANDLER or /IWBEP/CL_MGW_LOCAL_HANDLER calls the PROCESS_BATCH method and this request header is set, the flag for deferred response creation is set.

Execution is then carried out based on the number of (CRUD) operations in the $batch request.


When the batch request is processed, a first check is made for each operation: is it a query ('GET') request or a changeset (modifying) request?



The batch request is processed in packets of information, so let's first see how query requests are categorized further.

Only One ‘GET’ Request in $batch request:

If only one request/operation is present in the received batch request, it is a direct call to process the request, without the batch parallelization concept.

This is done in method PROCESS_SINGLE_BATCH_QUERY with the logic below.

Based on the function code value in structure IS_BATCH_INFO_REQUEST-FUNCTION_CODE, the framework determines whether the received request targets an entity type, an entity set, an update, a create, a delete or some other operation.

The method PROCESS_REQUEST processes the data retrieval requests.

The exported structure IT_REQUEST_HEADER will contain the function code value.

Multiple GET requests in the $batch request: a check is made whether the service is enabled for parallelization, with the class and method below.


This checks in the global configuration (/N/IWBEP/GLOBAL_CONFIG) the maximum number of parallel processing requests and whether batch parallelization is enabled for the service.

It in turn checks whether parallelization is disabled for that particular service, with the class below.

The incoming requests are parallelized with the class below.



This is used if batch deferred response creation is implemented via the BATCH_BEGIN method.

So, this way, even for GET requests we can use a deferred response: either all results or none.

Each individual request is processed first in method PROCESS_REQUEST_INT, for entity read, delete, update and create requests as well as metadata and vocabulary text retrieval.

The response is built using this

Error handling:

Either all results are displayed or nothing, even for GET requests, similar to changeset requests.

Implementation example for batch deferred-mode response creation

Redefinition has to be done for the methods below.

Additionally, if required for your business case, redefine the methods below as well.

Sample implementation:


First, to enable batch deferred response mode, enable it in the MPC_EXT model features as explained previously.

Redefine the method and set the deferred response creation flag to ABAP_TRUE.

Introduce a new table attribute in the DPC_EXT class.
This consolidates the responses from the GET and expanded (entity, entity set) methods so that they can be returned in one shot in the BATCH_END method.


Step 4: Now redefine the GET and expanded (entity, entity set) methods of the interface as below.
This is required to consolidate the response and display it in one shot in the BATCH_END method.

Consolidate and return the result in the BATCH_END method as below.
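The steps above can be sketched roughly as follows. The attribute MT_BATCH_RESPONSE and the entity set method name are hypothetical, the parameter names in BATCH_BEGIN and BATCH_END are assumed by analogy with the changeset API, and the exact signatures should be verified in your release:

```abap
" Redefined BATCH_BEGIN: request deferred response creation.
METHOD /iwbep/if_mgw_appl_srv_runtime~batch_begin.
  cv_defer_mode = abap_true.  " parameter name assumed, analogous to CHANGESET_BEGIN
ENDMETHOD.

" Redefined GET_ENTITYSET: collect the result in the new DPC_EXT table
" attribute instead of letting the framework render each response alone.
METHOD salesorderset_get_entityset.  " hypothetical entity set method
  " ... select the data into et_entityset as usual ...
  APPEND LINES OF et_entityset TO mt_batch_response.
ENDMETHOD.

" Redefined BATCH_END: return the consolidated table in one shot.
METHOD /iwbep/if_mgw_appl_srv_runtime~batch_end.
  copy_data_to_ref( EXPORTING is_data = mt_batch_response
                    CHANGING  cr_data = cr_deferred_response ).  " parameter name assumed
ENDMETHOD.
```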

Call Stack: With Deferred response creation enabled

With deferred response creation disabled, line 34 in the call stack is different.