Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Running background reports with internal table parameters.


Hello Friends,

I have a query. The scenario is that I want to process a large database table, say BUT000, and change some non-key field values.

A simple approach would be to:

- execute a SELECT statement and retrieve the records into an internal table,

- change the values in the internal table,

- then update the database table from the internal table.

However, for large tables, performance issues will occur and the program might time out.

So one solution would be to run reports in the background in batches, each report processing 100 entries of the table.

The algorithm would be somewhat like this (please ignore the syntax):


DO until all the records are processed.

         FETCH the next 100 records INTO lt_tab[ ].

         SUBMIT change_report WITH lt_tab[ ] IN BACKGROUND mode.

ENDDO.


  1. One way to achieve this is to use EXPORT/IMPORT to ABAP MEMORY. However, for this to work, all the SUBMITted reports would need to execute in the same work process, and we know that reports in background jobs run in different work processes. So this is not a solution.
  2. Another way is to use EXPORT/IMPORT to SHARED MEMORY or SHARED BUFFER. However, this stores the internal tables in the application buffer of one application server, so all the background reports would need to run on that same application server to have access to the shared data.
  3. One more solution is to EXPORT/IMPORT the data to a database cluster table such as the INDX cluster table:

EXPORT itab FROM itab TO DATABASE indx(ar) CLIENT sy-mandt ID job_number.

IMPORT itab TO itab FROM DATABASE indx(ar) CLIENT sy-mandt ID job_number.

However, I am not sure whether this will work across application servers. My guess is that this solution should hold even if the background reports run on different application servers of the system.
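For illustration, point 3 could be sketched like this, where the report name Z_CHANGE_REPORT, the job name, its P_ID parameter, and the ID construction are made-up placeholders. Since INDX is an ordinary transparent database table, the IMPORT should work no matter which application server the background job lands on:

```abap
* --- caller: export the package under a unique ID, then schedule the job
DATA: lt_tab      TYPE STANDARD TABLE OF but000,
      lv_id       TYPE indx-srtfd,
      lv_jobcount TYPE tbtcjob-jobcount.

lv_id = |CHG{ sy-datum }{ sy-uzeit }001|.   " one unique ID per package

EXPORT itab = lt_tab TO DATABASE indx(ar) CLIENT sy-mandt ID lv_id.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING jobname  = 'Z_CHANGE_JOB'
  IMPORTING jobcount = lv_jobcount.

SUBMIT z_change_report
  WITH p_id = lv_id
  VIA JOB 'Z_CHANGE_JOB' NUMBER lv_jobcount
  AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING jobname   = 'Z_CHANGE_JOB'
            jobcount  = lv_jobcount
            strtimmed = 'X'.

* --- inside z_change_report: read the package back and clean up
IMPORT itab = lt_tab FROM DATABASE indx(ar) CLIENT sy-mandt ID p_id.
DELETE FROM DATABASE indx(ar) CLIENT sy-mandt ID p_id.
```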

Could anyone guide me as to whether my understanding of the above three points is correct? Any comments on point 3?

Are there any more solutions to this scenario?

With Kind regards,



Former Member

Hi Sameer,

1. Yes, you can't use ABAP memory.

2. For shared memory and the shared buffer, you need to take care of the size of those two areas. The shared buffer is cleared automatically when it reaches its maximum limit, so of the two, the shared buffer is the better option.

Alternatively, you can use OPEN DATASET.
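For reference, a minimal OPEN DATASET sketch: write each package to a file on the application server and read it back in the submitted report. The path and the record type are placeholders, and the main caveat is that the file must sit on a share that every application server can reach:

```abap
TYPES: BEGIN OF ty_rec,           " character-only placeholder structure
         partner TYPE c LENGTH 10,
         name    TYPE c LENGTH 40,
       END OF ty_rec.

DATA: lt_tab  TYPE STANDARD TABLE OF ty_rec,
      ls_rec  TYPE ty_rec,
      lv_file TYPE string VALUE '/usr/sap/trans/tmp/package_001.dat'.

* writer: serialize the package to a server-side file
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
LOOP AT lt_tab INTO ls_rec.
  TRANSFER ls_rec TO lv_file.
ENDLOOP.
CLOSE DATASET lv_file.

* reader (in the submitted report): rebuild the internal table
OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
DO.
  READ DATASET lv_file INTO ls_rec.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  APPEND ls_rec TO lt_tab.
ENDDO.
CLOSE DATASET lv_file.
```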

I would like to know how much data the table holds. If it is really large, how about deleting all the data after you extract it from the database table, and then inserting it back? I think INSERT will be faster than MODIFY.

Let's wait for Nabheet to help you as well.




Hi Dengyong,

Thanks for the suggestions. But deleting the table records and then adding the modified entries is not an option, because I have to change only a few records in the table. The issue is that, to change these few records, I have to scan the entire table, and this may result in a timeout.

But I am interested in this OPEN DATASET approach that you mentioned. Could you tell me more about it?




Yes but deleting the table records and then adding the modified entries is not an option.

How do you plan to modify the key fields of a table entry without using delete/insert or delete/modify?

Could you share a working code snippet that is able to modify a key field of a single record without using delete/insert or delete/modify?


Hi Manish,

I am modifying a non-key field.

So there is no delete/insert required as is the case when you modify a key field.

My concern, to be concise, is how I can SUBMIT reports in the background so that these reports can share data among themselves, even if the reports run on different application servers.



Former Member


Try parallel processing.

Parallel Processing - ABAP Development - SCN Wiki
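In outline, that approach wraps the update logic in an RFC-enabled function module and dispatches one asynchronous call per package; a hedged sketch, where Z_CHANGE_PACKAGE and its IT_PACKAGE parameter are hypothetical names:

```abap
DATA: lt_package TYPE STANDARD TABLE OF but000,
      lv_task    TYPE c LENGTH 12,
      lv_index   TYPE i.

DO.
* fetch the next package of 100 records into lt_package here;
* leave the loop when no records remain
  IF lt_package IS INITIAL.
    EXIT.
  ENDIF.
  lv_index = lv_index + 1.
  lv_task  = |TASK{ lv_index }|.
* one asynchronous call per package; the system distributes the
* tasks over the free dialog work processes of the RFC server group
  CALL FUNCTION 'Z_CHANGE_PACKAGE' STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    TABLES
      it_package = lt_package
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.
ENDDO.
```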




Hi Debopriyo,

Yes, I did think of that, but I am not using any function module inside the report that could be executed in a new task (for parallel processing).

This is a case where I have encapsulated simple logic in a report, but since the report runs over a large number of table records, I am thinking of running it in batches.

I have read many documents, and I somehow think that point 3 (in my original post) can achieve my requirement. However, I am not sure because I cannot test this scenario.

So any idea on Point 3 or any more approaches?




Active Contributor

Hello Sameer,

Have you tried specifying PACKAGE SIZE in your SELECT query? I guess you can try something like the example below.

The following example processes data from table MAST, 2000 records per iteration.

SELECT <field 1>

       <field 2>

       ...

       <field n>

  FROM mast

  INTO TABLE i_mast

  PACKAGE SIZE 2000

  WHERE <selection criteria>.


* the internal table i_mast will always have <= 2000 records

  LOOP AT i_mast INTO w_mast.

*   process the data in the internal table
*   do the changes in the internal table and update the database table

  ENDLOOP.

ENDSELECT.

Hope it helps. Please try it and let me know your results.




Hi Thanga,

No, in my opinion, this will also not work, because in the end you are still processing a large number of records, even if it is done packet-wise.

Using a package size will avoid a timeout at the SELECT statement itself. However, it does not guarantee that a timeout will not occur for the execution of the entire report.

The more records there are, the more round trips to the database (one round trip per package of 2000 records), and hence the more time it takes to execute the report.

So if not at the SELECT statement, a timeout can still occur while executing the entire report.

Moreover, I am already using cursor logic that fetches a package of 100 records at a time, so your logic is equivalent to what I am already using.

But thanks for the suggestions. Do you have any comments on point no. 3 which I have mentioned in my original post?



Active Contributor

I think the problem is not so much a timeout, as you can (and probably should) run this in the background.

The issue is more that you should do a COMMIT WORK every so often to avoid overflow of log files / redo logs (or whatever these are called in your database).

I have suggested this approach in the past:

  DATA lf_cursor TYPE cursor.


  OPEN CURSOR lf_cursor FOR
    SELECT ...  FROM ... WHERE ...

  DO.

    FETCH NEXT CURSOR lf_cursor

          INTO TABLE ...

          PACKAGE SIZE 10000.

    IF sy-subrc NE 0.

      CLOSE CURSOR lf_cursor.

      EXIT.

    ENDIF.

*   put your update logic here
*   (followed by a COMMIT WORK per package)

  ENDDO.

I don't think SUBMIT or EXPORT/IMPORT workarounds are really required for this (if I understand correctly).



Hi Thomas,

Yes, but if the number of records in the database table is high, then the FETCH of 10000 records will take place multiple times. This, coupled with the time taken to execute the "update logic", can cause a timeout.




Hi Sameer

A better option would be to create an FM with these internal table parameters and call it in parallel. In this case you don't need to worry about the import/export stuff.
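Sketched out, that could look as follows, assuming a remote-enabled function module Z_CHANGE_PACKAGE with a TABLES parameter (all names are hypothetical); a RECEIVE RESULTS callback lets the caller know when each task finishes:

```abap
FUNCTION z_change_package.
*"  remote-enabled module (processing type "Remote-Enabled Module")
*"  TABLES it_package STRUCTURE but000
  LOOP AT it_package.
*   change only non-key fields, then write back
    MODIFY but000 FROM it_package.
  ENDLOOP.
  COMMIT WORK.
ENDFUNCTION.

* caller: dispatch each package asynchronously and collect results
CALL FUNCTION 'Z_CHANGE_PACKAGE'
  STARTING NEW TASK lv_task
  DESTINATION IN GROUP DEFAULT
  PERFORMING task_done ON END OF TASK
  TABLES
    it_package = lt_package.

FORM task_done USING p_task TYPE clike.
  RECEIVE RESULTS FROM FUNCTION 'Z_CHANGE_PACKAGE'
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2.
ENDFORM.
```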