Application Development Discussions

Dump DBIF_RSQL_INVALID_CURSOR while writing data to the local PC

Former Member

Hi all,

I've been spending a lot of time on this issue without success. My requirement is to read data from several database tables and write it to the presentation server (the local PC).

Some of the tables are small and cause no problems, but some are up to 8 GB or 250,000 rows in size, while an internal table can hold only about 500 MB.

I tried both the FETCH NEXT CURSOR statement and a SELECT * ... ENDSELECT loop to read the data in packages (5,000 rows) into an internal table. The first pass works fine: the internal table handles the 5,000 rows without any problems. But after the first package has been written to the local Excel file, the program tries to fetch the next package (the next 5,000 rows) and dumps!


Error analysis:

One of the database selections included a database commit. The selection was then supposed to continue. Before a database commit, however, all outstanding database selections must be concluded.

Possible causes in the application program:

Within a loop (SELECT/LOOP/EXEC SQL or a FETCH statement), one of the following statements is used:

- MESSAGE (apart from MESSAGE S...)
- COMMIT WORK
- ROLLBACK WORK
- CALL SCREEN
- CALL DIALOG
- CALL TRANSACTION
- SUBMIT
- BREAK-POINT
- WAIT


Also, there is no primary key (or anything similar) in the DB table that I could use in a WHERE clause.

I know there are similar discussions on the web and in this forum. I have read nearly all of them, but none of them helped.

So please help me!

Thank you!

Here is my code (slightly shortened; the FIELD-SYMBOLS header and some closing statements were lost when pasting):

   FIELD-SYMBOLS: <gs_wa>    TYPE any,
                  <gs_table> TYPE ANY TABLE,
                  <gs_comp>  TYPE any.

   CONSTANTS: co_package_size TYPE i VALUE 5000.

   DATA: lt_write_data_to_file TYPE TABLE OF string,
         l_xout        TYPE string,
         l_xout_field  TYPE string,
         l_tabname     TYPE ddobjname,
         l_output_path TYPE string,
         l_dref        TYPE REF TO data,
         l_number_rows TYPE i,
         l_db_cursor   TYPE cursor.

   CLASS cl_abap_char_utilities DEFINITION LOAD.
   CONSTANTS c_hor_tab TYPE c VALUE cl_abap_char_utilities=>horizontal_tab.

   CREATE DATA l_dref TYPE TABLE OF (l_tabname).
   ASSIGN l_dref->* TO <gs_table>.

   SELECT COUNT(*) FROM (l_tabname) INTO l_number_rows.

   IF l_number_rows >= 1.

     OPEN CURSOR l_db_cursor FOR SELECT * FROM (l_tabname).

     DO.
       " <== Here comes the dump, after the first 5000 rows were written to the file
       FETCH NEXT CURSOR l_db_cursor INTO TABLE <gs_table> PACKAGE SIZE co_package_size.
       IF sy-subrc <> 0.
         CLOSE CURSOR l_db_cursor.
         EXIT.
       ENDIF.

       LOOP AT <gs_table> ASSIGNING <gs_wa>.
         CLEAR l_xout.
         DO.
           ASSIGN COMPONENT sy-index OF STRUCTURE <gs_wa> TO <gs_comp>.
           IF sy-subrc <> 0.
             EXIT.
           ENDIF.
           IF sy-index = 1.
             l_xout = <gs_comp>.
           ELSE.
             l_xout_field = <gs_comp>.
             CONCATENATE l_xout l_xout_field INTO l_xout SEPARATED BY c_hor_tab.
           ENDIF.
         ENDDO.
         APPEND l_xout TO lt_write_data_to_file.
       ENDLOOP.

       " <== Here I already tried a lot of download functions and also
       "     implemented an own function to download
       CALL FUNCTION 'GUI_DOWNLOAD'
         EXPORTING
           filename              = l_output_path
           filetype              = 'ASC'
           write_field_separator = 'X'
           append                = 'X'
         TABLES
           data_tab              = lt_write_data_to_file.

       CLEAR lt_write_data_to_file.
     ENDDO.

   ENDIF.



Active Contributor

GUI_DOWNLOAD and similar functions for local download interrupt the database cursor of package SELECTs, as you have found out the hard way. I don't see an easy workaround at the moment, short of trying something very different, like downloading to the application server and then transferring the files via FTP, or so...
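To illustrate why the application-server route avoids the dump: GUI_DOWNLOAD performs an RFC round trip to the frontend, which triggers an implicit database commit and invalidates the open cursor, whereas TRANSFER to an application-server file does not. A minimal sketch of that pattern (table SFLIGHT, the path, and the one-field line building are placeholder assumptions, not your actual program):

```
* Sketch: write each fetched package to an app-server file.
* TRANSFER causes no database commit, so the cursor stays valid.
DATA: lt_package TYPE TABLE OF sflight,
      ls_row     TYPE sflight,
      l_line     TYPE string,
      l_cursor   TYPE cursor,
      l_file     TYPE string VALUE '/tmp/export.txt'.  " example path

OPEN DATASET l_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
OPEN CURSOR l_cursor FOR SELECT * FROM sflight.
DO.
  FETCH NEXT CURSOR l_cursor INTO TABLE lt_package
        PACKAGE SIZE 5000.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  LOOP AT lt_package INTO ls_row.
    " build a tab-separated string from the row here, as in your
    " program; simplified to a single field for this sketch
    l_line = ls_row-carrid.
    TRANSFER l_line TO l_file.
  ENDLOOP.
ENDDO.
CLOSE CURSOR l_cursor.
CLOSE DATASET l_file.
```

The file can then be moved to the PC afterwards (CG3Y, FTP, or similar) in a separate step, outside any open cursor.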

"An internal table can storage round about 500MB"

Standard system settings (as I know them) allow up to 4 GB of memory usage before your process aborts with memory-related short dumps. Please double-check the limit with your system administrator. Maybe the parameters can be increased temporarily until your task is done.

"some of them are up to 8GB or 250.000 rows big"

That would be around 32 KB per row; are you sure about the 8 GB size?

Maybe you can try reading it all into an internal table at once and doing the GUI_DOWNLOAD in packages using the APPEND parameter (that's how I did something similar a while ago).
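A rough sketch of that suggestion, assuming the data actually fits into memory and has already been flattened into one tab-separated string per row (the path and chunk size are example values):

```
* Download an in-memory table to the PC in slices, so the cursor
* problem never arises: all DB reads finish before any download.
DATA: lt_all   TYPE TABLE OF string,  " assumed to be filled already
      lt_chunk TYPE TABLE OF string,
      l_from   TYPE i VALUE 1,
      l_to     TYPE i.

WHILE l_from <= lines( lt_all ).
  l_to = l_from + 4999.
  CLEAR lt_chunk.
  APPEND LINES OF lt_all FROM l_from TO l_to TO lt_chunk.
  CALL FUNCTION 'GUI_DOWNLOAD'
    EXPORTING
      filename = 'C:\temp\export.txt'  " example path
      filetype = 'ASC'
      append   = 'X'                   " appends to the same file
    TABLES
      data_tab = lt_chunk.
  l_from = l_from + 5000.
ENDWHILE.
```

Each GUI_DOWNLOAD call then only moves a small block across the RFC connection, and the APPEND flag stitches the blocks into one file.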



Thank you Thomas!

I've tried downloading the data to the application server and then fetching it via transaction CG3Y. That works, but I don't think it is a good idea to clutter the application server with such big files. The next problem is that the same DB table could be up to 20 GB next time.

Additionally, the 8 GB table is only one of them. It has around 250,000 rows, and there are similar tables with up to 100,000 rows which I have to download at the same time. So this would need a lot of memory on the application server.

And yes, I am sure about the 8 GB size. I can't read all the data into one internal table because of the memory limit.

I already use the APPEND parameter of GUI_DOWNLOAD.

Any other suggestions?


You said there is no primary key. I find that hard to believe, as DB tables as I know them must have one. Please clarify. Using the primary key, you could try to come up with a logic like this:


  SELECT ... UP TO x ROWS WHERE primary_key > saved_value ORDER BY PRIMARY KEY

  EXIT if sy-subrc <> 0


  save last primary key value(s)
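The pseudocode above could look roughly like this in ABAP. This is only a sketch using SPFLI's key fields CARRID/CONNID as stand-ins for "saved_value"; with a composite key, the WHERE condition has to compare the key fields lexicographically, as shown:

```
* Key-based paging: each SELECT resumes after the last key read,
* so no cursor stays open across the download call.
DATA: lt_chunk TYPE TABLE OF spfli,
      ls_last  TYPE spfli,
      l_lines  TYPE i.

DO.
  SELECT * FROM spfli
    INTO TABLE lt_chunk
    UP TO 5000 ROWS
    WHERE carrid > ls_last-carrid
       OR ( carrid = ls_last-carrid AND connid > ls_last-connid )
    ORDER BY PRIMARY KEY.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.

  " ... download/process lt_chunk here (e.g. GUI_DOWNLOAD APPEND) ...

  " remember the key of the last row read for the next round
  l_lines = lines( lt_chunk ).
  READ TABLE lt_chunk INDEX l_lines INTO ls_last.
ENDDO.
```

Because every package is a fresh SELECT, the implicit commit caused by GUI_DOWNLOAD between packages can no longer invalidate anything.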