Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Dump Created while fetching data TSV_TNEW_PAGE_ALLOC_FAILED

AadivaramAbbulu
Explorer

Hi,

I am fetching data with the code below. While fetching from CDPOS, a short dump is created. From my analysis, what I understand is that it is a memory issue: CDHDR has nearly 27 lakh (2.7 million) matching records, and because of this the dump is created.

My question is: without using HANA, is there any possible way to solve the issue in the program itself? The logic I am thinking of:

1) First, count the total number of records in CDHDR.
2) Then fetch CDHDR in batches of up to 1 lakh (100,000) records; for each batch, fetch the matching data from CDPOS and complete the remaining processing; then process the next batch of CDHDR records, and so on until all records are done.

But I would like to know how to implement this.

    SELECT objectclas, objectid, changenr, username, udate, utime, tcode
      FROM cdhdr
      INTO TABLE @DATA(lt_cdhdr)
      WHERE objectclas IN @lr_objectclas
        AND username   IN @s_user
        AND udate      IN @lr_daterange.

    IF lt_cdhdr IS NOT INITIAL.
      SELECT objectclas, objectid, changenr, tabname, tabkey, fname, chngind, value_new, value_old
        FROM cdpos
        INTO TABLE @DATA(lt_cdpos)
        FOR ALL ENTRIES IN @lt_cdhdr
        WHERE objectclas = @lt_cdhdr-objectclas
          AND objectid   = @lt_cdhdr-objectid
          AND changenr   = @lt_cdhdr-changenr.
    ENDIF.

 

12 REPLIES

Sandra_Rossi
Active Contributor

Please use the buttons "..." and "</>" to display your code nicely. See how to do it here: https://community.sap.com/t5/questions-about-sap-websites/how-to-post-code-in-sap-community-gt-2024/...


Thank you, done.

DominikTylczyn
Active Contributor

Hello @AadivaramAbbulu 

You can split your processing into chunks using OPEN CURSOR with the PACKAGE SIZE n addition and FETCH statements. Use a joined SELECT on CDHDR and CDPOS to avoid FOR ALL ENTRIES, which can also cause issues when the resulting OR condition is too large.
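A minimal sketch of that approach, reusing the selection ranges from the original post (lr_objectclas, s_user, lr_daterange); the cursor variable and the 5000-row package size are illustrative assumptions:

```abap
DATA lv_cursor TYPE cursor.

" Joined cursor over CDHDR/CDPOS, so the database does the matching
" and FOR ALL ENTRIES is not needed
OPEN CURSOR WITH HOLD @lv_cursor FOR
  SELECT h~objectclas, h~objectid, h~changenr, h~username, h~udate, h~utime, h~tcode,
         p~tabname, p~tabkey, p~fname, p~chngind, p~value_new, p~value_old
    FROM cdhdr AS h
    INNER JOIN cdpos AS p
      ON  p~objectclas = h~objectclas
      AND p~objectid   = h~objectid
      AND p~changenr   = h~changenr
    WHERE h~objectclas IN @lr_objectclas
      AND h~username   IN @s_user
      AND h~udate      IN @lr_daterange.

DO.
  " Read one package at a time instead of the whole result set
  FETCH NEXT CURSOR @lv_cursor INTO TABLE @DATA(lt_package) PACKAGE SIZE 5000.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  " Process lt_package here; the next FETCH replaces its contents,
  " so memory use stays bounded by the package size.
ENDDO.

CLOSE CURSOR @lv_cursor.
```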

Best regards

Dominik Tylczynski


Thank you, I implemented the logic. Now there is no dump while fetching the records, i.e. while executing the query. But there is a final internal table, rt_output, and there I get the dump "No more memory available to add rows to an internal table" (TSV_TNEW_PAGE_ALLOC_FAILED). I think the final internal table is not able to hold this huge amount of data. Any advice?

    " Using a cursor to fetch the data in packages
    OPEN CURSOR WITH HOLD @lv_cursor FOR
      SELECT objectclas, objectid, changenr, username, udate, utime, tcode
        FROM cdhdr
        WHERE objectclas IN @lr_objectclas
          AND username   IN @s_user
          AND udate      IN @lr_daterange.

    DO.
      " Fetch the next package
      FETCH NEXT CURSOR @lv_cursor
            INTO CORRESPONDING FIELDS OF TABLE @lt_temp_cdhdr
            PACKAGE SIZE @lv_package_size.

      " Exit the loop if no more data
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.

      SORT lt_temp_cdhdr BY objectclas objectid changenr.

      " Append the fetched records to the work table
      CLEAR lt_dcdhdr.
      APPEND LINES OF lt_temp_cdhdr TO lt_dcdhdr.

      IF lt_dcdhdr IS NOT INITIAL.
        SELECT objectclas, objectid, changenr, tabname, tabkey, fname, chngind, value_new, value_old
          FROM cdpos
          INTO TABLE @DATA(lt_dcdpos)
          FOR ALL ENTRIES IN @lt_dcdhdr
          WHERE objectclas = @lt_dcdhdr-objectclas
            AND objectid   = @lt_dcdhdr-objectid
            AND changenr   = @lt_dcdhdr-changenr.

        IF lt_dcdpos IS NOT INITIAL.
          SELECT bname, name_first, name_last
            FROM user_addr
            INTO TABLE @DATA(lt_duser_addr)
            FOR ALL ENTRIES IN @lt_dcdhdr
            WHERE bname = @lt_dcdhdr-username.

          IF lt_duser_addr IS NOT INITIAL.
            SELECT bname, persnumber, addrnumber
              FROM usr21
              INTO TABLE @DATA(lt_dusr21)
              FOR ALL ENTRIES IN @lt_duser_addr
              WHERE bname = @lt_duser_addr-bname.

            IF lt_dusr21 IS NOT INITIAL.
              SELECT addrnumber, persnumber, date_from, consnumber, smtp_addr, smtp_srch
                FROM adr6
                INTO TABLE @DATA(lt_dadr6)
                FOR ALL ENTRIES IN @lt_dusr21
                WHERE persnumber = @lt_dusr21-persnumber.
            ENDIF.
          ENDIF.
        ENDIF.

        SORT lt_dcdpos BY changenr.
        SORT lt_duser_addr BY bname.
        SORT lt_dusr21 BY bname.
        SORT lt_dadr6 BY persnumber.

        LOOP AT lt_dcdhdr INTO DATA(ls_dcdhdr).
          READ TABLE lt_dcdpos INTO DATA(ls_dcdpos)
               WITH KEY changenr = ls_dcdhdr-changenr BINARY SEARCH.
          READ TABLE lt_duser_addr INTO DATA(ls_duser_addr)
               WITH KEY bname = ls_dcdhdr-username BINARY SEARCH.
          READ TABLE lt_dusr21 INTO DATA(ls_dusr21)
               WITH KEY bname = ls_duser_addr-bname BINARY SEARCH.
          READ TABLE lt_dadr6 INTO DATA(ls_dadr6)
               WITH KEY persnumber = ls_dusr21-persnumber BINARY SEARCH.

          APPEND VALUE #( uname    = ls_dcdhdr-username
                          udate    = ls_dcdhdr-udate
                          utime    = ls_dcdhdr-utime
                          tcode    = ls_dcdhdr-tcode
                          objvale  = ls_dcdhdr-objectid
                          tabname  = ls_dcdpos-tabname
                          fname    = ls_dcdpos-fname
                          valueold = ls_dcdpos-value_old
                          valuenew = ls_dcdpos-value_new
                          fstname  = ls_duser_addr-name_first
                          lstname  = ls_duser_addr-name_last
                          email    = ls_dadr6-smtp_addr ) TO rt_output.

          CLEAR: ls_dcdhdr, ls_dcdpos, ls_duser_addr, ls_dusr21, ls_dadr6.
        ENDLOOP.
      ENDIF.
    ENDDO.

    CLOSE CURSOR @lv_cursor.


You have changed your code to no advantage, because you consume exactly the same amount of memory (RT_OUTPUT). The goal of loading by package is to process each package, free the memory, then process the next package, and so on. So, change your code accordingly.

Note that the "death trio" STANDARD TABLE, SORT and BINARY SEARCH has been obsolete for 20 years. Only use SORTED TABLE.
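As an illustration of that advice, one of the lookup tables could be declared as a SORTED table so that READ TABLE uses the table key implicitly, with no SORT or BINARY SEARCH; the row type below is an assumption based on the fields selected from CDPOS:

```abap
" Assumed row type for the CDPOS lookup data
TYPES: BEGIN OF ty_cdpos,
         objectclas TYPE cdobjectcl,
         objectid   TYPE cdobjectv,
         changenr   TYPE cdchangenr,
         tabname    TYPE tabname,
         fname      TYPE fieldname,
         value_new  TYPE cdfldvaln,
         value_old  TYPE cdfldvalo,
       END OF ty_cdpos.

" SORTED table: kept in key order automatically, keyed reads are binary
DATA lt_dcdpos TYPE SORTED TABLE OF ty_cdpos
     WITH NON-UNIQUE KEY objectclas objectid changenr.

" Lookup without a prior SORT or BINARY SEARCH addition
READ TABLE lt_dcdpos INTO DATA(ls_dcdpos)
     WITH TABLE KEY objectclas = ls_dcdhdr-objectclas
                    objectid   = ls_dcdhdr-objectid
                    changenr   = ls_dcdhdr-changenr.
```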


I made some changes, but no use, still the same issue.

    DO.
      FREE lt_temp_cdhdr.
    ENDDO.

  


You don't understand. The issue is about RT_OUTPUT, which contains gigabytes of data. Instead, process by package: fill RT_OUTPUT with only a few megabytes of data, process the data, then load the next package into RT_OUTPUT, process the data, and so on.


Hi Sandra, I have not understood.

RT_OUTPUT is the final internal table; I need to fill it before displaying the data in the report output. I am already adding data to RT_OUTPUT package by package.

For example:

First package: RT_OUTPUT has 100 records loaded.

Next package: RT_OUTPUT has 100 + 120 records, and so on for many records.

If I fill RT_OUTPUT with just one package, how do I show the output? I need to process the remaining packages until all packages are done, right? But meanwhile RT_OUTPUT is filling the memory. I have not understood this part clearly:

"fill RT_OUTPUT only with few megabytes of data, process the data"


Let's say your program prints a report.

In your example, LV_PACKAGE_SIZE is 100.

You need to refactor your program, to print 100 lines to the report at a time:

  • DO FETCH NEXT to read next package, and process the package:
    • SELECT other data (not good, don't nest the SELECT, use JOIN as far as possible as explained by Dominik)
    • Print all lines of the package (or any other processing)

RT_OUTPUT will never have more than 100 lines.
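A sketch of that refactoring, under the assumption that the program prints a classic list; lv_cursor, lt_temp_cdhdr and lv_package_size are the variables from the earlier snippet, and the WRITE fields are placeholders:

```abap
DO.
  FETCH NEXT CURSOR @lv_cursor
        INTO CORRESPONDING FIELDS OF TABLE @lt_temp_cdhdr
        PACKAGE SIZE @lv_package_size.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.

  " Start each package with an empty output table
  CLEAR rt_output.
  " ... enrich lt_temp_cdhdr into rt_output as before ...

  " Print this package immediately, then discard it
  LOOP AT rt_output INTO DATA(ls_output).
    WRITE: / ls_output-uname, ls_output-udate, ls_output-tabname.
  ENDLOOP.
ENDDO.
```

Because rt_output is cleared at the top of every iteration, it never holds more than one package of lines.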


Agree with the term "death trio" : lol

Suggest looking into using standard tables with defined non-unique secondary keys instead of sorted tables. Sorted tables have relatively poor insert/delete performance in comparison. There can be multiple secondary keys defined as well.

DominikTylczyn
Active Contributor

Hello @AadivaramAbbulu 

I can see two problems with your code:

  1. Don't use FOR ALL ENTRIES. You'll be better off joining CDHDR, CDPOS, USER_ADDR, USR21 and ADR6, and letting the database do the heavy lifting.
  2. TSV_TNEW_PAGE_ALLOC_FAILED stems from the fact that you select a lot of change documents. I guess you want to report changes posted by some users, so you need to narrow down the CDHDR SELECT statement with more restrictive selection criteria, e.g. select only a few user IDs and/or a shorter date range. Otherwise the results of your report are so huge that the resulting RT_OUTPUT table overflows the available memory.
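For point 1, a hedged sketch of such a single joined SELECT, reusing the names from the earlier snippets; the LEFT OUTER JOINs keep change documents whose user has no address data, matching the non-mandatory lookups in the original code:

```abap
SELECT h~username, h~udate, h~utime, h~tcode, h~objectid,
       p~tabname, p~fname, p~value_old, p~value_new,
       a~name_first, a~name_last, e~smtp_addr
  FROM cdhdr AS h
  INNER JOIN cdpos AS p
    ON  p~objectclas = h~objectclas
    AND p~objectid   = h~objectid
    AND p~changenr   = h~changenr
  LEFT OUTER JOIN user_addr AS a ON a~bname = h~username
  LEFT OUTER JOIN usr21     AS u ON u~bname = h~username
  LEFT OUTER JOIN adr6      AS e ON  e~persnumber = u~persnumber
                                 AND e~addrnumber = u~addrnumber
  WHERE h~objectclas IN @lr_objectclas
    AND h~username   IN @s_user
    AND h~udate      IN @lr_daterange
  INTO TABLE @DATA(lt_output).
```

This could also be combined with OPEN CURSOR / PACKAGE SIZE as above, so the joined result is still consumed in bounded chunks.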

Best regards

Dominik Tylczynski