Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.
Memory leak using (nested) internal table with reference to data

matt
Active Contributor

I have an internal table GT_BUFFER with the following structure:

pernr           TYPE pernr_d,
infty           TYPE infty,
r_infty_table   TYPE REF TO data

It's a HASHED table with unique key pernr infty. I use this code to add records to the buffer (if the record isn't already there):
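For context, the full declarations behind this would look something like the following (the type name ty_buffer is assumed; only the fields and key are from the post):

```abap
TYPES: BEGIN OF ty_buffer,
         pernr         TYPE pernr_d,     " personnel number
         infty         TYPE infty,       " infotype
         r_infty_table TYPE REF TO data, " points to a prelp_tab at runtime
       END OF ty_buffer.

DATA: gt_buffer TYPE HASHED TABLE OF ty_buffer
                WITH UNIQUE KEY pernr infty,
      ls_buffer TYPE ty_buffer.
```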

FIELD-SYMBOLS: <t_prelp> TYPE prelp_tab.
...
    CREATE DATA ls_buffer-r_infty_table TYPE prelp_tab.
    ls_buffer-pernr = i_pernr. " The current pernr
    ls_buffer-infty = i_infty. " The current infotype
    ASSIGN ls_buffer-r_infty_table->* TO <t_prelp>.
    INSERT ls_buffer INTO TABLE gt_buffer.
...
    <t_prelp> = it_prelp. " My supplied internal table with data of type prelp_tab.

The problem is that, according to the memory analyser, each time I insert a record into the buffer, an additional 2 MB of memory is allocated to GT_BUFFER, even though the entry typically has only 1 record in its nested table. (Used memory is about 4 KB!)

The OS is Linux. I'm wondering if this is a bug in the kernel, or if there is some other explanation. The programming looks sound to me.

matt

1 ACCEPTED SOLUTION
deepak_dhamat
Active Contributor

Hi Matt,

Internal tables, just like database tables, are organized in blocks or pages. Only when entries are written to the table does the system create a table header and a table body.

Memory is allocated page-wise: the size of an allocation is generally 8 KB, and it happens when records are actually inserted. In this way, memory is allocated to the internal table dynamically.

I am sure you are already aware of this.
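A rough sketch of the behaviour described (the 8 KB figure is approximate, as above):

```abap
DATA: lt_rows TYPE STANDARD TABLE OF prelp,
      ls_row  TYPE prelp.

" Declaration alone: no memory for the table body is allocated yet.
APPEND ls_row TO lt_rows.
" First insert: the kernel creates the table header and body and
" allocates an initial block (on the order of 8 KB); further blocks
" are allocated as the table grows.
```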

Regards

Deepak.

6 REPLIES 6
matt
Active Contributor

No one uses internal tables with headers any more, so the 8 KB doesn't matter. But you're not far off the truth. Yes, the problem lies with the allocation of memory in the nested table.

This is where the ABAP you didn't see becomes relevant... it turns out it's vital. The key lies in how <t_prelp> gets populated. There is another internal table lt_prelp (same structure), which contains ~700 records; it has 2 MB allocated and used. I copy lt_prelp to <t_prelp>, and then use a DELETE <t_prelp> WHERE pernr <> i_pernr. This gets rid of all records in <t_prelp> except for one.

But...

<t_prelp> now has only 8 KB used, but retains the 2 MB allocated. When that gets put into GT_BUFFER, that 2 MB is irretrievably allocated. So after 600 records I'm at 1.2 GB and run out of memory.
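The pattern, reconstructed from the description above (variable names as in the post; a sketch, not the actual production code):

```abap
" lt_prelp holds ~700 rows across all personnel numbers: ~2 MB allocated.
<t_prelp> = lt_prelp.                     " full copy: ~2 MB allocated here too

DELETE <t_prelp> WHERE pernr <> i_pernr.  " typically one row survives
" <t_prelp> now uses ~8 KB but keeps the full 2 MB allocation, and the
" reference stored in GT_BUFFER pins that allocation there for good.
```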

Some kind of CONDENSE itab would be nice, to force a deallocation of unused memory. I've implemented a different workaround.

matt

Edited by: Matt on Sep 23, 2011 2:28 PM


Matt,

When we copy the data from one internal table to another using [] or direct assignment, the system doesn't copy it right away. It just keeps a pointer and tracks what data is changing; at the end of processing it determines what to adjust. Thus, the copy allocates the same memory as the source table has.
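This deferred-copy behaviour (table sharing) can be pictured like this (a sketch of the semantics; the WHERE condition is just an example):

```abap
DATA: lt_source TYPE STANDARD TABLE OF prelp,
      lt_target TYPE STANDARD TABLE OF prelp.

lt_target = lt_source.   " no physical copy yet: both variables share
                         " the same table body internally

DELETE lt_target WHERE pernr IS INITIAL.
" First write access to lt_target: the sharing is broken and a real,
" source-sized copy is made, however few rows survive afterwards.
```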

> The key lies in how <t_prelp> gets populated. There is another internal table lt_prelp (same structure), which contains ~700 records; it has 2 MB allocated and used. I copy lt_prelp to <t_prelp>, and then use a DELETE <t_prelp> WHERE pernr <> i_pernr. This gets rid of all records in <t_prelp> except for one.

I had a similar problem in the past due to an internal table copy, though it was not a memory leak - it was performance. We had about 25K lines in an itab, and it was copying all of them into a separate table in a LOOP; eventually about 65% of the runtime was spent in the APPEND statement alone. You can read more at [Performance of ITAB Copy|http://help-abap.zevolving.com/2011/06/performance-of-itab-copy/]

Regards,

Naimesh Patel

SuhaSaha
Product and Topic Expert

Hello Matt,

> I've implemented a different workaround.

Maybe you could share the details of the workaround?

BR,

Suhas


Hi Matt ,

Deleting data alone will not deallocate the memory; you need to FREE the internal table. But as it still contains a single record after the delete, the memory will remain as it is.
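The distinction in code (a minimal sketch):

```abap
DELETE lt_prelp WHERE pernr <> i_pernr.  " rows gone, allocation kept

FREE lt_prelp.  " releases the table body's memory, but also removes
                " ALL remaining rows, so it doesn't help when one
                " record has to stay in the table
```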

regards

Deepak.

matt
Active Contributor

> Hello Matt,
>
> > I've implemented a different workaround.
>
> Maybe you could share the details of the workaround?
>
> BR,
> Suhas

Oh alright... I just thought the workaround was kind of obvious!

Instead of copying the itab and deleting the records I don't want in the copy, I insert into the copy only the records I want. Simples.
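That is, something along these lines (a sketch; variable names assumed from the thread):

```abap
FIELD-SYMBOLS <ls_prelp> TYPE prelp.

" Build the target with only the wanted rows, instead of copying
" everything and deleting the rest:
LOOP AT lt_prelp ASSIGNING <ls_prelp> WHERE pernr = i_pernr.
  INSERT <ls_prelp> INTO TABLE <t_prelp>.
ENDLOOP.
" <t_prelp> only ever allocates what its few rows actually need.
```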

@Deepak - FREE, CLEAR and REFRESH all allow the memory to be deallocated - at least in later ABAP versions.

matt