Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

INSERT dbtab FROM TABLE itab

Former Member

Hi,

Does anybody know if there is an upper limit on the number of records in an internal table when updating a database table (e.g. an internal table containing 1,000,000 records)?

We are using DB2. Is there an upper limit? Would it be better to split the internal table and update fewer records each time? What about performance?

Thanks in advance.

Kind regards,

Keld Gregersen

1 ACCEPTED SOLUTION

retired_member
Product and Topic Expert

Hi,

splitting the table should make no difference. In fact, it would cost some performance, because you would have several database accesses instead of one.

In Open SQL, data to be read or written is transported in packages between the database server and the application server anyway. You can configure the package size using profile parameters (for example, the default value for Oracle is 65 KB).
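For completeness, the mass insert itself is a single Open SQL statement. A minimal sketch, assuming a database table ZTAB (the table name and line type are hypothetical):

```abap
* Array insert: one Open SQL statement transfers all rows; the
* kernel splits the transport into packages automatically.
DATA: itab TYPE STANDARD TABLE OF ztab.

* ... fill itab with the records to be inserted ...

INSERT ztab FROM TABLE itab ACCEPTING DUPLICATE KEYS.
IF sy-subrc <> 0.
* sy-subrc = 4: at least one row was rejected because its key
* already existed. Without ACCEPTING DUPLICATE KEYS, a duplicate
* key would terminate the program with a runtime error instead.
ENDIF.
```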

6 REPLIES


Hi Mr. Keller,

Thanks for your reply. Using one INSERT statement will reduce the round trips between the application server and the database server to a minimum, but what I was trying to ask is: can the database system (e.g. DB2) handle inserting/updating 1,000,000 records in one DB LUW? Is there a database rollback area that might fill up, or grow so large that it would hurt performance?

Kind regards,

Keld Gregersen


Hi,

Yes, there are such limitations: for example, the size of the rollback segments or the number of locks a database can manage.

Those limitations are database- and configuration-specific.

To be on the safe side, you must find out the limitations of your database, split your data into packages that are well below the maximum size, and insert those into the database table.
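Such packaged inserts could be sketched as follows (the table name ZTAB is hypothetical, and the package size must be tuned to your database). Note the trade-off: the COMMIT WORK after each package ends the DB LUW, so the all-or-nothing semantics of a single insert are lost and you need restart logic if a later package fails:

```abap
CONSTANTS c_package TYPE i VALUE 50000.   " tune to your DB limits

DATA: itab  TYPE STANDARD TABLE OF ztab,  " the full data set
      chunk TYPE STANDARD TABLE OF ztab,  " one package
      idx   TYPE i VALUE 1.

WHILE idx <= lines( itab ).
* Copy the next package; if TO points past the end of itab,
* APPEND LINES OF simply stops at the last line.
  CLEAR chunk.
  APPEND LINES OF itab FROM idx TO idx + c_package - 1 TO chunk.

  INSERT ztab FROM TABLE chunk.

* COMMIT WORK ends the current DB LUW, releasing locks and
* rollback segment space before the next package is inserted.
  COMMIT WORK.

  idx = idx + c_package.
ENDWHILE.
```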

Regards

Horst Keller


Hi,

It depends on the redo log file size in Oracle.

Regards

Venkat

Former Member

Hi Keld,

There is an upper limit to the number of records, but it is not easy to determine. That limit is really the 2 GB memory limit that each session can hold at most.

So you need to know how much memory your program occupies, the size of your internal table record, and so on. A good rule of thumb is to assume that the program, overhead and the like take about half of the available memory, leaving roughly 1 GB for your internal table.

Say your internal table record is 512 bytes wide (standard table without index): you could then hold about 2 million records on a non-Unicode system and about 1 million records on a Unicode system.

Of course there are many systems whose data exceeds such a limit, so SAP gives us a way to do our trick with internal tables, but limited to a certain number of records per pass.

The ABAP code would look something like this:

DATA: my_table  TYPE db_table OCCURS 0,
      my_record TYPE db_table.

SELECT * INTO TABLE my_table
         FROM db_table
         PACKAGE SIZE 10000.

* Process the result table
  LOOP AT my_table INTO my_record.
*   Do our trick here
    PERFORM our_trick USING my_record.
    MODIFY my_table FROM my_record.
  ENDLOOP.

* Of course we can also pass the entire table
  PERFORM our_tricks TABLES my_table.

ENDSELECT.

Using PACKAGE SIZE n fills the internal table with n records on each pass of the SELECT loop (10,000 in the example).

The SELECT loop runs once per n records, refilling the internal table each time, and you can do your work with that table inside the loop.

Keep in mind that the same restrictions apply as with normal SELECT loops (for example, no COMMIT WORK inside the loop, since it would close the database cursor).

Through testing you will need to find the PACKAGE SIZE value that gives the best performance. In my experience, values between 100 and 10,000 give the best results.

Hope this gives a clue for your work.

Regards,

Rob.


Hi Rob,

Thanks for your answer, but my problem is not the size of the internal table; it is whether the database system can handle an insert/update of 1,000,000 records in one DB LUW. See my response to Mr. Keller.

Kind regards,

Keld Gregersen