Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.
Data extract from huge table

Former Member

Hi,

I am extracting data from a table with more than 25 million records, without any WHERE condition. The data is written out in multiple files of 45,000 records each.

What is the best way to write the SELECT statement?

Thanks,

fract

1 ACCEPTED SOLUTION

Former Member

Hi,

You should analyse your system and your requirements to find the best approach.

Also, run a runtime analysis (SE30) and an SQL trace (ST05) on your program, and check which index the SELECT statement is using.

You can try the PACKAGE SIZE addition on the SELECT query.

Also, try to analyse the impact of the changes you have made by comparing against the previous results.

Search SDN for performance tuning; there are many good posts available on this.

Hope this helps you.

Edited by: Harsh Bhalla on Jan 21, 2010 11:35 AM

7 REPLIES

former_member150733
Contributor

SELECT * FROM <Table>
  INTO TABLE it_tab
  UP TO 45000 ROWS
  .....


Former Member

Hi,

You can also use the PACKAGE SIZE addition on the SELECT query (please see the F1 help for this syntax). You specify the package size, and the system fetches the records in batches of that size.

regards,

amit m.


When writing into files, up to 1 million records is OK. Beyond 1 million, it takes more and more time....

It writes the first file with records 1 to 45,000. After writing the first file, for the second file it comes back to the SELECT statement, reads 90,000 records, and skips the first 45,000 records. It does this for every file....

second file: 45,001 to 90,000

third file: 90,001 to 135,000

....

SELECT * FROM ....
  IF sy-dbcnt > last_count_records.
*   write into internal table.....
  ENDIF.
ENDSELECT.

Is there any other way to improve performance?


Don't know why you're extracting 25,000,000 records, but PACKAGE SIZE would be my first choice....something like:

types: gtyp_int type i.

data: itab type table of <dbtab>.

parameters: p_pkg type i default 45000.

start-of-selection.
  perform myform.

form myform.
  data: file_cnt type gtyp_int.

  select * from <dbtab> into table itab
         package size p_pkg.

    file_cnt = file_cnt + 1.

*   do something with itab contents
    case file_cnt.
      when 1.
        append lines of itab to .... "or do a table copy
      when 2.
        ....
    endcase.

  endselect.
endform.


Which of the following two is the better method? I have never used OPEN CURSOR and don't know how it works. I heard that with OPEN CURSOR the database is read only once and the results are cached on the application server. Is that right?

OPEN CURSOR ......

(OR)

SELECT * FROM <dbtab> INTO TABLE <itab>
  PACKAGE SIZE p_pkg.


Rather than saying what you "heard", why don't you just check the documentation?

Rob
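For reference, the documented OPEN CURSOR / FETCH NEXT CURSOR pattern looks roughly like the sketch below. The `<dbtab>` placeholder and the 45,000-row package size come from this thread; the variable names are illustrative only.

```abap
DATA: itab TYPE TABLE OF <dbtab>,
      cur  TYPE cursor.

* The cursor keeps one database session open, so the table is
* read in a single pass instead of being re-read for every file.
* WITH HOLD keeps the cursor open across a database commit,
* which matters if each file write triggers a COMMIT WORK.
OPEN CURSOR WITH HOLD cur FOR
  SELECT * FROM <dbtab>.

DO.
*   Fetch the next 45,000 rows into the internal table.
    FETCH NEXT CURSOR cur
          INTO TABLE itab
          PACKAGE SIZE 45000.
    IF sy-subrc <> 0.
      EXIT.  "no more data
    ENDIF.
*   ...write itab to the next file here...
ENDDO.

CLOSE CURSOR cur.
```

Functionally this is close to SELECT ... PACKAGE SIZE ... ENDSELECT: both read the table in one pass in fixed-size batches. The cursor variant gives you explicit control over when each batch is fetched, which can be useful when the file-writing logic lives in separate subroutines.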