Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

ABAP - Processing 40 million records to an outbound file

former_member327092
Participant

Dear Folks,

I would like to pick your brains on best practices and performance tips for processing around 40 million records into an outbound file. We will be running the report based on posting periods against BKPF and BSEG entries. BKPF would have around 8 million records for a posting period, and BSEG could have around 40 million records for those 8 million headers fetched from BKPF. We will need to consolidate the file based on document type; the second level of consolidation will be on the G/L account and profit center combination.

With this huge amount of data, the possibility of memory issues is high. The option we are leaning towards is OPEN CURSOR / FETCH with a package size of maybe 100,000.
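
A minimal sketch of what I have in mind (company code, fiscal year and posting period are hard-coded only for illustration, and the inner SELECT stands for whatever per-package work is needed):

DATA: lv_cursor TYPE cursor,
      lt_bkpf   TYPE STANDARD TABLE OF bkpf,
      lt_bseg   TYPE STANDARD TABLE OF bseg.

* Keep the cursor open across implicit database commits.
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM bkpf
    WHERE bukrs = '1000'          " assumption: single company code
      AND gjahr = '2023'          " assumption: fiscal year
      AND monat = '06'.           " assumption: posting period

DO.
  FETCH NEXT CURSOR lv_cursor INTO TABLE lt_bkpf PACKAGE SIZE 100000.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.

* Line items only for the documents of this package, via the full BSEG key.
  SELECT * FROM bseg INTO TABLE lt_bseg
    FOR ALL ENTRIES IN lt_bkpf
    WHERE bukrs = lt_bkpf-bukrs
      AND belnr = lt_bkpf-belnr
      AND gjahr = lt_bkpf-gjahr.

* ... consolidate this package and append it to the file here ...

  CLEAR: lt_bkpf, lt_bseg.        " release memory before the next package
ENDDO.

CLOSE CURSOR lv_cursor.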

Please provide your suggestions.

Thanks,

Raj

6 REPLIES

Sandra_Rossi
Active Contributor

It sounds like a correct approach to work on small units; their size will depend on the memory requirements of each package (which additional SELECTs you'll run for every package, etc.) and on the total memory available.
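
For example, if every finished package is appended straight to the application server file, only one package ever has to sit in memory (the path and the string table lt_output are just placeholders):

DATA: lv_file   TYPE string VALUE '/tmp/fi_extract.dat',   " placeholder path
      lt_output TYPE string_table,    " output lines of the current package
      lv_line   TYPE string.

OPEN DATASET lv_file FOR APPENDING IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc = 0.
  LOOP AT lt_output INTO lv_line.
    TRANSFER lv_line TO lv_file.
  ENDLOOP.
  CLOSE DATASET lv_file.
ENDIF.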

kiran_k8
Active Contributor

Raj,

OPEN CURSOR / FETCH / CLOSE CURSOR with proper primary key or index usage will yield positive results. Explore the primary keys and the existing indexes to get an idea of how far they can help you in terms of performance; if required, as a last option, you can consider creating a new index to meet your requirement.

K.Kiran.

Jelena
Active Contributor

What Sandra said, plus: what do you expect to do with such a file? Forget the SAP program; even opening such a large file or sending it anywhere would be problematic. What exactly is the requirement? Why do you have to read so many records, and why write them into a file?

former_member327092
Participant

Finance is a shadow system in our environment, so we need to feed the SAP data for all SD and MM transactions in that posting period to the other system. The file is going to be segregated by FI document type, so ideally there would be only around 30 header records, and we are consolidating the line items based on G/L account and profit center. The file will not have that many records, since we are consolidating all the FI documents as above. It is the manipulation of the data at run time in the background and the filling of the file that I am worried about.
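
Per package, the consolidation I picture looks roughly like this (BLART / HKONT / PRCTR as the key and DMBTR as the amount are my assumption of what the target system needs; lt_bkpf and lt_bseg are the package tables):

TYPES: BEGIN OF ty_consol,
         blart TYPE bkpf-blart,   " FI document type
         hkont TYPE bseg-hkont,   " G/L account
         prctr TYPE bseg-prctr,   " profit center
         dmbtr TYPE bseg-dmbtr,   " amount, summed up by COLLECT
       END OF ty_consol.

DATA: lt_consol TYPE STANDARD TABLE OF ty_consol,
      ls_consol TYPE ty_consol.

FIELD-SYMBOLS: <ls_bkpf> TYPE bkpf,
               <ls_bseg> TYPE bseg.

SORT lt_bkpf BY bukrs belnr gjahr.

LOOP AT lt_bseg ASSIGNING <ls_bseg>.
  READ TABLE lt_bkpf ASSIGNING <ls_bkpf>
       WITH KEY bukrs = <ls_bseg>-bukrs
                belnr = <ls_bseg>-belnr
                gjahr = <ls_bseg>-gjahr
       BINARY SEARCH.
  CHECK sy-subrc = 0.

  ls_consol-blart = <ls_bkpf>-blart.
  ls_consol-hkont = <ls_bseg>-hkont.
  ls_consol-prctr = <ls_bseg>-prctr.
  ls_consol-dmbtr = <ls_bseg>-dmbtr.
  COLLECT ls_consol INTO lt_consol.   " sums DMBTR per BLART/HKONT/PRCTR
ENDLOOP.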

kiran_k8
Active Contributor

Raj,

In addition to optimising the performance of the SELECT query, you can also explore using field symbols while looping over the respective internal tables.
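
For example (lt_bseg is just an assumed internal table; the sign flip is only there to show a change done in place):

FIELD-SYMBOLS <ls_bseg> TYPE bseg.

LOOP AT lt_bseg ASSIGNING <ls_bseg>.
* The row is accessed in place: no work area copy per iteration and
* no MODIFY statement needed when the row is changed.
  IF <ls_bseg>-shkzg = 'H'.                 " credit line
    <ls_bseg>-dmbtr = <ls_bseg>-dmbtr * -1.
  ENDIF.
ENDLOOP.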

K.Kiran.

Jelena
Active Contributor

In the case of a "shadow FI" I believe some standard interface solutions already exist, possibly using IDocs. Try Google -> "external FI system site:sap.com", or maybe ask in the FI forum... err, tag.

Either way, a design that SELECTs the whole G/L into memory when you actually need either consolidated data or a small percentage of the records seems flawed. This question is about a specific issue caused by that design, but the design itself is the real problem here, IMHO.