SAP PO - Mapping Error Due to Runtime Memory Allocation

yeshuaq
Explorer

Hello,

I am having an issue while processing a DB-to-IDoc interface in SAP PO 7.5.

I have to process a large payload of about 300 MB. The mapping logic uses 'Variables' (the standard graphical mapping feature): I save data from the source at runtime and reuse those variable values for other target fields in the mapping.

The issue: with data under 50 MB the interface is successful, but when I process 300 MB at a time, the mapping fails on the variable field with the error 'Queue does not have the same number of values'. So I suspect that with a message this large, the mapping is not able to accommodate all the values in that variable.

Can anyone help me understand how to change the heap size, or how to allocate more heap to mapping objects?

Are there any specific Java parameters that give the mapping runtime more memory? Or let me know if the issue is due to some other reason.

We already have a Java heap size of 3 GB, but how do we increase the runtime memory for the ESR mapping object?
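As a side note: a quick way to verify how much heap the server node that runs the mapping actually sees is to log the JVM's own numbers. The sketch below is plain Java (no SAP API) and could be dropped into a throwaway test or UDF body just to confirm whether the configured -Xmx value reaches the mapping runtime.

/**
 * Minimal sketch: print the heap the current JVM actually sees, to confirm
 * whether a configured -Xmx value reached the node executing the mapping.
 */
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        System.out.println("max heap   : " + rt.maxMemory() / mb + " MB");   // upper bound (-Xmx)
        System.out.println("total heap : " + rt.totalMemory() / mb + " MB"); // currently committed
        System.out.println("free heap  : " + rt.freeMemory() / mb + " MB");  // free within committed
    }
}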

Thank you. YQ

Accepted Solutions (0)

Answers (3)

yeshuaq
Explorer

Hi Majumder/Talasila,

Thank you for the quick suggestions.

We have already done the SAP PI server sizing and tuned the system to handle large data volumes without out-of-memory errors or message blocking.

The issue is with the ESR mapping:

1. At runtime, the 'variable' (the standard option available in mapping) that I added does not seem able to handle/retain such a large number of records.

2. The issue may also be due to the FormatByExample function I used in the variable's mapping logic, i.e. FormatByExample seems unable to handle a large volume of around 100,000 records. (There is no issue with the data itself, because the source and target fields mapped with FormatByExample have the same record count.)
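For what it's worth, here is a minimal plain-Java illustration of what a FormatByExample-style function does (illustrative names only, not the SAP mapping API): it copies one queue while taking the context boundaries from the example queue, so both queues must carry the same number of values. If the runtime drops or truncates values on a very large message, a mismatch like the one modelled below is exactly what produces this error.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Plain-Java model of a FormatByExample-style queue function. */
public class FormatByExampleSketch {

    static final String CC = "<CC>"; // sentinel marking a context change

    static List<String> formatByExample(List<String> values, List<String> example) {
        List<String> result = new ArrayList<>();
        int v = 0;
        for (String e : example) {
            if (CC.equals(e)) {
                result.add(CC); // copy the context boundary from the example queue
            } else if (v < values.size()) {
                result.add(values.get(v++)); // one example value consumes one source value
            } else {
                throw new IllegalStateException("Queue does not have the same number of values");
            }
        }
        if (v != values.size()) { // leftover source values: counts diverged
            throw new IllegalStateException("Queue does not have the same number of values");
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> values  = Arrays.asList("A", "B", "C", "D");
        List<String> example = Arrays.asList("x", "x", CC, "x", "x");
        System.out.println(formatByExample(values, example)); // [A, B, <CC>, C, D]
    }
}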

So is there any specific Java property to change in NWA to provide more Java memory to the mapping runtime?

Also, I am not able to split the message, because I am grouping the records received from the source system based on conditions, and we cannot predict where matching records will occur in the source data. For example, the logic may group records of the same sales group, and records for one sales group may occur among the first or the last of the 100,000 records.
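To make the constraint concrete, here is a minimal sketch of that kind of grouping (record and field names are hypothetical). Because records of one sales group can appear anywhere in the input, no group is complete until the last record has been read, which is why a naive split by record count would tear groups apart:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical source record; salesGroup is the grouping key. */
class SourceRecord {
    final String salesGroup;
    final String payload;
    SourceRecord(String salesGroup, String payload) {
        this.salesGroup = salesGroup;
        this.payload = payload;
    }
}

public class GroupBySalesGroup {

    /**
     * Groups records by sales group in a single pass. Every group stays open
     * until the final record is seen, so the whole dataset has to be held in
     * memory (or spilled to disk) before any group can be emitted.
     */
    static Map<String, List<SourceRecord>> group(List<SourceRecord> records) {
        Map<String, List<SourceRecord>> groups = new LinkedHashMap<>();
        for (SourceRecord r : records) {
            groups.computeIfAbsent(r.salesGroup, k -> new ArrayList<>()).add(r);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<SourceRecord> records = Arrays.asList(
            new SourceRecord("G1", "first"),
            new SourceRecord("G2", "middle"),
            new SourceRecord("G1", "last")); // same group, far apart in the input
        System.out.println(group(records).keySet()); // [G1, G2]
    }
}

One way out is to push the ordering upstream, e.g. sort by the sales-group column in the sender query; once a group's records are guaranteed to be adjacent, chunk boundaries can be placed between complete groups.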

Please suggest.

Best regards, YQ

Bhargavakrishna
Active Contributor

Hi Yeshua,

I suggest processing the data in chunks to keep the message size manageable, instead of processing the huge record set in one go.

Check the links below for reference.

https://answers.sap.com/questions/8558049/pi-71-jdbc-sender-adapter-huge-load-from-db-select.html

https://answers.sap.com/questions/7561264/sending-data-in-chunks-in-jdbc-adapter-with-rollba.html

http://scn.sap.com/community/pi-and-soa-middleware/blog/2012/09/24/jdbc-receiver-scenarios-best-prac...
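To sketch the chunking idea outside of PI configuration (plain JDBC; the ORDERS table, PROCESSED flag, and H2 in-memory database are hypothetical demo choices), a chunked read fetches a bounded batch of unprocessed rows, handles them, marks them done, and repeats until the table is drained:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/** Plain-JDBC sketch of chunked reading; table and column names are hypothetical. */
public class ChunkedJdbcReader {

    static final int CHUNK_SIZE = 2; // tiny only for the demo

    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "")) {
            try (Statement st = con.createStatement()) { // demo data
                st.execute("CREATE TABLE ORDERS(ID BIGINT, PAYLOAD VARCHAR(100), PROCESSED INT)");
                st.execute("INSERT INTO ORDERS VALUES (1,'a',0),(2,'b',0),(3,'c',0)");
            }
            while (processChunk(con)) { /* repeat until drained */ }
        }
    }

    /** Reads one bounded chunk; returns false once no unprocessed rows remain. */
    static boolean processChunk(Connection con) throws SQLException {
        long lastId = -1;
        int count = 0;
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT ID, PAYLOAD FROM ORDERS WHERE PROCESSED = 0 "
                + "ORDER BY ID FETCH FIRST " + CHUNK_SIZE + " ROWS ONLY");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                lastId = rs.getLong("ID");
                count++;
                // ... map/forward rs.getString("PAYLOAD") as one small message ...
            }
        }
        if (count == 0) {
            return false;
        }
        try (PreparedStatement upd = con.prepareStatement( // mark the chunk as done
                "UPDATE ORDERS SET PROCESSED = 1 WHERE PROCESSED = 0 AND ID <= ?")) {
            upd.setLong(1, lastId);
            upd.executeUpdate();
        }
        return true;
    }
}

In the PI JDBC sender channel the same pattern is expressed through the configured SELECT and UPDATE statements, which the links above discuss.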

or

If you can generate a file from the database, pick the file up with the file adapter and process it in chunks.

https://blogs.sap.com/2010/10/18/pixi-pi-73-processing-of-large-files-teaser/
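For the file route, the same bounded-memory idea applies. A minimal plain-Java sketch (chunk size is a hypothetical choice; in PI itself the adapter's large-file handling discussed in the blog above would do this work) streams the file and hands it on in fixed-size batches of lines:

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

/** Sketch: stream a large file and forward it in bounded chunks of lines. */
public class ChunkedFileReader {

    static final int CHUNK_LINES = 10_000;

    public static void main(String[] args) throws Exception {
        try (BufferedReader in = Files.newBufferedReader(Path.of(args[0]))) {
            List<String> chunk = new ArrayList<>(CHUNK_LINES);
            String line;
            while ((line = in.readLine()) != null) {
                chunk.add(line);
                if (chunk.size() == CHUNK_LINES) {
                    handle(chunk);
                    chunk.clear(); // memory stays bounded by one chunk
                }
            }
            if (!chunk.isEmpty()) {
                handle(chunk); // final partial chunk
            }
        }
    }

    static void handle(List<String> chunk) {
        System.out.println("forwarding a chunk of " + chunk.size() + " lines");
    }
}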

Regards

Bhargava Krishna

sugata_bagchi2
Active Contributor

You need to check with your Basis team for scaling and sizing. If the data from the source is 300 MB, PI might not be the best option; another ETL tool may be a better fit.