on 2015 Aug 14 7:39 AM
We are facing an issue in an extractor based on a function module, which fetches data from KONV (a cluster table) via an inner join with table VFKP on the field KNUMV (Number of the document condition).
When we load the delta (it selects data by the changed-on field AEDAT in VFKP), the system throws the dump TSV_TNEW_PAGE_ALLOC_FAILED with the text "No more storage space available for extending an internal table" in program SAPLARFC.
When we run a repair full load, the extractor correctly picks up the data in separate packets; the dump occurs only in delta mode.
For a permanent resolution we therefore want to change the data packet size at the extractor level itself, since changing it in the inbound InfoPackage has no effect.
Can anyone please suggest how to do this?
Hi,
You can override the default settings of the source system in the InfoPackage in BW.
Menu path: Scheduler > DataSource Default Data Transfer
Best regards,
Sander
Hello Pavan,
Thanks for your reply. We have already tried the InfoPackage settings option, but it doesn't work. That leaves hard-coding I_MAXSIZE in the function module that serves as the extractor.
Could you please share the code snippet for that? I believe people won't approve of or accept hard-coding if I choose it, but I just want to know how to incorporate it so I can test it in my QA system.
Hello Sander, thanks for the quick reply! I am attaching a .txt file with the function module source code for both of them. Please go through it and let me know where I can introduce the code to manipulate the data packet size so that the extractor picks up a sufficient number of records (50k per packet) in intervals.
Hi,
I analyzed the code and I see that the package size is used correctly. I don't have a direct explanation for why it runs into memory problems in delta mode only.
As far as I can follow the logic, table VFKP (Shipment Costs: Item Data) is used for the packaging. The field KNUMV (Number of the document condition) is read for the selected records and is used in a next step to select data from table KONV (Conditions (Transaction Data)) with a FOR ALL ENTRIES IN i_vfkp_data WHERE knumv = i_vfkp_data-knumv.
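In outline, the KONV selection presumably has this shape (a sketch only; i_konv_data is an assumed name for the KONV result table, the actual names are in your attached source):

* Presumed shape of the current selection (sketch, result table name assumed)
SELECT * FROM konv
  INTO TABLE i_konv_data
  FOR ALL ENTRIES IN i_vfkp_data
  WHERE knumv = i_vfkp_data-knumv.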
This is where an issue can occur. You have to be very careful with FOR ALL ENTRIES IN: you should only use it with a list of distinct values of KNUMV, because duplicate keys in the driver table make the generated database statements and intermediate results unnecessarily large. I cannot judge whether that is the case here.
To come to a list of distinct values, you can introduce a second "for all entries" table as a copy of the other internal table. You sort its records on KNUMV, then delete adjacent duplicates from it, and use this deduplicated table in the SELECT statement on KONV, as sketched below.
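A minimal sketch of that approach, assuming the names from the snippet above (i_vfkp_keys is an assumed name for the copy):

* Sketch: build a driver table with distinct KNUMV values
DATA: i_vfkp_keys LIKE i_vfkp_data.

i_vfkp_keys = i_vfkp_data.
SORT i_vfkp_keys BY knumv.
DELETE ADJACENT DUPLICATES FROM i_vfkp_keys COMPARING knumv.

* Guard against an empty driver table: FOR ALL ENTRIES with an
* empty table would select ALL records from KONV
IF i_vfkp_keys IS NOT INITIAL.
  SELECT * FROM konv
    INTO TABLE i_konv_data
    FOR ALL ENTRIES IN i_vfkp_keys
    WHERE knumv = i_vfkp_keys-knumv.
ENDIF.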
Secondly, I cannot judge how the data in table KONV is related to VFKP, i.e. how many KONV records can be expected for one VFKP record. If there are e.g. 10 KONV records for 1 VFKP record, then indirectly you can end up with a package size which is 10 times higher (10 x 50,000 = 500,000 records).
As a temporary workaround I suggest reducing the package size by hard-coding it as follows:
* Fill parameter buffer for data extraction calls
s_s_if-requnr = i_requnr.
s_s_if-dsource = i_dsource.
* s_s_if-maxsize = i_maxsize. "temporary work-around
s_s_if-maxsize = 10000. "temporary work-around
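For orientation: in the standard extractor template (function module RSAX_BIW_GET_DATA_SIMPLE) this assignment sits in the initialization branch, i.e. the block processed when i_initflag = sbiwa_c_flag_on. Your custom function module presumably follows the same pattern, so that is where the hard-coded value belongs.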
Now you can check if the delta mode can handle the memory consumption.
Also try debugging in delta mode. I don't know whether that can be done using transaction RSA3 or whether you have to resort to the more difficult background process debugging.
Last but not least, please have a look at the document, which can perhaps serve as reference material.
Best regards,
Sander