
Data Loading Issue to DSO.

Former Member

Hi,

I have a data load from DSO1 to DSO2, where I also have 4 field-level routines that populate some fields from master data. The master data has about 50,000 to 80,000 records.

It is taking more than 8 hours to load 8 lakh (800,000) records.

Can anyone guide me regarding performance?

Data packet size is 50,000 records/pack.

No semantic key is enabled.

I have READ statements in the field routines, no loops.

Regards

Laxman.

Accepted Solutions (1)

Former Member

Hi Laxman,

Try optimizing your code. Some points (see the sketch after this list):

1. If possible, go for a start or end routine. If not, create a global internal table in the start routine and write all SELECT statements there; at field level, use only a field symbol and a READ statement.

2. Select only the required fields and use INTO CORRESPONDING FIELDS OF in the SELECT.

3. Use FOR ALL ENTRIES IN SOURCE_PACKAGE in the SELECT.

4. Refresh (clear) the internal tables after the update in the end routine.

5. Use field symbols instead of work areas; that makes a big difference.

6. In the READ, use BINARY SEARCH, and don't forget to first SORT the table in the start routine by the keys you are using in the READ.

7. Set semantic keys so that all records get sorted by them and the SELECT with FOR ALL ENTRIES becomes more efficient.

8. Field-level routines take more time than start or end routines, so if possible switch to a start routine.

9. If extraction is taking more time, then reduce the package size.
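A minimal sketch of points 1, 3, 5 and 6 combined. The names G_T_LOOKUP, G_S_LOOKUP, /BIC/AZLOOKUP00 and ZSUB_HEAD are placeholders for illustration only, not actual objects from this transformation:

* Global part of the routine: lookup structure and table, visible in all routines
DATA: BEGIN OF G_S_LOOKUP,
        COMP_CODE  TYPE /BI0/OICOMP_CODE,
        GL_ACCOUNT TYPE /BI0/OIGL_ACCOUNT,
        COSTCENTER TYPE /BI0/OICOSTCENTER,
        ZSUB_HEAD  TYPE /BIC/OIZSUB_HEAD,
      END OF G_S_LOOKUP,
      G_T_LOOKUP LIKE TABLE OF G_S_LOOKUP.

* Start routine: fill the global lookup table once per data package
IF SOURCE_PACKAGE IS NOT INITIAL.
  SELECT COMP_CODE GL_ACCOUNT COSTCENTER /BIC/ZSUB_HEAD
    FROM /BIC/AZLOOKUP00                  "placeholder active DSO table
    INTO TABLE G_T_LOOKUP                 "fields listed in the order of G_S_LOOKUP
    FOR ALL ENTRIES IN SOURCE_PACKAGE
    WHERE COMP_CODE  = SOURCE_PACKAGE-COMP_CODE
      AND GL_ACCOUNT = SOURCE_PACKAGE-GL_ACCOUNT.
  SORT G_T_LOOKUP BY COMP_CODE GL_ACCOUNT COSTCENTER.
ENDIF.

* Field-level routine: field symbol plus BINARY SEARCH, no SELECT per record
FIELD-SYMBOLS <FS_LOOKUP> LIKE LINE OF G_T_LOOKUP.
READ TABLE G_T_LOOKUP ASSIGNING <FS_LOOKUP>
     WITH KEY COMP_CODE  = SOURCE_FIELDS-COMP_CODE
              GL_ACCOUNT = SOURCE_FIELDS-GL_ACCOUNT
              COSTCENTER = SOURCE_FIELDS-COSTCENTER
     BINARY SEARCH.
IF SY-SUBRC = 0.
  RESULT = <FS_LOOKUP>-ZSUB_HEAD.
ENDIF.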

Regards,

Jaya Tiwari

Answers (4)

Former Member

Hi Laxman,

I will try to explain what exactly may be causing the problem for you.

Suppose only 50,000 records are to be loaded. Please note that field routines are executed for every record, which means each of the 4 field routines is executed 50,000 times (200,000 routine executions in total). As you are fetching master data from some table, you must have written a SELECT statement in all 4 routines, which means 200,000 SELECT statement executions for 50,000 records (assuming only 1 SELECT statement per routine).

So this is your problem: about 200,000 SELECT statements mean the database is hit 200,000 times, which degrades performance.

Another thing you can check is the WHERE clause in your SELECT statements. If the WHERE clause has more than one field, ensure that the sequence of fields in the WHERE clause matches their sequence in the database table that your statement is reading.

What should be done:

The best way would be to fill these fields in the start routine. The advantage:

Assuming that 50,000 records are to be loaded and 1 SELECT statement is needed to fetch the master data for each field, only 4 SELECT statements in total would be executed. This would greatly improve the performance (see the sketch below).
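A minimal sketch of that idea; since this is a 7.0 transformation, the target fields would typically be filled once per package in an end routine. ZLOOKUP_TAB and the field names here are placeholders for illustration, not the actual objects:

* End routine: one SELECT and one loop per package instead of one SELECT per record
DATA: LT_LOOKUP TYPE SORTED TABLE OF ZLOOKUP_TAB
                WITH NON-UNIQUE KEY GL_ACCOUNT COSTCENTER.
FIELD-SYMBOLS: <RESULT_FIELDS> LIKE LINE OF RESULT_PACKAGE,
               <LOOKUP>        TYPE ZLOOKUP_TAB.

SELECT GL_ACCOUNT COSTCENTER /BIC/ZSUB_HEAD
  FROM ZLOOKUP_TAB
  INTO CORRESPONDING FIELDS OF TABLE LT_LOOKUP.   "database hit once per package

LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
  READ TABLE LT_LOOKUP ASSIGNING <LOOKUP>
       WITH TABLE KEY GL_ACCOUNT = <RESULT_FIELDS>-GL_ACCOUNT
                      COSTCENTER = <RESULT_FIELDS>-COSTCENTER.
  IF SY-SUBRC = 0.
    <RESULT_FIELDS>-/BIC/ZSUB_HEAD = <LOOKUP>-/BIC/ZSUB_HEAD.   "fill target field directly
  ENDIF.
ENDLOOP.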

Former Member

Thanks a lot to ALL,

Durgesh and Rahul:

My code is simple. ITAB is declared and filled in the start routine; below is the field-level routine.

I have three field routines of this kind.

IF SOURCE_FIELDS-GL_ACCOUNT = '0007102000'.
  RESULT = 'Materials'.
ELSE.
  READ TABLE ITAB3 INTO WA3
       WITH KEY COMP_CODE  = SOURCE_FIELDS-COMP_CODE
                GL_ACCOUNT = SOURCE_FIELDS-GL_ACCOUNT
                XEXP_HEAD  = SOURCE_FIELDS-/BIC/XEXP_HEAD
                COSTCENTER = SOURCE_FIELDS-COSTCENTER.
  IF SY-SUBRC = 0.
*   break-point.
    RESULT = WA3-ZSUB_HEAD.
*   break-point.
  ENDIF.
ENDIF.

Kamal:

Declared an ITAB in the start routine and filled it.

In the field-level routine I am using a READ statement to read the ITAB.

Parallel processes are 3.

If the number of parallel processes were lower, it would take more time. Am I correct?

Kiran:

It's not 3.x, it's 7.0. Also, I cannot use the attributes.

Former Member

Hi,

From the code that I see, your READ statement includes the fields COMP_CODE, GL_ACCOUNT, XEXP_HEAD, COSTCENTER in that sequence. Check whether the table ITAB3 has these 4 fields declared in the same sequence (it doesn't matter if there are other fields between them, but the relative sequence of these 4 fields in ITAB3 should be the same, e.g. GL_ACCOUNT should not be declared before COMP_CODE). If they are in some other sequence, use them in the READ in the sequence in which the fields are defined in the ITAB.

Former Member

Hi Laxman,

Some tweaks in the code to reduce the time:

In the start routine, add one statement:

SORT ITAB3 BY COMP_CODE GL_ACCOUNT XEXP_HEAD COSTCENTER.

In the field-level routine:

FIELD-SYMBOLS: <WA3> LIKE LINE OF ITAB3.   "same line type as ITAB3

IF SOURCE_FIELDS-GL_ACCOUNT = '0007102000'.
  RESULT = 'Materials'.
ELSE.
  READ TABLE ITAB3 ASSIGNING <WA3>
       WITH KEY COMP_CODE  = SOURCE_FIELDS-COMP_CODE
                GL_ACCOUNT = SOURCE_FIELDS-GL_ACCOUNT
                XEXP_HEAD  = SOURCE_FIELDS-/BIC/XEXP_HEAD
                COSTCENTER = SOURCE_FIELDS-COSTCENTER
       BINARY SEARCH.
  IF SY-SUBRC = 0.
*   break-point.
    RESULT = <WA3>-ZSUB_HEAD.
*   break-point.
  ENDIF.
ENDIF.

I am sure it will reduce the time by at least 2-3 hours.

One more thing: the fields in the SORT statement can be reduced; only sort by the key fields common to all the field-level routine READs.

Regards,

Jaya Tiwari

KamalMehta
Product and Topic Expert

Hi,

I would suggest declaring the ITAB as a hashed table in the start routine, with a unique key of COMP_CODE, GL_ACCOUNT, COSTCENTER etc., as used in the READ statement.

Then you need not sort it.

Secondly, you can go for field symbols as suggested by Jaya. This would surely improve your performance. A sketch combining both is below.
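A minimal sketch of that combination, reusing WA3/ITAB3 from Laxman's code (illustrative only; note that the chosen key must actually be unique in the lookup data, and the SELECT that fills ITAB3 must not insert duplicate keys):

* Start routine: declare the lookup table as a hashed table keyed on the READ fields
DATA ITAB3 LIKE HASHED TABLE OF WA3
     WITH UNIQUE KEY COMP_CODE GL_ACCOUNT XEXP_HEAD COSTCENTER.

* Field-level routine: direct key access, no SORT and no BINARY SEARCH needed
FIELD-SYMBOLS <FS_WA3> LIKE LINE OF ITAB3.
READ TABLE ITAB3 ASSIGNING <FS_WA3>
     WITH TABLE KEY COMP_CODE  = SOURCE_FIELDS-COMP_CODE
                    GL_ACCOUNT = SOURCE_FIELDS-GL_ACCOUNT
                    XEXP_HEAD  = SOURCE_FIELDS-/BIC/XEXP_HEAD
                    COSTCENTER = SOURCE_FIELDS-COSTCENTER.
IF SY-SUBRC = 0.
  RESULT = <FS_WA3>-ZSUB_HEAD.
ENDIF.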

Thanks

Kamal

Former Member

Hi,

Can you paste the SELECT for ITAB3 (and align it with the READ statement you are using)?

Check your WHERE clauses for all the SELECTs in the start routine.

You should try to restrict the number of records in the internal tables.

Regards,

Neeraj.

Former Member

Hi All,

Thanks a lot. I don't know what a field symbol is or how to use it.

Let me learn something about it and try it, then I will get back to you.

Anyway, below is my complete code.

Start routine.

DATA: BEGIN OF WA2,
        GL_ACCOUNT     TYPE /BI0/OIGL_ACCOUNT,
        /BIC/X_BUSINSS TYPE /BIC/OIX_BUSINSS,
        /BIC/XALLO_PRO TYPE /BIC/OIXALLO_PRO,
      END OF WA2.
DATA ITAB2 LIKE TABLE OF WA2.

DATA: BEGIN OF WA1,
        COSTCENTER TYPE /BI0/OICOSTCENTER,
      END OF WA1.

TABLES: /BIC/AXNET_O0500.

DATA: BEGIN OF WA3,
        COMP_CODE  TYPE /BI0/OICOMP_CODE,
        GL_ACCOUNT TYPE /BI0/OIGL_ACCOUNT,
        XEXP_HEAD  TYPE /BIC/OIXEXP_HEAD,
        COSTCENTER TYPE /BI0/OICOSTCENTER,
        ZSUB_HEAD  TYPE /BIC/OIZSUB_HEAD,
      END OF WA3.
DATA ITAB3 LIKE TABLE OF WA3.

DATA: ITAB_PRV LIKE /BIC/AXNET_O0500 OCCURS 0.
DATA: WA_PRV   LIKE /BIC/AXNET_O0500.
DATA ITAB1 LIKE TABLE OF WA1.

SELECT GL_ACCOUNT /BIC/X_BUSINSS /BIC/XALLO_PRO
  FROM /BIC/AXNET_O1300 INTO TABLE ITAB2.
DELETE ADJACENT DUPLICATES FROM ITAB2 COMPARING ALL FIELDS.

SELECT COMP_CODE GL_ACCOUNT /BIC/XEXP_HEAD COSTCENTER /BIC/ZSUB_HEAD
  FROM /BIC/AXNET_O1300 INTO TABLE ITAB3.
DELETE ADJACENT DUPLICATES FROM ITAB3 COMPARING ALL FIELDS.

SELECT COSTCENTER
  FROM /BIC/AXNET_O1300 INTO TABLE ITAB1.
DELETE ADJACENT DUPLICATES FROM ITAB1 COMPARING ALL FIELDS.

SELECT * FROM /BIC/AXNET_O0500 INTO TABLE ITAB_PRV.

Field-level routines.

Field 1.

IF SOURCE_FIELDS-/BIC/X_BUSINSS = 'GLOBAL'.
  RESULT = 'GLB'.
ELSE.
  RESULT = SOURCE_FIELDS-/BIC/X_BUSINSS.
ENDIF.

Field 2.

IF SOURCE_FIELDS-GL_ACCOUNT = '0007102000'.
  READ TABLE ITAB_PRV INTO WA_PRV
       WITH KEY GL_ACCOUNT     = SOURCE_FIELDS-GL_ACCOUNT
                /BIC/X_BUSINSS = SOURCE_FIELDS-/BIC/X_BUSINSS.
  IF SY-SUBRC = 0.
    RESULT = SOURCE_FIELDS-/BIC/XALL_AMOT * WA_PRV-/BIC/XALLO_PRO.
  ENDIF.
ENDIF.

Field 3.

IF SOURCE_FIELDS-GL_ACCOUNT = '0007102000'.
  RESULT = 'Materials'.
ELSE.
  READ TABLE ITAB3 INTO WA3
       WITH KEY COMP_CODE  = SOURCE_FIELDS-COMP_CODE
                GL_ACCOUNT = SOURCE_FIELDS-GL_ACCOUNT
                XEXP_HEAD  = SOURCE_FIELDS-/BIC/XEXP_HEAD
                COSTCENTER = SOURCE_FIELDS-COSTCENTER.
  IF SY-SUBRC = 0.
    RESULT = WA3-ZSUB_HEAD.
  ENDIF.
ENDIF.

Field 4.

READ TABLE ITAB1 INTO WA1
     WITH KEY COSTCENTER = SOURCE_FIELDS-COSTCENTER.
IF SY-SUBRC = 0.
  RESULT = 'NETWORK'.
ELSEIF SOURCE_FIELDS-GL_ACCOUNT = '0007102000'.
  RESULT = 'NETWORK'.
ELSE.
  RESULT = 'NON-NETWORK'.
ENDIF.

former_member188080
Active Contributor

Hi,

I am not sure whether you are loading in 3.x or BI 7.

If 3.x, just bypass the PSA.

In the case of the routines, see if you can avoid them and use master data attributes for those 4 new fields; it will save some time.

Otherwise, carry out debugging for a few records and check which step is taking longer.

Thanks and regards

Kiran

KamalMehta
Product and Topic Expert

Hi,

For the time being you can do 2 things:

1. Check the parallel processing of the DTP and reduce it to 1 or 2. To do this, double-click the DTP and then, from the menu bar, choose Goto > Settings for DTP Temporary Storage. Here you can change the parallel processing of the DTP.

2. Decrease the data packet size. You can keep it at 20,000 for the time being.

Later you need to check the routines properly, as suggested by Jaya:

1. If possible, do all the SELECTs in the start routine rather than at the field level.

2. Use an internal table of type hashed with a unique key definition.

3. Use field symbols if possible.

Hope it helps.

Thanks

Kamal Mehta

Former Member

Hi Laxman,

If the routines are the culprit, then you can use some code optimization techniques.

Check the link below for more information.

[;

Regards,

Durgesh.