
stock query performance...

Former Member

Hello,

I've created a process chain that loads BF delta (2lis_03_bf) and UM delta (2lis_03_um).

The process chain consists of:

1. a starter

2. delete index

3. load BF delta

4. load UM delta

5. create index

6. compression of cube
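
To make the intended ordering explicit, here is a rough sketch in Python (the step labels are my own shorthand, not actual RSPC step names):

# Rough sketch of the intended process chain ordering.
chain = [
    "start",                      # 1. starter
    "drop_secondary_indexes",     # 2. delete index
    "load_delta_2LIS_03_BF",      # 3. load BF delta
    "load_delta_2LIS_03_UM",      # 4. load UM delta
    "rebuild_secondary_indexes",  # 5. create index
    "compress_cube",              # 6. compression of cube
]
for step in chain:
    print(step)  # each step starts only after the previous one succeeds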

Execution of a query based on this cube is extremely slow today, and I'm concerned that the setup of the process chain might be the problem.

Is the delete/create index sequence compatible with compression, as described above?

Best regards,

Fredrik

Accepted Solutions (0)

Answers (1)

edwin_harpino
Active Contributor

hi Fredrik,

check OSS Note 419490 - Non-cumulatives: poor query performance

Symptom

Poor performance of non-cumulative queries

Other terms

OLAP, Business Information Warehouse, InfoCube, report, query, non-cumulative, non-cumulatives, non-cumulative value, non-cumulative key figure, validity slice, validity table

Solution

Note the following points if your stock queries are performing poorly:

1. Compression:

Non-cumulative queries depend strongly on the compression of the InfoCubes. Therefore, if you encounter poor performance, check whether the data of the cube has been compressed (if this is possible in the respective application).

Always compress data when you are sure that you do not want to delete the corresponding request from the cube.

You can compress a BasicCube via the 'Compression' tab in the Administrator Workbench under 'Manage'. Aggregates will always be compressed automatically during the rollup.
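
To illustrate why this matters, here is a minimal Python sketch of what compression does conceptually (the table contents are invented; the real F and E tables live in the database): the request-organized F-table rows are summed into the E table with the request ID eliminated, so queries have far fewer rows to read.

from collections import defaultdict

# Uncompressed F-table rows: (request_id, material, day, quantity); invented data.
f_table = [
    (1001, "MAT_A", "2004-06-01", 10),
    (1002, "MAT_A", "2004-06-01", -4),
    (1003, "MAT_A", "2004-06-02", 7),
]

# Compression: drop the request ID and sum the remaining rows into the E table.
e_table = defaultdict(int)
for _request, material, day, quantity in f_table:
    e_table[(material, day)] += quantity

print(dict(e_table))
# -> {('MAT_A', '2004-06-01'): 6, ('MAT_A', '2004-06-02'): 7}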

2. Validity table:

The number of validity characteristics and their cardinality also have a big impact on performance. Therefore, you should only define characteristics as validity characteristics if they are really required on the application side.

Use Transaction RSDV to display the number of different validity slices.

Note 360249 contains detailed information on this subject.

It also describes how you can change the validity table for an InfoCube that has already been filled.
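
As a rough illustration of the cardinality effect (the numbers are invented): the number of possible validity slices can grow up to the product of the cardinalities of the validity characteristics, which is why every additional validity characteristic is costly.

plants = 50             # distinct values of one validity characteristic, e.g. plant
storage_locations = 20  # distinct values of a second validity characteristic

print(plants)                      # up to 50 validity slices with plant alone
print(plants * storage_locations)  # up to 1000 slices if both are validity-relevant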

3. Time restrictions in the query:

Non-cumulative queries should always have a restriction on a time characteristic. The more restrictive this time restriction is, the faster the query will be executed, since the non-cumulative value only needs to be reconstructed for a smaller number of events.
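
A simplified Python sketch of the idea (invented data; the real reconstruction works from the reference point, or 'marker', written during compression): the stock for a requested date is derived from a reference value plus the movements between the reference point and that date, so a tighter time restriction leaves fewer movement records to apply.

reference_stock = 100  # stock at the reference point (the 'marker')
movements = {          # day -> net movement since the reference point
    "2004-06-01": 10,
    "2004-06-02": -4,
    "2004-06-03": 7,
}

def stock_on(day):
    # apply every movement up to and including the requested day
    return reference_stock + sum(q for d, q in movements.items() if d <= day)

print(stock_on("2004-06-02"))  # -> 106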

4. Time drilldown in the query:

The stock has to be calculated for every event contained in the drilldown. This means that if both the 'Calendar day' and the 'Calendar month' are included in the cube, a breakdown by month will be quicker than a breakdown by day, since the number of events for which a stock needs to be calculated is smaller. (As of BW 2.0B Support Package 20, there is an additional improvement for such queries, because the system can now use aggregates that aggregate over time. To use such aggregates for non-cumulative queries, the same condition must hold as for normal InfoCubes: the aggregate must contain a tagged time characteristic from which all other time characteristics can be derived. Aggregates of this type can only be defined in expert mode; see note 456359.)

For the remaining exception aggregations (average, average weighted with factory calendar, minimum, and maximum), this rule does not apply, since the data is always calculated first on the most detailed time characteristic and only then does the system execute the exception aggregation.

Thus, it might be a good idea to split a query that contains non-cumulative key figures with LAST as well as with average aggregation, if you do not always need the average.
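
The arithmetic behind the month-versus-day example, assuming one year of data:

days = 365   # reconstruction points for a drilldown by calendar day
months = 12  # reconstruction points for a drilldown by calendar month
print(days / months)  # roughly 30x fewer stock calculations by month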

5. Totals rows:

If the totals rows in a report are not needed, you should suppress them, since calculating the totals rows can be very time-consuming depending on the aggregations used.

Former Member

Thanks!

But is the process chain logic (steps 1-6 as defined above) OK?

Best regards,

Fredrik