Validation of an InfoProvider

Former Member

Hi All,

A quick question: if I design a cube or a MultiProvider, how do I decide whether it is designed for good performance, and how do I find the loopholes in my design? I know there are some best practices and recommendations for building these InfoProviders, but is there any way I can check the performance myself, maybe by running the queries and measuring times, etc.? Any suggestions are appreciated.

Thanks.

Accepted Solutions (1)

Former Member

One of the important things to me for a cube (assuming your test data is representative of the production data) would be to look at the dimension tables and check how many rows each table has and how many distinct values the characteristics in the dimension have. That might suggest breaking a dimension up into two, or moving a characteristic from one dimension to another. It depends on how thoroughly you were able to analyze the source data for your original design. For example:

Dim 1 has 2 characteristics:

Char 1 - 15,000 distinct values

Char 2 - 100 distinct values

Depending on the correlation of these two characteristics, your dimension table could range from 15,000 rows to 1,500,000 rows (15,000 × 100). If the dimension table is closer to 15,000 rows, queries that read that dimension table will probably be OK, but if it ends up much larger, e.g. 300,000 rows, you might want to consider splitting the characteristics into different dimensions.
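To make the arithmetic concrete, here is a rough sketch in Python (not an SAP tool; the characteristic names and value sets are made up for illustration) comparing the best-case and worst-case dimension table sizes:

```python
# Hedged sketch: how characteristic correlation drives dimension table size.
# The value sets below are hypothetical stand-ins for Char 1 and Char 2.

char1_values = [f"MAT{i:05d}" for i in range(15_000)]  # Char 1: 15,000 distinct values
char2_values = [f"PLANT{i:03d}" for i in range(100)]   # Char 2: 100 distinct values

# Fully correlated case: each Char 1 value always appears with the same
# Char 2 value (e.g. every material belongs to exactly one plant), so the
# dimension table holds one row per Char 1 value.
correlated_rows = {(m, char2_values[i % 100]) for i, m in enumerate(char1_values)}

# Worst case: every combination of the two characteristics occurs in the data.
worst_case_rows = len(char1_values) * len(char2_values)

print(f"Dimension rows, correlated chars: {len(correlated_rows):,}")  # 15,000
print(f"Dimension rows, worst case:       {worst_case_rows:,}")       # 1,500,000
```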

As far as query results go, I would rely on the information in the BW statistics cubes or the RSDDSTAT table to get the database times. Keep in mind that you need to run any query you are testing twice, making sure the query is NOT using the OLAP cache, and disregard the time for the first run. The first run will most likely need to physically read the data from disk, which takes more time, whereas the second execution is more likely to retrieve much of the data from the database's buffer (memory), which is much faster than reading from disk.
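As a hedged illustration of the two-run approach (plain Python; run_query and the query name are hypothetical stand-ins for however you trigger the query, e.g. via RSRT, and the real database times come from the statistics cubes or RSDDSTAT):

```python
import time

def run_query(query_name: str) -> None:
    """Hypothetical placeholder: substitute whatever mechanism you use to
    execute the BW query, with the OLAP cache switched off."""
    time.sleep(0.1)  # stand-in for the actual query work

def timed_runs(query_name: str, runs: int = 2) -> list:
    """Execute the query several times, returning the elapsed seconds per run."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        run_query(query_name)
        timings.append(time.perf_counter() - start)
    return timings

# Per the advice above: discard the first (cold, disk-bound) run and
# compare queries using the second (warm, buffer-served) run.
cold, warm = timed_runs("ZSALES_Q001")  # hypothetical query name
print(f"Run 1 (cold, disregard): {cold:.3f}s")
print(f"Run 2 (warm, compare):   {warm:.3f}s")
```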

You do have to balance your time and effort against the possible gains. If the cube will never be very large, you probably don't need to agonize over every last performance issue; but if the cube will be large and/or queried heavily, then a greater investment of design and review effort is warranted.

There are other tools, such as aggregates and master data indexes, that can help some queries too.

The OLAP cache can provide a huge performance gain, but it requires that the variables in your queries be set up in a certain way to maximize OLAP cache use.

Also, a redesign a year after the cube is in use is possible and might even be necessary; don't rule it out. So much of the design is based on the data and how it is used, so the amount of expertise (internal or consultant) with both that was available during the original design will largely dictate the likelihood of a future redesign.

Answers (1)

Former Member

Hi Visu,

There are a lot of documents in the Service Marketplace that answer your queries. Give me your email ID and I will send you a document.

Southie

Former Member

My email ID is visu_venkat@yahoo.com

Thanks a lot.