
Maximum Recommended Record Count for a DSO

Former Member

I thought I'd seen some guidelines with recommendations regarding DSO record counts ... but I can't seem to find them at the moment.

We generally load from the PSA to a DSO, and then into partitioned cubes. In this scenario the DSO continues to grow, and after a few years I'm wondering: how big is too big? I don't see any load issues yet, but it would seem like that day is inevitable!

Accepted Solutions (1)

Former Member

"For DataStore objects, we recommend that you do not save more than 20 million data records. Otherwise the activation process takes considerably longer. For InfoCubes, we recommend that you do not save more than 200-400 million data records. Otherwise the compression and reconstruction of aggregates takes too long. The semantically partitioned object does not have these implicit restrictions, as the data quantity is distributed between several data containers."- help.sap.com

http://help.sap.com/saphelp_nw73/helpdata/en/d1/468956248e4d9ca351896d54ab3a78/frameset.htm
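If you want to see where a given DSO stands against that 20-million guideline, a count on its active table is enough. A minimal sketch, assuming an Oracle back end and a hypothetical standard DSO named ZSALES (the active table of a standard DSO follows the /BIC/A&lt;name&gt;00 naming pattern):

-- Count the records in the active table of the hypothetical DSO ZSALES.
-- Standard DSO active tables are named /BIC/A<name>00.
SELECT COUNT(*) AS active_records
FROM "/BIC/AZSALES00";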

ThomasEsser
Advisor

Where do these values come from? 20 million records for a DSO?

Answers (2)

former_member184494
Active Contributor

There are a few questions you could ask to find out if you need to "touch" this DSO that has grown in size ...

1. Do you have a need for reporting off the DSO?

2. Do you need data that is really old - say, about 3-4 years old?

1. If you need to report on this DSO - the smaller the better, or you could partition the DSO.

For partitioning the DSO, you can either do the partitioning at the database level or use the semantic partitioning delivered with BW 7.3. If you have to partition a very large DSO at the database level, be advised that this is not something that can be done in 15-20 minutes - please consult your DBA about it. Make sure you enable row movement when you do this (see the sketch below).
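To make the database-level option concrete, here is a minimal Oracle sketch with hypothetical table and column names. A real migration of an existing active table would go through online redefinition or a copy-and-swap rather than a plain CREATE; this only illustrates the target layout and the row-movement setting mentioned above:

-- Hypothetical target layout: active table of DSO ZSALES, range-partitioned
-- by calendar month (0CALMONTH). Column list shortened for illustration.
CREATE TABLE "/BIC/AZSALES00" (
  "CALMONTH"   VARCHAR2(6)  NOT NULL,
  "DOC_NUMBER" VARCHAR2(10) NOT NULL,
  "AMOUNT"     NUMBER(17,2)
)
PARTITION BY RANGE ("CALMONTH") (
  PARTITION p2010 VALUES LESS THAN ('201101'),
  PARTITION p2011 VALUES LESS THAN ('201201'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
)
ENABLE ROW MOVEMENT;  -- lets Oracle relocate a row if its partition key is updated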

2. If you have some really old data that you will not need, you could archive it out using the BW archiving tools. (Archiving selective data means you must not receive any changed records for those selections afterwards - otherwise the data loads will start failing.)

In most typical installations, when you look at the DSOs for deliveries, sales orders, etc., they become huge over a period of time - 150-200 million records. We have DSOs with this many records, but those DSOs are partitioned at the database level, and report design is then strictly enforced to make sure queries use the partitions (an illustration follows below). Otherwise, do not report on the DSO - report only off the cubes.
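To illustrate what "use the partitions" means at the database level: any query that restricts on the partitioning column lets the optimizer prune to the matching partitions instead of scanning all 150-200 million rows. A hedged example against the hypothetical table sketched above:

-- Because CALMONTH is the partition key, this scan touches only the
-- partitions covering 2011 rather than the full table.
SELECT "DOC_NUMBER", SUM("AMOUNT") AS total_amount
FROM "/BIC/AZSALES00"
WHERE "CALMONTH" BETWEEN '201101' AND '201112'
GROUP BY "DOC_NUMBER";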

And to answer your question - there is no magic number as such, but in light of activities like statistics generation, Unicode conversion, etc. - predominantly DB-related activities - the larger the database, the tougher these become.

Former Member

I believe we are addressing the question of sizing the EDW layer, DW layer, PSA layer, and multidimensional model layer, and how to address future needs. It depends on several factors:

- How big is your business?

- How big is your database?

- How do you plan to see your data in each layer after, let us say, 3 years?

- Semantic partitioning, table partitioning, archiving, and near-line storage (NLS)

- PSA

- EDW layer

- Multidimensional model layer

How do you differentiate between each layer?

- Reporting requirements

- Granularity of the data

- Frequency of load

- Historical background

Do you plan to implement a BI Accelerator?

Please look at the 60 TB database proof-of-concept scenario and the 25 TB BIA proof-of-concept scenario in the following document, along with the comparison of InfoCube size vs. data warehouse size. It is a very good document from SAP.

http://csc-studentweb.lr.edu/swp/Berg/Conferences/SAP_NW2008_and_Admin_2008/BI/Track3/Track3_session...

Former Member

That's certainly relevant to the topic!

We've been operational for a couple of years, and I have a DSO that's currently at almost 80M records. That data is already loaded to partitioned cubes, and although it is not accessed frequently, I wouldn't want to delete it. So obviously, NLS comes to mind. Short of that, I could see using something like a semantically partitioned DSO (whether handled by native BW functionality or something I build to emulate it - see the sketch below).
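For what the emulated variant could look like at the database level, a hedged sketch with hypothetical names: one structurally identical DSO per year, combined for reporting through a UNION ALL view - essentially what the semantically partitioned object automates in BW 7.3:

-- Hypothetical per-year DSO active tables combined into one reporting view.
CREATE VIEW "ZSALES_ALL" AS
SELECT * FROM "/BIC/AZSALES1000"   -- 2010 slice
UNION ALL
SELECT * FROM "/BIC/AZSALES1100"   -- 2011 slice
UNION ALL
SELECT * FROM "/BIC/AZSALES1200";  -- 2012 slice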

I'm just wondering when and if there's a wall I'm going to hit at, say, 100M records or 250M records, where DSO performance suddenly goes south. Maybe there really isn't a magic number ...