
Time to take a breather and provide some inputs for my prototype

Former Member

Hello everyone,

I have something to share and need your thoughts and ideas. I am a grad student at California State University, Chico, currently pursuing an MBA in Management Information Systems. The SAP focus that permeates every molecule of the MIS department (College of Business, CSU Chico) has driven me to pursue a career in SAP irrespective of industry norms, prerequisites, politics, good press, monetary benefits, etcetera. By pursuing a career I mean gaining maximum knowledge and living life at its simplest level. Staying away from triviality, in a nutshell...

I am going to summarize a model of mine that is still quite fresh and took about 30 minutes of my time to develop. I've been circulating it through emails and on some private forums since October 2009, so it's already on the Internet and dated (FYI).


Increase query performance (and provide specific insight) through segregated exception reporting

This involves either a dual data warehouse approach or two groups of DSOs (the DSO approach is probably more reasonable and cost-effective, though).

1) Strategic DW/DSO: will contain good data (statistical calculations will always result above break-even)

2) Exception DW/DSO: will contain bad data (statistical calculations will always result below break-even)

Initial Infrastructure:

1) Two DWs or two groups of DSOs (Strategic and Exception)

2) Data not yet pulled

3) Set a break-even threshold based on historical data sets (a management-level decision). Once decided, embed it in the extractor-level logic (code) so that source data is routed into the appropriate compartments (DWs/objects): values greater than or equal to break-even go to the Strategic compartment, and values below break-even go to the Exception compartment.

4) Querying against the strategic level would always give "green" signals, whereas querying the exception-level data objects would give red. We thereby eliminate the concept of "orange" exceptions. So we no longer allow the recipient of an orange exception state to say, "Oh! The patient has a kidney problem, but it isn't that big a deal right now. Let it be!" That does not make sense to me. This concept would function like a diode (1 or 0): either we have an issue or we don't. Period.

5) Exception and strategic InfoCubes, if needed.

6) The precedence of performance-enhancement techniques would then be roughly:

a) BI Accelerator (In memory)

b) DSO/ DW splitting (The above concept)

c) Aggregates

d) Partitioning
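The extractor-level routing in step 3 can be sketched in a few lines. This is only an illustration of the idea, not an actual SAP extractor: the threshold value, field names, and function are all assumptions made up for the example.

```python
# Illustrative sketch of the proposed break-even routing: records at or
# above the threshold land in the "strategic" store, the rest in the
# "exception" store. All names and the threshold are hypothetical.

BREAK_EVEN = 100.0  # assumed management-defined threshold


def route_records(records, break_even=BREAK_EVEN):
    """Split source records into strategic (>= break-even) and
    exception (< break-even) partitions, mimicking the proposed
    extractor-level logic."""
    strategic, exception = [], []
    for rec in records:
        if rec["value"] >= break_even:
            strategic.append(rec)
        else:
            exception.append(rec)
    return strategic, exception


source = [
    {"id": 1, "value": 150.0},
    {"id": 2, "value": 80.0},
    {"id": 3, "value": 100.0},  # exactly break-even goes to strategic
]
strategic, exception = route_records(source)
```

Because the split is binary, any query against `strategic` can only ever show "green" and any query against `exception` only "red", which is exactly the diode behaviour described in step 4.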

Benefits from this approach:

• Achieving focus on activities that crave attention but never used to win it (exceptions), on a real-time basis

• Performance enhancement (better analysis)

• Division of loads/queries across two DSOs/warehouses

• Reduced time for analysis

• Adding BIA, aggregates, and partitioning would improve performance even further

Drawback and Solution:

It might be costly. But the costs could be cut down, since we are using the SAP BOBJ reporting tools, which, according to my current reading, definitely cut a lot of IT jobs. Along with that, I also propose that a firm's profitability will increase with an increase in focus. The excess costs should be offset by the increase in profitability due to this approach.

Furthermore, my lack of knowledge may have been at play while creating this model, so please don't respond with "Huh! Are you kidding me?" I know I might be stupendously mistaken, but there's nothing at stake.

The End

I am not a hardcore techie, as I have not yet smelled the industry, a.k.a. my hands aren't greasy with SAP yet, so please don't complicate things by explaining code to me... Please keep it layman-specific so that knowledge diffuses well into the environment around us... Remember, we are not living/breathing to show our skills but to share them...

I am just passionate about life and knowledge acquisition. And I am willing to shed my sweat, blood and tears for it without having an iota of fear to go wrong!

Thanks and Regards,

Akshay Thussu

Edited by: Akshay Thussu on May 15, 2010 10:25 PM



Accepted Solutions (1)


Former Member

Hello Akshay,

Sounds like a good idea, but I am not sure about the limitations of it.

Not sure whether you were expecting this kind of response, but as I am not that strong on the fundamentals of performance and cost reduction, I suggest doing a case study.

Take any standard data model that SAP BW delivers and try to mold it to the ideas you have described above.

Though you may not be able to use BIA or BOBJ, the rest of the things, like aggregates, can be used, and then you will have a better view of what you are offering as a solution.

Best Regards,

Pratap Sone

Former Member

Please bounce over to the following thread, Pratap. I have some detailed analysis you might want to look at. Leisure reading, I must say!

Thank you very much!

I am here to become aware of the limitations, and that's it! I've been researching the transition from SAP BEx to BOBJ for 8-9 months now, in my own little ways. I do have access to the BOBJ server, as I am doing this research (an independent study) at my university, CSU Chico (SAP Alliance).

I was thinking of ending my research with my own data model and making a pitch at the very end. Since the inception of my research on the transition I've been dreaming of coming up with something new.

If Bill Inmon could do it, why can't we? What's at stake in thinking like that, right?

That's why I am here! So you have hit the nail right on the head with your idea.

Thank you very much Sir! I really appreciate your time and consideration.

Answers (1)


Former Member

Hi Akshay.

Interesting stuff!

Some comments that immediately came to my mind when reading this:

1) Query performance will improve by splitting one large set in half, making two smaller sets

OK, in general, less data makes for better performance, sure. Thus, as long as you can guarantee that users will always query either "good" or "bad", it will make sense to split the model into these two chunks. But that would be nothing new, as we already design things like that when we have that kind of certainty...

However, you will always find the need to see good and bad together, because business users will be doing things like calculating relative expressions (e.g. value of "bad" / value of "good") to measure how much of a headache the bad things actually are. And you cannot be sure that querying one big table is slower than querying the two halves and combining the results. It depends...
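That relative-expression point can be made concrete with a small sketch: once good and bad live in separate stores, even a simple ratio forces the query layer to touch both sides and combine them. The function and data here are purely illustrative.

```python
# If "good" and "bad" data live in separate stores, a relative measure
# like bad/good still needs both sides: fetch each partition's total,
# then combine. Names and values are hypothetical.

def bad_to_good_ratio(good_values, bad_values):
    """Combine aggregates from the two partitions into one relative
    measure (value of "bad" / value of "good")."""
    good_total = sum(good_values)
    bad_total = sum(bad_values)
    if good_total == 0:
        return None  # avoid division by zero when "good" is empty
    return bad_total / good_total


# Two partitions that, in the proposed model, would be separate DSOs:
ratio = bad_to_good_ratio([150.0, 100.0], [50.0])
```

So the split does not remove the cross-partition query; it just moves the union into the reporting layer (in BW terms, something like a MultiProvider over the two DSOs).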

2) No orange-level exception

You will need it. Always. OK, it would just be another DSO (or DW) in your model... I completely agree with you that it does not make sense to argue, "Oh! These guys have a kidney problem, but it's no big problem yet, so let it be!" However, that is the reality of the healthcare sector: you can only treat as many as you have room/time/money for, and so you HAVE to make a choice. In this particular kidney example, it might even make more sense not to report the red ones, because they have the largest problem, making them likely the most difficult to successfully cure. Hence, treatment should go to the orange ones, while you might as well give up on the red ones from the start. Like this, you maximize "health per dollar" but become extremely unpopular for other reasons (and if you are in politics, popularity seems to be the only thing that matters these days, but that is another discussion...).

3) Hardcoded exception levels

Your model filters data into good and bad at the backend. That is great from a reporting perspective, because now we do not have to do it on the fly in the report; it's already taken care of when the data is loaded. True, but what if the cut-off point changes, thus redefining when stuff is good or bad? You have no easy way of re-categorising your data; you will have to reload all the records. That takes time, maybe not much, but certainly far longer than it would take the user to change a value on the variable screen when running the report, which is all the user would have to do if you defined the exception limits in the frontend, using a variable for the user to fill at runtime...
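The frontend-variable alternative described here can be sketched as a query-time filter: the cut-off is just a parameter supplied at runtime, so changing it needs no reload. The function and data are illustrative, not a BEx implementation.

```python
# Filtering at query time with a user-supplied threshold (the
# "variable screen" approach): a new cut-off is just a new argument,
# not a reload of the data store. All names are hypothetical.

def query_exceptions(records, cutoff):
    """Return records below a runtime cut-off, the frontend-variable
    alternative to hardcoding the split at load time."""
    return [r for r in records if r["value"] < cutoff]


data = [{"id": 1, "value": 150.0}, {"id": 2, "value": 80.0}]

# Same stored data, two different cut-offs, zero reloads:
low = query_exceptions(data, 100.0)   # only record 2 is an exception
high = query_exceptions(data, 200.0)  # both records fall below 200
```

Compare this with the backend split, where moving the cut-off from 100 to 200 would mean re-routing every record between the two stores.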

So the point is that your model is very inflexible. Sure, you can find cases where the limits never or seldom change, and there what you are proposing makes sense, but then comes the question of how you are going to handle "what-if" scenarios:

Your model has no way of letting users play with the limits, and finance users especially love doing that; if you don't provide the means, they will find another way. They always do: Excel, Access, Cognos, whatever... Even when you provide the means for doing exactly what they want, you will find users going to great lengths to find other ways of doing it, simply because no money is spent on change management and training.

4) Higher profitability due to focus on problem areas.

Well, I am not sure this is particular to the model you are proposing; we already have exception reporting, so the knowledge of problem areas already exists. Your point is that you want to do it another way... OK, if current exception reporting is too slow, then sure, an improvement in query run time will enable the business to take better care of the bad stuff. But long-running queries can be dealt with in so many ways, of which changing the data model will be one of the last to go for, because it almost always means that users have to do things in a new or different way, and changing people's habits is very hard indeed (= lots of money for change management).

5) Changing data at the extraction level

You are proposing to take care of the filtering into two DSOs at the extractor level. It might be religion, but I don't like to mess with the data at this level. I want a 1:1 copy of the source, with my logic applied after that. This enables me to reload from within my DW if for some reason I need to, and I can do it without bothering the source system.
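The staging pattern described here (land an untouched 1:1 copy, apply the split downstream) can be sketched as follows; the functions and records are illustrative assumptions, not an actual BW staging layer.

```python
# A 1:1 staging layer: persist the source unchanged, then apply the
# good/bad split downstream. A changed threshold only re-runs the
# split against the staged copy; the source system is never touched.

import copy


def stage(source_records):
    """Persist an untouched copy of the source (the 1:1 layer)."""
    return copy.deepcopy(source_records)


def split(staged, break_even):
    """Apply the good/bad routing after staging, not at extraction."""
    good = [r for r in staged if r["value"] >= break_even]
    bad = [r for r in staged if r["value"] < break_even]
    return good, bad


source = [{"id": 1, "value": 120.0}, {"id": 2, "value": 60.0}]
staged = stage(source)

g1, b1 = split(staged, 100.0)  # original cut-off
g2, b2 = split(staged, 150.0)  # new cut-off: no re-extraction needed
```

The design choice is exactly the trade-off in the thread: filtering at the extractor saves one hop, but filtering after a 1:1 copy keeps reloads and threshold changes local to the warehouse.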

6) SAP BOBJ reporting tools cutting IT jobs.

Where did you read that? The initial effect is definitely a lot more IT jobs, because now the IT department has to provide content in the BOBJ environment AS WELL AS in the SAP environment, so more people will be needed, as your existing SAP people do not know BOBJ. OK, you can train them, but the two worlds are quite different, and even if you succeed in training your existing employees, they now have more work to do: Web-I reports and Xcelsius dashboards as well as BEx queries and web templates. Unless of course you do a total switch to BOBJ, but that would be madness (to me at least), and you would still need the same number of people to support the BOBJ frontend and the SAP BI backend.

Well, that was it from me - I don't mean to discourage you; you are thinking along the right lines, so keep doing that and keep sharing ideas!

Kind regards


p.s.: I got the paragraphs spaced nicely as well, but I think once I hit "post message" it will turn into one big blob of a post. I don't know why this happens.

Former Member

Hello Jason,

Well! I really, really appreciate your time and consideration, Sir! Truly it's an honour for me to have someone talk about this small little thought of mine. Thank you!

1) Do we already have designs in place wherein the data is fed into the warehouse AFTER being segregated at the source-system level (checkout counters, or at the time of transactions)? I thought we always work on data once it's in. Please clarify. What if you query the "bad" sectors only? How does it relate to anything else? How can we quantify a headache by stating how much of a headache it is? A headache is a headache, and it destroys efficiency in its own ways (small or large). There isn't an ibuprofen for partial headaches yet! A dollar is a dollar and will hold its value in all circumstances, and its value today is far greater than its value tomorrow, right? If Warren Buffett can pick a quarter off the ground and put it in his pocket, that means something went wrong in the system for that quarter to fall to the ground, and he made use of it. Simple! Where did I go wrong in this thought?

Sidenote: I come from a business background... in fact I don't even need one, as it's been made filthy now... Systems, policies and strict ways of doing things have a lot of energy locked in them, which needs to be released by loosening things up a bit... Orville and Wilbur Wright could do it without one, can't we? That's why I try to think laterally and question things without the fear of going wrong.

2) Lovely! Pretty amazing! Not having reds in the healthcare domain is a good idea if we think about it. But in other cases, wouldn't eliminating orange make sense? It reverts back to the same "how much of a headache" thing, doesn't it?

Also, you just added another "dimension" to our future cubes: "Popularity". Hahaha! Killer idea, isn't it? It's going into my research for sure. And don't worry, I will quote it as Jacon Jansen on SDN says, "...."

3) This is where I was thinking of adding a BIA, and I had it in parallel with the warehouse or DSO in my model diagram. So in-memory (indexing) would have "good" and "bad" indexes. But then again, the flow was first DSO/DW and then BIA... This thought of mine got detonated and redone on May 19th, 2010, while listening to Mr. Vishal Sikka and Prof. Dr. h.c. Hasso Plattner at SAPPHIRE 2010. They introduced the first of the three products (HANA, the High-performance Analytical Appliance). It has a basic concept in common with mine: having data available at the time a customer is actually checking out of the store. That was the very first statement I uttered, back in October 2009, before I created this model. I was like, why are we so far away from the source systems??? Haha. Well! SAP moved the BIA a bit, and of course its role... haha... So it changed my perspective as well. The "cut-off" point may not be an issue then; limits go away too. A comparative analysis at the extractor level (between the customer and us, with both at stake for costs incurred) could have segregated data moving into the BIA, and since it's in-memory and fast, we could assign good/bad indexes, with a parallel DSO/DW for history (archiving), and build up from there. Is that too much to ask?

4) It's not exception reporting, basically; it's more like reporting in an exception domain, so all you deal with is exceptions, and more. Yes, that's the other way I was talking about. But it's not that big a change to the data model; everything remains the same, the objects remain the same. We are changing the flow of data: we are having a toll area for the data to use the extractor bridge (like the Richmond bridge in the Bay Area) to jump onto the PSA/BIA (fast-track or cash-lane types)...

5) Hmm... true! But what if it's not 1:1? What if it's 1:1 but with a difference? You can always have MultiProviders to get a single relative view of the truth.

6) I was considering a total switch, which might come about someday! Sorry! But even if it doesn't cut IT jobs, the BOBJ reporting tools would still provide a lot. Query-time reduction will increase query performance, which will result in clearer insight; therefore more effective decision making will result, and more profit, and the extra savings could be invested in this model, the BIA, and the people working on it... I don't know, lol, is this my lack of industry experience shouting out loud? Haha

Thank you very much, Sir, for your perspective. It really means a lot to me, and I think someone needs to work on the "Blob Project" soon!!! C'mon, it's SAP after all!!


Akshay Thussu