on 2013 Oct 03 3:46 PM
Hello Experts,
We recently installed the HANA Live content, virtual data models, and HANA Live browser (all the good things) and started replication with SLT in a side-by-side scenario with the Enterprise edition.
I was surprised to see that no attribute views came as part of the virtual data models. Everything is a calculation view, and even the simpler data models seem to have a large number of joins (at times 10+).
I'm wondering why SAP didn't deliver a single "attribute view" and has modelled everything as calculation views.
Even to get some of the basic master data, we need to do enhancements, which could be a pain.
Any insight?
Thanks.
Abhijeet
Abhijeet,
Did you review the HANA Live model documentation at http://help.sap.com/hba? Which module of HANA Live (CRM, core ECC, etc.) are you implementing? Since General Availability was only last month, you might want to get the latest documentation links via an OSS customer message or your internal SAP account manager.
Hope this helps.
Regards,
Rama
@Rama - HANA Live for ECC went GA in Dec 2012. It is currently on SP 03, released in Aug 2013. We have reviewed all the necessary documents on the /hba site.
@Ravindra - Great reasoning from the performance perspective. But it's a big thumbs down from the reusability perspective, especially for master data. In a classic BW scenario, you know that once 0MATERIAL is set up, all of its attributes are available on all transactional data. But here, we have to add individual attributes or, in many cases, a complete join to the Material calculation view. To me this is going backwards.
Also, since the HANA Live views are not attribute views, in mixed scenarios we cannot use those views/tables as virtual master data in BW.
Hi Abhijeet,
Last year, during my discussions with the HANA development team, it was mentioned that when generating HANA information models from DSOs / InfoCubes, the Analytic views / Calc views would not contain any attribute views, due to the complexity of maintaining the reusability aspect. Currently, the models generated from imported BW objects contain ALL the InfoObject tables (P tables and T tables) directly in the Analytic views. If an InfoObject is used in multiple DSOs, then the same P and T tables are repeated in each of the generated Analytic views. I think the same approach might have been used in the HANA Live content.
But that was last year. This year we have been told that with the new SP07 of HANA and BW 7.4, attribute views would be created for the InfoObjects. I'm not yet sure whether those attribute views will be used in the generated Analytic views, but at least the first step of generating attribute views is expected to be taken.
I'd suggest you discuss along similar lines with SAP and check whether attribute views are likely in the next releases / SP07.
Regards,
Ravi
Thanks, Ravi, for the insight. I think that makes sense. We are on 7.4 and SPS06. I will check with SAP regarding the future direction and whether an upgrade of the HANA Live content will be applicable.
My problem is that if I start exposing a lot of the missing master data pieces in the transactional calculation views, my upgrade options will be diminished.
Hi Ravindra, just one quick note on your comment -
"Attribute views are good from the perspective of maintenance and development standards / reusability, but from the performance perspective, when the attribute views are used in Analytic views and Calc views, they are resolved to table joins. Hence it doesn't matter if the HANA views use Attribute views or tables in the view definitions."
Unfortunately this is not the case (although I wish it were). We recently came across awful performance problems in a very simple analytic view. We moved the tables into the Data Foundation rather than joining them as attribute views, and performance improved significantly. This is, of course, contrary to modeling "best practice".
One interesting exercise is to create the world's simplest/smallest star schema with one fact table and one dimension table, and model in the following ways:
1) Analytic View with Attribute View
2) Analytic View with dimension table in Data Foundation
3) Calculation View (with base tables or An/At Views)
4) Calculation View, executed in SQL Engine (option in Properties pane)
5) SQL
What's interesting is that every single approach generates a different visualization plan for respective queries - except for options 4 and 5 (which shows that the SQL Engine option for graphical Calc Views does indeed work).
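For anyone who wants to reproduce the exercise, here is a minimal sketch of the kind of schema and query involved. All table, column, and view names below are made up for illustration and are not from an actual system:

```sql
-- Hypothetical one-fact/one-dimension star schema for the exercise above.
CREATE COLUMN TABLE FACT_SALES (
    CUSTOMER_ID INTEGER,
    AMOUNT      DECIMAL(15,2)
);
CREATE COLUMN TABLE DIM_CUSTOMER (
    CUSTOMER_ID INTEGER PRIMARY KEY,
    REGION      NVARCHAR(20)
);

-- Option 5 (plain SQL): the query that options 1-4 would model graphically.
SELECT d.REGION, SUM(f.AMOUNT) AS TOTAL_AMOUNT
  FROM FACT_SALES f
  JOIN DIM_CUSTOMER d ON d.CUSTOMER_ID = f.CUSTOMER_ID
 GROUP BY d.REGION;
```

Running the same aggregation through each of the five variants and comparing the visualization plans is what surfaces the differences described above.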
It appears there is more than one way to skin a cat...
Cheers,
Jody
Hi Jody,
Thanks for the insight. It is quite helpful and I agree with your observation. Recently we too experienced a performance improvement when the tables were moved into the data foundation of an Analytic view.
My comment was based on my interpretation of an answer by Lars in the discussion:
If my Analytic View foundation is joined to attribute views, is both the OLAP and JOIN Engine used?
A2: Nope - during activation of the analytic views, the joins in the attribute views get 'flattened' and included in the analytic view run time object. Only the OLAP engine will be used then.
So I interpreted it as follows: it shouldn't matter whether the table is included via an attribute view or as part of the data foundation. If the joins are flattened and included in the Analytic view runtime object, then there should not be any impact on performance.
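For what it's worth, one way to verify that interpretation on a given revision is to compare the plans directly. This is only a sketch; the view name is hypothetical and the exact columns of EXPLAIN_PLAN_TABLE can vary between revisions:

```sql
-- Sketch: see how HANA plans an aggregation against an analytic view
-- ("_SYS_BIC"."pkg/AN_SALES" is a made-up activated view name).
EXPLAIN PLAN SET STATEMENT_NAME = 'an_view_check' FOR
SELECT REGION, SUM(AMOUNT)
  FROM "_SYS_BIC"."pkg/AN_SALES"
 GROUP BY REGION;

-- Inspect which operators (and hence which engine) were chosen.
SELECT OPERATOR_NAME, OPERATOR_DETAILS, EXECUTION_ENGINE
  FROM EXPLAIN_PLAN_TABLE
 WHERE STATEMENT_NAME = 'an_view_check';
```

If the joins were truly flattened at activation time, the plan should look the same regardless of whether the dimension came in via an attribute view or the data foundation; Jody's results suggest this doesn't always hold in practice.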
Thanks for your comment.
Regards,
Ravi
Just to add some more information here regarding testing out multiple ways of implementing a model. This all came about when we realized that all of HANA Live is delivered entirely as calculation views, as the OP brought up. I have yet to hear a good reason why, and we constantly hear that 'best practice' is to always use an analytic view if possible. So it was surprising that even simple HANA Live views are built as calculation views.
"when the attribute views are used in Analytic views and Calc views, they are resolved to table joins. Hence it doesn't matter if the HANA views use Attribute views or tables in the view definitions."
I recently set up a similarly structured test geared towards the performance of a simple Analytic View (4 dimensions) vs. the exact same functionality implemented in a calculation view as HANA Live would deliver it (a series of joins).
My process was to issue queries against these views: using the fact table only, then using 2 fields from one dimension, then two dimensions, then three dimensions, and so on (see the sketch below). What I found is that as the number of joins increased, the performance of the calculation view deteriorated more sharply than that of the Analytic view. In fact, the Analytic view's performance remained almost constant even with more joins, while the calc view saw a 2x - 4x performance degradation.
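To make the procedure concrete, the query series looked roughly like the following; the view and column names here are invented for illustration:

```sql
-- Step 1: measures from the fact table only.
SELECT SUM(AMOUNT) FROM "_SYS_BIC"."pkg/AN_SALES";

-- Step 2: add fields from one dimension.
SELECT REGION, COUNTRY, SUM(AMOUNT)
  FROM "_SYS_BIC"."pkg/AN_SALES"
 GROUP BY REGION, COUNTRY;

-- Step 3: add a second dimension, then a third, and so on.
SELECT REGION, COUNTRY, PRODUCT_CATEGORY, SUM(AMOUNT)
  FROM "_SYS_BIC"."pkg/AN_SALES"
 GROUP BY REGION, COUNTRY, PRODUCT_CATEGORY;
```

The same series was then run against the equivalent calculation view, which is where the 2x - 4x degradation showed up as joins were added.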
Just thought I would share my findings in support of Jody's comment that there is more than one way to do anything in HANA.
Thanks,
Justin
Generating the attribute views would also be helpful for the auto-generated BusinessObjects Universe based on HANA. Starting in 4.1, the IDT examines the underlying SAP HANA model when it designs the IDT Business Layer. It uses the attribute views in HANA to determine which objects are reusable and which are unique. In short, it tries to avoid proliferating the attribute columns in a Universe that is based on multiple information views.
If importing the BW content resulted in the creation of Attribute Views, Analytic Views and Calculation Views, the Universe design process could be greatly simplified.
Do you know the version of SAP HANA that was used in your investigation? I have seen the optimizer make bad decisions in some SPS4 and SPS5 versions of HANA. I performed a similar test on revision 62 and found zero difference between the plans when using an attribute view or a join in the Data Foundation. The OLAP engine processed both information views exactly the same.
Hey Jody,
clearly, the modelling "best practice" aims at an imaginary common/average use case.
A star schema with just one dimension table is an edge case.
The very problem of the star query (applying filters on multiple joined tables at once to the one big table where the aggregation takes place) is just not present in this scenario.
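To illustrate the star query problem with a generic sketch (all names hypothetical):

```sql
-- The classic star query: filters on several small dimension tables
-- must all be pushed into one large fact table before aggregation.
SELECT SUM(f.AMOUNT)
  FROM FACT_SALES f
  JOIN DIM_CUSTOMER c ON c.CUSTOMER_ID = f.CUSTOMER_ID
  JOIN DIM_PRODUCT  p ON p.PRODUCT_ID  = f.PRODUCT_ID
 WHERE c.REGION   = 'EMEA'
   AND p.CATEGORY = 'HARDWARE';
```

With a single dimension table there is only one filter path into the fact table, so the optimization the OLAP engine is built around has nothing to optimize.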
The SAP HANA optimizers try to be clever about edge cases (ever noticed how developers often have the tendency to focus on edge cases?) and will eventually decide to process the query in a different way.
Specifically for the OLAP engine, there are operations (POPs) that can sometimes combine the work of multiple POPs (e.g. POP1 + POP3 can sometimes be replaced with POP13).
On top of that (you know what's coming now): SAP HANA is still a very fast moving target.
Things do change a lot and often.
And if you're dealing with very specific scenarios, 'suddenly' optimizations could show up in the SAP HANA code that help these specific scenarios.
Should you come across a situation where it's rather obvious how to process the query best, but SAP HANA does it in a slower way: that would be a case for a support message in my eyes.
Cheers, Lars
Lars Breddemann wrote:
Hey Jody,
clearly, the modelling "best practice" aims at an imaginary common/average use case.
A star schema with just one dimension table is an edge case.
The very problem of the star query (applying filters on multiple joined tables at once to the one big table where the aggregation takes place) is just not present in this scenario.
Thanks Lars. As you know, I always try to reduce weird behavior to the simplest reproducible situation possible. The actual scenario that we're working with involves 4 attribute views with multiple filters. Modeling in DF rather than AT views also gives different VizPlan results and much better performance. So, the 'edge case' above is rather a 'simple case' of the same observation.
"The SAP HANA optimizers try to be clever about edge cases (ever noticed how developers often have the tendency to focus on edge cases?) and will eventually decide to process the query in a different way." Yes, as a former developer, edge cases are the ones that get you.
"Specifically for the OLAP engine, there are operations (POPs) that can sometimes combine the work of multiple POPs (e.g. POP1 + POP3 can sometimes be replaced with POP13)." Interesting, didn't know that. Thanks!
On top of that (you know what's coming now): SAP HANA is still a very fast moving target.
Things do change a lot and often.
And if you're dealing with very specific scenarios, 'suddenly' optimizations could show up in the SAP HANA code that help these specific scenarios.
"Should you come across a situation where it's rather obvious how to process the query best, but SAP HANA does it in a slower way: that would be a case for a support message in my eyes." Agreed. For now we're on Rev 61, and I know the first thing support will say: upgrade and try again. We're supposed to upgrade to 67 any day now, so I'll re-test then and potentially open a support message. If I learn anything valuable, I'll update this post.
Cheers, Lars
Hi all,
We did the same test and here are our findings.
Having calculated fields in an attribute view causes performance degradation when the view is used in a query. When you have calculated fields in an attribute view and run the visualize plan, you will see both the OLAP and CALC engines come into play.
Excluding the calculated fields from the query improved the performance. So we ended up using Data Services to populate the table itself with the calculated fields (see the sketch below) rather than creating calculated fields in the attribute view.
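As a rough sketch of that workaround, assuming hypothetical table and column names, the derived value can be persisted in the base table during the load instead of being computed in the view:

```sql
-- Persist the derived value once, at load time, instead of defining a
-- calculated attribute that pulls the CALC engine into every query.
ALTER TABLE DIM_PRODUCT ADD (PRICE_CATEGORY NVARCHAR(10));

UPDATE DIM_PRODUCT
   SET PRICE_CATEGORY = CASE
                          WHEN LIST_PRICE >= 1000 THEN 'HIGH'
                          ELSE 'LOW'
                        END;
```

After that, the attribute view exposes PRICE_CATEGORY as a plain column and queries can stay within the OLAP engine.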
Hopefully this helps everyone.
Regards
Purnaram.k
I generally (when possible!) try to perform the calculations within SLT (IUUC_REPL_CONTENT) or Data Services (Query transform) to help with the overall performance. When you avoid using calculated columns in any information view, your view is more likely to stay completely within the OLAP engine for processing. As a side note, if you're running Suite on SAP HANA (not a sidecar scenario), your options are more limited.