Technology Blogs by Members
Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix!
Hi all,

I would like to share what I learned while building reports in the Finance module, and in doing so show a way to create Finance reports that supports dynamic reporting.

I replicated data from S/4HANA to SAP Datasphere (formerly SAP Data Warehouse Cloud).
Data modelling was done in a separate space in SAP Datasphere using remote tables, local tables created from flat files, and Flows (previously known as Data Flows). The financial reports were created in SAP Analytics Cloud on top of the Datasphere data via a live connection.

All these reports use separate Analytical Datasets (now deprecated; use the SAP-recommended approach of creating Facts and Analytic Models instead).

To explain the linkage between the tables and the data, I will briefly describe the connectivity as follows:

Points 1-7 develop the complete master data; in point 8, that master data is joined with the transaction data to fetch the needed dimensions and measures.

  1. A Financial Statement Version (FSV) must be created that shows the Balance Sheet and Profit & Loss statements as per the client’s hierarchy. In most cases, the combination of these two reports constitutes the FSV.

  2. A Financial Statement Version contains the G/Ls in a parent-child hierarchy with multiple levels; the number of levels varies by client.

  3. This hierarchy data is stored in the table HRRP_NODE in the fields PAR_NODE and HRY_NODE, with the node descriptions in the table HRRP_NODET.

  4. Based on the number of hierarchy levels in the FSV, HRRP_NODE must be joined to itself repeatedly (HRRP_NODE-PAR_NODE to HRRP_NODE-HRY_NODE) so that all hierarchy levels, from the top header down to the G/L level, are fetched.

  5. The table FINSC_FAGL2SEMTA maps semantic tags to the nodes of the Financial Statement Version, denoting the nature of the G/Ls. Each record links an FSV node to its semantic tag. For example, the semantic tag PL_RESULT comprises all the G/Ls treated as Net Profit G/Ls.

  6. After joining the tables HRRP_NODE and FINSC_FAGL2SEMTA, the expected result is a data set containing G/Ls, FSV nodes and semantic tags.

  7. Connecting the above data with SKA1 (the G/L master table) completes the master data with most of the necessary fields and dimensions.

  8. Joining this master data with ACDOCA (the transaction table) yields a data set comprising G/Ls, Profit Centres, Cost Centres, Company Code, Amount, hierarchy nodes and semantic tags.

  9. You can also add the company code description and country name by joining T001 and T005T to ACDOCA on BUKRS and LAND1, fetching BUTXT and LANDX for the company code description and country name respectively.

  10. With all this data, it is possible to build reports such as Profit and Loss statements, Balance Sheets, Cash Flow statements, Finance dashboard KPIs and much more, with the desired hierarchy.

  11. Carry-forward, MTD and YTD for specific years or for all years (based on the customer’s request) can be achieved with calculations in Datasphere and SAC stories.
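The repeated self-join in point 4 can be sketched in SQL. The following is a minimal, runnable illustration using Python's built-in sqlite3; the table and field names (HRRP_NODE, PAR_NODE, HRY_NODE) follow the post, but the sample hierarchy rows are invented, and a recursive CTE stands in for the fixed number of explicit self-joins you would model graphically in Datasphere:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE HRRP_NODE (PAR_NODE TEXT, HRY_NODE TEXT);
-- Invented three-level sample: root -> Assets -> two G/L-level nodes
INSERT INTO HRRP_NODE VALUES
  (NULL,       'FSV_ROOT'),
  ('FSV_ROOT', 'ASSETS'),
  ('ASSETS',   '0010100000'),
  ('ASSETS',   '0010200000');
""")

# Walk the parent-child pairs level by level, from the top header
# down to the G/L-level nodes, keeping the full path and the depth.
rows = con.execute("""
WITH RECURSIVE flat(node, path, depth) AS (
  SELECT HRY_NODE, HRY_NODE, 0
  FROM HRRP_NODE WHERE PAR_NODE IS NULL
  UNION ALL
  SELECT h.HRY_NODE, f.path || ' > ' || h.HRY_NODE, f.depth + 1
  FROM HRRP_NODE h JOIN flat f ON h.PAR_NODE = f.node
)
SELECT node, path, depth FROM flat ORDER BY path
""").fetchall()

for node, path, depth in rows:
    print(depth, path)
```

The recursive form handles any hierarchy depth; with the repeated-join approach described in point 4, you would add one join per FSV level instead.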

Fig.1 : Things to fetch by joining these tables
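The join chain of points 8-9 can be sketched the same way. Here the flattened master data is reduced to one invented row, and tiny stand-ins are used for ACDOCA, T001 and T005T; the field names (RACCT, RBUKRS, HSL, BUKRS, BUTXT, LAND1, LANDX) follow the standard tables, but all sample values are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Simplified master data from points 1-7: G/L account, FSV node, semantic tag
CREATE TABLE MASTER (SAKNR TEXT, NODE TEXT, SEMTAG TEXT);
INSERT INTO MASTER VALUES ('0010100000', 'ASSETS', 'PL_RESULT');

-- Journal-entry line items (a tiny stand-in for ACDOCA)
CREATE TABLE ACDOCA (RACCT TEXT, RBUKRS TEXT, HSL REAL);
INSERT INTO ACDOCA VALUES ('0010100000', '1000', 250.0),
                          ('0010100000', '1000', -50.0);

-- Company code master (T001) and country names (T005T)
CREATE TABLE T001  (BUKRS TEXT, BUTXT TEXT, LAND1 TEXT);
INSERT INTO T001  VALUES ('1000', 'ACME Ltd', 'DE');
CREATE TABLE T005T (LAND1 TEXT, LANDX TEXT);
INSERT INTO T005T VALUES ('DE', 'Germany');
""")

row = con.execute("""
SELECT m.NODE, m.SEMTAG, t.BUTXT, c.LANDX, SUM(a.HSL) AS amount
FROM MASTER m
JOIN ACDOCA a ON a.RACCT = m.SAKNR      -- master data -> line items
JOIN T001   t ON t.BUKRS = a.RBUKRS     -- company code description
JOIN T005T  c ON c.LAND1 = t.LAND1      -- country name
GROUP BY m.NODE, m.SEMTAG, t.BUTXT, c.LANDX
""").fetchone()
print(row)
```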

This method is complex to build but pays off in performance, stability and accuracy of the data. Its main appeal is that all the dimensions required for a finance report, including hierarchy reporting, become achievable.

This method performs better than a static approach to building the same reports. It also picks up any new additions to the G/Ls or the hierarchy automatically, eliminating manual corrections. Hence, we can be confident that no major changes to the objects will be needed in the future.
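The carry-forward, MTD and YTD figures mentioned in point 11 reduce to running totals over the fiscal periods. A minimal sketch with invented monthly totals and an invented opening balance (in practice these would be calculated columns in Datasphere or the SAC story):

```python
from itertools import accumulate

# Invented monthly posting totals for one fiscal year (six periods shown)
monthly = [100.0, 80.0, -20.0, 50.0, 40.0, 10.0]
carry_forward = 500.0  # invented opening balance from the prior year

mtd = monthly                    # month-to-date = the period total itself
ytd = list(accumulate(monthly))  # running total across the periods
# Balance-sheet style cumulative balance: carry-forward plus YTD movement
balances = [carry_forward + y for y in ytd]

print(ytd[-1])       # 260.0
print(balances[-1])  # 760.0
```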

I hope this blog helps you understand the linkage between individual G/Ls and Financial Statement Versions, and to use it in your developments as required, thereby reinforcing the client’s decision to trust SAP and us.


Sidharth T
Hello Sidharth,

Interesting approach... it might be worth trying the same again to gain experience here.

I have two questions:

  1. Would you implement it the same way today? It reads as though you built this before March 2023, when Data Warehouse Cloud had different capabilities.

  2. Why did you not use live reporting in SAC via the CDS view C_FINANCIALSTATEMENTKPI, as is done in the SAC content "FI (Live based on Semantic Tags) Dashboard"?

BR, Martin
Hi Martin,

Thanks for reading and sharing your thoughts.

As for your queries,

1. If I were implementing this after the recent SAP Datasphere updates, the method would vary slightly, as SAP suggests converting Analytical Datasets into Analytic Models via Facts. In the new method, I would use the ACDOCA table as the Fact, as it has many of the needed primary keys.

2. Going with a CDS view is certainly a good method, but I preferred to model it this way because the hierarchy headers in the report formed a custom hierarchy, and the semantic tags were also customized somewhat from the standard.

Thanks again,
Interesting article. I find this valuable because it is dynamic and can support any change in the G/L accounts or semantic tags. It is almost future-proof against changes in the business environment.

Siva Prakash.