I am Jana Milivojević, and in this blog I would like to show you the possibilities of connecting SAP Profitability and Performance Management with other technologies. A connection between Snowflake and SAP Profitability and Performance Management can be established using both the Cloud and the On-Premise version. However, in the following example we will focus on the Cloud implementation.
I have been working on establishing the connection between SAP Profitability and Performance Management Cloud and Snowflake over the previous few months. Through a series of two connected blog posts, my colleague cojba and I would like to give you an insight into the possibilities that the connection between SAP Profitability and Performance Management Cloud and REST API services provides. In this blog you will learn more about the setup of the SAP Profitability and Performance Management Cloud modeling flow, while in the second blog we will cover the Python script and the coding needed to establish the connection.
Snowflake is a true software-as-a-service platform that can be used for data warehousing, data lakes, secure data sharing, and more. The integration between Snowflake and SAP Profitability and Performance Management Cloud therefore allows users to easily load data from Snowflake, perform various complex calculations inside SAP Profitability and Performance Management Cloud, and then send the data back into the Snowflake data warehouse.
There are two possible approaches that can be used to implement the connection on the SAP Profitability and Performance Management Cloud side. These two approaches were created to cover different use cases.
The first approach should be used when there is no problem with the format of the data coming from Snowflake, meaning the data can be used as input for SAP Profitability and Performance Management Cloud Model Tables without any need to further change, restructure, or modify it. For example, numeric columns contain only numeric values in a format SAP Profitability and Performance Management Cloud recognizes. The same applies to date and categorical values.
Using this approach, we won’t get an error when trying to store the data imported from Snowflake into SAP Profitability and Performance Management Cloud.
This Model Table is used to provide the parameters needed to establish a connection from the Python RFA, but it is also used to define the mapping between the source data and the output SAP Profitability and Performance Management Cloud Model Table. Having this Input Model Table in the setup also makes the whole process more user friendly, as the end user doesn't have to edit the Python script inside the Python RFA function in order to change parameters or the mapping. They can simply be changed inside this Model Table when needed.
We can notice that there are two different subsets of data: one relates to the parameters needed to establish the connection, while the other is used to map the fields to SAP Profitability and Performance Management Cloud:
We can also see that four different columns are used. The first column, called 'Parameter or column (PARCOLTYPE)', is there so we can define for each row whether it carries information about a parameter or is used for mapping, as seen in the image above.
The second column, called 'Parameter/Field name (PARAMCOL)', is used to define the name of the parameter or the name of the field from SAP Profitability and Performance Management Cloud we want to map a certain column to. This allows us to easily access the values we need in our Python script.
The third column, called 'Order (ORD)', allows us to map a certain column from the Snowflake data to a field from SAP Profitability and Performance Management Cloud (defined in the previous column). For example, this would mean that we want to map the first column from our Snowflake table to the 'ZPOPER' field in SAP Profitability and Performance Management Cloud. Furthermore, row number 8 indicates that we want to map the second column from the Snowflake table to the field 'RBUKRS'. This mapping should be specified for each column.
The fourth column is called 'Parameter value or field type (VALTYPE)', and here we define the value of each parameter. When it comes to mapping the fields, on the other hand, here we specify whether a certain column contains text or numerical values. This information will later help us map the values correctly.
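To illustrate how a Python script could consume such a table, here is a minimal sketch that splits the rows into connection parameters and a column mapping. The row structure follows the four columns described above, while the sample values and the "P"/"C" codes for PARCOLTYPE are hypothetical assumptions, not the actual configuration.

```python
# Hypothetical rows from the Input Model Table, one dict per row.
# PARCOLTYPE tells us whether the row is a connection parameter ("P")
# or a column mapping ("C"); these codes are illustrative assumptions.
input_rows = [
    {"PARCOLTYPE": "P", "PARAMCOL": "ACCOUNT",  "ORD": 0, "VALTYPE": "myaccount"},
    {"PARCOLTYPE": "P", "PARAMCOL": "DATABASE", "ORD": 0, "VALTYPE": "SALES_DB"},
    {"PARCOLTYPE": "C", "PARAMCOL": "ZPOPER",   "ORD": 1, "VALTYPE": "TEXT"},
    {"PARCOLTYPE": "C", "PARAMCOL": "RBUKRS",   "ORD": 2, "VALTYPE": "TEXT"},
]

def split_rows(rows):
    """Separate connection parameters from the column mapping."""
    params = {r["PARAMCOL"]: r["VALTYPE"] for r in rows if r["PARCOLTYPE"] == "P"}
    # Map the source column position (ORD) to the target field and its type.
    mapping = {r["ORD"]: (r["PARAMCOL"], r["VALTYPE"])
               for r in rows if r["PARCOLTYPE"] == "C"}
    return params, mapping

params, mapping = split_rows(input_rows)
# params  -> {"ACCOUNT": "myaccount", "DATABASE": "SALES_DB"}
# mapping -> {1: ("ZPOPER", "TEXT"), 2: ("RBUKRS", "TEXT")}
```

Keeping this split in one helper means the script never hard-codes parameter values or field names; changing the Model Table is enough to repoint the connection.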
The input of the Python RFA function is the Input Model Table explained above. We also need to set the selection fields to match the fields of our input table. In the Python rules we use a Python script to connect to the Snowflake API and fetch the table we need. This part will be explained in the second blog of this series.
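While the full script is covered in the second blog, the shape of a call against Snowflake's SQL API can be sketched roughly as follows. The account name, token, and statement are placeholders, and the authentication flow (e.g. key-pair JWT or OAuth) is deliberately glossed over here; consult the Snowflake SQL API documentation for the exact details.

```python
def build_statement_request(account, token, statement):
    """Assemble the URL, headers, and body for a Snowflake SQL API call.

    The endpoint path follows Snowflake's SQL API v2; account and token
    are placeholders for values taken from the Input Model Table.
    """
    url = f"https://{account}.snowflakecomputing.com/api/v2/statements"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    body = {"statement": statement, "timeout": 60}
    return url, headers, body

url, headers, body = build_statement_request(
    "myaccount", "dummy-token", "SELECT * FROM SALES_DB.PUBLIC.REVENUE"
)
# Actually sending it would then be e.g.:
#   requests.post(url, headers=headers, json=body)
```

Separating request construction from sending also makes the connection logic easy to unit-test without network access.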
Finally, to complete the setup for our first approach we also need an Output Model Table:
In this table we will define all the fields used during the mapping process. This table will be the output of our Python RFA function, and will therefore contain the data we got from Snowflake.
The second approach should be used when the data we want to upload to SAP Profitability and Performance Management Cloud doesn't have consistent formatting, or has formatting that isn't compatible with SAP Profitability and Performance Management standards. This is where the Writer function steps in and helps us solve the problem.
In the following graph we can see how we handle data with unsupported formatting. If numerical values contain non-numerical characters such as '$', we first save them in the Generic fields, change the formatting through the Writer function, and then save the updated values in the correct Key Figure fields.
This problem often occurs when we try to store non-numeric values in Key Figure fields, which causes an error within SAP Profitability and Performance Management Cloud. Therefore, to overcome this, in this approach we use an additional Model Table that consists of 'Generic fields'.
This Model Table, along with the information it contains regarding the parameters needed for the connection, is used as input for the Python RFA function.
As the format of the data doesn't comply with SAP Profitability and Performance Management standards, we need to create a Model Table that will serve as the output of the Python RFA and can accept data in any format.
That is why in this Model Table we use what we call 'Generic fields'. Generic fields are environment fields of CHAR type, set up with very long data lengths. That way, we can store all types of data in these fields (numbers, dates, etc.), as SAP Profitability and Performance Management Cloud will treat them as plain text. The number of Generic fields we create must be equal to or greater than the number of columns in the source Snowflake table. Furthermore, the length of each field must be large enough to store each record of the source data.
Once we have an Output Model Table with Generic fields, we can create the Writer, which will help us convert the data to the desired format. In the Writer we also map the Generic fields to the actual fields we want to use. Finally, we have the Model Table that contains the result: well-formatted data that can be used in further SAP Profitability and Performance Management Cloud calculations.
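The kind of conversion the Writer performs can be illustrated with a small sketch: stripping formatting characters from a Generic (text) field so the value fits a numeric Key Figure field. The field names and the '$'/thousands-separator handling below are illustrative assumptions, not the actual Writer configuration.

```python
def to_key_figure(raw):
    """Convert a Generic-field string such as '$1,234.56' to a float.

    Strips a leading currency symbol and thousands separators; this
    mirrors the kind of cleanup the Writer mapping would perform.
    """
    cleaned = raw.strip().lstrip("$").replace(",", "")
    return float(cleaned)

# A hypothetical record read from Snowflake into Generic fields as plain text.
generic_row = {"GENERIC1": "2023001", "GENERIC2": "$1,234.56"}
result_row = {
    "ZPOPER": generic_row["GENERIC1"],                 # text field: copied as-is
    "AMOUNT": to_key_figure(generic_row["GENERIC2"]),  # numeric Key Figure field
}
# result_row["AMOUNT"] -> 1234.56
```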
Both approaches are designed in a way that allows them to be used in various cases without much effort. Therefore, if you want to get the data from multiple Snowflake tables, you can simply copy the set of functions and then change the data inside of the Input Model Table, as well as the fields used in the functions. This way, you don’t need to change anything in the code itself and can easily adapt the solution to your needs.
Furthermore, as there are numerous API solutions available on the market, this approach can be used to connect SAP Profitability and Performance Management Cloud to other API services.
Besides using Snowflake to get data, we can also send data into Snowflake. After performing some calculations in SAP Profitability and Performance Management Cloud, we can set the result function as the input of another Python RFA. Using the Python code, we can establish the connection to Snowflake and send the data to it. Stay tuned and read more about this in the second blog of the series.
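As a rough preview of that write-back direction, the result rows could be turned into an INSERT statement that is then submitted through the same API connection. The table and column names here are hypothetical, and a production script should use bind variables rather than the naive string quoting shown in this sketch.

```python
def build_insert(table, rows):
    """Build a multi-row INSERT statement from result rows.

    `rows` is a list of dicts sharing the same keys. Values are quoted
    naively for illustration -- real code should use bind variables.
    """
    columns = list(rows[0])
    def quote(v):
        # Numbers pass through; everything else is quoted with '' escaping.
        return str(v) if isinstance(v, (int, float)) else "'" + str(v).replace("'", "''") + "'"
    values = ", ".join(
        "(" + ", ".join(quote(r[c]) for c in columns) + ")" for r in rows
    )
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values}"

stmt = build_insert("SALES_DB.PUBLIC.RESULTS",
                    [{"ZPOPER": "2023001", "AMOUNT": 1234.56}])
# stmt -> "INSERT INTO SALES_DB.PUBLIC.RESULTS (ZPOPER, AMOUNT) VALUES ('2023001', 1234.56)"
```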