Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
Learn how a SAC Planning model can be populated with data coming from custom calculations or Machine Learning. We describe this concept in a series of three blogs.

The blogs in the series are:

  • Accessing planning data with SAP Datasphere (this blog)

    • Create a simple planning model in SAC

    • Make the planning data available in SAP Datasphere, so that it can be used by a Machine Learning algorithm

  • Creating custom calculations or ML

    • Define the Machine Learning Logic

    • Create a REST-API that makes the Machine Learning logic accessible for SAC Planning

  • Orchestrating the end-to-end business process

    • Import the predictions into the planning model

    • Operationalise the process

Note: The hyperlinks to the other blogs of this series and to the sample repository might not yet be working for you. They will be updated as soon as those links are public / permanent.

This diagram shows the architecture and process from a high level:

The whole concept and blog series have been put together by maria.tzatsou2, andreas.forster, gonzalo.hernan.sendra and vlad-andrei.sladariu.

Intro for this blog

In the current blog, 'Accessing planning data with SAP Datasphere', we will achieve the following tasks:

  1. create a simple P&L model in SAC and add some data to it

  2. expose the data in the P&L model in Datasphere

In order to complete the steps described, you will need:

  • an SAP Analytics Cloud (SAC) instance

  • an SAP Datasphere instance

  • a Data Provisioning Agent (DP Agent) set up in your Datasphere instance

1. Create a simple P&L model

1.1 Prepare the data

We will implement the following P&L model:

  • Profit = Revenue - Cost

  • Revenue = UnitsSold * UnitPrice

  • Cost = DirectCost + IndirectCost

  • DirectCost = UnitsSold * UnitCost

UnitsSold, UnitPrice, UnitCost and IndirectCost are all inputs to the model.
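The calculation model above can be sketched in a few lines of Python. This is only an illustration of the formulas, not the blog's sample data; the input values below are made up.

```python
# Minimal sketch of the P&L calculation model described above.
# The four arguments are the model inputs; the returned values are
# the derived measures. Input numbers here are purely illustrative.

def pnl(units_sold, unit_price, unit_cost, indirect_cost):
    """Return the derived P&L measures for one period."""
    revenue = units_sold * unit_price          # Revenue = UnitsSold * UnitPrice
    direct_cost = units_sold * unit_cost       # DirectCost = UnitsSold * UnitCost
    cost = direct_cost + indirect_cost         # Cost = DirectCost + IndirectCost
    profit = revenue - cost                    # Profit = Revenue - Cost
    return {"Revenue": revenue, "DirectCost": direct_cost,
            "Cost": cost, "Profit": profit}

print(pnl(units_sold=100, unit_price=12.0, unit_cost=7.0, indirect_cost=200.0))
# Revenue 1200.0, DirectCost 700.0, Cost 900.0, Profit 300.0
```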

We have to consider two types of data:

  • actuals: this is input data that we record about the past

  • planning: this is input data that we create for the future

In the current example, we will create:

  • 12 months of planning data: 202301 to 202312

  • 15 months of actuals data: 202201 to 202303

Then, we will use the actuals for UnitPrice and UnitsSold and the plan for UnitPrice to estimate the plan for UnitsSold via external ML.

We prepare this data in csv and will later upload it to the SAC Planning model.

The data for one month looks like this:

Note that the costs are recorded as negatives. This is to allow hierarchical aggregation.

The sample data is available here.
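To make the file layout concrete, here is a hedged sketch of what a few rows for one month could look like. The column names (Version, Category, Date, Account, Value) are inferred from the mapping steps later in this blog; the exact layout of the real sample file may differ, and the values are illustrative only.

```python
# Illustrative csv rows for one month of actuals. Column names follow the
# 'Map to Facts' / 'Map Properties' steps described below; values are made up.
import csv
import io

rows = [
    ("public.Actual", "Actual", "202201", "UnitsSold",    100),
    ("public.Actual", "Actual", "202201", "UnitPrice",     12.0),
    ("public.Actual", "Actual", "202201", "UnitCost",       7.0),
    ("public.Actual", "Actual", "202201", "IndirectCost", -200.0),  # costs are recorded as negatives
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Version", "Category", "Date", "Account", "Value"])
writer.writerows(rows)
print(buf.getvalue())
```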

1.2 Define the model in SAC

This section assumes some basic familiarity with the SAC interface, so we won't add screenshots with the location of the required buttons.

We start with an empty model (Modeler -> Create new model -> New Model). This is a planning model by default.

We need one measure which we name 'Value' and keep the default decimal type. This will hold the data values.

We need one dimension, 'Account'. Here we will add members for all the different elements in the P&L model.

The dimension type is 'Account'. This will create a hierarchy allowing us to aggregate accounts hierarchically.

We add all the elements of the P&L as members of the Account dimension and specify how the computations take place (i.e. we implement the calculation model).

  • Profit and Cost will be computed by aggregation along the hierarchy (the 'Hierarchy' column specifies the element that the current element aggregates into)

  • Revenue and DirectCost will be computed by formulas

  • IndirectCost, UnitsSold, UnitCost and UnitPrice are inputs

The model we created has one data version by default: public.Actual.

We want to hold both actuals and planning values, so we add a second version.

This cannot be done from the Modeler interface. It can be done from a story where we display the model.

Before leaving the Modeler in order to create a story, I saved the new model as 'ExtSACP01-P&L Model'.

We create a new story from Stories -> Create new Canvas.

We add one table to the canvas and link it to the model we just saved ('ExtSACP01-P&L Model')

In order to create a new data version we save the story (I named it 'ExtSACP01-ModelView') then switch from edit mode to view mode.

Here we select the table widget, open the version management panel and create a new version 'Plan' with category 'Planning'.

We are ready to import the csv data into the model. Back in the Modeler app, select the Data Management workspace, then create a new import job from file and select the csv file provided.

Once the import job is created, click Set Up Import.

Go to 'Map to Facts' step.

Here we need to delete the default mapping to Version (default.Value = public.Actuals) and instead map the csv column 'Version'.

In the next step, 'Map Properties', we need to map the csv column 'Category' to the Category property of the Version dimension.

Now we are ready to run the import job which hopefully completes without any problem.

We can check the data inside the P&L model by going back to the story we created earlier.

We would like to see a little more detail. Switch to edit mode and open the right side panel. If the panel opens in 'Styling' mode, switch to 'Builder' mode.

Set up the display to show granular data. Here I have "Account" on rows, and "Measure", "Version" and "Date" on columns.

We would like to allow the planner to enter values for unit price and get estimated values for units sold via ML.

The next step to enable that is to make the data in the Planning model available in DataSphere.

Before leaving SAC, make a note of the model id for the model you created. The model id is the last part of the text in the browser bar when the model is open in the modeler view. We will need it in step 2.3.

2. Expose the data in an SAC model to DataSphere

The SAC model we just created contains 'fact data'. We want to make this fact data readable from a table - so that we can use it as input for our algorithm.

We are going to achieve this in 3 steps:

  • Create an OAuth client in SAC. SAP Datasphere will connect to this client and get data from SAC via this connection

  • Create a CDI connection to SAC in Datasphere. This will allow data from SAC to be made visible in Datasphere

  • Create a HANA view (in Datasphere) on top of the SAC model fact data

2.1 Create an OAuth client in SAC

The OAuth client is created under System / Administration -> App Integration, with the following settings:


  • Purpose: Interactive Usage and API Access

  • Access: Data Export Service

Once the OAuth Client is created, make a note of the following pieces of information. We will need them in order to create a connection from Datasphere to the OAuth Client in SAC:

  • SAC system hostname

  • OAuth Client ID (generated at creation)

  • OAuth Client Secret (generated at creation)

  • SAC token URL (displayed at the top of the App Integration page in SAC)
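For context, the client-credentials flow that Datasphere will perform against this OAuth client boils down to a single URL-encoded POST to the token URL. The sketch below builds (but does not send) such a request; the token URL, client ID and secret are placeholders standing in for the values you noted above.

```python
# Sketch of the OAuth 2.0 client-credentials token request that the CDI
# connection performs against SAC. All values below are placeholders.
import urllib.parse
import urllib.request

def token_request(token_url, client_id, client_secret):
    """Build the URL-encoded client-credentials request (not sent here)."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return urllib.request.Request(
        token_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = token_request(
    "https://<tenant>.authentication.eu10.hana.ondemand.com/oauth/token",  # placeholder
    "sb-xxxx",   # placeholder client ID
    "secret",    # placeholder client secret
)
# urllib.request.urlopen(req) would return a JSON payload containing 'access_token'
```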

2.2 Create a CDI connection to SAC in Datasphere

In Datasphere connections are defined inside spaces.

Select your space (in the screenshot it's 'Extend SAC Planning') then go to Connections -> Create

Select a 'Cloud Data Integration' connection type:

Make the following settings:

  • URL: [SAC hostname]/api/v1/dataexport/administration. This is the administration service of the Data Export Service of SAC. You can find the documentation for the Data Export Service here

  • Authentication Type: OAuth 2.0

  • OAuth 2.0:

    • OAuth Grant Type: "Client Credentials"

    • OAuth Token Endpoint: use the SAC token URL we made a note of earlier

    • OAuth Response Type: "token"

    • OAuth Token Request Content Type: "URL Encoded"

  • Credentials (OAuth 2.0):

    • Client ID: the OAuth Client ID from the SAC OAuth client (we made a note of it earlier)

    • Client Secret: the OAuth Client Secret from the SAC OAuth client (we made a note of it earlier)

  • Features:

    • Data Provisioning Agent: select a DP Agent to enable Remote Tables

Give a name to the connection.

I used 'SAC DGC CDI Connection'. DGC is a code that will remind me which particular SAC system this connection goes to.

2.3 Create a HANA view (in Datasphere) on top of the SAC model fact data

In Datasphere go to 'Data Builder'. Note that you need to be in the same space where you defined the CDI connection before.

Select 'New Graphical View'. This will allow us to create the view using the graphical editor.

Go to 'Sources' in the left panel. Use the model ID of the SAC Model (the one we made a note of in step 1) in the search box. Drill down to the tables inside the model. Drag 'FactData' to the canvas in the middle.

Create a name for the table you are importing, then click 'Import and Deploy'.

Save the Datasphere view. This is in General -> Save.

Make sure the View is exposed for consumption.

Now the data in the SAC model is accessible from Datasphere. To see the data, drag a column from the View Properties panel to the Data View panel, then click 'View'.

Make sure to deploy the view.

We are almost done. We just need to find the parameters we need in order to connect to the HANA schema inside Datasphere where we just created the view. Once we have that we will be able to access the data in an external API and do any processing we require on it.

First, we create a Database user in Datasphere. Go to Space Management -> Database Access -> Database Users -> Create

  • Set a user name suffix. I chose 'TU' meaning 'technical user'

  • Enable read access to the Space Schema

  • Enable write access to the User's Open SQL Schema (this will allow you to write algorithm results here later)

  • Enable APL and PAL - these are in-database ML libraries that will be used in the second blog in the series

Deploy the space. Once it's deployed, open the Database User details view

Make note of:

  • Host Name

  • Port

  • Database User Name

  • Password (you will have to click the 'Request New Password' button; it then changes to 'Copy Password')

You will use these 4 to connect to the data from an external environment - for example from a Python Jupyter notebook using hana_ml
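As a hedged sketch, reading the view from a Python/Jupyter environment with the hana_ml package (installed via `pip install hana-ml`) could look like the following. The schema and view names are the ones created above; the host, user and password are placeholders for the values noted from the Database User details.

```python
# Sketch: read the Datasphere view from Python with hana_ml.
# Connection values are placeholders; schema and view names come from step 2.3.

SCHEMA = "EXTEND_SAC_PLANNING"        # the Space Schema noted above
VIEW = "ExtSACP01_FactData_View"      # the deployed view

def qualified_view(schema: str, view: str) -> str:
    """Fully qualified, quoted name, also usable in plain SQL."""
    return f'"{schema}"."{view}"'

def read_fact_data(host: str, port: int, user: str, password: str):
    """Connect to the Datasphere HANA database and fetch the view as pandas."""
    from hana_ml.dataframe import ConnectionContext  # pip install hana-ml
    conn = ConnectionContext(address=host, port=port, user=user, password=password)
    return conn.table(VIEW, schema=SCHEMA).collect()  # lazy table -> pandas

# Example call (placeholder host and credentials, not real values):
# df = read_fact_data("xxxx.hana.prod-eu10.hanacloud.ondemand.com", 443,
#                     "EXTEND_SAC_PLANNING#TU", "<password>")
```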

Also make note of:

  • Space Schema - you will find the view on the SAC model fact data here. For me it is EXTEND_SAC_PLANNING

  • the name of the view we created earlier - 'ExtSACP01_FactData_View'

  • Open SQL Schema - 'EXTEND_SAC_PLANNING#_TU' - this is the Schema that the Database User can write in.

With this information we can access the data from an external algorithm

That's all for the first step. Congratulations on making it this far!

In this first step we created a simple planning model in SAC and added actuals and planning data to it. Then we made the data in the model available to an external algorithm via SAP Datasphere.

In the second step of the series we will use the model fact data to 'forecast' the 'UnitsSold' based on the 'UnitPrice' created by the planner.

In the third step we will take the 'UnitsSold' forecasted by the external algorithm and update the original SAC Model.
Product and Topic Expert
Great write up!  I'm looking forward to the subsequent posts!
I really appreciate all the material so far!

If it's not too much trouble, would it be possible to send me the sample code in Python? I tried it here and it doesn't work!

I know the mistake is in my own code.

Again, thank you for this excellent material!

The second blog just went live. Here's the link (you can find it at the start of this blog as well):

I'm not sure which Python code you have in mind. There's no Python code in this part, but there is some in the second part of the series, which just went online:
Active Contributor
Hello Vlad-Andrei,

very interesting blog (and hopefully only a workaround until a Datasphere model can be used directly, for planning in SAC?) ...

But I'm somehow missing the part on the Data Provisioning Agent... could you please elaborate a bit more on that? I mean, what configuration steps are required here?

Hi Martin,

I see installing the DP Agent in the same bucket as installing Datasphere and SAC - so I didn't include it in the blog. I've never done it myself, but I think following the Datasphere documentation should be straightforward enough:

Can you give it a try? In case you have issues with it let us know.


Product and Topic Expert
Hello Martin, thanks for your comment. What is your view on the ideal way such scenarios could be tackled in SAC Planning? Best regards, Antoine
Product and Topic Expert

I just dug into the Influence requests - the only one I can find that is related is this one - but maybe your viewpoint is different?

Active Contributor
Hello Antoine,

right... the request you found is exactly what a planning Admin would expect... and like it has been ever since.... coming from SAP BPS, Integrated Planning and BPC 😉

Right, the backend is strong and this is the right place to be for the data model for planning. And then you consume it in DWC via story and change the values there, like you would do it with the SAC Addin for Excel... one is the Front-End, and Data is stored somewhere else.

OR, you provide similar transformation & modeling capabilities as well in the frontend, like you have in a backend ... but this obviously has never been intended, because then already the file uploads into a model would have been designed differently, long ago.

What I also don't understand... why do you need to learn Python, now ? Would it not have been feasible to use the same kind of javascript, like you can use in the SAC Application Designer?

BR, Martin

Hi Vlad-Andrei,

This is a very nice blog, Thank you!!.

I have one question, do we have any similar option in BTP HANA Cloud, to bring SAC Planning data to HANA Cloud?

Kind regards,


Great blog. It also nicely shows the possibilities of including live back-end modelling capabilities in the planning process.
Hi Chethan,

you can use SDI with a DP Agent similarly, or OData directly in HANA Cloud.

In DSP, you can also use OData directly to connect to SAC. In this case, an OData connection needs to be created for each SAC model.




Hi, we can restrict to a particular SAC model in the "Root Path" option of the Datasphere CDI connection. I see that we can restrict to a single model by using \sac\<Model ID>

Do we have any option to restrict multiple models?