Introduction
Realizing and executing a complete end-to-end flow to serve eXtended Planning & Analysis (xP&A) purposes can be painful for business users who need to trigger the integration of their own datasets without assistance from the IT department.
Considering that input data usually have to be transformed and harmonized before being used as planning drivers or as base data for calculations, the way this data preparation step is managed may well put the task outside the responsibility of the business department, especially when robust controls and quality checks have to be performed.
The classical IT integration approach for bringing the appropriate data into SAP Analytics Cloud was a batch or scheduled flow for:
In this blog, we are going to present how you can set up a complete integration flow where business users are autonomous for execution and transformations, while keeping full capacity to control the integrated data.
Description of the business scenario
Let’s try to make it simple, but realistic.
Financial controllers in charge of a scope of reporting units usually have to collect their key figures from source systems on a regular basis (quarterly, monthly, weekly, ...).
For simplification, the source extractions have already been done and each controller can access them as flat files organized in a shared and secured file repository (SFTP, SharePoint, etc.).
Here are the different steps of this classical reporting workflow:
Let’s now define the different elements needed to make this scenario work properly.
Setup a shared files repository with Google Cloud Platform
We are going to create a new file repository that will be securely accessible by our middleware solution through the SFTP (SSH File Transfer Protocol) standard.
Navigate to the Google Cloud console (https://console.cloud.google.com/) and create a new project from the welcome screen.
Give an explicit name to your project and click on the Create button.
Navigate to your newly created project from the Console list box:
Enable the Compute Engine API in your new project by clicking Compute Engine, then the Enable button.
Create a new VM instance with the Create Instance button:
Define the characteristics of your new server; for file sharing, a low-cost instance should be enough, at least for development purposes:
Ensure that the target operating system is Debian GNU/Linux and that inbound HTTPS traffic is allowed:
Click on the Create button to initiate the server creation.
Once the server is instantiated, it will be visible in the console with an external IP address to connect to:
Connect to your instance by clicking the SSH button:
Approve the authorization request so the browser can connect over SSH:
You are now connected to your server through the SSH console:
To connect to your server through SFTP, it is highly recommended to add a dedicated user; the next steps create it, and a possible command sequence is sketched after them.
Add a new sftp group:
Create a directory for an SFTP backend user:
Add the sftp_user to the sftp group:
Define the password for the new sftp_user:
Change the ownership of the directory /home/sftp_user:
Change the access mode to the directory /home/sftp_user:
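The exact commands depend on your security policy; as a minimal sketch for the Debian image used above (user name, group name and permissions follow these steps and are purely illustrative):

# add a new sftp group
sudo groupadd sftp
# create a directory for the SFTP backend user
sudo mkdir -p /home/sftp_user
# create the user and attach it to the sftp group
sudo useradd -g sftp -d /home/sftp_user -s /bin/bash sftp_user
# define the password for the new user
sudo passwd sftp_user
# change the ownership of the directory
sudo chown -R sftp_user:sftp /home/sftp_user
# change the access mode of the directory (adapt to your own policy)
sudo chmod -R 755 /home/sftp_user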
Now modify the standard SSH behaviour to authorize password authentication. Edit the file /etc/ssh/sshd_config in a text editor like vi and set the option PasswordAuthentication to yes:
Save the file with the vi command Esc, then :wq.
Restart the ssh-daemon:
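A minimal sketch of the change and the restart command, assuming the default Debian OpenSSH configuration:

# in /etc/ssh/sshd_config
PasswordAuthentication yes

# restart the SSH daemon so the change takes effect
sudo systemctl restart ssh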
Test the SFTP connection with an SFTP client:
Ensure that you can create a new folder at the root level of the SFTP directory:
You can upload any file to the SFTP repository.
Create the staging data layer with SAP Datasphere
Data upload to SAP Datasphere will be performed through Open SQL access to the underlying SAP HANA Cloud layer of the instance.
In SAP Datasphere, you need to create a new space or select an existing one. Then open the configuration of the space from the Edit button:
Click on the Database Access tab:
Create a new database user by clicking the Create button:
Set the username suffix and enable SQL read and write access by selecting the checkboxes:
The new Open SQL user has been created but is not yet deployed:
Click on the space Deploy button to save and deploy the new Open SQL user:
The status of the space deployment is displayed by SAP Datasphere:
Click on the Info symbol:
Click on Request New Password button:
Copy the newly generated password for the user and close the dialog box:
Click on the newly created user in the list and connect the schema with the SAP HANA Database Explorer:
Ensure that “Authenticate using username and password” is selected and paste the new password from the clipboard, with the save-credentials checkbox enabled:
You are now connected to the SAP HANA Cloud instance of SAP Datasphere through Open SQL:
Expand the Catalog in the left panel, right-click on Tables and choose the Open SQL Console menu entry:
Create a new SQL table by typing the DDL statement into the console and execute it with F8:
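The exact DDL only needs to match the column names the Integration Flow will send later. A minimal sketch limited to the fields used in this blog (a real staging table would typically carry more columns such as account or currency):

-- staging table, referenced later through the STAGING_TABLE property
CREATE TABLE TBL_SFTP_STAGING (
    "VERSION"   NVARCHAR(50),   -- SAC planning version
    "DATE"      NVARCHAR(8),    -- period, e.g. 202403, kept as text for simplicity
    "CONSUNIT"  NVARCHAR(50),   -- consolidation unit
    "LC_AMOUNT" DECIMAL(23,7)   -- measure exposed later by the Datasphere view
);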
The new staging table has been created and is now available for data upload:
Build a REST API with SAP Cloud Integration
Connect to your SAP Integration Suite tenant and click on Integrations and APIs:
Create a new package and save it with the Save button:
Click on the Artifacts tab and add a new REST API artifact from the Add list box:
Fill Name, ID and Description of the new artifact, then click the Add and Open in Editor button:
A new Integration Flow has been created with the fundamental functions of your REST API:
Click on the Edit button on the top right menu to modify the predefined flow:
Remove the Receiver box on the right by clicking on it then selecting the trash bin icon:
Double-click on the local sub-process Handle Request and rename it “Get SFTP Data”:
In the main Integration Process, click on the Handle Request box (the call to that sub-process) and rename it “Get SFTP Data” as well:
Select the right arrow after Get SFTP Data and add a new step of type Process Call by scrolling down the list:
If needed, extend the size of the canvas to keep the diagram readable.
Add a new local integration process by choosing it from the top bar menu:
Position the new local process box to the right bottom of the page:
Rename the new local process “Post Data to Datasphere”:
Click on the “Process Call 1” box and rename it “Post to Datasphere”:
From the Process call menu, click the Process tab and hit Select button:
Choose Post Data to Datasphere from the list:
In the Post Data to Datasphere local process, add a flow step of type Content Modifier:
Click on the Save as version button from the top right menu:
Fill version information and click OK button:
Controlling the execution context of the Integration Flow will be managed through variables sent by SAP Analytics Cloud in JSON format. The request body will thus contain dynamic information that must be used during processing.
SAP Cloud Integration mostly handles its messages in XML format, so we will transform the JSON body from the caller into an XML message.
Edit the Integration Flow and click on the arrow before the Get SFTP Data process call box to add a new step JSON to XML converter:
Rename the step JSON to XML Converter and go to the Processing tab to uncheck the Use Namespace Mapping option:
We will take the values formatted as XML and save them as message properties through a new message content modifier step added after the JSON to XML converter:
Name the new step Set Job Parameters:
Go to the Exchange Property tab and add a new constant property named STAGING_TABLE with the value TBL_SFTP_STAGING:
Add three additional properties named SAC_VERSION, SAC_DATE and SAC_CONSUNIT of type XPath with the below source values:
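As an illustration, assuming the caller sends a JSON body with the fields Version, Date and ConsolidationUnit (as in the multi-action JSON example later in this blog), the three properties could look like this; adjust the XPath expressions to your actual field names:

SAC_VERSION     XPath   java.lang.String   //Version
SAC_DATE        XPath   java.lang.String   //Date
SAC_CONSUNIT    XPath   java.lang.String   //ConsolidationUnit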
In the Get SFTP Data local process, add a new step of type Poll Enrich before the End event:
Rename the step Get SFTP File:
Add a new Sender participant for the SFTP poll from the top menu participant list:
Rename it SFTP_Server:
Link the SFTP_Server to the Poll and choose SFTP as communication protocol:
Save the Integration Flow before going further.
To create the SFTP credentials, go to the Monitor / Integrations and APIs menu from the left bar:
Click the Security Material tile:
Click on the Create button and choose User Credentials from the list:
Enter the required fields and click on Deploy when finished:
To ensure a secure dialog between the SFTP server and Cloud Integration, you must update the known_hosts config file. You can use the connectivity test to get the host key of the SFTP server.
In the security management tiles, click on Connectivity Tests:
Go to the SSH menu tab, fill in the fields with your SFTP server properties, then click the Send button:
Once the server answers, copy the host key with the Copy Host Key button:
From the Monitor menu, open the Security Material tile:
Download a copy of the known.hosts file:
Open the known.hosts file with a text editor and paste the content of the clipboard (Ctrl+V) at the end of the file as a new row:
Save and close the known.hosts file.
Now, upload the modified file in the security material list:
Select the modified file from your computer and click the Deploy button:
Reopen the Integration Flow designer, click on the SFTP arrow and go to the Source tab.
Enter the connection information to navigate to the right directory and set the file name identified by the dynamic parameters stored as properties:
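As an illustration only (the file naming convention is an assumption, not part of the original setup), the File Name field of the adapter can combine the properties set in the Set Job Parameters step, for example:

${property.SAC_CONSUNIT}_${property.SAC_DATE}.csv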
Click on the Processing tab to set up the behaviour of the adapter once the file has been processed:
Save the Integration Flow once you have done.
The file content read by the SFTP adapter is kept as CSV data in the message body of the Integration Flow. Translate it into XML format through a CSV to XML converter step:
You will need to describe how the XML data should be represented after the transformation. Create an XSD document for this in a text editor, then upload the file as a reference of your Integration Flow.
Content of the XSD schema file to create with a text editor:
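A minimal sketch of such a schema, assuming a Rows/Row hierarchy and the same four columns as the staging table (adapt element names and types to your actual CSV layout):

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="Rows">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="Row" maxOccurs="unbounded">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="VERSION" type="xs:string"/>
                            <xs:element name="DATE" type="xs:string"/>
                            <xs:element name="CONSUNIT" type="xs:string"/>
                            <xs:element name="LC_AMOUNT" type="xs:decimal"/>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>

The Processing tab of the converter then references this schema and the record element (here /Rows/Row).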
Click on the global area of the Integration Flow, then navigate to the References tab and click the Add button to add a new XSD schema:
Select the newly created file from your filesystem, then click Add:
Select the CSV to XML step, navigate to Processing tab and set the different fields:
Save the Integration Flow once done.
In the Post Data to Datasphere local process, rename the “Content Modifier 2” step to “Prepare DELETE” from the General tab:
Add a new Exchange Property defined as an expression with the following setup to persist the retrieved file content as a variable:
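As a sketch (the property name FILE_CONTENT is our own choice, not imposed by the product), save the current body, which still holds the converted file content at this point, so it can be reused by the INSERT preparation step later:

Name: FILE_CONTENT   Type: Expression   Data Type: java.lang.String   Value: ${in.body}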
In the Message Body tab, enter the following Expression:
<root>
    <Delete_Statement1>
        <dbTableName action="DELETE">
            <table>${property.STAGING_TABLE}</table>
            <access>
                <VERSION>${property.SAC_VERSION}</VERSION>
                <DATE>${property.SAC_DATE}</DATE>
                <CONSUNIT>${property.SAC_CONSUNIT}</CONSUNIT>
            </access>
        </dbTableName>
    </Delete_Statement1>
</root>
This dynamic SQL DELETE statement will clear existing data with the same execution scope from the staging table.
Select the arrow after the Prepare DELETE step and add a new step of type Request Reply:
Name it Post DELETE in the general tab.
Add a new Receiver from the top bar menu and name it DELETE_Data:
Link the Post DELETE to the DELETE_Data receiver by choosing a JDBC adapter type:
A JDBC connection to the Open SQL Schema of Datasphere must be setup before continuing the Integration Flow.
Save your Integration Flow and navigate to Monitor / Integrations and APIs menu from the left bar:
Click on the JDBC Material tile:
Add a new JDBC source by filling the dialog fields to connect to the Open SQL schema of Datasphere, click on Deploy to save:
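The key fields are the alias name (reused in the JDBC adapter of the flow), the Open SQL user and password generated in Datasphere, and the JDBC URL of the underlying SAP HANA Cloud instance. A sketch with placeholder values (host, alias and user are examples, not the blog’s actual values); choose the SAP HANA database type in the dialog:

Name:      JDBC_DSP_OPENSQL
User:      <Open SQL user, e.g. MYSPACE#SFTP_STAGING>
Password:  <generated Open SQL password>
JDBC URL:  jdbc:sap://<your-hana-cloud-host>:443/?encrypt=true&validateCertificate=true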
When successfully deployed, the JDBC connection appears in the list:
Go back to the Integration Flow in edit mode and navigate to the JDBC link connected to the Post DELETE step. Enter the connection properties in the Connection tab:
Add a new flow step of type Groovy Script after Post DELETE to prepare the INSERT SQL statements that will write our data into the staging table; name it Prepare INSERT:
In the processing tab of the Prepare INSERT step, add a new Script File from the Create button:
Replace the default content with the following script:
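A minimal sketch of such a script, assuming the Rows/Row structure from the XSD sketch above, the FILE_CONTENT property introduced in the Prepare DELETE step, and the same XML SQL format as the DELETE statement:

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.xml.MarkupBuilder

def Message processData(Message message) {
    // staging table name and file content were stored as exchange properties earlier
    def props = message.getProperties()
    def tableName = props.get("STAGING_TABLE")
    def fileContent = props.get("FILE_CONTENT")

    // parse the XML produced by the CSV to XML converter (Rows/Row structure assumed)
    def rows = new XmlSlurper().parseText(fileContent as String)

    // rebuild the body in the XML SQL format understood by the JDBC receiver adapter
    def writer = new StringWriter()
    def xml = new MarkupBuilder(writer)
    xml.root {
        rows.Row.each { r ->
            Insert_Statement {
                dbTableName(action: "INSERT") {
                    table(tableName)
                    access {
                        VERSION(r.VERSION.text())
                        DATE(r.DATE.text())
                        CONSUNIT(r.CONSUNIT.text())
                        LC_AMOUNT(r.LC_AMOUNT.text())
                    }
                }
            }
        }
    }
    message.setBody(writer.toString())
    return message
}

With the Batch Mode option enabled on the JDBC adapter (configured below), the generated Insert_Statement elements are executed as a single batch.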
Add a new Request Reply step after the Prepare INSERT and name it Post INSERT:
Add a new receiver from the top bar menu, name it INSERT_Data:
Link Post INSERT to the receiver INSERT_Data through a JDBC adapter:
Configure the JDBC link with the JDBC Open SQL alias and the Batch Mode option checked:
The setup of the Integration Flow is now complete; save it as the initial version:
And deploy it:
Once the deployment is done, navigate to Monitor / Integrations and APIs left bar menu:
Click on Manage Integration Content / All Started tile:
Click on your SFTP_Integration API from the list and copy the URL of the service with the dedicated button:
Create a REST Api connection in SAP Analytics Cloud
Go to SAP Analytics Cloud and open the connections menu from the left menu panel.
Add a new source of type HTTP API:
Set the information to connect to the REST API and the OAuth 2.0 security values provided by your technical administrator (taken from the BTP cockpit), then click on Create to validate:
Create the data transformation view in SAP Datasphere
Navigate to SAP Datasphere and from the data builder connected to your space, create a new graphical view:
From the left bar, select the staging table as a source of your view by expanding the Open SQL schema:
Import and deploy the remote table in your space:
You can now decide to enhance the calculation flow of your view to manage transformation or mapping rules with standard Datasphere features:
In our current example, we just want to expose the data as they were uploaded into the staging table, so rename the view VW_SFTP_STAGING and set it as a Fact view exposed for consumption (define LC_AMOUNT as a measure):
Save and deploy the view to Datasphere.
Create a source view connection in SAP Analytics Cloud
Navigate to SAP Analytics Cloud and go to the connections list from the left menu panel.
Add a new OData Services connection:
Set connection parameters in the dialog box.
OData consumption view URL has the following structure: <Datasphere based URL>/dwaas-core/odata/v4/consumption/relational/<SPACE>/VW_SFTP_STAGING.
Click on Create to validate the connection:
Create a SAC Planning data import job
Navigate to the modeler view of your SAC Planning model and add a new source of type OData Services:
Select the connection created in the last step and click Next:
Select your Datasphere consumption view from the list and click Next:
Select all fields by double-clicking the view name, then click Create:
You can now set up the transformations as in any data import of SAC Planning and rename your job SFTP Upload from DSP:
Create the SAC Planning multi-action to trigger data import job
Let’s now create the main sequence of actions in SAP Analytics Cloud to trigger the SFTP extraction flow, the data staging and transformation in SAP Datasphere, and the data import into our SAC Planning model.
Go to the multi-action menu and create a new multi-action from the tile:
Add a new API step in the canvas:
Rename it and choose your REST API connection from the list. Fill the REST service URL as set in the connection:
To add dynamic input parameters for the Integration Flow execution, insert parameters:
Create a parameter for Version and do the same for Date and the organization-type dimension (ConsolidationUnit in our example):
Once created, click on Edit JSON to organize the body of the calling message with a correct JSON syntax:
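For reference, a sketch of the resulting body: the field names must match the XPath expressions defined in the Set Job Parameters step (Version, Date and ConsolidationUnit are the names assumed throughout this example), and the literal values below are placeholders for the inserted parameters:

{
    "Version": "public.Actual",
    "Date": "202403",
    "ConsolidationUnit": "CU1000"
}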
Click OK to validate the JSON syntax.
Save the multi-action to the directory of your choice in SAC.
Add a Data Import Step after the execution of the REST API:
Set the properties of your import step to connect to the target planning model and select the import job you have configured in a previous step:
Save the multi-action.
Open your SAC Planning story in edit mode and add a new multi-action trigger from the Planning Actions menu:
Set the name and the multi-action to trigger, add any relevant story filter to be passed directly as execution parameters:
Set the multi-action trigger style properties to fit your enterprise rendering and save the story.
Testing the solution
Put a new CSV file into your SFTP server directory:
Go to your SAP Analytics Cloud story and trigger the multi-action button:
Parameters are read from the story filters and you can now run the multi-action:
Once the execution has ended successfully, the story will be populated with data loaded from the original file.
Conclusion
In this use case, we have seen a concrete example of how to combine several pieces from the SAP BTP catalogue to serve a classical xP&A reporting process.
Using an intermediate layer like SAP Datasphere rapidly becomes a must-have when you need to transform, harmonize, or map data from several sources.
Because business users want to keep a high degree of autonomy and manage this at their own pace, you will also need the glue capability of a message-oriented middleware like SAP Integration Suite.
We hope you can expand this starting use case and adapt it to create the one that fits your own situation.