Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
This is Part II in a two-part series detailing how to install and configure SAP Data Intelligence (SDI) on a Red Hat OpenShift cluster. Part I covered the background and prerequisites of setting up your environment, preparing the OCP cluster for SDI, and deploying the SDI Observer. In this part, we look at how to perform the actual SDI installation, as well as the tests required to verify your installation and setup. By the end of this two-part post, you will have an SAP Data Intelligence workspace running on an OpenShift cluster. Special thanks to mkoch-redhat and Michal Minar for testing and validating the setup and providing the technical content for this article.

SDI installation

This section walks you through the actual SDI installation.

  1. Return to the Maintenance Planner browser tab.

    The bridge has to be kept open in an active window while working with the Maintenance Planner (MP).

    Enter the hostname and port from the previous step.

  2. Click Next and then Deploy.

  3. If you see the following prompt, switch back to the SLCB browser tab and click OK.

  4. Enter your S-user credentials and click Next.

  5. Select "SAP DATA INTELLIGENCE 3 - DI Platform Full" and click Next.

  6. Enter the OpenShift namespace in which SDI should run. In this case it is sdi. When this is done, click Next.

  7. Select Advanced Installation and click Next.

  8. Enter a password for the System Tenant Administrator.

  9. Enter the Default Tenant name.

  10. Enter the Default Tenant Administrator name and password.

  11. As our cluster has direct access to the internet, we do not need to set proxies. If this is not the case for you, see this guide for details on how to proceed.

  12. Disable backup by removing the check mark. For details, see SAP Note 2918288. Note that the NooBaa object storage infrastructure cannot be used as backup media if Vora is used.

  13. Enable the checkpoint store by setting the check mark. Select S3 Compatible object store and use the name and credentials for the checkpoint store which were created earlier. Note that the endpoint for NooBaa S3 is always s3.openshift-storage.svc.cluster.local. Validation may take some time to finish, even if your cluster is set up correctly. In the unlikely event that it fails, check that you used http and not https; with private certificates this step may not work.

  14. Continue with the defaults on the next screens: use the default storage class for persistent volumes, leave the custom container log path box unchecked, and enable Kaniko. You do not need a different container image repository for demo purposes. 'Enable kernel module loading' can be left unchecked, as this has already been actioned by the installer; checking it does no harm. Leave the remaining screens at their defaults.

  15. Change the cluster name to sdidemo-ghpte-$GUID, replacing $GUID with your lab GUID. A summary of the installation parameters is then displayed.

  16. Start the installation procedure. After installation the following screen will appear.


Make sure you write down or save your System ID. In this example it is 11bw3dz.
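One value from step 13 worth double-checking is the S3 endpoint for the checkpoint store. The sketch below only assembles the expected string; with access to the cluster you could additionally curl it from a debug pod to confirm reachability.

```shell
# The in-cluster NooBaa S3 service name is fixed by OpenShift Data Foundation.
# Note the plain http scheme: the checkpoint-store validation expects http,
# not https.
S3_HOST="s3.openshift-storage.svc.cluster.local"
S3_ENDPOINT="http://${S3_HOST}"
echo "${S3_ENDPOINT}"
```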

Post Installation work

Getting Access to the SDI console

We have configured the sdi-observer to make the route to the admin interface available. You can check this with the following command:

    # oc rollout status -n sdi-observer -w dc/sdi-observer

If sdi-observer has exported the route correctly, the command returns:

    replication controller "sdi-observer-2" successfully rolled out

You can double-check with:

    # oc get routes -n sdi

The output should look like this:

    NAME      HOST/PORT                                                PATH   SERVICES   PORT      TERMINATION          WILDCARD
    vsystem   vsystem-<SDI_NAMESPACE>.apps.<cluster_name>.<base_domain>        vsystem    vsystem   reencrypt/Redirect   None

You can now access the SDI management console at https://vsystem-<SDI_NAMESPACE>.apps.<cluster_name>.<base_domain>.
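The console URL can be assembled from your environment values; a minimal sketch (the namespace, cluster name, and base domain below are placeholders for this lab, so substitute your own):

```shell
# With cluster access, the host can be read directly from the route:
#   oc get route vsystem -n sdi -o jsonpath='{.spec.host}'
# Without it, the URL follows this fixed pattern:
SDI_NAMESPACE="sdi"
CLUSTER_NAME="cluster"
BASE_DOMAIN="example.com"
VSYSTEM_URL="https://vsystem-${SDI_NAMESPACE}.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"
echo "${VSYSTEM_URL}"
```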

Configuring the Connection to Data Lake

  1. Log in to the SDI console at https://vsystem-<SDI_NAMESPACE>.apps.<cluster_name>.<base_domain>. Use the tenant default, the user defaultadmin, and the password from the installation procedure detailed above.



  2. Click the Connection Management tile.

  3. Click on +.

  4. Enter the following values and click Test Connection:

Parameter            Value
Connection Type      SDL
Object Storage Type  S3
Endpoint             s3.openshift-storage.svc.cluster.local
Access Key ID        from above (see storage-credentials.txt)
Secret Access Key    from above (see storage-credentials.txt)
Root Path            from above (see storage-credentials.txt)


  5. If the connection test is successful, click on Create.


  6. Upon successful completion a notification will appear.
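If you saved the checkpoint-store credentials to storage-credentials.txt in Part I, the values for the connection dialog can be pulled out on the command line. This is only a sketch: the key=value layout and the sample values below are assumptions, so adjust the keys to match your actual file.

```shell
# Hypothetical storage-credentials.txt layout (key=value pairs assumed):
cat > storage-credentials.txt <<'EOF'
AWS_ACCESS_KEY_ID=EXAMPLEACCESSKEY
AWS_SECRET_ACCESS_KEY=examplesecret
BUCKET=sdi-checkpoint-store
EOF

# Extract a single value for pasting into the connection dialog:
ACCESS_KEY_ID="$(grep '^AWS_ACCESS_KEY_ID=' storage-credentials.txt | cut -d= -f2)"
echo "${ACCESS_KEY_ID}"
```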


SDI validation

To validate your installation of SAP Data Intelligence, see the SAP Data Intelligence Installation Guide.

Return to the Data Intelligence Launchpad (or log in as in the previous step).

Defining a pipeline

  1. Launch the Modeler by clicking the Modeler tile. Note that this step may take a while.

  2. Enter Data Generator in the search field and click on Data Generator.

  3. Save the configuration.

  4. Start the graph.

  5. Check that the status changes to 'Running' (this may take several minutes).

  6. Open the Terminal user interface by right-clicking on the Terminal operator and selecting Open UI.

  7. Once the Data Generator is running, its output will be displayed in the Terminal. If not, you will see an error dialogue.

  8. Stop the graph once you observe its output in the Terminal.

Checking Your Machine Learning setup

  1. To create an ML scenario, open ML Scenario Manager from the SAP Data Intelligence Launchpad. Note that this step may take a while.

  2. Click Create. Enter a name for your scenario and, optionally, a business question. Click Create.

The details for your scenario will appear and your scenario will be added to the list of ML scenarios on the overview page.

  3. On the details page for your scenario, click Create in the Notebooks tab to create a new Jupyter notebook. In the Create Notebook dialog box, enter a unique name for your notebook and, optionally, a description, then click Create. Note that this step may take a while.

  4. At this stage, your notebook will open in JupyterLab. You will be prompted to select your kernel. Choose Python 3.

  5. In your JupyterLab notebook, copy the following code into a cell and run it:

    import sapdi
    from hdfs import InsecureClient

    client = InsecureClient('http://datalake:50070')
    # Query the root directory of the data lake; this produces the JSON below.
    client.status('/')


Check that the code runs without errors.

The code should return JSON similar to the following:

    {'pathSuffix': '',
     'type': 'DIRECTORY',
     'length': 0,
     'owner': 'admin',
     'group': 'admin',
     'permission': '777',
     'accessTime': 0,
     'modificationTime': 1576237423061,
     'blockSize': 0,
     'replication': 1}




Congratulations – you’ve successfully set up SAP Data Intelligence (SDI) on a Red Hat OpenShift cluster. In Part I, you gained insight into the high-level installation workflow of setting up SDI on OpenShift and learned how to set up your environment, prepare the OCP cluster for SDI, and deploy the SDI Observer. In this part you performed the actual SDI installation, as well as the tests required to verify your installation and setup. Well done!

If you have feedback or thoughts, feel free to share them below in the comment section. For the latest content on SAP Data Intelligence, Red Hat and OpenShift, do subscribe to the tags and to my profile (vivien.wang01) for more exciting news in this space.


Vivien Wang is currently an Ecosystem Partner Manager for the Red Hat Partner Engineering Ecosystem.