Prerequisites: Familiarity with SAP Data Intelligence, the Modeler, and the ML Scenario Manager

We recently came across a case wherein we wanted to train a machine learning model externally and use SAP Data Intelligence for inference and consumption. The steps below outline a 'Bring your own model' use case, with SAP Data Intelligence used for hosting and consumption.

Train Model Externally


We used Google Cloud Platform to train our model: we created an AutoML Vision model for image classification, as outlined here.

The use case trains the model on about 60 cloud images to classify clouds as cumulus, cumulonimbus, or cirrus.

1. Upload your images to GCP Storage.



[Screenshot: uploading images to GCP Storage]


2. Navigate to the Vision dashboard and create a dataset with your images.

3. Once your import is done, you will be able to see your images.


[Screenshot: inspecting the uploaded images]


4. Click the Train button to create your model. Since you want to download and use the trained model externally, choose 'Edge' as the deployment option.


[Screenshot: choosing the Edge model option]


5. Follow the GCP tutorial link to test your trained model on GCP itself. Once you are satisfied, download the model as a container. Note that you will get this option (shown in the screenshot) only if you selected the 'Edge' option while creating your model in step 4. You can export to Cloud Storage and then download the model to your local disk.


[Screenshot: exporting the model as a container]


6. You can check the exported model by browsing to the bucket. It is exported as "saved_model.pb". Download the .pb file to your local machine for consumption with SAP Data Intelligence or on your local desktop; a scripted alternative is sketched below the screenshot.


[Screenshot: the exported model in the bucket]
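If you prefer to script the download instead of using the console, here is a minimal Python sketch using the google-cloud-storage client. The bucket and object names below are placeholders; substitute the values from your own export.

from google.cloud import storage

# uses your application-default GCP credentials
client = storage.Client()
bucket = client.bucket("your-export-bucket")        # placeholder: your bucket name
blob = bucket.blob("model-export/saved_model.pb")   # placeholder: path of the exported model
blob.download_to_filename("saved_model.pb")
print("downloaded saved_model.pb")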







Consuming the Exported Model on your Local Desktop


 

To consume your exported model on your local machine, follow the steps outlined here.

Mount the model into a local Docker container using:
docker run --rm --name <any_unused_container_name> -p 8501:8501 -v C:\<path>\ML:/tmp/mounted_model/0001 -t gcr.io/cloud-devrel-public-resources/gcloud-container-1.14.0:latest

 

C:\<path>\ML = the folder containing the downloaded .pb model file. A quick check that the model is being served is sketched after the screenshot.


[Screenshot: running the model locally on Docker]
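Once the container is up, you can sanity-check that TensorFlow Serving has loaded the model. A minimal Python sketch, assuming the model name 'default' used by the GCP serving image:

import requests

# TF Serving model-status endpoint; should report state "AVAILABLE"
status = requests.get("http://localhost:8501/v1/models/default")
print(status.json())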


 

Now you are ready to test the exported model. Convert your test image to Base64 encoding. I used the Base64 Encode app from Microsoft for this purpose (it can be downloaded here) and saved the result as a .json file; a scripted alternative is sketched below.
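If you would rather script this than use the encoder app, here is a minimal Python sketch that builds the request file. The instances/image_bytes/b64 payload shape is the one AutoML Vision Edge models exported for TensorFlow Serving expect; the image file name is a placeholder.

import base64
import json

# Base64-encode the test image
with open("test_cloud.jpg", "rb") as f:             # placeholder: your test image
    b64_image = base64.b64encode(f.read()).decode("utf-8")

payload = {"instances": [{"image_bytes": {"b64": b64_image}, "key": "1"}]}

# save the request body for curl/Postman
with open("resp1.json", "w") as f:
    json.dump(payload, f)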

Test your model with:
curl -X POST -d @resp1.json http://localhost:8501/v1/models/default:predict

Output:


[Screenshot: testing the model locally]


 

You can also use Postman to test this; submit your input as a binary file.


Consuming the Exported Model using SAP Data Intelligence


SAP Data Intelligence 2006 supports only models exported in container form (.pb); it does not yet support the TensorFlow.js format. The default TensorFlow runtime is 1.15, and TensorFlow 2 is supported with the Model Serving operator. Details here.

Step 1: Zip and upload your model


Zip your saved_model.pb as saved_model.zip; a scripted way to do this is sketched below.
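A minimal Python sketch to create the zip, assuming saved_model.pb is in the current directory:

import zipfile

# package the SavedModel protobuf for upload to DI
with zipfile.ZipFile("saved_model.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("saved_model.pb")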

Log in to your DI instance's Launchpad and click on 'System Management'.

Navigate to the Files tab, create a new folder called 'mlup' under the Files folder, and upload saved_model.zip to this folder.

 


[Screenshot: uploading the model to SAP Data Intelligence]



Step 2: Create an ML Scenario and Register the Model as an Artifact in SAP Data Intelligence


Log in to your SAP DI instance's Launchpad and click on ML Scenario Manager.

Create a new scenario. We are naming it 'gcpconsume'.

Create a new notebook with the kernel 'python3' and enter the following code:
import os

# list the contents of the folder the model was uploaded to
targetdir = '/vrep/mlup'
files = os.listdir(targetdir)
print(files)

This should output 'saved_model.zip'.

Now register saved_model.zip with the following code:
import os
import sapdi
from sapdi.artifact.artifact import Artifact, ArtifactKind, ArtifactFileType

# register the uploaded zip as a MODEL artifact so that pipelines can consume it
zip_file_path = "/vrep/mlup/saved_model.zip"
artifact = sapdi.create_artifact(
    file_type=ArtifactFileType.FILE,
    artifact_kind=ArtifactKind.MODEL,
    description="Cloud Model pb",
    artifact_name="cloudpb",
    file_name=os.path.basename(zip_file_path),
    upload_content=zip_file_path
)
print('Model artifact id {}, file {} registered successfully at {}\n'.format(
    artifact.artifact_id, zip_file_path, artifact.get_uri()))

Output:

Model artifact id b3c65b93-c010-487e-bdc6-243884f110a6, file /vrep/mlup/saved_model.zip registered successfully
at dh-dl://DI_DATA_LAKE/shared/ml/artifacts/notebooks/exportmodel.ipynb/0d66f14c-c5bc-4c9e-aeed-d0389d1d649f/saved_model.zip

You should also see the model registered in your ML Scenario.

Step 3: Create an Inference Pipeline with the template 'TensorFlow Serving Pipeline'



[Screenshot: TensorFlow Consumer pipeline template]


On clicking 'Create', the Modeler application opens. Do not change anything, but make a note of the various configuration parameters of the Model Serving Operator.

 


[Screenshot: the generated pipeline]


 

Go back to the ML Scenario Manager.

Select the pipeline and click Deploy.

Enter the parameters in step 4 as shown in the screenshot:

ModelRuntimeId: tf-1.14, or the default tf-1.15

Artifact Model: select the model from the list

 


[Screenshot: pipeline deployment parameters]


 

Click Save. Navigate back to the Modeler and monitor the pipeline run.

Open the Wiretap once the pipeline shows 'running'. The Wiretap should show output as in the screenshot:


[Screenshot: Wiretap output]


 

Step 4: Consuming the hosted model


Navigate back to the ML Scenario Manager and grab the Deployment URL.


[Screenshot: Deployment URL]


 

Now open up Postman.

Create a POST request with the URL as is; nothing needs to be prefixed.

Use Basic Authorization and provide your credentials:

Username: Instance\<UID>

Password: your password

Headers:

Key: X-Requested-With   Value: Fetch

The image to be tested should be passed with Base64 encoding. Convert it as in the earlier step and pass it to Postman as a raw JSON body. An equivalent scripted request is sketched after the screenshot.


[Screenshot: Postman request to the deployed model]
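If you prefer to script the call, here is a minimal Python sketch of the same request. The deployment URL, tenant, user, and password are placeholders; the payload file is the one built for the local test earlier.

import json
import requests

url = "<deployment URL from ML Scenario Manager>"   # placeholder: use your URL as is
auth = ("Instance\\<UID>", "<password>")            # tenant\user and password, as in Postman
headers = {"X-Requested-With": "Fetch", "Content-Type": "application/json"}

# reuse the Base64-encoded payload built earlier
with open("resp1.json") as f:
    payload = json.load(f)

r = requests.post(url, json=payload, auth=auth, headers=headers)
print(r.status_code, r.json())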


 

If you navigate back to the Wiretap UI, you can see the time taken for the inference request.

 


[Screenshot: time taken to infer, shown in the Wiretap]


 