
Intro


In this blog post you will see how to expose SAP HANA machine learning models as dockerized Node.js apps hosted in SAP BTP, Kyma Runtime.

Why would you want to do that? First, because it is a fun and easy puzzle that gets you interacting with the latest and greatest of the SAP tech stack: SAP BTP, Kyma Runtime, SAP HANA Cloud, and the Python machine learning client for SAP HANA.
Beyond the fun, this kind of architecture can be helpful when inference does not need to happen inside SAP HANA and you want to expose trained machine learning models as REST endpoints.

A business scenario could be the following:

In a manufacturing plant, the Production department raises a PM Notification within SAP Plant Maintenance, which is received by the Maintenance department.

While the Maintenance Planner is creating a new Work Order, an API call is initiated to a recommender system hosted in SAP BTP, Kyma Runtime, which returns to the Planner a list of activity recommendations based on a machine learning model trained on similar events resolved in the past.




Action 1: Expose yourself to SAP BTP, Kyma Runtime so you can expose your dockers to the world


SAP BTP, Kyma Runtime is available in the SAP BTP Trial, so if you want a first glimpse, I would recommend following the SAP Developers mission available here. It is a great way to see why SAP BTP, Kyma Runtime is not just a Kubernetes service, but comes with a great set of additional services on top that simplify the extension and integration of monolithic software.

In SAP BTP, Kyma Runtime you can create volumes, i.e. directories that remain intact even when the dockers die and come back to life (which is what they are expected to do, given their stateless nature). You can also deploy your own dockers, written in any programming language, using kubectl, and attach APIRules to them so they can be accessed via public APIs, e.g. your Flask Python app or your Node.js Express app.
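
To make a deployment reachable from the outside, you attach an APIRule to its Kubernetes Service. Below is a minimal sketch, assuming a hypothetical Service named scoring-app listening on port 3000; the schema shown is the v1alpha1 APIRule API and may differ in newer Kyma versions:
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: scoring-app
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    name: scoring-app
    port: 3000
    host: scoring-app
  rules:
    - path: /.*
      methods: ["GET", "POST"]
      accessStrategies:
        - handler: noop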

Action 2: APL model extracted to JSON format


If you haven't had the chance already, please spend some time having a look at this great blog post published by Andreas Forster and Marc Daniau. What Andreas and Marc show us is how you can score an SAP HANA Automated Predictive Library (APL) model in a standalone JavaScript environment outside of SAP HANA.

For the hasty readers: Andreas's and Marc's blog post uses a Jupyter notebook connected to an SAP HANA Cloud instance to train a model, and at the very end the model is saved externally as a JSON file. This JSON file can then be used by a JavaScript scoring runtime to produce new predictions.

What if the Jupyter notebook was hosted in SAP BTP, Kyma Runtime and the JSON file was exported to a volume? That would make the JSON file available to other apps in the cluster (keep that in mind).

Using kubectl and a dockerized Jupyter version (I recommend starting with the image jupyter/minimal-notebook:latest, but you can pick your own) you can deploy a Jupyter notebook as a docker to your SAP BTP, Kyma Runtime trial account. After completing the deployment, you will end up having a Jupyter notebook hosted in Kyma:


 

This notebook is where a Data Scientist can develop and train machine learning models using the Python machine learning client for SAP HANA and SAP HANA Cloud. Just before completing your work, following what Andreas and Marc showed us, save your extracted model to a predefined volume (in this example the path is /var/opt/json and the file name is half_marathon.json).
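
For context, here is a rough sketch of what the training side could look like with the hana_ml APL API. The class name, the export_apply_code call, the connection details, and the table name MARATHON_TRAINING are assumptions based on Andreas's and Marc's post; check your hana_ml version for the exact calls:
# Sketch: train an APL regression model in SAP HANA Cloud and
# obtain its JSON definition (names below are illustrative).
import json
from hana_ml.dataframe import ConnectionContext
from hana_ml.algorithms.apl.gradient_boosting_regression import GradientBoostingRegressor

conn = ConnectionContext(address="<hana-cloud-host>", port=443,
                         user="<user>", password="<password>", encrypt=True)
df = conn.table("MARATHON_TRAINING")  # hypothetical training table

model = GradientBoostingRegressor()
model.fit(df, label="MARATHON_MINUTES", features=["HALFMARATHON_MINUTES"])

# The APL export produces a standalone model definition; parsing it
# here gives the `data` object that is written to the volume below.
data = json.loads(model.export_apply_code(code_type="JSON"))

With the model definition in hand, write it to the mounted volume: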
# Write the extracted APL model to the mounted volume
with open("/var/opt/json/half_marathon.json", "w") as text_file:
    text_file.write(json.dumps(data))

 

Now there is a volume with the extracted JSON files in the SAP BTP, Kyma Runtime environment, each file representing a trained APL model.


What is important to note is that in the deployment.yaml we have configured the volume mount path as /var/opt/json, the same path under which we save the JSON file in the Jupyter notebook. If the two paths do not match, the saving code in the notebook will fail to execute:
spec:
  containers:
    - name: minimal-notebook
      image: jupyter/minimal-notebook:latest
      ports:
        - containerPort: 8888
      volumeMounts:
        - name: hana-json-vm
          mountPath: /var/opt/json
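
The snippet above only shows the volumeMounts side. For completeness, here is a minimal sketch of the matching volumes section and PersistentVolumeClaim; the claim name hana-json-pvc, the storage size, and the access mode are assumptions (sharing one volume between the notebook and the scoring app may require a storage class that supports ReadWriteMany in your cluster):
  volumes:
    - name: hana-json-vm
      persistentVolumeClaim:
        claimName: hana-json-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hana-json-pvc
spec:
  accessModes:
    - ReadWriteOnce   # ReadWriteMany if the pods can land on different nodes
  resources:
    requests:
      storage: 1Gi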

 

What is missing now is a service that can:

  • Accept requests for a pretrained model, along with some input variables (the "new" data)

  • Handle and construct a response

  • Return a prediction


Action 3: Build a Dockerized Node.js Express app


So now it is time to build a Node.js application which is going to:

  1. Accept requests with

    1. A model name (which will be mapped to a JSON file's name)

    2. New variables (input for our model)



  2. Execute the selected ML model

  3. Send back a response with the result


Let's use a toy example to understand the concept: There is a JSON file in a volume with path /var/opt/json/half_marathon.json

This model takes as input your half marathon time record and sends back a prediction of your full marathon time. Ideally, we could send a POST request similar to the one below:
{
  "modelname": "half_marathon",
  "parameters": [
    {
      "variable": "HALFMARATHON_MINUTES",
      "value": 120
    }
  ]
}

and have a Node.js Express route take care of the POST request, similar to the one below:
...
// The elided setup above includes the Express app with a JSON body
// parser, the fs module, and the APL JavaScript scoring runtime
// ("runtime" below) from Andreas's and Marc's blog post.
const VOLUME_PATH = "/var/opt/json/"
...

app.post('/scoreWithJSON', (request, response) => {

  console.log("[SCORING_JSON_APP with POST] with Request: " + JSON.stringify(request.body))

  // Extract the model name and the input variables from the request body
  const _data = request.body;
  const _modelname = _data.modelname;
  const _parameters = _data.parameters;

  // Load the extracted APL model definition (JSON) from the mounted volume
  const _model = fs.readFileSync(VOLUME_PATH + _modelname + ".json");
  const modelDefinition = JSON.parse(_model);

  // Create a scoring engine from the model definition and score the new input
  const autoEngine = runtime.createEngine(modelDefinition);
  const prediction = autoEngine.getScore(_parameters);

  console.log(prediction["score"])
  const r = prediction["score"] + ""  // return the score as a string
  response.status(200).send({ score: r });
});

and get back a response!
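
For a quick test, a minimal Node.js client could look like the sketch below. Node.js 18+ ships a global fetch; the host name is a placeholder for whatever URL your APIRule exposes:
// Minimal test client (sketch); replace the URL with the host
// exposed by your APIRule.
const url = "https://scoring-app.<your-cluster-domain>/scoreWithJSON";

async function main() {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      modelname: "half_marathon",
      parameters: [{ variable: "HALFMARATHON_MINUTES", value: 120 }]
    })
  });
  console.log(await res.json()); // e.g. { score: "..." }
}

main().catch(console.error);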







Disclaimer


This example only scratches the surface of all the technologies mentioned, and there is room for lots of improvements and innovations. A short list off the top of my mind:

  • Use Kyma Functions instead of deploying Node.js apps

  • Add security on APIRule APIs

  • Use Redis deployed in Kyma, instead of Volumes to save/retrieve JSON files

  • Attach an event listener (similarly to the SAP BTP, Kyma Runtime mission) to the extracted APL model and trigger your model from SAP S/4HANA Cloud events.

  • Configure the volume path via configuration instead of hard-coding it in each docker (see the sketch after this list).
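
For the last point, a small sketch of what that could look like in the Node.js app; the environment variable name VOLUME_PATH is an assumption and would be set via the deployment.yaml:
// Read the volume path from the environment, falling back to the default
const VOLUME_PATH = process.env.VOLUME_PATH || "/var/opt/json/";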


Once you know enough, it will be time to answer a harder question:
What change in your business processes could and should you make right now?

Then you will definitely need to take a look at this blog post from Gaurav Abbi and start enhancing your business processes with intelligence.

Outro and Call for Action


This blog post has an estimated reading time of 5 minutes. Just reading it might get you inspired to think of your own innovative use-cases.

However, I would highly recommend getting your hands on each of the 3 actions, and eventually you will have to:

  • Set up SAP BTP, Kyma Runtime

  • Work with your favorite IDE to interact via kubectl with SAP BTP, Kyma Runtime

  • Have a look at the deployment.yaml files and understand how to set up deployments, volumes and APIRules

  • Dockerize a Jupyter notebook

  • Write a small but powerful Node.js application

  • Create a Machine Learning model in SAP HANA Cloud using the Python machine learning client for SAP HANA

  • Connect all the above with a business event trigger


...all of which will keep you busy for more than 5 minutes, but it will pay off with a holistic view of various SAP Business Technology Platform components.

Happy cloud-native-hana-ml-intelligent-business coding!