Supply Chain Management Blogs by SAP
uta_loesch
This blog post presents an example implementation of the algorithm extensibility of SAP Predictive Maintenance and Service (PdMS). It shows how data can be retrieved and processed using the SAP Predictive Maintenance, machine learning extension. After developing the algorithm of interest, we show how it can be deployed to SAP Cloud Platform Cloud Foundry. The code presented here is also available on GitHub [1] and may be used as a template for developing custom algorithms. For documentation of the SAP Predictive Maintenance and Service, machine learning extension, please refer to the corresponding blog post [2].

Use Case


Let's assume we have a pump that we want to monitor to detect abnormal pump behavior. In SAP Predictive Maintenance and Service, the pump's operation mode and its rotational speed are recorded.

To detect such anomalies, a Gaussian kernel density estimator will be used [3]. The use case itself is very simple and could be addressed in many other ways, but the code developed here can serve as a template for using different algorithms with SAP Predictive Maintenance and Service in other use cases as well.

For the sake of the example, the pump is equipped with one sensor that measures the rotational speed, and it has two operating modes. The algorithm should detect whether the pump is operating in one of these two modes; if it is not, it is expected to output a high anomaly score.

The modes and related sensor data can be seen in the following screenshot of the PdMS Indicator Chart:

indicatorchart.png

As can be seen, the readings cluster either around the value of 700 or around the value of 2200, depending on the operation mode.

Exploration


The first step in the development of custom algorithms is some exploratory analysis to understand the sensor data and to try out different algorithms. To this end, a Jupyter notebook may be used for data exploration and experiments. Part of this code will later be transferred into the Python application.

As a result of the exploration, an algorithm is available that is able to detect the anomaly properly, as can be seen in the following image:

anomaly_score.png

In the date range between March 17 and March 24, the pump is operating at a very low rotational speed compared to the normal operating modes. The algorithm has detected this anomaly, as indicated by a high anomaly score.

The algorithm that will be used for detecting abnormal pump behavior is a Gaussian Kernel Density Estimator as provided in the popular scikit-learn Python library [4].
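The kind of exploration described above can be sketched as follows. This is a minimal, hypothetical example, not the blog's actual notebook: the sample values, the bandwidth, and the use of negative log-density as anomaly score are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Simulated readings around the two operating modes (~700 and ~2200)
rng = np.random.default_rng(42)
normal = np.concatenate([
    rng.normal(700, 20, 500),
    rng.normal(2200, 30, 500),
]).reshape(-1, 1)

# Fit a Gaussian kernel density estimator on the "normal" readings
kde = KernelDensity(kernel="gaussian", bandwidth=50.0).fit(normal)

def anomaly_score(values):
    # score_samples returns the log-density; low density -> high anomaly score
    return -kde.score_samples(np.asarray(values).reshape(-1, 1))

print(anomaly_score([700, 2200, 1400]))
```

Readings near one of the two modes receive a low anomaly score, while a reading such as 1400, which lies between the modes, receives a high one.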

Building the application


After deciding on an algorithm, an application should be built which can be used to train and score models on a continuous basis, based on the data available in SAP Predictive Maintenance and Service. Here, we build a Python application which can either run locally or be deployed to SAP Cloud Platform, Cloud Foundry for scheduled scoring.

As the SAP Predictive Maintenance and Service machine learning engine extension requires Java, a runtime environment with Java and Python is needed. In order to achieve that, the Cloud Foundry multi-buildpack is used. To store and load the trained model, AWS S3 is used as persistence.

Used Libraries



  • SAP Predictive Maintenance, machine learning engine extension (mle-connector)

  • pandas

  • numpy

  • scikit-learn

  • boto3


Overview of the Coding


app.py


This file contains the main building blocks of the application for retrieving data, triggering a training or scoring and persisting the scores.

The main function of the application takes one argument which defines the task the application should perform (one of 'train', 'score' or 'score-scheduled').

In the training function, the training data will be collected from SAP Predictive Maintenance and Service and then the training function of the algorithm will be applied to the collected data.

The input data for training or scoring is collected with
mle.collect(ts_from=from_date, ts_to=to_date, dataset=dataset, equipment="2828273CC809415D923E5FBF24542AAF")

where dataset contains the dataset definition, which is defined in the code as well.

After the model has been created it will be persisted to S3.

The scoring function retrieves the trained model from S3, gets the scoring data from PdMS and applies the model's scoring function to the collected data.

Once the scores are computed, they are persisted to the PdMS tenant with the following code:
mapping = [
    {
        "name": "score",
        "templateId": "GH0100304AEFE7A616005E02C64AE887",
        "Indicator": "Anomaly_Score",
        "IndicatorGroup": "Scores"
    }
]
print("Persisting {} rows of data into indicator '{}'".format(len(scores), mapping[0]['Indicator']))
mle.persist(scores, mapping)

The mapping defines the relationship between the scoring column in the DataFrame and the indicator in PdMS. In the example, the content of the column named score in the result DataFrame would be persisted to the indicator named "Anomaly_Score" in the IndicatorGroup "Scores".
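For illustration, a result DataFrame shaped as described above might look as follows. The exact column layout expected by the connector (a timestamp column alongside the "score" column) is an assumption here, as is the sample data.

```python
import pandas as pd

# Hypothetical scoring result: one anomaly score per timestamp,
# with the "score" column mapped to the "Anomaly_Score" indicator.
scores = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-03-17T00:00:00Z", "2019-03-17T01:00:00Z"]),
    "score": [812.4, 997.1],
})
print(len(scores))
```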

 

Other Python Files



  • s3_persistence.py

    • Contains the code to load and save the raw model object in an S3 bucket

    • Uses boto3 library to interact with AWS



  • train_score_helper.py

    • Contains the code to train and score the scikit-learn model.

    • The KernelDensity algorithm with a Gaussian kernel is used as an example.

    • Anomaly scores are capped at a numeric value of 1000.



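A possible shape for these two helper modules is sketched below. The function names and the bandwidth are assumptions for illustration, not the repository's actual code; the capping of anomaly scores at 1000 and the use of boto3 for S3 access come from the blog post.

```python
import pickle

import numpy as np
from sklearn.neighbors import KernelDensity

SCORE_CAP = 1000.0  # anomaly scores are capped at 1000

def train_model(values, bandwidth=50.0):
    """Fit a Gaussian kernel density estimator on the training readings."""
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(
        np.asarray(values, dtype=float).reshape(-1, 1)
    )

def score_model(model, values):
    """Negative log-density as anomaly score, capped at SCORE_CAP."""
    raw = -model.score_samples(np.asarray(values, dtype=float).reshape(-1, 1))
    return np.minimum(raw, SCORE_CAP)

def save_model(s3_client, bucket, key, model):
    """Serialize the model with pickle and upload it to the S3 bucket."""
    s3_client.put_object(Bucket=bucket, Key=key, Body=pickle.dumps(model))

def load_model(s3_client, bucket, key):
    """Download and deserialize a previously saved model."""
    body = s3_client.get_object(Bucket=bucket, Key=key)["Body"].read()
    return pickle.loads(body)
```

The save/load functions expect a boto3 S3 client (e.g. `boto3.client("s3")` built from the Object Store service key).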

Cloud Foundry Related Files



  • apt.yml

    • Configures the apt-buildpack to install openjdk-8-jre



  • manifest.yml

    • Cloud Foundry application manifest

    • Secret environment variables for the connection need to be added to the env section



  • multi-buildpack.yml

    • Multi-buildpack configuration file

    • Configures the python and apt buildpacks



  • requirements.txt

    • Python pip dependencies



  • runtime.txt

    • Defines the Python version (3.6.8)




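As a rough sketch, the two buildpack-related files may look like this; the exact contents depend on the Cloud Foundry environment and are an assumption here, based on the standard apt-buildpack and multi-buildpack conventions.

```yaml
# multi-buildpack.yml -- run the apt buildpack first, then the Python buildpack
buildpacks:
  - https://github.com/cloudfoundry/apt-buildpack
  - python_buildpack

# apt.yml -- install the Java runtime required by the mle connector
---
packages:
  - openjdk-8-jre
```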
Files related to SAP Predictive Maintenance and Service, machine learning extension



  • mle-cli.jar

    • Java library



  • mle-py-connector.tar.gz

    • Pip package




Usage Instructions


Before Use



  • Download the SAP Predictive Maintenance and Service, machine learning engine extension and extract the library into the working directory.

    • mle-cli.jar and mle-py-connector.tar.gz should be present.



  • Get service keys using Cloud Foundry commands:

    • Asset Central Foundation service key

    • Leonardo IoT service key

    • Object Store service key




Run Locally


To run the application locally, use the pip package manager to install the requirements with the command below. Make sure that Python 3.6.x and Java are installed.
$ pip install -r requirements.txt

Set the environment variables AC_KEY (Asset Central Foundation service key), IOT_KEY (Leonardo IoT service key) and S3_KEY (Object Store service key) and start training, scoring or scheduled scoring with:

  • python app.py train

  • python app.py score

  • python app.py score-scheduled


Run On Cloud Foundry


To control the application, the Cloud Foundry tasks framework is used. Before deploying the application, add the values of the service keys to the manifest.yml file. The application can then be deployed with cf push and stopped again with cf stop.

Training can be triggered with cf run-task extensibility-ml-app "python app.py train" and scoring with cf run-task extensibility-ml-app "python app.py score". Scheduled scoring is possible with cf run-task extensibility-ml-app "python app.py score-scheduled". In scheduled mode, the application stays running, polls for new data every 120 seconds and computes the anomaly scores. To stop the scheduled scoring, run cf terminate-task extensibility-ml-app <task-id>.

While training or scoring, the logs can be observed with cf logs extensibility-ml-app.

log-output-scheduled-scoring.png

Conclusion


The provided example shows the steps necessary to transform an algorithm used for training and scoring a model into a Python application deployed on Cloud Foundry, which can continuously score data and provide the results to SAP Predictive Maintenance and Service. By exchanging the training and scoring functions, the application may be adapted to other algorithms.

References


[1] Sample code on Github: https://github.com/SAP-samples/pdms-extensibility-python-example

[2] Blog post giving an overview of SAP Predictive Maintenance and Service, machine learning extension: https://blogs.sap.com/2019/07/30/sap-predictive-maintenance-and-service-machine-learning-engine-exte...

[3] Wikipedia entry describing the algorithm used in the example (Kernel density estimation): https://en.wikipedia.org/wiki/Kernel_density_estimation

[4] Python implementation of Kernel density estimation used in the example: https://scikit-learn.org/stable/modules/density.html
