Technology Blogs by Members
sudipghosh
Hello All,

Welcome to Part 2 of Developing Enterprise AI Extensions. In the first part, I discussed an overview of the different types of AI you can infuse into your enterprise. In this blog I will show you how to create a custom machine learning based image classification model with Teachable Machine (a no-code machine learning model development platform), how to dockerize it, and finally how to deploy that model to the SAP Cloud Platform Kyma runtime.

 

What Is Teachable Machine?

Teachable Machine is a web-based tool where you can train and create custom machine learning models for image, sound, and pose classification. Cool, isn't it? The best part is that even non machine learning developers can create a model, train it with their own data, and export it as a saved model, Keras, or TensorFlow.js, then create a REST API out of it or embed it into an application.

You can find more information here.



Let's get into action


We will create three materials in SAP S/4HANA Cloud for iPhone X, iPhone 11 Pro, and iPhone 6, build a custom TensorFlow based image classification model for those materials in Teachable Machine, and finally embed that model in a Flask based application and dockerize it for deployment to Kyma.


Now, in order to train and build the machine learning model, create a training data set and upload it as below. I have created three classes accordingly.


After that, click on Train Model to train the model, and then export it in TensorFlow Keras format as below by clicking the Download my model button.


Now extract the ZIP archive; you will find the Keras model and labels.txt.


These two files are very important: we need both of them to build the Python Flask based application.

Embedding the Machine Learning Model in a Python Flask Application


Everyone needs a REST API, don't they? Ultimately it can be integrated anywhere, such as a UI application or a Conversational AI based application.

As this model is based on Keras, in order to expose it as a REST API we need to create a Flask based application and wire the generated model into it.

Let's open Visual Studio Code (my favourite IDE) and create the application structure as below.

 


Below are the code snippets for each file.

app.py


import json
import os
import io

# Imports for the REST API
from flask import Flask, request, jsonify

# Imports for image processing
from PIL import Image

# Imports for prediction
from predict import predict_url

app = Flask(__name__)

# 4MB max image size limit
app.config['MAX_CONTENT_LENGTH'] = 4 * 1024 * 1024


@app.route('/')
def index():
    return 'GET Methods are Not Allowed'


@app.route('/image', methods=['POST'])
def predict_url_handler():
    try:
        # The request body is JSON with a single "url" field
        image_url = json.loads(request.get_data().decode('utf-8'))['url']
        results = predict_url(image_url)
        return jsonify(results)
    except Exception as e:
        print('EXCEPTION:', str(e))
        return 'Error processing image', 500


if __name__ == '__main__':
    # Run the server
    app.run(host='0.0.0.0', port=80)
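The /image handler expects a JSON body with a single url field. As a quick sanity check (a sketch separate from the app itself; the URL is just a placeholder), you can round-trip a payload the same way the handler decodes it:

```python
import json

# Build the request body exactly as the /image handler expects it:
# a JSON object with a single "url" key (placeholder URL).
body = json.dumps({'url': 'https://example.com/iphone.jpg'}).encode('utf-8')

# This mirrors the decoding line in predict_url_handler()
image_url = json.loads(body.decode('utf-8'))['url']
print(image_url)  # https://example.com/iphone.jpg
```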


 

labels.txt


MZ-PROC-IT-IP-0032
MZ-PROC-IT-IP-0035
MZ-PROC-IT-IP-0100
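labels.txt contains one material number per line, in the same order as the classes defined in Teachable Machine. predict.py simply strips each line; here is a minimal sketch of that parsing, with the file contents inlined as a string for illustration:

```python
# Sketch: parse labels the same way predict.py does (one label per line).
labels_text = "MZ-PROC-IT-IP-0032\nMZ-PROC-IT-IP-0035\nMZ-PROC-IT-IP-0100\n"

labels = [line.strip() for line in labels_text.splitlines()]
print(labels)  # ['MZ-PROC-IT-IP-0032', 'MZ-PROC-IT-IP-0035', 'MZ-PROC-IT-IP-0100']
```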

predict.py


import tensorflow.keras
from urllib.request import Request, urlopen
from PIL import Image, ImageOps
import numpy as np
import ssl


def predict_url(imageUrl):
    """
    Predicts the image at the given URL.
    """
    ssl._create_default_https_context = ssl._create_unverified_context
    imgrequest = Request(imageUrl, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(imgrequest) as testImage:
        image = Image.open(testImage)
        return predict_image(image)


def predict_image(image):
    # code snippet from Teachable Machine start ----------------------------
    # Disable scientific notation for clarity
    np.set_printoptions(suppress=True)

    # Load the model
    model = tensorflow.keras.models.load_model('model.h5')

    # Create the array of the right shape to feed into the Keras model.
    # The 'length' or number of images you can put into the array is
    # determined by the first position in the shape tuple, in this case 1.
    data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)

    # Resize the image to 224x224 with the same strategy as in TM2:
    # resize to be at least 224x224 and then crop from the center
    size = (224, 224)
    image = ImageOps.fit(image, size, Image.ANTIALIAS)

    # Turn the image into a numpy array
    image_array = np.asarray(image)

    # Normalize the image
    normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1

    # Load the image into the array
    data[0] = normalized_image_array

    # Run the inference
    predictions = model.predict(data)
    # code snippet from Teachable Machine end ------------------------------

    # Map each prediction to its label from labels.txt
    with open('labels.txt', 'rt') as lf:
        labels = [l.strip() for l in lf.readlines()]

    result = []
    for p, label in zip(predictions[0], labels):
        result.append({
            'tagName': label,
            # Cast to a plain float so the value is JSON serializable
            'probability': float(p) * 100
        })

    return {'predictions': result}
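Note the normalization step above: raw 8-bit pixel values (0 to 255) are mapped into roughly the [-1, 1] range the Teachable Machine model was trained on. A minimal sketch of just that arithmetic (pure Python, no model needed):

```python
# Sketch: the normalization used in predict_image() maps raw pixel
# values (0..255) into roughly the [-1, 1] range.
def normalize(pixel):
    return (pixel / 127.0) - 1

print(normalize(0))    # -1.0
print(normalize(127))  # 0.0
print(normalize(255))  # ~1.008 (slightly above 1, since 255/127 > 2)
```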

Dockerfile


FROM python:3.7-slim

RUN pip install -U pip
RUN pip install --no-cache-dir numpy~=1.17.5 tensorflow~=2.4.0 flask~=1.1.2 pillow~=7.2.0

COPY app /app



# Expose the port
EXPOSE 80

# Set the working directory
WORKDIR /app

# Run the flask server for the endpoints
CMD python -u app.py

Building Docker Image


docker build -t codersudip/tmachineofficesupply:aarini .



Running the Container Locally


docker run -p 127.0.0.1:80:80 -d codersudip/tmachineofficesupply:aarini


 

Testing in Postman


In order to test, we need an image from Google. I tested with an iPhone 6s picture, and I must say the result is pretty good in terms of accuracy. (If you remember, MZ-PROC-IT-IP-0032 was the material we created for iPhone 6 in SAP S/4HANA Cloud.)
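Postman aside, the response shape returned by predict.py makes it easy to pick the winning material in code. A small sketch (the probabilities here are illustrative, not real model output):

```python
# Sketch: given a response in the shape predict.py returns,
# pick the most likely material number (numbers are illustrative).
response = {
    'predictions': [
        {'tagName': 'MZ-PROC-IT-IP-0032', 'probability': 97.2},
        {'tagName': 'MZ-PROC-IT-IP-0035', 'probability': 1.9},
        {'tagName': 'MZ-PROC-IT-IP-0100', 'probability': 0.9},
    ]
}

best = max(response['predictions'], key=lambda p: p['probability'])
print(best['tagName'])  # MZ-PROC-IT-IP-0032
```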



 

Now Deploying and Running in Kyma


In order to run and deploy this machine learning model in the SAP Cloud Platform Kyma runtime, first push the image to Docker Hub.

docker push codersudip/tmachineofficesupply:aarini

 

Once it is pushed to Docker Hub, you are ready to deploy into Kyma. For that, you need to enable the Kyma runtime.

In order to enable the Kyma runtime, please follow this SAP Developer Tutorial by kevin.muessig.

Pre-requisite: install kubectl and get your kubeconfig.


 

Set the kubeconfig path.
export KUBECONFIG=kubeconfig.yml


 

Create the deployment


kubectl create deployment --image=codersudip/tmachineofficesupply:aarini officesupplytm



kubectl set env deployment/officesupplytm DOMAIN=cluster



Expose the service


kubectl expose deployment officesupplytm --port=80 --name=officesupplytm



Now go to Kyma and look at the deployments; you can see the deployment we just made.



Now go to Services and expose it as an API.

 


Click on Expose Service



For this scenario I am not showing how to secure this API, because someone has already written a blog on it. I am going to share that blog here instead 🙂

Please Follow Kyma for Dymmies [2]: First Simple Microservice with Security by carlos.roggan


 

Testing this Kyma based ML microservice in Postman.



 

I have also done a YouTube podcast session on this topic; you can watch it if you would like to.



That's it for today. I hope you really enjoyed this blog. In the next blog I will show how we can integrate this with Conversational AI and finally with SAP S/4HANA Cloud, to create an image based buying feature for SAP S/4HANA Cloud from a WhatsApp based conversational AI.

In the meanwhile, have a good read, share this with your friends, and enjoy the weekend.

 

Regards,

Sudip