The SAP Leonardo Machine Learning Foundation is made up of various services that can be accessed via API calls. For a personal project I needed the result values of the respective calls, so I wrote test cases for these service calls. Because a test-driven development (TDD) approach also consists of development, I implemented the corresponding API calls as well and created an NPM module for Node.js as a side product.
Every service on the SAP API Business Hub also offers code snippets for different programming languages. Unfortunately, the code snippets do not contain an implementation for transferring the image data. For this reason, I implemented this Node.js package, which encapsulates the API calls.
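The package can be installed in the usual way, assuming it is published on NPM under the same name used in the require statement of the example below:
npm install sap-leonardo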
The following is an example of object recognition with the Image Classification service, detecting an elephant/tusker:
const leonardo = require('sap-leonardo');

const imageclassification = new leonardo.Imageclassification("apiKey");

imageclassification.classification("elephant-114543_640.jpg")
  .then(body => {
    // print the complete API response
    console.log(JSON.stringify(body, null, " "));
    // pick the highest-ranked prediction
    const firstResult = body.predictions[0].results[0];
    console.log("RESULT:", firstResult.label, firstResult.score);
    // RESULT: tusker 0.7052137851715088
  })
  .catch(err => { console.error(err); });
I use TypeScript as the development language instead of JavaScript. TypeScript is a typed language and generates JavaScript code, which is included in the NPM module. Additionally, the IDE (e.g. Visual Studio Code) has better support for TypeScript, which makes development easier and more fun.
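As a minimal sketch, the same call could also be written in TypeScript with async/await, assuming the module ships its generated type definitions; the class and method names are taken from the JavaScript example above:
import { Imageclassification } from 'sap-leonardo';

async function classify(): Promise<void> {
  const imageclassification = new Imageclassification('apiKey');
  // same call as in the JavaScript example, awaited instead of chained
  const body = await imageclassification.classification('elephant-114543_640.jpg');
  const firstResult = body.predictions[0].results[0];
  console.log('RESULT:', firstResult.label, firstResult.score);
}

classify().catch(err => console.error(err));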
The NPM module has only two (runtime) dependencies on external modules. The
request module is responsible for the communication with the APIs. The
es6-promise module is necessary for asynchronous processing, because there is some delay between request and response; promises also allow several service calls to be chained sequentially. In the
which_shoe example I use three SAP Leonardo services (a sketch of the chaining follows the list):
- The Multi Instance Image Segmentation service finds a shoe within an example image and returns the region of the shoe within the image. This region is then cropped.
- The Image Feature Extraction service calculates the feature vectors for the cropped image as well as for two reference shoes.
- The Similarity Scoring service calculates the similarity within these three feature vectors. The first reference shoe has a similarity of 73% and the second 70%.
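The sequential chaining could look roughly like the following sketch. Only the Imageclassification class is shown verbatim above; the class and method names for the segmentation, feature extraction, and similarity scoring services, as well as the cropShoeRegion helper and the file names, are illustrative assumptions:
const leonardo = require('sap-leonardo');

// NOTE: the class and method names below are illustrative assumptions,
// not verbatim from the package
const segmentation = new leonardo.MultiInstanceImageSegmentation("apiKey");
const featureExtraction = new leonardo.ImageFeatureExtraction("apiKey");
const similarityScoring = new leonardo.SimilarityScoring("apiKey");

segmentation.instanceSegmentor("shoes.jpg")
  // crop the detected shoe region (hypothetical helper, not shown)
  .then(result => cropShoeRegion("shoes.jpg", result))
  // extract feature vectors for the cropped shoe and two reference shoes
  .then(cropped => featureExtraction.featureExtraction([cropped, "reference-1.jpg", "reference-2.jpg"]))
  // compare the three feature vectors
  .then(vectors => similarityScoring.similarityScoring(vectors))
  .then(scores => console.log(scores))
  .catch(err => console.error(err));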
In addition to the other examples, the main focus of this project is on the test cases; as a result, the test coverage is usually 100%. I am interested in the results because I refer to them in a document. If, for example, the machine learning model behind the Image Classification service within SAP Leonardo changes, my test image of an elephant may return a different result. In that case, I can react and adapt the document.
As test framework I use
Mocha together with
Chai. Chai is an assertion library with a behaviour-driven development (BDD) style that allows me to describe test cases as follows:
expect(body).to.have.property('predictions').to.be.an('array').with.length(1);
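Embedded in a Mocha test, such an assertion could look roughly like this sketch, based on the classification example above; the timeout value for the remote call is an assumption:
const { expect } = require('chai');
const leonardo = require('sap-leonardo');

describe('image classification', () => {
  it('detects a tusker in the elephant test image', () => {
    const imageclassification = new leonardo.Imageclassification("apiKey");
    // returning the promise lets Mocha wait for the asynchronous API call
    return imageclassification.classification("elephant-114543_640.jpg")
      .then(body => {
        expect(body).to.have.property('predictions').to.be.an('array').with.length(1);
        expect(body.predictions[0].results[0].label).to.equal('tusker');
      });
  }).timeout(10000); // assumed timeout for the remote service call
});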
To create reasonable test cases, I need corresponding test data, so I searched for appropriate images and texts. For the Multi Instance Image Segmentation service, for example, I found an
image with four objects in it, and the service detects all of them.
At the moment I have implemented 14 services and uploaded the project to
GitHub. I appreciate any feedback, and especially if someone can use this project.
Image credit: chucks-153310_640.png, hiking-shoes-3074971_640.png, converse-2069209_640.jpg (all CC0 licensed)