This functionality is useful when most of the workload runs on the HANA Cloud layer for training or inferencing a machine learning model, and the user wants to provide an API to trigger the training or inference call. This step is typically needed after model development is complete and an endpoint is needed simply to invoke inference.
If the code has significant dependencies on other libraries for data preparation and feature selection, then a full deployment on Kyma is recommended. The SAP-samples kyma-runtime-extension-samples repository provides sample applications that can be used as starters. In these cases the Docker image needs to include the hana-ml Python package.
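For such a full deployment, a minimal Dockerfile sketch might look like the following. The image tag, the extra libraries, and the entry point are illustrative assumptions, not taken from the sample repositories:

```dockerfile
# Illustrative base image; choose a Python version supported by hana-ml
FROM python:3.9-slim

WORKDIR /app

# hana-ml must be part of the image so the ML workload can run
RUN pip install --no-cache-dir hana-ml

# Hypothetical application file and entry point
COPY app.py .
CMD ["python", "app.py"]
```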
The same functionality can also be achieved via kubectl; here we use the Kyma Cockpit to show the ease of creating a serverless function. For a developer-focused workflow there are jumpstart generators for VS Code which enable the same. Here is a link to a complete tutorial from SAP HANA Academy for BTP Serverless Python.
To go through the steps, ensure you already have the following:
You have a SAP HANA Cloud database instance and the credentials required to connect. This can be either a standalone HANA Cloud instance or the HANA Cloud database underlying an SAP Datasphere tenant.
Create Kyma Function to access HANA ML via Python API Client
Choose the Kyma namespace where you would like to deploy this functionality
Add the HANA credentials required to connect (in our example, hanaauth) to the Secrets in Kyma.
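Such a Secret can also be defined declaratively. A minimal sketch is shown below; the key names match the environment variables used in the sample code later in this post, while the values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hanaauth
type: Opaque
stringData:
  HANA_ADDRESS: "<your-instance>.hanacloud.ondemand.com"
  HANA_PORT: "443"
  HANA_USER: "<user>"
  HANA_PASSWORD: "<password>"
```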
Go to Workloads and Create Function. This basic version works with the Function profile XS, but in case you have higher memory or CPU requirements for building (for example, with other additional libraries) or for compute, increase the resource requirements.
Add Environment variables using the Secret
To enable the use of the hana-ml library we need to:
Add hana-ml and shapely to the dependencies.
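In the Dependencies tab this amounts to a requirements.txt along these lines (pinning versions is optional):

```text
hana-ml
shapely
```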
Go to the YAML view of the Function and add runtimeImageOverride: hanaacademy/kyma-faas:python39 after runtime: python39. This is the most critical step: without the runtime override the Kyma Function is not able to build with the hana-ml dependency, because the default base image does not support it.
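The relevant part of the Function's YAML then looks roughly like this (a sketch; the remainder of the spec is omitted):

```yaml
spec:
  runtime: python39
  runtimeImageOverride: hanaacademy/kyma-faas:python39
```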
Here is some sample code to test that the hana-ml connectivity works. The specific ML code required can be added here, depending on whether it is a training or an inference run.
import os
import hana_ml
import hana_ml.dataframe as dataframe

def main(event, context):
    # Connect to SAP HANA Cloud using the credentials injected from the Secret
    conn = dataframe.ConnectionContext(
        address=os.environ.get('HANA_ADDRESS'),
        port=int(os.environ.get('HANA_PORT')),
        user=os.environ.get('HANA_USER'),
        password=os.environ.get('HANA_PASSWORD'),
        encrypt='true')
    print("HANA DB version:", conn.hana_version())
    print("hana_ml version:", hana_ml.__version__)
    message = ('Hello World from the Kyma Function ' + context['function-name']
               + ' running on ' + context['runtime']
               + '! hana_ml version: ' + hana_ml.__version__)
    return message
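If the same endpoint should serve both training and inference, one pattern is to dispatch on the request payload. The sketch below illustrates this; the run_training/run_inference helpers and the mode field are hypothetical and not part of the sample above, and the placeholder bodies would be replaced by actual hana-ml calls:

```python
import json

def run_training(payload):
    # Placeholder: fit a hana-ml model here, e.g. with UnifiedClassification
    return {"status": "training started"}

def run_inference(payload):
    # Placeholder: run a hana-ml predict call on new data here
    return {"status": "inference done"}

def handle(event):
    # The Kyma runtime passes the HTTP body on event['data']; we assume JSON,
    # either already parsed or as a raw string
    payload = event.get("data") or {}
    if isinstance(payload, str):
        payload = json.loads(payload)
    mode = payload.get("mode", "inference")
    if mode == "train":
        return run_training(payload)
    return run_inference(payload)
```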
Create API rule to call the Kyma function for HANA ML
Once the Function is created, we need to add an API Rule to expose it outside the Kyma cluster.
For this go to the Discovery and Network section -> API Rules
Creating the API Rule is straightforward: you can change the Name as you like and provide a Subdomain, which adds a prefix to the host and helps distinguish this service from others you may create on the Kyma cluster.
Create API Rule
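Declaratively, the resulting APIRule resembles the following sketch. The names, host, and service are placeholders, and the exact schema depends on the APIRule API version available on your cluster:

```yaml
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: hana-ml-function
spec:
  host: hana-ml.<your-cluster-domain>
  gateway: kyma-system/kyma-gateway
  service:
    name: hana-ml-function
    port: 80
  rules:
    - path: /.*
      methods: ["GET", "POST"]
      accessStrategies:
        - handler: allow
```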
The above endpoint can now be integrated into the end-user application. Typically you would then add authorization steps, which can be done in Kyma. For example, this endpoint can also be called from SAP Analytics Cloud via the API step in SAC Multi Actions to trigger workloads on SAP HANA.
Testing and Debugging the endpoint
You can test the endpoint by calling it directly from the browser or via Postman, and check the logs of the corresponding pod in the Kyma console.