Metaflow helps data scientists and developers create scalable Machine Learning services and bring them to production faster. For a good overview of how to build and productize ML services with Metaflow, see: https://docs.metaflow.org/introduction/why-metaflow.
The Metaflow Python library for SAP AI Core extends Metaflow's capabilities to run ML training pipelines as Argo Workflows.
In this blog you will learn how to leverage the Metaflow-Argo plugin for generating training templates for SAP AI Core:
You can experiment with local data right in your Python environment and adjust your ML code iteratively.
Once you are ready to run the training pipeline on Kubernetes, generate the Argo template directly from your ML code.
Even the iterations for fine-tuning the pipeline for SAP AI Core are supported by the Metaflow-Argo plugin, which makes productization easier and more fun!
To run the same code both locally and on the K8s cluster, Metaflow stores code packages in an S3 bucket.
Create the config file ~/.metaflowconfig/config.json on your computer and fill in the bucket name registered for your SAP AI Core account:
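A minimal config could look like this (the bucket name and prefix are placeholders; the keys are standard Metaflow configuration options):

```json
{
    "METAFLOW_DEFAULT_DATASTORE": "s3",
    "METAFLOW_DATASTORE_SYSROOT_S3": "s3://<your-bucket-name>/metaflow"
}
```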
Save this metaflow script into a file with the name "trainflow.py".
Develop and test your flow locally, before running it on SAP AI Core's K8s cluster:
python trainflow.py run
Learn more about the Metaflow features in these great tutorials.
Run the Training pipeline in SAP AI Core
The engine for running training pipelines in SAP AI Core is Argo Workflows.
The Metaflow-Argo plugin relieves you from mastering the Argo workflow syntax to write such a workflow in YAML/JSON. The next sections describe how the workflow template can be generated directly from the Python ML code above.
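With the plugin installed, generating the template is a single CLI call. The options shown here are illustrative and may vary across plugin versions (check `python trainflow.py argo create --help`); the secret names are placeholders:

```
python trainflow.py --with=kubernetes:secrets=<object-store-secret> argo create --image-pull-secret=<docker-registry-secret> --only-json > trainflow.json
```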
The Docker images for each step of the training pipeline have to be built and pushed to a Docker registry before starting the workflow. However, Metaflow makes the data scientist's life a lot easier:
When creating the Argo workflow, Metaflow copies your Metaflow script (trainflow.py) to S3 behind the scenes.
When the Argo workflow is started in K8s, a piece of code added to the template copies your Metaflow script from S3 into the Docker container. Thus it runs the same ML code as your local version!
Here is an example Dockerfile, which installs the Metaflow-Argo plugin and the awscli for copying the code packages, so that your Metaflow script can run in the container:
# Generates a Docker image to run Metaflow flows in SAP AI Core
# The base image is an example; use any Python base image that fits your flow
FROM python:3.9-slim

# Install the Metaflow library (fetches your latest Metaflow script at runtime)
RUN pip install sap-ai-core-metaflow awscli && \
    pip install <additional libraries>

# SAP AI Core executes containers only in root-less mode!
RUN mkdir -m 777 /home/user
In the "pip install" section you can add additional libraries required for your ML code.
Build the Docker image using the above Dockerfile and push it to your Docker registry:
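For example (registry, repository, and tag are placeholders for your own values):

```
docker build -t <your-registry>/trainflow:latest .
docker push <your-registry>/trainflow:latest
```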
Note: the class name that you use in the ML code (e.g. "TrainFlow") becomes the executable ID after conversion to lowercase (see the template tag metadata→name). Use exactly the same name for the annotation executables.ai.sap.com/name.
SAP AI Core Secrets
Pulling the Docker images and accessing the object storage (S3) require credentials. These are referenced as secrets in the SAP AI Core template:
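For illustration, the generated template references the Docker registry credentials roughly like this (the secret name is a placeholder you register in SAP AI Core; the object-store secret is typically passed to the steps as a Kubernetes secret, e.g. via --with=kubernetes:secrets=<object-store-secret> when creating the template):

```json
"imagePullSecrets": [
    { "name": "<your-docker-registry-secret>" }
]
```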
Preparing for Production
After the experimentation phase, once the ML code for training the model is finalized, it is no longer necessary to download the ML code into the running container. For production purposes the code can be included in the Docker image with the following procedure:
Generate the training template and search the JSON file for the string "cp s3://". There you can find the address of the code package in the S3 bucket.
Download the code package from S3 to your local computer using the AWS CLI and name the file "job.tar".
Add the Metaflow code package to the Docker image using this Dockerfile:
# Dockerfile for embedding the ML code into a production Docker image for SAP AI Core
# The base image is an example; use the same base as your training image
FROM python:3.9-slim

# Install the Metaflow-Argo plugin (as in the training image)
RUN pip install sap-ai-core-metaflow

RUN mkdir -m 777 /user/home
RUN mkdir -p /user/home/.metaflow
RUN chmod -R 777 /user/home
RUN chown -R 65534:65534 /user/home

COPY job.tar /user/home/job.tar
RUN cd /user/home && tar xf job.tar
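The download step above can be done with the AWS CLI; the exact S3 path is the one you found after "cp s3://" in the generated template (shown here only as a placeholder):

```
aws s3 cp s3://<bucket>/<code-package-path> job.tar
```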
Finally, the --embedded option instructs Metaflow to use the ML code inside the Docker image rather than downloading it from S3:
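Assuming you again write the template to a JSON file, the call could look roughly like this (apart from --embedded, the options are illustrative; reuse whatever options you used for the first template generation):

```
python trainflow.py argo create --embedded --only-json > trainflow.json
```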
This blog post demonstrates how a data scientist can easily run the same ML code both locally and on a K8s cluster. This keeps the effort low when moving code from experimentation to production on SAP AI Core. Debugging in a production environment is impaired by many restrictions; therefore it is important to iron out possible errors before pushing the code to production.
I want to thank Elham and Roman Kindruk (co-developers of the Metaflow-Argo plugin) for their support in writing this post.
Please add your thoughts on this post in the Comments section below.