Machine learning is a transformative technology that continues to change the way data is processed, insights are gained, and businesses are run. However, given how new the field still is, data scientists and machine learning engineers often find themselves with more questions than answers about their data and machine learning models. These may include:
- Is my data “valid,” or fit for training a machine learning model?
- What trends exist in the data?
- Which parts of my data most influence the machine learning model’s learning outcomes?
- Why did the model make that prediction?
At SAP, where we develop enterprise software embedded with machine learning, answering such questions with explainability is becoming a critical part of building trust with customers. Indeed, in products such as SAP Cash Application, where we automate the processing of various financial documents, providing a “why” for machine learning predictions has not only brought transparency to our users but has also helped establish the necessary auditability in our products. Explainability is thus becoming a topic of increasing interest to many in the company, and a group of us have been working on reusable explainability components that others can adopt.
We are therefore excited to announce the release of
contextual AI, SAP’s first open-source machine learning framework focused on adding explainability to various stages of a machine learning pipeline – data, training, and inference – thereby addressing the trust gap between machine learning systems and their end-users.
What does it do?
Contextual AI, which you can find in
this GitHub repository, spans three pillars, or scopes, of explainability, each addressing a different stage of a machine learning solution’s lifecycle. The library provides several features and functionalities for each:
Data:
- Distributional analysis
- Data validation
Training:
- Training performance
- Feature importance
- Per-class explanations
- Simple error analysis
Inference:
- Explanations for individual predictions
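To give a flavour of the last pillar, here is a minimal, self-contained sketch of an explanation for an individual prediction, using the LIME algorithm directly on a scikit-learn model. It only illustrates the kind of per-instance output such explainers produce; it is not the contextual AI API, and the dataset and model here are stand-ins.

```python
# Illustrative only: a standalone LIME explanation for a single prediction.
# This is not the contextual AI API; the dataset and model are placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features pushed the model toward its answer?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```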
Finally, we provide a
compiler, a component that aggregates the outputs of the above functions into a PDF report, which we call the “explainability report.” With just one configuration file, the user can create an end-to-end pipeline that explains the data, training performance, and model prediction behavior. You can find the
full tutorial here.
Figure 1. A sample explainability report (link to PDF)
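To illustrate what “just one configuration file” might look like, the sketch below writes a minimal report configuration and notes, in comments, how the compiler might be invoked. The JSON keys and the Configuration/Controller names are assumptions drawn from the library’s tutorials and may differ between versions, so please refer to the linked tutorial for the exact schema.

```python
# Illustrative sketch only: the JSON keys and the compiler classes mentioned in
# the comments below are assumptions based on the contextual AI tutorials.
import json

# A minimal report configuration describing which sections the PDF should contain.
report_config = {
    "name": "Sample Explainability Report",
    "content_table": True,
    "contents": [
        {"title": "Data Analysis",
         "desc": "Distributional analysis and validation of the training data"},
        {"title": "Training Evaluation",
         "desc": "Training performance and feature importance"},
        {"title": "Prediction Explanations",
         "desc": "Per-instance explanations of model outputs"},
    ],
}

with open("report_config.json", "w") as f:
    json.dump(report_config, f, indent=2)

# The compiler then renders the configured sections into a PDF report,
# roughly along these lines (see the tutorial for the authoritative API):
#   from xai.compiler.base import Configuration, Controller
#   controller = Controller(config=Configuration("report_config.json"))
#   controller.render()
```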
For more examples, do check out the
tutorials hosted in our repository. Explainability is already being integrated into several products, including SAP Data Intelligence and SAP Cash Application, and we hope that more intelligent enterprise software will build trust and transparency through contextual AI.
Who are we?
We are a team of data scientists and engineers from SAP Cloud Platform Business Services who work on contextual AI as a side project. Each of us has been inspired by real customer requirements and engagements in our respective product teams, and that shared experience brought us together to work on this topic. Our mission is to build and promote the use of explainability in more products and services.
Interested in contributing?
By making contextual AI open source, we are opening doors to collaborations from both within and outside of SAP. This will help the library develop further, which in turn will make products that use contextual AI more explainable and transparent. For more information, please visit our GitHub repository. If you like what you see, do give us a star!
For more information about SAP’s efforts in open-source software, please visit our
SAP Open Source landing page. You will find additional open-source projects spanning other areas, including Gardener, a Kubernetes management tool, and Kyma, a connector of enterprise applications and cloud technologies.
Links
Open Source @ SAP:
https://developers.sap.com/open-source.html
SAP repositories on GitHub:
https://github.com/sap
Contextual AI Repository:
https://github.com/SAP/contextual-ai
Documentation:
https://contextual-ai.readthedocs.io/