Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
Wolfgang_Epting
Product and Topic Expert

The rise of generative artificial intelligence (genAI) tools, such as ChatGPT or Bard, has caught the eye of both the public and industry due to their far-reaching applications, user-friendly interfaces, and intricate capabilities. GenAI's inherent potential to overhaul conventional business practices and drive rapid innovation has led to global conversations that demand broad participation from industry, academia, government, and civil society.

Central to these dialogues is the notion of "data fairness", sometimes called "data equity". It is an endeavor to achieve fairness, reduced bias, accessibility, control, and accountability in the governance of data, upheld by principles of justice, transparency, non-discrimination, and inclusive participation. Although this tenet is solidly founded in human rights and bound up with ongoing concerns about data privacy, protection, ethics, Indigenous data sovereignty, and responsibility, its treatment is essential to fostering trust in digitalization. As such, data fairness and data sovereignty are seen as prime preconditions for a thriving digital society, necessitating a strategic data approach mindful of organizational, technological, and regulatory nuances.

As genAI intersects with data fairness, new challenges crop up, particularly as AI training datasets can contain biases that perpetuate existing social inequalities. It's imperative to proactively audit data and algorithms at all levels of AI development to guarantee genAI tools fairly portray all communities. Given that genAI continues to hasten AI deployment, the need to delve into and develop data fairness frameworks has never been more pressing.
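Auditing a training dataset for bias, as described above, can start with something as simple as comparing historical outcome rates across a sensitive attribute. The following sketch uses only the Python standard library and entirely hypothetical toy data (the field names `group` and `hired` are illustrative, not part of any SAP API):

```python
from collections import Counter

def positive_rate_by_group(records, group_key, label_key):
    """Fraction of positive labels within each sensitive group."""
    totals, positives = Counter(), Counter()
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += int(record[label_key])
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical hiring records: hired = 1 means a positive outcome.
data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

rates = positive_rate_by_group(data, "group", "hired")
# Group A received positive outcomes at 0.75 vs. 0.25 for group B --
# a disparity that a model trained on this data could perpetuate.
print(rates)
```

A large gap like this does not prove discrimination on its own, but it flags exactly the kind of historical pattern that an unaudited model would learn and reproduce.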


SAP introduced enhanced fairness support for decision-making using machine learning models via the Predictive Analysis Library (PAL) for SAP HANA Cloud. This includes a new FairML function that aims to mitigate unfairness in prediction models caused by possible bias in the data set with respect to sensitive features such as gender, race, or age. It is a framework that can utilize other machine learning models or technologies, which makes it quite flexible. Formally, instead of simply minimizing a loss function as regular machine learning algorithms do, FairML imposes constraints on the loss function to force the prediction model to satisfy a predefined fairness criterion.
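One common fairness criterion that such constraints target is demographic parity: the model should produce positive predictions at similar rates across sensitive groups. FairML's actual constrained optimization happens inside PAL, so the sketch below only illustrates the metric being constrained, using plain Python and invented toy predictions (the group labels and prediction vectors are assumptions for illustration):

```python
from collections import defaultdict

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means all groups are treated at equal rates."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(y_pred, groups):
        counts[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / counts[g] for g in counts]
    return max(rates) - min(rates)

# Toy predictions from an unconstrained model vs. a fairness-constrained one.
groups        = ["A", "A", "A", "A", "B", "B", "B", "B"]
unconstrained = [1,   1,   1,   0,   1,   0,   0,   0]
constrained   = [1,   1,   0,   0,   1,   1,   0,   0]

print(demographic_parity_difference(unconstrained, groups))  # 0.5
print(demographic_parity_difference(constrained, groups))    # 0.0
```

A fairness-constrained training procedure, conceptually, trades a small amount of predictive accuracy to push this kind of disparity metric toward zero.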

The FairML function currently supports hybrid gradient boosting tree (HGBT) binary classification and regression models. This support provides a more robust toolset to data scientists and application developers working with SAP HANA Cloud.

The key benefits of this introduction include the ability to:

  • Build machine learning models that actively work to mitigate unfairness concerning sensitive human data. This is intended to decrease disparities, prevent harm, and ensure fairness throughout all stages of decision-making.
  • Avoid any potential bias or unfairness to any group in AI-augmented decisions relating to humans; examples include college admission decisions, job candidate selection, or personal credit evaluations.
  • Comply with stringent AI ethics principles. This technology is designed to act as a safeguard, preventing potential discrimination by AI systems against specific demographic groups.

If you want to find out more about how you can ensure ethical and responsible AI practice using the new capabilities in SAP HANA Cloud, check out the links below: