The rise of generative artificial intelligence (genAI) tools, such as ChatGPT or Bard, has caught the eye of both the public and industry due to their far-reaching applications, user-friendly interfaces, and intricate capabilities. GenAI's inherent potential to overhaul conventional business practices and drive rapid innovation has led to global conversations that demand broad participation from entities spanning industry, academia, government, and civil society.
Central to these dialogues is the notion of “data fairness”, sometimes called “data equity”: an endeavor to achieve fairness, reduced bias, accessibility, control, and accountability in the governance of data, upheld by principles of justice, transparency, non-discrimination, and inclusive participation. Though it is an existing tenet solidly founded in human rights and bound up with ongoing concerns about data privacy, protection, ethics, Indigenous data sovereignty, and responsibility, its treatment is essential to fostering trust in digitalization. As such, data fairness and data sovereignty are seen as prime preconditions for a thriving digital society, necessitating a strategic data approach mindful of organizational, technological, and regulatory nuances.
As genAI intersects with data fairness, new challenges arise, particularly because AI training datasets can contain biases that perpetuate existing social inequalities. It is imperative to proactively audit data and algorithms at all stages of AI development to ensure that genAI tools represent all communities fairly. Given that genAI continues to accelerate AI deployment, the need to explore and develop data fairness frameworks has never been more pressing.
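One common audit of this kind compares a model's positive-prediction ("selection") rates across sensitive groups; a large gap is a signal of potential demographic disparity. The sketch below is purely illustrative (it is not part of SAP PAL and the function names are hypothetical):

```python
# Minimal bias-audit sketch (illustrative; function names are hypothetical,
# not the SAP PAL API): per-group selection rates and their maximum gap,
# often called the demographic parity difference.
from collections import defaultdict

def selection_rates(groups, predictions):
    """Positive-prediction rate for each sensitive group."""
    pos, total = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        total[g] += 1
        pos[g] += int(p)
    return {g: pos[g] / total[g] for g in total}

def demographic_parity_difference(groups, predictions):
    """Gap between the highest and lowest group selection rates."""
    rates = selection_rates(groups, predictions)
    return max(rates.values()) - min(rates.values())

groups = ["a", "a", "a", "b", "b", "b"]
preds  = [1, 1, 0, 1, 0, 0]
rates = selection_rates(groups, preds)              # a: 2/3, b: 1/3
gap = demographic_parity_difference(groups, preds)  # 1/3
```

A gap of zero would mean both groups receive positive predictions at the same rate; auditing such metrics before and after training is one practical way to surface bias early.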
SAP introduced enhanced fairness support for machine-learning-based decision-making via the Predictive Analysis Library (PAL) for SAP HANA Cloud. This includes a new FairML function that aims to mitigate unfairness in prediction models caused by possible bias in the data set with respect to sensitive features such as race, age, and so on. It is a framework that can utilize other machine learning models or technologies, which makes it quite flexible. Formally, instead of simply optimizing a loss function as regular machine learning algorithms do, FairML imposes constraints on the loss function to force the prediction model to satisfy a predefined notion of fairness.
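To make the "constrained loss" idea concrete, here is a self-contained conceptual sketch. It is NOT the PAL FairML implementation or API; it trains a plain logistic regression on synthetic data while adding a penalty (a soft constraint) that pushes the model's average scores for two sensitive groups toward each other:

```python
# Conceptual sketch only -- not SAP PAL code. Ordinary log-loss plus a
# soft demographic-parity constraint: lam * (mean score gap between groups)^2.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, group, lam=0.0, lr=0.05, epochs=1000):
    """Gradient descent on log-loss + lam * (score gap between groups)^2."""
    w = np.zeros(X.shape[1])
    g0, g1 = group == 0, group == 1
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)              # plain log-loss gradient
        gap = p[g0].mean() - p[g1].mean()          # demographic-parity gap
        # gradient of the mean-score gap w.r.t. the weights
        dgap = (X[g0] * (p[g0] * (1 - p[g0]))[:, None]).mean(axis=0) \
             - (X[g1] * (p[g1] * (1 - p[g1]))[:, None]).mean(axis=0)
        w -= lr * (grad + 2.0 * lam * gap * dgap)  # penalized update
    return w

# Synthetic data where the feature is shifted by group membership,
# so an unconstrained model scores the two groups very differently.
rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)
x = rng.normal(size=n) + group
X = np.column_stack([x, np.ones(n)])
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def parity_gap(w):
    p = sigmoid(X @ w)
    return abs(p[group == 0].mean() - p[group == 1].mean())

gap_plain = parity_gap(train_logreg(X, y, group, lam=0.0))
gap_fair  = parity_gap(train_logreg(X, y, group, lam=10.0))
# The fairness penalty shrinks the score gap between the two groups.
```

The trade-off is explicit in the update rule: the larger the penalty weight, the more the model sacrifices raw loss to close the gap between groups, which is the essence of fairness-constrained training.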
The FairML function currently supports hybrid gradient boosting tree (HGBT) binary classification and regression models. This broadened support provides a more robust toolset to data scientists and application developers working with SAP HANA Cloud.
The key benefit of this introduction is the ability to build fairness-aware HGBT classification and regression models directly in SAP HANA Cloud.
If you want to find out more about how you can ensure ethical and responsible AI practice using the new capabilities in SAP HANA Cloud, check out the links below: