How to design UX for Explainable AI
In the first part of this series, we covered the very basics of Artificial Intelligence and Machine Learning concepts. In the second part, we explored Explainable AI, its growing need, and the techniques that can be deployed to build trust in the system.
In this part, we will look at building XAI from a UX designer's perspective, with a human-centered approach. Explainability is foundational to responsible AI, and Explainable AI serves as the interface for auditing biases in models. Keep in mind when designing for Explainable AI that there may be more than one user group, and each will have its own set of needs and objectives: for example, achieving competence, fairness, usability, or trust in the model.
Some of the common user groups who may require explainability are:
Question-driven approach
Vera Liao, a Principal Researcher at Microsoft, proposed in her research paper a four-step question-driven approach to designing the user experience for Explainable AI.
Step 1: Question Elicitation
When designing for Explainable AI, it is important to understand the target user group. In general, an explanation is a response to a question. As designers, our job is to identify user needs for Explainable AI in the form of questions.
Use user research to learn what the user group would like to ask an AI application. Additionally, capture the intention behind those questions.
Step 2: Question Analysis
Cluster similar questions to identify priorities, and cluster the intents captured in the previous step in the same way. This approach surfaces the key user requirements for Explainable AI.
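The clustering described above can be sketched in code. The snippet below is a minimal illustration, not part of Liao's method as published: it groups hypothetical elicited questions into coarse categories (such as "Why", "Why not", "How", "What if", in the spirit of Liao's XAI Question Bank) using simple keyword rules. A real study would cluster manually or with NLP tooling.

```python
from collections import defaultdict

# Hypothetical questions gathered during question elicitation (Step 1).
questions = [
    "Why was my loan application rejected?",
    "Why was this transaction flagged and not that one?",
    "How does the model make its decisions?",
    "What if my income were higher?",
    "Why did the model recommend this treatment?",
]

def categorize(question: str) -> str:
    """Assign a question to a coarse category via simple keyword rules."""
    q = question.lower()
    if q.startswith("why") and " not " in q:
        return "Why not"
    if q.startswith("why"):
        return "Why"
    if q.startswith("how"):
        return "How"
    if q.startswith("what if"):
        return "What if"
    return "Other"

# Cluster questions by category; cluster sizes hint at priorities.
clusters = defaultdict(list)
for q in questions:
    clusters[categorize(q)].append(q)

for category, qs in clusters.items():
    print(f"{category}: {len(qs)} question(s)")
```

The relative size of each cluster is one signal for prioritization; the intents captured alongside each question should be clustered the same way.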
Step 3: Mapping Questions to XAI Solutions
Try to map the questions and intents to the XAI techniques and models that can be applied, for example global or local explanations. This is where, as a UX designer, your understanding of XAI from the previous part of this series will be helpful.
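A mapping like this can be captured as a simple lookup table. The sketch below is illustrative only; the specific pairings (feature attribution for "Why" questions, counterfactuals for "Why not" and "What if", global surrogate models for "How") are common associations in the XAI literature, not a definitive mapping.

```python
# Illustrative mapping from question category to (scope, XAI technique).
QUESTION_TO_XAI = {
    "Why": ("local", "feature attribution (e.g. SHAP, LIME)"),
    "Why not": ("local", "contrastive / counterfactual explanation"),
    "What if": ("local", "counterfactual / what-if analysis"),
    "How": ("global", "model-level explanation (e.g. surrogate model)"),
}

def suggest_technique(category: str) -> str:
    """Return a human-readable XAI suggestion for a question category."""
    scope, technique = QUESTION_TO_XAI.get(
        category, ("n/a", "no standard technique; revisit requirements"))
    return f"{scope} explanation via {technique}"

print(suggest_technique("Why"))
print(suggest_technique("How"))
```

Categories that fall outside the table are themselves useful findings: they flag user needs that current XAI techniques may not serve well.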
Step 4: Iterative Design and Evaluation
Once the XAI model to be used is mapped, its capabilities are identified, and these become the basis for defining the user experience. Iterate on the design and evaluate it with users.
Major companies such as Google, Microsoft, and IBM have published AI design guidelines that give importance to explainability. However, I would strongly recommend going through the Explainable AI design guidelines from SAP. They are quite elaborate and are not limited to theory: they also suggest design components and interaction patterns. Most of the principles discussed in this series can be seen put into practice there.
Figure 4: Overview of progressive information disclosure as defined in the SAP Design Guidelines for Explainable AI
Image source: SAP Fiori Design guidelines for Web
Resources and further reads:
Generative AI at SAP – an OpenSAP course
SAP Fiori Design Guidelines for Web
UXAI – A visual introduction to Explainable AI for Designers.
Introduction to Explainable AI: Techniques and Design (By Vera Liao)
Building XAI applications with question-driven user-centered design (Blog on Medium by Vera Liao)
Trustworthy AI: How to make artificial intelligence understandable
People+AI Research (PAIR) Guidebook: AI Design Guideline from Google
AlphaGo documentary on YouTube