Artificial Intelligence Blog Posts

Abhishek_Pant01
Advisor
Introduction 

Aviation safety became non-negotiable because stakeholders agreed on a clear definition: keeping humans alive and systems operational. For AI, the definition is less obvious and varies by use case. We define AI safety as systems operating within intended boundaries without causing harm, whether technical, ethical, or societal, as beautifully explained by Bettina Laugwitz in her introductory blog post of this series (Keep the blue side up: Building trust in AI throug... - SAP Community). This means being lawful, ethical, and robust, echoing the principles outlined in global AI governance frameworks such as ISO/IEC 42001 and related regulations such as the EU AI Act (see ISO/IEC 42001:2023 - AI management systems, EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act, and Standards, frameworks, and legislation for artificial intelligence (AI) transparency | AI and Ethics).

A “safe” AI system should: 

  • Comply with laws and standards (LawfulAI)  
  • Respect human rights and agency (EthicalAI)  
  • Perform reliably under expected and edge conditions (RobustAI)  

These pillars form the foundation for engineering transparency in AI systems. Drawing lessons from aviation’s hard-earned safety culture, this first part of a two-part blog post on engineering practices for AI transparency (itself part of a larger series of Responsible AI blog posts on the topic) explores how aviation engineering principles and planning can inform transparency practices within the AI innovation lifecycle (see also the respective checklists in the SAP AI Ethics Handbook). This first part focuses on the early stages of ideation and validation, which can help bring transparency to the center of all our AI innovation planning. The second part covers the subsequent stages: realization, productization, and operations (Part 2).

For “Transparency & Explainability” in AI systems to progress beyond aspirational slogans, they must be engineered into the product, the pipeline, and day-to-day operations, with clear roles for every stakeholder (see Overview of transparency and inspectability mechanisms to achieve accountability of artificial intel... and Evaluating Transparency in the Development of Artificial Intelligence Systems: A Systematic Literatu...). Let's look at how.

AI's Captain and the crew: Stakeholder Requirement Mapping 

In aviation, roles are clearly defined: pilots, air traffic controllers, maintenance crews, and cabin staff each have specific responsibilities and access to information. Pilots see cockpit instruments, not engineering schematics; maintenance teams get technical manuals, not passenger safety cards. This layered approach ensures clarity without unnecessary information overload. Similarly, transparency must be tailored to different roles: engineers need deep technical details, while end users need simple, actionable explanations of the system they are interacting with.

What does it include: A persona mapping exercise should be conducted in the early ideation stage of the AI feature to identify stakeholders (engineers, product owners, compliance teams, end users, etc.) and detail their transparency needs. Role-based access to information should follow the least-privilege principle. Beyond this mapping exercise, accountabilities for transparency checkpoints should be clearly assigned (decided on a case-by-case basis). Questions like who approves product updates, who updates the documentation, and who monitors compliance should be answered by the use case team, and the resulting mapping should be embedded into design documents and backlog items early in the lifecycle.

In practice: An established approach to identifying user roles, users’ objectives, and their (information) needs is conducting early-stage design workshops (e.g., following the SAP AppHaus’ Business AI Explore or Design workshop methodology).
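To make the idea concrete, the persona mapping and least-privilege access described above can be sketched as a simple lookup. This is a minimal illustration only; the role names and artifact names are hypothetical, not part of any SAP API or the actual workshop outputs.

```python
# Illustrative persona-to-transparency mapping (hypothetical roles/artifacts).
# Each role sees only the transparency artifacts it needs.
TRANSPARENCY_NEEDS: dict[str, set[str]] = {
    "engineer":      {"model_card", "data_lineage", "eval_reports"},
    "product_owner": {"model_card", "limitation_summary"},
    "compliance":    {"model_card", "audit_log", "regulatory_mapping"},
    "end_user":      {"plain_language_explanation"},
}

def artifacts_for(role: str) -> set[str]:
    """Return the transparency artifacts a role may access.
    Least privilege: unknown roles get nothing by default."""
    return TRANSPARENCY_NEEDS.get(role, set())

print(sorted(artifacts_for("end_user")))  # simple, actionable explanation only
print(sorted(artifacts_for("visitor")))   # least privilege: empty set
```

In a real system, such a mapping would live in an access-control layer rather than a dictionary, but the principle is the same: the mapping exercise produces the table, and every information surface consults it.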

Navigating tough terrains: Classifying use cases 

For planning appropriate contingency measures, scenarios within the aviation sector are classified into different categories based on weather conditions, passenger load, aircraft readiness, etc. Similarly, it is important to classify AI use cases into different categories early on to decide the extent of transparency measures to be implemented within the use case and to mitigate the associated risks appropriately.

What does it include: Classification based on impact (low, medium, or high, e.g., on individuals), criticality of the sector in which the use case is operationalized (HR and finance being on the higher-criticality side), automation level (fully automated decision-making being highly critical), tool access (especially for agent use cases, in order to avoid cascading harms), regulatory exposure (e.g., healthcare being highly regulated), etc. Based on this classification, for appropriate transparency:

  • Human oversight mechanisms should be appropriately adjusted.  
  • Retention period for the audit logs should be proportional to the impact. 
  • High impact use cases should have rollback mechanisms and hence related documentation. 
  • Impact should be linked to internal responsible AI policies around transparency. 

In practice: At SAP, every AI use case goes through the AI Ethics Review process, which assigns it to a category according to its ethical impact. A detailed blog post on the AI ethics governance process at SAP will be published soon as part of this blog post series on AI transparency. For now, to learn more about how the process works, please refer to the SAP AI Ethics Handbook (chapter “Principles into action”), which describes the categorization criteria and governance setup in more detail (or watch the “Governing AI Ethics at SAP” lesson of the SAP Learning Journey “Putting AI Ethics into Practice at SAP”).

AI's Pre‑flight checklists: Transparency & explainability prerequisites 

Airplanes are never shipped without a fully ticked-off checklist. Similarly, AI features should never be shipped without passing a transparency and explainability checklist. This list should also be passed down the chain to relevant stakeholders, who can use it at different checkpoints, watch for deviations, and document these parameters. The checklist covers not only architectural transparency but also the broader engineering quality checks for transparency and explainability in AI development, which should be provided to relevant stakeholders within the enterprise and to system admins on the customer/user side.

What does it include: A pre-flight checklist for AI transparency should mandate architectural transparency by documenting system purpose, data sources, model type, and limitations; confirm explainability readiness wherever AI influences outcomes; and verify compliance hooks aligned with regulatory anchors like the EU AI Act. This checklist must integrate into the development workflow by linking items to backlog tools and making transparency a release-gate item: no feature ships without proof of appropriate transparency and explainability. It should also be distributed across stakeholders, including architects, QA, product owners, and system admins, with checkpoints for deviations during lifecycle changes (along the lines of the SAP Design System Guidelines). Alongside this, evidence packs should be stored in a central repository linked to project management tools for auditability. Finally, for high-risk AI systems, add enhanced monitoring and troubleshooting documentation for deployment and operations to ensure resilience and compliance.

In practice: Transparency and explainability requirements differ considerably across application contexts, situations, and user personas (as explained in this blog post: Trustworthy AI: How to make artificial intelligenc... - SAP Community). The SAP Fiori Design Guidelines for explainable AI take this variety of expectations into account by differentiating between L1/L2/L3 explanations: Level 1, a minimum explanation for a quick overview; Level 2, a condensed view of the relevant properties, amounts, and contextual information; Level 3, an extended report specifically for advanced users covering details of AI performance, further context, and conditions for monitoring AI operations. Applied to an SAP use case, the need for end-user transparency is very obvious in HR-related contexts, e.g., candidate analysis in recruiting. In the Learning Journey lesson “Ensuring Explainability in HR AI Systems”, our experts share how the team addressed these requirements during implementation.
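The release-gate idea described above can be sketched as a check that every checklist item has linked evidence before a feature ships. This is a minimal sketch under stated assumptions: the item names and evidence-link scheme are hypothetical, not an actual SAP checklist.

```python
# Illustrative release gate: a feature ships only when every checklist
# item has evidence linked in the central repository. Item names are
# hypothetical examples, not an official checklist.
PRE_FLIGHT_CHECKLIST = [
    "system_purpose_documented",
    "data_sources_documented",
    "model_type_and_limitations_documented",
    "explainability_ready",
    "compliance_hooks_verified",
]

def release_gate(evidence: dict[str, str]) -> tuple[bool, list[str]]:
    """Return (passed, missing_items). `evidence` maps each checklist
    item to a link pointing at its stored proof."""
    missing = [item for item in PRE_FLIGHT_CHECKLIST if not evidence.get(item)]
    return (not missing, missing)

ok, missing = release_gate({
    "system_purpose_documented": "https://evidence.example.internal/42",
    "data_sources_documented": "https://evidence.example.internal/43",
})
print(ok, missing)  # gate fails while evidence is incomplete
```

Wired into a CI pipeline or backlog tool, a check like this turns transparency from a best-effort habit into a blocking condition for release, which is exactly the "no feature ships without proof" stance described above.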

Closing 

Transparency isn’t a checklist you tick once; it’s a culture that starts at ideation and validation and continues through every stage of the AI lifecycle. By embedding stakeholder mapping, risk classification, and compliance gates early, we create a foundation where trust is engineered, not promised. These steps ensure that transparency is not an afterthought but a design principle that shapes every decision.

In the next blog post, we’ll move beyond planning and delve deeper into practice, exploring how transparency can be operationalized during the realization, productization, and operations stages, with concrete engineering patterns (Part 2). Together, these two parts outline key engineering practices for integrating transparency into AI systems, aligned with the principles of lawfulness, ethics, and robustness.


Let's keep sharing how AI transparency helps keep_the_blue_side_up

 

Stay tuned for the next part of this blog post, with a further deep dive into the engineering practices around AI transparency!

 

References and further readings

AI Transparency Blogpost Series 

AI Transparency @SAP 

Responsible AI Assets 

Aviation Safety Research 

Regulations and governance frameworks