Artificial Intelligence Blog Posts
BettinaLaugwitz
Product and Topic Expert


Introduction

Flying has been a dream of humankind for ages, and how it could be achieved has occupied the minds of many visionaries and engineers. Part of the fascination lies in humans pushing the boundaries of what is physically possible. But physical laws remain in place, and accidents can end fatally when something goes wrong – given altitudes of dozens or even thousands of meters, and the speeds needed for takeoff and landing. How much risk is the dream worth?

On December 17, 1903, Orville Wright successfully conducted his historic flight in a powered airplane. Just a few days earlier, Orville and his brother Wilbur had crashed during a test flight – one of the first documented aviation accidents in history.

Today, many decades later, flying is considered one of the safest ways to travel (measured by distance traveled, see https://en.wikipedia.org/wiki/Aviation_safety) – thanks to curious engineers, lessons learned from accidents and incidents, scientific progress, the publication of relevant regulations and industry standards, and of course an ever-attentive approach to safety and security checks and improvements. We can trust planes to be safe – not per se (the owl Archimedes has a point here: https://www.youtube.com/watch?v=KLI46yMdtmU) but because we have found ways to make them safe.

The fascination with machines that work similarly to the human mind has been around for ages. The potential of Artificial Intelligence (AI) to add efficiency, creativity, and inspiration to people's lives is immense. But the nature of the technology also brings specific risks and can be scary: machines that reason and learn imply a loss of human control – a challenge that is not only (science) fiction (HAL 9000 being just one example; for more see e.g. https://en.wikipedia.org/wiki/Artificial_intelligence_in_fiction) but reality (e.g. the AIAAIC Incident Repository, the Awful AI list, or https://incidentdatabase.ai/).

Artificial intelligence still has a long way to go before it can be considered a mature technology that is as safe as air traffic has become. When it comes to AI today and going forward, we definitely want to trust, but verify: What are the implications of implementing a specific AI system? What is the system doing, and why? In case of incidents, what caused the issue? Iteratively improving technical standards and procedural governance, while considering ethical and legal requirements, will enhance safety for people, the environment, and society going forward.

Every framework that discusses how to make AI systems safe to use includes a section on transparency – including the UNESCO Recommendation on the Ethics of AI, SAP's Guiding Principles for Artificial Intelligence, and the Ethics Guidelines for Trustworthy AI by the High-Level Expert Group on Artificial Intelligence (HLEG AI Ethics Guidelines).

According to the HLEG AI Ethics Guidelines, “[t]rustworthy AI has three components, which should be met throughout the system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations (2) it should be ethical, ensuring adherence to ethical principles and values, and (3) it should be robust, both from a technical and social perspective.” Transparency requirements play a role in all three components, so let’s use these angles to have a closer look.

Lawful: Transparency requirements in regulatory and legal areas

In the 1920s, barnstorming and so-called flying circuses were very popular: stunt shows in which daredevil pilots and acrobats executed spectacular and risky maneuvers with the plane (loops etc.) and on the plane (wing walking). Early legislation on civil air transportation addressed the most common causes of fatal accidents, including a lack of pilot qualification and stunt practices that put spectators' lives at risk. Over the years, laws intended to improve aviation safety – and with it, public trust in the technology – were passed, updated, and complemented, with the Air Commerce Act of 1926 as an early example and Commission Implementing Regulation (EU) 2025/920 certainly not the last.

Defining and implementing regulation for a technology is inherently slower than the technology itself. In the context of AI, some laws that provide important guardrails to protect individuals have fortunately been in place for some time, while newer regulation such as the California AI Transparency Act (AB 853) targets the specific risks and challenges related to AI.

Individuals and legal entities have a need for – and in part also have, or will have, a right to – transparency when it comes to decisions and AI. Valid questions of stakeholders include: What role does AI play in a decision that may affect my life going forward, and am I okay with that? Can I make an informed decision when I give consent to using an AI system as an individual (see @GiselleGomezJolis' related blogpost)? Do I have the transparency I need to diagnose the functioning or malfunctioning of an AI system, so that we can learn from issues and fix them going forward? How can Terms & Conditions help me understand the scope and limitations of AI systems?

Robust: Making AI systems secure and robust

Aircraft are subject to thorough inspections based on sophisticated protocols that prove their airworthiness before they are permitted to operate. Insights from past incidents, e.g. through flight recorder data, are an important source of input for improvements to hardware, software, or procedures, helping prevent future damage and harm.

Software systems need to be reliable in the sense that they work as designed and that the potential risks related to their usage are minimized. Quality management, software security efforts, and governance in general play a critical role in creating robust software.

The same applies to the development of AI systems – plus some extra aspects that have to do with the specific nature of those systems, in particular when they involve machine learning capabilities or generate output autonomously (for definitions and distinctions of AI / ML / genAI / agentic AI see e.g. the SAP AI Ethics Handbook).

What role does transparency play? Without transparency, for example, testing an AI system's robustness and reliability is a critical challenge. Many AI systems are effectively 'black boxes' – how can we even understand what is going on inside the system, and whether it matches our expectations in terms of reliability, security, and safety?

Security testing of AI systems should not only look at the outcome or effect, but ideally have a certain level of insight into the processes and methods inside the system, to identify root causes and potential systematic threats. Benchmarking of models or AI systems helps understand and 'diagnose' their quality and reliability and trigger mitigation as needed; runtime monitoring and traceability help prevent unintended and unwanted shifts in an AI system's performance.
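To make the benchmarking-and-monitoring idea concrete, here is a minimal, hypothetical sketch. The toy classifier, evaluation set, baseline, and threshold are all illustrative assumptions – not a specific SAP tool or framework – but they show the pattern: score the system against a fixed benchmark, and alert a human when performance drifts.

```python
# Sketch of benchmark-based runtime monitoring for an AI system.
# All names (toy_model, run_benchmark, ALERT_THRESHOLD) are illustrative.

def toy_model(text: str) -> str:
    """Stand-in classifier: flags a message as 'spam' if it contains 'free'."""
    return "spam" if "free" in text.lower() else "ham"

def run_benchmark(model, labeled_cases) -> float:
    """Score the model on a fixed, versioned evaluation set and return
    accuracy, so successive runs are directly comparable."""
    correct = sum(1 for text, expected in labeled_cases if model(text) == expected)
    return correct / len(labeled_cases)

BENCHMARK = [
    ("Free vacation, click now", "spam"),
    ("Meeting moved to 3pm", "ham"),
    ("You won a FREE prize", "spam"),
    ("Lunch tomorrow?", "ham"),
]

BASELINE_ACCURACY = 1.0   # accuracy recorded at release time
ALERT_THRESHOLD = 0.1     # tolerated drop before raising an alert

def check_for_drift(model) -> bool:
    """Re-run the benchmark (e.g., after a model update) and flag
    unintended performance shifts for human review."""
    accuracy = run_benchmark(model, BENCHMARK)
    drifted = BASELINE_ACCURACY - accuracy > ALERT_THRESHOLD
    if drifted:
        print(f"ALERT: accuracy dropped to {accuracy:.2f} - trigger mitigation")
    return drifted
```

In a real system, the benchmark set, baseline, and each monitoring run would themselves be logged and versioned – that traceability is what lets engineers diagnose why performance shifted, not just that it did.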

Software companies can leverage guidelines and recommendations (e.g. the OWASP AI Testing Guide) and applicable industry standards (e.g. ISO 9001 Quality Management Systems, ISO/IEC 27001 Information Security Management Systems), and adjust and complement them according to the specifics of, say, the industry or the regional context. Documenting compliance with relevant standards, including the NIST AI Risk Management Framework (AI RMF) and ISO/IEC 42001 AI Management Systems, helps build trust in the responsible development of AI systems among customers (see e.g. the SAP Trust Center) and the general public.

Ethical: Human oversight

How did airplane engineers come to invent seatbelts for pilots and passengers, and why was there a willingness to adjust law and regulation to reduce accidents and harm related to air traffic? Costs related to damages and injuries may have played a role, but without doubt also empathy and ethical considerations. Situations in which people lose loved ones, suffer severe injury, or face damage to their property are to be avoided, and there is a strong motivation to reduce the risks leading to them and to strengthen public trust in the technology and its operation.

Yes, there are AI-related legal requirements for transparency, and technical robustness is a prerequisite for AI systems that can be trusted. But relying only on current or upcoming (known) regulation when working on AI safety is not a sustainable approach: technology is evolving too fast for regulatory processes to keep up and put legal boundaries in place. Thinking only in terms of current rules and the potential gaps between them turns legally compliant innovation into a jigsaw puzzle with moving parts: legal gaps you designed for may close in due course, and every new regulation poses a threat to the products in the portfolio or on the roadmap.

The ethical perspective, on the other hand, shifts the focus from legal texts and the fear of being sued towards a vision of what we want the technology to do and what needs to be avoided. AI ethics principles help inform legislation, but also provide guidance for technical innovation:

Not just do what is feasible, but think what is acceptable and desirable going forward, for us as individuals, for us as a company, and for us as a society.

One example: SAP committed to fairness in its Guiding Principles for Artificial Intelligence in 2018, and the HLEG AI Ethics Guidelines named fairness as an ethical imperative for trustworthy AI in 2019. The EU AI Act entered into force in 2024, with certain provisions applying immediately and others phased in over time, reaching full applicability by 2027.

Given the risks related to AI, we want to ensure that humans control AI systems as needed – not the other way around. A prerequisite for human agency and oversight is transparency: you can't control what you don't know or understand. Measures like explainability, interpretability, and traceability are required to allow humans, in their respective capacities and interactions with AI systems, to stay in control, acquire knowledge, and gain trust and confidence that the AI is not creating harm – to individuals, society, or the environment.
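As a toy illustration of what explainability can mean in practice, the following sketch surfaces per-feature attributions for a simple linear scoring model. The scenario, feature names, and weights are invented for illustration; real systems would rely on established techniques such as SHAP or LIME, but the goal is the same – show a human reviewer *why* the system produced a given score.

```python
# Sketch: explainability via per-feature attributions for a linear model.
# WEIGHTS and the applicant data are illustrative assumptions.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score(applicant: dict) -> float:
    """Opaque-looking decision: e.g., approve if the weighted sum exceeds 0."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Break the score down into each feature's contribution, so a human
    can see which inputs drove the decision and challenge it if needed."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 4.0, "debt": 3.0, "years_employed": 2.0}
print("score:", round(score(applicant), 2))
for feature, contribution in sorted(explain(applicant).items(),
                                    key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {contribution:+.1f}")
```

Here the breakdown would reveal that debt is the dominant (negative) factor – exactly the kind of information a human overseer needs to judge whether the system's reasoning is acceptable.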

Different perspectives on transparency for responsible AI

The blogpost series will take the perspective of different stakeholders in the lifecycle of AI systems, including those who are "designing, developing, deploying, implementing, using or being affected by AI" (HLEG AI Ethics Guidelines). SAP experts will shed light on different aspects and approaches – from a legal, a technical, or an ethical viewpoint – providing insights into challenges and solutions, considering organizational practices as well as user needs and customer expectations. Examples from real SAP use cases will illustrate the practical relevance and the efforts SAP is making to build AI systems responsibly and to create transparency, so that stakeholders are enabled and empowered to understand risks and benefits and to exercise human oversight.

This blogpost will be updated with references whenever a new post is available to make it easy to get notified and find the information you are interested in.

Closing

When on an airplane, in most situations you would want to know that the skies are above you and the ground is beneath you (exceptions include flying circuses, see above). In any case, at least the pilot should know about the current orientation of the aircraft, even if weather conditions are blocking the sight of the actual surroundings. If the physical horizon is not visible, the attitude indicator, an instrument mimicking the blue skies above an artificial horizon and a brown ground below, will help adjust the plane’s orientation. Blue side up, brown side down means that the plane is not going astray, neither upwards nor downwards – a means to keep orientation and an indicator of aviation safety.

We trust airplanes not because they are perfect and unbreakable, but because we know their challenges and weaknesses, what to check, what to look at, and how to optimize human-machine collaboration. And we have found ways to surface the information that is needed – by pilots, by engineers, by passengers. In other words: we leverage transparency to foster safety.

Meaningful transparency is ‘in the eye of the beholder’. Its design needs a human-centered approach to provide just the right angle, level, and amount of information that is relevant and helpful for individuals to interpret the situation, to “make sense, or derive meaning, from a given stimulus […] so that the human can make a decision” (Broniatowski, 2021) – a prerequisite for human oversight and trustworthy AI.


 

Let us share how AI transparency helps keep_the_blue_side_up

 

 

 

 

More blogposts about Responsible AI at SAP

Transparency 

Fairness and consistency

Compliance

 

More references and further readings

Responsible AI challenges

Responsible AI guidelines

Lawful

Robust

Ethical

 

2 Comments
Pierre_Col
Active Contributor

Good analogy, thanks for this post @BettinaLaugwitz.

BettinaLaugwitz
Product and Topic Expert

🔔 More Transparency blog posts available

Learn more about what to consider when engineering AI transparency as summarized by @Abhishek_Pant01 and join us for a legal and ethical perspective on AI Consent discussed by @GiselleGomezJolis.

Please find the links in the More blogposts about Responsible AI at SAP section above.
🛫 keep_the_blue_side_up