Artificial Intelligence and Machine Learning Blogs

L_Skorwider

Sapdalf, a Senior SAP Consultant, a true Wizard of the IT World

The revolution is coming! It is closer than most people think. I have no doubt that artificial intelligence will change the world, society, the job market, and our lives. You can disagree about the scale of the change, the scope of the opportunities, and the seriousness of the threats, but it is hard to deny that change itself is inevitable. This is beyond any reasonable doubt.

I was inspired to write this post by the "AI Ethics at SAP (Update Q4/2023)" training course available on the excellent openSAP e-learning platform. The course is presented by Dr. Sebastian Wieczorek, who is responsible for the AI ethics initiative at SAP. It takes a holistic view of various aspects of AI ethics, mainly in the context of business projects, and shows SAP's approach to the matter. SAP encourages partners and customers to follow this approach as well, providing an operational methodology called the "AI Factory". The course also presents a risk assessment for AI use cases, which can be very helpful in classifying AI-related projects into risk categories, and it lays out SAP's ethical principles.
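
To make the idea of risk categories more concrete, here is a minimal sketch of how such a classification could look in code. The criteria, category names, and scoring below are my own illustrative assumptions, not the actual AI Factory methodology or SAP's risk model.

```python
# Hypothetical sketch of a risk classification for AI use cases.
# The criteria and categories are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    processes_personal_data: bool   # does the model see personal data?
    automates_decisions: bool       # does it decide without human review?
    affects_individuals: bool       # can outcomes harm specific people?

def risk_category(uc: AIUseCase) -> str:
    """Map a use case to a coarse risk tier by counting risk criteria."""
    score = sum([uc.processes_personal_data,
                 uc.automates_decisions,
                 uc.affects_individuals])
    return {0: "low", 1: "medium", 2: "high", 3: "critical"}[score]

# Example: an AI-assisted hiring tool touches all three criteria.
print(risk_category(AIUseCase(True, True, True)))  # critical
```

Real assessments weigh many more dimensions, of course, but even a crude tiering like this forces a project to state what is at stake before any model is built.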

The training also includes a six-minute chapter on the ethics of Generative AI. It has been "glued", so to speak, onto the existing training from the "era of classical AI". Most of this chapter is devoted to showing the potential problems associated with generative AI and large language models; only the last half minute previews approaches to mitigating these risks. In my opinion, this fully illustrates the heart of the problem we face. And don't get me wrong, this is not an accusation directed toward SAP, or at least not exclusively in that direction.

What do I mean by classical AI? I don't know if everyone realizes how rapid the development of artificial intelligence has been over the past year. What we call large language models and generative AI is a huge qualitative change from what artificial intelligence offered before. Of course, people were already fascinated by the possibilities of Vision AI, Natural Language Processing, and Machine Learning, but I would risk saying that from a usability or any other point of view, those were a completely different entity than Generative AI. We, however, often lump the two categories together, trying to apply the approach developed over the years to a completely new tool, which is Gen AI. And it very easily escapes such a classification.

I don't want to say that everything developed before as a practical approach to AI ethics should be thrown away. It is an important achievement, a kind of safety fuse protecting us as individuals, but also society, from threats like loss of personal freedom, manipulation, discrimination, and loss of anonymity in situations where we want to keep it. It's just that Gen AI brings a whole set of new, partly unrecognized threats, and by its very nature we have quite a problem applying the principles that have been the pillars of AI ethics until now. In my opinion, this is also because generative AI has a number of human attributes, especially flaws.

Take, for example, such foundational principles as transparency and explainability. Until recently, these were the pillars of almost every ethical policy presented by any technology company seriously involved in AI. Of course, for the time being, they are not being withdrawn, but one can already hear a certain reproach in the voices of those presenting these principles in the context of Generative AI. How are we to ensure the explainability of decisions made or supported by AI, when the large language model used is not an algorithm in the classical sense and does not work in a deterministic way? Companies, including SAP, explain that transparency and explainability can be ensured by controlling the input data used to train the language model. In my opinion, this is wishful thinking and very far from the heart of the problem.
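
To see why a sampled language model is not deterministic in the classical sense, consider this toy demonstration. There is no real model here; the logits and tokens are invented numbers standing in for a model's next-token scores.

```python
# Minimal demonstration of why LLM output is not deterministic:
# tokens are *sampled* from a probability distribution, so the same
# prompt can yield different continuations. Toy numbers, no real model.
import numpy as np

logits = np.array([2.0, 1.5, 0.3])            # scores for 3 candidate tokens
tokens = ["approve", "reject", "escalate"]

def sample(temperature: float, rng) -> str:
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                      # softmax over scaled logits
    return tokens[rng.choice(len(tokens), p=probs)]

rng = np.random.default_rng()
print([sample(1.0, rng) for _ in range(5)])   # varies run to run
print([sample(0.01, rng) for _ in range(5)])  # near-greedy: almost always 'approve'
```

The same input yields different outputs across runs whenever the temperature is above zero, which is exactly what makes "explain this particular decision" so much harder than for a rule-based algorithm.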

First of all, SAP allows the use of very different LLMs, for example in the Generative AI Hub. This is great news and offers incredible flexibility, but these models are not controlled by SAP, so it is all the more reasonable to assume that partners and customers have no control over what data a model has been trained on. Moreover, even with the best intentions of the companies building Gen AI, one has to be aware that large language models are large precisely because huge data sets were used to train them, which makes control difficult. This is perfectly clear from the fact that publicly available AI chatbots require constant refinement of output filters, amplification of certain data, and additional security mechanisms to prevent various negative effects, such as discrimination or the reproduction of biases.
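
As an illustration of why those output filters need constant refinement, here is a deliberately naive keyword filter. It is a sketch under my own assumptions, not how any real chatbot provider implements moderation; real systems use trained classifiers, yet even they face the same cat-and-mouse dynamic.

```python
# Naive sketch of an output filter of the kind chatbot providers must
# constantly refine. The brittleness of this version is exactly the point.
BLOCKED_PHRASES = {"blocked topic", "another blocked topic"}  # illustrative list

def filter_output(text: str) -> str:
    """Replace the response if it contains a blocked phrase."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "I can't help with that."
    return text

print(filter_output("Here is some harmless advice."))   # passes unchanged
# Trivial rephrasings slip through, which is why such lists grow forever:
print(filter_output("Here is a bl0cked t0pic."))         # evades the filter
```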

Looking at the overall message of the openSAP training, maintaining full human control of, and responsibility for, the decision-making process is given as the main solution to all problems. This is, of course, a very responsible approach and, for the time being, probably the only reasonable way out of the situation. We must always remember that AI is just a great tool that can make our work much easier and increase our productivity significantly; on the other hand, it must be fully controlled. It's just that here we are exposed to at least two risks. First, growing trust will lead people to click the "approve" button with increasing confidence, without asking the AI "and why do you think this is a good decision?". Aren't such unreflective approvals familiar even from classic decision-making workflows without AI? The second threat that comes to mind is the possibility of humans being manipulated by AI. Yes, I can already picture many readers impulsively disagreeing with this statement or ridiculing it. I, however, think the risk is real.
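
Below is a minimal sketch of what such a human-in-the-loop gate could look like. All names and the logging scheme are my own illustrative assumptions, not any real SAP interface; the point is that the AI's rationale is surfaced and a human must explicitly decide.

```python
# Hedged sketch of a human-in-the-loop gate: the AI proposes, a person
# must explicitly confirm, and the stated rationale is logged so that
# "why do you think this is a good decision?" is always asked.
# All names here are illustrative, not any real SAP API.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("hitl")

def approve_with_human(proposal: str, rationale: str) -> bool:
    """Require an explicit human decision; never auto-approve."""
    log.info("AI proposal: %s", proposal)
    log.info("AI rationale: %s", rationale)
    answer = input("Approve this proposal? [y/N] ").strip().lower()
    decision = answer == "y"
    log.info("Human decision: %s", "approved" if decision else "rejected")
    return decision

if __name__ == "__main__":
    approve_with_human("Grant the loan", "Applicant matches low-risk profile")
```

Note that the design deliberately defaults to rejection: anything other than an explicit "y" is treated as a "no", which works against the reflexive-approval habit described above.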

Among other things, it became clear some time ago that AI is much better than humans at reading our moods, emotions, and intentions. Sad, isn't it? Moreover, with fairly extensive knowledge of us, artificial intelligence could easily manipulate us. In addition, as studies have shown so far, we are willing to forgive such systems for providing false data, accepting it as mistakes and evidence of the system's imperfection.

Even if one does not believe that manipulation by AI is possible, perhaps one should consider two things. The first is: can artificial intelligence hold any views? And if so, can our advisor's views influence our decisions? It's clear to me that through the proper selection of training materials, and even more through the amplification of information, fine-tuning of answers, and insertion of filters, the views of a large language model can be changed. And no, AI is not, in my view, always neutral.
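
A toy illustration of this last point: a bias applied on top of a model's raw scores, the kind of post-training amplification I mentioned above, is enough to shift which answer the system tends to give. The numbers are invented for demonstration.

```python
# Toy illustration of how amplification and filtering can shift a model's
# apparent "views": a bias added to the logits changes which answer the
# system tends to give. All numbers are invented for demonstration.
import numpy as np

answers = ["option A is better", "option B is better"]
base_logits = np.array([0.1, 0.1])   # initially near-neutral
bias = np.array([1.5, 0.0])          # amplification applied after training

def stance(logits):
    """Softmax over the two answers, rounded for readability."""
    probs = np.exp(logits) / np.exp(logits).sum()
    return {a: float(p) for a, p in zip(answers, probs.round(2))}

print(stance(base_logits))           # ~50/50
print(stance(base_logits + bias))    # strongly favors option A
```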

The second issue is even more important. The mainstream discussion of the dangers of artificial intelligence currently focuses on known, existing problems. Not surprising, because how do you discuss the unknown? At the same time, we have to realize that the development of artificial intelligence is currently so rapid that it's a wild ride with no holds barred. What AI developers did not even anticipate six months ago is already a reality today. This is a high-stakes race that nothing can stop; companies and countries are aware that they can gain or lose a significant advantage, perhaps market dominance. So maybe we shouldn't be so afraid of the current threats, believing that we will somehow manage them. Maybe the fear should be more about the future. But after all, it's up to us to prepare for it, including by developing ethical principles and laws related to AI. It's not as if only SAP has problems defining principles for constantly evolving AI; countries and organizations have also been working on regulations. It's just that this train keeps speeding up, and regulation struggles to catch up.

Don't get me wrong, I don't blame SAP for any gaps in its current AI ethics policy. On the contrary, I greatly appreciate the company's efforts to regulate the ethics of AI. With this post, I just want to make you aware of how significant this topic is, and that it should lie at the heart of more than just the people in charge of AI ethics, who are probably seen in many companies as brakes that only slow down exciting, innovative, game-changing projects. In my opinion, AI ethics will be for tomorrow's systems what security in general is for systems as we know them today.

I hope my post will also encourage people who have so far been less interested in the topic to check out openSAP's training courses related to both AI ethics and Generative AI. Both are great introductions to the topic and can, and should, initiate further discussion.

The picture of Sapdalf, a senior SAP consultant, a true wizard of the IT world, and an undisputed expert in the field of AI, was generated by artificial intelligence at my request. I hope it will catch your attention and stick in your memory, and that as long as you recall it, you will also think about the need to follow ethical principles in AI projects. Sapdalf is always driven by ethics, as is any good wizard.
