Technology Blogs by SAP
alexwilde
Advisor

In October 2023, our Product & Strategy Group at SAP Signavio hosted the Generative AI Days, an SAP-internal two-day workshop to up-skill SAP Signavio colleagues on the groundbreaking new possibilities of generative AI in engineering, UX design, and product management.

With the rise of capable Large Language Models (LLMs) in 2022/2023, many traditional software engineers (like me) were suddenly given the means to bake AI into their products or pursue completely new ideas. What used to be the domain of data scientists and machine learning engineers has now crossed the chasm into the mainstream and become available to a larger engineering audience. 

As the leader of an engineering team that had the opportunity to learn (and fail at) building LLM applications throughout 2023, I wanted to share the lessons we’ve gathered from our endeavors.

[Video: session recording]

The first case study dissects a project called PINSChat. The idea of PINSChat was to provide a chatbot to SAP Signavio Process Insights (PINS) users that could answer questions about improving a specific process. 

At SAP Signavio Process Insights, a team of experts has already built a knowledge base of articles about process improvements. The problem, however, was that customers weren’t reading them. 

PINSChat’s core value proposition was to cut down the time it takes a customer to get from question to answer while working inside SAP Signavio Process Insights. It would achieve this by using a capable foundation model in combination with a technique called Retrieval Augmented Generation (RAG) to extend the model’s knowledge of process improvements. 
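To make the RAG idea concrete, here is a minimal sketch of the pattern: retrieve the knowledge-base articles most relevant to the user's question, then prepend them as context to the prompt sent to the model. All names are illustrative, and the toy bag-of-words "embedding" is a stand-in for a real embedding model — this is not the PINSChat implementation.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, articles, k=1):
    """Return the k knowledge-base articles most similar to the question."""
    q = embed(question)
    return sorted(articles, key=lambda art: cosine(q, embed(art)), reverse=True)[:k]

def build_prompt(question, articles):
    """Augment the user's question with retrieved context before calling the LLM."""
    context = "\n\n".join(retrieve(question, articles))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

In a production system the bag-of-words similarity would be replaced by a vector database over real embeddings, but the shape of the pipeline — embed, retrieve, augment, generate — stays the same.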

In our quest to build PINSChat, we learned a few lessons that I share in my session: 

  • the limitations of self-hosted open-source models
  • how we solved retrieval issues in our pipeline
  • questions you need to ask *before* building a RAG system 

In my second case study, I demonstrate how the value of existing applications can be increased by utilizing LLMs. 

One superpower of generative AI is the capability to (semi-)automate time-consuming tasks inside applications previously carried out by humans. As such, it provides an opportunity to make the user’s job-to-be-done significantly more productive and seamless. 

However, building reliable software systems on top of unreliable, non-deterministic LLMs comes with its own challenges. In my session, I show ways to work around these issues. 
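One generic workaround for non-determinism (a common pattern, not necessarily the one from my session) is to validate the model's output against an expected schema and retry on failure, with a deterministic fallback. A minimal sketch, where `call_llm` is a hypothetical placeholder for a real LLM API call:

```python
import json

def call_llm(prompt):
    """Hypothetical placeholder for a real LLM API call returning a string."""
    raise NotImplementedError

def extract_tags(text, llm=call_llm, max_retries=3):
    """Ask the LLM for a JSON array of tags, validate the shape, retry on bad output."""
    prompt = "Return ONLY a JSON array of short tag strings for this text:\n" + text
    for _ in range(max_retries):
        raw = llm(prompt)
        try:
            tags = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: ask again
        if isinstance(tags, list) and all(isinstance(t, str) for t in tags):
            return tags  # output passed the schema check
    return []  # deterministic fallback so the application never crashes on bad output
```

The key design choice is that the unreliable component is wrapped behind a function with a guaranteed return type, so the rest of the application can treat it like any other deterministic dependency.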

If you are interested in working on AI-driven products in a small, autonomous team of mid- to senior-level engineers, please get in touch in the comments below.
