In this blog, we will discuss how to use SAP BTP services to create a personalized digital assistant that helps you achieve your business objectives. The blog also shows how to improve the digital assistant's ability to learn from current data. By separating the AI prompt from the main application, we have enabled flexible prompting: the prompt can be changed easily without repeatedly redeploying the application.
Our Business Context: The customer, running S/4HANA public cloud, had relied on an external application for end-user timesheet submissions. Given the critical role of timesheets in their revenue process, simplifying the submission method was a significant challenge. This led us to develop a GenAI-powered digital assistant for the customer. The assistant can analyze historical entries, suggest timesheet entries, and confirm user responses, thereby streamlining the submission process. With the GenAI-based assistant, users spend significantly less time on timesheet submissions.
Pre-Read: I'll make an effort to provide all of the references we used to create this digital assistant, along with information about how they relate to one another. Here are things to consider before we get into the specifics of each step.
- Before activating, please check your BTPEA/CPEA credits, as there are a few paid services you must subscribe to. The consumption cost of the AI models is covered in the last section.
- Select the AI model that you wish to employ. Below is a detailed explanation of the steps.
- Currently, the SAP BTP GenAI service doesn’t support training the model on your own data. However, we tried a different strategy and succeeded in letting the AI gain knowledge from the historical data via the conversation context. Note that this learning only lasts for one session: it is lost when the application is reloaded.
- To put the solution into practice, a business consultant and a CAP developer are needed.
So, let’s get started.
The architecture of the GenAI-enabled digital assistant is shown below. The SAP sample repository below has further information on this. https://github.com/SAP-samples/btp-cap-genai-rag/blob/main/docs/architecture/multitenant-architectur...

Fig 1: GenAI Architecture
Here, we developed an application for tracking and submitting timesheets. You may use business requirements to guide the design of your application. Let's go on to the steps.
- Subaccount setup. I won't go into the specifics of setting up a subaccount. The GenAI services (AI Core, AI Launchpad) are not accessible in every region or from every provider. Please use SAP Discovery Center to check availability (https://discovery-center.cloud.sap/estimator/?commercialModel=btpea). If your existing subaccount doesn't support the AI services, please create a new subaccount just for them; you can still develop the application in your original subaccount. Since the AI services are not supported by our development subaccount, this blog describes this dual-subaccount configuration.
- Services to be Procured. These are the services that need to be purchased. To help you activate or configure each service, I have included remarks next to it.
Each service may have a different price; however, you can check SAP Discovery Center for pricing information by using this link. https://discovery-center.cloud.sap/estimator/?commercialModel=btpea
- HANA DB setup
- For Context: each conversation's "context" is stored in HANA DB, because the historical conversation must be passed to the GenAI model with every message the digital assistant receives. A single table contains all of this history.
- Flexible Prompting: we also use a separate table to store the prompt text, which the CAP application reads at runtime. This ensures that any changes to the prompt don't require redeploying the entire application.
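As a sketch, the two tables could be modeled in CDS like this (the namespace, entity, and field names are illustrative, not taken from the actual project):

```cds
namespace timesheet.assistant;

// Stores the running conversation so it can be replayed to the model
entity ChatHistory {
  key ID    : UUID;
  sessionId : String(64);   // groups messages belonging to one session
  role      : String(16);   // 'user' or 'assistant'
  content   : LargeString;  // the message text
  createdAt : Timestamp;
}

// Holds the prompt text outside the code, so it can change without redeployment
entity PromptConfig {
  key ID     : UUID;
  promptText : LargeString;
}
```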
- SAP AI Core Service: this is the most important phase in the development of this application. The sub-steps are listed below.
- Activate the service “SAP AI Core”.
- Go to subaccount -> Instances and Subscriptions -> Instances -> open the SAP AI Core service.
- Click on “Create Service Keys” to create a new key for the connection.
- Click on the generated key and note the values of the parameters “clientid” and “clientsecret”. Keep these values handy, since you'll need them when you use a destination to link the application to AI Core.
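For orientation, the generated service key looks roughly like this (values are masked and the exact set of fields may vary by service version):

```json
{
  "clientid": "sb-xxxxxxxx|aicore!bXXXX",
  "clientsecret": "xxxxxxxxxxxxxxxx",
  "url": "https://<subaccount>.authentication.<region>.hana.ondemand.com",
  "serviceurls": {
    "AI_API_URL": "https://api.ai.prod.<region>.aws.ml.hana.ondemand.com"
  }
}
```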
- AI Launchpad setup – Configuration: Subscribe to the service “SAP AI Launchpad”.
- Traverse to subaccount -> Instances and Subscriptions -> Subscriptions -> select “SAP AI Launchpad” and open the application.
- Select ML operations, Configurations and then Create configuration.

- You must enter a configuration name and fill in the properties as seen below in the create configuration screen (this is what I used, but you can modify it to suit your needs).

- In the next screen, give a model name and version. You can either choose the default (which is what we did) or select another model. Currently supported models are gpt-35-turbo, gpt-4o, gpt-4, gpt-35-turbo-0125, gpt-35-turbo-16k, gpt-4-32k, gpt-4o-mini, text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large.

- You don’t need to select any input parameters. Click on next, review and save. Now, the configuration is created.
- You are free to decide the model you wish to use.
- AI Launchpad setup – Deployment: Once the configuration is created, the next step is to create a deployment for the config. To do this, below are the steps:
- Open the created configuration (SAP AI Launchpad -> Configurations).
- Click on create deployment.
- Select how long the deployment needs to stay active, then review and save.
- Scroll to the “Deployments” section, select the newly created deployment, and copy the URL up to …ondemand.com. Note down this URL.
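For illustration, a deployment URL has the shape below (the deployment ID is made up). Only the part up to …ondemand.com goes into the destination; the remainder is used later in the application configuration:

```
Full deployment URL (deployment ID is illustrative):
  https://api.ai.prod.us-east-1.aws.ml.hana.ondemand.com/v2/inference/deployments/d123456789

Part to note down (up to ...ondemand.com):
  https://api.ai.prod.us-east-1.aws.ml.hana.ondemand.com
```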

- Destinations setup (to AI and to backend): after completing the configuration, we must create destinations to link the application to AI Core and to the backend. All these destinations must be created in the subaccount where you plan to deploy your application (where Business Application Studio is activated).
- Connect to AI Core and Launchpad.
- Get the URL from the SAP AI Launchpad deployment step (step 6.d of the blog): https://api.ai.prod.us-east-1.aws.ml.hana.ondemand.com/
- Authentication: “OAuth2ClientCredentials”.
- Set client Id and client secret values from step 4.d.
- Provide the subaccount’s token service URL. This needs to be the token service URL of the subaccount where AI Core is activated.
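Collecting the values from the previous steps, the AI Core destination could look roughly like this (the destination name and token service URL are placeholders):

```
Name:              AI_CORE_DESTINATION   (any name; referenced later from the CAP app)
Type:              HTTP
URL:               https://api.ai.prod.us-east-1.aws.ml.hana.ondemand.com/
Proxy Type:        Internet
Authentication:    OAuth2ClientCredentials
Client ID:         <clientid from the AI Core service key>
Client Secret:     <clientsecret from the AI Core service key>
Token Service URL: https://<ai-subaccount>.authentication.<region>.hana.ondemand.com/oauth/token
```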

- To the backend: create a destination as you normally would for any backend connection.
- Design directions for source code development: please be aware that this is not an extract of the entire source code. You can find instructions on how to use the code in the SAP GitHub link here. https://github.com/SAP-samples/btp-ai-core-simplifying-timesheet-tracking-task-in-s-4hana-cloud?file...
Prompt Design and Enabling Data Analysis
For this scenario, we must enable AI Core to function as an assistant capable of carrying out the following tasks.
- Engage in productive user interaction.
- Examine data and produce insights that people can benefit from.
- Develop an API payload that can be used to upload data to the S4 HANA backend.
This is how we can define the prompt and API calls to accomplish the aforementioned goals.
- Define AI's role.
- Transfer historical data from S4 to the AI; the S4 destination is used for this.
- Direct the AI to perform data analysis, handle user feedback, and generate suggestions.
- After the user confirms the input data, generate the payload.
Below are the different types of text and how the prompt is composed.

| Prompt Section | Source |
| --- | --- |
| Static prompt | Separate HANA DB table 1, row 1 |
| Historic data from SAP (backend) | To be fetched from SAP |
First, retrieve the static prompt text from the database table. Next, use the second connection to retrieve the historic data from S4, and concatenate everything into a single string. For the AI to return the payload in the proper format, you must also include a sample payload in the prompt. Here is an example of a prompt.

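The assembly described above could be sketched in Node.js as follows. Note that this is illustrative: the table read and the S4 call are replaced by mocked data, and the field names and sample payload shape are assumptions, not the project's actual format.

```javascript
// Sketch: build the full prompt from (1) the static prompt text that would
// come from the HANA prompt table, (2) historic timesheet data that would
// come from the S4 destination, and (3) a sample payload so the AI knows
// the exact output format. All data below is mocked for illustration.

// In the real app this row comes from the HANA prompt table
const staticPrompt =
  "You are a timesheet assistant. Analyze the user's historical entries, " +
  "suggest timesheet lines, and ask for confirmation before submitting.";

// In the real app this comes from the S4 destination
const historicData = [
  { date: "2024-05-06", project: "P-100", hours: 8 },
  { date: "2024-05-07", project: "P-100", hours: 6 },
];

// Illustrative shape only -- use the format your S4 timesheet API expects
const samplePayload = {
  TimeSheetRecord: { date: "YYYY-MM-DD", project: "P-XXX", hours: 0 },
};

function buildPrompt(staticPrompt, historicData, samplePayload) {
  return [
    staticPrompt,
    "Historical entries:",
    JSON.stringify(historicData),
    "When the user confirms, answer ONLY with a payload in this exact format:",
    JSON.stringify(samplePayload),
  ].join("\n");
}

const fullPrompt = buildPrompt(staticPrompt, historicData, samplePayload);
console.log(fullPrompt);
```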
Please refer to the links below for best practices on prompt engineering.
- Things to be considered during the build phase
- AI Core connection within the application. Below are the steps to enable AI Core in the CAP application and make it available for use.
- In package.json, add the CAP LLM plugin for connecting with the AI ("cap-llm-plugin": "^1.3.1").
- Store environment variables in .cdsrc.json and connect to the AI destination created in step 7.a.
- Use the remaining part of the deployment URL (the part after .com, from step 6.d) in the CHAT_MODEL_DEPLOYMENT_URL variable.
- Also add the environment variables below:
- Resource group: "CHAT_MODEL_RESOURCE_GROUP": "default"
- API version: "CHAT_MODEL_API_VERSION": "2023-03-15-preview"
Now we are all set and can access AI Core from the CAP application.
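Taken together, the .cdsrc.json entries could take a shape like the one below. The destination-name key is an assumption about how the plugin binds to the destination (check the cap-llm-plugin documentation for your version), and the deployment ID is a placeholder:

```json
{
  "CHAT_MODEL_DESTINATION_NAME": "AI_CORE_DESTINATION",
  "CHAT_MODEL_DEPLOYMENT_URL": "/v2/inference/deployments/<deployment-id>",
  "CHAT_MODEL_RESOURCE_GROUP": "default",
  "CHAT_MODEL_API_VERSION": "2023-03-15-preview"
}
```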
- Make calls to AI Core from the CAP application (get data from the user, post it to the GenAI model, and get the response back). To call the AI Core API, please follow the steps below.
- Connect to “cap-llm-plugin”.
- Use methods from the LLM plugin. We used the getChatCompletion method to post data to AI Core.
- The inputs provided by the user need to be appended to the chat history and stored in a HANA table.
- The chat history plus the new message go into the LLM plugin.
- The process continues in a loop until the AI returns the final payload, which the application uses to call the final S4 API to post the data.
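The conversation loop described above can be sketched as follows. The LLM call is mocked here (in the real application it is the cap-llm-plugin getChatCompletion call, and the history lives in the HANA table), and the "PAYLOAD:" prefix convention is an assumption made for illustration: the prompt must instruct the AI to mark its final answer so the application can detect it.

```javascript
// Sketch of the conversation loop: append each user message to the history,
// send history + new message to the (mocked) LLM, and stop once the AI
// returns the final payload for the S4 API.

const chatHistory = []; // persisted in a HANA table in the real app

// Mocked LLM: asks two questions, then returns the final payload
const cannedReplies = [
  "Which project did you work on today?",
  "How many hours?",
  'PAYLOAD: {"project":"P-100","hours":8}',
];
let turn = 0;
function getChatCompletion(messages) {
  // real app: await the cap-llm-plugin call with history + new message
  return { content: cannedReplies[turn++] };
}

// Returns the parsed payload once the AI marks its final answer, else null
function extractPayload(reply) {
  return reply.startsWith("PAYLOAD: ")
    ? JSON.parse(reply.slice("PAYLOAD: ".length))
    : null;
}

function handleUserMessage(userText) {
  chatHistory.push({ role: "user", content: userText });          // store input
  const reply = getChatCompletion(chatHistory);                   // history + msg
  chatHistory.push({ role: "assistant", content: reply.content }); // store answer
  return extractPayload(reply.content);
}

let payload = null;
for (const text of ["Log my time", "Project P-100", "8 hours"]) {
  payload = handleUserMessage(text);
  if (payload) break; // final payload received -> call the S4 API with it
}
console.log(payload); // { project: 'P-100', hours: 8 }
```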
- The diagram below shows how data is passed between the different components during program execution.
With these steps and the guidance below, you should be able to develop an AI-enabled digital assistant that can process business transactions:
- Accept user input into the application.
- Store the user prompt/input into the chat history DB.
- Get the recent prompt + historical data from table.
- Pass this to the LLM Core.
- Get the output from LLM Core.
- Update the response into chat history DB.
- Display the LLM response to user.
- Additional Information
Pricing
While we need to choose the AI model to be used in our application, the cost of each model varies. Costing details are available in the Discovery Center. https://ai-core-calculator.cfapps.eu10.hana.ondemand.com/uimodule/index.html#/gen The estimate is calculated based on the AI tokens we consume. You can assume that one token is roughly 4 characters.
Use Case Identification
While we used AI here to create a digital assistant, you can use the service for any other requirements you may have. Look for scenarios that need natural language processing, summarization, insight generation, etc.