Updates

  • 15.03.23 – Added information for the newly available and integrated GPT-4 model.

  • 06.03.23 – Added information for the newly available and integrated GPT-3.5 model.

  • 27.02.23 – Added a link to part 3.


This is the second post of a small series of blog posts in which I'll delve into the conceptual and technical details of building a ChatGPT-like chat app using the SAP Cloud Application Programming Model, SAPUI5 and the OpenAI API. In the first post I provided some impressions of what our chat app looks like (including a pirate personality, arrr 🏴‍☠️) and dove into the important parts of ChatGPT and the OpenAI API for some background information.

In this second post I will cover more technical topics like the repository setup and some aspects of the architecture and implementation of the SAP Cloud Application Programming Model-based backend. Although the application itself isn't incredibly complex, this post may contain some interesting things, since I'll share concepts adopted from our larger projects at p36 that go beyond the usual simplified tutorial complexity (shoutout to my great colleagues Daniel Weyer and danielkawkab for the very valuable discussions and input).
In the third and final post, I will explore the TypeScript-based SAPUI5 frontend and also discuss some patterns extracted from real-world projects to keep the UI part well organized.

I might be using simplified code snippets in some listings to put an emphasis on the ideas, and they may not always be syntactically correct. In case you want to look at the real code, you can check it out; it's open source. The repository is hosted in the public p36 GitHub account and also includes detailed instructions on how to set things up for local development and how to deploy the app to SAP BTP Cloud Foundry.

=> Public GitHub Repository

The architecture


Our messenger at its core is a standard SAP CAP application providing the database and OData service layer, plus a SAPUI5 frontend consuming the backend services. The Node.js CAP backend is also responsible for talking to the OpenAI API to retrieve a list of selectable OpenAI models, as well as a completion as an answer to our question (please read the first post if you don't know what a completion is).


Architecture including required services on SAP BTP Cloud Foundry


The README of the project shows how to set up everything in a local (development) environment and how to deploy the app, including all required services, to the SAP Business Technology Platform (all services are available in a Trial or Free Tier account).
Since CAP is a standard Node.js application, the project can also be deployed to other platforms and environments. And in case there is no SAP HANA database available, the database can be replaced with PostgreSQL by using cds-pg and cds-dbm (for more details on this, check out my old post introducing those libraries).

Project structure


Before we go into the details of the technical implementation, a few words on the structure of our repository, which differs from the standard layout of a CAP project initialized via cds init. Our folder structure looks like this (simplified):

README.md
package.json
pnpm-workspace.yaml
mta.yaml
xs-security.json

packages/approuter
packages/server
packages/ui


By default, CAP creates a project in which it claims the root package.json, while other modules or apps are meant to be stored in the apps folder. Although it is generally good practice to include different application modules that share the same development lifecycle in one monorepo, having CAP block the root can be a hindrance. To overcome this issue, we chose to create a true monorepo and use specialized tooling to manage it.

A monorepo handled by pnpm


While there are other great libraries available in the Node.js world (e.g. yarn, lerna), we are using pnpm in our project, and with it we are changing the core layout of the application. pnpm is a package manager that acts as a replacement for the standard npm and does two things exceptionally well:

  • Its clever and disk-space-efficient way of handling node_modules (more info here)

  • Its native support for monorepos


To get monorepo support, we need to provide a pnpm-workspace.yaml file at the root of the project. This file describes the locations of the different submodules. In our GPT chat project, all those submodules live side by side within /packages:
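
A minimal sketch of what this file can look like, assuming the three packages shown above (the real file in the repository may differ slightly):

```yaml
# pnpm-workspace.yaml – tells pnpm which folders are workspace packages
packages:
  - "packages/*"
```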

| Module/Package | Path |
| --- | --- |
| CAP | /packages/server |
| SAPUI5 | /packages/ui |
| Approuter | /packages/approuter |

The advantages of using pnpm in such a structure:

  • The root folder really only contains stuff that is relevant for all submodules/packages and for central deployment:

    • A README.md to describe the whole project

    • A package.json file only containing a bunch of build and deploy scripts

    • An mta.yaml for SAP BTP deployment



  • Each building block of the application is placed in its own subfolder within /packages, not interfering with other parts

  • To connect those loose parts during development, pnpm has some clever commands available. Some examples:

    • Install all dependencies for all submodules with one command:
      pnpm install

    • Run a script task to deploy the database to a local SQLite database (which is only present in the server package and will therefore only be executed there):
      pnpm -r deploy:local

    • Start up the whole application in development mode, including TypeScript transpiling, hot reloading, etc. for the UI and the server in parallel:
      pnpm --parallel start:dev




From our experience, having such a layout and pnpm at hand makes it pretty easy to also onboard other application modules (another UI, a library, a service, a database module or even something completely different) into the repository.
One word of warning though:
This setup will probably only work in a local development environment with tools like Visual Studio Code. Since the SAP Business Application Studio is very opinionated, there is a risk that such a folder structure will break some specialized functions within BAS.

The server – SAP CAP with full TypeScript support


The backend module of our chat application is a CAP application/service (Node.js), containing:

  • server/db – the definition of the data model to store the chat data

  • server/srv – the definition of the OData service layer

  • server/src – the business logic to handle the conversation with the OpenAI API, implemented in TypeScript (we'll be wieldin' some fancy weapons like dependency injection 🏴‍☠️)


Let's look into some of the details.

The data model (server/db)


The data model itself is really not that complex and only contains three entities: Chats, Messages, and Personalities. Because we want to have stateful conversations and need all the messages of a chat at hand to provide a sophisticated prompt for the OpenAI completions and chat/completions endpoints (see previous post), we store the complete history of a chat in the corresponding tables.
entity Personalities : cuid, managed {
  name         : String;
  instructions : String;
}

entity Chats : cuid, managed {
  topic       : String;
  model       : String;
  personality : Association to one Personalities;
  messages    : Composition of many Messages
                  on messages.chat = $self;
}

entity Messages : cuid, managed {
  text   : LargeString;
  model  : String;
  sender : User;
  chat   : Association to one Chats;
}

The Personalities entity currently cannot be maintained via the UI from within the app. Its contents are therefore provided via a CSV file (server/db/data/p36.capui5gpt.chat-Personalities.csv).
By adding a personality to a conversation, we can provide instructions for the GPT model to respond in a certain way. Currently only three personalities are included, but they can easily be extended by adding data to the CSV file:

  • Developer Assistant – AI describes things from the technical perspective of a software developer. When AI provides code, AI uses a human readable markdown format with the following pattern:

    ```language
    code
    ```

    Example:
    ```javascript
    console.log("Hello World")
    ```

  • Pirate – AI is always answering like a pirate would do.

  • Poet – AI is always answering pattern-oriented, rhythmic, protective, internally motivated, creative and curious, optimistic and self-actualizing and answering in rhymes.
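
Under the hood, each personality is just a row in that CSV file. An illustrative, shortened sketch of its structure (the ID values are made up and the instructions are truncated; the real file content differs):

```csv
ID;name;instructions
11111111-1111-1111-1111-111111111111;Developer Assistant;AI describes things from the technical perspective of a software developer. [...]
22222222-2222-2222-2222-222222222222;Pirate;AI is always answering like a pirate would do.
33333333-3333-3333-3333-333333333333;Poet;AI is always answering pattern-oriented [...] and answering in rhymes.
```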

It may sound funny to give the chatbot a pirate personality (Avast ye, 'tis the truth, me bucko! 🏴‍☠️), but it is also very impressive how the GPT models handle this in a conversation. And it's even more impressive how more complex instructions (like the Developer Assistant) have a big impact on the format and quality of the responses you will get.
In the first blog post I pointed out that chat-like prompts are only one use case for GPT completions (we'll stick to them in this blog series) and that there is a whole field starting to grow around Prompt Engineering (just google it).

The OData service (server/srv)


The OData service is also not that complex. We basically just expose all three entities (Personalities is read-only) and add two OData functions:

  • getModels – to return a list of all existing models from the OpenAI API

  • getCompletion – to get a completion from the OpenAI API


The service definition in server/srv/index.cds also contains some typings and enums, which are mandatory to describe the API... and which we will be reusing (thanks to TypeScript) in the code later: in the CAP backend, but also in the SAPUI5 frontend... 🤩
@requires: 'human'
service ChatService {

  // Our exposed entities
  entity Chats         as projection on chat.Chats;
  entity Messages      as projection on chat.Messages;
  @readonly
  entity Personalities as projection on chat.Personalities;

  // OData functions
  function getModels() returns array of Model;
  function getCompletion(model : String, personality : String, chat : String) returns Completion;

  // Some required typings for the function's return values
  type Sender : String enum {
    AI    = 'AI';
    HUMAN = 'Human';
  }

  type Model {
    id : String;
  }

  type Completion {
    message : LargeString;
  }

}

Since there is nothing special in this part of the service layer, the one interesting aspect might be the usage of CAP instance-based authorizations, which bind chats to users. With this annotation, only those chats that the logged-in user has created are exposed via OData. This way we avoid dealing with manual filters when reading the data from the UI (and also avoid opening the door to hijacking other people's chats).
annotate Chats with @(restrict: [
  {
    grant: 'WRITE',
    to   : 'human'
  },
  {
    grant: [
      'READ',
      'UPDATE',
      'DELETE'
    ],
    to   : 'human',
    where: 'createdBy = $user'
  }
]);

And yeah, there is also a check for the user to have a role called human, which totally makes sense in an app dealing with AIs. 😉

The business logic (server/src)


One of the (many) cool things about CAP is that, thanks to CDS, you don't have to write any code (outside of CDS) to get a service exposing a data model up and running. Only in cases where you want to provide your own business logic do you need to jump in with custom code. CAP (Node.js) then provides different ways to implement services, supporting different coding styles (e.g. subclasses of cds.Service, plain functions, etc.), which are in line with the flexible and dynamic nature of CAP (e.g. CQL, etc.).

But our (subjective) experience is that in larger projects you need to replace at least parts of that great flexibility with a more static, but (type-)safe approach. And while you may start with little logic inside the handlers, over time you will be challenged with either large and complex handler classes/modules, or you need a plan to extract some parts and build a more sophisticated software architecture (without CAP giving clear guidance on how to do this).
Since we very much prefer using TypeScript in our Node.js projects, and the support for TypeScript in CAP is growing but definitely has some missing pieces (e.g. entity types), we strive for a better solution to organize our business logic in our GPT chat application. And there is already one available.

CAP with advanced TypeScript support (cds2types, cds-routing-handlers and typeDI)


cds2types and cds-routing-handlers are two Node.js modules that have been around for quite a while (see the introduction blog post from 2020) and really bring TypeScript support for CAP to the next level. And with the addition of typeDI as a general-purpose dependency injection (DI) library, the whole concept of DI can be applied to the application.
I won't describe all concepts and features in detail (please read the documentation of the libraries), but the core aspects and benefits of using those libraries are:

  • Full TypeScript support for handler classes and the whole business-logic layer, meaning type safety and development comfort during dev time

  • Fully automated generation of TypeScript typings for the whole data model and service layer based on the cds definitions (that's what cds2types does)

  • A pre-defined, but still very flexible architecture for handler classes (by cds-routing-handlers)

  • The full power of dependency injection and TypeScript decorators (by typeDI)
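
To illustrate the DI part with a minimal, self-contained typeDI sketch (not code from the chat app; it assumes the usual reflect-metadata/decorator setup in tsconfig.json):

```typescript
import "reflect-metadata";
import { Container, Inject, Service } from "typedi";

@Service()
class GreetingService {
  public greet(name: string): string {
    return `Ahoy, ${name}!`;
  }
}

@Service()
class GreetingHandler {
  // typeDI resolves this property from its container when the class is instantiated
  @Inject()
  private greetingService: GreetingService;

  public handle(): string {
    return this.greetingService.greet("Pirate");
  }
}

// No manual `new` required; the container builds the object graph for us
const handler = Container.get(GreetingHandler);
console.log(handler.handle()); // => "Ahoy, Pirate!"
```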


For our ChatGPT-like app, we only have to implement the two functions defined in the service (getModels and getCompletion), and one could argue that the usage of those patterns seems a little over-engineered. But many experienced developers would probably agree that every piece of software is cursed to become more complex over time, so I tend to follow the wise words of a fellow pirate AI friend:
We must steer clear o' the treacherous waters that lead us to the infernal abyss known as Developer Hell in the days to come! 🏴‍☠️

Our handler and service class architecture


So, to have a solid architecture at hand right from the beginning, this is our approach using those libraries:

  • Have a src/server.ts file that bootstraps the application and wires everything together (no usage of cds run); a simplified sketch of such a bootstrap follows after this list.

  • Have the types generated into src/types whenever we start up in dev mode (pnpm start:dev). And since we want the OData service-related types to also be available in SAPUI5, we provide them there as well (more on the usage in SAPUI5 will come in blog post 3).

  • Have the handlers live in src/handlers/. Since the project is not that complex, we only have one handler. In more complex applications, we would split things up for different entities and/or functions.

  • We take some inspiration from Domain-Driven Design to apply a software architecture to our application. Instead of putting all the code in one handler file, we separate the different functionalities into a set of classes that also introduce more layers to our application.
    By convention, our OData handlers on the Application Layer do not contain any business logic other than delegating to service classes in the Domain Layer. We also try to keep CQL out of the handlers and split database/remote-service access up into either the service classes or, in more complex projects, repositories on the Infrastructure Layer.
    By introducing this kind of architecture early on, we have a fixed pattern for separating the logic, and it will be way easier to extend the application in the future.
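
Here is a simplified sketch of what such a src/server.ts bootstrap can look like with cds-routing-handlers, based on the library's documented usage; the real server.ts in the repository wires up more, and the handler path and port are assumptions:

```typescript
import "reflect-metadata";
import cds from "@sap/cds";
import express from "express";
import { createCombinedHandler } from "cds-routing-handlers";

async function bootstrap(): Promise<void> {
  const app = express();

  // Collect all handler classes from src/handlers (path is an assumption)
  const handler = createCombinedHandler({
    handler: [__dirname + "/handlers/**/*.js"],
  });

  // Serve all services defined in the cds model and attach our handlers
  await cds.serve("all").in(app).with((srv) => handler(srv));

  app.listen(3001); // port is an assumption
}

bootstrap();
```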


The following diagram shows this concept and potential extensions to our application.

 


Our class diagram for the CAP backend


Our OData handler class ChatServiceHandler implements the getCompletion and getModels functions, and it makes use of different service classes, each covering its own domain:

  • PersonalitiesRepository – Fetches the personality from the database to get the instructions

  • MessagesRepository – Fetches the messages for a chat from the database

  • ChatBuilder – Builds a chat representation in the correct format for GPT-3 and for GPT-3.5/GPT-4

  • OpenAIService – Communicates with the OpenAI API


Let's look at some code:

ChatServiceHandler – The Implementation of the OData functions


Putting together all the modules and patterns described above, the implementation of the ChatServiceHandler class looks like this:
import { Request } from "@sap/cds/apis/services";
import { Func, Handler, Param, Req } from "cds-routing-handlers";
import { Inject, Service } from "typedi";
import { FuncGetCompletionReturn, FuncGetModelsReturn } from "../types/ChatService";
// the paths of the service classes are assumed here; see the repository for the real structure
import ChatBuilder from "../services/ChatBuilder";
import OpenAIService from "../services/OpenAIService";

@Handler()
@Service()
export default class ChatServiceHandler {
  @Inject()
  private openAIService: OpenAIService;

  @Inject()
  private chatBuilder: ChatBuilder;

  @Func("getModels")
  public async getModels(@Req() req: Request): Promise<FuncGetModelsReturn> {
    const models = await this.openAIService.readModels().catch((error) => {
      req.notify(500, error.message);
    });
    return <FuncGetModelsReturn>models;
  }

  @Func("getCompletion")
  public async getCompletion(
    @Param("model") model: string,
    @Param("personality") personalityId: string,
    @Param("chat") chatId: string,
    @Req() req: Request
  ): Promise<FuncGetCompletionReturn> {
    let response: string;

    if (model.startsWith("gpt-3.5") || model.startsWith("gpt-4")) {
      const messages = await this.chatBuilder.getChatAsMessages(chatId, personalityId);
      response = await this.openAIService.createChatCompletion(messages, model);
    } else {
      const prompt = await this.chatBuilder.getChatAsPrompt(chatId, personalityId);
      response = await this.openAIService.createCompletion(prompt, model);
    }

    return <FuncGetCompletionReturn>{
      message: response,
    };
  }
}

Thanks to TypeScript and cds2types, we are able to import the automatically generated cds typings and can easily make sure that the defined contracts are fulfilled.
We also have dependency injection and decorators at hand (@Inject, @Param and @Req) that automatically inject parameters and instances of the required service classes, without the hassle of creating and managing those ourselves.
And we use the decorators provided by cds-routing-handlers to register our class as a handler for our ChatService (@Handler) and our methods as the implementation of the OData functions (@Func).

The actual implementation logic for retrieving the completion is then quite simple: since the message format of the newer GPT-3.5 and GPT-4 models differs from GPT-3, we ask the chatBuilder instance to either build a plain string representation of the given chat or a more structured one. We then call the corresponding method on the openAIService instance to communicate with the OpenAI API and retrieve the completion.
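
To make the difference tangible, here is what the two representations of a tiny pirate chat could look like (illustrative values, not actual builder output):

```typescript
// getChatAsPrompt() – a single string for the classic v1/completions endpoint
const prompt =
  "AI is always answering like a pirate would do.\n" +
  "Human: How are you?\n" +
  "AI: Arrr, I be fine, matey!\n" +
  "Human: Where be the treasure?\n" +
  "AI:";

// getChatAsMessages() – structured input for v1/chat/completions
const messages = [
  { role: "system", content: "AI is always answering like a pirate would do." },
  { role: "user", content: "How are you?" },
  { role: "assistant", content: "Arrr, I be fine, matey!" },
  { role: "user", content: "Where be the treasure?" },
];
```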

ChatBuilder – Building the chat representations


The ChatBuilder is responsible for fetching all the information required to build the chat in two different formats: as a string (for GPT-3) and in a pre-defined JSON format (GPT-3.5, GPT-4). The builder class does not use CQL directly to retrieve the data, but relies on the two injected repositories. The corresponding functions to build the chats just map the received data to the external format.
import { ChatCompletionRequestMessage, ChatCompletionRequestMessageRoleEnum } from "openai";
import { Service, Inject } from "typedi";
import MessagesRespository from "../repositories/MessagesRepository";
import PersonalitiesRespository from "../repositories/PersonalitiesRespository";
import { Sender } from "../types/p36.capui5gpt.chat";

@Service()
export default class ChatBuilder {
  @Inject()
  private messagesRepository: MessagesRespository;

  @Inject()
  private personalityRepository: PersonalitiesRespository;

  public async getChatAsPrompt(chatId: string, personalityId?: string): Promise<string> {
    const instructions = await this.readInstructions(personalityId);
    const chat = (await this.messagesRepository.getMessages(chatId))
      .map((message) => {
        const sender = message.sender === Sender.AI ? Sender.AI : Sender.HUMAN;
        const plainMessage = message.text.trim().replace(/\n/g, " ");

        return `${sender}: ${plainMessage}`;
      })
      .join("\n");

    return `${instructions}${chat}\nAI:`;
  }

  public async getChatAsMessages(chatId: string, personalityId?: string): Promise<ChatCompletionRequestMessage[]> {
    const instructions = await this.readInstructions(personalityId);
    const messages = (await this.messagesRepository.getMessages(chatId)).map((message) => {
      return {
        role:
          message.sender === Sender.AI
            ? ChatCompletionRequestMessageRoleEnum.Assistant
            : ChatCompletionRequestMessageRoleEnum.User,
        content: message.text.trim().replace(/\n/g, " "),
      };
    });

    return [{ role: ChatCompletionRequestMessageRoleEnum.System, content: instructions }, ...messages];
  }

  private async readInstructions(personalityId?: string): Promise<string> {
    const personality = await this.personalityRepository.getPersonality(<string>personalityId);
    return personality?.instructions || "";
  }
}

OpenAIService – Talking to the OpenAI API


The actual communication with the OpenAI API is encapsulated in its own domain class OpenAIService. One interesting aspect of this class is that we use property injection to inject a configuration, which includes the API key and some properties to tweak the completion call. Instead of directly accessing things like process.env, we read the configuration while booting the server via cds.env.for() (details in the source code) and provide it as injectable properties via the typeDI container.
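
A minimal sketch of that wiring during bootstrap; the configuration section name is an assumption (check the repository's server.ts for the real code), but the injection token "openai-config" matches the @Inject("openai-config") usage shown below:

```typescript
import cds from "@sap/cds";
import { Container } from "typedi";

// Read the OpenAI-related settings from the CAP configuration
// (cds.env merges package.json, .cdsrc.json and environment variables)
const openAIConfig = cds.env.for("openai"); // section name is an assumption

// Register the config under the token that OpenAIService injects
Container.set("openai-config", openAIConfig);
```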

The communication with the OpenAI API is rather simple and delegated to the official openai NPM package, since it provides wrapper functions for all three endpoints: v1/completions, v1/chat/completions and v1/models.
import { ChatCompletionRequestMessage, Configuration, OpenAIApi } from "openai";
import { Service, Inject } from "typedi";

// OpenAIConfig and CompletionAttributes are project-defined types
// (see the repository's src folder for their actual location)

@Service()
export default class OpenAIService {
  @Inject("openai-config")
  config: OpenAIConfig;

  private apiInstance: OpenAIApi;

  get api(): OpenAIApi {
    this.apiInstance ??= new OpenAIApi(
      new Configuration({
        apiKey: this.config.apiKey,
      })
    );

    return this.apiInstance;
  }

  public async readModels(): Promise<{ id: string }[]> {
    return this.api.listModels().then((response) =>
      response.data.data.map((model) => {
        return {
          id: model.id,
        };
      })
    );
  }

  public async createChatCompletion(
    messages: ChatCompletionRequestMessage[],
    model: string = "gpt-3.5-turbo"
  ): Promise<string> {
    const attributes = this.config.completionAttributes || {};
    const response = await this.api
      .createChatCompletion({
        ...this.mergeAttributesWithDefaults(attributes),
        model: model,
        messages: messages,
      })
      .then((response) => {
        return response.data.choices[0].message.content;
      })
      .catch((error) => {
        return `The OpenAI API sadly returned an error! (Error: ${error.message})`;
      });
    return response;
  }

  public async createCompletion(prompt: string, model: string = "text-davinci-003"): Promise<string> {
    const attributes = this.config.completionAttributes || {};
    const response = await this.api
      .createCompletion({
        ...this.mergeAttributesWithDefaults(attributes),
        model: model,
        prompt: prompt,
        stop: ["\nHuman:", "\nAI:"],
      })
      .then((response) => {
        return response.data.choices[0].text;
      })
      .catch((error) => {
        return `The OpenAI API sadly returned an error! (Error: ${error.message})`;
      });
    return response;
  }

  private mergeAttributesWithDefaults(attributes: CompletionAttributes): CompletionAttributes {
    return {
      max_tokens: attributes.max_tokens || 1200,
      temperature: attributes.temperature || 0.8,
      top_p: attributes.top_p || 1,
      frequency_penalty: attributes.frequency_penalty || 0,
      presence_penalty: attributes.presence_penalty || 0.6,
    };
  }
}

The final OData service


When everything is wired up correctly and the server is started, we have our own ChatService running. You can easily test everything by sending requests to the endpoints (make sure to use the dummy users for authentication locally, and OAuth tokens when deployed to Cloud Foundry):
### Get a list of all chats including their messages 
GET http://localhost:3001/odata/Chats?$expand=messages
Authorization: Basic pirate:ahoy

### Create a chat
POST http://localhost:3001/odata/Chats
Authorization: Basic pirate:ahoy
Content-Type: application/json

{ "topic": "An example chat", "model": "text-davinci-003" }

### Call the function to get the list of OpenAI models
GET http://localhost:3001/odata/getModels()
Authorization: Basic pirate:ahoy

### Call the function to get a completion from the OpenAI API
GET http://localhost:3001/odata/getCompletion(model='text-davinci-003',chat='f480fa4c-c31d-48bd-b76e-ac738ddb15ca',personality='')
Authorization: Basic pirate:ahoy
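
For the last call, a successful response could look roughly like this (illustrative payload, not captured output; the exact envelope depends on the OData adapter):

```json
{
  "@odata.context": "$metadata#ChatService.Completion",
  "message": "Arrr, ahoy matey! What can this ol' sea dog do fer ye today?"
}
```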

Closing comment of part 2


In this second part of the small blog post series, we took our time to look deeper into the structure of the ChatGPT-like app's git repository and took a deep dive into the CAP backend. While the actual implementation of the app is not really that complex, we spent a good amount of time discussing the hows and also the whys. And while I really think that many of the applied concepts are of great value, it's important to emphasize that this is just one opinion, and using those patterns may also come with disadvantages and challenges (BAS support, loss of short-term CAP flexibility). But I am really eager to hear your thoughts on this.

In the final chapter of this blog series, we will look at the SAPUI5 frontend. And while the UI part is also not that complex from a functional perspective, I will dig deeper into some of the best practices we (@p36) value when building UI5 apps. And in case you have already experienced great pain dealing with bloated controllers (> 1,000 lines of code) or are tired of using Hungarian notation (sValue) to bring in type safety (😬), this one might be very interesting for you.

Fair winds and following seas to ye, me hearty! We'll be hoisting the anchor and settin' sail to rendezvous with ye again in part 3! 🏴‍☠️