`cds init`. Our folder structure looks like this (simplified):

```text
README.md
package.json
pnpm-workspace.yaml
mta.yaml
xs-security.json
packages/approuter
packages/server
packages/ui
```
A plain CAP project expects to own the root `package.json`, while other modules or apps are meant to be stored in the `apps` folder. Although it is generally good practice to include different application modules that share the same development lifecycle in one monorepo, having CAP block the root can be a hindrance. To overcome this issue, we chose to create a true monorepo and use specialized tooling to manage it: `pnpm`, which also handles `node_modules` in its own efficient way (more info here). The setup requires a `pnpm-workspace.yaml` file at the root of the project. This file describes the locations of the different submodules. In our GPT chat project, all submodules are equally aligned within `/packages`:

| Module/Package | Path |
|---|---|
| CAP | `/packages/server` |
| SAPUI5 | `/packages/ui` |
| Approuter | `/packages/approuter` |
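A minimal `pnpm-workspace.yaml` matching this layout could look like the following sketch (not necessarily the project's exact file):

```yaml
# Declare every folder below packages/ as a workspace member
packages:
  - "packages/*"
```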
Only a few files remain at the root level when using `pnpm` in such a structure:

- A `README.md` to describe the whole project
- A `package.json` file only containing a bunch of build and deploy scripts
- The `mta.yaml` for SAP BTP deployment
- Everything else lives inside `/packages`, not interfering with other parts

To manage the workspace, `pnpm` has some clever commands available. Some examples:

Install the dependencies of all submodules at once:

```shell
pnpm install
```

Run a script recursively in all packages that define it. The following script is only defined in the `server` package and will therefore only be executed there:

```shell
pnpm -r deploy:local
```

Start up the whole application in development mode, including TypeScript transpiling, hot reloading, etc. for the UI and the server in parallel:

```shell
pnpm --parallel start:dev
```
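The root `package.json` then contains little more than scripts delegating to the workspace. A hypothetical sketch (package and script names assumed for illustration):

```json
{
  "name": "cap-ui5-gpt-chat",
  "private": true,
  "scripts": {
    "start:dev": "pnpm --parallel start:dev",
    "build": "pnpm -r build",
    "deploy:local": "pnpm -r deploy:local"
  }
}
```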
Having `pnpm` at hand makes it pretty easy to also onboard other application modules (another UI, library, service, database module or even something completely different) into the repository.

One word of warning though: this setup will probably only work in a local development environment with tools like Visual Studio Code. Since the SAP Business Application Studio is very opinionated, there is a risk that such a folder structure will break some specialized functions within BAS.
The `server` package is divided into three parts:

- `server/db` – the definition of the data model to store the chat data
- `server/srv` – the definition of the OData service layer
- `server/src` – the business logic to handle the conversation with the OpenAI API, implemented in TypeScript (we'll be wieldin' some fancy weapons like dependency injection 🏴‍☠️)

The data model (`server/db`)

The data model consists of three entities: `Chats`, `Messages`, and `Personalities`. Because we want to have stateful conversations and we need all the messages of a chat at hand to provide a sophisticated prompt for the OpenAI `completions` and `chat/completions` endpoints (see previous post), we store the complete history of a chat in the corresponding tables.

```cds
entity Personalities : cuid, managed {
  name         : String;
  instructions : String;
}

entity Chats : cuid, managed {
  topic       : String;
  model       : String;
  personality : Association to one Personalities;
  messages    : Composition of many Messages
                  on messages.chat = $self;
}

entity Messages : cuid, managed {
  text   : LargeString;
  model  : String;
  sender : User;
  chat   : Association to one Chats;
}
```

The `Personalities` entity currently cannot be maintained via the UI from within the app. The contents are therefore provided via a CSV file (`server/db/data/p36.capui5gpt.chat-Personalities.csv`
).

| Name | Instructions |
|---|---|
| Developer Assistant | AI describes things from the technical perspective of a software developer. When AI provides code, AI uses a human readable markdown format with the following pattern: ```language code ``` Example: ```javascript console.log("Hello World") ``` |
| Pirate | AI is always answering like a pirate would do. |
| Poet | AI is always answering pattern-oriented, rhythmic, protective, internally motivated, creative and curious, optimistic and self-actualizing and answering in rhymes. |
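Because `Personalities` includes the `cuid` aspect, the CSV file also needs an `ID` column. A hypothetical excerpt (the UUID and delimiter are made up for illustration; CAP accepts both `;` and `,`):

```csv
ID;name;instructions
c2d1a9a4-0d2f-4b9e-9a3e-6f1b2c3d4e5f;Pirate;AI is always answering like a pirate would do.
```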
The service layer (`server/srv`)

In the service layer we expose our entities (`Personalities` is read-only) and add two OData functions:

- `getModels` – to return a list of all existing models from the OpenAI API
- `getCompletion` – to get a completion from the OpenAI API

`server/srv/index.cds` also contains some typings and enums, which are mandatory to describe the API... and which we will be reusing (thanks to TypeScript) in the coding later. In the CAP backend, but also in the UI5 frontend... 🤩

```cds
@requires: 'human'
service ChatService {
  // Our exposed entities
  entity Chats         as projection on chat.Chats;
  entity Messages      as projection on chat.Messages;
  @readonly
  entity Personalities as projection on chat.Personalities;

  // OData functions
  function getModels() returns array of Model;
  function getCompletion(model : String, personality : String, chat : String) returns Completion;

  // Some required typings for the function's return values
  type Sender : String enum {
    AI    = 'AI';
    HUMAN = 'Human';
  }

  type Model {
    id : String;
  }

  type Completion {
    message : LargeString;
  }
}

annotate Chats with @(restrict: [
  {
    grant: 'WRITE',
    to   : 'human'
  },
  {
    grant: ['READ', 'UPDATE', 'DELETE'],
    to   : 'human',
    where: 'createdBy = $user'
  }
]);
```
The business logic (`server/src`)

Thanks to `cds`, you don't have to write any code (outside of `cds`) to get a service exposing a data model up and running. Only in cases where you want to provide your own business logic do you need to jump in with custom code. CAP (Node.js) then provides different ways to implement services, supporting different coding styles (e.g. subclasses of `cds.Service`, plain functions, etc.), which are in line with the flexible and dynamic nature of CAP (e.g. CQL, etc.).

We prefer a more statically typed, object-oriented style and therefore bring in some community packages: TypeScript interfaces are generated from the `cds` definitions (that's what `cds2types` does), handler classes are registered via decorators (`cds-routing-handlers`), and dependencies are injected via `typeDI`.

Our service only provides two functions (`getModels` and `getCompletion`), and one could argue that the usage of those patterns may seem a little over-engineered. But many experienced developers would probably agree that every piece of software is cursed to become more complex over time, so I tend to follow the wise words of a fellow pirate AI friend:

We have a `src/server.ts` file that bootstraps the application and wires everything together (no usage of `cds run`).

The `cds2types` typings are generated into `src/types` whenever we start up in dev mode (`pnpm start:dev`). And since we want to have OData service-related types also available in SAPUI5, we provide them there as well (more on the usage in SAPUI5 will come in blog post 3).

Our handler classes live in `src/handlers/`. Since the project is not that complex, we only have one handler. In more complex applications, we would split things up for different entities and/or functions. The `ChatServiceHandler` implements the `getCompletion` and `getModels` functions and makes use of different service classes, each covering its own domain:

- `PersonalitiesRepository` – fetches the personality from the database to get the instructions
- `MessagesRepository` – fetches the messages for a chat from the database
- `ChatBuilder` – builds a chat representation in the correct format for the GPT-3 and GPT-3.5 prompts
- `OpenAIService` – communicates with the OpenAI API
- `ChatServiceHandler` – the implementation of the OData functions

The `ChatServiceHandler`
class looks like this:

```typescript
import { Request } from "@sap/cds/apis/services";
import { Func, Handler, Param, Req } from "cds-routing-handlers";
import { Inject, Service } from "typedi";
import { FuncGetCompletionReturn, FuncGetModelsReturn } from "../types/ChatService";
// import paths for the service classes are assumed here
import ChatBuilder from "../services/ChatBuilder";
import OpenAIService from "../services/OpenAIService";

@Handler()
@Service()
export default class ChatServiceHandler {
    @Inject()
    private openAIService: OpenAIService;

    @Inject()
    private chatBuilder: ChatBuilder;

    @Func("getModels")
    public async getModels(@Req() req: Request): Promise<FuncGetModelsReturn> {
        const models = await this.openAIService.readModels().catch((error) => {
            req.notify(500, error.message);
        });
        return <FuncGetModelsReturn>models;
    }

    @Func("getCompletion")
    public async getCompletion(
        @Param("model") model: string,
        @Param("personality") personalityId: string,
        @Param("chat") chatId: string,
        @Req() req: Request
    ): Promise<FuncGetCompletionReturn> {
        let response: string;
        if (model.startsWith("gpt-3.5") || model.startsWith("gpt-4")) {
            const messages = await this.chatBuilder.getChatAsMessages(chatId, personalityId);
            response = await this.openAIService.createChatCompletion(messages, model);
        } else {
            const prompt = await this.chatBuilder.getChatAsPrompt(chatId, personalityId);
            response = await this.openAIService.createCompletion(prompt, model);
        }
        return <FuncGetCompletionReturn>{
            message: response,
        };
    }
}
```
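The branching on the model name can be isolated into a tiny helper for illustration. This is a hypothetical sketch (the function does not exist in the project); it only shows the routing rule: chat-capable models go to `chat/completions`, everything else to the classic `completions` endpoint.

```typescript
// Hypothetical helper illustrating the branching in getCompletion()
type Endpoint = "chat/completions" | "completions";

function endpointForModel(model: string): Endpoint {
  // gpt-3.5* and gpt-4* only support the chat-based endpoint
  return model.startsWith("gpt-3.5") || model.startsWith("gpt-4")
    ? "chat/completions"
    : "completions";
}
```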
Some things to point out:

- Thanks to `cds2types`, we are able to import the automatically generated `cds` typings and can easily make sure that the defined contracts are fulfilled.
- We use decorators (`@Inject`, `@Param` and `@Req`) that automatically inject parameters and instances of the required service classes, without the hassle of creating and managing those by ourselves.
- We use `cds-routing-handlers` to register our class as a handler for our ChatService (`@Handler`) and the functions as the implementation of the OData functions (`@Func`).
- Depending on the model, we use the `chatBuilder` instance to either build a string representation of the given chat or to build a more structured one. We then call the corresponding method on the `openAIService` instance to communicate with the OpenAI API and retrieve the completion.

ChatBuilder – Building the chat representations

The `ChatBuilder`
is responsible for fetching all the information required to build the chat in two different formats: as a string (for GPT-3) and in a pre-defined JSON format (for GPT-3.5 and GPT-4). The builder class does not directly use CQL to retrieve the data, but uses the two injected repositories. The corresponding functions to build the chats simply map the received data to the external format.

```typescript
import { ChatCompletionRequestMessage, ChatCompletionRequestMessageRoleEnum } from "openai";
import { Service, Inject } from "typedi";
import MessagesRespository from "../repositories/MessagesRepository";
import PersonalitiesRespository from "../repositories/PersonalitiesRespository";
import { Sender } from "../types/p36.capui5gpt.chat";

@Service()
export default class ChatBuilder {
    @Inject()
    private messagesRepository: MessagesRespository;

    @Inject()
    private personalityRepository: PersonalitiesRespository;

    public async getChatAsPrompt(chatId: string, personalityId?: string): Promise<string> {
        const instructions = await this.readInstructions(personalityId);
        const chat = (await this.messagesRepository.getMessages(chatId))
            .map((message) => {
                const sender = message.sender === Sender.AI ? Sender.AI : Sender.HUMAN;
                const plainMessage = message.text.trim().replace(/\n/g, " ");
                return `${sender}: ${plainMessage}`;
            })
            .join("\n");
        return `${instructions}${chat}\nAI:`;
    }

    public async getChatAsMessages(chatId: string, personalityId?: string): Promise<ChatCompletionRequestMessage[]> {
        const instructions = await this.readInstructions(personalityId);
        const messages = (await this.messagesRepository.getMessages(chatId)).map((message) => {
            return {
                role:
                    message.sender === Sender.AI
                        ? ChatCompletionRequestMessageRoleEnum.Assistant
                        : ChatCompletionRequestMessageRoleEnum.User,
                content: message.text.trim().replace(/\n/g, " "),
            };
        });
        return [{ role: ChatCompletionRequestMessageRoleEnum.System, content: instructions }, ...messages];
    }

    private async readInstructions(personalityId?: string): Promise<string> {
        const personality = await this.personalityRepository.getPersonality(<string>personalityId);
        return personality?.instructions || "";
    }
}
```
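To make the two formats tangible, here is a self-contained sketch of what the builder produces. The sample messages and instructions are made up; the mapping logic mirrors the class above:

```typescript
// Standalone illustration of the two chat representations built by ChatBuilder
type Message = { sender: "AI" | "Human"; text: string };

const instructions = "AI is always answering like a pirate would do.\n";
const messages: Message[] = [
  { sender: "Human", text: "Ahoy, who are you?" },
  { sender: "AI", text: "A humble pirate AI, matey!" },
];

// GPT-3 (completions): one flat prompt string that ends with "AI:"
const prompt =
  instructions +
  messages.map((m) => `${m.sender}: ${m.text.trim().replace(/\n/g, " ")}`).join("\n") +
  "\nAI:";

// GPT-3.5/GPT-4 (chat/completions): structured messages with roles
const chatMessages = [
  { role: "system", content: instructions },
  ...messages.map((m) => ({
    role: m.sender === "AI" ? "assistant" : "user",
    content: m.text.trim().replace(/\n/g, " "),
  })),
];
```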
OpenAIService – Talking to the OpenAI API

The communication with the OpenAI API is encapsulated in the `OpenAIService`. One interesting aspect of this class is that we use property injection to inject a configuration, which includes the API key and some properties to tweak the completion call. Instead of directly accessing things like `process.env`, we read the configuration while booting the server via `cds.env.for()` (details in the source code) and provide it as injectable properties via the `typeDI` container.

For the communication we use the `openai` NPM package, since it provides wrapper functions for all three endpoints we need: the `v1/completions`, `v1/chat/completions` and `v1/models`
endpoints.

```typescript
import { ChatCompletionRequestMessage, Configuration, OpenAIApi } from "openai";
import { Service, Inject } from "typedi";
// OpenAIConfig and CompletionAttributes are project-specific types (see the source code)

@Service()
export default class OpenAIService {
    @Inject("openai-config")
    config: OpenAIConfig;

    private apiInstance: OpenAIApi;

    get api(): OpenAIApi {
        // create the API client lazily on first access
        this.apiInstance ??= new OpenAIApi(
            new Configuration({
                apiKey: this.config.apiKey,
            })
        );
        return this.apiInstance;
    }

    public async readModels(): Promise<{ id: string }[]> {
        return this.api.listModels().then((response) =>
            response.data.data.map((model) => {
                return {
                    id: model.id,
                };
            })
        );
    }

    public async createChatCompletion(
        messages: ChatCompletionRequestMessage[],
        model: string = "gpt-3.5-turbo"
    ): Promise<string> {
        const attributes = this.config.completionAttributes || {};
        const response = await this.api
            .createChatCompletion({
                ...this.mergeAttributesWithDefaults(attributes),
                model: model,
                messages: messages,
            })
            .then((response) => {
                return response.data.choices[0].message.content;
            })
            .catch((error) => {
                return `The OpenAI API sadly returned an error! (Error: ${error.message})`;
            });
        return response;
    }

    public async createCompletion(prompt: string, model: string = "text-davinci-003"): Promise<string> {
        const attributes = this.config.completionAttributes || {};
        const response = await this.api
            .createCompletion({
                ...this.mergeAttributesWithDefaults(attributes),
                model: model,
                prompt: prompt,
                stop: ["\nHuman:", "\nAI:"],
            })
            .then((response) => {
                return response.data.choices[0].text;
            })
            .catch((error) => {
                return `The OpenAI API sadly returned an error! (Error: ${error.message})`;
            });
        return response;
    }

    private mergeAttributesWithDefaults(attributes: CompletionAttributes): CompletionAttributes {
        return {
            max_tokens: attributes.max_tokens || 1200,
            temperature: attributes.temperature || 0.8,
            top_p: attributes.top_p || 1,
            frequency_penalty: attributes.frequency_penalty || 0,
            presence_penalty: attributes.presence_penalty || 0.6,
        };
    }
}
```
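The `api` getter above lazily creates the `OpenAIApi` instance on first access via the nullish coalescing assignment (`??=`). The pattern in isolation looks like this; `ExpensiveClient` is a made-up stand-in for `OpenAIApi`:

```typescript
// Minimal sketch of the lazy-initialization pattern used in OpenAIService
class ExpensiveClient {
  static instances = 0;
  constructor() {
    // count constructions to show the client is only built once
    ExpensiveClient.instances++;
  }
}

class ServiceWithLazyClient {
  private clientInstance?: ExpensiveClient;

  get client(): ExpensiveClient {
    // assigns only when clientInstance is null or undefined
    this.clientInstance ??= new ExpensiveClient();
    return this.clientInstance;
  }
}
```

The upside over constructor initialization is that the (potentially misconfigured or expensive) client is only built when it is actually needed, after the injected configuration is available.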
The running service can then be tested with plain HTTP requests, for example:

```http
### Get a list of all chats including their messages
GET http://localhost:3001/odata/Chats?$expand=messages
Authorization: Basic pirate:ahoy

### Create a chat
POST http://localhost:3001/odata/Chats
Authorization: Basic pirate:ahoy
Content-Type: application/json

{ "topic": "An example chat", "model": "text-davinci-003" }

### Call the function to get the list of OpenAI models
GET http://localhost:3001/odata/getModels()
Authorization: Basic pirate:ahoy

### Call the function to get a completion from the OpenAI API
GET http://localhost:3001/odata/getCompletion(model='text-davinci-003',chat='f480fa4c-c31d-48bd-b76e-ac738ddb15ca',personality='')
Authorization: Basic pirate:ahoy
```
sValue
) to bring in type-safety (😬), this one might be very interesting for you.