This blog post is the second in a two-part series introducing cds-kafka, an open-source CAP plugin (github repository) designed for seamless integration with Apache Kafka. Read the first post introducing cds-kafka and its capabilities here. In this second post, I focus on practical use cases for the plugin and demonstrate how to use it in real-world applications, both locally and on SAP BTP.
Before looking at some concrete use cases, let's consider where cds-kafka fits within the various event-driven architecture (EDA) offerings from SAP.
SAP provides a range of event-driven architecture solutions to cater to various enterprise needs for messaging:
While those solutions cover the need to handle event-driven architectures in the SAP ecosystem and connect to various SAP solutions, only AEM offers the possibility to connect to external event streams in an organization. If we narrow the cloud-related toolset to tools with a direct connection to Kafka (ignoring Kafka Connect as a low-level integration option), only the following are able to connect to and communicate with Apache Kafka:
While these offerings address diverse integration needs, they may not always align with specific requirements, such as reduced architectural complexity or direct Kafka access within an SAP cloud application. This is where cds-kafka comes into play.
cds-kafka, as a messaging plugin for the SAP Cloud Application Programming Model (CAP), is tailored for scenarios where direct integration with Kafka is required at the application level, without relying on additional SAP middleware layers. While cds-kafka is not intended to replace existing SAP solutions like SAP Integration Suite or Advanced Event Mesh, it provides significant value in specific scenarios, especially when flexibility and direct Kafka access are needed:
The following scenario describes a (simplified) real-world example of integrating cds-kafka with an organization’s systems for product data management. While the actual setup is far more complex and involves multiple BTP services and other applications, the focus here is on the Kafka integration.
An organization stores product data, such as attributes, classifications and configurations, in its S/4HANA system, while additional product-related details, including additional master data, categorizations, media and documents, are managed in a Product Information System (PIM). The product data is critical for various internal systems, such as a CAD platform to generate 3D product images, and external applications like online shops and websites. In the past, only manual or batch processes were in place to combine the data, which was slow, error-prone and resulted in huge amounts of duplicated data.
To streamline access, the organization aims to establish a standardized data model and OData service layer on SAP BTP, consolidating all product information in real-time.
The solution was designed to remove redundancy and manual processes and instead provide a real-time single source-of-truth data model and service layer. SAP HANA Smart Data Integration (SDI) has been introduced to replicate product data from S/4HANA into SAP HANA Cloud, forming the foundation for a CAP-based application. Meanwhile, the PIM system, which lacks SAP-native connectors but supports Kafka, pushes updates to Kafka in real-time (via Change Data Capture - CDC techniques). Here, cds-kafka bridges the gap by enabling the CAP application to directly consume these Kafka updates and integrate them with the existing S/4HANA data.
The CAP application consolidates and combines data from both sources into a unified, real-time model, exposing it via APIs. The result is a standardized data model on SAP BTP providing a single source of truth for all downstream systems, ensuring consistency and reducing redundancy while ensuring real-time synchronized updates: changes in the PIM system are reflected instantly through Kafka, while updates in S/4HANA are replicated seamlessly via SDI.
This approach showcases the power of CAP's core features together with cds-kafka in extending the S/4HANA system: providing real-time, hybrid integrations and unifying disparate systems into a cohesive data model and service layer on SAP BTP without unnecessary middleware or complexity.
While the previous use case is intriguing, it is too complex to demonstrate here (and the details are also under NDA 😉). Instead, we’ll explore and combine two practical examples that showcase cds-kafka in action:
These examples demonstrate how to integrate Kafka via cds-kafka in a hybrid, multi-app event-driven architecture.
For this example, we use the CAP reference repository on GitHub, SAP-samples/cloud-cap-samples, which is an excellent playground for exploring CAP features. The repository also includes examples of messaging, as described in the official CAP documentation. We will replace an existing messaging service in the sample project with cds-kafka to demonstrate how seamlessly it integrates with CAP.
We simply clone the repository to have it available on our local machine.
git clone https://github.com/SAP-samples/cloud-cap-samples.git
We will connect to a cloud-based Kafka instance in a second step. But first, let's get a fully local setup running by using Kafka in Docker. For this, we place a docker-compose.yaml inside the project. Besides Kafka itself, we also run Zookeeper and Kafka-UI, which gives us a nice user interface to inspect the data inside Kafka:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181

  kafka:
    hostname: kafka
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
      - 9997:9997
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9997
      KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka0 -Dcom.sun.management.jmxremote.rmi.port=9997

  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    depends_on:
      - kafka
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: true
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
      KAFKA_CLUSTERS_0_METRICS_PORT: 9997
With the file in place, just fire up the Docker containers:
docker compose up
The messaging example in the cloud-cap-samples repository involves two applications working together via asynchronous events:
Both apps need to be configured in the same way, as both need access to Kafka.
First, we install cds-kafka in each folder:
npm i cds-kafka
Then we replace the messaging configuration in the package.json files to use cds-kafka:
...
"messaging": {
  "[development]": {
    "kind": "kafka-messaging",
    "credentials": {
      "brokers": [
        "localhost:29092"
      ]
    },
    "consumer": {
      "groupId": "cap-bookstore"
    }
  }
},
...
"kafka-messaging": {
  "impl": "cds-kafka"
}
We connect to the local Kafka broker on port 29092, the port exposed by the Docker container. In addition to the default configuration, we explicitly specify the groupId for the receiving application. Without this setting, cds-kafka would assign a dynamic groupId, which is not recommended in scenarios where precise control over consumer groups and offsets is required.
Since we are not modifying any source code, the Kafka topic name will automatically correspond to the event name.
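To make this default mapping concrete, here is a minimal sketch (illustrative only, not part of the sample code) of emitting the event through CAP's generic messaging service; with no custom topic configured, the message lands on a Kafka topic named exactly like the fully qualified event:

const cds = require('@sap/cds')

// minimal sketch: emit an event via the generic messaging service once the app is served
cds.once('served', async () => {
  const messaging = await cds.connect.to('messaging')
  // without a custom topic configuration, cds-kafka publishes this event to the
  // Kafka topic "ReviewsService.reviewed" (topic name = event name)
  await messaging.emit('ReviewsService.reviewed', { subject: '251', count: 3, rating: 4.33 })
})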
Now we fire up both applications (from the root folder, in two separate terminal instances):
cds watch reviews
cds watch bookstore
In the terminal, we can see that cds-kafka is now in action.
To verify that everything is working, we open the UI of the reviews application and submit a review (e.g. for book 251).
As soon as a review is submitted, the event is triggered by the reviews application and sent to Kafka. Switching to the terminal of the bookstore application, you will see that the event has been successfully transmitted and processed by cds-kafka. If you open the bookshop UI, you will also see the processed data.
We’ve successfully replaced the default messaging service with cds-kafka. This demonstrates how easy and efficient it is to integrate cds-kafka into any CAP application, without touching any application coding.
With the Kafka-UI docker container running, we can directly inspect the data stored in Apache Kafka by navigating to http://localhost:8080/. By accessing the relevant topic, we can view the message, including its data and the headers added by cds-kafka.
Since we’re not using CloudEvents in this example, the headers currently only include the correlation ID and some tenant information—a reflection of cds-kafka’s built-in support for multi-tenancy. This makes it even more versatile for handling complex scenarios in multi-tenant environments.
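As a small illustration (not part of the sample code), a receiver can read those headers directly from the incoming message, for example to log the correlation ID or the tenant:

const cds = require('@sap/cds')

// minimal sketch: inspect the headers cds-kafka attaches to an incoming message
cds.once('served', async () => {
  const messaging = await cds.connect.to('messaging')
  messaging.on('ReviewsService.reviewed', async (msg) => {
    console.log(msg.headers['x-correlation-id'])  // correlation ID propagated by CAP
    console.log(msg.headers['x-sap-cap-tenant'])  // tenant information for multi-tenant scenarios
  })
})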
In the final step, we replace the local Kafka configuration with a cloud-based solution to demonstrate that cds-kafka can seamlessly connect not only to local Kafka instances but to any Kafka service.
Confluent Cloud is a fully managed Kafka service offering enterprise-grade scalability, reliability, and additional features such as schema registry, connectors, and monitoring tools. It simplifies the management of Kafka clusters and is widely used for both development and production environments.
Confluent offers a trial that allows you to quickly set up a simple Kafka cluster. Once the cluster is created, you’ll receive the necessary connection information, such as the bootstrap server (broker) address and an API key and secret, which serve as the SASL username and password.
To connect both CAP applications to Confluent, we update the configuration in package.json as shown in the example below. Replace the placeholder values with your cluster’s connection details:
...
"messaging": {
  "[development]": {
    "kind": "kafka-messaging",
    "credentials": {
      "clientId": "<YOUR CLIENT ID>",
      "brokers": [
        "<YOUR BROKER>.germanywestcentral.azure.confluent.cloud:9092"
      ],
      "ssl": true,
      "sasl": {
        "mechanism": "PLAIN",
        "username": "<YOUR USERNAME>",
        "password": "<YOUR PASSWORD>"
      }
    },
    "consumer": {
      "groupId": "cap-bookstore"
    }
  }
},
...
"kafka-messaging": {
  "impl": "cds-kafka"
}
One important aspect to keep in mind: Confluent Cloud does not allow automatic topic creation. Therefore, you must manually create a topic named ReviewsService.reviewed before running your application. This ensures that the application has a valid destination for the events.
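If you prefer to create the topic from a script instead of the Confluent Cloud console, a small Node.js sketch using the kafkajs client could look like the following (the placeholders mirror the connection details above; this is purely illustrative and not part of cds-kafka itself):

// illustrative sketch: create the required topic programmatically with kafkajs
const { Kafka } = require('kafkajs')

async function createTopic() {
  const kafka = new Kafka({
    clientId: '<YOUR CLIENT ID>',
    brokers: ['<YOUR BROKER>.germanywestcentral.azure.confluent.cloud:9092'],
    ssl: true,
    sasl: { mechanism: 'plain', username: '<YOUR USERNAME>', password: '<YOUR PASSWORD>' },
  })
  const admin = kafka.admin()
  await admin.connect()
  await admin.createTopics({ topics: [{ topic: 'ReviewsService.reviewed', numPartitions: 1 }] })
  await admin.disconnect()
}

createTopic()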
Once the connection configuration is updated and the events are fired, you’ll see the messages appear in Confluent Cloud. Again, we have not changed any code, only the configuration, and the integration works as expected.
As a platform that fully leverages the power of Apache Kafka, Confluent supports far more advanced use cases than the simple scenario demonstrated here. However, this example highlights the ease of integration with CAP using cds-kafka, while also showcasing its ability to connect to more complex Kafka architectures effortlessly.
Confluent provides robust support for schema registries, allowing schemas to be associated with specific topics to enforce data contracts and maintain consistency. While cds-kafka does not currently include native support for schema registries, this could be a valuable feature to explore in future versions.
While running CAP applications locally is helpful for testing and development, it’s not representative of real-world scenarios. Our production environment of choice is SAP BTP, and since Kafka is not available as a service on SAP BTP, we’ll create a simple CAP app that connects to the previously created Kafka instance, subscribes to all topics, and stores the incoming messages in a database. This app will then be deployed to SAP BTP, demonstrating how cds-kafka enables seamless integration with external Kafka services from within the SAP BTP Cloud Foundry environment.
To create and deploy the app, just follow these steps:
We create a new empty CAP project app via:
cds init kafka-consumer-app
Of course, we need to add cds-kafka to the project:
npm i cds-kafka
To store the messages, we need a data model. Create a file model.cds in the db folder with the following contents:
namespace kafka.consumer.db;

entity Messages {
  key ID  : UUID;
  event   : String;
  topic   : String;
  headers : LargeString;
  data    : LargeString;
}
To expose the messages via OData, we create a simple service. Copy the following contents into srv/consumer-service.cds:
using { kafka.consumer.db as db } from '../db/model';

service ConsumerService @(requires: 'authenticated-user') {
  entity Messages as projection on db.Messages;
}
We want to store all incoming messages in the Messages table. For this, we simply create a subscription and fill the respective fields whenever a new message arrives. We use the default CAP event properties here, but also some of the custom headers provided by cds-kafka that carry additional metadata.
Put the following content in srv/consumer-service.js:
const cds = require('@sap/cds');

module.exports = async () => {
  const { Messages } = cds.entities('kafka.consumer.db')
  const messaging = await cds.connect.to('messaging')

  messaging.on("*", async (message) => {
    await INSERT.into(Messages).entries({
      event: message.event,
      topic: message.headers['x-sap-cap-kafka-topic'],
      data: JSON.stringify(message.data),
      headers: JSON.stringify(message.headers),
    })
  })
}
This is basically the complete business logic; the rest is pure configuration.
Of course, cds-kafka also requires some basic configuration. Add the following snippet to the package.json:
"cds": {
"requires": {
"messaging": {
"kind": "kafka-messaging"
},
"consumer": {
"groupId": "cap-on-btp"
},
"kafka-messaging": {
"impl": "cds-kafka"
}
}
}
We are not providing any credentials here to connect to Kafka, since we don't want to hard-code them within the project. Instead, we will rely on a service binding for the app, which will automatically provide the connection information for the kafka-messaging service.
CAP has a nice set of helper tools to automatically generate configuration fragments for an application. First, we use the CLI task to add an mta.yaml file, which will contain everything required for the SAP BTP deployment.
cds add mta
CAP also provides additional tasks to add other important configuration fragments to the mta.yaml or package.json. Since we want to use SAP HANA Cloud to store the messages, and we need the XSUAA service and the approuter for user authentication and authorization, we also add these to our local project:
cds add hana
cds add xsuaa
cds add approuter
Finally, we also need to add a user-provided service instance to provide the credentials for the Kafka instance running on Confluent Cloud. For this scenario, we will put the credentials directly into the mta.yaml file, as this is the easiest way to describe here. To make this more secure and also to support multiple stages, the user-provided service instance configuration should be provided differently in real production environments (CI/CD pipeline, mtaext-files, Terraform provider, etc.).
Place the following in the mta.yaml file to create the service instance and also to create the service binding to the CAP application:
modules:
  - name: kafka-consumer-app-srv
    ...
    requires:
      ...
      - name: kafka-instance

resources:
  - name: kafka-instance
    type: org.cloudfoundry.user-provided-service
    parameters:
      service-tags:
        - kafka-messaging
      config:
        clientId: <YOUR CLIENT ID>
        brokers:
          - <YOUR BROKER>.azure.confluent.cloud:9092
        ssl: true
        sasl:
          mechanism: PLAIN
          username: <YOUR USERNAME>
          password: <YOUR PASSWORD>
Providing the service-tags is important, as CAP automatically scans the VCAP_SERVICES environment variable and tries to match the services provided there against the services configured in the CAP configuration (package.json). Our service name is kafka-messaging, and CAP automatically puts all the config options from the binding into the service credentials.
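To illustrate the effect, here is a short sketch (assuming the app is bound to the user-provided service tagged with kafka-messaging): at runtime, the credentials from the binding are merged into the effective CAP configuration and can be inspected like this:

const cds = require('@sap/cds')

// sketch: the broker and SASL settings from the bound service end up in the
// effective configuration of the kafka-messaging service, without being hard-coded
console.log(cds.env.requires['kafka-messaging'].credentials)
// -> { clientId: '...', brokers: [ '...' ], ssl: true, sasl: { ... } }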
With everything set up and in place, we just need to build and deploy the application. While logged in to Cloud Foundry on the command line, we simply call:
mbt build
cf deploy mta_archives/kafka-consumer-app_1.0.0.mtar
When the app is running on SAP BTP, you can simply create another review in the local CAP reviews app. The event will then again be published to Confluent Cloud, but now it will also be delivered to our CAP app running on SAP BTP.
If we then call the provided OData endpoint (https://<yourapp>.hana.ondemand.com/odata/v4/consumer/Messages), we see that a message has been created in the database containing all the event information, including the additional headers described in the first blog post:
{
  "@odata.context": "$metadata#Messages",
  "value": [
    {
      "ID": "736b45b6-ed68-4860-bd52-e10028240571",
      "data": "{\"subject\":\"251\",\"count\":3,\"rating\":4.33}",
      "event": "ReviewsService.reviewed",
      "headers": "{\"x-correlation-id\":\"5442c56b-86cf-4b95-bea2-3d512162ebc1\",\"x-sap-cap-tenant\":\"t1\",\"x-sap-cap-kafka-partition\":0,\"x-sap-cap-kafka-offset\":\"1\",\"x-sap-cap-kafka-timestamp:\":\"1736779408848\",\"x-sap-cap-event\":\"*\",\"x-sap-cap-kafka-topic\":\"ReviewsService.reviewed\"}",
      "topic": "ReviewsService.reviewed"
    }
  ]
}
We now have a running application on SAP BTP Cloud Foundry that is directly connected to a Kafka instance and receives messages. While this again is a very basic example, it showcases how easy it is to set up a CAP application using cds-kafka and a user-provided service to create a hybrid, multi-app event-driven architecture.
In this second blog post, we explored practical use cases and real-world applications of cds-kafka. Through the use cases, I demonstrated how cds-kafka enables CAP applications to connect directly to Kafka, bypassing the need for additional middleware layers like SAP Integration Suite or Advanced Event Mesh in scenarios where simplicity and direct access are required.
I also showcased two key examples:
These examples highlight the versatility of cds-kafka, simplifying integration, preserving CAP’s abstractions, and enabling hybrid event-driven solutions that seamlessly connect SAP and non-SAP ecosystems.
If you’re already using Kafka and looking for an easy-to-use SAP integration option, or if you simply want to experiment with CAP and Kafka, cds-kafka is open-source and free to use. Have questions or need assistance with cds-kafka? Feel free to reach out—I’m happy to help! 😊