Amazon Simple Queue Service, or SQS for short, is one of the most widely used services in AWS implementations. SQS is serverless by nature and was one of the first shared services offered by Amazon, way back in 2006. It comes with unique features in terms of scalability & resilience and can be used from all AWS compute services.
SQS use cases range from scheduling tasks and processing message bursts asynchronously to integrating task workloads.
Using SQS in an SAP-dominated landscape, however, poses some challenges: traditional enterprise systems often exchange documents, bulked messages, or large messages (e.g. IDocs or attachments), but SQS messages have a payload size limit of 256 KB.
This article shows how to send and receive messages larger than the 256 KB payload limit with SAP PO and KaTe's AWS adapter by implementing the claim check pattern.
Applying the claim check pattern to AWS SQS
One solution to work around the 256 KB message payload size limit is the "claim check pattern", also familiar to many SAP PO developers, e.g. for overcoming BPM runtime message size limits.
The payload is stored separately and only a key is passed through the messaging layer. The receiver can then retrieve the large message or document attachment with that key on the other end and process it.
With SQS it is an AWS best practice to solve this problem the same way: save the message contents to S3 storage and pass only a reference to them within the SQS message. The receiver can then fetch the referenced object directly from S3 storage (AWS implements this in the SQS Extended Client Library for API clients).
As big payload sizes are not unusual in the SAP PO integration context (think of huge IDocs, B2B documents, or the like), the adapter implements this best practice pattern for such scenarios via a configuration option.
Setting up queue, storage & IAM credentials
In order to get started, an SQS queue, an S3 storage bucket, and an IAM access user need to be set up. This can be done via the AWS console (or via API/CLI).
First, an SQS queue needs to be created (here "BigMessageQueue"). The queue doesn't need any special settings.
Second, an S3 bucket needs to be created in the S3 console (here "big-message-from-queue"). The bucket doesn't need any special settings either (default: non-public access).
Now an IAM user needs to be set up with the according policies. As the pattern requires access to both SQS and S3 storage, the user must be created with access rights to both services.
The necessary rights in the IAM policy expressions include:
for sending messages (PO receiver channels):
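A minimal policy sketch for the sending side (the queue and bucket names follow the examples above; the wildcarded region/account in the ARNs and the exact set of actions are assumptions and would be narrowed down in a real setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sqs:SendMessage", "sqs:GetQueueUrl"],
      "Resource": "arn:aws:sqs:*:*:BigMessageQueue"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::big-message-from-queue/*"
    }
  ]
}
```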
for receiving messages (PO sender channels):
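A corresponding sketch for the receiving side, which additionally needs to delete processed queue messages and S3 objects (same assumptions as above regarding ARNs and action set):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:*:*:BigMessageQueue"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::big-message-from-queue/*"
    }
  ]
}
```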
The access key and secret of the IAM user can be obtained from the "Security credentials" tab and are then ready to be used in the SAP PO channels with the KaTe AWS adapter.
Sending a message
In order to send a message, a SAP PO receiver channel needs to be configured with the IAM credentials and the SQS queue name (here "BigMessageQueue").
If the claim check pattern is to be used, the checkbox "Use messages bigger than 256 Kb payload size" needs to be ticked, which allows defining an S3 bucket for storage (here "big-message-from-queue").
If a message with a payload larger than 256 KB is sent, the adapter uploads the payload to the S3 bucket and sends only a small message with a pointer to the S3 storage object to the SQS queue.
Here's a sample of S3 object "4ebdfa20-3ca1-43ed-bcca-898405a2b445" linked in S3 bucket "big-message-from-queue" to be passed as message pointer:
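The exact pointer layout is adapter-internal; the sketch below follows the convention of the AWS SQS Extended Client Library, using the object key and bucket from above:

```json
[
  "software.amazon.payloadoffloading.PayloadS3Pointer",
  {
    "s3BucketName": "big-message-from-queue",
    "s3Key": "4ebdfa20-3ca1-43ed-bcca-898405a2b445"
  }
]
```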
If the message is smaller than 256 KB, the adapter simply sends the payload as the message body to the SQS queue without using the claim check.
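The send-side decision can be sketched as follows. This is a simplified illustration, not the adapter's actual code: the pointer format follows the AWS SQS Extended Client Library convention, and `sqs`/`s3` are assumed to expose boto3-style `send_message`/`put_object` calls.

```python
import json
import uuid

# 256 KB limit on an SQS message body
MAX_SQS_PAYLOAD = 256 * 1024

def send_with_claim_check(sqs, s3, queue_url, bucket, payload: bytes):
    """Send the payload directly if it fits into an SQS message,
    otherwise offload it to S3 and send only a pointer (claim check)."""
    if len(payload) <= MAX_SQS_PAYLOAD:
        # small message: plain SQS send, no claim check needed
        return sqs.send_message(QueueUrl=queue_url,
                                MessageBody=payload.decode("utf-8"))
    # big message: store the payload in S3 under a generated key ...
    key = str(uuid.uuid4())
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    # ... and send only a small JSON pointer to the queue
    pointer = json.dumps([
        "software.amazon.payloadoffloading.PayloadS3Pointer",
        {"s3BucketName": bucket, "s3Key": key},
    ])
    return sqs.send_message(QueueUrl=queue_url, MessageBody=pointer)
```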
Receiving a message
The adapter also allows you to fetch such messages through sender channels using the same mechanism.
In order to receive messages in the same manner, a sender channel can be configured in SAP PO.
The IAM credentials as well as the queue name need to be configured (here "BigMessageQueue"). With the checkbox "Use messages bigger than 256 Kb payload size" ticked, the adapter also requires setting the storage bucket from which payloads are fetched.
Sender channels automatically long-poll the queues. Upon receiving an SQS message, the adapter inspects the JSON payload and fetches the linked S3 object to produce a PO message with the S3 object's contents as payload.
Upon successful processing of the message, the SQS message and the S3 object are deleted.
If a message with a payload smaller than 256 KB is received, the payload is simply processed without the claim check mechanism. This allows mixing big and small messages on the same channel without any additional effort.
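The receive side can be sketched analogously. Again a simplified illustration under the same assumptions (Extended Client Library pointer convention, boto3-style `sqs`/`s3` clients), not the adapter's actual code:

```python
import json

def receive_with_claim_check(sqs, s3, queue_url):
    """Long-poll the queue once and yield payloads, resolving claim
    check pointers against S3 and cleaning up processed objects."""
    resp = sqs.receive_message(QueueUrl=queue_url,
                               WaitTimeSeconds=20,
                               MaxNumberOfMessages=10)
    for msg in resp.get("Messages", []):
        body = msg["Body"]
        try:
            parsed = json.loads(body)
        except ValueError:
            parsed = None
        if (isinstance(parsed, list) and parsed
                and isinstance(parsed[0], str)
                and parsed[0].endswith("PayloadS3Pointer")):
            # claim check message: fetch the real payload from S3 ...
            ref = parsed[1]
            obj = s3.get_object(Bucket=ref["s3BucketName"], Key=ref["s3Key"])
            payload = obj["Body"].read()
            # ... and delete the S3 object after successful retrieval
            s3.delete_object(Bucket=ref["s3BucketName"], Key=ref["s3Key"])
        else:
            # small message: the body itself is the payload
            payload = body.encode("utf-8")
        # delete the queue message once the payload is in hand
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
        yield payload
```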
Deduplication & Message transformation
If an error occurs during the multiple AWS API actions involved in receiving a message (deleting the queue message, deleting the S3 object, or in a configured adapter module), a retry of the same message may happen.
The adapter handles this by deduplicating already processed PO messages via the SQS message ID, using SAP PO's internal duplicate check mechanism of its adapter framework, as used by many other PO adapters.
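The duplicate check boils down to filtering on the SQS message ID. A minimal sketch, with the persisted ID store of SAP PO's adapter framework modeled here as a plain set:

```python
def filter_duplicates(messages, processed_ids):
    """Yield only messages whose SQS MessageId has not been seen yet.

    processed_ids models the adapter framework's persistent store of
    already processed message IDs (here just an in-memory set)."""
    for msg in messages:
        message_id = msg["MessageId"]
        if message_id in processed_ids:
            continue  # retry of an already processed message: skip it
        processed_ids.add(message_id)
        yield msg
```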
Also, XML-to-JSON and JSON-to-XML transformation as well as Base64 encoding/decoding (which is quite frequent with SQS) can be applied while sending or receiving an SQS message, with or without the claim check pattern.
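To illustrate the Base64 handling: binary content has to be encoded before it can travel as an SQS text body. The helper names below are hypothetical, not the adapter's API:

```python
import base64

def wrap_binary_for_sqs(data: bytes) -> str:
    """Base64-encode binary content so it fits an SQS text message body."""
    return base64.b64encode(data).decode("ascii")

def unwrap_binary_from_sqs(body: str) -> bytes:
    """Decode a Base64 SQS message body back into the original bytes."""
    return base64.b64decode(body)
```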
AWS SQS is widely used by companies and organizations on Amazon Web Services. The KaTe AWS adapter implements all the heavy lifting necessary to connect AWS services like SQS, S3, SNS, Lambda, or Kinesis to an SAP landscape. Best practice patterns like claim check with SQS are implemented through simple configuration.
Developers can focus on their integration scenario instead of building custom solutions for orchestrating multiple API calls, IAM authentication, deduplication, or message transformation between XML & JSON.
As a product, the adapter is fully maintained by KaTe GmbH, an SAP partner company with deep SAP & AWS expertise that keeps the product in sync with all upgrades to AWS APIs & SAP PO releases. New features & best practices are introduced continuously based on customer feedback.