When working with CPI, we often wish there was an option to restart a message, as in SAP PI/PO, when it fails due to target system unavailability or a glitch. Solutions already exist for this using JMS queues and Data Store operations.
Since JMS would require an additional purchase, and since using Data Store operations for a huge load can affect the tenant’s performance, we approached the solution in a different way using the SAP CPI APIs.
This blog post explains how to access the payload stored in the tenant logs and retrigger it to the target system without any manual intervention and without storing the payload in a separate storage. It also gives you an idea of the available SAP-provided APIs.
Requirement:
Resend messages that fail in CPI due to target system unavailability or an exception.
Case Study:
Customer data sent from C4C to the SAP HYBRIS MARKETING system fails in CPI due to system unavailability. This required manual intervention to identify the customers whose data needed to be resent, which was quite time consuming.
Solution:
The below picture gives you an overview of our approach.
Let’s dive deep into the design and the APIs used.

FLOW 1:
Flow 1 in the above image is the actual IFLOW, which delivers the data received from C4C to the SAP HYBRIS system via OData. The below steps are required to use this approach to retrigger messages in case of failures.
- Use a Groovy script to store the final payload being sent to the target system as an MPL attachment.
Sample Code:
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String;
    def messageLog = messageLogFactory.getMessageLog(message);
    if (messageLog != null) {
        messageLog.setStringProperty("Logging#1", "Printing Payload As Attachment");
        messageLog.addAttachmentAsString("C4C_To_YMKT_Payload", body, "text/plain");
    }
    return message;
}
- Add an exception subprocess to the above IFLOW that fetches the attachment ID of the payload in case of failure and writes it to a .CSV file in an SFTP folder.
Once processing enters the exception subprocess, we do a GET request to the SAP CPI API to retrieve the details of the attachments. Let’s look at each step in the exception subprocess.

Request Reply 1:
This step is used to retrieve the logs that contain the details of the attachments we store.
API used: MessageProcessingLogs
Method: “GET” /MessageProcessingLogs('{MessageGuid}')/Attachments
This API is used to fetch the logs with details of all the attachments that are stored during the message processing.
URL format:
https://TENANT_ID-tmn.hci.REGION.hana.ondemand.com/api/v1/MessageProcessingLogs('{MessageGuid}')/Attachments
QUESTION 1: So, where do I get the MessageGuid?
Below is the screenshot of logs for a message failure.
The unique MessageGuid can be retrieved from the header using the Camel expression “${header.SAP_MessageProcessingLogID}”.
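As an illustration, here is a minimal Groovy sketch of how the exception subprocess could read this header and build the relative resource path for the Request Reply call. The property name “attachmentsPath” is a made-up example for this sketch, not a CPI-defined name; a Content Modifier can achieve the same result.

import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    // Read the MPL ID of the failed message from the standard header
    def mplId = message.getHeaders().get("SAP_MessageProcessingLogID");
    // Build the relative path used by the HTTP receiver in Request Reply 1
    // ("attachmentsPath" is an assumed property name for this sketch)
    message.setProperty("attachmentsPath", "/api/v1/MessageProcessingLogs('" + mplId + "')/Attachments");
    return message;
}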

Below is an example of the response to the above GET request.

The response contains the URI to access the attachment content stored in the tenant.

All we need here is to retrieve the value in the “ID” node, as shown in the above screenshot, and write it to a .CSV file in SFTP.
Please replace all “:” characters in your attachment ID with “%3A” before writing it to the .CSV file.
Example: sap-it-res:msg:a83d5f88d:2254254a-8f3c-49b7-9fc9-ac3f3d856a2d
should be written as “sap-it-res%3Amsg%3Aa83d5f88d%3A2254254a-8f3c-49b7-9fc9-ac3f3d856a2d” in your CSV file.
Note: Since our IFLOW stores multiple attachments, we filter out the ID of only the required attachment (the final outbound payload) using a mapping in the exception subprocess; a script-based alternative is sketched below.
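As a script-based alternative to the mapping, here is a minimal Groovy sketch. It assumes the Request Reply returns the default Atom XML response and that each attachment entry exposes the Name and Id properties shown in the screenshot above; it keeps only the attachment stored in FLOW 1 and applies the “%3A” encoding described above.

import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String;
    def feed = new XmlSlurper().parseText(body);
    // Keep only the attachment written in FLOW 1 (the final outbound payload)
    def entry = feed.entry.find { it.content.properties.Name.text() == "C4C_To_YMKT_Payload" };
    // Replace ":" with "%3A" so the ID can be used directly in the attachment URL
    def encodedId = entry.content.properties.Id.text().replace(":", "%3A");
    // The body becomes one line of the .CSV file written to SFTP
    message.setBody(encodedId);
    return message;
}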
FLOW 2:
Overview:
We have a separate IFLOW which takes the attachment ID as input, accesses the attachment content and pushes it to the target system once the system is up.
Fetch the .CSV file from SFTP that contains the list of attachment IDs for the failed payloads.
Using a router, check whether the file name is the same as the file in which we stored the attachment IDs from FLOW 1. If the condition is true, processing enters the further steps; otherwise the file is sent as an attachment to the team.
Use an Iterating Splitter to split the CSV file into lines, then remove the line feed using a script step and store the attachment ID in an exchange property.
Code Sample:
importClass(com.sap.gateway.ip.core.customdev.util.Message);

function processData(message) {
    // Remove the line feed from the split CSV line so that only the attachment ID remains
    var body = message.getBody(java.lang.String);
    body = body.replaceAll("\\r?\\n", "");
    message.setBody(body);
    // Store the attachment ID in an exchange property; it is referenced later
    // in the HTTP receiver address as ${property.ATTACHMENT_ID}
    message.setProperty("ATTACHMENT_ID", body);
    return message;
}
Do an OData call to the target system (in our case, the SAP HYBRIS MARKETING system) to check if the system is up.
On a successful response, send a GET request to the “MessageProcessingLogAttachments” API.
URL FORMAT:
https://TENANT_ID-tmn.hci.REGION.hana.ondemand.com/api/v1/MessageProcessingLogAttachments('${property.ATTACHMENT_ID}')/$value
Sample:
https://TENANT_ID-tmn.hci.REGION.hana.ondemand.com/api/v1/MessageProcessingLogAttachments('sap-it-res%3Amsg%3Aa83d5f88d%3A2254254a-8f3c-49b7-9fc9-ac3f3d856a2d')/$value
The retrieved payload is pushed back to the target system.
NOTE: Exception handling can be designed as per your requirement. In our approach, we write the attachment IDs back to SFTP as a .CSV file with a different name from the one used in FLOW 1, so that this new file is sent via mail to the team when picked up by the SFTP sender in FLOW 2 and is reprocessed with active monitoring, to verify whether the failure is due to an exception other than target system unavailability.
Hence, with the above approach we could re-trigger a huge number of messages without storing the payload in a separate storage. We look forward to your expert comments on how we can enhance this further.