
Introduction


The post-exit design pattern for integration flows in SAP Cloud Platform Integration allows customers to extend a published integration flow without changing the content provided by SAP. Further details on the integration flow extension concept, its design, and its advantages can be found here.

 

While there are instructional guides on how to implement a post-exit, they assume that the payload being sent contains a single record. When the payload is sent in batch, the post-exit implementation becomes considerably more complicated: we have two payloads merged into one, the first being the mapped payload and the second the original that arrives from the back-end system. Since more fields need to be mapped, a post-exit is used. The objective of this blog post is to provide an approach for dealing with such payloads containing multiple records.

 

Why requests with multiple records require a different design


 

The main problem is to get the correct field from the original payload and map it alongside its analogous mapped counterpart. This is very difficult to do using graphical mapping alone, since we can’t loop over two different contexts and map simultaneously. Other documented solutions can rely solely on graphical mapping because, as stated above, they assume only a single record per payload, which doesn’t require the same contextual considerations since the payload is essentially flat.


 

 

Solution


 

A feasible strategy is therefore to save the required fields from the original payload in a hashmap, with the record identifier as the key. We then use graphical mapping to map the old XSD of the receiver system (the one without the extended fields) to the new XSD that includes them: map all the shared fields together, map the identifier to the extended fields, and add a get script to the graphical mapping that takes the record identifier as input and outputs the saved hashmap fields into their correct attributes. The advantage of this approach is that it is straightforward and does not require the creation of a new XSD with both payloads merged.
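As a quick illustration of the idea, the saved structure looks roughly like the following minimal Groovy sketch; the keys and values are purely hypothetical:

// One map per fact type, keyed by the record identifier, so the mapping
// step can look up the extension fields of each record by its key.
// All identifiers and values below are made up for illustration.
def discountMap = [
    'CONTRACT_0001': ['Percentage': '10', 'StartDate': '20200101', 'EndDate': '20201231'],
    'CONTRACT_0002': ['Percentage': '15', 'StartDate': '20200201', 'EndDate': '20201231']
]

// During graphical mapping, the record identifier is the input of the get script:
assert discountMap['CONTRACT_0001'].Percentage == '10'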


 

 

Objective


 

In this blog post we will showcase a post-exit implementation that maps Utilities Installation objects into Agreement Terms objects in SAP Marketing Cloud. Each Installation can contain many nested Installation Facts, each of which corresponds to different attributes in the receiver payload, and not all Installations have facts. For this scenario we use XmlSlurper to loop through the installations, read the record identifier to obtain the hashmap reference key, and then loop through the facts of that record to collect the desired hashmap fields. We build a separate hashmap for each fact name and place each fact in the correct object when it is found, with the record key being the ISUContractID, which is mapped to MKT_AgreementExternalID in the receiver payload.


 

 

Overview


 



 

Here is a rundown of the iflow that will be detailed in this blog post:


 

  • A: We create the hashmaps.

  • B: We no longer need the ISU payload, so we only keep the mapped one.

  • C: We use multicast to split the payload as it consists of Agreement and Agreement Terms.

  • D: Since we're mapping on Agreement Terms, we need to remove all Agreements.

  • E: Here we do the message mapping in order to add the extended fields from the saved hashmaps.

  • F: Here we gather both the Agreement and Agreement Terms back into one payload.


 

 

 

Hashmap Generator Script


import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    // Parse the incoming body and collect all UtilitiesInstallation nodes
    def body = message.getBody(String.class);
    def response = new XmlSlurper().parseText(body);
    def installation = response.'**'.findAll { node -> node.name() == 'UtilitiesInstallation' }

    // One map per fact type, keyed by the record identifier (ISUContractID)
    def CDiscountMap = new LinkedHashMap();
    def FDiscountMap = new LinkedHashMap();
    def eMap = new LinkedHashMap();

    for (def i = 0; i < installation.size(); i++) {
        for (def j = 0; j < installation[i].UtilityInstallationFacts.size(); j++) {

            def fact = installation[i].UtilityInstallationFacts[j];
            def contractId = installation[i].ISUContractID.text();

            //ADD
            if (fact.FactName == "CDiscount") {
                CDiscountMap.putAll([(contractId): ['FactIndicator': fact.FactIndicator,
                                                    'StartDate'    : fact.StartDate,
                                                    'EndDate'      : fact.EndDate]]);
            }

            //ADD
            if (fact.FactName == "FDiscount") {
                FDiscountMap.putAll([(contractId): ['Percentage': fact.Percentage,
                                                    'StartDate' : fact.StartDate,
                                                    'EndDate'   : fact.EndDate]]);
            }

            //ADD
            if (fact.FactName == "eAbsAmtDisc") {
                eMap.putAll([(contractId): ['Amount'      : fact.Amount,
                                            'currencyCode': fact.Amount.@currencyCode,
                                            'StartDate'   : fact.StartDate,
                                            'EndDate'     : fact.EndDate]]);
            }
        }
    }

    // Save each map as an exchange property so the mapping scripts can read it later
    message.setProperty("cHashMap", CDiscountMap);
    message.setProperty("fHashMap", FDiscountMap);
    message.setProperty("eHashMap", eMap);

    return message;
}


 

Summary


 

Here we first parse the body payload and find all instances of UtilitiesInstallation, and then create a LinkedHashMap for each type of UtilityInstallationFact that we would like mapped into marketing. This is implemented in the two nested loops above, where we iterate over each UtilitiesInstallation and its respective UtilityInstallationFacts. We then check whether the fact's FactName is one of the ones we're interested in; if so, we add a new entry to the respective hashmap, with ISUContractID as the key, i.e. the record identifier that gets mapped to MKT_AgreementExternalID. Lastly, we save the hashmaps as distinct properties and return the message.
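To make this concrete, here is a standalone miniature of the same logic run against a small, purely hypothetical payload (the element values are invented). It mirrors the script above and can be executed outside SAP Cloud Platform Integration to test the parsing:

// Hypothetical sample payload; structure follows the element names used above,
// but the values are made up for illustration only.
def body = '''
<UtilitiesInstallations>
  <UtilitiesInstallation>
    <ISUContractID>CONTRACT_0001</ISUContractID>
    <UtilityInstallationFacts>
      <FactName>FDiscount</FactName>
      <Percentage>10</Percentage>
      <StartDate>20200101</StartDate>
      <EndDate>20201231</EndDate>
    </UtilityInstallationFacts>
  </UtilitiesInstallation>
</UtilitiesInstallations>
'''

def response = new XmlSlurper().parseText(body)
def installation = response.'**'.findAll { it.name() == 'UtilitiesInstallation' }
def fDiscountMap = new LinkedHashMap()

// Same nested loop idea: per installation, per fact, keyed by ISUContractID
installation.each { inst ->
    inst.UtilityInstallationFacts.each { fact ->
        if (fact.FactName == 'FDiscount') {
            fDiscountMap[inst.ISUContractID.text()] = ['Percentage': fact.Percentage.text(),
                                                       'StartDate' : fact.StartDate.text(),
                                                       'EndDate'   : fact.EndDate.text()]
        }
    }
}

assert fDiscountMap['CONTRACT_0001'].Percentage == '10'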

 

 

 

Filter


 


 

Here we filter down to the marketing payload so that it can be mapped later on.

 

 

 

Multicast and Remove Agreement Script


 

We then use a multicast to process Agreements and Agreement Terms separately, since each object has its own XSD. Because we are performing the mapping on Agreement Terms, we need to remove all Agreement instances.
import groovy.xml.XmlUtil;
import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    def body = message.getBody(String.class);
    def response = new XmlSlurper().parseText(body);

    // Find every batch part whose uri points at an Agreement entity and delete it
    def agreements = response.'**'.findAll { node -> node.uri.text().contains("Agreements(") };
    agreements.each { node -> node.replaceNode {} };

    // Serialize the remaining document (Agreement Terms only) back into the body
    def nodeAsText = XmlUtil.serialize(response);
    message.setBody(nodeAsText);

    return message;
}

 

Summary


 

Here, we get the message, find all instances where the batchChangeSetPart has an Agreement child by looking at the URI, and then delete them using replaceNode{}.
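For readers who want to try the removal outside the iflow, here is a self-contained miniature that applies the same pattern to a simplified, hypothetical batch payload (the element names below only approximate the real OData batch structure):

import groovy.xml.XmlUtil

// Hypothetical, simplified batch payload; only the uri child matters here.
def body = '''<batchParts>
  <batchChangeSetPart>
    <uri>Agreements(guid-1)</uri>
  </batchChangeSetPart>
  <batchChangeSetPart>
    <uri>AgreementTerms(guid-2)</uri>
  </batchChangeSetPart>
</batchParts>'''

def response = new XmlSlurper().parseText(body)

// Same pattern as the script above: parts whose uri points at Agreements( are deleted.
def agreementParts = response.'**'.findAll { it.uri.text().contains("Agreements(") }
agreementParts.each { it.replaceNode {} }

// Only the AgreementTerms part remains in the serialized output.
println XmlUtil.serialize(response)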

 

 

 

Mapping payloads


 


The mapping is done using the record identifier


As you can see, the mapping consists of two XSD files for Agreement Terms: one from the published iflow (on the left) and the other with the added extensibility fields (on the right). Also notice how they all use the record identifier, namely MKT_AgreementExternalID.

 


 

Here's a sample mapping for a set of added custom fields that represents one type of installation fact saved in the hashmap. We use a get function to retrieve the information.
def void getFdiscount(String[] id, Output percentage, Output start, Output end, MappingContext context) {
    // Read the FDiscount hashmap that was saved as an exchange property
    def newBody = context.getProperty('fHashMap');

    // If the record identifier exists in the map, output its saved fields
    if (newBody.containsKey(id[0])) {
        percentage.addValue(newBody[id[0]].Percentage);
        start.addValue(newBody[id[0]].StartDate);
        end.addValue(newBody[id[0]].EndDate);
    }
}

 

Summary


 

We retrieve the FDiscount hashmap from the saved property and check whether it contains a key matching the input id. If it does, the available values are output to the corresponding fields that are mapped outside the function.

 

Other get scripts


def void getediscount(String[] id, Output amount, Output currency, Output start, Output end, MappingContext context) {
    // Read the eAbsAmtDisc hashmap saved as an exchange property
    def newBody = context.getProperty('eHashMap');

    if (newBody.containsKey(id[0])) {
        amount.addValue(newBody[id[0]].Amount);
        currency.addValue(newBody[id[0]].currencyCode);
        start.addValue(newBody[id[0]].StartDate);
        end.addValue(newBody[id[0]].EndDate);
    }
}

def void getCdiscount(String[] id, Output status, Output start, Output end, MappingContext context) {
    // Read the CDiscount hashmap saved as an exchange property
    def newBody = context.getProperty('cHashMap');

    if (newBody.containsKey(id[0])) {
        status.addValue(newBody[id[0]].FactIndicator);
        start.addValue(newBody[id[0]].StartDate);
        end.addValue(newBody[id[0]].EndDate);
    }
}

 

 

Gather


Lastly, we gather the Agreement and Agreement Terms mappings back together into the same payload.

 

 

 

Conclusion


In this blog post we covered a post-exit implementation that used hashmaps to store extensible data, which was then added via graphical mapping using get methods. The solution requires a few steps: first, create the hashmaps using the old payload; second, filter out the old payload; third, create a graphical mapping with the old and new receiver XSDs; and last, add get methods that take the record identifier as input and write the required fields from the hashmaps to their respective fields.