I thought I would expand on alexkowalczuk's fantastic blog showing how to read files from SharePoint by also showing how to read and process SharePoint lists and replicate them into DataSphere.
Summary:
My company has many critical business functions leveraging Microsoft Office 365 SharePoint sites with Lists. We started off attempting to use Open Connectors. However, with Open Connectors reaching end of life, we are now leveraging Integration Suite directly. Thankfully, SAP came to the rescue with the "Microsoft SharePoint Adapter for SAP Integration Suite". This amazing new SharePoint adapter gives full access to any SharePoint 365 content.
Many Ways to Achieve Integration
Before I get started, I should confess that the solution that follows is not in any way optimal or even the "right" way to do this. The ideal solution for me would be for DataSphere to read the SharePoint lists over OData. However, as of now, DataSphere does not support delegated OAuth2 authentication to SharePoint 365. I suspect this will evolve in the near future.
The Architecture:
SharePoint Server:
DataSphere:
Integration Suite - Integration Flow
iFlow Steps:
Content Modifier 1
I followed Alex's instructions and added the Message Header and Exchange Properties:
Request Reply: Fetch Data From SharePoint
There are a few confusing and difficult parts in this first step.
The main notes I will make on the OData query are:
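As a rough sketch of what this step issues under the hood (the site and list IDs below are placeholders, and the selected columns are from my Business Unit list), a Microsoft Graph query for SharePoint list items looks something like:

```
GET https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items?expand=fields(select=id,Title,ChannelLookupId,Created,Modified)
```

The expand=fields part is easy to miss: without it, Graph returns only the item metadata and the custom list columns never arrive.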
The Database
Use the HANA Database Explorer to create destination tables for the SharePoint data:
Note: Every SharePoint list had the field "ID" as a primary key.
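For example, a minimal destination table for the Channel list (the table and column names match the Groovy script further down; the column types are my assumption) could look like:

```sql
-- Destination table for the SharePoint "Channel" list
CREATE COLUMN TABLE "CORPSLSPipeline_Channel" (
    "id"    NVARCHAR(20) NOT NULL,  -- SharePoint list item ID
    "Title" NVARCHAR(255),
    PRIMARY KEY ("id")
);
```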
Groovy Script: JSON to JDBC XML
Now that SharePoint has been queried and has returned a packet of JSON results, we need a way to post those records over the JDBC connection. I am sure there is a better and more correct way, but I just wrote a crude little Groovy script to convert the JSON data packets into JDBC XML commands.
JDBC XML commands to HANA look like the following:
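Here is a hypothetical sample payload for the Channel table (the row values are made up, but the structure matches what the Groovy script below generates):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <delete_statement>
        <CORPSLSPipeline_Channel_LISTS action="DELETE">
            <table>CORPSLSPipeline_Channel</table>
            <Key></Key>
        </CORPSLSPipeline_Channel_LISTS>
    </delete_statement>
    <insert_statement>
        <CORPSLSPipeline_Channel_LISTS action="INSERT">
            <table>CORPSLSPipeline_Channel</table>
            <access>
                <id>1</id>
                <Title>Retail</Title>
            </access>
            <access>
                <id>2</id>
                <Title>Food Service</Title>
            </access>
        </CORPSLSPipeline_Channel_LISTS>
    </insert_statement>
</root>
```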
The first statement above executes a delete statement against a table with no key values specified, which effectively deletes all data in the table. Since these are very small tables, this methodology is fine for my requirement.
The second statement begins a batch insert of data records with two fields, "id" and "Title".
Script: I have attached my sample script. If you improve upon this script, please share your improvements.
/* Refer the link below to learn more about the use cases of script.
https://help.sap.com/viewer/368c481cd6954bdfa5d0435479fd4eaf/Cloud/en-US/148851bf8192412cba1f9d2c17f4bd25.html
If you want to know more about the SCRIPT APIs, refer the link below
https://help.sap.com/doc/a56f52e1a58e4e2bac7f7adbf45b2e26/Cloud/en-US/index.html */
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import org.json.*;
//import groovy.util.logging.Slf4j;
/*
20240316 : Joel Blackthorne
Developer Notes:
* Create a version of the processData_BusinessUnits method below for each
* list type.
*/
//***************
//Business Units
//***************
def Message processData_BusinessUnits(Message message) {
//Parameters
final String TABLE_NAME = "CORPSLSPipeline_BusinessUnit";
final String[] FIELD_NAMES = ["id", "Title", "ChannelLookupId", "SalesChannelLeadLookupId", "MarketingLeadLookupId", "ContentType", "Created", "Modified"];
return MSGraphConverter.toXML(message, TABLE_NAME, FIELD_NAMES,
messageLogFactory, true);
}
//***************
//Channel
//***************
def Message processData_Channels(Message message) {
//Parameters
final String TABLE_NAME = "CORPSLSPipeline_Channel";
final String[] FIELD_NAMES = ["id", "Title"];
return MSGraphConverter.toXML(message, TABLE_NAME, FIELD_NAMES,
messageLogFactory, true);
}
//***************
//Plants
//***************
def Message processData_Plants(Message message) {
//Parameters
final String TABLE_NAME = "CORPSLSPipeline_Plants";
final String[] FIELD_NAMES = ["id", "Title"];
return MSGraphConverter.toXML(message, TABLE_NAME, FIELD_NAMES,
messageLogFactory, true);
}
//***************
//Lines
//***************
def Message processData_Lines(Message message) {
//Parameters
final String TABLE_NAME = "CORPSLSPipeline_Lines";
final String[] FIELD_NAMES = ["id", "Title", "Plantlookupid"];
return MSGraphConverter.toXML(message, TABLE_NAME, FIELD_NAMES,
messageLogFactory, true);
}
//*******************
//Product Categories
//*******************
def Message processData_Product_Categories(Message message) {
//Parameters
final String TABLE_NAME = "CORPSLSPipeline_Product_Categories";
final String[] FIELD_NAMES = ["id", "Title"];
return MSGraphConverter.toXML(message, TABLE_NAME, FIELD_NAMES,
messageLogFactory, true);
}
//*******************
//Opportunities
//*******************
def Message processData_Opportunities(Message message) {
//Parameters
final String TABLE_NAME = "CORPSLSPipeline_Opportunities";
final String[] FIELD_NAMES = ["id", "Title"];
return MSGraphConverter.toXML(message, TABLE_NAME, FIELD_NAMES,
messageLogFactory, true);
}
//*******************
//Product SubCategories
//*******************
def Message processData_Product_SubCategories(Message message) {
//Parameters
final String TABLE_NAME = "CORPSLSPIPELINE_PRODUCT_SUBCATEGORY";
final String[] FIELD_NAMES = ["id", "Title", "ProductCategoryLookupId"];
return MSGraphConverter.toXML(message, TABLE_NAME, FIELD_NAMES,
messageLogFactory, true);
}
//Converts Microsoft Graph JSON results
//to JDBC XML insert statements.
class MSGraphConverter extends Script {
//Required run method
def run(){
}
//Public static method for conversion of the Message contents to XML
static Message toXML(Message message, String tableName, String[] fieldNames,
def messageLogFactory, boolean deleteAll){
return convertMSGraphJSON_To_XML(
message,
tableName,
fieldNames,
messageLogFactory,
deleteAll);
}
//Main execution logic for converting JSON results to XML
private static Message convertMSGraphJSON_To_XML(Message message,
String tableName,
String[] fieldNames,
def messageLogFactory,
boolean deleteAll){
def messageLog = messageLogFactory.getMessageLog(message);
//Body
String bodyStr = message.getBody(java.lang.String);
String newBodyStr = getJDBCXML_Start(tableName, deleteAll);
String newRow = "";
String fieldName = "";
//Save existing JSON body to the log
messageLog.addAttachmentAsString("JSON_Str:", bodyStr, "text/plain");
JSONObject objJSON = new JSONObject(bodyStr);
//get value node
JSONArray arryJSONValue = objJSON.getJSONArray("value");
//Build the new XML for JDBC transactions
for (int i=0; i<arryJSONValue.length(); i++){
JSONObject objectJSON = arryJSONValue.getJSONObject(i);
newRow = "<access>";
for (int i2 = 0; i2 < fieldNames.length; i2++){
fieldName = fieldNames[i2];
newRow += writeField(fieldName, objectJSON, messageLogFactory, message);
}
newRow += "</access>";
newBodyStr += newRow;
}
//Close the XML Structure
newBodyStr += getJDBCXML_End(tableName);
//Attach the XML to the debug log
messageLog.addAttachmentAsString("JDBC_XML:", newBodyStr, "text/plain");
message.setBody(newBodyStr);
//Return message
return message;
}
//Writes a field from the result set
private static String writeField(String fieldName, JSONObject objectJSON,
def messageLogFactory, Message message){
try{
//Not all the fields exist in every record, but JDBC requires that all
//fields be sent per record.
//Test to see if the field exists in the JSON Structure
def fldLst = objectJSON.get("fields");
if (fldLst.has(fieldName)){
//String.valueOf handles non-string values such as numeric ids
return getLineXML(fieldName, String.valueOf(fldLst.get(fieldName)));
}
else{
//Field does not exist in the JSON structure
return getLineXML(fieldName, '');
}
}
catch(Exception ex){
String msg = "";
//Exception
msg += '##########################################\n';
msg += 'writeField method error:';
msg += ex.getMessage();
msg += '\n';
//print all the field names found
def fieldLst = objectJSON.get("fields");
msg += fieldLst;
msg += '\n';
msg += '##########################################\n';
def messageLog = messageLogFactory.getMessageLog(message);
messageLog.addAttachmentAsString("writeField_Error:", msg, "text/plain");
throw ex;
}
}
//Gets a line for a JDBC XML Table
private static String getLineXML(String fieldName, String fieldValue){
return "<" + fieldName + ">" +
groovy.xml.XmlUtil.escapeXml(fieldValue) +
"</" + fieldName + ">";
}
//Gets the start of a JDBC XML Statement
private static String getJDBCXML_Start(String tableName, boolean deleteAll){
if (deleteAll){
return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
"<root>" +
getJDBCXML_Start_Delete(tableName) +
getJDBCXML_Start_Insert(tableName);
}
else{
return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
"<root>" +
getJDBCXML_Start_Insert(tableName);
}
}
//Gets the start of a JDBC XML Statement
private static String getJDBCXML_Start_Delete(String tableName){
return "<delete_statement>" +
"<" + tableName + "_LISTS action=\"DELETE\">" +
"<table>" + tableName + "</table>" +
"<Key></Key>" +
"</" + tableName + "_LISTS>" +
"</delete_statement>";
}
//Gets the start of a JDBC XML Statement
private static String getJDBCXML_Start_Insert(String tableName){
return "<insert_statement>" +
"<" + tableName + "_LISTS action=\"INSERT\">" +
"<table>" + tableName + "</table>";
}
//Gets the end of a JDBC XML Statement
private static String getJDBCXML_End(String tableName){
return "</" + tableName + "_LISTS>" +
"</insert_statement></root>";
}
}
Conclusion:
SAP has released an amazing improvement in the "Microsoft SharePoint Adapter for SAP Integration Suite", which now makes integrating data from SharePoint 365 into DataSphere achievable.
About the Author:
Joel Blackthorne is the SAP BI Lead Architect at CH Guenther & Son.