Technology Blog Posts by Members
In this blog, we share the integration patterns used in our SAP PI migration to the BTP Integration Suite, along with the lessons we learned during the migration.

 

Here is a high-level diagram of the integration patterns used:

[Image: high-level diagram of the integration patterns]

 

Here is a description of each pattern:

  1. SAP business systems consuming SAP BTP iFlows via the XI/SOAP adapters
    • XI is mainly used for async, or for sync without business faults
    • SOAP is mainly used for sync with business fault messages
    • SAP BTP Integration Suite is accessed via SAP Web Dispatcher; a separate blog explains how to use it
  2. Exposed APIs follow the patterns below:
    • Pattern 1:
      • SAP BTP API Management proxy -> SAP BTP Cloud Integration iFlow -> SAP business systems (via Cloud Connector using the XI/SOAP adapters)
      • The API Management proxy handles JWT validation (non-SAP identity provider), IP filtering, and exposing the API under specific URL naming conventions
      • The SAP Cloud Integration iFlow handles request and response transformations
    • Pattern 2:
      • SAP BTP API Management proxy -> SAP Gateway OData API (via Cloud Connector)
      • The API Management proxy handles JWT validation, IP filtering, exposing the API under specific URL naming conventions, and converting between the OData API format and the OpenAPI specification
    • Pattern 3:
      • SAP Cloud Integration iFlow -> SAP business systems (via Cloud Connector using the XI/SOAP adapters)
      • We use the SAP identity provider for OAuth 2.0, and the iFlow handles request and response transformations
  3. Consumption of APIs or SOAP services:
    • Cloud Integration iFlow to internal on-premise/privately managed cloud APIs via Cloud Connector and OAuth 2.0
    • Cloud Integration iFlow to external APIs over HTTPS with OAuth 2.0 from external providers
  4. SFTP communication:
    • SFTP adapter using SSH for internal SFTP communication via Cloud Connector
    • SFTP adapter using SSH for external SFTP communication
  5. Eventing solution:
    • Async outbound and inbound integration with Advanced Event Mesh and Event Mesh (AMQP)
    • Async outbound and inbound integration with Confluent Cloud Kafka using the Advantco Kafka adapter
      • The Kafka adapter in Cloud Integration has limited functionality, while the Advantco Kafka adapter offers advanced configuration options for Confluent Cloud
  6. Datasphere and Splunk communication:
    • Datasphere (not part of the PI migration) uses HTTPS and JDBC communication
      • Activity logs via the HTTPS adapter
      • Audit logs and other data-related interfaces via JDBC
    • Splunk
      • Splunk monitoring integration using MPL data, via HTTPS or the Splunk adapter
  7. Mail adapter via Cloud Connector to send emails securely
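In Patterns 1 and 2 above, the API Management proxy validates a JWT and applies IP filtering before traffic reaches the iFlow. In SAP API Management this is configured declaratively with policies rather than hand-written code; purely as an illustration of the underlying checks, here is a minimal stdlib-only Python sketch of HS256 JWT verification plus an IP allow-list. The secret, claims, and allow-list are hypothetical, and tokens from a real identity provider would typically be RS256-signed and verified against keys fetched from the provider.

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url_decode(data: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))


def _b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_jwt_hs256(claims: dict, secret: bytes) -> str:
    """Build a signed token; used here only to exercise the validator."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url_encode(sig)}"


def validate_jwt_hs256(token: str, secret: bytes) -> dict:
    """Verify signature and expiry, returning the claims or raising ValueError."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        raise ValueError("malformed token")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    # Constant-time comparison to avoid leaking signature bytes via timing.
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims


def ip_allowed(client_ip: str, allow_list: set) -> bool:
    # The proxy rejects callers outside the allow-list before forwarding.
    return client_ip in allow_list
```

In the actual landscape these checks run in the proxy layer so that invalid calls never consume Cloud Integration resources; the iFlow can then focus purely on transformation.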

 

Here are the key learnings from our PI migration:

  • Migrate pattern by pattern
  • Redesign interfaces where needed (e.g., BPM, adapter modules, authentication)
  • The migration tool is evolving, but when it does not work, a few changes can be made to the ICOs in PI before migrating (e.g., removing external libraries, removing complex routing)
  • Identify workarounds where possible; for example, until the IBM MQ adapter is delivered, AMQP can be used
  • Do sizing together with SAP
  • Perform performance tests to avoid memory exhaustion, I/O exceptions, data store connection issues, and hitting the limits of monitoring storage
  • Request SAP (via an SAP ticket) to increase resources based on your usage/sizing
  • Reduce usage of the data store (use internal JMS queues instead)
  • If needed, increase the number of internal JMS queues up to 100 (increase further via an SAP ticket)
  • Avoid enabling transaction handling (JDBC/JMS) where possible
  • XI receiver adapter: prefer QoS Exactly Once with a JMS queue over the data store
  • XI sender adapter: prefer QoS At Least Once with JMS over Exactly Once with the data store
  • Avoid logging payloads or headers for high-volume interfaces
  • SOAP vs. XI adapter: fault handling vs. sxmb_moni logs
  • Organize Cloud Integration packages and define naming conventions/development standards
  • Implement generic Groovy scripts to:
    • Log multiple custom headers
    • Update JSON datatypes in the JSON payload
  • Implement reusable fault message types:
    • Create a generic fault message type mapping
    • Update message type names using Groovy scripts
  • Develop a generic iFlow to migrate value mappings
  • Use the Deploy/Undeploy Config tool documented in the blog
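One of the generic scripts mentioned above restores native JSON datatypes, since mapping output often renders every value as a string. The actual script runs as Groovy inside Cloud Integration; the following Python sketch only illustrates the recursive coercion logic, and the sample field names are hypothetical.

```python
import json


def coerce_json_types(obj):
    """Recursively convert boolean- and number-like strings to native types.

    Walks dicts and lists; leaves values that are not coercible unchanged.
    """
    if isinstance(obj, dict):
        return {k: coerce_json_types(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [coerce_json_types(v) for v in obj]
    if isinstance(obj, str):
        if obj in ("true", "false"):
            return obj == "true"
        try:
            return int(obj)        # "42" -> 42
        except ValueError:
            pass
        try:
            return float(obj)      # "9.99" -> 9.99
        except ValueError:
            return obj             # non-numeric strings stay strings
    return obj


# Example: a mapped payload where everything arrived as strings.
payload = json.loads('{"qty": "42", "price": "9.99", "active": "true", "id": "A1"}')
fixed = coerce_json_types(payload)
```

Keeping this logic in one reusable script (rather than per-interface mappings) was what made it worth listing as a generic building block.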

To conclude, we have shared the integration patterns and learning experiences from our migration, which can be used by other SAP customers.