
With the latest Cloud Integration update, the SAP Process Orchestration to SAP Integration Suite migration tool now supports the automatic creation of integration artifacts for the Pipeline for Cloud Integration.
For an introduction to the migration tool, check out this blog post. For more details about the pipeline for Cloud Integration, including example implementations, see this blog post.
In the migration tool, you can now decide between two migration approaches: standard vs. pipeline. If you select standard, one integration flow is created based on one integration scenario (an Integrated Configuration Object or a decoupled PI dual-stack configuration) in the SAP Process Integration / SAP Process Orchestration system. If you would like to run your scenarios using the pipeline for Cloud Integration, multiple integration flows and the corresponding Partner Directory objects are automatically created.
Let me showcase the pipeline support in the migration tool with two examples, one for a content-based router scenario and one for a point-to-point scenario.
So, we have created a new package to which we would like to migrate the scenario. Once we switch to edit mode, we see a Migrate button to start the migration wizard.
In the wizard, we first need to connect to the SAP Process Orchestration system.
Once done, we select the scenario that we would like to migrate; in our case, we select an Integrated Configuration of the content-based router pattern.
On the next screen, the best-fit pattern is identified; here we can choose between Content-Based Router and Recipient List. Furthermore, we select the migration approach, in our case Pipeline. By the way, for the Pipeline approach it doesn't really matter whether you select Content-Based Router or Recipient List for scenarios with more than one receiver, because here we use the provided generic pipeline steps, and hence the generated objects are the same. For the Standard approach, it differs.
On the next screen, you can either create mappings that are uploaded to your integration flows or create reusable artifacts. In our case, we go for reusable artifacts. By the way, this step and the next are identical to the Standard approach.
The next screen just informs about which mapping objects are created or reused.
On the Scenario screen, you need to maintain a unique scenario name. This scenario name is used for the naming of the created integration flows and Partner Directory entries. Furthermore, you need to maintain the name of the very first JMS queue of the pipeline. By default, it's PIPX01, following our recommendations; we assume that you would like to use the integrated messaging runtime. If you would like to use the fully decoupled pipeline instead, you need to change it to PIPQ01. If you have configured your own queue prefix, you need to adjust the queue name accordingly.
If you scroll down, you can see that one inbound conversion flow and three receiver flows will be created. Here, you have the option to maintain the XPath conditions in case they haven't been properly resolved from the conditions in the SAP Process Orchestration system. If you like, you can also change the endpoints of the scenario-specific conversion flow and the outbound flows to your needs.
On the Review screen, select Migrate.
Once the migration has been successfully carried out, you get a list of all created objects. On the first tab, you can see that five integration flows have been created. The first one is the scenario-specific inbound flow. The second flow runs the inbound conversion; in our case, it converts the incoming message from JSON format to XML. The last three flows are the scenario-specific outbound flows, one for each receiver. From here, you can navigate to the flow models. The naming convention for the integration flow names is defined as follows:
<Your scenario name>_<pipeline step>_<pipeline type>[_<receiver name>_<interface index>]
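To make the convention concrete, here is a small sketch of how such a name is assembled from its parts. Note that the scenario name, step, type, and receiver values below are placeholders for illustration, not the tool's actual vocabulary:

```python
def iflow_name(scenario, step, pipeline_type, receiver=None, interface_index=None):
    """Assemble an integration flow name following the convention above.

    The optional receiver/interface parts only apply to the
    scenario-specific outbound flows (one per receiver)."""
    parts = [scenario, step, pipeline_type]
    if receiver is not None:
        parts += [receiver, str(interface_index)]
    return "_".join(parts)

# Placeholder values, for illustration only:
print(iflow_name("OrderProcessing", "Inbound", "SND"))
# OrderProcessing_Inbound_SND
print(iflow_name("OrderProcessing", "Outbound", "RCV", "ERP", 1))
# OrderProcessing_Outbound_RCV_ERP_1
```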
On the Partner Directory tab, you see an overview of the partner ID which has been created and its parameters.
The Next Steps tab provides information about which activities may have to be carried out post migration. For the pipeline approach, for instance, the prerequisite is that you have copied the latest pipeline package to your workspace and have deployed the script collection and all the generic flows.
Let's take a look at the created Partner Directory entry. You can see that a partner ID has been created with the scenario name maintained in the wizard. Because we need to run an inbound conversion from JSON to XML, the string parameter InboundConversionEndpoint has been created, pointing to the ProcessDirect endpoint of the corresponding scenario-specific flow. If you like, you can add further string parameters to customize the runtime behaviour, such as the maximum number of retries; see Using the Partner Directory in the Pipeline Concept.
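Such additional string parameters can also be maintained programmatically via Cloud Integration's Partner Directory OData API. A minimal sketch that only builds the request (no HTTP call); the tenant URL and the MaxRetry parameter name are assumptions for illustration, so check the pipeline documentation for the actual parameter names your pipeline version supports:

```python
import json

def string_parameter_request(base_url, pid, param_id, value):
    """Build URL and JSON body for creating a Partner Directory string
    parameter via the OData API (POST to /api/v1/StringParameters)."""
    url = f"{base_url}/api/v1/StringParameters"
    body = {"Pid": pid, "Id": param_id, "Value": value}
    return url, json.dumps(body)

# Hypothetical tenant URL and parameter name, for illustration only:
url, body = string_parameter_request(
    "https://my-tenant.example.com", "OrderProcessing", "MaxRetry", "5")
print(url)
print(body)
```

You would send this body with your API client of choice, authenticated against the tenant's API endpoint.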
On the Alternative Partners tab, you can see that the sender system name and the sender interface are mapped to the scenario. The sender information is actually used to determine the partner ID.
On the Binary Parameters tab, you can see that a binary parameter with ID receiverDetermination has been created. This is the XSLT which is executed to determine the receivers. Let's download it.
Here you can see the XPath expressions which are evaluated to determine the receivers and interfaces. Note that the migration tool only supports XSLTs combining receiver and interface determination in one single XSLT to keep it simple; see Special Cases.
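Conceptually, the generated XSLT evaluates XPath conditions against the message payload and selects the matching receiver/interface pairs. A rough Python equivalent of that evaluation logic; the receiver names, conditions, and sample payload below are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical routing table: a receiver/interface pair is selected when
# the XPath yields the given value (mirrors the XSLT's condition logic).
ROUTING = [
    ("Receiver_DE", "Interface_DE", ".//Country", "DE"),
    ("Receiver_US", "Interface_US", ".//Country", "US"),
    ("Receiver_Other", "Interface_Other", None, None),  # default branch
]

def determine_receivers(payload_xml):
    root = ET.fromstring(payload_xml)
    matches = [(rcv, ifc) for rcv, ifc, xpath, value in ROUTING
               if xpath is not None and root.findtext(xpath) == value]
    # Fall back to the default branch if no condition matched
    return matches or [(r, i) for r, i, x, v in ROUTING if x is None]

sample = "<Order><Country>DE</Country></Order>"
print(determine_receivers(sample))  # [('Receiver_DE', 'Interface_DE')]
```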
Now, let's run through a P2P scenario. The steps are the same as for the case above, so we skip most of them here and focus on the differences. Once we have started the wizard and connected to the system, we select a P2P scenario.
This time, Point-to-Point Asynchronous has been identified as the fitting pattern. We again select Pipeline as the approach. In addition, we can either select or deselect the Idempotent Process at Receiver Side flag. If selected, we add an idempotent process when calling the receiver to identify duplicate messages in case the receiver is not idempotent.
Here again, we need to maintain a scenario name.
And further below, you can see that we only have one single receiver.
Once migrated, you see that two integration flows have been created, one for the inbound and one for the outbound.
Now, let's take a look at the generated Partner Directory object. For P2P, we only need string parameters. The name of the receiver is stored in the receiverDetermination parameter, the endpoint in the interfaceDetermination_<system name> parameter.
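Put differently, the runtime can resolve the P2P target with two simple lookups: first the receiver name, then the endpoint keyed by that name. A sketch with made-up parameter values:

```python
# Hypothetical Partner Directory string parameters for a P2P scenario
params = {
    "receiverDetermination": "ERP",
    "interfaceDetermination_ERP": "/pip/OrderProcessing/ERP/out",
}

def resolve_endpoint(pd_params):
    """Look up the receiver name, then the endpoint keyed by that name."""
    receiver = pd_params["receiverDetermination"]
    return receiver, pd_params[f"interfaceDetermination_{receiver}"]

print(resolve_endpoint(params))  # ('ERP', '/pip/OrderProcessing/ERP/out')
```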
If you can't see the pipeline approach option in your tenant, you may need to wait for the upcoming weekend, when the rest of the data centers should be updated. So stay tuned; hopefully you can get hold of this new feature by Monday.