Exactly once in order (EOIO) for Group messages using Advanced Event Mesh

anuj_dulta1

Hi there,

We have recently introduced SAP Advanced Event Mesh (AEM), which under the hood is ‘Solace PubSub+ Cloud’, into our Integration Landscape, and I am now looking for the best way to implement ‘Exactly Once In Order’ sequencing in our Integration.

We are on Version 10.9.1.114-0.

The Integration is going to do the following:

  1. The Source System will publish the Switching Job details (XML) to SAP AEM from a WebLogic JMS Server.
  2. We have another SAP Integration Layer which then subscribes to the Solace Queue (or Topic), performs Message Transformation, and then forwards the result to the External System.

The Challenge:

We want messages belonging to one Job (let’s say Job A) to be delivered to the External System in sequence. Say Job A has Operations 1, 2 and 3. If for some reason Operation 2 fails to get delivered to the External System, Operation 3 should not go. However, Job B and its Operations can continue to go as usual.

So, we need FIFO within an individual Job’s Operations, not across all Jobs’ Operations.

Analysis done so far:

  1. We can create one Exclusive Queue and one DMQ for the process, and publish everything there. That way, all the Jobs and their Operations will follow sequence by default. In all the success scenarios this should be okay (I guess). But in case of failure, the message will move to the DMQ, and the subsequent message for that Job will not know about it and might still get processed.
  2. I went through Partitioned Queues, but those don’t seem to fit the purpose, as the messages (Job Operations in our case) of one Job can go to any Partition.
  3. Non-Durable (Temporary) Queues: From what I have understood so far, we can ask the Publisher (Client) to create Queues dynamically, and all the messages (Operations) of one Job would be sent to ONE Queue. As soon as those messages are consumed by our SAP Integration and forwarded to the External System, these Queues get deleted (I am not 100% sure this is how it will behave). In case of failures, I would expect the Queue to remain intact and messages belonging to that Queue to pile up, without impacting the other Jobs.
  4. We can introduce a persistence layer to store the failed messages. Every message consumed from the queue will first be checked (using some checksum) to see if any message for that Job has failed in the past; if yes, push the message to the DMQ, otherwise continue as normal. I will then have to keep the DMQ and this persistence layer in sync, or just use this persistence layer and not the DMQ.
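A minimal sketch of the check in option 4, in plain Python. The ‘checksum’ becomes a hash over the Job’s key fields; the persistence layer is stood in for by an in-memory set, and the field names (`job_id`, `region`) are made up for illustration:

```python
import hashlib

# Sketch of option 4: derive a stable key for the Job a message belongs to
# (the "checksum" mentioned above) and gate each consumed message on a
# "failed Jobs" store before forwarding it. The store is an in-memory set
# here; in practice it would be the persistence layer.

def job_key(message):
    """Stable identifier for the Job a message belongs to."""
    raw = "|".join(str(message[f]) for f in ("job_id", "region"))
    return hashlib.sha256(raw.encode()).hexdigest()

def gate(message, failed_keys):
    """True if the message may proceed, False if its Job already has a failed message."""
    return job_key(message) not in failed_keys
```

Messages of the same Job hash to the same key, so once one delivery fails and its key is stored, every later Operation of that Job is held back while other Jobs pass the gate untouched.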

The following things are Mandatory:

  1. Guaranteed Delivery of the messages to the External System
  2. The order of Operations belonging to the same Job must be maintained
  3. Order across different Jobs is not mandatory; those can be processed in parallel
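To make these requirements concrete, here is a small, illustrative sketch (plain Python, not an AEM API) of per-Job FIFO where a failure halts only the failing Job:

```python
from collections import defaultdict, deque

# Sketch: per-Job FIFO with independent Jobs. Each Job gets its own deque;
# a failure blocks only that Job's deque while the others keep draining.
# All names here are illustrative, not part of any Solace/AEM API.

class PerJobDispatcher:
    def __init__(self, send):
        self.send = send                    # callable that delivers to the external system
        self.pending = defaultdict(deque)   # job_id -> operations in arrival order
        self.blocked = set()                # jobs halted by a failed operation

    def publish(self, job_id, operation):
        self.pending[job_id].append(operation)

    def drain(self):
        delivered = []
        for job_id, ops in self.pending.items():
            if job_id in self.blocked:
                continue                    # Job halted; its operations pile up
            while ops:
                op = ops[0]
                try:
                    self.send(job_id, op)
                except Exception:
                    self.blocked.add(job_id)   # stop this Job, keep others going
                    break
                ops.popleft()
                delivered.append((job_id, op))
        return delivered
```

If delivery of Job A's Operation 2 fails, Operations 2 and 3 stay queued for Job A, while Job B's Operations are still delivered in order.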

Any help on this would be really appreciated.

-Anuj

Accepted Solutions (0)

Answers (1)


manikmdu

Well, the moment you move a message to the DMQ, the sequence is broken. The order is no longer guaranteed, as the original queue continues processing on its own setup while the DMQ messages get processed on their own. It is a tricky situation. The option I see is to group the messages into respective queues where EOIO has to be maintained, and then process the messages from there without moving them to the DMQ in case of temporary failures. But in the end, if you move a message to the DMQ, you have to compromise on the sequence, as far as I can see.
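A sketch of that retry-in-place idea, assuming temporary failures are retried from the same position instead of moving the message to a DMQ (illustrative Python; `max_attempts` and `backoff` are made-up tuning knobs):

```python
import time

# Sketch of retry-in-place: on a temporary failure, redeliver from the same
# position in the queue instead of moving the message to a DMQ, so the
# sequence survives. On a permanent failure, stop rather than skip ahead.

def deliver_in_order(messages, send, max_attempts=3, backoff=0.0):
    delivered = []
    for msg in messages:
        for attempt in range(1, max_attempts + 1):
            try:
                send(msg)
                delivered.append(msg)
                break
            except Exception:
                if attempt == max_attempts:
                    # Permanent failure: halt here, otherwise later messages
                    # would overtake this one and break the order.
                    return delivered, msg
                time.sleep(backoff)
    return delivered, None
```

A transient failure is absorbed by the retry loop; a persistent one halts the stream at the failing message, which is exactly the trade-off described above.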

anuj_dulta1

Thanks a lot for your response, Mani (sorry if I got your name wrong). Are you planning to have dedicated queues for each Work Order?

I know we have the option of Temporary Queues, but that doesn't guarantee Delivery.

I was thinking this:

  1. The Publisher will publish the messages to one queue (an Exclusive Queue) as normal
  2. The Subscriber (SCPI in our case) will first query the database to see if this new message belongs to an Order that has failed in the past. We will use a composite key of the key fields that identify the Order. If the Order is found in the DB, then this message will not be continued and will be inserted into the DB (or also pushed to the DMQ).
  3. I am also thinking of creating two tables: one will hold only the Order number and time of arrival, and the other will hold the complete payload. This way, searching Order messages using Table 1 (metadata) becomes easier.
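A sketch of that two-table layout, using in-memory SQLite as a stand-in for HANA Cloud; all table and column names are illustrative:

```python
import sqlite3

# Sketch of the two-table layout: Table 1 holds only the composite Order key
# and arrival time (cheap to query on every consumed message); Table 2 holds
# the full payload. In-memory SQLite stands in for HANA Cloud here.

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE failed_order_meta (
        order_key   TEXT PRIMARY KEY,   -- composite key of the Order's key fields
        arrived_at  TEXT NOT NULL
    );
    CREATE TABLE failed_order_payload (
        order_key   TEXT NOT NULL REFERENCES failed_order_meta(order_key),
        seq         INTEGER NOT NULL,
        payload     TEXT NOT NULL
    );
""")

def order_has_failed(order_key):
    """Cheap check against the metadata table before forwarding a message."""
    row = con.execute(
        "SELECT 1 FROM failed_order_meta WHERE order_key = ?", (order_key,)
    ).fetchone()
    return row is not None

def record_failure(order_key, seq, payload, arrived_at):
    """Flag the Order in the metadata table and store the full message."""
    con.execute(
        "INSERT OR IGNORE INTO failed_order_meta VALUES (?, ?)",
        (order_key, arrived_at),
    )
    con.execute(
        "INSERT INTO failed_order_payload VALUES (?, ?, ?)",
        (order_key, seq, payload),
    )
```

The subscriber only ever hits the small metadata table on the hot path; the payload table is touched just when parking or replaying a failed Order.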

HANA Cloud Database and DMQ in sync

Since we are dealing with two data sources for failures, it’s important to make sure that both are in sync.

One way of doing it is to forward a DMQed message to a HANA DB table as soon as it enters the DMQ. This way, DMQed messages will be available in the HANA DB at all times.

The HANA DB table will be queried before continuing the process of sending the message to a Receiver.
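Putting the pieces together, a sketch of that check-then-send flow, with the failure flag written before the message is parked so a later Operation can never slip past the check (plain Python; the set and list stand in for the HANA table and the DMQ):

```python
# Sketch of the check-then-send flow described above. The failed-order store
# (a set here, the HANA table in practice) is updated BEFORE the message is
# parked in the DMQ (a list here), so the two sources cannot drift apart in
# a way that lets a later Operation overtake a failed one.

def process(message, send, failed_orders, dmq):
    key = message["order_key"]
    if key in failed_orders:
        dmq.append(message)        # an earlier Operation failed: park this one too
        return "parked"
    try:
        send(message)
        return "sent"
    except Exception:
        failed_orders.add(key)     # flag the Order first, so later checks see it
        dmq.append(message)        # then move the payload to the DMQ
        return "failed"
```

With this ordering, even if the DMQ write lags, the DB check already blocks the Order; the reverse ordering would leave a window where Operation 3 could be sent while Operation 2 sits in the DMQ.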