Technology Blogs by Members
jirifridrich
Participant

There are some errors that can occur while saving a file to an SFTP server after the SFTP connection has been established. The built-in retry function is not applied to these, as specified in the SAP Help documentation:
https://help.sap.com/docs/cloud-integration/sap-cloud-integration/configure-sftp-receiver-adapter
'Maximum Reconnect Attempts' setting is only relevant for establishing the initial connection to the server. If the server connection is interrupted during message processing, the connection will not be recovered. A retry attempt of the interrupted message processing can only be achieved by explicitly modeling this functionality via integration flow. 

For instance, the error I needed to handle was:
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot store file

In such a case, the integration processing fails. We can handle the error in an exception subprocess, but if we want to apply a retry mechanism, we can't really do that within the exception subprocess.

I suggest the following solution: we decouple the main process from the process of saving the file via SFTP.
Let's say we receive a file via HTTPS and we want to save it to SFTP.
The simplified main iflow will look as follows:
(Screenshot: simplified main iflow)

  1. We set up a Looping Process Call, which serves as our retry mechanism.
  2. We base the loop on a header parameter 'file_saved_ok'. As long as the value is 'false', the loop continues (see the condition sketch after this list).
     (Screenshot: Looping Process Call configuration)
    After 5 attempts, an exception is raised and handled in the Exception subprocess, which can, for example, send an alert email.
  3. The local integration process contains only a ProcessDirect call to our own iflow, which actually saves the file to the SFTP server.
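
For reference, here is a minimal sketch of how the Looping Process Call could be configured, assuming the header name 'file_saved_ok' and a limit of 5 attempts (field labels may differ slightly between tooling versions):

Expression Type: Non-XML
Condition Expression: ${header.file_saved_ok} = 'false'
Max. Number of Iterations: 5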

And this is our new integration flow, which manages saving the file to the SFTP server.

(Screenshot: iflow that saves the file to the SFTP server)

  1. The iflow is called via ProcessDirect, which we set up in our main calling iflow.
  2. We set the file name and the waiting time (see step 4).
  3. We attempt to save the file. If everything goes well, the process continues to a Content Modifier step, where we set the header parameter 'file_saved_ok' to 'true'. It is important to set it as a header and not a property, as a property lives only within one iflow.
    Processing then returns to the calling iflow, where the header parameter causes the loop to end and the processing finishes successfully.
  4. If an error occurs, the iflow jumps into the Exception subprocess, where the header parameter 'file_saved_ok' is set to 'false'. Then a Groovy script triggers a sleep for the waiting time specified in step 2 (see the sketch after this list). Processing returns to the calling iflow, where the header parameter causes the loop to continue.
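
As a rough illustration of point 4, the Groovy sleep step could look like the sketch below. The property name 'wait_time' is an assumption for this example; it stands for whatever waiting time you set in step 2.

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Waiting time (in seconds) set earlier in the iflow;
    // 'wait_time' is an assumed property name for this sketch.
    def waitSeconds = (message.getProperty("wait_time") ?: "30") as long

    // Pause so the SFTP server has a chance to recover before the
    // calling iflow's loop triggers the next attempt. Note that this
    // blocks a worker thread for the duration of the sleep.
    Thread.sleep(waitSeconds * 1000)

    return message
}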

This repeats until the file is saved successfully or the number of attempts is depleted. In the latter case, we receive an alert email stating that even after 5 attempts the file was not saved.

Even though simplified, the above example demonstrates the principles by which we handled an SFTP error and implemented a custom retry mechanism.

8 Comments
David_Davis
Participant

Thanks for the blog. I have two questions for clarification.
1. Why do we need to decouple the flow and use the ProcessDirect adapter here? How about using another local integration process within the main iFlow?
2. What is the purpose of the wait here? Why do we want to add wait time in case of errors?

jirifridrich
Participant

Hi David, that is a great question, as my thinking went the same way and I wanted to handle everything in a single iflow. Nevertheless, it was impossible to implement the retry mechanism that way. Either the iflow ends up in error and terminates, or we handle the error in the exception subprocess and retry it from there. But what if it fails again? We would be catching an error from one exception subprocess in another exception subprocess, which would be impossible or clumsy.

I like the decoupling approach, as you can, for example, have a single iflow for saving files in general and just supply it with header parameters from the various iflows that need to save a file. Another example is sending emails, where you configure the email in one place.

The wait is there just to give the SFTP server some time to recover in case it is down for some reason. I can imagine a scenario where, instead of setting a wait time, you set the URL of a backup SFTP server to be used for the second attempt.
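
As a rough sketch of that backup-server idea (not part of the design above, and assuming the SFTP receiver adapter's Address field is configured dynamically, e.g. as ${header.sftp_address}), the exception subprocess could switch the host for the next attempt. The host names below are hypothetical.

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Hypothetical primary and backup hosts, for illustration only.
    def primary = "primary.example.com:22"
    def backup  = "backup.example.com:22"

    // Host used by the last attempt (defaults to the primary host).
    def current = message.getHeaders().get("sftp_address") ?: primary

    // Point the next attempt at the other server.
    message.setHeader("sftp_address", current == primary ? backup : primary)

    return message
}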

Bryan_Pierce
Explorer

Thank you for sharing/posting this. I'm dealing with the same scenario for dozens of SFTP integrations now that we have moved to BTP. I'm going to think through your creative design here and see if it can be integrated into what we are looking to achieve. I do have one question regarding what happens after the 5th call fails. In the example here an alert email can be generated, which makes sense. But, in my experience, when you get a 'cannot store file' error and you don't end up with a successful transfer, the sender SFTP adapter, via post-processing, could delete the file anyway, which is how most of our SFTP senders are configured. In your real-world use of your proposed solution, do you save the payloads elsewhere in case of a future reprocessing recovery? Or do you maybe not delete the file from the sender system? I'm trying to understand that next level of "you know it broke but still need to get the job done" to transfer the file after the fact. Thanks.

Bryan_Pierce
Explorer

Follow-on question. In the above image, the Content Modifier step 'Attempt saving' looks to be directly connected via an SFTP receiver adapter to the Receiver SFTP. How is this possible?

jirifridrich
Participant

Hi Bryan, that's not a Content Modifier, that's a 'Send' component; they share the same icon.

Bryan_Pierce
Explorer

Ah! I had to go back and look at the icon. That explains it. Thanks!

Hira
Participant

Hi @jirifridrich ,

Nice idea for handling retries, but what about using a JMS queue instead of a custom design?

jirifridrich
Participant

Hi Hira,

A JMS queue is perfectly fine. In case of an error, we send the message to the JMS queue and retry from there. That should be sufficient in most cases. Only if we hit the limits of the JMS queue or want additional control over the handling do we apply a custom design.
The queue capacity is 300 MB as per the documentation.
https://help.sap.com/docs/cloud-integration/sap-cloud-integration/jms-resource-limits-and-optimizing...
