SAP Data Intelligence: Pipeline Operator and Restart Policy


Dear community, I am developing a "Restart" graph for my main pipeline using a combination of the "Pipeline" operator from SAP Data Hub (description: https://bit.ly/3s39RYo) and the "Restart Policy" of a group.

Basically, after starting, the "Pipeline" operator executes another pipeline (my main pipeline). If the main pipeline fails, the error message is sent from the "error" port of the "Pipeline" operator to the "restart-trigger" operator. The "restart-trigger" operator then throws an error, which propagates to the whole group. With the group's "Restart Policy" set to "restart", the whole group restarts. The aim of this implementation is that our main pipeline restarts automatically after it encounters an error, so that it can run permanently without manual restarts, and that we are able to capture the errors in a text file.
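
For illustration, the "restart-trigger" can be as simple as a JavaScript operator that fails as soon as any message arrives. The following is only a minimal sketch, assuming an input port named "input" and that the JavaScript operator API functions $.setPortCallback and $.fail are available in your Data Intelligence version:

    // restart-trigger: fail the operator (and with it the group) as soon as
    // an error message arrives from the "error" port of the "Pipeline" operator.
    $.setPortCallback("input", onInput);

    function onInput(ctx, msg) {
        // The payload may be a plain string or a message object depending on
        // the connected port type, so stringify defensively.
        var text = (typeof msg === "string") ? msg : JSON.stringify(msg);
        // Failing the operator raises an error in the surrounding group,
        // which the group's restart policy then reacts to.
        $.fail("Main pipeline reported an error: " + text);
    }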

My "Restart"-graph:

The "Pipeline"-Operator after starting will execute my main pipeline. If the main pipeline has an error, it will send the error message to the "error"-port:


If an error message arrives at the "restart-trigger" operator, it throws an error and therefore triggers an error for the whole group:



Group policy - restart

With the "Group Policy" as "restart", the whole group will restart and the main pipeline will be executed again:

The problem is that after a few hours, this implementation causes duplicate running pipelines from the same graph (in my example, the pipeline "POC_FRAUSCHER" has two running instances).

After some investigation, I suspect that the reason for this behaviour is the following error:

"Error: "Failed to stop SAP vFlow Pipeline task execution: no token or invalid token providedn"

Has anyone encountered this error before? Could you please give me some tips on how to fix this issue?

Thank you very much

Tuan Anh Nguyen

Accepted Solutions (0)

Answers (1)


andy_yin
Advisor

Hi Tuan Anh Nguyen,

Operators in the Data Workflows category are, as the name indicates, used for workflow orchestration purposes.

Take an ETL or ELT scenario, for example: we usually need to create three separate data pipelines, one for data extraction, one for data loading, and one for data transformation. Finally, we need a way to organize them by specifying the order of their execution. If we specify the order as extraction -> load -> transform, then load will not start until extraction has finished, and transform will not start until load has finished. This is what workflow means, and workflow operators are used exactly for this purpose.

For your case, I am not sure whether using the workflow operators in that way is appropriate. What I suggest is that you try to use the group restart policy in your wrapped graph instead of doing this in a workflow graph.


Hi Andy,

Thank you very much for your information and explanation. The reason we implemented it like this is that we want to capture the errors that cause the main pipeline to die, because the message from the error port contains the reasons for the dead graph. Because our main pipeline is big and very complex, some errors or bugs happen very rarely, and we want to capture them. Correct me if I am wrong, but if I use the "Restart" policy on a wrapped graph for my main pipeline, there is no way I can capture the causes (errors) of the restart. Is that correct? The graph will just keep restarting from the same error, and we will not know the reason. If there is a better solution for this problem, I would love to read about it.

Thank you very much.

andy_yin
Advisor

Tuan Anh Nguyen,

You can capture the errors by receiving error notifications. Please refer to the link SAP Data Intelligence: Get notifications about your pipelines for how to do this.

Hi Andy, thank you for your reply. I have read the article. Basically, it is the same approach as what I am trying to implement; I used a "Write File" operator instead of "Notifications". I think both approaches are fine, and that part is not really my concern.
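
As an illustration of the "Write File" variant, the operator in front of the Write File operator could prepend a timestamp to every error message, so the file shows when each restart happened. This is only a sketch, assuming a JavaScript operator with ports named "input" and "output" sitting between the "error" port and the Write File operator:

    // Prepend a timestamp to each incoming error message before the
    // Write File operator (in append mode) writes it as one line.
    $.setPortCallback("input", onInput);

    function onInput(ctx, msg) {
        var text = (typeof msg === "string") ? msg : JSON.stringify(msg);
        var line = new Date().toISOString() + " " + text + "\n";
        // "output" is assumed to be connected to the Write File operator.
        $.output(line);
    }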

What I am trying to do is: when I run a pipeline -> if an error occurs -> the error is documented (with a notification or Write File) -> then the pipeline automatically restarts -> the next day I just need to read the error document to understand the cause of the restart. There are currently two approaches for the restart:

1) Use "Restart Policy" for the whole graph and it will restart when an error occurs -> problem: you don´t know what was the cause and the graph will just keep restart from the same error.

2) Use the combination of the Pipeline operator and the Restart Policy from my question above -> it captures the error in a file, and from the error propagated to the group, the graph restarts -> problem: after a few hours, this implementation causes duplicate running pipelines from the same graph.

So is there no way that I can capture the error in a file and automatically restart the whole graph without using the "Restart Policy"?

andy_yin
Advisor

Hi Tuan Anh Nguyen,

Please take a look at Subgraph (Call a Graph From a JavaScript Operator) to see how to call a graph with JS code. Then you can try the steps below:

  1. Remove the group restart policy from your existing workflow graph.
  2. Create a data pipeline graph following the above link, and call your workflow graph within it. Also make sure to apply the group restart policy to this data pipeline graph.
  3. Run the data pipeline graph you created in step 2.
  4. Check that the error messages from your wrapped graph can be captured by your workflow graph, and that all the graphs can be restarted in case of an error.

Hi Andy,

Thank you very much for your response. I will try that.

Kind regards

Tuan Anh Nguyen