on 2023 Jan 12 11:16 AM
Hello Folks,
How should we handle technical and business exceptions in SAP Data Intelligence?
I want to provide custom error handling for exceptions in the graphs.
Is there any way to perform the activities below:
1. Retry a message from the operator at which it stopped/failed
2. Collect detailed error traces, whether technical or business (data) errors, and notify users via email
3. Store the payload of failed messages only in a blob store or semantic data lake
4. Reprocess failed messages from the blob store or semantic data lake in FIFO order
Your valuable inputs are much appreciated.
Thanks,
Rajesh PS.
This needs a custom solution - you need to design the pipeline to handle the scenarios above.
Retry the message from the operator at which it stopped/failed => log the messages for which you did not get a response in a table or flat file (FF); before the next run, read that table/FF and resume processing from the point where it was left unprocessed.
Store the payload of failed messages in a blob or semantic data lake => keep an error table/FF where you store the failed requests so they can be re-processed in the next run.
Reprocess failed messages from the blob or semantic data lake in FIFO order => read the error table/FF where you logged the failed messages and try executing them again.
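The log-and-resume pattern above can be sketched in a few lines. This is a minimal illustration using SQLite as a stand-in for whatever error table or flat file the pipeline actually uses; the table name, columns, and function names are assumptions for this sketch, not SAP DI artifacts. In a real graph this logic would sit inside a custom Python operator.

```python
import json
import sqlite3
import time

def init_error_store(conn):
    """Create the (assumed) error table; insertion order later gives FIFO."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS error_log (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,  -- FIFO ordering key
            operator    TEXT,    -- operator at which the message failed
            error       TEXT,    -- technical or business error trace
            payload     TEXT,    -- the failed message payload (JSON)
            logged_at   REAL,
            reprocessed INTEGER DEFAULT 0
        )""")
    conn.commit()

def log_failure(conn, operator, error, payload):
    """Persist a failed message so the next run can resume from it."""
    conn.execute(
        "INSERT INTO error_log (operator, error, payload, logged_at) "
        "VALUES (?, ?, ?, ?)",
        (operator, str(error), json.dumps(payload), time.time()))
    conn.commit()
```

A notification step could then query this table for rows where `reprocessed = 0` and send the collected error traces to users via an email operator.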
Hello shaktik
-> [you need to log the messages processed for which you did not get response in some table/storage FF and before running next run you need to read that table/FF and process from the point it was unprocessed]
Is logging to a table/storage required at each operator whenever there is a technical or business error?
What if the SAP DI server itself is down? Is there an event/exception operator that captures that?
In the next run, isn't it required to read that table/FF and process from the point where it was left unprocessed? Could you share more details here?
Also, in case of a partial failure while updating the database, are there any options for data consistency and completeness?
-> [you need to have a storage - error table/FF where you can store error requests so that it can be re-processed in next run]
Did you mean storing in Azure Blob / Data Lake for reprocessing? But whenever the graph is dead, does SDI have visibility of the response in order to reprocess?
-> [read the error table/FF where you log failed messages and try executing them]
Should the processing be batched? And how can the failed messages be processed sequentially?
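On batching and sequential (FIFO) reprocessing: one common approach is to order the error table by an auto-increment key and retry a limited batch per run. The sketch below is an assumption-laden illustration (SQLite stand-in, hypothetical `error_log` schema and `reprocess_batch` helper), not SAP DI functionality; `ORDER BY id` gives first-in-first-out, `LIMIT` gives the batch size.

```python
import json
import sqlite3

# Assumed minimal schema for the error table used in this sketch.
ERROR_TABLE_DDL = """
CREATE TABLE IF NOT EXISTS error_log (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,  -- insertion order = FIFO
    payload     TEXT,                               -- failed message (JSON)
    reprocessed INTEGER DEFAULT 0
)"""

def reprocess_batch(conn, handler, batch_size=10):
    """Retry up to batch_size failed messages in insertion (FIFO) order."""
    rows = conn.execute(
        "SELECT id, payload FROM error_log "
        "WHERE reprocessed = 0 ORDER BY id LIMIT ?", (batch_size,)).fetchall()
    done = 0
    for row_id, payload in rows:
        try:
            handler(json.loads(payload))  # retry the original processing step
            conn.execute(
                "UPDATE error_log SET reprocessed = 1 WHERE id = ?", (row_id,))
            done += 1
        except Exception:
            pass  # leave the row for the next scheduled run
    conn.commit()
    return done
```

Because rows are only marked `reprocessed` after the handler succeeds, a message that fails again simply stays in the queue for the next run, which keeps the retry loop sequential and restart-safe.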