I am currently working on the production deployment of custom iRPA bots in Supply Chain Management. The bots are built in Cloud Studio and work wonderfully on the "Happy Path", that is, with correct test data. The Happy Path certainly covers most production cases, but it is not always there: in at least a few cases, errors will come up and disrupt the bots' flow. Bots need to be told what to do when an error occurs, and recording and handling these errors is essential to making them robust and reliable.
Capturing errors correctly, logging them, and reporting them were key concerns for my client, and below is how I handled them.
iRPA provides quite a few ways to capture errors: Try-Catch blocks, condition statements, screen switches, and other controls, together with the "Get Element" activity, do this beautifully. They capture error messages from status bars, pop-ups, and screens very well.
Once the errors are captured, they need to be logged and reported for further action. Below are a few methods I used.
Cloud Factory Monitoring Job Logs
To report an error in Cloud Factory Monitoring, use the "Log Message" activity and set the type to "Warning" or "Error". Please note that if the type is set to "Info" (the default), the message will not be displayed on the Monitoring page.
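The visibility rule above can be sketched in plain JavaScript. This is an illustration outside the iRPA environment; the function name is hypothetical and is not part of the irpa_core SDK:

```javascript
// The "Log Message" activity's default type; only "Warning" and "Error"
// surface on the Cloud Factory Monitoring page.
const DEFAULT_LOG_TYPE = 'Info';

// Hypothetical helper: would a message of this type appear in Monitoring?
function showsInMonitoring(type) {
  return type === 'Warning' || type === 'Error';
}

console.log(showsInMonitoring('Error'));          // true
console.log(showsInMonitoring(DEFAULT_LOG_TYPE)); // false
```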
Consolidated Log File
Another method is writing the logs to a text file for later retrieval and reporting.
In this method, I initially tried writing to an Excel file, but then discovered that a log folder is available for each project. This log folder can be accessed via the parameter "irpa_core.enums.path.log".
At the start of the project, a unique file should be created under this folder. The file is accessible across the project. Since it is created fresh every time, there is no issue of leftover entries from previous runs; every bot run starts with a clean file. This approach is also agnostic to the agent machine's operating system, as there is no need to create a local folder, create files in it, and write to a local file.
[Screenshot: Log Message – Create File]
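Building a unique per-run file name can be sketched as below. In the actual bot the folder would come from irpa_core.enums.path.log; here a plain string stands in for it, and the naming convention and helper name are my own assumptions:

```javascript
// Hypothetical helper: build a unique log file path for this bot run.
// logFolder stands in for the value of irpa_core.enums.path.log.
function makeRunLogPath(logFolder, botName, runDate) {
  // e.g. "SupplyChainBot_2024-01-15T09-30-00.log"
  const stamp = runDate.toISOString().replace(/[:.]/g, '-').slice(0, 19);
  return `${logFolder}/${botName}_${stamp}.log`;
}

// A fresh path per run means no leftover entries from earlier runs.
console.log(makeRunLogPath('/logs', 'SupplyChainBot', new Date()));
```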
Every time a log message needs to be recorded, the "Append File" activity can be used to write a line to the file. The file is referenced in the "filePath" parameter of "Append File", and the text to write goes in its "content" parameter. Any variable that is in scope (per variable visibility) can be used to build the content, just as with the "Log Message" activity.
[Screenshot: Log Message – Append File]
At the end of processing, the log file can be read with the "Read File" activity. Its "path" parameter references the log file under the irpa_core.enums.path.log folder. The activity reads the file and returns its content in an output variable. This content can be converted into an email using the Outlook SDK activities (as was my requirement), written to Excel in one shot, or downloaded as a file using a file dialog. There are many options once the log file is written!
[Screenshot: Log Message – Read File]
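Post-processing the string returned by "Read File" can be sketched like this. The helper name is hypothetical, and I assume the CRLF enum resolves to the usual "\r\n" sequence:

```javascript
// Assumed value of irpa_core.enums.file.carriageReturn.CRLF.
const CRLF = '\r\n';

// Hypothetical helper: turn the raw log file content into an email body.
function buildEmailBody(rawLogContent) {
  // One log entry per CRLF-terminated line; drop trailing blanks.
  const lines = rawLogContent.split(CRLF).filter(line => line.trim() !== '');
  const header = `Bot run log (${lines.length} entries):`;
  return [header, ...lines].join('\n');
}

console.log(buildEmailBody('Order 4711: missing plant\r\nOrder 4712: ok\r\n'));
```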
The best part of this approach is that there is no need to write lengthy steps mid-process to append a log message to a file, or to keep track of references to key values and variables. All the reporting can be done at the end of the process.
Another variable that can be used to format the log messages is irpa_core.enums.file.carriageReturn.CRLF. Concatenating it at the end of the message in "Append File" -> "content" creates a line break after every log line, producing a properly formatted file.
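The concatenation pattern can be sketched as below. Again the function name and message layout are my own illustrations, not SDK calls:

```javascript
// Assumed value of irpa_core.enums.file.carriageReturn.CRLF.
const CRLF = '\r\n';

// Hypothetical helper: build one line for "Append File" -> "content",
// combining in-scope variables and terminating with CRLF so each entry
// lands on its own line in the log file.
function formatLogLine(orderId, step, message) {
  return `[${step}] Order ${orderId}: ${message}` + CRLF;
}

console.log(JSON.stringify(formatLogLine('4711', 'Validate', 'missing plant')));
```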
As a best practice, I routed all technical errors, such as timeouts, to Cloud Factory Job Monitoring for the IT team to resolve, and all data-related errors to the log file, which is sent as an email to the business team to resolve.
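The routing rule above can be sketched as a small dispatcher. The error categories and names are hypothetical, chosen only to illustrate the split between the two channels:

```javascript
// Hypothetical error categories considered "technical" (for IT, via
// Cloud Factory Monitoring); everything else is a data error (for the
// business team, via the run's log file).
const TECHNICAL = ['timeout', 'screen-not-found', 'connection-lost'];

// Hypothetical helper: decide which channel an error goes to.
function routeError(category) {
  return TECHNICAL.includes(category) ? 'monitoring' : 'logfile';
}

console.log(routeError('timeout'));       // 'monitoring'
console.log(routeError('missing-plant')); // 'logfile'
```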
In summary, the two ways of reporting errors are:
1. Through Cloud Monitoring logs, by using the "Log Message" activity with the message type set to "Error" or "Warning".
2. Through the irpa_core.enums.path.log folder, in which a log file can be created and accessed throughout the automation. The file is refreshed on every run, all logs from the process can be written to it, and at the end of the process the file can be read and acted on as required.
I hope that this blog post has been useful. I welcome any feedback you may have, even if it is to correct where I may have erred as I am sure there are experts that have more experience in this area.