After reading the earlier articles on improving the performance of transactions with large XMLs, I realized that there is always more than one way of optimizing a transaction's performance. It all boils down to the actual BLS logic that is implemented.
Yes, XSLT is the most powerful and efficient way of transforming XML. But what if there is a situation where XSLT cannot be used? Here are some of the key points that may improve the performance of a transaction, which I figured out during one of my performance tests.
Logging: Remove unnecessary logging of XML. For example, suppose you have a Repeater and, for every iteration, you log the generated XML along with the BLS logic. This creates unwanted Java objects. When the logged XML is large, these Java objects become huge as well. They not only consume a lot of memory but also affect performance to a great extent.
However, if you really need logging, write the final output to an XML file in the WEB folder. You can use the Write File action to do this. The write action consumes less time and performs better than the Tracer.
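To illustrate the memory side of this outside MII, here is a plain-Java analogy (not how the Tracer action is implemented internally, and the file name is arbitrary): serializing a large DOM into a String just to log it holds the whole document a second time on the heap, while streaming the final output to a file does not.

```java
import java.io.FileWriter;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class TraceVsWriteFile {
    public static void main(String[] args) throws Exception {
        // Stand-in for the large XML built by the BLS logic.
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element root = doc.createElement("Rowsets");
        doc.appendChild(root);
        for (int i = 0; i < 100000; i++) {
            Element row = doc.createElement("Row");
            row.setTextContent("value " + i);
            root.appendChild(row);
        }
        Transformer t = TransformerFactory.newInstance().newTransformer();

        // Tracer-style: the whole document becomes one more large String on the heap.
        StringWriter trace = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(trace));
        System.out.println("chars held just for logging: " + trace.toString().length());

        // Write-File-style: the final output is streamed to disk once instead.
        try (FileWriter out = new FileWriter("final_output.xml")) {
            t.transform(new DOMSource(doc), new StreamResult(out));
        }
    }
}
```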
Looping: Repeaters are probably the better choice when looping through a large XML. For/While loops are fine for small amounts of data. During my tests, I found that loops performed better than the Repeater for row counts below 5,000; beyond that the Repeater pulled ahead, and the difference was very encouraging. For 100,000 records, the loop took around 15 minutes to iterate and build an XML, while the Repeater took a little over a minute to do the same.
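A rough analogy for why the gap widens with size (this is plain Java against a DOM, not MII's internal implementation): a counter-based loop typically addresses each row with an indexed XPath expression, which re-scans the row list on every iteration, whereas a Repeater-style pass fetches the node list once and walks it.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class LoopVsRepeater {
    public static void main(String[] args) throws Exception {
        // Build a sample <Rowsets><Rowset><Row>... document with 5000 rows.
        StringBuilder xml = new StringBuilder("<Rowsets><Rowset>");
        for (int i = 0; i < 5000; i++) xml.append("<Row><Value>").append(i).append("</Value></Row>");
        xml.append("</Rowset></Rowsets>");
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.toString().getBytes(StandardCharsets.UTF_8)));
        XPath xp = XPathFactory.newInstance().newXPath();

        // Counter-style loop: one indexed XPath evaluation per row, each of which
        // scans the row list again -- roughly O(n^2) overall.
        long t0 = System.currentTimeMillis();
        int rows = ((Number) xp.evaluate("count(/Rowsets/Rowset/Row)", doc, XPathConstants.NUMBER)).intValue();
        for (int i = 1; i <= rows; i++) {
            xp.evaluate("/Rowsets/Rowset/Row[" + i + "]/Value", doc);
        }
        System.out.println("indexed loop: " + (System.currentTimeMillis() - t0) + " ms");

        // Repeater-style pass: select the node list once and walk it -- O(n).
        long t1 = System.currentTimeMillis();
        NodeList list = (NodeList) xp.evaluate("/Rowsets/Rowset/Row", doc, XPathConstants.NODESET);
        for (int i = 0; i < list.getLength(); i++) {
            list.item(i).getTextContent();
        }
        System.out.println("single node-list walk: " + (System.currentTimeMillis() - t1) + " ms");
    }
}
```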
XML Actions: MII has standard XML Output action blocks which give the result in the <Rowsets><Rowset><Row> format. If you are building a simple XML with no complex nesting, the Document and Row action blocks can be used; they perform better than manually crafting the XML with the Assignment action block.
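For reference, that output format looks roughly like this (a simplified sketch with made-up column names; the real output carries additional attributes on Rowsets and Column):

    <Rowsets>
      <Rowset>
        <Columns>
          <Column Name="Material"/>
          <Column Name="Quantity"/>
        </Columns>
        <Row>
          <Material>MAT-001</Material>
          <Quantity>10</Quantity>
        </Row>
      </Rowset>
    </Rowsets>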
Also, if you have a SQL Query in your transaction, ensure that you fetch only those columns you actually need for your BLS logic. Doing a "select *" returns all the columns, and more columns means more nodes in the XML to parse and hence more time.
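For example (the table and column names here are made up), prefer something like "SELECT MaterialNumber, Quantity FROM ProductionOrders" over "SELECT * FROM ProductionOrders" when the BLS logic only reads those two columns; every extra column becomes an extra node in every <Row> of the result.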
Also, as Dipankar Saha mentioned in Optimizing BLS Performance for XML Handling in SAP MII, there is a monitoring mechanism built into SAP MII with which you can see the actual execution time of each action block in a transaction. To summarize, these are the two main URLs used:
- http://<server>:<port>/XMII/Runner?Transaction=<Project>/<Path>/<Name>&LogStatisticsToDB=true
- http://<server>:<port>/XMII/Illuminator?service=BLSManager&Mode=Stats&ID=<TRXID>, where TRXID (the transaction ID) is obtained from the Transaction Manager screen for the transaction under test.
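For example, with a hypothetical server and transaction path, the first call would look like http://myserver:50000/XMII/Runner?Transaction=MyProject/Perf/BuildXML&LogStatisticsToDB=true; the ID for the second call is then read off the Transaction Manager screen as described above.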
However, there is one point you have to consider while doing the above monitoring.
Say, for example, you have two transactions A and B.
A has only one action block, and that block calls B. What you will see is that all the action counts in B are doubled. This does not mean that B is executed twice.
The application simply thinks that B is running in debug mode and logs more than it normally would, and that extra logging is what inflates the count. This only happens when LogStatisticsToDB=true is set and the transaction contains a transaction call.
Reference Documents:
1. Optimizing BLS Performance for XML Handling in SAP MII
2. Optimizing BLS performance of large XML process with XSLT
These were my findings so far; there could well be more ways to improve. I will update this blog as and when I find something new.