Update (25 August 2016): Added the Performance on ABSL video.
To develop fast, high-quality applications with the SAP Cloud Applications Studio, you need to learn how to use the toolset properly.
SAP Cloud for Customer is updated every three months. The added features are highlighted in the What's New section of the documentation at help.sap.com/studio_cloud. These features can often replace expensive workarounds you have built yourself.
Be aware of the execution logic: always think about how often events are called at runtime and whether you can reduce the number of iterations.
Keep in mind that modifications done inside any script can trigger other scripts. For example, assume you have After-Modify scripts on both the Root and the Item node of the Opportunity BO. If you change Root node fields inside Item->AfterModify, then Root->AfterModify is called as well. If you then change Item fields in Root->AfterModify, the Item's After-Modify is called again and you end up in a loop. Hence it is best to think about the implications of a modification on a node before coding it in a script.
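One way to avoid such loops is to write to another node only when a value actually changes. The following sketch of an Item->AfterModify uses hypothetical element names (TotalAmount, Quantity, ListPrice); it is an illustration, not the real Opportunity structure:

// Only touch the Root node if the value really changed, so that Root->AfterModify
// is not re-triggered unnecessarily.
var newTotal = this.Quantity.content * this.ListPrice.content;
if (this.toParent.IsSet() && this.toParent.TotalAmount.content != newTotal) {
    this.toParent.TotalAmount.content = newTotal;
}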
Also think about alternative channels that can update a business object and trigger the same scripts, for example web services, data migration, or A2A integration, not only the UI.
Sometimes it is advisable to perform expensive calculations within a dedicated action that can be called from the UI. For example, calculate the item summary on the header only when the Add or Delete button is clicked in an item list.
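As a minimal sketch, such an action (here called RecalculateItemSummary, with illustrative element names) aggregates the items only when it is explicitly triggered:

// Action RecalculateItemSummary on the root node (names are illustrative).
// The expensive aggregation only runs when the action is triggered, e.g. from an
// Add/Delete button event on the item list, not on every modification.
var total = 0;
foreach (var item in this.Item) {
    total = total + item.NetAmount.content;
}
this.ItemSummaryAmount.content = total;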
The ABSL language makes it very easy to navigate through objects. Behind the scenes, objects are retrieved and discarded. In a nutshell, every dot retrieves something: either a node or an association. Accessing a node is fast; retrieving an object via an association is much slower, and doing it over and over again easily adds up to several seconds.
For performance reasons, retrieves by association should be kept to a minimum. Buffer the results of retrieve-by-association operations in your code whenever possible.
Example:
In the code below, toParent, toBusinessPartner, EmployeeResponsible, and DefaultAddress represent associations. The code results in 13 retrieves on the server side.
if (this.toBusinessPartner.IsSet()) {
    if (this.toBusinessPartner.EmployeeResponsible.IsSet()) {
        if (this.toBusinessPartner.EmployeeResponsible.Address.DefaultAddress.IsSet()) {
            this.toParent.RespEmplAddrStreet = this.toBusinessPartner.EmployeeResponsible.Address.DefaultAddress.Street;
            this.toParent.RespEmplAddrPostCode = this.toBusinessPartner.EmployeeResponsible.Address.DefaultAddress.PostCode;
        }
    }
}
A better version saves the instances in local variables. The example below gets by with 7 association retrieves, which speeds it up roughly by a factor of two.
if (this.toBusinessPartner.IsSet()) {
    if (this.toBusinessPartner.EmployeeResponsible.IsSet()) {
        var emplResp = this.toBusinessPartner.EmployeeResponsible;
        if (emplResp.Address.DefaultAddress.IsSet()) {
            var emplRespAddress = emplResp.Address.DefaultAddress;
            var parent = this.toParent;
            parent.RespEmplAddrStreet = emplRespAddress.Street;
            parent.RespEmplAddrPostCode = emplRespAddress.PostCode;
        }
    }
}
Keep in mind that .IsSet() also leads to an association retrieve. Avoid redundant retrieve-by-association operations by storing the result of an operation in a variable or collection.
Using associations in trace statements: even if the trace is not active, the arguments of the trace statement are evaluated. This can also lead to retrieve-by-association operations that are easily avoided.
A pretty easy improvement is the removal of trace statements. If you have code like this in your project:
Trace.Info("Instance Count", this.toAnotherBO.Count());
The expression this.toAnotherBO.Count() is evaluated, and the association retrieved, even when the trace is not set to active. Remove such statements before going productive.
As a best practice, you should not trigger a save from the UI at all. While a user is in edit mode, they should be able to keep working on a document, and if they hit Cancel, everything they did since the last explicit save should be rolled back. This no longer works when a save is triggered as part of the application logic.
The Retrieve method reads an object from the current buffer. This is the fastest and best choice to get access to an object. A query bypasses the buffer and is therefore slower in most cases.
Because a query bypasses the buffer and is executed on database level, it is slow. Try to find ways to use Retrieve instead; even if you have to retrieve an intermediate object first, that is often faster than using a query.
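As a minimal sketch, assuming a custom BO Project whose ProjectID element is declared as an alternative key (all names are illustrative):

// Retrieve by alternative key is served from the buffer and is the fastest access path.
var project = Project.Retrieve(this.ProjectID);
if (project.IsSet()) {
    var projectName = project.ProjectName;
}
// A query for the same single instance would bypass the buffer and hit the database.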
If you have to use a query, use query.ExecuteDataOnly() if you are only interested in the result data and not in the object instances themselves.
If you are using queries, make sure the result set is restricted. If your code runs without a query parameter selection, or the selection is set but empty, the full database table is returned. Most of the time, this is not what you expect. Before you run query.Execute, check whether the selection parameter is actually set.
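A sketch of such a guard, using a hypothetical custom BO Bonus and its auto-generated QueryByElements (names are illustrative):

var bonusID = this.BonusID;
// Only run the query if the selection value is actually filled; otherwise the
// whole table would be read.
if (!bonusID.IsInitial()) {
    var query = Bonus.QueryByElements;
    var selParams = query.CreateSelectionParams();
    selParams.Add(query.BonusID.content, "I", "EQ", bonusID.content);
    // ExecuteDataOnly returns only the data, not full object instances.
    var result = query.ExecuteDataOnly(selParams);
}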
The default query QueryByElements does not support full index search. Its runtime in general depends linearly on the number of instances in the business object node (t = O(n), where n is the number of BO instances in the database).
Therefore it should only be used if the number of instances in the business object node is known to stay small.
In all other cases, an application-defined query (defined with the Query Wizard) should be used. An application-defined query supports full index search on all query elements (t = O(log n)). This advice holds for query calls in BO implementations, UIs, web services, and so on. Independently of the query used, the number of selected instances must be kept as small as possible, because the runtime also depends linearly on the number of selected instances (t = O(m), where m is the number of selected(!) BO instances). If possible, define a join query in the wizard instead of selecting a large amount of data and doing the filtering in your code.
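Calling an application-defined query looks the same as calling QueryByElements; the sketch below assumes a query QueryByStatus created with the Query Wizard on a hypothetical custom BO Ticket (names are illustrative):

var query = Ticket.QueryByStatus;
var selParams = query.CreateSelectionParams();
selParams.Add(query.StatusCode.content, "I", "EQ", "2");
// The defined query can use an index on StatusCode instead of scanning all instances.
var tickets = query.Execute(selParams);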
Where and Sort operations on collections are available and make it possible to reduce the number of nested loops.
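As a sketch with illustrative element names, filtering and sorting a node collection in memory avoids scanning it repeatedly in nested loops (the lambda-style syntax of Where and OrderByDescending is assumed here):

// Filter the item collection once ...
var openItems = this.Item.Where(n => n.IsOpen == true);
// ... and sort the result instead of searching it again and again.
var sortedItems = openItems.OrderByDescending(n => n.NetAmount.content);
foreach (var item in sortedItems) {
    // process only the relevant items
}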
Mass-enabling of actions and events is supported. In a mass-enabled script file, "this" is a collection of business object node instances instead of a single instance. A mass-enabled script is not invoked once per instance; instead, all instances are bundled into a single call. This makes sense on large nodes. On the root node, it only has a benefit for file upload and integration scenarios, when multiple instances are imported at once.
You can further optimize your logic by performing the retrieves and queries you need for all instances once, and then looping through the "this" collection.
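A minimal sketch of a mass-enabled After-Modify script with hypothetical elements (IsProcessed, ProcessingNote); the shared value stands in for any retrieve or query that is identical for all instances:

// Shared data is read once, outside the loop over all bundled instances.
var defaultText = "Processed automatically";
foreach (var inst in this) {
    if (inst.IsProcessed == false) {
        inst.IsProcessed = true;
        inst.ProcessingNote = defaultText;   // hypothetical text element
    }
}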
There is a dedicated document on this topic by Pradeep Kumar N: Performance best practice with Mass enabled event.
Nested loops (foreach, while) on collections with a large number of members should be avoided, because they lead to a runtime t = O (n * m * ...).
You can also improve performance by avoiding retrieve calls in a loop. Every association access via a dot inside a loop performs a retrieve on each iteration. If you need to read data that does not change per iteration, read it outside the loop and store it in a variable; this way you avoid redundant calls.
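A sketch with hypothetical names (toConfiguration, DefaultText, Note):

// Bad: the association would be retrieved again on every iteration:
//   foreach (var item in this.Item) { item.Note = this.toConfiguration.DefaultText; }
// Better: retrieve once, outside the loop, and reuse the buffered instance.
var config = this.toConfiguration;
if (config.IsSet()) {
    var defaultText = config.DefaultText;
    foreach (var item in this.Item) {
        item.Note = defaultText;
    }
}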
Generally, a reuse library is created to perform a specific function without having to write the code for it every time. But be careful when calling a reuse library in a loop. If, for example, the reuse library runs a query to read data and the ID for the query's selection parameter is passed to the library from inside the loop, performance will suffer. This can be avoided by passing a collection of IDs to the reuse library in one go, so that the query result contains all the data you need and you do not have to call the library again.
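A sketch with a hypothetical reuse library PricingLibrary that offers both a single-ID and a collection-based function:

// Bad: one library call, and therefore one query, per item:
//   foreach (var item in this.Item) { item.Price = PricingLibrary.GetPriceByID(item.ProductID); }
// Better: hand over the whole collection once and let the library run a single query.
var prices = PricingLibrary.GetPricesForItems(this.Item);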
These numbers were collected on a small test solution and may be higher for bigger objects. They are by no means official numbers and are not meant to be a KPI or performance indicator.
Keep in mind that the Retrieve runtime stays stable, while the query runtime grows logarithmically with the table size, which still makes it much more expensive on larger tables. Also keep in mind that these numbers only apply to defined queries and not to the auto-generated QueryByElements query, which is much slower.
It is possible to influence the UI component loading sequence by enabling Lazy Loading.
Lazy Loading can be activated when adding embedded components to standard screens. It results in the embedded component being initialized when it is displayed for the first time instead of when the host object is loaded. This is often a good idea, but it can lead to unintuitive behavior, for example when the logic in the embedded component writes data back to the host BO: you would then see different data on the host BO before and after navigating to the embedded component for the first time.
Lazy Loading can also be activated at the custom thing inspector level by enabling the data scope handling attribute. This leads to lazy loading of all thing inspector facets.
This feature allows packing multiple UI resources (JavaScript, CSS, etc.) into one package, which effectively reduces the number of requests required on the client side. It can be enabled by setting the floorplan property "Enable Backend Operations Clubbing" to "true". This mostly has an effect on very slow connections with high latency (mobile 2G/3G networks).