This blog is about measuring performance. Measuring is usually done with the intent to improve performance, but specific tips on how to improve it are not the main focus of this blog. However, there is a great blog with best practices for Design Studio performance. Check this out:
Design Studio Performance Best-Practices
After you have applied your performance optimizations, you should of course measure whether you have really improved the performance of your Design Studio application. Or even better: before you tune the performance, you should first measure where the runtime of your application is spent. In most cases, measuring performance can simply be done by looking at the second hand of your watch. But with Design Studio there’s a way that will give you more insight.
To measure the runtime of Design Studio applications the URL parameter “&PROFILING=X” can be added to the application’s startup URL.
Start your application via the “Execute Locally” action, then go to the started browser’s URL field and append “&PROFILING=X”:
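For example, if the locally started URL looks like this (host, port and path are just placeholders here; the exact URL depends on your installation):

http://localhost:8080/designstudio/app?APPLICATION=MY_APP

then the profiled variant is:

http://localhost:8080/designstudio/app?APPLICATION=MY_APP&PROFILING=X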
This will re-start the application and you will see the “Statistics” window on top of your application's UI:
When you are running your apps on the BI Platform (BIP) or on NetWeaver, you can start the profiling via the following three steps:
First select the “Send to Mobile Device” menu item:
Then click on the “Copy URL to clipboard” button:
Now the application URL is in the clipboard. Paste this URL into your browser’s URL field, but before hitting the return key, append “&PROFILING=X”.
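Again a made-up example: on the BIP the copied URL is typically an OpenDocument link, so the result looks roughly like this (server and document ID are placeholders):

http://&lt;bip-server&gt;:8080/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&iDocID=AbCd1234&PROFILING=X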
First let’s look at the upper part of the window.
The most important information is on the first tab “Java Statistics”.
The Design Studio runtime is implemented in Java, running either on the BIP or on NetWeaver, so the most performance-critical algorithms run there. The “Java Statistics Raw” tab contains the same information as “Java Statistics”, but in a more “technically readable” form (JSON). This tab is typically used when sending information to SAP support.
The third tab contains the JavaScript “Rendering Statistics”. After the Java runtime of Design Studio has performed its heavy-duty work, it sends the data to be rendered to the browser, where JavaScript code creates the user interface. In my experience the JavaScript code has almost never been a performance bottleneck, therefore I will focus on the “Java Statistics” only (at least in this blog; I’ll post some more on this later).
The lower part consists of four buttons:
The first button downloads the “Java Statistics” to a CSV file that can be opened in Excel. This is cool for Excel gurus. I personally prefer the next option, which downloads the same information into a text file.
The third button is the “Refresh” button. It is needed because, due to timing issues, the content of the Statistics window is sometimes rendered “too fast” 😉 This means not all of the performance information is visible yet. Pressing the “Refresh” button ensures that you see the very latest data. Note that this affects only the information displayed directly in the “Statistics” window. If you press one of the “Download” buttons, all relevant data is contained in the downloaded files.
The “Close” button (guess what) closes the Statistics window. However, to continue working with the application you don’t need to close the window. By dragging its “title bar” you can simply move it to a place where it does not bother you too much. The window is modeless, so you can still click on the application area. You can even resize the window by dragging its lower right corner. If you have closed the window, it will re-appear after you have triggered a server roundtrip.
After we have downloaded the relevant information as a text file, we get a lot of strange-looking information:
Step 0: UID: null
Runtime: 1360 ms, Java: 332 ms, remote: 1028 ms
Java events: 1360 ms
  1360 ms: Application step ZEN (12500)
    1360/16 ms: Process request ZMK_TEST_02 (12600)
      1032 ms: Process page command SET_TEMPLATE (12608)
        1032/50 ms: Initialize item main_template (14100)
          0 ms: Create system service TEMPLATE for local [MASTER SYSTEM]
          967 ms: Process page command CREATE_DATA_PROVIDER (12608)
            0 ms: Create service PAGE_OBJECT
            967 ms: Initialize data provider DS_1 (13100)
              967 ms: Create system service BICS_QUERY_VIEW for Q93 [PUBLIC]
                47 ms: Get connection
                78 ms: Create system service MASTER_SYSTEM_SETTINGS for Q93 [PUBLIC]
                795 ms: Setup initial state (13001)
                  795 ms: Create system service BICS_PROVIDER_QUERY_VIEW for Q93 [PUBLIC]
                    249 ms: RFC BICS_PROV_OPEN (10000)
                    47 ms: Create system service BICS_DATA_AREA for Q93 [PUBLIC]
                    499 ms: RFC BICS_PROV_GET_INITIAL_STATE (10000)
                      499 ms: BICS_PROV_GET_INITIAL_STATE: (-2)
At the top of each “Step” we can already see some useful information:
Step 0: UID: null
Runtime: 1360 ms, Java: 332 ms, remote: 1028 ms
This tells us that this “Step 0” (i.e. the application startup) took 1360 ms, but only 332 ms were spent in Design Studio’s Java code. The majority of the time (1028 ms; 332 + 1028 = 1360) was spent “remote”, i.e. waiting for the remote systems behind the data sources. These could be BW systems, HANA systems or BIP universes.
After the top-level runtime information we get a hierarchical view of the execution of a step. Each line starts with the time (in milliseconds) that was used. Some of these numbers are followed by a second number, separated by a slash:
1032/50 ms: Initialize item main_template (14100)
The second number (here: 50) means that out of the 1032 overall milliseconds there are 50 ms that cannot be assigned to any of the sub-steps of this step. I.e. if you sum up all milliseconds of the sub-steps you will get 982 (1032 – 50) milliseconds. The rest of the time is consumed in the “flow” between these steps. These times after the “/” should therefore typically be quite low (less than 100 ms). If you find times above 100 ms, open a ticket for the Design Studio team and tell them that they should add a specific entry in the statistics for such a significant amount of time.
Note that the granularity of times on Windows operating systems is 15.6 milliseconds. So an entry with “0 ms” does not necessarily mean “almost no time” but just “below 15.6 ms”. This also explains why the “below 100” entries often have values like “16 ms”, “31 ms” or “47 ms” (one, two or three timer ticks).
In most cases bad performance is caused by accessing data sources, which means accessing the “remote” system that contains the analytical data. The statistics entries vary between the remote-system types (BW, HANA, Universe).
As a first example we take a look at BW data sources. BW systems are accessed via the “Business Intelligence Consumer Services” (BICS). These services access the BW system using “Remote Function Calls” (RFC). RFC calls are typically the items that take a significant amount of time.
In this example one of the long-running calls is “BICS_PROV_OPEN”. By going up the hierarchy we can see that this was triggered by “Initialize data provider DS_1”:
842 ms: Initialize data provider DS_1 (13100)
  842/1 ms: Create system service BICS_QUERY_VIEW for Q93 [PUBLIC]
    36 ms: Get connection
    56/55 ms: Create system service MASTER_SYSTEM_SETTINGS for Q93 [PUBLIC]
      1 ms: Create system service SYSTEM_CAPABILITIES for Q93 [PUBLIC]
    699 ms: Setup initial state (13001)
      699/3 ms: Create system service BICS_PROVIDER_QUERY_VIEW for Q93 [PUBLIC]
        550 ms: RFC BICS_PROV_OPEN (10000)
If you see several of these “Initialize” entries in the statistics, you should check whether you really need them within this step. Especially in the startup step there are often too many data sources initialized that are in fact only needed in later steps (or not at all, if the user does not click through all tabs). Check the blog mentioned at the beginning for ways to change this; a small sketch of the idea follows below.
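As a minimal sketch (assuming a data source DS_2 whose “Load in Script” property is set to true; all names are placeholders): instead of letting DS_2 initialize at startup, load it in the event that really needs the data.

// "On Select" event of the tab that displays DS_2's data.
// Because DS_2 is marked "Load in Script" it is not initialized at startup;
// the "Initialize data provider DS_2" entry shows up in this step instead.
DS_2.loadDataSource();

After this change, the statistics of the startup step should no longer contain an “Initialize data provider DS_2” entry.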
Another important call is “BICS_PROV_GET_RESULT_SET”.
247/5 ms, 80: Get result set (13004)
  240/3 ms, 80: Get provider result set (13040)
    237 ms: RFC BICS_PROV_GET_RESULT_SET (10000)
The “Get result set” entry indicates that the complete result of a data source is fetched from the remote system. If you see these entries you should check whether the number of retrieved result sets matches the UI that is displayed. E.g. if a user interaction brings up a crosstab and a chart, there should be at most two “Get result set” entries. Even better, there should be only one, because a crosstab and a chart can share one data source in most cases.
Result sets are retrieved when they are needed for the first time, e.g. when a crosstab or chart renders its data. After that the result set is re-used as long as the data source state does not change. Any scripting call that changes the data source state (setting filters or variables, changing the dimension layout, etc.) will trigger “Get result set” again; see the sketch after the next paragraph.
If the number of “Get result set” entries is higher than you would expect, then you should check your script code for calls that change the data source state.
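For example, calls like the following change the data source state (a sketch using common data source scripting methods; DS_1 and the dimension name are placeholders, and the available methods depend on your Design Studio version):

// After each of these calls the cached result set is stale, so the next
// rendering triggers a new "Get result set" (and thus a remote call):
DS_1.setFilter("0CALYEAR", "2015");     // change a filter value
DS_1.clearFilter("0CALYEAR");           // remove a filter
DS_1.moveDimensionToRows("0CALYEAR");   // change the axis layout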
On the other data source platforms (HANA, Universes) the “BICS_” RFC calls won’t be there. There will be other platform-specific entries instead, e.g. SQL calls for HANA:
15 ms: Get result set (13004)
  15 ms, 3: Get result set (13004)
    15 ms, 3: Get provider result set (13040)
      15 ms: Execute SQL: GET_CROSSTAB_DATA
This means that you should look for the “generic” texts like “Get result set” or “Initialize data provider” when you are analyzing non-BW data sources. If you are a BW junkie (like me) and have used BW performance analysis tools like “RSTT” (RS Trace Tool), you might like the “BICS” calls more, because you have seen them very often (probably too often) in RSTT 😉
What's next?
I will post some more “Design Studio Tips and Tricks” in the future.
Stay tuned and please share your comments and feedback.