Before you start reading this blog, it is recommended to read the earlier posts in this blog series.
After verifying functional correctness, you can be sure that your custom code produces the expected results after the migration to SAP HANA. In general, custom code already gets a default performance improvement from the in-memory capabilities. Often, however, custom code does not follow the SAP standard guidelines and therefore performs poorly. This section explains the next steps for improving the performance of custom code.
To prepare, one should know the basics of performance optimization. In particular, you should have a clear understanding of the "Golden Rules of Open SQL" and how their priorities change with respect to SAP HANA. The image below depicts the golden rules of SQL; with SAP HANA, the priority of some rules changes.
There are five important golden rules that should be followed by any (SAP) application or report. Let's look at the rules before and after SAP HANA and how their priorities change.
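To make the rules concrete, here is a minimal, hypothetical ABAP sketch based on the flight-model demo table SFLIGHT (the table, fields, and the carrier value 'LH' are illustrative, not taken from this blog). The first access violates the rules "keep the result set small" and "minimize the amount of transferred data"; the second follows them by filtering in the WHERE clause and selecting only the required columns.

    " Violates the rules: reads all rows and columns, filters in ABAP
    SELECT * FROM sflight INTO TABLE @DATA(lt_all_flights).
    LOOP AT lt_all_flights INTO DATA(ls_flight) WHERE carrid = 'LH'.
      " ... process only the Lufthansa flights
    ENDLOOP.

    " Rule-conformant: filter on the database, transfer only needed columns
    SELECT carrid, connid, fldate, seatsocc
      FROM sflight
      WHERE carrid = 'LH'
      INTO TABLE @DATA(lt_lh_flights).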
When we talk about performance, customers ask one basic question: "How can I find ABAP code that should be optimized or that has potential for massive acceleration?" The answer: in general, no changes are necessary if your custom SQL code follows the golden Open SQL rules. The ATC or SCI checks can be used to find SQL patterns that violate these rules. Adding runtime performance data from the production system allows you to rank the findings and spot the potential for massive acceleration. The code check tools (ATC/SCI) have now been improved with additional checks to identify performance loopholes.
The image below shows the additional checks in the ATC tool, mapped to the golden rules to show their relevance.
The section "Performance Checks" contains additional checks that identify performance loopholes in the code and indicate where it can be improved; one typical pattern is sketched below.
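As an illustration, one pattern such checks typically flag is a SELECT statement inside a loop, which causes one database round trip per iteration. The hedged sketch below uses the hypothetical flight-model demo tables SBOOK and SCUSTOM (names are assumptions for illustration) and shows the flagged pattern next to a set-oriented alternative.

    " Flagged pattern: nested SELECT, one round trip per booking
    SELECT customid FROM sbook
      WHERE carrid = 'LH'
      INTO TABLE @DATA(lt_bookings).
    DATA lv_name TYPE scustom-name.
    LOOP AT lt_bookings INTO DATA(ls_booking).
      SELECT SINGLE name FROM scustom
        WHERE id = @ls_booking-customid
        INTO @lv_name.
    ENDLOOP.

    " Improved: a single set-oriented access using a JOIN
    SELECT b~customid, c~name
      FROM sbook AS b
      INNER JOIN scustom AS c ON c~id = b~customid
      WHERE b~carrid = 'LH'
      INTO TABLE @DATA(lt_names).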
The SQL Monitor tool (transaction SQLM) is used to record performance traces of each and every SQL statement executed on the ABAP server (generally in the production system). This tool can answer several key questions with respect to optimization.
None of the standard performance analysis tools you might know is capable of answering any of these questions in a satisfying way. Let’s take a moment to figure out why.
Trace tools like ST05 (SQL Trace) or SAT (ABAP Runtime Analysis), on the one hand, are designed to trace a single process but not your entire system. Hence, even if activated only for a short period of time, the trace files would become way too big and the performance overhead would be unacceptable in a productive system. Monitoring tools such as STAD (Business Transaction Analysis) and ST03 (Workload Monitor), on the other hand, run system-wide and provide aggregated performance data on the process level. However, they don’t allow you to drill down in the data so there is no way to get the SQL profile of a process. Other monitoring utilities like ST04 (DB Performance Monitor) supply you with detailed information about every executed SQL statement but cannot provide a link to the driving business processes.
So how can you answer the questions stated above? This is where the new SQL Monitor kicks in by providing you with system-wide aggregated runtime data for each and every database access. You may think of it as an aggregated SQL trace that runs permanently and without user restriction. On top of that, the SQL Monitor also establishes a connection between the SQL statement and the driving business process. To be more precise, this tool not only provides you with the source code position of every executed SQL statement but also with the request’s entry point that is, for instance, the transaction code.
SAP recommends activating the SQL Monitor on the production system for at least a week; in many cases it needs to be activated for two weeks. The SQL Monitor collects the runtime traces of all SQL statements executed over the activation period. This data can be exported and uploaded as a snapshot to the development or quality system, where it can be analyzed further to detect performance potential. The diagram above shows the SQL statements collected over the span of a week on the production system and gives insight into their runtime behavior.
The next step after collecting the runtime traces is to prioritize the findings. For this, SAP delivers a tool named "SQL Performance Tuning Worklist" (SWLT), which combines the results of the static checks with the runtime findings.
The SQL Performance Tuning Worklist tool (transaction SWLT) enables you to find ABAP SQL code that has potential for performance improvement in productive business processes. The tool combines ABAP code scans (ABAP Test Cockpit or Code Inspector) with monitoring and analysis utilities (SQL Monitor and Coverage Analyzer) and automatically creates a condensed worklist. The resulting findings allow you to rank the worklist according to specific performance issues and their business relevance. To analyze static checks, suitable ABAP Test Cockpit runs must first have been performed in the systems in question and their results replicated to the relevant system.
The image below explains how the static and runtime findings are combined in one tool, the SWLT.
The SWLT tool manages the SQL Monitor snapshots taken from the production system and combines them with the ATC results. With this combined view, prioritizing the optimizations becomes much easier.
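A typical optimization that such a prioritized worklist leads to is pushing calculations down to the database, so that SAP HANA can exploit its in-memory capabilities. The sketch below is again only an assumption-laden illustration using the demo table SFLIGHT: an ABAP-side aggregation loop is replaced by a single aggregate function evaluated on the database.

    " Before: all matching rows are transferred and summed in ABAP
    DATA lv_total TYPE s_sum.
    SELECT paymentsum FROM sflight
      WHERE carrid = 'LH'
      INTO TABLE @DATA(lt_payments).
    LOOP AT lt_payments INTO DATA(ls_payment).
      lv_total = lv_total + ls_payment-paymentsum.
    ENDLOOP.

    " After: the database computes the sum; only one value is transferred
    SELECT SUM( paymentsum ) FROM sflight
      WHERE carrid = 'LH'
      INTO @DATA(lv_total_db).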
To prioritize the findings, one can consider the following inputs.
Apart from the three inputs above, there are other inputs that have to be considered.
The other phases of custom code management are discussed in the remaining posts of this blog series.