ADVERSE EFFECTS OF A HIGHER STATEMENT_MEMORY_LIMIT ON HANA DB:
As you all know, in HANA, as soon as peak memory is hit, column store tables are unloaded, based on a least recently used (LRU) algorithm, to make room for new table load requests. In the screenshot below, the column store consumes the most memory; if HANA hits peak memory with this distribution, it will unload column store tables based on LRU to satisfy the new requests.
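To verify whether such LRU unloads are actually happening, the monitoring view M_CS_UNLOADS records each unload together with its reason. A minimal sketch, assuming the usual 'LOW MEMORY' reason value marks memory-pressure unloads (check the reason values present on your system):
-- Recent column store unloads triggered by memory pressure
-- ('LOW MEMORY' as the reason for LRU unloads is an assumption)
SELECT TOP 20 SCHEMA_NAME, TABLE_NAME, REASON, UNLOAD_TIME
FROM M_CS_UNLOADS
WHERE REASON = 'LOW MEMORY'
ORDER BY UNLOAD_TIME DESC;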
However, in the screenshot below, we can see that the heap is consuming the most memory. In this case, at peak memory HANA does not have enough to free up by unloading column store tables: the column store holds only 287 GB, unlike statement execution, which holds 1.3 TB. So when does this happen, and how does HANA react? In other words, HANA has already unloaded every table it possibly can, and all the remaining tables are actually in use by the HANA DB right now.
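To see where memory actually sits without a screenshot, the top heap allocators can be read from the M_HEAP_MEMORY monitoring view. A minimal sketch, assuming the standard column names:
-- Top heap memory consumers per allocator category, in GB
SELECT TOP 10 HOST, CATEGORY,
ROUND(EXCLUSIVE_SIZE_IN_USE / 1024 / 1024 / 1024, 2) AS USED_GB
FROM M_HEAP_MEMORY
ORDER BY EXCLUSIVE_SIZE_IN_USE DESC;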
SO, HOW DOES HANA HANDLE THIS SITUATION?
Answer:
In the situation above, HANA allows each and every statement to allocate up to the limit set in statement_memory_limit, which here is 300 GB. So every long-running statement is free to use up to 300 GB out of the 2 TB set per node. That means any 4-5 long-running statements (4 or 5 x 300 GB) could hang the system, with each one growing toward its 300 GB maximum before that memory is released for reuse. Any poorly designed query can easily spoil the HANA DB. Hence it is very important to set the correct value for statement_memory_limit.
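For reference, the parameter lives in global.ini, section [memorymanager], with the value in GB; note that statement memory tracking (enable_tracking and memory_tracking in the [resource_tracking] section) must be active for the limit to take effect. A minimal sketch of checking and setting it:
-- Check the current value (in GB; 0 means no per-statement limit)
SELECT FILE_NAME, SECTION, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE KEY = 'statement_memory_limit';

-- Set a system-wide limit of 300 GB
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('memorymanager', 'statement_memory_limit') = '300'
WITH RECONFIGURE;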
HOW CAN THIS BE CORRECTED:
So, how do we determine the optimum value? To find out, we need to understand each statement that gets executed in the system and the memory it consumes. This can be recorded by enabling the expensive statements trace with the threshold set to -1, which records every statement that runs in the HANA DB.
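A minimal sketch of enabling the trace via SQL, assuming the standard [expensive_statement] section of global.ini (verify the parameter names against your revision; the -1 threshold follows the note above):
-- Enable the expensive statements trace and record all statements
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('expensive_statement', 'enable') = 'true',
('expensive_statement', 'threshold_duration') = '-1'
WITH RECONFIGURE;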
After setting these parameters, we can categorize the statements and their memory consumption in the HANA DB. That is, this setting starts populating the M_EXPENSIVE_STATEMENTS view, which holds values for up to 7 days.
With the help of the output below, we can arrive at the number of statements in each memory-consumption bucket.
-- Statements that needed more than 200 GB
SELECT DISTINCT STATEMENT_HASH FROM M_EXPENSIVE_STATEMENTS
WHERE ERROR_CODE = 0 AND OPERATION = 'SELECT'
AND (MEMORY_SIZE / 1024 / 1024 / 1024) > 200;

-- Statements that needed between 100 and 200 GB
SELECT DISTINCT STATEMENT_HASH FROM M_EXPENSIVE_STATEMENTS
WHERE ERROR_CODE = 0 AND OPERATION = 'SELECT'
AND (MEMORY_SIZE / 1024 / 1024 / 1024) BETWEEN 100 AND 200;

-- Statements that needed less than 100 GB
SELECT DISTINCT STATEMENT_HASH FROM M_EXPENSIVE_STATEMENTS
WHERE ERROR_CODE = 0 AND OPERATION = 'SELECT'
AND (MEMORY_SIZE / 1024 / 1024 / 1024) < 100;
Memory consumed                    | Number of statements
Less than 100 GB                   | 5690
Between 100 and 200 GB             | 17
Greater than 200 GB (up to 300 GB) | 7
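As an alternative to the three separate queries, here is a sketch that produces the bucket counts in one pass; it classifies each statement hash by its peak memory, which also avoids counting the same hash in more than one bucket (a possibility with the queries above):
-- Count distinct statements per memory bucket, using peak memory per hash
SELECT
CASE
WHEN MAX_GB < 100 THEN 'Less than 100 GB'
WHEN MAX_GB BETWEEN 100 AND 200 THEN 'Between 100 and 200 GB'
ELSE 'Greater than 200 GB'
END AS MEMORY_BUCKET,
COUNT(*) AS NUM_STATEMENTS
FROM (
SELECT STATEMENT_HASH,
MAX(MEMORY_SIZE) / 1024 / 1024 / 1024 AS MAX_GB
FROM M_EXPENSIVE_STATEMENTS
WHERE ERROR_CODE = 0 AND OPERATION = 'SELECT'
GROUP BY STATEMENT_HASH
) AS S
GROUP BY
CASE
WHEN MAX_GB < 100 THEN 'Less than 100 GB'
WHEN MAX_GB BETWEEN 100 AND 200 THEN 'Between 100 and 200 GB'
ELSE 'Greater than 200 GB'
END;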
NOTE: Any statement that tries to consume more than 300 GB will be cancelled by HANA, and this will be recorded in M_OUT_OF_MEMORY_EVENTS as well as in M_EXPENSIVE_STATEMENTS.
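To review such cancellations, a minimal sketch (the view is available in newer HANA 2.0 revisions; TIME as the timestamp column is an assumption based on standard monitoring-view naming, so check the view definition on your revision):
-- Most recent out-of-memory events, newest first
SELECT TOP 20 * FROM M_OUT_OF_MEMORY_EVENTS ORDER BY TIME DESC;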
From the output, we can see that only 7 statements need more than 200 GB and only 17 need between 100 and 200 GB; over 99% of statements (5690 of 5714) do not need more than 100 GB. Hence the system can be protected from this performance issue by reducing statement_memory_limit to just 100 GB.
What about the remaining 17 + 7 statements?
Once we set statement_memory_limit to 100 GB, all statements consuming or requesting more than 100 GB will fail. This can be handled by using workload classes or by increasing the statement memory limit on a per-user basis. Please check the link below for more information.
https://blogs.sap.com/2020/09/12/did-you-know-that-global-statement_memory_limit-can-be-overridden-b...
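As an illustration of both options, a minimal sketch; the user, class, and mapping names are hypothetical, and the values are again in GB (verify the workload management syntax against your revision):
-- Option 1: raise the limit for a single user only
-- (REPORTING_USER is a hypothetical user name)
ALTER USER REPORTING_USER SET PARAMETER STATEMENT MEMORY LIMIT = '200';

-- Option 2: a workload class with a higher limit, mapped to an application user
CREATE WORKLOAD CLASS "WLC_HEAVY_REPORTS" SET 'STATEMENT MEMORY LIMIT' = '200';
CREATE WORKLOAD MAPPING "WLM_HEAVY_REPORTS" WORKLOAD CLASS "WLC_HEAVY_REPORTS"
SET 'APPLICATION USER NAME' = 'REPORTING_USER';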
Thanks for reading!
Like, and leave a comment or suggestion if you have any!