NSE Implementation Experience
** All the screenshots are taken by me from one of our HANA implementations.
Introduction:
NSE (Native Storage Extension) is a fully functional warm data store in the HANA database, which we can leverage to keep less frequently accessed data on disk instead of loading it fully into memory.
This is a very useful feature, as it lets us avoid increasing server capacity and thereby control hardware costs.
For example, one of our customers runs a 2 TB production instance on Azure with an average memory consumption of 1.7-1.8 TB. To increase hardware capacity while keeping a similar CPU size, we would need a larger VM of around 4 TB, which is oversized for the current growth rate of the database; the customer would end up paying for a larger VM even though it is not fully utilized.
As an example, Azure pricing is shown below.
To keep the VM size the same and to control memory usage, we have the following options.
So, we have picked NSE as an option to reduce the database memory utilization.
Pre-Implementation Analysis:
Before starting the implementation of NSE, we need to identify the potential tables that can be moved to NSE and offloaded from memory.
To find the list of tables, we need to define the criteria for picking them. The preferred criterion is to offload tables that have a high volume of changes (INSERTs, DELETEs, UPDATEs, etc.) but very few READs. As a best practice, we also decided to exclude business-critical tables from this exercise.
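As a quick sanity check against these criteria, the M_CS_TABLES monitoring view can be used to shortlist large column-store tables with heavy write activity but few reads. The 1 GiB size threshold here is just an illustrative starting point, not an SAP recommendation.

SELECT SCHEMA_NAME, TABLE_NAME, MEMORY_SIZE_IN_TOTAL, READ_COUNT, WRITE_COUNT
FROM M_CS_TABLES
WHERE MEMORY_SIZE_IN_TOTAL > 1073741824 -- larger than 1 GiB in memory
ORDER BY WRITE_COUNT DESC, READ_COUNT ASC;

This is only a rough pre-screen; the NSE Advisor remains the authoritative source for recommendations.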
We can use the following to find out the tables that satisfy the above criteria.
As per SAP's definition, the NSE Advisor is a tool that determines the temperature of data and uses rule-based algorithms (data temperature threshold, access pattern/frequency, data density) to derive recommendations, in particular to identify hot and warm objects as candidates to be either column-loadable or page-loadable.
The NSE Advisor needs to run in the production system for a few days to capture table access statistics and provide recommendations. This can add overhead on the database; however, we did not notice a significant increase in CPU/memory consumption, so we activated the NSE Advisor.
Note: Please review resource consumption on your HANA database in a quality system to understand the impact before activating this in production. The key metrics to monitor are DB CPU, DB memory usage, HANA thread usage, job runtimes, etc.
Activating NSE Advisor:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','system') SET ('cs_access_statistics','collection_enabled') = 'true'
WITH RECONFIGURE;
This activates the NSE Advisor with its default settings. We can configure the NSE Advisor to consider only tables larger than a specific size, to avoid noise in the result set.
NSE Advisor Additional Configuration Parameters:
MIN_OBJECT_SIZE: Controls the minimum object size (in bytes) for an object to be considered for a recommendation
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET
('cs_nse_advisor','min_object_size') = 'YYY'
WITH RECONFIGURE;
--default value 1048576 = 1 MiB
Replace YYY with the desired minimum object size in bytes.
HOT_OBJECT_THRESHOLD_PERCENT: Controls the percentage of tables, according to row scan count and size, to consider when calculating the hot object threshold
COLD_OBJECT_THRESHOLD_PERCENT: Controls the percentage of tables, according to row scan count and size, to consider when calculating the cold object threshold
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'system') SET
('cs_nse_advisor','hot_object_threshold_rel') = 'YY'
WITH RECONFIGURE; --default value 10
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'system') SET
('cs_nse_advisor','cold_object_threshold_rel') = 'YY'
WITH RECONFIGURE; --default value 10
Note: The sum of HOT_OBJECT_THRESHOLD_PERCENT and COLD_OBJECT_THRESHOLD_PERCENT
must not exceed 100.
Once the NSE Advisor is activated, we can let it run for a few weeks, covering most of the user activity periods.
After it has run for a few weeks, turn it off using the SQL statement below:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','system') SET
('cs_access_statistics','collection_enabled') = 'false'
WITH RECONFIGURE;
Review the recommendations of NSE Advisor using the query:
SELECT * FROM M_CS_NSE_ADVISOR;
NSE Advisor Results:
After reviewing the NSE Advisor results, we decided to move the following tables to PAGE LOADABLE to offload them from memory.
This gives us savings of 250 GB, with potential savings of around 500 GB.
Implementation:
As a first step, we moved ZARIX* tables to PAGE LOADABLE.
ALTER TABLE SAPABAP1.ZARIXBC1 PAGE LOADABLE CASCADE;
ALTER TABLE SAPABAP1.ZARIXBC2 PAGE LOADABLE CASCADE;
ALTER TABLE SAPABAP1.ZARIXBC3 PAGE LOADABLE CASCADE;
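To verify that the change took effect, the load unit of each table can be checked in the TABLES system view (the LOAD_UNIT column is available as of HANA 2.0 SPS04, which NSE itself requires):

SELECT TABLE_NAME, LOAD_UNIT
FROM TABLES
WHERE SCHEMA_NAME = 'SAPABAP1'
AND TABLE_NAME LIKE 'ZARIX%';

A value of PAGE confirms the table is page loadable.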
After a week of monitoring the buffer cache, we moved the Change Documents tables to PAGE LOADABLE.
Before moving them to PAGE LOADABLE, we decided to take advantage of partitioning: move all old historical data to PAGE LOADABLE and keep only the current partition in memory.
So, we partitioned CDHDR by UDATE and CDPOS by CHANGENR.
ALTER TABLE SAPABAP1.CDHDR PARTITION BY
RANGE(UDATE)
( (PARTITION '20170101' <= VALUES < '20180101',
PARTITION '20180101' <= VALUES < '20190101',
PARTITION '20190101' <= VALUES < '20200101',
PARTITION '20200101' <= VALUES < '20210101',
PARTITION '20210101' <= VALUES < '20220101',
PARTITION OTHERS
))
ALTER TABLE SAPABAP1.CDPOS
PARTITION BY
RANGE (CHANGENR)((
PARTITION '0000000000' <= VALUES < '0010000000',
PARTITION '0010000000' <= VALUES < '0050000000',
PARTITION '0050000000' <= VALUES < '0100000000',
PARTITION '0100000000' <= VALUES < '0200000000',
PARTITION '0200000000' <= VALUES < '0300000000',
PARTITION '0300000000' <= VALUES < '0400000000',
PARTITION '0400000000' <= VALUES < '0500000000',
PARTITION '0500000000' <= VALUES < '0600000000',
PARTITION '0600000000' <= VALUES < '0700000000',
PARTITION '0700000000' <= VALUES < '0800000000',
PARTITION '0800000000' <= VALUES < '0900000000',
PARTITION '0900000000' <= VALUES < '1000000000',
PARTITION OTHERS ))
Partitioning can take time depending on the table size; the following SQL query can be used to track the progress.
SELECT * FROM M_JOB_PROGRESS;
While partitioning, keep in mind the performance impact on the database and keep monitoring the alerts from Solution Manager, if you have it configured already.
During the partitioning process, we received the following alerts, but there was no severe impact on performance because the partitioning was done during a period of low activity. Still, it is recommended to have monitoring in place.
Once partitioned, move the historical partitions to PAGE LOADABLE.
Statement 'ALTER TABLE SAPABAP1.CDHDR ALTER PARTITION 1 PAGE LOADABLE'
successfully executed in 1:08.042 minutes (server processing time: 1:07.851 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDHDR ALTER PARTITION 2 PAGE LOADABLE'
successfully executed in 1:08.998 minutes (server processing time: 1:08.811 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDHDR ALTER PARTITION 3 PAGE LOADABLE'
successfully executed in 1:12.009 minutes (server processing time: 1:11.830 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDHDR ALTER PARTITION 4 PAGE LOADABLE'
successfully executed in 1:04.275 minutes (server processing time: 1:04.094 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 1 PAGE LOADABLE'
successfully executed in 3.513 seconds (server processing time: 3.238 seconds) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 2 PAGE LOADABLE'
successfully executed in 2:19.986 minutes (server processing time: 2:19.805 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 3 PAGE LOADABLE'
successfully executed in 2:14.751 minutes (server processing time: 2:14.573 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 4 PAGE LOADABLE'
successfully executed in 4:12.651 minutes (server processing time: 4:12.475 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 5 PAGE LOADABLE'
successfully executed in 4:56.709 minutes (server processing time: 4:56.533 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 6 PAGE LOADABLE'
successfully executed in 7:26.954 minutes (server processing time: 7:26.776 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 7 PAGE LOADABLE'
successfully executed in 7:12.293 minutes (server processing time: 7:12.116 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 8 PAGE LOADABLE'
successfully executed in 9:15.744 minutes (server processing time: 9:15.566 minutes) - Rows Affected: 0
Statement 'ALTER TABLE SAPABAP1.CDPOS ALTER PARTITION 9 PAGE LOADABLE'
successfully executed in 17:31.126 minutes (server processing time: 17:30.949 minutes) - Rows Affected: 0
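To confirm that only the historical partitions were converted and the current (OTHERS) partition stays column loadable, the per-partition load unit can be checked via the TABLE_PARTITIONS system view:

SELECT TABLE_NAME, PART_ID, LOAD_UNIT
FROM TABLE_PARTITIONS
WHERE SCHEMA_NAME = 'SAPABAP1'
AND TABLE_NAME IN ('CDHDR', 'CDPOS')
ORDER BY TABLE_NAME, PART_ID;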
Buffer Cache Size:
As the buffer cache is what holds the pages of page-loadable data when they are accessed, it needs to be sized properly.
If you keep the buffer cache too large, most of it is left unused.
If you keep it too small, buffer cache overflow events can occur and cause short dumps.
By default, the buffer cache is sized at 10% of the global allocation limit (GAL), and we can customize it using the parameters below.
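The relevant parameters sit in the buffer_cache_cs section of indexserver.ini: max_size sets an absolute limit in MB, and max_size_rel sets a limit as a percentage of the allocation limit. As an illustration (the 10240 MB value here is an example, not a recommendation):

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET
('buffer_cache_cs', 'max_size') = '10240' -- 10 GB, value in MB
WITH RECONFIGURE;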
As part of the NSE implementation, we offloaded 180 GB of tables and kept the initial buffer cache size at 100 MB, which is too low to start with, but we left it that way for testing.
We asked our master data team to load some master data, and they observed the short dumps below.
We increased the buffer cache size to 10 GB and have not received any further dumps.
The buffer cache needs to be monitored to review the hit ratio, buffer reuse count, etc.
The buffer cache can be monitored using:
SELECT * FROM M_BUFFER_CACHE_STATISTICS;
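If only the headline indicators are needed, the key columns can be selected directly; verify the column names against the monitoring view reference for your HANA revision:

SELECT CACHE_NAME, MAX_SIZE, ALLOCATED_SIZE, USED_SIZE, HIT_RATIO
FROM M_BUFFER_CACHE_STATISTICS;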
Results:
After moving a few tables to PAGE LOADABLE, we saw a reduction in HANA memory usage of around 240 GB, which is a significant amount considering the database size.