Technology Blogs by Members

In this blog, we will learn how to run a data volume defragmentation to reclaim unused space from data volumes.

Why do we have fragmented space in the data volume?

In HANA, when we run a mass cleanup, the relevant table entries are deleted and the size reduction is reflected in M_TABLE_PERSISTENCE_STATISTICS for all the involved tables. At the OS level, however, the data volume keeps its old size: the freed space remains inside the volume as fragmented free space until it is reclaimed.

Does this operation need downtime?

No. This is an ONLINE operation and does not need downtime. However, it is recommended to perform it when the load on the HANA DB is reduced.

How to determine if our HANA DB is suffering from a large amount of unused fragmented data in the data volume?

To determine the amount of space that is fragmented, we can use the SQL below.
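The original query is not reproduced in this scrape; a commonly used check (a sketch, assuming the standard M_VOLUME_FILES monitoring view with USED_SIZE and TOTAL_SIZE columns) compares used versus allocated size of the DATA files per service:

```sql
-- Assumption: fragmentation = share of the data files that is allocated
-- but not used (TOTAL_SIZE - USED_SIZE), aggregated per host and port.
SELECT
  HOST,
  PORT,
  ROUND(SUM(TOTAL_SIZE) / 1024 / 1024 / 1024, 2) AS "Total Size GB",
  ROUND(SUM(USED_SIZE)  / 1024 / 1024 / 1024, 2) AS "Used Size GB",
  ROUND((1 - SUM(USED_SIZE) / SUM(TOTAL_SIZE)) * 100, 2) AS "Fragmentation %"
FROM M_VOLUME_FILES
WHERE FILE_TYPE = 'DATA'
GROUP BY HOST, PORT;
```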


Sample output: each of the file systems is allocated 6 TB of space.


To reclaim the fragmented space, the command below is used:

ALTER SYSTEM RECLAIM DATAVOLUME 120 DEFRAGMENT;

With the parameter 120, our final data volume size will be the payload plus 20% fragmented free space, and the database can be used normally while the command runs.
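To make the percentage arithmetic concrete, here is a small sketch (the function name and the 2000 GB sample payload are made up for illustration):

```python
# Hypothetical helper illustrating the RECLAIM DATAVOLUME parameter:
# it is the target data volume size as a percentage of the payload.
def target_volume_size_gb(payload_gb: float, reclaim_percent: int) -> float:
    """Final data volume size = payload * percent / 100."""
    return payload_gb * reclaim_percent / 100

# With the value 120, a 2000 GB payload keeps a 2400 GB data volume:
# the payload plus 20% free space.
print(target_volume_size_gb(2000, 120))  # 2400.0
```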

If we want to run the defragmentation only on a specific node, we can use the command below:

ALTER SYSTEM RECLAIM DATAVOLUME '<node server name>:3<nn>03' 120 DEFRAGMENT;

Here too, the final data volume size will be the payload plus 20% fragmented free space, and the database can be used normally while the command runs.


If the fragmentation percentage is very high, defragmenting the data volume to 120% in a single run will take a long time and can cause performance issues. In this case, it is recommended to run the same command in pieces, with gradually decreasing percentage values, ensuring that we reclaim only about 250 GB per defragment command.
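The stepwise commands appear in the original post as a screenshot that is not reproduced here. As a rough sketch of the planning (function name, sample sizes, and the exact stepping are assumptions for illustration), the intermediate percentages for successive ALTER SYSTEM RECLAIM DATAVOLUME <pct> DEFRAGMENT runs could be derived like this:

```python
# Hypothetical planner: shrink the data volume in ~250 GB steps instead
# of jumping straight to 120%, which would be one long-running operation.
def reclaim_steps(current_gb, payload_gb, step_gb=250, final_percent=120):
    target_gb = payload_gb * final_percent / 100
    steps = []
    size = current_gb
    while size - step_gb > target_gb:
        size -= step_gb
        # Percentage to pass to ALTER SYSTEM RECLAIM DATAVOLUME for this step.
        steps.append(int(size / payload_gb * 100))
    steps.append(final_percent)
    return steps

# A 6000 GB data volume with a 2000 GB payload yields a descending list
# of percentages, each step releasing about 250 GB, ending at 120.
print(reclaim_steps(6000, 2000))
```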

You can also refer to the link below.


Below is the sample output of a disk defragmentation that was run from HANACleaner:

Refer below for using HANACleaner to automate this process.


Thanks for reading!!

Please leave your comments and questions here!!