In every SAP HANA installation, table partitioning is needed for huge tables, especially when considering the standard SAP HANA hard limit of 2,000 million (2 billion) records per table or partition.
As stated in SAP Note 2044468 (FAQ: SAP HANA Partitioning), there are several recommendations for partitioning SAP standard tables. However, for huge tables (containing far more than 2,000 million records), this process can have a very long runtime.
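For reference, tables or partitions approaching this limit can be identified through the M_CS_TABLES monitoring view; the following is a minimal sketch, where the threshold of 1,500 million records is only an example value:

-- List column store tables/partitions approaching the 2-billion-record limit
-- (the 1,500 million threshold is just an example cutoff)
SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, RECORD_COUNT
FROM M_CS_TABLES
WHERE RECORD_COUNT > 1500000000
ORDER BY RECORD_COUNT DESC;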
There is a workaround that can result in a HUGE performance improvement:
As an example, we tried to re-partition table PRCD_ELEMENTS (the standard SAP pricing conditions table). After more than 12 hours of runtime, we noticed that most of the time was being spent in step 4 (RepartitionMain): even though all of the regular columns were already processed, a lot of time was spent processing the column $trexexternalkey$ (the internal SAP HANA primary key of the table).
After 12 hours of execution the job had still not finished, so we decided to cancel it.
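For anyone reproducing this, the step a running partitioning job is working on can be observed via the M_JOB_PROGRESS monitoring view. This is only a sketch; the exact JOB_NAME and PROGRESS_DETAIL values can vary between revisions:

-- Monitor a running (re-)partitioning job and the step it is currently in
SELECT JOB_NAME, OBJECT_NAME, PROGRESS_DETAIL,
       CURRENT_PROGRESS, MAX_PROGRESS, START_TIME
FROM M_JOB_PROGRESS
WHERE OBJECT_NAME LIKE '%PRCD_ELEMENTS%';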
However, SAP Note 2044468 also contains a recommendation on how to proceed when the process is stuck in this step: drop the primary key index before the partitioning operation and recreate it afterwards.
Dropping this index improved the runtime of the partitioning operation from more than 12 hours without finishing to about 4:30 hours in total.
However, some considerations need to be taken into account. The SAP Note is not very clear on how the DROP and ADD operations should be done (a plain DROP INDEX statement does not work here, because the index backs the primary key constraint, so the constraint itself has to be dropped). What I did before starting was to check the primary key fields of the table via the "HANA_Index_Columns" query from SAP Note 1969700 (SQL Statement Collection for SAP HANA).
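If the statement collection is not at hand, the primary key columns and the constraint name can also be read directly from the CONSTRAINTS system view; a minimal sketch:

-- Determine the primary key constraint name and its columns
SELECT CONSTRAINT_NAME, COLUMN_NAME, POSITION
FROM CONSTRAINTS
WHERE SCHEMA_NAME = 'SAPABAP1'
  AND TABLE_NAME = 'PRCD_ELEMENTS'
  AND IS_PRIMARY_KEY = 'TRUE'
ORDER BY POSITION;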
Once we had gathered the columns used for the primary key and the name of the index itself, the commands I used for the DROP and ADD operations were:
ALTER TABLE "SAPABAP1"."PRCD_ELEMENTS" DROP CONSTRAINT "PRCD_ELEMENTS~0";
ALTER TABLE "SAPABAP1"."PRCD_ELEMENTS" ADD CONSTRAINT "PRCD_ELEMENTS~0" PRIMARY KEY (CLIENT, KNUMV, KPOSN, STUNR, ZAEHK);
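Putting it together, the complete sequence around the partitioning operation looks as follows. Note that the HASH specification below is purely illustrative (an assumption on my part); the actual partitioning scheme depends on your own sizing and the recommendations in SAP Note 2044468:

-- 1. Drop the primary key constraint (this removes $trexexternalkey$)
ALTER TABLE "SAPABAP1"."PRCD_ELEMENTS" DROP CONSTRAINT "PRCD_ELEMENTS~0";

-- 2. Run the (re-)partitioning; HASH on KNUMV with 8 partitions is
--    only an example specification, not the scheme we actually used
ALTER TABLE "SAPABAP1"."PRCD_ELEMENTS" PARTITION BY HASH (KNUMV) PARTITIONS 8;

-- 3. Recreate the primary key constraint
ALTER TABLE "SAPABAP1"."PRCD_ELEMENTS" ADD CONSTRAINT "PRCD_ELEMENTS~0" PRIMARY KEY (CLIENT, KNUMV, KPOSN, STUNR, ZAEHK);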
Since this operation drops the internal SAP HANA index and then recreates it, the application servers should be shut down during the procedure in order to avoid any data inconsistency. However, there are no additional tasks that need to be done once the operation is finished.
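That said, as a purely optional precaution (my own habit, not a requirement from the SAP Note), the table can be checked for consistency after the primary key is recreated:

-- Optional: consistency check afterwards (can be expensive on huge tables)
CALL CHECK_TABLE_CONSISTENCY('CHECK', 'SAPABAP1', 'PRCD_ELEMENTS');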
I hope you find this information useful.