Understanding the SAP Data Lifecycle Manager (DLM) Tool
The explosive growth of business data continues to overwhelm IT departments around the globe that maintain large-scale data processing environments. Satisfying user requirements for data accessibility and performance with ever more storage and processing power is an enormous challenge, and one generally hampered by budget constraints.
Luckily, organizations that count on SAP HANA for their data processing needs can turn to the SAP HANA Data Warehousing Foundation, which includes the SAP Data Lifecycle Manager tool. It allows data to be moved, based on its operational usefulness, performance requirements, and access frequency, to a storage and processing tier with the cost and performance characteristics best suited to that data.
Data Lifecycle Manager (DLM) is a web-based data management tool that enables the SAP HANA data tiering process, relocating aged or less frequently used data from SAP HANA tables, for native SAP HANA applications and SQL Data Warehouse applications. DLM runs in common web browser environments and supports numerous data storage destinations. It is available for both the HANA XS-Classic and XS-Advanced application server stacks.
To relocate hot or warm data from SAP HANA to a specific cold data store for performance improvement, a DLM profile defining the source and target data stores and the relocation rules is needed. Relocation can run from hot to cold, from cold to hot, or bi-directionally to enable two-way movement.
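As a rough illustration of what a hot-to-cold relocation rule does, here is a minimal sketch using SQLite as a stand-in for both tiers; the table names, column names, and date threshold are all hypothetical, and this is not DLM's actual API (in DLM, the rule compiles into a HANA stored procedure and the cold store is a separate destination such as SAP IQ or Hadoop):

```python
import sqlite3

# One in-memory database stands in for both tiers; in a real landscape
# the cold table would live in a separate, cheaper storage destination.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_hot  (id INTEGER, sale_date TEXT, amount REAL)")
con.execute("CREATE TABLE sales_cold (id INTEGER, sale_date TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales_hot VALUES (?, ?, ?)",
    [(1, "2015-03-01", 10.0), (2, "2019-06-15", 20.0), (3, "2014-11-30", 5.0)],
)

# Relocation rule: rows older than the threshold move to the cold tier.
# Copy-then-delete in a single transaction mimics the generated procedure.
threshold = "2016-01-01"
with con:
    con.execute(
        "INSERT INTO sales_cold SELECT * FROM sales_hot WHERE sale_date < ?",
        (threshold,),
    )
    con.execute("DELETE FROM sales_hot WHERE sale_date < ?", (threshold,))

hot_rows = con.execute("SELECT COUNT(*) FROM sales_hot").fetchone()[0]
cold_rows = con.execute("SELECT COUNT(*) FROM sales_cold").fetchone()[0]
print(hot_rows, cold_rows)  # 1 row stays hot, 2 aged rows relocated
```

Running the same statements with the predicate reversed would sketch the cold-to-hot direction; a bi-directional profile simply allows both.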
A DLM scheduler, available on both application server stacks, allows a DLM profile to be executed immediately or on a schedule; that is, it runs the data movement defined by the profile's rules. This unattended, automated execution of a DLM profile supports better query performance by keeping the relevant data in a single storage location.
DLM also supports multi-tiering scenarios by combining two DLM profiles. The first profile manages a single partitioned table (a multi-store or regular column store table) by moving table partitions to a Dynamic Tiering node or an Extension Node for warm data management. The second profile moves the table partitions (and data) located on the primary slave node and/or the Dynamic Tiering or Extension Node (hot and warm storage) to a cold storage destination. The supported storage destinations depend on the application server stack in use:
DLM XS-Classic supports the following storage destinations, as illustrated below: Multi-Store Table, Extension Node, Extended Table (Dynamic Tiering), SAP IQ, and SAP SPARK Controller (Hadoop).
Fig 1: DLM XS-Classic Storage Destinations
While DLM XS-Advanced supports these storage destinations, as illustrated below: Extension Node, SAP SPARK Controller (Hadoop) and SAP Data Hub Cold Data Tiering (VORA disk-table).
Fig 2: DLM XS-Advanced Storage Destinations
Integrating easily into existing HANA-centric data models, DLM generates and activates a complete set of design-time and run-time database artifacts, eliminating the need to create data management and data access database artifacts manually. These artifacts, explained in more detail here, include:
DLM Data Movement Rule (compiles into a HANA stored procedure)
DLM HANA (column store) source table
DLM data target table or structure for moving data consistently from a set of connected tables
DLM Modeled Persistence Object (MPO)
DLM Generated Views (Database Union-All View [GVIEW] and HANA Calculation Scenario or DLM Pruning View [PVIEW]) for access to distributed data sets
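To picture what the generated union-all view provides, here is a small, purely illustrative SQLite sketch (all names hypothetical, and SQLite standing in for HANA): applications query one view name and need not know in which tier a row currently resides.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders_hot  (id INTEGER, region TEXT)")
con.execute("CREATE TABLE orders_cold (id INTEGER, region TEXT)")
con.execute("INSERT INTO orders_hot  VALUES (1, 'EMEA'), (2, 'APJ')")
con.execute("INSERT INTO orders_cold VALUES (3, 'EMEA')")

# Union-all view spanning both tiers: one logical table for consumers,
# regardless of where each row has been relocated to.
con.execute(
    "CREATE VIEW orders_all AS "
    "SELECT * FROM orders_hot UNION ALL SELECT * FROM orders_cold"
)

emea_count = con.execute(
    "SELECT COUNT(*) FROM orders_all WHERE region = 'EMEA'"
).fetchone()[0]
print(emea_count)  # finds EMEA rows in both the hot and the cold tier
```

In HANA, the generated pruning view goes a step further than this sketch: it uses the relocation rule's predicate to skip tiers that cannot contain matching rows.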
Better Performance and Lower TCO
Data Lifecycle Manager allows organizations to define a data temperature tiering strategy that optimizes data processing performance by offloading data from SAP HANA persistence to lower-TCO storage destinations. For SAP HANA native use cases, DLM provides a tool-based approach to modeling aging rules on tables, relocating aged data to optimize the memory footprint of SAP HANA.
Be sure to check out my in-depth look at DLM artifacts and data relocation use cases, and please feel free to send any questions or comments my way.