This document demonstrates the steps required to automate the installation of an SAP HANA scale-out system with non-shared storage. The participating hosts have the mandatory file systems (XFS) mounted locally, such as /usr/sap, /hana/data, and /hana/log.

For this demonstration, two nodes are used, but the design can be scaled up as needed. The only file system shared between the hosts is /hana/shared.

The generated template can be extended with other automation scripts (see the Extension section) to provision the systems automatically using Terraform in AWS.


Generating Template: The template can be generated with the following command:

./hdblcm --action=update --dump_configfile_template=/tmp/installmulti.rsp

Customizing Parameters: Maintain the mandatory parameters of the template as shown below, and update the other parameters based on your requirements.
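As a sketch, the response-file parameters can also be maintained programmatically, since the hdblcm response file is ini-style. The values shown (SID, instance number, additional host) are illustrative assumptions, not taken from a real landscape:

```python
from configparser import ConfigParser

def set_params(rsp_path: str, params: dict[str, str]) -> None:
    """Update selected keys in the [General] section of an hdblcm response file."""
    cfg = ConfigParser()
    cfg.read(rsp_path)
    if not cfg.has_section("General"):
        cfg.add_section("General")
    for key, value in params.items():
        cfg.set("General", key, value)
    with open(rsp_path, "w") as f:
        cfg.write(f)

# Illustrative values only -- adapt SID, instance number and host roles to your landscape.
set_params("/tmp/installmulti.rsp", {
    "sid": "HDB",
    "number": "00",
    "addhosts": "node2:role=worker",
})
```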


Custom_CFG: The parameter custom_cfg needs particular attention. It points to a directory that contains custom configuration (*.ini) files. For a scale-out system on shared storage, the default is basepath_shared=yes, and not maintaining this value correctly will result in a failed installation. Since this blog demonstrates a non-shared storage installation, basepath_shared=no has been set.
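For illustration, a custom global.ini placed in the custom_cfg directory could set the non-shared layout like this (the volume paths are assumptions; replace <SID> with your system ID):

```ini
[persistence]
basepath_datavolumes = /hana/data/<SID>
basepath_logvolumes = /hana/log/<SID>
basepath_shared = no
```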

Executing Installation: ./hdblcm --configfile=/tmp/installmulti.rsp
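For unattended runs, the command line can be assembled by a wrapper script, as in this minimal Python sketch (the --batch flag suppresses hdblcm's interactive prompts; the response-file path matches the template generated above):

```python
import shlex

def build_install_cmd(configfile: str, batch: bool = True) -> list[str]:
    """Assemble the hdblcm command line for an unattended installation."""
    cmd = ["./hdblcm", f"--configfile={configfile}"]
    if batch:
        cmd.append("--batch")  # non-interactive: answer prompts from the response file
    return cmd

# The assembled command would be run from the SAP HANA installation medium.
print(shlex.join(build_install_cmd("/tmp/installmulti.rsp")))
```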

Installation Verification:
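One way to verify a multi-node installation is to check that every host reports GREEN via sapcontrol. The following is a hedged Python sketch that parses GetSystemInstanceList output; the sample text is illustrative, not captured from a real system:

```python
def all_hosts_green(sapcontrol_output: str) -> bool:
    """Return True if every instance line in GetSystemInstanceList
    output reports dispstatus GREEN."""
    lines = [l for l in sapcontrol_output.splitlines() if "," in l]
    # Skip the header row (hostname, instanceNr, ...)
    data = [l for l in lines if not l.startswith("hostname")]
    return bool(data) and all(l.split(",")[-1].strip() == "GREEN" for l in data)

# Illustrative output of: sapcontrol -nr 00 -function GetSystemInstanceList
sample = """\
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
node1, 0, 50013, 50014, 0.3, HDB|HDB_WORKER, GREEN
node2, 0, 50013, 50014, 0.3, HDB|HDB_WORKER, GREEN
"""
print(all_hosts_green(sample))  # -> True
```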


Extension: The generated template, together with the scripts found in the blogs listed below, can be used to provision a multi-node system using Terraform in AWS.

  1. Automating SAP HANA Installation in Minutes (AWS) - Part 1

  2. Automating SAP HANA Installation in Minutes (AWS) - Part

