The upcoming Vora 1.3 release will initially be certified for installation in a MapR environment on RHEL 6.7. The following blog details how to install Vora 1.3 on a MapR 5.2 Hadoop cluster running on SLES 12.1.
First, a disclaimer: the following steps describe how to install Vora in a currently unsupported environment. To be properly supported, products must be installed in a certified environment as described in the product documentation and the Product Availability Matrix (PAM -
https://support.sap.com/pam )
Now to the details.
This was tested using a 4-node SLES 12.1 cluster.
The entry-point to the MapR 5.2 installation guide is
http://maprdocs.mapr.com/home/AdvancedInstallation/c_get_started_install.html
As there is no MapR installer available for SLES (it may run but then fails as no SLES repository is set up), I followed the steps initially documented under 'Installing without the MapR Installer' using the MapR Internet Repository.
For now, configure the MapR cluster in YARN mode. For my simple four-node cluster, I had the following MapR roles (run 'ls /opt/mapr/roles' on each node to see its current roles):
Node | Roles
01 | cldb, fileserver, gateway, historyserver, nodemanager, webserver, zookeeper
02 | fileserver, gateway, hivemetastore, nodemanager, resourcemanager, zookeeper
03 | fileserver, hiveserver2, hivewebhcat, nodemanager, zookeeper
04 | fileserver, gateway, nfs, nodemanager, zookeeper
The MapR Warden service (mapr-warden) can be used to stop and start the cluster components on each node, e.g. 'service mapr-warden stop|start|restart'.
Once the MapR cluster is installed and running, Vora is added to it as follows:
- Obtain the distribution .tar.gz file (e.g. SAPHanaVora-1.3.xx-mapr.tar.gz) and copy to each node in the MapR cluster that Vora will run on.
- On each node, untar the distribution file into a temporary working directory (see the example after the file list below). This should provide several RPMs plus supplementary files and scripts:
genpasswd.sh
mapr-vora-manager-1.3.xx...rpm
mapr-vora-manager-master-1.3.xx...rpm
mapr-vora-manager-worker-1.3.xx...rpm
vora-base-1.3.xx...rpm
vora-deps-1.3.xx..centos...rpm
vora-deps-1.3.xx..redhat...rpm
(as this is SuSE, the dependency files will not be used here).
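As a minimal sketch of the extraction step (the /tmp working directory is my own choice, and the actual version suffix in the archive name will differ):
# copy the archive to the node, then unpack it into a temporary directory
mkdir -p /tmp/vora-install
tar -xzf SAPHanaVora-1.3.xx-mapr.tar.gz -C /tmp/vora-install
ls /tmp/vora-install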
- Add the following line to the /etc/sudoers file (using 'visudo'):
mapr ALL=(ALL) NOPASSWD:ALL
(This ensures that the 'mapr' user is able to start the Vora components).
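To confirm the entry took effect, you can list the sudo privileges granted to the mapr user; this check is my own convenience, not part of the official steps:
# run as root: the output should include NOPASSWD: ALL for the mapr user
sudo -l -U mapr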
- Prepare for Vora installation on each node by creating the Vora (and SparkController) UIDs and GIDs per the MapR section of the Vora Installation and Administration Guide (section 2.5.2). The actual format of the commands is:
#add the vora user and group
groupadd vora --gid 44936
useradd -m -u 44936 -g vora vora
#optional if also adding the SparkController
groupadd sapsys --gid 5001
useradd -m -u 44937 -g sapsys hanaes
#add hanaes as a member of the hdfs group
usermod -aG hdfs hanaes
Do this on all nodes in the cluster so that the UIDs/GIDs are consistent in MapR-FS.
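A simple way to verify the IDs match across nodes (my own sanity check, not a step from the guide):
# run on each node; the uid/gid values should be identical everywhere
id vora
id hanaes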
- Generate the Vora Manager/Tools userid and encrypted password using the 'genpasswd.sh' utility from the Vora distribution, according to the Vora Installation and Administration Guide (section 2.5.2) and copy the generated 'htpasswd' file to the required directories.
- Depending on the role (vora-manager-master or vora-manager-worker), only certain RPMs will be installed. The vora-base and vora-manager packages contain the Vora service components, while the vora-manager-master and vora-manager-worker packages set the MapR roles for the nodes on which they are installed. This is described more fully in the Vora Installation and Administration Guide (section 2.5.3).
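If you want to see exactly which files a given package will lay down before installing it, the standard rpm query options can be used (just a convenience on my part; the filename shown is illustrative):
# list the contents and metadata of an uninstalled rpm
rpm -qlp vora-base-1.3.xx-GA.x86_64.rpm
rpm -qip vora-base-1.3.xx-GA.x86_64.rpm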
- For all Vora worker nodes, use zypper to install the vora-base, vora-manager and vora-manager-worker components:
#vora-base may have dependencies on vora-deps.
#Press '2' to break any dependency and allow the install to proceed.
zypper install vora-base-1.3.xx-GA.x86_64.rpm
zypper install mapr-vora-manager-1.3.xx-GA.x86_64.rpm
zypper install mapr-vora-manager-worker-1.3.xx-GA.x86_64.rpm
# reconfigure mapr
/opt/mapr/server/configure.sh -R -no-autostart
- For all Vora master nodes, use zypper to install the vora-base, vora-manager and vora-manager-master components:
#vora-base may have dependencies on vora-deps.
#Press '2' to break any dependency and allow the install to proceed.
zypper install vora-base-1.3.xx-GA.x86_64.rpm
zypper install mapr-vora-manager-1.3.xx-GA.x86_64.rpm
zypper install mapr-vora-manager-master-1.3.xx-GA.x86_64.rpm
#currently, worker role is also required to allow deployment of vora services to master nodes
zypper install mapr-vora-manager-worker-1.3.xx-GA.x86_64.rpm
# reconfigure mapr
/opt/mapr/server/configure.sh -R -no-autostart
- The roles are now:
Node | Roles
01 | cldb, fileserver, gateway, historyserver, nodemanager, webserver, zookeeper, vora-manager-master, vora-manager-worker
02 | fileserver, gateway, hivemetastore, nodemanager, resourcemanager, zookeeper, vora-manager-worker
03 | fileserver, hiveserver2, hivewebhcat, nodemanager, zookeeper, vora-manager-worker
04 | fileserver, gateway, nfs, nodemanager, zookeeper, vora-manager-worker
- The Installation and Administration Guide describes using the /opt/mapr/vora/service-control.sh script to deploy the master and worker nodes. Currently these scripts are written for RHEL 6.x. For now, we will rely on the /opt/mapr/roles entries and /opt/mapr/server/configure.sh to allow the mapr-warden service to start the Vora Manager master and worker services. On each node (starting with the node running cldb), restart the mapr-warden service:
service mapr-warden restart
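As a sketch of that rolling restart (hostnames are placeholders; node01 is assumed to be the CLDB node, so it is restarted first):
# restart the Warden on each node in turn, CLDB node first
for host in node01 node02 node03 node04; do
  ssh $host 'service mapr-warden restart'
done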
- Open the MapR Web Dashboard GUI (https://<maprwebserver>:8443). You will now see 'Vora Manager Master' listed as a service.
- Select 'Vora Manager Master' and, when the tab appears, press the 'Popout page into a new tab' button to open the Vora Manager interface, where you can configure, assign nodes to, and run the various Vora services (see the Vora Installation and Administration Guide). The vora-manager-worker role was added to the node holding the vora-manager-master role so that Vora services can also be deployed to the master node.
When restarting the cluster, start the node running the CLDB first and then the others. You will still have to open Vora Manager and start the Vora services.
Chris