Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
This blog describes the installation of SAP NetWeaver- or S/4 HANA-based systems in Windows failover clusters that have more than two cluster nodes.

The SWPM (SAPinst) tool takes care of the complete installation and configuration, but a few manual steps are still necessary. Important for operations is the (A)SCS instance, a single point of failure, which contains the message server and the standalone enqueue server.

The differences between the old standalone enqueue server and the new one are explained in this blog:

Naming conventions used in this blog:

Release        Description                    Short name  Executable name
NetWeaver 7.x  Standalone Enqueue Server      ENSA1       enserver.exe
               Enqueue Replication Server     ERS1        enrepserver.exe
S/4 HANA       Standalone Enqueue Server 2    ENSA2       enq_server.exe
               Enqueue Replication Server 2   ERS2        enq_replicator.exe
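For scripting against these components, for example checking which executable belongs to which instance type in a monitoring script, the naming conventions can be captured as a small lookup. This is only a sketch; the mapping itself is taken directly from the table above:

```python
# Enqueue components and their Windows executables, as listed in the table above.
ENQUEUE_COMPONENTS = {
    "ENSA1": {"release": "NetWeaver 7.x", "description": "Standalone Enqueue Server",      "executable": "enserver.exe"},
    "ERS1":  {"release": "NetWeaver 7.x", "description": "Enqueue Replication Server",     "executable": "enrepserver.exe"},
    "ENSA2": {"release": "S/4 HANA",      "description": "Standalone Enqueue Server 2",    "executable": "enq_server.exe"},
    "ERS2":  {"release": "S/4 HANA",      "description": "Enqueue Replication Server 2",   "executable": "enq_replicator.exe"},
}

def executable_for(short_name: str) -> str:
    """Return the executable name for a given enqueue component short name."""
    return ENQUEUE_COMPONENTS[short_name]["executable"]
```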


It's possible to upgrade a NetWeaver 7.52 system to ENSA2 / ERS2, but this must be done manually. See SAP Note 2639281:


The SWPM tool must be used for the installation on every cluster node. If you want to install an (A)SCS instance in a three-node cluster, for example, this is the procedure:

  1. Install the Windows failover cluster with three nodes

  2. Start the SWPM tool on any of the three nodes and choose the option "First Cluster Node"

  3. Start the SWPM tool on the other nodes and choose the option "Additional Cluster Node"

If you add another cluster node to an existing cluster, for example a third node to an existing two-node cluster, you have to run the SWPM tool with the "Additional Cluster Node" option. Otherwise it is not possible to fail over the SAP cluster groups to the new, third node!
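The consequence of skipping the "Additional Cluster Node" run can be modeled in a few lines. This is a hypothetical sketch of the rule, not SWPM or cluster internals: a SAP cluster group can only fail over to nodes on which SWPM has registered the SAP software.

```python
def possible_failover_targets(cluster_nodes, swpm_installed_nodes, current_node):
    """Nodes the SAP cluster group can fail over to: every cluster node where
    SWPM was run (as "First" or "Additional Cluster Node"), except the node
    the group currently runs on."""
    return [n for n in cluster_nodes
            if n in swpm_installed_nodes and n != current_node]

# Three-node cluster where SWPM was only run on node1 and node2:
targets = possible_failover_targets(
    cluster_nodes=["node1", "node2", "node3"],
    swpm_installed_nodes=["node1", "node2"],
    current_node="node1",
)
# node3 is not a valid target until SWPM "Additional Cluster Node" is run there.
```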


This chapter describes how ENSA and ERS work in general on a Windows failover cluster.

SAP NetWeaver 7.x

For NetWeaver-based systems that use the old enqueue server and enqueue replication server, please watch this short video:

It's possible to install ENSA1/ERS1 on many cluster nodes, but only ONE ERS can be active at one point in time:

Which ERS will be used for replication is configured in the Failover Cluster Manager:

ENSA1/ERS1 at a glance:

  • The old ERS is installed locally on all cluster nodes (in the share "saploc").

  • All ERS instances are "green" in SAP MMC. This only means that the instance is started.

  • Only ONE ERS replicates data with the enqueue server (on Windows this is configured via the "Possible Owners" setting of the SAP resource in the failover cluster).

  • The replication status of the ERS is written to the dev_enrepha trace file.

  • The communication direction is: the ERS instance opens a socket connection to ENSA.

  • The concept can be used on more than two cluster nodes, but SAP MMC does not show which ERS is active or inactive.
    Don't forget to configure the "Possible Owners"!

  • SWPM supports the installation on more than two cluster nodes. However, SWPM adds every new node to the "Possible Owners" list, so the list must be adjusted manually after installation.
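The "Possible Owners" behavior described above can be sketched as a simple selection rule. This is an illustrative model, not the actual cluster implementation: every locally installed ERS1 instance is started and shows "green", but only the instance whose node appears in the "Possible Owners" list of the SAP resource actually replicates.

```python
def replicating_ers(started_ers_nodes, possible_owners):
    """Return the nodes whose ERS1 instance actually replicates with the
    enqueue server. All instances are started ("green" in SAP MMC), but only
    those on nodes in the "Possible Owners" list replicate."""
    return [n for n in started_ers_nodes if n in possible_owners]

# Three-node cluster: ERS1 is installed locally and started on every node,
# but "Possible Owners" was manually reduced to node2 after installation.
active = replicating_ers(
    started_ers_nodes=["node1", "node2", "node3"],
    possible_owners=["node2"],
)
```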


S/4 HANA

For S/4 HANA-based systems that use the new enqueue server and the new enqueue replicator, please watch this short video:

The main difference of the new concept is that only ONE ERS instance runs at any point in time, and this instance is under the control of the cluster:

Both cluster groups should not run on the same node, with one exception:

ENSA2/ERS2 at a glance:

  • ERS2 is now under the control of the cluster

  • ERS2 is still a locally installed instance; installation on a shared disk is not possible

  • Enqueue Server 2 opens a connection to ERS2, not vice versa

  • The cluster software must prevent the (A)SCS and ERS cluster groups from running on the same cluster node

  • SWPM installs the new ENSA2/ERS2 by default, starting with the S/4 HANA 1809 release

  • ENSA2/ERS2 can also be configured for SAP NetWeaver systems based on 7.52

  • SWPM fully supports this scenario and makes the necessary "anti-affinity" configuration changes (a Microsoft cluster mechanism that defines which cluster groups should not run together on the same cluster node)
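The anti-affinity rule from the last bullet can be sketched as a placement check. This is an illustrative model with hypothetical names, not the Microsoft cluster implementation: when the ERS2 group needs a node, the cluster prefers any node that does not currently host the (A)SCS group. Windows anti-affinity is a soft preference, so if no other node is available the groups may still end up together.

```python
def place_ers_group(cluster_nodes, ascs_node):
    """Pick a node for the ERS2 cluster group. Anti-affinity: prefer any node
    that does not currently host the (A)SCS group. As a soft rule, if no
    other node is available, fall back to the (A)SCS node itself."""
    candidates = [n for n in cluster_nodes if n != ascs_node]
    return candidates[0] if candidates else ascs_node

# Three-node cluster, (A)SCS currently on node1 -> ERS2 avoids node1.
ers_node = place_ers_group(["node1", "node2", "node3"], ascs_node="node1")
```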