RobertWaywell
Product and Topic Expert

SAP HANA Cloud, Data Lake Relational Engine ("HDLRE") is the cloud-native version of SAP IQ and is the ideal choice for customers who want to migrate their on-premise SAP IQ systems to the cloud. While working through migration scenarios with SAP IQ customers, the HDLRE team identified a core set of features that further simplify and support the migration experience. One of these feature requests was the ability to assign specific workloads to different servers, or worker nodes, in the multiplex system.

A common example is running analytic queries and data load operations on separate worker nodes.

The standard way to connect to an HDLRE instance is to use the SQL endpoint, which you can copy from the SAP HANA Cloud cockpit.

[Screenshot: Copy SQL Endpoint]

The SQL endpoint string uses the format <instance-id-guid>.iq.hdl.<landscape>, where:

  • <instance-id-guid> is the unique identifier of your instance
  • iq.hdl is the system identifier for data lake Relational Engine
  • <landscape> is the landscape domain and port number; the port for data lake is always 443

A complete SQL endpoint would look like:

<instance-id-guid>.iq.hdl.prod-us10.hanacloud.ondemand.com:443
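
If you are connecting from a script rather than from DBISQL, the SQL endpoint simply becomes the host portion of the connection string. The following is a minimal sketch in Python using pyodbc; the driver name ("Sybase IQ"), the HDLADMIN user, the password, and the TLS options are placeholders and assumptions that you will need to adjust for your own data lake client installation and instance. It also queries the IQ global variable @@servername to show which worker node served the connection.

# Minimal sketch: connect to HDLRE through the load-balanced SQL endpoint.
# Driver name, user, password, and TLS options below are placeholders/assumptions;
# adjust them to match the ODBC driver installed by your data lake client.
import pyodbc

sql_endpoint = "<instance-id-guid>.iq.hdl.prod-us10.hanacloud.ondemand.com:443"

conn_str = (
    "DRIVER={Sybase IQ};"                 # name registered by the data lake client (assumption)
    f"HOST={sql_endpoint};"               # load-balanced SQL endpoint, port 443 included
    "UID=HDLADMIN;"                       # placeholder user
    "PWD=<password>;"                     # placeholder password
    "ENC=TLS(tls_type=rsa;direct=yes)"    # TLS options; see the data lake client documentation
)

conn = pyodbc.connect(conn_str)
cur = conn.cursor()
cur.execute("SELECT @@servername")        # reports the worker node serving this connection
print(cur.fetchone()[0])
conn.close()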

The SQL endpoint is load balanced across all of the worker nodes in the HDLRE instance. As long as your system is configured with two or more worker nodes, then even if one worker node fails and is in the process of being restarted, the SQL endpoint will direct incoming connection requests to another available worker node.

Using DBISQL as an example database client, if I create multiple connections to my HDLRE instance using the SQL endpoint, I can see that different connections can be assigned to different worker nodes. In the screenshots below, the first DBISQL session is connected to worker node "mpx-writer-0-0" and the second session is connected to worker node "mpx-writer-1-0".

[Screenshot: DBISQL connection dialogue]

[Screenshot: DBISQL connected to mpx-writer-0-0]

[Screenshot: DBISQL connected to mpx-writer-1-0]
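
The same behaviour can be observed from a script. The sketch below opens a few connections through the load-balanced SQL endpoint and prints the worker node each one landed on; it uses the same placeholder connection string as the earlier snippet and assumes the IQ global variable @@servername, which reports the name of the node serving the current connection.

# Sketch: open several connections through the load-balanced SQL endpoint and
# print which worker node each connection was assigned to.
import pyodbc

# Same placeholder connection string as in the previous snippet (assumptions noted there).
conn_str = (
    "DRIVER={Sybase IQ};"
    "HOST=<instance-id-guid>.iq.hdl.prod-us10.hanacloud.ondemand.com:443;"
    "UID=HDLADMIN;PWD=<password>;"
    "ENC=TLS(tls_type=rsa;direct=yes)"
)

for i in range(4):
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    cur.execute("SELECT @@servername")
    print(f"connection {i}: {cur.fetchone()[0]}")   # e.g. mpx-writer-0-0, mpx-writer-1-0, ...
    conn.close()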

The fault tolerance provided by connecting through the load-balanced HDLRE SQL endpoint is a valuable feature, and connecting to HDLRE using the SQL endpoint continues to be the recommended approach in most cases. However, connections via the SQL endpoint don't give you any control over which worker node you will be connected to, and there are cases where a user may want to create connections to a specific worker node.

With the SAP HANA Cloud 2024-QRC4 release, users now have the option of connecting directly to a specific worker node. This is done by appending the worker node number to the instance ID, using the following format:

<instance-id-guid>-writer-<# of worker node>.iq.hdl.prod-us10.hanacloud.ondemand.com:443

The numbering of the worker nodes starts from zero, so if your HDLRE instance is provisioned with 2 worker nodes, the workers are numbered 0 and 1. The connection endpoints for each worker node are:

<instance-id-guid>-writer-0.iq.hdl.prod-us10.hanacloud.ondemand.com:443

and

<instance-id-guid>-writer-1.iq.hdl.prod-us10.hanacloud.ondemand.com:443
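
Because the per-node endpoint is derived mechanically from the instance ID and the worker node number, it is easy to construct in client code. Here is a small sketch; the landscape value is simply the prod-us10 example used throughout this post.

# Sketch: build the direct endpoint for a given worker node from the instance ID.
# Worker node numbering starts at 0.
def worker_endpoint(instance_id: str, node: int,
                    landscape: str = "prod-us10.hanacloud.ondemand.com",
                    port: int = 443) -> str:
    return f"{instance_id}-writer-{node}.iq.hdl.{landscape}:{port}"

# For an instance provisioned with 2 worker nodes this produces the two endpoints above:
for n in range(2):
    print(worker_endpoint("<instance-id-guid>", n))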

Using the DBISQL connection dialogue as an example, if I want to connect specifically to "writer-1", it would look like this:

[Screenshot: DBISQL connection dialogue, connecting to a specific worker node]
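
And for completeness, here is a rough programmatic equivalent of that dialogue: connecting to the writer-1 endpoint directly and confirming which node answered. As in the earlier sketches, the driver name, credentials, and TLS options are placeholders you would adapt to your own environment.

# Sketch: connect to worker node 1 through its direct endpoint and confirm
# which node answered. Driver name, credentials, and TLS options are placeholders.
import pyodbc

writer_1_endpoint = "<instance-id-guid>-writer-1.iq.hdl.prod-us10.hanacloud.ondemand.com:443"

conn = pyodbc.connect(
    "DRIVER={Sybase IQ};"
    f"HOST={writer_1_endpoint};"
    "UID=HDLADMIN;PWD=<password>;"
    "ENC=TLS(tls_type=rsa;direct=yes)"
)
cur = conn.cursor()
cur.execute("SELECT @@servername")
print(cur.fetchone()[0])                  # expected to report the writer-1 node, e.g. mpx-writer-1-0
conn.close()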