In my previous blog, within the context of a Single Global SAP Instance, I highlighted the significance of choosing the correct Azure region for an SAP system that will be accessed by a geographically dispersed user base, and touched on a global network strategy, supported by Azure technologies, to simplify global deployments.

For those customers who are implementing a single global instance of SAP S/4HANA, or who are planning deployment of SAP on Azure with a geographically dispersed userbase, this point-of-view article goes a step further.

This document aims to guide you through a best-practice process for gathering the inputs that define your Azure Region Strategy, and provides a practical analysis method to quantify your Azure region choice for global SAP solutions.

The document covers:

  • More detail on the criteria to evaluate the choice of appropriate Azure Region

  • The Analysis Model and associated weightings

  • Colour-Coding of an Azure Region (including instructions on how to collect latency data)

  • Collating the results

In this blog, we will use the example of Contoso Ltd (briefly touched on in the previous blog) and show how they successfully quantified their region choice as part of their SAP on Azure Business Case.

To recap: Contoso Ltd is a fictional company that is planning to centralize all Finance Operations in a new single-instance deployment of SAP S/4HANA. They operate across the globe. Their data sovereignty and security policies dictate that their data cannot be stored in the Asia-Pacific geo.

Contoso requires True Regional DR, whilst minimizing Cloud Infrastructure costs. Their S/4HANA sizing suggests a HANA requirement of 2.8TB.

This requirement can easily be served by Azure M-Series virtual machines. Their front-end strategy is to minimize dependence on SAP GUI.

1. Evaluation Criteria

Choosing the most suitable Azure Region for a Single Global SAP instance is a daunting task, but the following evaluation criteria could inform your strategy.

Details on each criterion are provided in my previous blog. Here, each criterion is explained within the context of quantifying the region choice. Not every criterion is equally important, and some shape our strategy more than others. Below is an overview of which criteria will form part of the region analysis.

1.1 Data Residency and Security

This input criterion has the power to override any other success criteria. Neither Microsoft nor the author of this blog can be involved with customer decisions, and it is highly recommended that every customer evaluates all country- or region-specific requirements to identify any blockers that would prohibit a specific Azure Region. We will therefore omit this criterion from the scoring and weighting but keep the ‘override’ control in place.

1.2 Business Continuity

Based on Contoso Ltd’s requirement for true Regional DR, the recommendation is to leverage Microsoft’s standard regional deployment architecture pattern (Availability Sets for HA – 99.95%, with DR across Azure paired regions).

There are no constraints in this evaluation criterion, and all options should score evenly, unless SAP-certified building blocks are missing in the default paired region.

1.3 Network Connectivity

Contoso requires the best possible user experience and integrations that operate as normal. This input will therefore inform our strategy via two evaluation criteria: End-User Connectivity and Azure Backbone Connectivity (or inter-region connectivity).

  1. End-User Network connectivity tests will need to be conducted to collect sample data points. These data points will be used in the options analysis under ‘End-User Latencies’. This analysis will resemble a ‘worst case’ scenario.

  2. Azure inter-region latencies will be evaluated under ‘Azure Backbone Latencies’. This analysis will resemble a ‘best case’ scenario.

1.4  SAP Certified Building Blocks

Contoso’s S/4HANA solution will be built according to the reference architecture for SAP HANA on Azure Virtual Machines. Initial sizing suggests roll-out requirements based on Mv1, with projected growth indicating a requirement for Mv2 virtual machines after 3 years.

This criterion will be used in our analysis, but it will not be scored. See this criterion as a ‘filter’ to reduce the potential number of Azure Region options and to identify the sub-set of Paired Regions that will be scored.

For a region pair to qualify for analysis, both the primary and paired region should be ‘SAP-capable’ (i.e. have Mv1 and Mv2 Series virtual machines available). To view all Azure regions and the services they run, please visit:

The following table lists the filtering process based on SAP Certified Building Blocks for Contoso.

| Geo | Primary Region | Paired Region | SAP Capable |
| --- | --- | --- | --- |
| Asia-Pacific | East Asia (Hong Kong) | Southeast Asia (Singapore) | East Asia Mv1 only |
| Australia | Australia East | Australia Southeast | Yes |
| Australia | Australia Central | Australia Central 2 | Mv1 only |
| Brazil | Brazil South | South Central US | Yes |
| Canada | Canada Central | Canada East | Mv1 only |
| China | China North | China East | Mv1 only |
| China | China North 2 | China East 2 | Mv1 only |
| Europe | North Europe (Ireland) | West Europe (Netherlands) | Yes |
| France | France Central | France South | No |
| Germany | Germany Central | Germany Northeast | No |
| India | Central India | South India | Mv1 only |
| India | West India | South India | Mv1 only |
| Japan | Japan East | Japan West | Japan West Mv1 only |
| Korea | Korea Central | Korea South | Mv1 only |
| Americas | East US | West US | Yes |
| Americas | East US 2 | Central US | Central US Mv1 only |
| Americas | North Central US | South Central US | Yes |
| Americas | West US 2 | West Central US | No |
| Norway | Norway East | Norway West | Mv2 only |
| South Africa | South Africa North | South Africa West | Mv1 only |
| Switzerland | Switzerland North | Switzerland West | Mv1 only |
| UK | UK West | UK South | UK West Mv1 only |

For illustration purposes, after applying our Mv1 and Mv2 ‘filters’, we now have a subset of 4 geos and 8 regions to evaluate:

Selected Region Scope for Evaluation

| Geo | Region 1 | Region 2 |
| --- | --- | --- |
| Asia Pacific | East Asia (Hong Kong) | Southeast Asia (Singapore) |
| Australia | Australia East | Australia Southeast |
| Europe | North Europe (Ireland) | West Europe (Netherlands) |
| Americas | East US | West US |
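The filtering step described above can be sketched in a few lines of code. This is a minimal illustration with a hand-typed toy catalogue (region names and VM families copied from a few rows of the table), not a live query against the Azure availability pages.

```python
# Sketch of the 'SAP-capable' filter: a region pair qualifies only if
# BOTH regions offer the Mv1 and Mv2 virtual machine families.
# The catalogue below is a hand-typed toy sample, not live Azure data.

REQUIRED = {"Mv1", "Mv2"}

vm_families = {
    "Australia East": {"Mv1", "Mv2"},
    "Australia Southeast": {"Mv1", "Mv2"},
    "Canada Central": {"Mv1"},
    "Canada East": {"Mv1"},
}

def sap_capable(pair):
    """True only if every region in the pair offers all required families."""
    return all(REQUIRED <= vm_families.get(region, set()) for region in pair)

pairs = [
    ("Australia East", "Australia Southeast"),
    ("Canada Central", "Canada East"),
]

shortlist = [p for p in pairs if sap_capable(p)]
print(shortlist)  # the Canada pair is filtered out (Mv1 only)
```

The same pattern scales to the full table: maintain the catalogue, list the paired regions, and let the filter produce the evaluation subset.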

1.5 Business Model

This criterion will be omitted for the evaluation. A single global instance implies a new centralized business model.

1.6 Cost of Ownership

This criterion will form part of the evaluation. However, a detailed TCO analysis is not in scope for this specific activity and is usually a separate deliverable under a more detailed Architecture Design Session and TCO analysis.

1.7  User Density

For the purpose of this analysis, we are introducing a new metric that we will use to quantify the region choice. User density is simply the number of users in a specific geography. This will be included in the evaluation. The rationale here is that a region in closer proximity to larger user-bases should score higher in the analysis.

2. Analysis Model

To quantify an Azure region choice, we will use a simple, but effective mechanism to guide our decision-making. We will leverage Pugh’s Decision Matrix based on weighted scoring.

As discussed in Section 1, the following criteria will be used to evaluate the best suited region or regions. Note this is a mathematical exercise, and the results can be overridden by Data Sovereignty or Security Principles.

The following graphic summarizes how to build the base analysis model from a short-list of Azure regions, measured against the specific evaluation criteria.

The 3 Azure Regions with the highest weighted score will form the short-list of preferred deployment choices (unless overridden by data considerations).
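The mechanics of such a weighted decision matrix are easy to compute. The sketch below uses hypothetical unit scores for two regions purely to illustrate the calculation; the criterion names and scores are placeholders, not Contoso's actual results.

```python
# Minimal weighted (Pugh-style) decision matrix.
# Weights follow the Low=1 / Medium=2 / High=3 scheme from the blog;
# the unit scores are illustrative placeholders only.

weights = {
    "business_continuity": 2,  # Medium
    "backbone_latency": 3,     # High
    "end_user_latency": 3,     # High
    "user_density": 2,         # Medium
    "tco": 1,                  # Low
}

unit_scores = {
    "West US": {"business_continuity": 2, "backbone_latency": 3,
                "end_user_latency": 3, "user_density": 3, "tco": 3},
    "East US": {"business_continuity": 2, "backbone_latency": 2,
                "end_user_latency": 3, "user_density": 3, "tco": 3},
}

def weighted_total(scores):
    """Multiply each unit score by its criterion weight and sum."""
    return sum(weights[criterion] * points for criterion, points in scores.items())

ranked = sorted(unit_scores, key=lambda r: weighted_total(unit_scores[r]),
                reverse=True)
print([(r, weighted_total(unit_scores[r])) for r in ranked])
```

Keeping the weights in one dictionary makes it trivial to rerun the whole analysis when the business changes a criterion's significance.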

3. Significance of Criteria: Weightings and Scoring

3.1 Weighting and Scoring

All weightings in the following section are subjective, and it is highly recommended that anyone evaluating an Azure Region choice spends time and effort deciding how significant each criterion is to their business. For some customers, cost of ownership could be more significant than for others; other customers might place a higher emphasis on user densities.

Contoso decided to introduce 3 categories of significance: Low, Medium and High, each with a simple multiplier:

  • Low Weighted Criteria has a multiplier of 1

  • Medium Weighted Criteria has a multiplier of 2

  • High Weighted Criteria has a multiplier of 3

Note that each customer should decide what significance and weighting to apply based on their unique functional and non-functional requirements.

The following section highlights the significance of each evaluation criterion as per Contoso’s requirements.

3.1.1 Business Continuity:

Weighting: Medium

Justification: True regional DR using Azure Paired Regions is a key Non-Functional Requirement


| SAP Capable Region | Points |
| --- | --- |
| Fully Compliant | 2 |
| Partially Compliant | 1 |
| Non-Compliant | 0 |

3.1.2 Network Connectivity – Azure Backbone Latencies

Weighting: High

Justification: End-user performance across the globe is very important. Contoso requires SAP traffic to traverse the Azure backbone securely.

Scoring: Azure inter-region latencies will be scored according to a simple colour-coded region analysis. The colour-coding is explained a bit later.

| Region Colour | Points |
| --- | --- |
| Red | 1 |
| Amber | 2 |
| Green | 3 |

3.1.3 Network Connectivity – End User Latencies

Rating: High

Justification: End-user latencies represent our worst-case scenario and are very important to Contoso.

Scoring: End-user latencies will be scored according to a simple colour-coded region analysis. The colour-coding is explained a bit later.

| Region Colour | Points |
| --- | --- |
| Red | 1 |
| Amber | 2 |
| Green | 3 |

3.1.4 User Density

Rating: Medium

Justification: User density will form part of the evaluation on a geographical level, not a regional level.  The following table illustrates the Contoso User distribution based on Geographical Area.

| Geography | User-base | % of Users |
| --- | --- | --- |
| Asia (Hong Kong) | 50 | 4 |
| Southeast Asia | 700 | 28 |
| Americas | 900 | 36 |
| Europe | 500 | 20 |
| Australia | 300 | 12 |

Scoring: For Contoso’s user density, we will use the following points system to assign scores to region pairs.

| Density | Points |
| --- | --- |
| Less than 15% of users | 1 |
| Between 15% and 25% | 2 |
| More than 25% of users | 3 |
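As a quick sketch, the bands above translate directly into a small helper (the function name is my own; the treatment of exactly 15% or 25% as the middle band is an assumption, since the table does not say which band the boundaries belong to):

```python
def density_points(pct_users):
    """Map a geography's share of the user base to points.
    Boundary values (exactly 15% or 25%) are assumed to fall in the
    middle band, which the table leaves ambiguous."""
    if pct_users > 25:
        return 3
    if pct_users >= 15:
        return 2
    return 1

# Contoso's distribution from the table above
for geo, pct in [("Americas", 36), ("Southeast Asia", 28),
                 ("Europe", 20), ("Australia", 12), ("Asia (Hong Kong)", 4)]:
    print(geo, density_points(pct))
```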

3.1.5 Cost of Ownership

Rating: Low

Justification: TCO is important to Contoso, but not as important as providing low-latency seamless and pleasant user experiences.

Scoring: Initial TCO scoring will be based on a rule of thumb to create the shortlist of regions. It is highly recommended to complete a detailed consumption model once the shortlist has been established. The scoring for our analysis model does not consider exchange-rate fluctuations or customer-specific pricing.

| Geography | Points |
| --- | --- |
| Asia | 1 |
| Europe | 2 |
| US | 3 |

If the scoring is to be challenged, a new scoring mechanism can easily be incorporated. Public statistics are available here:

4. Colour-Coding Regions for Latencies

The colour-coding process allows us to assign a simple score to a specific Azure region based on a ‘pseudo-triangulation’ and averaging of latencies.

4.1 Azure Backbone Latencies

The subset of regions that Contoso is evaluating was analyzed and color-coded as follows, based on the information Microsoft published here:

Step 1: Contoso created a similar matrix based on their subset of regions for evaluation. Each latency was colour-coded based on the following rationale:

| | Red | Yellow | Green |
| --- | --- | --- | --- |
| Weighting | 1 | 2 | 3 |
| Latency | >180ms | 100-180ms | <100ms |

The resulting output:

Step 2: Count the Latency colours

| Region | Red | Yellow | Green |
| --- | --- | --- | --- |
| East Asia | 3 | 3 | 1 |
| Southeast Asia | 1 | 3 | 3 |
| Australia East | 3 | 2 | 2 |
| Australia Southeast | 3 | 2 | 2 |
| East US | 4 | 0 | 3 |
| West US | 0 | 7 | 1 |
| North Europe | 3 | 2 | 2 |
| West Europe | 3 | 2 | 2 |

Step 3: Weigh up your options and colour-code your result-set

We use a weighted score (again), where red latencies are least favorable (x1), amber latencies the middle ground (x2) and green latencies are most favorable (x3). The result-set is split into the top, medium and lowest scores, and then a colour assigned accordingly.

| Region | Red | Yellow | Green | Total | Region Colour |
| --- | --- | --- | --- | --- | --- |
| East Asia | 3 | 6 | 3 | 12 | Red |
| Southeast Asia | 1 | 6 | 9 | 16 | Green |
| Australia East | 3 | 4 | 6 | 13 | Amber |
| Australia Southeast | 3 | 4 | 6 | 13 | Amber |
| East US | 4 | 0 | 9 | 13 | Amber |
| West US | 0 | 14 | 3 | 17 | Green |
| North Europe | 3 | 4 | 6 | 13 | Amber |
| West Europe | 3 | 4 | 6 | 13 | Amber |
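The Step 2 counts feed directly into the Step 3 totals. The snippet below reproduces three rows of the weighted-score calculation from the counts above.

```python
# Step 3 in code: weight the red/amber/green counts (x1 / x2 / x3)
# and total them per region. Counts are copied from the Step 2 table.
counts = {
    "Southeast Asia": (1, 3, 3),  # (red, amber, green)
    "East US": (4, 0, 3),
    "West US": (0, 7, 1),
}

def weighted_score(red, amber, green):
    """Red latencies are least favourable (x1), amber the middle
    ground (x2), green most favourable (x3)."""
    return red * 1 + amber * 2 + green * 3

totals = {region: weighted_score(*c) for region, c in counts.items()}
print(totals)  # {'Southeast Asia': 16, 'East US': 13, 'West US': 17}
```

Splitting the resulting totals into top, middle and bottom groups then yields the region colour, exactly as in the table.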


4.2 End User Latencies

To colour-code Azure regions according to end-user latencies, the process will differ slightly from the colour-coding process for Azure inter-region latencies. We will use average latencies measured from end-user locations to each of the regions in our subset that we are analyzing.

4.2.1 Instructions to Collect Connectivity Data

This section provides a step-by-step process on how to collect latency statistics from various Contoso locations to the various Azure regions.

a)     Azure Speed (public latency test tool)

This article uses examples of the data captured and does not represent any specific real-life scenario. It does, however, give the reader the ability to conduct a real-life exercise by measuring actual latencies using Azure Speed, a publicly available latency test utility.

The same results can be achieved by using niping, but that requires a niping server running in each region you would like to analyze. For more detail on niping, please refer to SAP Note 500235 - Network Diagnosis with NIPING

b)     Prepare the data capture

The data capture needs to be executed from a collection of representative user locations, such as headquarters, remote offices and even mobile workstations (over your corporate VPN, as an example):

  • Select the Regions we have shortlisted above in Section 1.4

  • Let the latency test run for approximately two or three minutes

  • Scroll down to the ‘Latency Test’ Results

  • Lastly, record the results in the ‘End User Data Collection’ sheet as follows, before colour-coding based on the average latencies.

Note – colour-coding end-user latencies based on averages assumes the following thresholds of average latencies:

  • Avg Less than 160ms: Green

  • Avg Between 160ms and 200ms: Amber

  • Avg More than 200ms: Red
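The averaging and colour-coding step above can be sketched as follows; assigning the exact boundary values (160 ms and 200 ms) to the amber band is an assumption.

```python
from statistics import mean

def end_user_colour(samples_ms):
    """Average one region's sampled end-user latencies and colour-code
    the result per the thresholds above. Boundary values (exactly
    160 ms or 200 ms) are assumed to be amber."""
    avg = mean(samples_ms)
    if avg < 160:
        return "green"
    if avg <= 200:
        return "amber"
    return "red"

# Hypothetical samples collected from the latency test tool
print(end_user_colour([120, 140, 150]))  # green (avg ~136.7)
print(end_user_colour([170, 190]))       # amber (avg 180)
print(end_user_colour([210, 250]))       # red (avg 230)
```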

6. Results

6.1 Results disclaimer:

The completed analysis model will contain weighted scores for each region. Please note that neither Microsoft, nor the author of this blog can take responsibility or accountability for the accuracy of the results, as the output of the model greatly depends on customer specific input and weightings.

This blog is purely providing an evaluation model for Contoso to own and use. The majority of the input and scoring, as well as agreeing the specific weightings will fall under Contoso’s responsibility.

6.2 Contoso’s Region Analysis Unit Scores:

After scoring each region against the evaluation criteria, it is important to record your unit scores. The following graphic illustrates how Contoso has scored each region:

6.3 Contoso’s Region Analysis Weighted Scoring Results:

Contoso spent a lot of time internally evaluating the significance of each evaluation criterion and assigning multipliers to calculate the weighted scores for each Azure Region. They have also activated the ‘override’ on data sovereignty and security, based on their internal data classification policies, which prohibit data from being stored in Southeast Asia.

The results have been calculated, and based on the weighted scoring, West US scored the highest, followed by East US and then Southeast Asia (which, incidentally, has been vetoed by the data sovereignty override). The Board has approved West US as the Primary Region, with its default paired region, East US, for disaster recovery.

7. Conclusion

Selecting the best Azure region for your SAP deployment does not have to be rocket science. Using a simple weighted scoring analysis, it is now very easy to quantify an Azure Region choice based on multiple criteria, each of which has a different impact on your business.

8. Postscript

The example spreadsheet used in Contoso’s region analysis can be downloaded from GitHub here. Customers can use it as a starting point for planning an Azure Region Strategy for a globally dispersed user base.