SAP Cloud for Customer, as a Software as a Service (SaaS) solution, is accessible over the Internet from anywhere in the world through a browser, mobile applications, web services, and other channels.
User requests travel through multiple networks, including the user's private network, the ISP's public network, the Internet, and the SAP data center network, before reaching the Cloud for Customer servers where they are processed.
SAP uses its partner Akamai to speed up the delivery of web content to our customers using IP acceleration and edge caching, which entails storing replicas of static text, image, audio, and video content on multiple servers around the "edges" of the Internet, so that user requests can be served by a nearby edge server rather than by a far-off origin server.
As the number of components and networks and the geographical distance add up, the likelihood of network overhead increases, affecting the end user experience and the perceived performance of the SaaS solution.
Several conditions can affect the performance of Cloud for Customer; they can be consolidated into the following two areas:
Overhead caused by network conditions:
Hardware or software problems, overload, or even operator errors can present different types of symptoms, including but not limited to:
Network latency is the time it takes for a network packet to travel from the sender to the receiver; in some cases the measurement also includes the time it takes for the packet to return to the sender, in which case it is called round trip time, or RTT.
RTT is the time a packet takes to reach the receiver and come back to the sender, as shown in the image below.
SAP Cloud for Customer has embedded in-tenant tools that help measure the user's network latency; details on how to use them can be found here.
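Outside the embedded tools, a rough client-side estimate of RTT can also be taken from the time a TCP connection setup needs, since the three-way handshake takes about one round trip. The Python sketch below is only an illustration under that assumption; the hostname is a placeholder, not an actual tenant URL.

```python
# Rough client-side RTT estimate: a TCP three-way handshake takes about one
# round trip, so the time to open a connection approximates the RTT to the server.
# The hostname is a placeholder; replace it with the server you want to measure.
import socket
import statistics
import time

HOST = "mytenant.example.com"  # placeholder hostname
PORT = 443
SAMPLES = 5

ip = socket.gethostbyname(HOST)  # resolve once so samples measure only the handshake

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((ip, PORT), timeout=10):
        pass  # close immediately; only the connection setup time matters here
    rtts.append((time.perf_counter() - start) * 1000)  # milliseconds

print(f"RTT samples (ms): {[round(r, 1) for r in rtts]}")
print(f"median ~{statistics.median(rtts):.1f} ms")
```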
Multiple factors can cause high network latency, including:
- Inefficient path: On the Internet, routing policies and protocols are based on a number of factors that are not necessarily related to performance. BGP, or Border Gateway Protocol, is the protocol that runs the Internet, exchanging routing information between gateways about which hosts can be reached; the exchanged information can contain a cost associated with the path and other attributes that are used to determine the best available route. BGP sends traffic over the shortest logical path but does not take into consideration how much traffic is already flowing over that path, which can result in path overloading and/or network congestion and, as a consequence, high network latency. The path a request actually takes can be inspected with a tool such as traceroute, as shown in the sketch further below.
- Distance. Network packets travel at roughly 124,000 miles per second (the speed of light in optical fiber), which works out to about 62 miles of distance for every millisecond of RTT; the farther a packet has to travel, the longer it takes and the higher the network latency will be. Pure physics. A back-of-the-envelope calculation is sketched right after this list.
- Busy networks. Queuing effects can be observed on packets traveling over the Internet. A packet usually travels over several public and shared interconnected network components, and a single busy link can be the main cause of high network latency. When a packet arrives at a network component (router, switch, access point, etc.), it has to be processed and retransmitted; depending on each component's processing limit and how many packets arrive concurrently, packets may be queued and latency will increase. In some cases queues overflow and packets are discarded, generating "packet loss" and requiring the packets to be retransmitted, which causes a domino effect since multiple devices now have to process and queue even more packets. In general, high latency combined with high packet loss can cause severe slowdowns in network communications.
- Low network bandwidth. Bandwidth refers to how much data can be transferred from one point to another in a set amount of time. The bigger the bandwidth, the more data can be transferred at the same time. Bandwidth becomes a factor when multiple applications or users share the same "pipe"; the type of application and the amount of data traveling to and from the Internet determine how big the bandwidth should be. If bandwidth becomes a bottleneck, it slows down throughput and, with it, response time. This is similar to a highway during rush hour: in bumper-to-bumper traffic we might drive at 30 mph, while on the same highway at another time we could drive at 70 mph and reach the destination faster. The calculation sketch after this list also shows how bandwidth limits transfer time.
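As a back-of-the-envelope illustration of the distance and bandwidth points above, the small Python sketch below estimates the theoretical minimum RTT for a given distance and the transfer time for a given payload and bandwidth. The distance and payload figures are hypothetical examples, not measurements.

```python
# Back-of-the-envelope estimates for the "Distance" and "Low network bandwidth"
# points above. The distance and payload figures below are illustrative assumptions.

FIBER_SPEED_MILES_PER_MS = 124.0  # ~124,000 miles per second in optical fiber

def min_rtt_ms(distance_miles: float) -> float:
    """Theoretical minimum RTT: the packet covers the distance twice."""
    return 2 * distance_miles / FIBER_SPEED_MILES_PER_MS

def transfer_time_s(payload_mb: float, bandwidth_mbps: float) -> float:
    """Time to push a payload through a link of a given bandwidth (ignoring overhead)."""
    return (payload_mb * 8) / bandwidth_mbps  # megabytes -> megabits, then Mbit/s

if __name__ == "__main__":
    # Hypothetical example: a user ~4,000 miles away from the data center
    print(f"Minimum RTT over 4,000 miles: {min_rtt_ms(4000):.0f} ms")   # ~65 ms
    # Hypothetical example: a 2 MB payload over 10 Mbit/s vs. 100 Mbit/s
    print(f"2 MB over 10 Mbit/s:  {transfer_time_s(2, 10):.2f} s")      # 1.60 s
    print(f"2 MB over 100 Mbit/s: {transfer_time_s(2, 100):.2f} s")     # 0.16 s
```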
For more information about how to troubleshoot high-latency networks for SAP Cloud for Customer, please click here.
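To see the path a request actually takes (the "Inefficient path" point above), a standard tool such as traceroute can be used. The sketch below simply wraps it in Python; it assumes a Unix-like system with traceroute installed (the Windows equivalent is tracert), and the hostname is a placeholder.

```python
# Inspect the network path toward a host to spot unexpectedly long or slow hops.
# Assumes a Unix-like system with the `traceroute` command installed
# (the Windows equivalent is `tracert`). The hostname is a placeholder.
import subprocess

HOST = "mytenant.example.com"  # placeholder hostname

result = subprocess.run(
    ["traceroute", "-m", "30", HOST],  # -m 30: stop after 30 hops
    capture_output=True,
    text=True,
    timeout=120,
)
print(result.stdout)
```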
Configuration issues: Different configuration factors can add overhead to the response time, including but not limited to:
- DNS configuration. Cloud for Customer uses a product from our partner company Akamai to accelerate traffic over the Internet. This product relies on the geolocation of the external DNS server to provide an entry point (or edge server) into the Akamai network, finding the best route from the user to the SAP data center where the customer's tenant is located. To take full advantage of this feature, the DNS server and the user must be in the same geographical area. A common problem occurs when the user resolves DNS queries through a DNS server located in a different geographical region: the DNS server will resolve the Cloud for Customer tenant's DNS name to an IP address close to the DNS server, but not necessarily close to the user, leaving a long last mile between the Akamai edge server and the end user. For example, if a user in Europe accessing an SAP Cloud for Customer tenant in Europe uses a DNS server in America, the user will be directed to an edge server in America, forcing the connection to go from Europe to the Akamai server in America and only then to the tenant in Europe, causing considerable network overhead. One method to identify whether the DNS resolution and TCP routing happen within the same region is explained in this blog; a minimal scripted check is also sketched at the end of this section.
For more information about how Akamai works, please click here.
- Overhead caused by forward proxies. Some on-premise or cloud-based forward proxies have been shown to cause overhead while using SAP Cloud for Customer, observed as high TCP connect times, SSL handshake times, and sometimes even high send or receive times. Ideas on how to identify these types of issues with an HTTP tracing tool like HTTP Watch are explained here.
- Overhead caused by last-mile routing. In some cases, IP routing is not correct in the last mile, either within the customer's private network or from the customer's network to the Akamai server, or Akamai is not enabled for the route to the SAP data center. Some ideas on how to identify this problem are explained in this blog.
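The following Python sketch combines the checks described above, under the assumption that the tenant hostname is known (the hostname below is a placeholder): it shows which IP address the configured DNS server returns for the tenant, and how long the TCP connection and SSL/TLS handshake take from the client, similar to what an HTTP tracing tool would report. The returned IP can then be looked up with a geolocation service to confirm it belongs to an edge server in the user's region.

```python
# Which IP does the configured DNS server return for the tenant, and how long do
# the TCP connect and TLS handshake take from this client?
# The hostname is a placeholder; replace it with your own tenant URL.
import socket
import ssl
import time

HOST = "mytenant.example.com"  # placeholder for your Cloud for Customer tenant hostname
PORT = 443

# DNS resolution: the returned IP should belong to an edge server near the user
t0 = time.perf_counter()
edge_ip = socket.gethostbyname(HOST)
dns_ms = (time.perf_counter() - t0) * 1000
print(f"DNS resolved {HOST} -> {edge_ip} in {dns_ms:.1f} ms")

# TCP connect time (roughly one RTT to the edge server)
t0 = time.perf_counter()
sock = socket.create_connection((edge_ip, PORT), timeout=10)
tcp_ms = (time.perf_counter() - t0) * 1000
print(f"TCP connect:   {tcp_ms:.1f} ms")

# TLS handshake time on top of the open TCP connection
context = ssl.create_default_context()
t0 = time.perf_counter()
tls_sock = context.wrap_socket(sock, server_hostname=HOST)
tls_ms = (time.perf_counter() - t0) * 1000
print(f"TLS handshake: {tls_ms:.1f} ms (protocol {tls_sock.version()})")
tls_sock.close()
```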