Let me briefly introduce myself: My name is Thomas Walter and I’m one of the architects who designed NW Cloud. During the last months I have received a lot of questions about how NW Cloud works behind the scenes. In this blog post I will try to answer the most common ones. While I cannot outline all the details for several reasons (security, competition, and so on), I hope you still get a good overview of what is happening within our infrastructure.
To structure this article, we will look at four different kinds of interaction with NW Cloud:
Below you find a simplified high-level diagram showing some of the entities within NW Cloud (click on it for better readability). The yellow walls represent firewall boundaries. A request from your browser starts on the left side of the picture in the ‘Internet’ box, passes through the load balancers in the DMZ and finally reaches your application, which is located in the ‘Your NW Cloud Applications’ segment.
In the box labeled ‘NW Cloud Services’ you find all the services your application can potentially use.
There are also entities that your application does not interact with directly, but which are important to keep NW Cloud up and running. They are located in the segment called ‘NW Cloud Infrastructure’. The most prominent piece there is the ‘Orchestrator’. We gave it this name because it manages the complete NW Cloud landscape.
When you deploy an application via the Eclipse IDE or the NW Cloud command line tool, you let NW Cloud know that there is a new application. Technically, the Orchestrator offers a Deploy Service that listens for deployment requests; the communication protocol is REST-based over HTTPS. During deployment you connect to this service and are authenticated with the credentials you supply.
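To make this a bit more tangible, here is a minimal sketch of what such a REST-based deployment call over HTTPS could look like from the client side. The host name, URL path, and payload handling are purely illustrative assumptions; the actual Deploy Service API used by the Eclipse IDE and the command line tool is not shown here.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;

/**
 * Illustrative sketch of a REST-based deployment call over HTTPS.
 * Host name, URL path and payload format are assumptions made for
 * this example only; they do not reflect the real Deploy Service API.
 */
public class DeployClientSketch {

    public static void main(String[] args) throws IOException {
        Path war = Paths.get("myapp.war");       // application binary to upload
        String account = "myaccount";
        String application = "myapp";

        // Hypothetical endpoint of the Orchestrator's Deploy Service.
        URL endpoint = new URL("https://deploy.example.com/api/accounts/"
                + account + "/applications/" + application);

        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/octet-stream");

        // Credentials supplied by the user; the service authenticates the request.
        String credentials = "user:password";
        conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString(credentials.getBytes(StandardCharsets.UTF_8)));

        // Stream the application binaries to the Deploy Service.
        try (OutputStream out = conn.getOutputStream();
             InputStream in = Files.newInputStream(war)) {
            in.transferTo(out);
        }

        System.out.println("Deploy Service responded with HTTP " + conn.getResponseCode());
    }
}
```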
During this process mainly two things happen: your application binaries are uploaded to the NW Cloud Repository, and the NW Cloud Services prepare the resources your application will use.
Now that your binaries are in the NW Cloud Repository and all NW Cloud Services have prepared resources for your application to use, the application is ready to be started.
Before I explain what happens at startup, let me dive into some of the infrastructure fundamentals. Each running application process runs on a dedicated virtual machine (VM). You don’t have to deal with these VMs directly; instead, you only see the application process. We decided on VMs rather than other approaches because virtualization gives us a lot of advantages we wouldn’t get otherwise.
Some of them are:
As we are committed to open source and open standards, we decided to use Xen as our virtualization technology and Linux as our operating system. Of course we are observing new trends like Linux Lightweight Containers and other exciting developments, so it might be that at some point in time we introduce new virtualization technologies.
NW Cloud, which is a PaaS offering, is decoupled from the underlying IaaS via an abstraction layer.
The most prominent task of the IaaS is to create (virtual) machines and storage resources and to manage their lifecycle; all of this is done via an API. We completely hide the underlying IaaS and its complexity from end users, as NW Cloud is clearly focused on PaaS.
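To illustrate the idea of such an abstraction layer, here is a minimal sketch of an interface a platform could program against. All type and method names are hypothetical and do not reflect the actual internal API.

```java
/**
 * Sketch of an abstraction layer between the PaaS and an underlying IaaS.
 * All names are hypothetical; they only illustrate the idea that VM and
 * storage lifecycle operations are hidden behind one API.
 */
public interface InfrastructureProvider {

    /** Handle to a virtual machine managed by the IaaS. */
    interface VirtualMachine {
        String id();
        String ipAddress();
    }

    /** Handle to a block storage volume managed by the IaaS. */
    interface Volume {
        String id();
        int sizeGb();
    }

    // Lifecycle of virtual machines.
    VirtualMachine createVm(String imageId, int cpus, int memoryMb);
    void startVm(String vmId);
    void stopVm(String vmId);
    void destroyVm(String vmId);

    // Lifecycle of storage resources.
    Volume createVolume(int sizeGb);
    void attachVolume(String vmId, String volumeId);
    void destroyVolume(String volumeId);
}
```

The point of such a layer is that the platform code never talks to a concrete IaaS directly, so the underlying infrastructure can evolve without affecting the PaaS on top of it.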
The SAP-internal IaaS we use lives within our data centers and is the same IaaS that is already used for many other SAP OnDemand applications, such as SAP Business ByDesign. This doesn’t mean NW Cloud is based on Business ByDesign; it just uses the same infrastructure services. The advantage of having one shared infrastructure is obvious: we need only one 7x24 team and one skill set to take care of several of SAP’s applications.
NW Cloud is not only a platform for developers outside SAP. Even before we opened it to the public, we used NW Cloud within SAP to develop some of SAP’s Java-based OnDemand applications. Here’s a document that provides a list of applications using NW Cloud: http://scn.sap.com/docs/DOC-32389
But let’s get back to the interaction between NW Cloud and our SAP internal IaaS.
Do we request a new virtual machine from this IaaS each time somebody starts an application? The answer is ‘No’. We have decoupled the request for a VM from its physical instantiation, so we always have a pool of prepared VMs ready to use.
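As a rough illustration of this decoupling, the following sketch shows how a pool of prepared VMs could be kept warm, assuming the hypothetical InfrastructureProvider interface from the previous sketch. The real pooling logic inside NW Cloud is of course more involved.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Sketch of a pool of pre-provisioned VMs. The pool is filled in the
 * background so a start request can be served immediately instead of
 * waiting for the IaaS to create a fresh machine. Names and sizes are
 * illustrative only.
 */
public class WarmVmPool {

    private final Queue<InfrastructureProvider.VirtualMachine> pool = new ArrayDeque<>();
    private final InfrastructureProvider iaas;
    private final int targetSize;

    public WarmVmPool(InfrastructureProvider iaas, int targetSize) {
        this.iaas = iaas;
        this.targetSize = targetSize;
    }

    /** Hand out a prepared VM; only fall back to the IaaS if the pool is empty. */
    public synchronized InfrastructureProvider.VirtualMachine acquire() {
        InfrastructureProvider.VirtualMachine vm = pool.poll();
        if (vm == null) {
            vm = iaas.createVm("base-linux-image", 2, 2048); // slow path: create on demand
        }
        return vm;
    }

    /** Background task: top the pool up to its target size again. */
    public synchronized void refill() {
        while (pool.size() < targetSize) {
            pool.add(iaas.createVm("base-linux-image", 2, 2048));
        }
    }
}
```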
So what exactly happens if somebody starts an application?
The first thing a request from the Internet hits is our load balancer infrastructure. We are using highly available hardware solutions. As one would expect, they reside in a DMZ. The idea of a DMZ is that each incoming connection is terminated there at a proxy, in this case the load balancer. This allows the load balancer to inspect the content before it forwards the request to the application, which sits behind the DMZ (see the diagram at the beginning). This is also the reason why we allow only HTTPS and no other protocols.
If you have started several processes of the same application, the Orchestrator registers all of them with the load balancer. The balancer then distributes the requests coming from the Internet among these application nodes. A new client is forwarded to the node that currently has the fewest connections. We also support session stickiness: a returning client is always routed to the same node it was connected to before.
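The following sketch illustrates the routing behaviour just described (fewest connections for new clients, stickiness for returning ones). The real load balancers are hardware appliances, so this is only a conceptual model of the algorithm, not their implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Conceptual model of the routing behaviour: new clients go to the node
 * with the fewest open connections, returning clients stick to the node
 * they used before.
 */
public class StickyLeastConnectionBalancer {

    /** One application node (process) registered with the balancer. */
    public static class Node {
        final String address;
        final AtomicInteger openConnections = new AtomicInteger();
        Node(String address) { this.address = address; }
    }

    private final List<Node> nodes;
    private final Map<String, Node> stickySessions = new ConcurrentHashMap<>();

    public StickyLeastConnectionBalancer(List<Node> nodes) {
        this.nodes = nodes;
    }

    /** Pick the node for a client, identified e.g. by a session cookie. */
    public Node route(String sessionId) {
        // Returning client: keep it on the node it already knows.
        Node sticky = stickySessions.get(sessionId);
        if (sticky != null) {
            return sticky;
        }
        // New client: choose the node with the fewest open connections.
        Node best = nodes.get(0);
        for (Node n : nodes) {
            if (n.openConnections.get() < best.openConnections.get()) {
                best = n;
            }
        }
        best.openConnections.incrementAndGet();
        stickySessions.put(sessionId, best);
        return best;
    }
}
```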
As you can see, there is no direct connection from the Internet to your application; the load balancer always sits in between.
What about the NW Cloud services your application uses at runtime? Can they be accessed from the Internet? The answer is ‘No’: only your application, living in the application segment, is allowed to connect to these services. As you can see in the first diagram, they live behind firewalls in their own realm.
If your application wants to connect to a destination on the Internet, the load balancers are not involved. Instead, HTTP and HTTPS traffic is forwarded through an HTTP proxy (see the first diagram). It is additionally possible to have a direct route to servers outside SAP that bypasses the proxy, but we have enabled this kind of access only for very few trustworthy destinations. One example is the servers hosted by Apple that you need to reach if you want to use Apple’s Push Services.
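From the application’s point of view, an outbound call simply goes through a forward proxy. Here is a minimal sketch of such a proxied HTTP call in Java; the proxy host, port, and target URL are placeholders, since inside the platform these values would be provided by the environment rather than hard-coded.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

/**
 * Sketch of outbound traffic going through an HTTP proxy instead of the
 * load balancers. Host, port and target URL are placeholder assumptions.
 */
public class OutboundProxyExample {

    public static void main(String[] args) throws IOException {
        // Hypothetical forward proxy for outgoing HTTP/HTTPS traffic.
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("proxy.internal.example", 8080));

        URL target = new URL("https://api.example.org/data");
        HttpURLConnection conn = (HttpURLConnection) target.openConnection(proxy);

        System.out.println("Response code via proxy: " + conn.getResponseCode());
    }
}
```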
The NW Cloud development team follows Lean principles and works in Scrum mode. As a result, we deliver new functionality on a bi-weekly schedule (see the Release Notes). This means that twice a month we have updates for our productive landscape, which normally requires an updated version of our Orchestrator.
We inform you about an upcoming update via an infrastructure downtime announcement. The standard message in the announcement is “Development operations like deployment, starting and stopping of new components will not be possible. Productive applications will not be affected.”
During this time, normally only a few minutes, we take the central Orchestrator down. This is why you temporarily cannot deploy or start applications. Your running applications are not impacted whatsoever: they are still registered with the load balancers and can continue to use the NW Cloud services. After we have updated the Orchestrator, we restart it. During the restart it rediscovers all running applications automatically. As you can see, the Orchestrator is not a single point of failure; taking it down is daily business.
After reading the previous sections, this step is easy: if you trigger a stop command, your application is unregistered from the load balancer. After that, the application server receives a stop request. The final step is to deregister the VM from the logging infrastructure. Your deployed application stays in the NW Cloud Repository, so you can restart it at any time.
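To make the order of these steps explicit, here is a small sketch. The collaborator interfaces are hypothetical and only serve to illustrate the sequence described above.

```java
/**
 * Sketch of the stop sequence described above. The collaborator types are
 * hypothetical; they only make the order of the three steps explicit.
 */
public class StopSequenceSketch {

    interface LoadBalancer { void unregister(String appNode); }
    interface ApplicationServer { void stop(String appNode); }
    interface LoggingInfrastructure { void deregisterVm(String vmId); }

    private final LoadBalancer loadBalancer;
    private final ApplicationServer appServer;
    private final LoggingInfrastructure logging;

    StopSequenceSketch(LoadBalancer lb, ApplicationServer as, LoggingInfrastructure log) {
        this.loadBalancer = lb;
        this.appServer = as;
        this.logging = log;
    }

    /** Stop one application node in the order described in the text. */
    void stopApplication(String appNode, String vmId) {
        loadBalancer.unregister(appNode);   // 1. no new requests reach the node
        appServer.stop(appNode);            // 2. the application server is stopped
        logging.deregisterVm(vmId);         // 3. the VM leaves the logging infrastructure
    }
}
```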
I hope I could contribute to a better understanding of the basic NW Cloud infrastructure concepts. Many interesting topics have been omitted, such as monitoring, auditing, and connectivity to SAP OnPremise systems using the SAP Cloud Connector. Keeping it simple hopefully makes the fundamentals clearer, and there is always the opportunity for a follow-up blog post.