Technology Blogs by SAP
Authors: Gianluigi Bagnoli, Ralph Oliveira

Every two weeks there's a new buzzword in this industry, and that's not necessarily a bad thing. A buzzword reflects a new normality and puts a label on it. These days the latest buzzword you may encounter is "polynimbus".

What does it mean? The etymology is pretty simple: "poly" comes from ancient Greek and means "many", while "nimbus" is the Latin word for "cloud". So this is just a very sophisticated way of saying "many clouds". A definition of this word could then be something like: you use a polynimbus approach when your solution uses services from multiple cloud providers. Perhaps some part of your solution runs on Microsoft Azure, some on Amazon AWS, and some other on the SAP Cloud Platform.

So, most likely, you are already polynimbus even if you didn't know it.

Choices, choices, choices.

Now, this "polynimbus"-ness is yet another degree of freedom given to you by the technological megatrends happening in the software industry. Once again, Moore's Law is working to level the ground, giving you, the partner, many, many choices. Of course, with choice comes responsibility, along many dimensions you need to consider when building your next intelligent solution for SMBs.


We already discussed how your intelligent solution should be structured. In short, gone are the days in which you sold boxes to your customers and you used the same boxes to run your solution.

Get over it: you don’t sell boxes anymore.

You now sell tenants running somewhere in the cloud. You don’t access them the way you used to access boxes. You don’t own the run-time anymore. So you need to find a place to host your great intelligent solution.

And here’s where “polynimbus”-ness comes into play. If you are not tied to the back-end anymore, you are free to choose the best. You shouldn’t lock your solution to a single choice; you should instead use the freedom you are given and choose the best for each component of your solution. And of course, the best for one component can be different from the best for another.

That means that your solution run-time is not all collocated on a single cloud provider but is instead a constellation of interacting components distributed over several cloud providers, each offering the best services at the best price and the best level of service for the hosted component. For instance, you may have one component running on the SAP Cloud Platform leveraging best-in-class business services from SAP (such as tax or conversational services), while another component uses basic off-the-shelf standard services from some IaaS provider, such as AWS or Azure.
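To make this concrete, such a distribution of components could be captured in a small deployment descriptor like the one below. This is purely a hypothetical sketch: the component names, providers, and service names are illustrative, not taken from any real solution.

```javascript
// Hypothetical mapping of solution components to cloud providers.
const deployment = {
  "shoe-matcher":  { provider: "SAP Cloud Platform", services: ["leonardo-ml"] },
  "vector-store":  { provider: "AWS",                services: ["postgresql"] },
  "session-cache": { provider: "Azure",              services: ["redis"] }
};

// Each component only knows its own backing services, never its siblings' hosts.
for (const [component, spec] of Object.entries(deployment)) {
  console.log(component + " runs on " + spec.provider);
}
```

The point of such a descriptor is that moving a component to a different provider is a one-line change in configuration, not a rewrite of the component.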

All this constellation of services works together by communicating via standard protocols, using the very well-known, classical loosely-coupled approach, described here: Loosely-coupled approach

Walk the talk.

So much for the theory.

Let’s see all this theory in action, with the very famous sample of the SMB Market Place (description and code are available in this blog post: SMB Market Place). This solution makes use of many backing services, such as PostgreSQL and Redis.

The keyword here is services. The services-based paradigm makes it pretty easy to relocate the back-end: at the end of the day, "all" you need to do is unplug one service and plug in the same kind of service hosted on another IaaS. Let's see how that can be done in code.
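As a sketch of what "unplug and plug" means in practice, the application can code against a small store interface while an adapter decides which cloud actually hosts the data. The interface and names below are illustrative, not taken from the SMB Market Place code, and an in-memory Map stands in for the real driver:

```javascript
// A minimal store interface: the app codes against it,
// and each IaaS gets its own adapter behind it.
function makeStore(backend) {
  // 'backend' names which cloud hosts the data; callers never need to know.
  const data = new Map(); // in-memory stand-in for the real database driver
  return {
    name: backend,
    put: (key, value) => data.set(key, value),
    get: (key) => data.get(key)
  };
}

// Swapping providers is one line at wiring time, not a code rewrite.
const store = makeStore(process.env.STORE_BACKEND || "scp-postgresql");
store.put("shoe-42", { color: "red" });
console.log(store.get("shoe-42").color); // red
```

With this shape, "relocating the back-end" means wiring a different adapter behind the same `put`/`get` interface; the rest of the application is untouched.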


Let's start with PostgreSQL: it's used to store vector data retrieved from the Leonardo Machine Learning services, which is later used for comparison when looking for the shoes a customer wants (details about these services are available here: Leonardo APIs).

In the original code, this database is provisioned directly by the SAP Cloud Platform. The database details are provided by SAP Cloud Platform Cloud Foundry and retrieved by accessing the VCAP_SERVICES environment variable, as below:

if (process.env.VCAP_SERVICES) {
  vcap = JSON.parse(process.env.VCAP_SERVICES);

  if (vcap.hasOwnProperty('postgresql')) {
    // PostgreSQL provisioned as a Cloud Foundry backing service
    credentials = { connectionString: vcap.postgresql[0].credentials.uri };
    console.log("PostgreSQL found in VCAP_SERVICES");
  } else {
    console.log("No PostgreSQL found in VCAP_SERVICES");
  }
}

Let's now assume that, for reasons of your own, you no longer want to provision this database from the SAP Cloud Platform, but directly from AWS (details on Postgres on AWS can be found here: Postgres on AWS). We then need to prepare our application code to support this approach. Instead of relying only on VCAP_SERVICES, we can create our own set of environment variables pointing to the new DB created on AWS:

// After checking for PostgreSQL on Cloud Foundry
if (!credentials) {
  // Maybe PostgreSQL is on a remote environment
  console.log("Looking for remote PostgreSQL connection details");
  if (process.env.PG_HOST) {
    console.log("Trying to connect to PostgreSQL on " + process.env.PG_HOST);
    credentials = {
      user: process.env.PG_USER,
      host: process.env.PG_HOST,
      port: process.env.PG_PORT,
      database: process.env.PG_DATABASE,
      password: process.env.PG_PASSWORD,
      ssl: true
    };
  } else {
    console.log("No remote PostgreSQL details found, will try to connect locally");
  }
}

By not replacing the former approach and just adding support for an external service, we guarantee the app's availability and a smooth transition from Cloud Foundry services to an external service. All it would take is to set the environment variables with the AWS details (or any other provider's), unbind SCP's PostgreSQL service, and restart the app.
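The fallback chain described above can be condensed into one small, pure function, which makes the precedence explicit and easy to test. This is a sketch, not the actual SMB Market Place code; the function name and the sample AWS host are illustrative.

```javascript
// Sketch: resolve PostgreSQL credentials with the precedence used above:
// 1. Cloud Foundry VCAP_SERVICES binding, 2. PG_* environment variables, 3. null (local).
function resolvePgCredentials(env) {
  if (env.VCAP_SERVICES) {
    const vcap = JSON.parse(env.VCAP_SERVICES);
    if (vcap.hasOwnProperty('postgresql')) {
      return { connectionString: vcap.postgresql[0].credentials.uri };
    }
  }
  if (env.PG_HOST) {
    return {
      user: env.PG_USER,
      host: env.PG_HOST,
      port: env.PG_PORT,
      database: env.PG_DATABASE,
      password: env.PG_PASSWORD,
      ssl: true
    };
  }
  return null; // fall back to a local default connection
}

// Unbinding the SCP service simply removes the 'postgresql' entry from
// VCAP_SERVICES, so the same function silently switches to the AWS instance.
const awsEnv = { PG_HOST: "mydb.example.rds.amazonaws.com", PG_USER: "app", PG_DATABASE: "shoes" };
console.log(resolvePgCredentials(awsEnv).host); // prints the AWS host
```

Because the binding check comes first, the app keeps working unchanged while the SCP service is still bound, which is exactly the smooth-transition property mentioned above.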


We can use the same logic to provision Redis services directly from Azure, for instance. You can find details on how Azure supports this service here: Redis for Azure, and some how-tos in this article: How to use Redis in C#. All we need is a connection to a Redis service on Azure that can be used by our application. The code is similar to the above:

// Check where the Redis instance will come from:
// CF backing services, OR a remote host, OR local (credentials = null)
if (process.env.VCAP_SERVICES) {
  vcap = JSON.parse(process.env.VCAP_SERVICES);
  if (vcap.hasOwnProperty('redis')) {
    credentials = vcap.redis[0].credentials;
    credentials.host = credentials.hostname;
    console.log("Redis credentials found in VCAP");
  } else {
    console.log("No Redis found in VCAP Services");
  }
}

if (!credentials) {
  // Maybe Redis is on a remote environment
  console.log("Looking for remote Redis connection details");
  if (process.env.REDIS_HOST) {
    console.log("Trying to connect to Redis on " + process.env.REDIS_HOST);
    credentials = {
      host: process.env.REDIS_HOST,
      port: process.env.REDIS_PORT,
      password: process.env.REDIS_PASSWORD
    };
  } else {
    console.log("No remote Redis details found, will try to connect locally");
  }
}
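One Azure-specific detail worth noting: Azure Cache for Redis exposes its TLS endpoint on port 6380 (the plain port 6379 is disabled by default), and its hostnames follow the `*.redis.cache.windows.net` pattern. The environment for the code above would therefore typically look like this; the cache name and access key are illustrative placeholders.

```javascript
// Illustrative environment for an Azure-hosted Redis, consumed by the code above.
const azureEnv = {
  REDIS_HOST: "mycache.redis.cache.windows.net", // Azure Cache for Redis hostname pattern
  REDIS_PORT: "6380",                            // Azure's TLS port; 6379 (plain) is off by default
  REDIS_PASSWORD: "<access-key>"                 // one of the cache's access keys
};

// Same shape as the fallback branch above, with the port coerced to a number.
function resolveRedisCredentials(env) {
  if (!env.REDIS_HOST) return null;
  return {
    host: env.REDIS_HOST,
    port: Number(env.REDIS_PORT),
    password: env.REDIS_PASSWORD
  };
}

console.log(resolveRedisCredentials(azureEnv).port); // 6380
```

Whichever Redis client library the app uses would then need its TLS option enabled when talking to the Azure endpoint.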

In this sense, the application orchestrates several services in a truly multi-cloud environment. You can see the whole constellation of services used by the application, distributed across different IaaS providers, in the picture below:




You have seen in this article how technology today gives you a choice of where to host your solution components according to your needs. You can choose whichever IaaS/PaaS stack hosts each component best, according to several dimensions (availability of services, price, SLA, etc.), and build your solution as a constellation of services all cooperating in a loosely-coupled fashion.

You can check all the code in GitHub: GitHub