Technology Blogs by Members


I am a big fan of keeping things as simple as possible. This sometimes conflicts with my philosophy as a developer, but with my history and experience of doing operations in mind, I was always happy when I did not have to read 5,000 lines of code before understanding what was going on. Moving from the on-premise world to the cloud, and therefore to BTP, my personal aim was to follow a microservice approach as much as possible.

Therefore I was quite happy when I saw the announcements about the serverless service on BTP (previously known as 'Extension Center'), giving me an easy entry into the world of Function-as-a-Service (FaaS) on BTP.

This blog post shares the experience I collected over the last 6 months using the serverless service on the Cloud Foundry environment, provides some use-case ideas and background knowledge, and also shows the nuts and bolts of this service.


What is the serverless service and FaaS?

Function-as-a-Service provides you a function as a service. Yes, that is the easiest way to explain it, but this description leaves out the runtime and environment. So let's say it is a function we deploy to a Kubernetes cluster (K8s), which runs the function and waits for someone to call or consume it. We only pay per usage: if nothing is happening and no one is using the function, we do not pay for the container.
The function sleeps in a fluffy, cloudy container, and only when it is needed does it wake up and do something.
And the good news:
A sleeping function doesn’t cause cost


Check out Carlos Roggan's wonderful blog post series for more background details and your first hands-on experience with the serverless service. Thanks for the good job, Carlos.


How does it work?

In a nutshell: you set up your own nodeJS application, define a trigger and additional configuration in a separate config file (faas.json), and you are ready to deploy to the K8s instance. For that you use the dedicated xfsrt-cli, which returns the defined trigger endpoint. Alternatively, you can use the frontend to check the endpoint link, define secrets, and look into the logs.


My use cases

My client's project is carrying out a digital transformation of their R/3 landscape to an S/4HANA Public Cloud solution. A lot of process gaps and custom UIs were implemented on BTP to support the transformation. For environment-specific operations I was looking for an easy way to host microservices and get rid of the surrounding containers I would otherwise need to document and take care of. One important point to think about is runtime costs (see the Pros and Cons chapter).

Use case FaaS001 - Set up IAS user

On BTP we use a lot of different services, and with some of them I sometimes have the feeling of belonging to the glamorous team of "first customers". Therefore we often open OSS incidents related to BTP services. Most of the time, SAP requests access to the cockpit / service instance. I implemented a FaaS microservice that can create a new IAS user on the IAS tenant, following a naming convention we defined first and assigning the required groups.

FaaS001 is the actual endpoint for a kind of 'user self-service' where a developer can request such an 'OSS IAS user' with the required IAS groups.

This approach is required because the IAS SCIM API (connected through a destination service instance) is too powerful to expose directly to a UI5 application, and the naming convention and security aspects (like a random password with numbers, special characters, and so on) are easier to maintain and control this way.

Use case FaaS002 - Deactivate IAS user

After generating users for OSS support, we also want to keep an eye on these users and deactivate them automatically after 7 days. So the next FaaS I implemented does exactly that. The endpoint published by FaaS002 is called by a Job Scheduler service instance.
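The core of FaaS002 is a simple age check. The sketch below assumes the creation timestamp comes from the SCIM `meta.created` attribute; the actual SCIM PATCH call that sets `active` to false is left out.

```javascript
'use strict';

const MAX_AGE_DAYS = 7;

// Decides whether an OSS user created at `createdAt` (ISO timestamp,
// assumed to come from the SCIM meta.created attribute) has passed
// the 7-day limit at time `now`.
function isExpired(createdAt, now = new Date()) {
    const ageMs = now.getTime() - new Date(createdAt).getTime();
    return ageMs > MAX_AGE_DAYS * 24 * 60 * 60 * 1000;
}

// Collects the still-active users that are due for deactivation.
function usersToDeactivate(users, now = new Date()) {
    return users.filter(u => u.active && isExpired(u.meta.created, now));
}
```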

Developer pro-tip:

Both FaaS microservices communicate with me (and the authorization team) via the Alert Notification service, so I am notified immediately about everything that happens. I did something similar with APIM, check here. This way I receive a message via Google Chat every morning at 00:59 (the job's planned time).

Use case FaaS003 - Workflow checker

We use a lot of workflow instances on our BTP. Unfortunately, they sometimes crash when the developer did not check his code or an endpoint is not available at runtime. I want to inform us developers about these situations so we can check the workflow instances and restart them if necessary.

The Job Scheduler calls this microservice every 20 minutes; it uses the Workflow API to fetch workflow instances in state "FAILURE". For every entry, the FaaS sends a message via the Alert Notification service to the developers' Google Chat group.
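A sketch of the mapping step in FaaS003: failed instances in, one alert event per instance out. The property names on the workflow instances and the alert event fields are assumptions modeled loosely on the Workflow API instance model and the Alert Notification resource-event format; check both APIs for the exact shapes.

```javascript
'use strict';

// Hypothetical formatter: turns failed workflow instances into one
// alert event per instance for the Alert Notification service.
function toAlertEvents(instances) {
    return instances
        .filter(i => i.status === 'FAILURE')
        .map(i => ({
            eventType: 'WorkflowInstanceFailed',
            severity: 'WARNING',
            subject: `Workflow ${i.definitionId} failed`,
            body: `Instance ${i.id} entered FAILURE at ${i.completedAt}. ` +
                  'Please check and restart it if necessary.'
        }));
}
```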


Pros and cons I discovered


  • Runtime costs: Instead of paying for the memory of deployed containers on the Cloud Foundry environment, we only pay per usage of the microservice. This is useful for functions called twice a day by a Job Scheduler service, but may be expensive for a 'WBS element checker' invoked by a time-recording UI5 application with 25,000 calls a day.

  • There is even less to take care of. Yes, deploying to Cloud Foundry is already easy and my responsibility outside the container is already low, but here it is just about one service instance. That's it. Less is hardly possible (!?).

  • Easy handling. I like the xfsrt-cli and the approach of deploying FaaS projects to the serverless instance. We also managed to set this up in an Azure DevOps pipeline within minutes, so deployment now happens automatically.

  • It's stable. Within the last 6 months there were no service interruptions, key changes, or anything else we had to deal with (besides the renaming).


  • Per serverless instance, only 5 projects are allowed. Each project can contain up to 5 functions, so we can have 5 × 5 = 25 functions up and running on one serverless instance.

  • The maximum number of service instances allowed per subaccount is 1. So 25 is the total number of possible functions in our subaccount. Why, SAP? Are you afraid of earning money with this service?

  • The limitations continue: the maximum source code size per project is 50 MB. This sounds like enough, but as soon as you start using modules from npm as suggested, these 50 MB can be reached very quickly!

  • Bindings to service instances (besides XSUAA) are not supported. You'll need to add your secrets (e.g. for your destination service instance) to your faas secrets.

  • Logs. The logs are visible via the Extension Center UI and the xfsrt-cli. Fine! But there is no way to forward these log entries to Kibana / the Cloud Foundry Application Logging service.

  • No HANA client usage possible. We wanted to connect a FaaS with an AMQP trigger, related to the Enterprise Messaging service, and write something to a HANA database. Unfortunately, besides the missing bindings, the library would be too big to use here.

  • Updates. I am sitting on the customer side, but it feels like there is some silence in the "What's New" section for serverless. Does this service have a future? Do I smell a new 'deprecated soon' flag coming up?


Serverless is a nice service and opportunity on BTP, providing microservice capabilities as easily as possible, with money saving included. But as soon as your requirements involve something like a HANA database connection, you're out. So try to know your requirements first (I know, this is tough in times of agile requirements) and set yourself a limit on lines of code and complexity for a FaaS. Think about the usage / hits for your endpoint: will it be called twice a day? 1,000 times? 500k times?

Size, complexity, hits per day: these are the 3 dimensions you need to know before starting with serverless (from my point of view).

Here are the limits I set in my decision matrix:

Size: 500 lines of code (and, of course, 50 MB with all modules :D)

Complexity: maximum 2/5 (which for me means some request-promise calls, checking and formatting, and transformation of in/out objects)

Hits per day: 5000 (equals 0.1 capacity unit)

Check the Cloud estimator tool to see what 1 capacity unit costs for you.
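The decision matrix above can be sketched as a tiny helper. The limit values are exactly the personal ones listed above; the function and property names are my own invention for illustration.

```javascript
'use strict';

// Personal limits from the decision matrix: size (lines of code),
// complexity (on a 1-5 scale) and hits per day.
const LIMITS = { linesOfCode: 500, complexity: 2, hitsPerDay: 5000 };

// A candidate fits serverless only if it stays within all three limits.
function fitsServerless(candidate) {
    return candidate.linesOfCode <= LIMITS.linesOfCode &&
           candidate.complexity <= LIMITS.complexity &&
           candidate.hitsPerDay <= LIMITS.hitsPerDay;
}
```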



Serverless is a great entrance into the world of Kubernetes and serverless-hosted apps. It is not the solution for every application, but it is an easy approach for nodeJS microservices, and it is worth spending some time and energy on it. As for productive scenarios and real usage in customer projects, it completely depends on the use cases and their size. The kind of limitations that exist for this service feed the feeling that it could be deprecated soon, with Kyma as its successor.


All screenshots and pictures made, designed and captured by myself.