As some applications and services in our Cloud Foundry environment hit the same endpoints with the same requests over and over again, I was wondering what options I have on SCP to cache requests and responses.

Our cloud environment runs almost completely on Cloud Foundry. A service like Business Rules runs next to the HTML5 containers that are required to serve UI5 frontend applications. It is easy to bind services and applications to each other, and it is just as easy to use destinations to fetch results from a rule set execution. But since we have API Management (APIM) in place, we route all access requests from outside and inside the environment through APIM. This gives us a lot of opportunities, and one of them I want to show you now: cache things!

Let's be honest: this is nothing new, and caching is common functionality - but with the APIM solution in place, it's set up within minutes.

Something similar from 2018, but only about metadata, is already described here. Shoutout to divya.mary - thanks for sharing!


Prerequisites

You need a running APIM instance. Get one using a trial SCP account here. I will use a Business Rules instance and the service keys of this instance as well. Find blogs about this cool service (a kind of next generation of BRF+) here. You can use any other endpoint for testing purposes as well.


Setting up your API / Proxy

Create a new API on your APIM and provide an endpoint. As already said, I'm going to execute a rule set on the Business Rules service (running on SCP Cloud Foundry). Therefore I need to post a JSON payload to the endpoint. In my demo case, this payload contains an ID for the corresponding rule set and the vocabulary. Based on the vocabulary, the Business Rules service will use a decision table to find and return the response.
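For illustration, such an execution payload looks roughly like this. The ID and the vocabulary field are made-up placeholders (I'm reusing the boCompCode attribute mentioned later in this post); check your own rule service for the exact shape:

```json
{
  "RuleServiceId": "a1b2c3d4-0000-0000-0000-000000000000",
  "Vocabulary": [
    {
      "boCompCode": "1000"
    }
  ]
}
```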

Note: The execution API is protected by OAuth 2.0. Therefore we need to request a token first.

We're starting by adding the policy "Response Cache" to the PreFlow:

With the XML tree we can now control a lot of things. In my use case I provide a payload to receive a business rules response, which means the response depends on the payload. In other cases, you may want to base your cache key on query parameters or headers instead. As the payload is available in "request.content", this is provided as a KeyFragment. Besides that, I want to reset the cache every day at midnight, and I want an option to bypass the cache. All of this is done within this XML config:
<ResponseCache async="false" continueOnError="false" enabled="true" xmlns="">
  <CacheKey>
    <KeyFragment ref="request.content"/>
  </CacheKey>
  <ExpirySettings>
    <TimeOfDay>00:00:00</TimeOfDay>
    <!-- <TimeoutInSec>3600</TimeoutInSec> -->
  </ExpirySettings>
  <SkipCacheLookup>request.header.bypass-cache = "true"</SkipCacheLookup>
</ResponseCache>
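Conceptually, the policy builds its cache key from the configured KeyFragments, so two requests with an identical payload resolve to the same cached entry, while any change in the payload causes a cache miss. A minimal sketch of that idea in plain JavaScript (illustrative only - this is not the actual APIM implementation, and the function name is made up):

```javascript
// Build a cache key from key fragments, as the ResponseCache policy
// conceptually does: identical fragments yield an identical key.
function buildCacheKey(fragments) {
  // Fragments are joined with a separator; any change in the
  // payload produces a different key and therefore a cache miss.
  return fragments.join("__");
}

var key1 = buildCacheKey(["myProxy", '{"boCompCode":"1000"}']);
var key2 = buildCacheKey(["myProxy", '{"boCompCode":"1000"}']);
var key3 = buildCacheKey(["myProxy", '{"boCompCode":"2000"}']);

console.log(key1 === key2); // same payload: same key, cache hit
console.log(key1 === key3); // different payload: different key, cache miss
```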


The "ResponseCache" policy will automatically do the rest for you. You only need to add the same policy to the response flow of your PostFlow. Use the "+" button in the "Created Policy" area on the left:


Check your easy cache policy

That's all for now - let's keep it simple.

If we're now going to hit the API:

This takes 218ms!

Hit it again!

And now it takes 32ms to receive our response. It's working! You will also see that if you change your payload and search for another boCompCode, the time will rise back to around 200ms.

If you want to bypass your cache, use the provided header option. Add "bypass-cache":"true" and proceed. You will see the response time rise!
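For example, with curl (the host and proxy path are placeholders for your own API proxy endpoint):

```shell
# Regular call: served from cache after the first request (fast).
curl -X POST "https://<your-apim-host>/<your-api-proxy>" \
  -H "Content-Type: application/json" \
  -d '{"boCompCode":"1000"}'

# Same call with the bypass header: the cache lookup is skipped,
# so the response time rises back to the uncached level.
curl -X POST "https://<your-apim-host>/<your-api-proxy>" \
  -H "Content-Type: application/json" \
  -H "bypass-cache: true" \
  -d '{"boCompCode":"1000"}'
```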

This is fine for now, and we could actually wrap up the blog post here. But there is more!


Extend the policy for things like a bearer token

As the Business Rules service is protected by OAuth 2.0, we need to request a bearer token with our credentials. This job can be done by the destination behind an APIM provider.

But we can also do this with a "Service Callout" policy in our policy flow:
<ServiceCallout async="true" continueOnError="false" enabled="true" xmlns="">
  <!-- The request that gets sent from the API proxy flow to the external service -->
  <Request variable="tokenRequest">
    <Set>
      <Headers>
        <Header name="Authorization">Basic chteYl2lxovwbomrUltdi[..]=123</Header>
      </Headers>
      <Verb>POST</Verb>
    </Set>
  </Request>
  <!-- the variable into which the response from the external service should be stored -->
  <Response>tokenResp</Response>
  <!-- The time in milliseconds that the Service Callout policy will wait for a response from the target before exiting. Default value is 120000 ms -->
  <Timeout>120000</Timeout>
  <!-- The URL to the service being called -->
  <HTTPTargetConnection>
    <URL>https://[your-uaa-host]/oauth/token?grant_type=client_credentials</URL>
    <!-- The SSL reference to be used to access the https url -->
    <SSLInfo>
      <Enabled>true</Enabled>
    </SSLInfo>
  </HTTPTargetConnection>
</ServiceCallout>

This policy will hit the OAuth token generator with basic credentials, provided as a fixed header (don't judge me, it's for the blog post), and store the response in a variable called "tokenResp".


Nice. Now we can transform this response in our variable into the request.header.Authorization variable. We could use policies like "Access Entity" or "Assign Message" on APIM. As I'm a developer, I'm doing this with a few lines of JavaScript:
// read and parse the Service Callout response
var responseJSONstring = context.getVariable("tokenResp.content");
var responseJSON = JSON.parse(responseJSONstring);
// set the Authorization header for the target endpoint
context.setVariable("request.header.Authorization", "Bearer " + responseJSON.access_token);
// reduce the lifetime by 10 seconds as a safety margin, then convert integer to string
var expires_in = responseJSON.expires_in - 10;
expires_in = expires_in.toString();
context.setVariable("request.header.expires_in", expires_in);

The "tokenResp.content" variable contains something like this, coming back from the OAuth token generator:
{
  "token_type": "Bearer",
  "access_token": "fdvsTVGE1UaQbzcHIATfEiKaoRQ0",
  "issued_at": 1593790351936,
  "expires_in": 1799,
  "scope": ""
}

Besides the bearer token ("access_token"), I also fetched the "expires_in" value and saved it (reduced by 10 seconds right away as a safety margin). This will help us invalidate the cached OAuth token later ...


Save your bearer token to the APIM cache

With the bearer token and the "expires in seconds" value saved in variables, we can now add a new policy to the PreFlow: "Populate Cache":
<!-- configures how cached values should be written at runtime -->
<PopulateCache async="false" continueOnError="false" enabled="true" xmlns=''>
  <!-- configures a unique pointer to a piece of data stored in the cache -->
  <CacheKey>
    <KeyFragment>scpCFbusiRuleToken</KeyFragment>
  </CacheKey>
  <!-- specifies the cache where the data is to be stored; the default cache is used if none is given -->
  <!-- the number of seconds after which a cache entry should expire -->
  <ExpirySettings>
    <TimeoutInSec ref="request.header.expires_in"/>
  </ExpirySettings>
  <!-- specifies the variable whose value should be written into cache -->
  <Source>request.header.Authorization</Source>
</PopulateCache>

This policy will save our token, already available in "request.header.Authorization", in the APIM cache under the key fragment "scpCFbusiRuleToken". The "expires_in" value determines how long the cached value is valid before it expires.
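Taken together, Populate Cache and the Lookup Cache policy we add next behave like a small key-value store with a time-to-live. A minimal sketch of those semantics in plain JavaScript (illustrative only - the real policies run inside APIM, and the names here are made up):

```javascript
// Minimal TTL cache illustrating Populate Cache / Lookup Cache semantics.
function TtlCache() {
  this.entries = {};
}

// "Populate Cache": store a value under a key with a timeout in seconds.
TtlCache.prototype.populate = function (key, value, timeoutInSec, nowMs) {
  this.entries[key] = { value: value, expiresAt: nowMs + timeoutInSec * 1000 };
};

// "Lookup Cache": return the value if present and not expired, else null.
TtlCache.prototype.lookup = function (key, nowMs) {
  var entry = this.entries[key];
  if (!entry || nowMs >= entry.expiresAt) {
    return null; // cache miss: the Service Callout would fetch a new token
  }
  return entry.value; // cache hit: the token request is skipped entirely
};

var cache = new TtlCache();
// store the token with the reduced expires_in (1799 - 10 = 1789 seconds)
cache.populate("scpCFbusiRuleToken", "Bearer fdvsTVGE1...", 1789, 0);

console.log(cache.lookup("scpCFbusiRuleToken", 1000));        // hit: token still valid
console.log(cache.lookup("scpCFbusiRuleToken", 1789 * 1000)); // miss: token expired
```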


Read your bearer token from the APIM cache

And now we just need to retrieve the token from the cache before executing the "request a new token" policy. This is done by adding the "Lookup Cache" policy to your PreFlow:
<!-- configures how cached values should be retrieved at runtime -->
<LookupCache async="false" continueOnError="true" enabled="true" xmlns=''>
  <!-- configures a unique pointer to a piece of data stored in the cache -->
  <CacheKey>
    <KeyFragment>scpCFbusiRuleToken</KeyFragment>
  </CacheKey>
  <!-- the variable to which the cache entry should be assigned after it is looked up -->
  <AssignTo>request.header.Authorization</AssignTo>
</LookupCache>


Your PreFlow policy chain should look similar to this:

cacheResponse (Response Cache) => getBearerTokenFromCache (Lookup Cache) => getBearerToken (Service Callout) => transformToken (JavaScript) => saveBearerToken (Populate Cache)


Before you save & deploy your API, you need to add a condition to three policies. Why?

Because if the cache still has a valid token saved, you don't need to do a service callout, transform the token, and overwrite it. Add this condition, which prevents the policy from becoming active if the cache hit was successful, to the last three policies:

Note: Replace "getBearerTokenFromCache" with the name of your "Lookup Cache" policy!
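For reference, such a condition typically checks the `cachehit` flow variable that the Lookup Cache policy sets, along the lines of the following (shown here for a policy named getBearerTokenFromCache; adapt the name to yours):

```
lookupcache.getBearerTokenFromCache.cachehit == false
```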

Time for save & deploy.

Test your double-cached API

You will now notice that even if you change the payload, the response time will not be as high as on your first call. That means your policy is working: you cached your OAuth token as well as your payload & response. Good job!

Also have a look at the debugger of your API. You will be able to see the cache usage and whether something was found in the cache. The second arrow shows that the proxy used the cache and did not hit the (real) endpoint:


Wrap up

Using API Management to cache things is really easy, and it's worth it. Not only for business rules - imagine how much you can cache! Things like country codes don't change that often, so why not prevent traffic to your golden source?

So ... cache me if you can!
Amazing tutorial, incredible potential on reducing workloads while improving user experience! Thank you!
Great description - thanks for sharing this, Cedric!
Very good blog. Many thanks for it.

One question: if at some point SAP API Management becomes unavailable, will the values/payloads in the cache still be there once the system is back up and running?

Many thanks,



Hi christian.abeledomarin ,

this is a really good question. As this is SaaS hosted on PaaS, I can only guess ...

I guess that, since the runtime is isolated from the design time, the cache is hosted on the runtime as well and will be available all the time. If the runtime is also unavailable, the cache isn't available either.

Hello cedvup,

I have a question. You mention that "This job can be done from the destination behind an APIM-Provider". I have tried the same. It seems to perform the authentication as set in the destination when performing a test connection, but as soon as I call the API proxy directly, the destination authentication settings seem to be ignored.

Did you ever come across this issue?


Hey andreas.katzer2

thanks for pointing out this issue. I discovered the same thing as you a few weeks ago. I guess (but I don't know and don't have the chance to test it anymore) that it's a difference between Neo and Cloud Foundry. Back when I wrote this post I was using Neo.

You're right - the destination behind the provider isn't able to do the OAuth part for you. Same for on-premise destinations via Cloud Connector. I wasn't able to put a technical user into the destination's authentication and had to do it on the proxy 😕


