Technology Blogs by Members
Intro. What is a centralized key-value store system, and why introduce one?

In SAP PI/PO, mapping lookups are commonly used to retrieve values stored in an external backend system during mapping execution. Sometimes a lookup involves executing complex logic in the backend before a value can be derived and returned, but in other cases the lookup simply accesses a key-value store or database: for a provided key, the corresponding stored value is returned. Good examples of key-value store usage are a list of country codes and their names as defined in the ISO 3166 standard, mappings of codes in one system to corresponding codes in another system (e.g. for material groups and categories, customer groups, account types, etc.), or mappings of an IP address to a host name or of a user account name to a first and last name.

In diverse, heterogeneous landscapes, key-value store functionality is implemented and spread across a variety of backend application systems, sometimes leading to duplication of stored information and, potentially, inconsistency across the landscape. A solution to this problem is the configuration of a centralized key-value store, which holds and persists key-value pairs published by different application and infrastructure systems (literally, by any system that originates a key-value pair) and acts as a single provider of these pairs to consumer systems in a generic way. Common requirements for key-value stores are:

  • high availability,

  • scalability,

  • consistent data persistence,

  • high performance and robustness in accessing key-value pairs by provider and consumer systems,

  • secure access to key-value pairs,

  • availability of generic APIs that can be consumed or utilized in technologically heterogeneous landscape,

  • capability of bulk upload of key-value pairs.


To fulfil these requirements, key-value store solutions commonly rely heavily on in-memory and NoSQL technologies that provide capabilities for distributed operation and clustering. Client libraries in different programming languages, implementing the key-value access and store management APIs, facilitate usage and smooth integration of such solutions into enterprise infrastructure. In contrast to NoSQL solutions targeting column stores (like Apache Cassandra) or document stores (like MongoDB), key-value store solutions are, as their name implies, specifically designed for storing key-value pairs - which makes operations for maintaining and querying key-value pairs extremely fast and robust compared to implementing them on a classic relational database or on a NoSQL solution of another type.


Key-value store systems also add some overhead to an IT infrastructure - the most noticeable aspects being:

  • the need to replicate key-value pairs from source systems to the key-value store and to keep them up-to-date and consistent,

  • the need to maintain and support an additional system (the key-value store itself) in the IT landscape,

  • the need to ensure fault tolerance and high availability of the key-value store (since, like any other centralized system, it becomes a single service provider for many other enterprise systems).


On the other hand, a key-value store system brings several significant advantages:

  • reduced workload related to non-primary, incidental functionality. For example, provider systems no longer need to handle and process key-value lookup requests and can allocate the released resources to their primary tasks and core functions,

  • a single endpoint and service provider for all key-value lookup requests. There is no need to find a specific backend system to query this data from, since it is already accessible from a single store. This also decouples the consumer system executing the lookup from the source system of the key-value pair (if the source system is unavailable, the consumer system is still able to execute the lookup and query the required information),

  • diverse integration capabilities. Implementing lookups against specific backend systems can be challenging due to a lack of interoperability between consumer and provider systems (e.g. libraries available for the provider system are not suitable for the consumer system, or no communication protocol is supported by both systems). This constraint is effectively resolved by the variety of client libraries shipped for key-value store solutions.

In this blog, I would like to describe the experience of using one such solution for mapping lookups in SAP PI/PO. Redis has been chosen as the key-value store system. Redis is an open source NoSQL key-value store with a very light resource footprint; it is flexibly scalable and provides very fast concurrent access to consistently stored information, achieved by in-memory storage combined with periodic replication to disk for long-term persistence. By "very fast" I mean request processing times of a millisecond or less and processing rates of tens of thousands of requests per second. To get an idea of the robustness of Redis, I encourage getting familiar with the summary of benchmark tests done by the Redis team (available at "How fast is Redis?" on the Redis web site).

For this demo, a Redis server has been installed on a local laptop. Even without thorough performance tuning, the following benchmarks could be achieved for SET and GET requests:

The Redis ecosystem provides a variety of client libraries in several programming languages as well as additional tooling for administering, monitoring and querying Redis.

Overview of the Redis database content used in the demo

For this demo, a Redis server with one database has been used, and several key-value pairs were uploaded to it for test purposes. Discussion of Redis server installation and configuration is out of scope for this blog - detailed technical documentation can be found on the official Redis web site.

For the uploaded key-value pairs, the following naming convention for keys was used (following the naming convention recommendations provided by the Redis team): <object type>:<object id>:<field>. Here, the object type represents the location of an object in a hierarchical structure (structural levels delimited with colons, multi-word structural level / object type names separated with dots), the object ID uniquely identifies the looked-up object, and the fields represent object attributes (multi-word field names separated with dots). In general, it is good practice to establish clear naming conventions for keys stored in a key-value store during its design or before its usage, and then always follow them, so that consumer systems can easily derive key names when addressing queries to the store. This helps reduce future maintenance efforts and gain more benefit from a central key-value store system.
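Applied in code, the convention above makes key composition a one-liner. Below is a minimal sketch of such a helper; the class and method names are illustrative and not part of the original mapping:

```java
// Helper for composing Redis key names following the convention
// <object type>:<object id>:<field> described above.
// Note: the object type itself may contain colons (hierarchical levels)
// and dots (multi-word level names), per the convention.
public class RedisKeyBuilder {

    public static String buildKey(String objectType, String objectId, String field) {
        return objectType + ":" + objectId + ":" + field;
    }
}
```

For example, buildKey("test:vadim:sample.object", "00001", "text") yields the key test:vadim:sample.object:00001:text used in this demo.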

In the test Redis database, 4000 key-value pairs were mass-uploaded: 2000 sample objects, each having 2 attributes / fields (type and text). For example, the value of the text attribute of the sample object with ID 00001 can be retrieved by getting the value of the key "test:vadim:sample.object:00001:text":

For the sake of the mapping lookup demonstration, a SAP PO system will look up an object's text by its object ID.



Implementation of lookup to Redis (baseline option): Usage of a UDF / function library utilizing a Redis client library

The option described here for accessing Redis and querying key-value pairs within a mapping lookup is based on a custom-developed mapping function that implements the necessary interoperability with a Redis server using one of the commonly available Java libraries for Redis. Below is a summary of the required steps:


  • Download a Java client library for Redis

For this demo, I used the Jedis library (xetorthio/jedis · GitHub), one of the most commonly used and recommended client libraries for Redis. An archive with the library's JAR file can be downloaded from Releases · xetorthio/jedis · GitHub. Please note that this is not the only available Redis client library - an extensive list of client libraries can be found on the official Redis web site.


  • Import the library's JAR file into the PO system as an imported archive in the ESR


  • Implement a lookup function as a part of a function library

Since the idea is to make the function re-usable in different mappings whenever a key-value lookup is required, a user defined function is not an option here, because it doesn't provide cross-mapping re-usability. With this in mind, a function library has been chosen.


The function implements simple logic: it establishes a connection to the given Redis server by its host name and port and queries the value for a given key, with a default value to be used if the key-value pair isn't found:

Make sure that the used Jedis classes are specified in the import instructions section and the imported archive is specified in the used archives section:
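The function body is not reproduced as text here; with Jedis it essentially boils down to creating a Jedis(host, port) instance, calling get(key), and falling back to the provided default when the result is null. To illustrate what such a client actually sends over the wire - without requiring the Jedis archive - below is a JDK-only sketch that issues the same GET using Redis's plain-text protocol (RESP); class and method names are illustrative, not taken from the original function library:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the GET exchange a Redis client library such as
// Jedis performs, including the "default value when key is absent" behaviour
// of the lookup function described above.
public class RedisLookup {

    public static String getValue(String host, int port, String key, String defaultValue) throws IOException {
        try (Socket socket = new Socket(host, port)) {
            byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
            // RESP encoding of "GET <key>": an array of two bulk strings
            String request = "*2\r\n$3\r\nGET\r\n$" + keyBytes.length + "\r\n" + key + "\r\n";
            socket.getOutputStream().write(request.getBytes(StandardCharsets.UTF_8));
            socket.getOutputStream().flush();

            DataInputStream in = new DataInputStream(socket.getInputStream());
            String header = readLine(in);         // "$<length>" for a bulk string, "$-1" for nil
            if (!header.startsWith("$")) throw new IOException("Unexpected reply: " + header);
            int length = Integer.parseInt(header.substring(1));
            if (length < 0) return defaultValue;  // key not found -> return the default value
            byte[] value = new byte[length];
            in.readFully(value);
            readLine(in);                         // consume trailing CRLF
            return new String(value, StandardCharsets.UTF_8);
        }
    }

    // Reads one CRLF-terminated protocol line
    private static String readLine(DataInputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1 && b != '\r') sb.append((char) b);
        in.read(); // consume '\n'
        return sb.toString();
    }
}
```

In the real function library, the equivalent Jedis calls replace this hand-rolled protocol handling, and host/port arrive as function parameters.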


  • Make use of the developed function library function in a message mapping

In a message mapping, target field "Text" is filled with a looked up value of a text for the given object ID.

Some explanatory notes and comments:

  • the concatenation is done beforehand in order to compose a key name compliant with the key naming convention discussed above. This is where we benefit from an agreed, clear naming convention for keys: it simplifies and unifies the mapping logic required for key name generation,

  • the Redis server connectivity details (host name and port) are exposed as parameters of the message mapping (here, REDIS_HOST and REDIS_PORT, respectively) in order to make the message mapping more flexible and to reduce maintenance efforts should the Redis server connectivity details change in the future.

Both points above are tangential to the primary subject of this blog, but they are good practices and worth mentioning here.



Mapping lookup runtime test

The test was executed by means of the standard message mapping test functionality.


The corresponding custom mapping parameter values were specified:

After providing a source message payload and triggering the mapping test, it can be seen that the target message was constructed successfully and the result of the lookup query addressed to the Redis server was obtained:

Redis has built-in functionality for logging processed requests and capturing timestamps of their execution. For example, this can be helpful for evaluating the executed queries and their latency while running the mapping lookup test:



Implementation of lookup to Redis (alternative): Usage of mapping lookup API utilizing HTTP based communication

As an alternative to the approach described above, in which a 3rd party Redis client library has to be imported into the SAP PI/PO system, it is possible to send requests to a Redis server over HTTP. Out of the box, Redis cannot accept and handle HTTP requests. To make this possible, another 3rd party solution can be used: Webdis (Webdis, an HTTP interface for Redis with JSON output), a lightweight HTTP server that acts as an HTTP proxy for native Redis commands and returns responses in JSON format. Please note that Webdis does not expose RESTful services as a proxy layer for accessing the Redis server; it is meant to be an HTTP proxy that exposes an HTTP interface to external callers. As a result, when dealing with this kind of communication, it is not accurate to speak of true use of the REST architectural style - it is more appropriate to refer to it as HTTP based communication, which may resemble REST in some aspects but is not fully REST compliant.

After Webdis is installed, configured and started, HTTP calls can be sent to its listener endpoint - for example, GET requests containing lookup queries. Below is a sample HTTP request that leads to exactly the same lookup request to the Redis server as the one used in the test earlier. Note that the Redis command (GET) and the looked-up key (test:vadim:sample.object:00100:text) are passed in the URL, and the received response is in JSON format:
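For illustration, such a Webdis lookup can be issued from Java using only the JDK. The host, the port (7379 is the Webdis default) and the helper names below are illustrative assumptions, not code from the blog:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of a Webdis lookup: the Redis command and key are encoded in the
// URL path, and the value comes back wrapped in JSON, e.g. {"GET":"Sample text"}.
public class WebdisLookup {

    // Performs an HTTP GET and returns the raw response body
    public static String httpGet(String urlString) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) body.append(line);
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }

    // Naive extraction of the value from Webdis's {"GET":"<value>"} reply;
    // a real implementation should use a proper JSON parser.
    public static String extractValue(String json) {
        int colon = json.indexOf(':');
        String value = json.substring(colon + 1, json.lastIndexOf('}')).trim();
        if (value.startsWith("\"") && value.endsWith("\"")) {
            value = value.substring(1, value.length() - 1);
        }
        return "null".equals(value) ? null : value;
    }
}
```

A call like httpGet("http://<webdis-host>:7379/GET/test:vadim:sample.object:00100:text") would return the JSON wrapper, from which extractValue yields the raw stored value.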

With this in mind, we can make use of Webdis + Redis by querying it, for example, using an HTTP or REST adapter in SAP PI/PO. From the SAP PI/PO perspective, this is a normal HTTP lookup query with no Redis specifics, apart from the URL pattern, which must comply with the one expected by Webdis.

This can be achieved from a user defined function / function library function or from a Java mapping program by means of the SAP standard Lookup API, which is a part of the mapping API of a SAP PI/PO system. Usage of the Lookup API is a common way of calling lookup functionality through a communication channel of an arbitrary adapter type. In this blog, I will not focus on detailed implementation aspects of this approach, since the Lookup API is well documented in the SAP Help JavaDoc and described in SAP Help (Implementing Lookups Using SystemAccessor - Managing Services in the Enterprise Services Repository), and there are helpful SCN materials that demonstrate usage of the Lookup API with practical examples and code snippets.



Outro. Which integration option to choose?

Currently Redis doesn't provide any generic means of querying its database - for example, communication mechanisms based on plain HTTP or more advanced REST-based techniques. Instead, one needs to use Redis-specific client libraries to consume data from a Redis database (for example, to make lookups) or to maintain data in it. This causes additional development overhead, commonly leading to (minor) custom development and a potential increase in technical debt.


On the other hand, Redis is positioned as a very fast key-value store. Consequently, the introduction of intermediate infrastructure components such as HTTP proxies / interfaces may increase end-to-end lookup query response time and diminish the original performance benefits of using Redis. Even though such HTTP interfaces make interoperability with Redis more generic (the consumer remains agnostic of Redis client libraries), the performance implications of their usage should be thoroughly considered.


As a result, I don't see a single, unambiguously correct answer to the question of which integration option for Redis is the right one - as seen, it is a trade-off between unification of interoperability / communication mechanisms and performance. If Redis provides a performant HTTP based mechanism for querying its database in the future, that will definitely be a promising feature to look at. Until then, it is advisable to consider performance requirements: if performance is critical and the performance KPI is challenging, then the approach using a Redis client library is definitely the option to go for; otherwise, HTTP based interfacing is a nice and convenient alternative.
