
There are cases where infotype data is read very often and on a large scale, e.g. for statistical calculations performed before some information is presented. Put this into an application, multiply it by a large number of users, add a high frequency of usage, and we have a nice recipe for a huge number of database operations. When thinking about performance improvements in this scenario, the first thing that comes to mind is buffering - we could use, for example, an intermediate table with aggregated data, but here I want to discuss the use of Shared Objects for this case.


The main idea is to set up a buffer and update it only when necessary - that is, only when data in the specified infotype has changed. So the whole mechanism has 2 parts:

1. In-memory buffer.

2. Trigger for buffer update.

The buffer is realized with Shared Objects, and for the trigger we can use the customer exit EXIT_SAPFP50M_002 (additional checks), which is called when data is changed via PA30 or the function module HR_INFOTYPE_OPERATION. We could also use a dynamic action or an additional PAI module. Below are some details about this concept - for a guide to Shared Objects, search SDN or read the topic pages on help.sap.com.



Shared memory area as a buffer.


The buffer is implemented as a shared memory area with 2 helper classes - a loader and a broker. The loader is responsible for creating the shared memory instance and is registered as the area constructor class - this makes our shared memory area preloadable, which means it will be built automatically on the first read. For high-availability scenarios (reading the area while it is being updated), consider enabling versioning.
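
To make this more concrete, below is a minimal sketch of a root class holding the buffered data. The class name, the aggregated structure, and the area name ZCL_SHM_EXAMPLE_AREA (which I assume is defined in transaction SHMA with this class as its root) are just illustrative assumptions.

" Root class of the area - holds the buffered, aggregated infotype data.
" SHARED MEMORY ENABLED allows its instances to live in a shared memory area.
CLASS zcl_shm_example_root DEFINITION SHARED MEMORY ENABLED.
  PUBLIC SECTION.
    TYPES: BEGIN OF ty_absence_sum,
             pernr     TYPE pernr_d,   " personnel number
             abs_count TYPE i,         " number of absence records (simplified)
           END OF ty_absence_sum.
    TYPES tt_absence_sum TYPE SORTED TABLE OF ty_absence_sum
                         WITH UNIQUE KEY pernr.

    DATA absence_sums TYPE tt_absence_sum READ-ONLY.

    METHODS load_data.
ENDCLASS.

CLASS zcl_shm_example_root IMPLEMENTATION.
  METHOD load_data.
    " Simplified example: count absence records (infotype 2001) per employee.
    SELECT pernr COUNT( * ) FROM pa2001
      INTO TABLE absence_sums
      GROUP BY pernr.
  ENDMETHOD.
ENDCLASS.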



The constructor in the loader class must implement the interface IF_SHM_BUILD_INSTANCE, and its BUILD method contains the code that sets up the shared memory content. This is the moment when the needed data is gathered, calculated, stored in an internal table, etc.
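
A sketch of such a loader class is shown below. It reuses the ZCL_SHM_EXAMPLE_AREA_LOADER name from the exit example later in this post; the area and root class names are again my assumptions.

CLASS zcl_shm_example_area_loader DEFINITION.
  PUBLIC SECTION.
    INTERFACES if_shm_build_instance.
    CLASS-METHODS refresh_buffer.   " implementation shown further below
ENDCLASS.

CLASS zcl_shm_example_area_loader IMPLEMENTATION.
  METHOD if_shm_build_instance~build.
    DATA: lr_area TYPE REF TO zcl_shm_example_area,
          lr_root TYPE REF TO zcl_shm_example_root,
          lr_excp TYPE REF TO cx_shm_error.

    TRY.
        " Request a write lock on a (new version of the) area.
        lr_area = zcl_shm_example_area=>attach_for_write( ).
      CATCH cx_shm_error INTO lr_excp.
        RAISE EXCEPTION TYPE cx_shm_build_failed
          EXPORTING previous = lr_excp.
    ENDTRY.

    " Create the root object directly in the shared memory area and fill it
    " with the aggregated infotype data.
    CREATE OBJECT lr_root AREA HANDLE lr_area.
    lr_root->load_data( ).

    lr_area->set_root( lr_root ).
    lr_area->detach_commit( ).
  ENDMETHOD.

  METHOD refresh_buffer.
    " See the "Triggering the buffer's update" section below.
  ENDMETHOD.
ENDCLASS.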


The broker class is a proxy for getting data from the buffer - it provides methods that return the requested data from the shared memory area. It should raise an exception when something goes wrong, e.g. when the memory area is under construction - catching this kind of error gives us the opportunity to get the data the old-fashioned way (directly from the database).
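
A possible shape of the broker is sketched below - the method name, the returned value, and the fallback SELECT are assumptions; the important part is the TRY/CATCH around the read attach.

CLASS zcl_shm_example_broker DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS get_absence_count
      IMPORTING iv_pernr        TYPE pernr_d
      RETURNING VALUE(rv_count) TYPE i.
ENDCLASS.

CLASS zcl_shm_example_broker IMPLEMENTATION.
  METHOD get_absence_count.
    DATA: lr_area TYPE REF TO zcl_shm_example_area,
          lr_root TYPE REF TO zcl_shm_example_root,
          ls_sum  TYPE zcl_shm_example_root=>ty_absence_sum.

    TRY.
        " Attach for reading; on the first access the preloadable area
        " is built via the loader's BUILD method.
        lr_area = zcl_shm_example_area=>attach_for_read( ).
        lr_root = lr_area->root.
        READ TABLE lr_root->absence_sums INTO ls_sum
          WITH TABLE KEY pernr = iv_pernr.
        IF sy-subrc = 0.
          rv_count = ls_sum-abs_count.
        ENDIF.
        lr_area->detach( ).
      CATCH cx_shm_error.
        " Buffer not available (e.g. under construction) - fall back to
        " reading directly from the database.
        SELECT COUNT( * ) FROM pa2001 INTO rv_count
          WHERE pernr = iv_pernr.
    ENDTRY.
  ENDMETHOD.
ENDCLASS.

The application then simply calls zcl_shm_example_broker=>get_absence_count( ... ) and does not need to care whether the result came from the buffer or from the database.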



Triggering the buffer's update.


OK, we have our data in the shared memory area. Now we have to refresh the buffer whenever the infotype data it holds has been changed. All we need to do is use the invalidation mechanism - thanks to the area being preloadable, when we call the invalidate method the area will be built again on the next read.
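
The refresh method called from the exit can then be as simple as the sketch below (again assuming the area name ZCL_SHM_EXAMPLE_AREA).

" Inside ZCL_SHM_EXAMPLE_AREA_LOADER.
METHOD refresh_buffer.
  " Invalidate the active version of the area; because the area is
  " preloadable, the next read attach triggers BUILD again.
  zcl_shm_example_area=>invalidate_area( ).
ENDMETHOD.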


For example - in EXIT_SAPFP50M_002 we can put this:
CASE innnn-infty.
  WHEN '2001'.
    CALL METHOD zcl_shm_example_area_loader=>refresh_buffer.
  " ...
ENDCASE.

In the user exit you can also limit when the refresh should take place - for example only when an infotype record is added, updated, or deleted.


Using shared memory together with a user exit, it is possible to implement a simple or a sophisticated buffer - with the whole range of object-oriented ABAP techniques available inside the Shared Objects model and a variety of control options (conditions in the exit).


Summarizing, below is the simplified flow of this mechanism:



1. Before the first use, the shared memory area instance does not exist. When we call a method from the broker class requesting buffer data, it tries to attach to the area for reading.

2. Because the area is marked as preloadable, the BUILD method of the loader class is called. The area is constructed: the needed infotype data is selected, aggregated, etc. and returned to the application.

3. On subsequent calls, the buffer content is returned straight from shared memory.

4. When data in the buffered infotype is changed, the buffer refresh method is called from the user exit. This method invalidates the shared memory area, so the BUILD method runs again and reloads the needed data.


