Technology Blogs by Members
Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix!

I'm addressing two difficulties that arise when working with object instances of any kind during workflow processing:

  A. The same object instance is used several times during a single ABAP transaction
  B. The same or different attributes of that instance are used several times

The approach for problem A is to use a “caching algorithm”, put into place at the central spot where the SAP Workflow runtime picks up any instance: FIND_BY_LPOR of the workflow interface. A template implementation is provided in the ABAP OO white paper from SAP AG by Jocelyn Dart (thanks again, Jocelyn), and I have extended it into a central, generic instantiation (please see below).

For the duration of an ABAP transaction, all created object instances are put into a system-wide cache, from which they are picked up whenever they are requested again.

For problem B, database performance can be improved by pre-selecting the main database tables that are commonly used, avoiding a re-read at a later time. This also allows a simpler delivery of the object's attributes, which can then be used more easily within a workflow implementation.

The approach for problem B follows the principle of a “big” object: big objects pre-select all their possible values from the database before they are used. This is very convenient, as the workflow implementation can easily access any of the object's attributes, and a workflow usually works with a very limited number of object instances (typically there is only one leading object). There is a certain drawback, however, when the object instances are used within mass processing that takes place before or alongside a workflow implementation: too much data is selected when only a subset of attributes is needed, and the memory consumption of the ABAP runtime increases, because, per the solution to problem A, the object instances stay in place for the duration of the ABAP transaction once created.
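As an illustration, the constructor of such a big object might pre-select header and item data in one combined step. The class name, the attributes ms_header/mt_items, and the use of the purchase-order tables EKKO/EKPO are hypothetical placeholders here, a sketch of the pattern rather than an actual implementation:

```abap
METHOD constructor.
  "Hypothetical sketch of a 'big' object: pre-select everything the
  "workflow might need, so later attribute access stays in memory
  ms_lpor-catid  = 'CL'.
  ms_lpor-typeid = 'ZCL_WF_PURCHASE_ORDER'.   "hypothetical subclass name
  ms_lpor-instid = iv_objectid.

  IF iv_do_refresh = gc_true.
    "One combined database selection instead of many small reads later
    SELECT SINGLE * FROM ekko INTO ms_header       "header (assumed attribute)
           WHERE ebeln = iv_objectid.
    SELECT * FROM ekpo INTO TABLE mt_items         "items (assumed attribute)
           WHERE ebeln = iv_objectid.
  ENDIF.
ENDMETHOD.
```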

In a "real" implementation, this could mean that a job that previously took around 5 minutes now runs for hours and hours, because the system is busy instantiating all the various objects for little or no effect.

Counteractions when using mass processing

  • Mass processing should avoid instantiating the business objects before an action is required that really needs the object's implementation or logic. It is therefore advisable to perform part of the database selection first and instantiate the object later for processing.
  • The cache algorithm needs to be taken care of: a method release( ) can be implemented to clean up the object instance cache and reduce memory usage.
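A hypothetical mass-processing loop that follows both rules could look like this; the key table, the subclass name, and the processing step are illustrative only:

```abap
DATA: ls_lpor TYPE sibflpor,
      lo_obj  TYPE REF TO bi_persistent,
      lo_doc  TYPE REF TO zcl_workflow_cacheable.

"lt_relevant_keys was filled by a plain database selection beforehand,
"so only documents that really require an action get instantiated at all
LOOP AT lt_relevant_keys INTO lv_key.
  ls_lpor-catid  = 'CL'.
  ls_lpor-typeid = 'ZCL_WF_DOCUMENT'.          "hypothetical subclass
  ls_lpor-instid = lv_key.

  lo_obj = zcl_workflow_cacheable=>bi_persistent~find_by_lpor( ls_lpor ).
  lo_doc ?= lo_obj.

  "... perform the action that actually needs the object's logic ...

  lo_doc->bi_object~release( ).   "drop the cache entry again to limit memory
ENDLOOP.
```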

BOR Objects

Unlike the ABAP Objects implementation of a Business Workflow class, a BOR object has so-called "virtual attributes" available, which execute an underlying getter method once the attribute's value is requested. (A "table attribute" is basically the same, except that all lines refer to the same getter method.) That is, the expression &BUS2081.CreatedOn& will implicitly execute a method get_createdOn within the BOR object's implementation. Here you can have a similar caching algorithm, which is usually generated by the BOR wizard when creating a "table attribute".
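For illustration, a buffered virtual attribute in a BOR object type program might look roughly like this. The invoice header table RBKP with its entry-date field CPUDT is my assumption for BUS2081; the buffering pattern itself is similar to what the "table attribute" wizard generates:

```abap
GET_PROPERTY CreatedOn CHANGING CONTAINER.
  "Buffer the value in the object-specific work area, so the database
  "is only read on the first access of &BUS2081.CreatedOn&
  IF object-createdon IS INITIAL.
    SELECT SINGLE cpudt FROM rbkp INTO object-createdon
           WHERE belnr = object-key-invoicedocnumber
             AND gjahr = object-key-fiscalyear.
  ENDIF.
  swc_set_element container 'CreatedOn' object-createdon.
END_PROPERTY.
```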

Thanks to Sue, who outlined this feature in the comment section below; I have included this section here to refer to it, too.

Once, when I had to manually downgrade an existing ABAP OO implementation to a BOR-based workflow, I refactored the functional methods into virtual attributes and reused the existing caching algorithm from below. This worked really well, too.

So here follows the approach to achieve something similar with ABAP Objects.

The generic caching algorithm

Create a new class ZCL_WORKFLOW_CACHEABLE, non-final, with public instantiation, implementing the interface IF_WORKFLOW.

Provide a private static attribute gt_instances_base, holding a table of object references to ZCL_WORKFLOW_CACHEABLE plus two key fields, typeID and objectID, corresponding to the TYPEID and INSTID fields of the SIBFLPOR structure.

All classes that are used within the SAP Business Workflow implementation inherit from this class and implement the same CONSTRUCTOR signature.
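A minimal sketch of the class definition could look as follows; the cache line is modelled here as a local type (a DDIC structure works just as well), and the exact method signatures are assumptions derived from the snippets in this post:

```abap
CLASS zcl_workflow_cacheable DEFINITION PUBLIC CREATE PUBLIC.
  "Non-final on purpose: all workflow classes inherit from here

  PUBLIC SECTION.
    INTERFACES: if_workflow.

    METHODS: constructor
               IMPORTING iv_objectid   TYPE sibfinstid
                         iv_do_refresh TYPE xfeld OPTIONAL.

    CLASS-METHODS: is_in_cache
                     IMPORTING is_sibflpor        TYPE sibflpor
                     RETURNING VALUE(rv_in_cache) TYPE xfeld.

  PROTECTED SECTION.
    DATA: ms_lpor TYPE sibflpor.   "local persistent object reference

    METHODS: release_instance.     "redefined by subclasses for their cleanup

  PRIVATE SECTION.
    CONSTANTS: gc_true TYPE xfeld VALUE 'X'.

    "Cache line: the two key fields plus the object reference itself
    TYPES: BEGIN OF ty_instance,
             typeid   TYPE sibftypeid,
             objectid TYPE sibfinstid,
             instance TYPE REF TO zcl_workflow_cacheable,
           END OF ty_instance.

    CLASS-DATA: gt_instances_base TYPE STANDARD TABLE OF ty_instance.

    CLASS-METHODS: add_instance_to_cache
                     IMPORTING io_instance TYPE REF TO zcl_workflow_cacheable.
ENDCLASS.
```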

Implement the method BI_PERSISTENT~FIND_BY_LPOR roughly as follows:

Within such a central instantiation method, one could also easily implement the functionality for system-wide delegation, as it was known from the BOR object implementation.

You'll need to fill in some gaps, like a DDIC structure for the cache table, but as this blog post mainly addresses experienced workflow programmers, you'll know how to fill in the missing parts.


METHOD bi_persistent~find_by_lpor.

*   Cache lookup for existing instance
    FIELD-SYMBOLS: <ls_instance> LIKE LINE OF gt_instances_base.

    READ TABLE gt_instances_base ASSIGNING <ls_instance>
                                 WITH KEY typeID   = lpor-typeID
                                          objectID = lpor-instID
                                 BINARY SEARCH.
    IF sy-subrc <> 0.

*      Instantiation of a new object
       DATA: lo_instance_object  TYPE REF TO ZCL_WORKFLOW_CACHEABLE,
             lv_objectID         TYPE SIBFINSTID.

       lv_objectID = lpor-instID.

       CREATE OBJECT lo_instance_object TYPE (lpor-typeID)
         EXPORTING
           iv_objectID   = lv_objectID
           iv_do_refresh = gc_true.          "Indicator to perform full database selection upon creation

       result ?= lo_instance_object.

       add_instance_to_cache( lo_instance_object ).

    ELSE.

*      Retrieving and assigning result from instance cache
       result ?= <ls_instance>-instance.

    ENDIF.

ENDMETHOD.



This was the most important algorithm. However, you'll need some more small methods around it: one to add an object to the cache, another helpful one to check whether an object is already cached, and a third to implement the BI_OBJECT~RELEASE method.

Adding to the cache is done centrally to ensure correct sorting. Be aware that the internal table may grow when many instances are processed; a few thousand entries are possible.


METHOD add_instance_to_cache.

    "Check on cache existence first, as the constructor of the class
    "may have added itself to the cache already to avoid endless loops
    READ TABLE gt_instances_base TRANSPORTING NO FIELDS
                                 WITH KEY typeID   = io_instance->ms_lpor-typeID
                                          objectID = io_instance->ms_lpor-instID
                                 BINARY SEARCH.
    IF sy-subrc <> 0.

       DATA: ls_instance       LIKE LINE OF gt_instances_base.

       ls_instance-typeID   = io_instance->ms_lpor-typeID.
       ls_instance-objectID = io_instance->ms_lpor-instID.
       ls_instance-instance = io_instance.

       APPEND ls_instance TO gt_instances_base.
       SORT gt_instances_base
            BY typeID objectID.

    ENDIF.

ENDMETHOD.



A helper method to support an explicit release:

METHOD bi_object~release.

    "Delete from instance cache.
    "There should only be exactly one typeID/objectID entry in the
    "instance cache table, and the table is kept sorted centrally by
    "add_instance_to_cache, so a binary search can be used safely.
    READ TABLE gt_instances_base TRANSPORTING NO FIELDS
                                 WITH KEY typeID   = ms_lpor-typeID
                                          objectID = ms_lpor-instID
                                 BINARY SEARCH.

    "Lookup oneself in the instance cache and remove the entry
    IF sy-subrc = 0. "Found
       DELETE gt_instances_base INDEX sy-tabix.
    ENDIF.

    me->release_instance( ).   "Method, which is to be redefined by subclasses to do their cleanup

ENDMETHOD.


And a last (public) method that gives a calling report or piece of functionality the opportunity to branch the program flow depending on whether an object is already cached, without instantiating the object.

METHOD is_in_cache.

    READ TABLE gt_instances_base TRANSPORTING NO FIELDS
                                 WITH KEY typeID   = is_sibflpor-typeID
                                          objectID = is_sibflpor-instID
                                 BINARY SEARCH.
    IF sy-subrc = 0.
       rv_in_cache = gc_true.
    ELSE.
       rv_in_cache = gc_false.
    ENDIF.

ENDMETHOD.


Any helpful comments are very welcome.

Take care

    Florin Wach


    SAP Business Workflow Senior-Expert
