
In the first foundation piece (sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146), we examined the concept and technical background of delta extraction for logistic flows towards the BW system using the Serialized V3 Update.
Now we will analyze the other side of the coin.

The Serialized V3 Update: the end of a kingdom

Up to (and including) PI 2001.2 (or PI-A 2001.2), only the Serialized V3 update method was used for all applications of the extract structures in the Logistic Cockpit.
The logical reason for this ‘absolutism’ was that, at first sight, this specific BW update option guaranteed evidently useful features from a data warehouse management perspective:


  • the requirement of a specific job to be scheduled, resulting in a temporal detachment from the daily business operations;
  • the peculiarity of the serialization (that is, the right delta sequence from an R/3 document history point of view) in the update mechanism of the BW queues, which allows consistent data storage in the data warehouse.

In spite of these well-known benefits, with the advent of PI 2002.1 (or PI-A 2002.1) the short kingdom of the ‘serializer’ ended: with the new plug-in, three new update methods for each application became available and, as of PI 2003.1 (or PI-A 2003.1), the Serialized V3 update method is no longer offered.

What happened!?

In reality, all that glitters is not gold.
In fact, during daily operations, facing all the practical issues, some restrictions and technical problems arose.


  • Collective run performance with different languages

During collective run processing, requests that were created together in one logon language are always processed together.


Starting from this assumption, let’s now imagine what happens when several users, logged on to the source system in different languages (just think of a multinational company), create or modify documents for a relevant Logistic Cockpit application.

In this case the V3 collective run can only ever process the update entries for one language at a time during a single process call.
As a consequence, it’s easy to understand that a new process call is automatically started for the update entries belonging to documents entered in a different language from the previous one.
So, if we want the delta mechanism to maintain the chronological (serialized) sorting despite the different languages, it is possible that only a few records (even a single record!) are processed per internal collective run call.

This is the reason why the work processes carrying out the delta processing could often be found in the process overview with the "Sequential reading" action on the VBHDR table for a long time.
In fact, for every restart the VBHDR update table is read sequentially on the database (and you can bet that the update tables can become huge): the risk is that processing the update data takes so long that, in the meantime, more new update records are generated on the system than records are being processed!

Fundamentally, in the serialized V3 update only those update entries that were generated in direct chronological order (to comply with the serialization requirement) and with the same logon language (due to technical restrictions) could be processed in one task.


If the language in the sequence of the update entries changed, the V3 collective update process was terminated and then restarted with the new language, with all the performance impacts we can imagine. The toy simulation below illustrates the effect.
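To make the mechanism more tangible, here is a small, purely illustrative Python sketch (not SAP code; the queue, languages and documents are invented) of how a chronologically sorted update queue gets split into many tiny collective run calls as soon as the logon language keeps changing:

```python
# Toy model of the Serialized V3 language restriction: one collective run
# call may only take consecutive, chronologically ordered entries that
# share the same logon language, so every language switch forces a new call.
from itertools import groupby

# Hypothetical update queue: (timestamp, logon language, document)
update_queue = [
    (1, "EN", "doc1"), (2, "DE", "doc2"), (3, "EN", "doc3"),
    (4, "EN", "doc4"), (5, "IT", "doc5"), (6, "DE", "doc6"),
]

def collective_run_calls(queue):
    """Split the chronologically sorted queue into per-call batches."""
    queue = sorted(queue)  # keep the serialized (chronological) order
    return [list(batch) for _, batch in groupby(queue, key=lambda e: e[1])]

for i, call in enumerate(collective_run_calls(update_queue), start=1):
    print(f"process call {i}: {call}")

# With users working in several languages, most calls end up with only one
# or two entries each, which is exactly the performance problem described above.
```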


  • Several changes in one second

For technical reasons, collective run updates that are generated in the same second cannot be serialized.
That is, the serialized V3 update can only guarantee the correct sequence of the extraction data of a document if the document was not changed twice in one second (see the small sketch below).
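A tiny illustration (again with invented data, not SAP code) of why second-granularity time stamps cannot order two changes made within the same second:

```python
# Two hypothetical changes to the same document within the same second:
# the time stamps compare as equal, so they carry no ordering information
# and the correct delta sequence cannot be derived from them.
changes = [
    {"doc": "4711", "timestamp": "2004-12-23 10:15:42", "qty": 10},  # first change
    {"doc": "4711", "timestamp": "2004-12-23 10:15:42", "qty": 25},  # second change, same second
]

print(changes[0]["timestamp"] == changes[1]["timestamp"])  # True: order is ambiguous
```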


  • Different instances and times synchronization

I think it is easy to see how likely it is that, in a landscape where several application servers serve the same environment, different times are displayed.
The time used for the sort order in our BW extractions is taken from the R/3 kernel, which uses the operating system clock as a time stamp. But, as experience teaches, the clocks on different machines generally differ and are not exactly synchronized.

The conclusion is that the serialized V3 update can only ensure the correct sequence in the extraction of a document if the times have been synchronized exactly on all system instances, so that the time stamps of the update records (determined from the local time of the application server) sort the update data correctly. The toy example below shows how a small clock skew can invert the real document sequence.
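Here is a deliberately simplified Python sketch (invented servers and offsets, not SAP code) of that effect:

```python
# Toy example of the clock-skew problem: the update time stamp comes from
# the local clock of whichever application server posted the change, so
# unsynchronized clocks can invert the real document sequence when the
# records are later sorted by time.
from datetime import datetime, timedelta

# Hypothetical skew: application server B runs 3 seconds behind server A.
clock_offset = {"app_server_A": timedelta(seconds=0),
                "app_server_B": timedelta(seconds=-3)}

real_time = datetime(2004, 12, 23, 10, 0, 0)

records = [
    # first change to the document, posted on server A
    {"doc": "4711", "step": 1,
     "timestamp": real_time + clock_offset["app_server_A"]},
    # second change, one real second later, posted on server B
    {"doc": "4711", "step": 2,
     "timestamp": real_time + timedelta(seconds=1) + clock_offset["app_server_B"]},
]

# Sorting by the locally stamped time puts step 2 before step 1.
for rec in sorted(records, key=lambda r: r["timestamp"]):
    print(rec["step"], rec["timestamp"])
```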


  • The V2 update dependence

Not to be pitiless, but the serialized V3 update also has the fault of depending on the successful conclusion of the V2 processing.

Our method can actually only ensure that the extraction data of a document is in the correct (serialized) sequence if no error occurs beforehand in the V2 update, since the V3 update only processes update data for which the V2 update was processed successfully.
Independently of the serialization, it is clear that update errors which occur in the V2 update of a transaction, and which cannot be reposted, mean that the V3 updates still open for that transaction can never be processed.

This could thus lead to serious inconsistencies in the data in the BW system, as the small model below summarizes.
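A minimal, purely conceptual Python sketch (invented statuses and transaction names, not SAP code) of this V2/V3 dependence:

```python
# V3 entries are only eligible for the collective run once the V2 update
# of the same transaction has finished successfully; a V2 error that
# cannot be reposted therefore leaves the corresponding V3 entries
# (and their deltas) stuck forever.
v2_status = {"txn_1": "processed", "txn_2": "error", "txn_3": "processed"}
v3_queue = ["txn_1", "txn_2", "txn_3"]

extractable = [t for t in v3_queue if v2_status[t] == "processed"]
stuck       = [t for t in v3_queue if v2_status[t] != "processed"]

print("extracted to BW:", extractable)  # ['txn_1', 'txn_3']
print("never processed:", stuck)        # ['txn_2'] -> missing delta in BW
```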



In the next (and last) episode...

...we’ll analyze all the elements required to make a correct and, above all, an informed choice among the three new update methods (direct, queued, unserialized) now available for the applications of the Logistics Cockpit.

In the meantime, as the song says, I WISH A MERRY CHRISTMAS AND A HAPPY NEW YEAR to all of you!!!
