‎2011 Jan 26 11:37 AM
Hello Friends,
I'm working on a transactional SAP screen that will be used by approximately 400 users simultaneously,
and I'm trying to design it for good performance.
This program uses material master, customer master and some other master data very frequently,
so of course I'm using table buffering (especially full buffering) to improve performance,
because the master tables in our system do not change frequently.
Here is my problem :
I've created an area in shared object memory using transaction SHMA,
I'm storing an object there that holds material master data, and this object supplies some methods
for fast searching (a BINARY SEARCH technique etc.).
Everything is working fine: different users can access this memory very quickly
by using the attach/detach methods, and the search mechanism is very fast.
But my problem is about server synchronization:
I should refresh the data in that shared object when our material master tables are changed by a user.
I mean, I should read the data again from the database when the buffers of these tables are invalidated.
SAP's buffer mechanism already handles this by using the profile parameter rdisp/bufreftime (the refresh frequency)
and logging changes in table DDLOG.
Now I want to refresh my shared object whenever the table buffer is invalidated.
Is there any function module that returns whether the buffer for a table has been invalidated or not?
Or how can I refresh my data in parallel to SAP's buffer,
without reading all the data with a SELECT statement on each read access?
Thanks for your time reading this small novel
Any help will be appreciated.
Bulent
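For readers less familiar with shared objects, the attach/detach pattern described above looks roughly like this. The area class ZCL_AREA_MATERIAL, root class ZCL_SHM_MATERIAL_ROOT and the lookup method are hypothetical names for illustration; a real area generated via transaction SHMA provides the ATTACH_FOR_READ / DETACH methods shown:

```abap
* Sketch only: area and root class names are illustrative, not from
* the original post. A SHMA-generated area class supplies
* ATTACH_FOR_READ, ATTACH_FOR_WRITE and DETACH.
DATA: lr_handle   TYPE REF TO zcl_area_material,
      lr_material TYPE REF TO zcl_shm_material_root.

TRY.
    " Attach a read lock to the active area instance (copy-free access)
    lr_handle = zcl_area_material=>attach_for_read( ).
    " ROOT points to the root object stored in the shared area
    lr_material = lr_handle->root.
    " Fast lookup method supplied by the root class (hypothetical),
    " e.g. a binary search over a sorted internal table of MARA records
    " lv_mara = lr_material->get_material( iv_matnr = lv_matnr ).
    lr_handle->detach( ).
  CATCH cx_shm_attach_error.
    " Area not yet built or currently invalidated:
    " fall back to a direct database read here
ENDTRY.
```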
‎2011 Jan 26 12:03 PM
Hi,
Isn't this done by preloading? Just look for the "Session ID: ABAP251" document on Google. Part of that document covers preloading, and it sounds to me like that's what you need.
Good luck!
‎2011 Jan 26 12:49 PM
Thanks for the answer,
But I've already read that document, and it doesn't explain how I can detect
that a database table has changed.
I need that information before preloading.
‎2011 Jan 26 1:17 PM
[SHMA/SHMM Presentation|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/98dedf90-0201-0010-7790-b299be258b63?quicklink=events&overridelayout=true]
Here's something I used for a presentation to my colleagues. Not sure it helps, but check out the propagation settings and the lifetime settings...
‎2011 Jan 26 2:29 PM
Hi,
The link contains the same document unfortunately. I've already read this.
‎2011 Jan 26 3:47 PM
Hi Bulent,
sounds like you want to reimplement the table buffer synchronization for shared objects... I'm not sure whether this is a good idea.
You can specify a lifetime for your shared objects. If you refresh your shared objects every few hours, is that not good enough?
The table buffers are synchronized every 2 minutes. Do you really need the changed master data that quickly in your shared objects? If so, I would question the use of shared objects. They were not designed for frequent updates or fast synchronization, but for shared and copy-free reading of data that changes rarely.
However coming to your question:
There are the function modules:
SBUF_SEL_DDLOG_DATA
SBUF_SEL_DDLOG_RECS
The first one tells you the number of records and the minimum and maximum time stamps;
the second one reads the records for a given minimum and maximum time stamp and returns them
in this structure:
SYSTEM_ID
SEQ_NR
TIMESTAMP
CLASS
TABNAME
MODE
KEY_LNG
KEY
Here you could check for TABNAME... .
But:
- the function modules are not released
- polling DDLOG frequently adds load to your system
- I would probably not do this in a production system, but instead work with the lifetime of shared objects
Kind regards,
Hermann
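A polling check based on these modules could be sketched as below. Note this is a sketch only: the modules are not released, so every parameter name and the record type are assumptions derived from the description above and must be verified in SE37 before any real use.

```abap
* SKETCH ONLY - SBUF_SEL_DDLOG_RECS is NOT a released function module.
* Parameter names and field lengths below are assumptions based on the
* structure listed in this thread; verify the real signature in SE37.
TYPES: BEGIN OF ty_ddlog_rec,        " fields as listed above (assumed)
         system_id TYPE c LENGTH 8,
         seq_nr    TYPE i,
         timestamp TYPE timestamp,
         class     TYPE c LENGTH 4,
         tabname   TYPE tabname,
         mode      TYPE c LENGTH 1,
         key_lng   TYPE i,
         key       TYPE c LENGTH 250,
       END OF ty_ddlog_rec.

DATA: lt_ddlog TYPE STANDARD TABLE OF ty_ddlog_rec,
      lv_from  TYPE timestamp,
      lv_to    TYPE timestamp.

GET TIME STAMP FIELD lv_to.
lv_from = gv_last_check.   " time stamp stored from the previous run

CALL FUNCTION 'SBUF_SEL_DDLOG_RECS'   " unreleased - names assumed
  EXPORTING
    min_timestamp = lv_from
    max_timestamp = lv_to
  TABLES
    ddlog_recs    = lt_ddlog.

" Was one of the tables behind the shared object invalidated?
READ TABLE lt_ddlog TRANSPORTING NO FIELDS WITH KEY tabname = 'MARA'.
IF sy-subrc = 0.
  " MARA changed since the last check -> rebuild the shared object
ENDIF.

gv_last_check = lv_to.
```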
‎2011 Jan 26 4:32 PM
> The table buffers are synchronized every 2 minutes.
But only if necessary.
He said:
>because master tables in our system is not being changed frequently.
So the question is how often something changes. It may be that there are only one or two changes a day, but after a change the shared objects should also be adapted within a few minutes.
So you can schedule a check of the DDLOG entries and trigger an update of the shared objects if necessary.
However, you should also store the time stamp of each shared-object update and prevent the updates from becoming too frequent.
Siegfried
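Siegfried's safeguard (store the time stamp of the last shared-object update and skip refreshes that arrive too soon) might look like this; the variable names and the minimum interval are illustrative, not from the original posts:

```abap
* Illustrative guard against over-frequent rebuilds of the area.
CONSTANTS lc_min_interval TYPE i VALUE 180. " seconds; make customizable

DATA: lv_now  TYPE timestamp,
      lv_diff TYPE tzntstmpl.

GET TIME STAMP FIELD lv_now.
" gv_last_update is the time stamp stored with the last area build
lv_diff = cl_abap_tstmp=>subtract( tstmp1 = lv_now
                                   tstmp2 = gv_last_update ).
IF lv_diff >= lc_min_interval.
  " Enough time has passed -> safe to rebuild the shared object
  " (attach_for_write, reload master data, detach_commit)
  gv_last_update = lv_now.
ENDIF.
```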
‎2011 Jan 26 4:43 PM
> > The table buffers are synchronized every 2 minutes.
> But only if necessary.
agreed.
> So you can schedule a check of the DDLOG entries and trigger an update of the shared objects if necessary.
> However, you should also store the time stamp of each shared objects update and should prevent that can
> become to frequent.
good idea. With these measures (reading only the new time-stamp intervals every few minutes, and a check that the update does not run too
often)... I think as well: why not?
Kind regards,
Hermann
‎2011 Jan 26 9:27 PM
Hello Dear Siegfried and Hermann,
Thanks for allocating your time,
Actually the SAP application servers are already checking for invalidated table buffer records from DDLOG,
in cycles of the profile parameter rdisp/bufreftime (the default value of this parameter is 120 seconds, see transaction RZ11).
And then, if the buffer is invalidated for a table, the data of this table will be reloaded from the database
on the next (first) SELECT statement from that application server.
So the application server knows that the buffer is invalidated and keeps this information somewhere.
What I was actually asking is whether there is any system function or function module to get this information;
it would be very useful if one exists.
But keeping the last time stamp in a shared object attribute and checking DDLOG at regular intervals is a good idea:
it won't create much load on the system, and I can make the interval a customizable parameter.
After doing this for all the necessary master data, I can practically bolt a spoiler onto the system
Best regards
Bulent
‎2011 Jan 26 9:32 PM
By the way,
if anyone reading this thread has information about the system function or function module I mentioned above,
it still would be very useful for me. Please let us know.
‎2011 Jan 27 7:21 AM
Hi,
I see what you mean... .
So what you want is the status of a buffered table on the local application server, right?
In case it is not valid (it had DDLOG records and got invalidated), you will trigger an update of
your shared objects. In that case:
For generic or fully buffered tables:
SBUF_GENERIC_SHOW_OBJECT
input -> tabname output -> stat_id (status)
For single record buffered tables:
SBUF_PARTIAL_SHOW_OBJECT
input -> tabname output -> stat_id (status)
Kind regards,
Hermann
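Putting this answer together with the refresh logic, the check before a rebuild could be sketched like this. The exporting parameter name STAT_ID is taken from the post above; these modules are also unreleased, so the exact signature and the possible status values must be verified in SE37:

```abap
* SKETCH ONLY - SBUF_GENERIC_SHOW_OBJECT is not a released function
* module. Parameter names follow the description in this thread and
* must be verified in SE37; the status type/values are assumptions.
DATA lv_status TYPE char10.   " assumed type of the status field

CALL FUNCTION 'SBUF_GENERIC_SHOW_OBJECT'
  EXPORTING
    tabname = 'MARA'       " the fully buffered table to check
  IMPORTING
    stat_id = lv_status.

IF lv_status <> 'VALID'.   " assumed status value
  " The buffer was invalidated on this server
  " -> trigger the rebuild of the shared object
ENDIF.
```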
‎2011 Jan 29 9:07 AM
And dear Hermann,
the function modules SBUF_GENERIC_SHOW_OBJECT and SBUF_PARTIAL_SHOW_OBJECT
are exactly what I was looking for when I posted this thread.
Thank you for them.
‎2011 Jan 27 12:31 PM
> So the application server knows that the buffer is invalidated and keeps this information somewhere.
This is relevant because there are lots of tables; you must decide on the basis of the tables connected to your shared object. I see nothing simpler than checking whether the connected tables have created DDLOG entries.
> And then, if the buffer is invalidated for a table, the data of this table will be reloaded from the database
> on the next (first) SELECT statement from that application server.
No, not on the next SELECT... there are several SELECTs that still go to the database (pending and loadable buffer states). The buffer is not loaded with the next SELECT, and this applies only to the application server on which that SELECT occurs. Only the local application server on which the update was executed always has an up-to-date buffer.
So the table buffer keeps the load from the invalidations small, which you should also do when the shared object is invalidated.
‎2011 Jan 29 9:06 AM
Dear Siegfried,
I researched this,
and yes, you are right: the buffer of the application server is not refreshed on the first SELECT
after an invalidation.
The documents I found say that the next "n" read accesses are forwarded to the database, to protect the buffer from frequent invalidations.
I used ST02 to display the buffer status of a table: after I modify a fully buffered table,
the status is set to "pending", and I used ST05 to check whether the buffer is used or not. The buffer is not used for a long time.
Then I wrote a small program that sends 1000 SELECTs to that table, but the "pending" status did not change.
After a period of 24 hours I checked ST02 again: still the same.
Can you clarify when the buffer gets the status "valid" again?
Does it depend on a profile parameter?
I buffer many tables besides the ones behind my shared memory object,
so this pending period can cause a considerable load on our system.
Thanks anyway.
Best regards.
‎2011 Jan 29 12:57 PM
Hi,
> Can u clarify me when the buffer gets status "valid" again?
> Is it belong to a profile parameter?
yes. Check the parameter documentation for
zcsa/sync_reload_c
zcsa/inval_reload_c
in RZ11. However, these parameters should not be changed
in production systems.
For the table that was not loaded anymore... was there enough free space
in the table buffer available? Was your statement able to use the buffer?
Btw: you can find more information about table buffer usage here:
http://www.sap-press.com/products/ABAP-Performance-Tuning.html
Kind regards,
Hermann