‎2010 Nov 19 12:48 PM
Hi all,
I need to fetch a huge volume of data (800,000 records) from the CDHDR and CDPOS tables in a report. In the SELECT statement I have used all possible key fields in the WHERE condition, and I have also tried the function module CHANGEDOCU_CDPOS_READ, but CDPOS still takes a long time to read from the database. The report runs 5-10 hours in the background. Can anyone suggest how to retrieve data from CDPOS faster?
Thanks in advance,
Regards,
Raj
Edited by: Thomas Zloch on Nov 19, 2010 1:55 PM - local units converted
‎2010 Nov 19 3:27 PM
You need to post your code to get related tips on performance improvements. Have you traced the selection? Where is the time lag - CDHDR or CDPOS? 10 hours is a long time, even for that number of records...
‎2010 Nov 23 11:13 AM
Hi,
It should not take so much time. How are you doing it?
You must be getting CHANGENR for your OBJECTCLAS and OBJECTID (or other selection criteria) from CDHDR.
Sort that table by OBJECTCLAS, OBJECTID and CHANGENR.
Delete adjacent duplicates from the table.
Then fetch the records from CDPOS for all these CHANGENR entries in the CDHDR table.
While selecting the data from CDHDR you can also take PACKAGE SIZE into consideration.
Regards,
Gunjan
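A rough sketch of the steps above, assuming an example object class 'BANF' and a date select-option S_UDATE (both illustrative, adapt to your own selection criteria):

```abap
DATA: lt_cdhdr TYPE STANDARD TABLE OF cdhdr,
      lt_cdpos TYPE STANDARD TABLE OF cdpos.

* 1. Get the change numbers from CDHDR using your selection criteria.
SELECT * FROM cdhdr
  INTO TABLE lt_cdhdr
  WHERE objectclas = 'BANF'        "example object class
    AND udate      IN s_udate.

* 2. Sort and remove duplicate keys before FOR ALL ENTRIES.
SORT lt_cdhdr BY objectclas objectid changenr.
DELETE ADJACENT DUPLICATES FROM lt_cdhdr
  COMPARING objectclas objectid changenr.

* 3. Read CDPOS only for these change documents, using the full key.
*    The initial check is essential: an empty FOR ALL ENTRIES table
*    would select the whole table.
IF lt_cdhdr IS NOT INITIAL.
  SELECT * FROM cdpos
    INTO TABLE lt_cdpos
    FOR ALL ENTRIES IN lt_cdhdr
    WHERE objectclas = lt_cdhdr-objectclas
      AND objectid   = lt_cdhdr-objectid
      AND changenr   = lt_cdhdr-changenr.
ENDIF.
```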
‎2010 Nov 23 6:00 PM
FAEI and use of package size will NOT (generally speaking) improve performance. Package size is primarily a memory management tool. If one does a lot of sorts, loops, etc., use of package size may result in some performance improvement. FAEI is rarely the fastest way to select data, as noted on the ABAP forums, and as proven by SE30 Tips & Tricks.
‎2010 Nov 23 6:20 PM
Well, CDPOS is a cluster table, so a view or JOIN is not possible.
Rob
‎2010 Nov 23 6:27 PM
Dave,
FAEI is rarely the fastest way to select data
Only when the number of entries in the subset (driver) table is small.
‎2010 Nov 26 10:03 PM
Hi Raj,
my understanding is that the change document tables are optimized for fast WRITE operations, and the poorer READ performance is accepted. SAP takes great care to optimize the CDPOS query in function group SCD2, e.g. using CLIENT SPECIFIED in the FOR ALL ENTRIES to specify all key fields (should we always do this for cluster tables?). If you have applied all relevant OSS notes to your system, I would not expect much from a custom query.
Now, if you can afford to spend a lot of time and resources on this problem, I would suggest a "poor man's index" table for CDPOS, designed with your report/application in mind, filled by a periodic job that analyzes the change history tables and updates your custom "index" table. The gain would then be truly noticeable.
regards,
J.N.N
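As a rough illustration of the "poor man's index" idea: a periodic job reads only the delta of change documents since its last run and writes them to a custom index table. All Z-objects below (ZCD_INDEX, ZCD_LASTRUN and their fields) are hypothetical, not standard SAP objects:

```abap
* Hypothetical periodic job filling a custom "index" table ZCD_INDEX
* (key: OBJECTCLAS, OBJECTID, CHANGENR, TABNAME, FNAME, plus UDATE).
* ZCD_LASTRUN is an assumed single-row table storing the last run date.
DATA: lt_hdr   TYPE STANDARD TABLE OF cdhdr,
      ls_hdr   TYPE cdhdr,
      lt_pos   TYPE STANDARD TABLE OF cdpos,
      ls_pos   TYPE cdpos,
      lt_index TYPE STANDARD TABLE OF zcd_index,
      ls_index TYPE zcd_index,
      lv_from  TYPE cdhdr-udate.

* Date of the last successful run.
SELECT SINGLE last_udate FROM zcd_lastrun INTO lv_from.

* Analyze only the delta since the last run.
SELECT * FROM cdhdr INTO TABLE lt_hdr
  WHERE udate >= lv_from.

IF lt_hdr IS NOT INITIAL.
  SELECT * FROM cdpos INTO TABLE lt_pos
    FOR ALL ENTRIES IN lt_hdr
    WHERE objectclas = lt_hdr-objectclas
      AND objectid   = lt_hdr-objectid
      AND changenr   = lt_hdr-changenr.

  LOOP AT lt_pos INTO ls_pos.
    MOVE-CORRESPONDING ls_pos TO ls_index.
    READ TABLE lt_hdr INTO ls_hdr
      WITH KEY objectclas = ls_pos-objectclas
               objectid   = ls_pos-objectid
               changenr   = ls_pos-changenr.
    ls_index-udate = ls_hdr-udate.
    APPEND ls_index TO lt_index.
  ENDLOOP.

  MODIFY zcd_index FROM TABLE lt_index.   "insert or update
ENDIF.
```

The report would then select from the small ZCD_INDEX table instead of CDPOS directly.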
‎2011 Oct 11 11:33 AM
Hi Jacques,
The index table approach you proposed seems very attractive, especially for one of my cases.
We need to access CDPOS first to get the OBJECTID etc. The code will look like the following:
CALL FUNCTION 'CHANGEDOCU_CDPOS_READ'
  EXPORTING
    IV_OBJECTCLAS      = 'BANF'
    IV_TABNAME         = 'EBAN'
    IV_FNAME           = 'FRGKZ'
    IV_CLIENT          = SY-MANDT
  IMPORTING
    EV_COUNT           = I_COUNT
    ET_CDPOS           = IT_CDPOS
    ET_PCDPOS          = IT_PCDPOS
  EXCEPTIONS
    NO_IMPORTPARAMETER = 1
    OTHERS             = 2.
After this, we do the following:
LOOP AT IT_CDPOS INTO WA_CDPOS.
  MOVE-CORRESPONDING WA_CDPOS TO WA_CDPOS1.
  SELECT SINGLE *
    INTO WA_CDHDR
    FROM CDHDR
    WHERE OBJECTCLAS = 'BANF'
      AND OBJECTID   = WA_CDPOS-OBJECTID
      AND CHANGENR   = WA_CDPOS-CHANGENR
      AND UDATE      IN R_DATE.
  IF SY-SUBRC = 0.
    WA_CDPOS1-UDATE = WA_CDHDR-UDATE.
    APPEND WA_CDPOS1 TO IT_CDPOS1.
  ENDIF.
ENDLOOP.
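As a side note, the SELECT SINGLE inside the loop could possibly be replaced by one array read plus a binary search on the result. A sketch only, reusing the names from the snippet (LT_CDHDR/LS_CDHDR are new, illustrative variables):

```abap
DATA: LT_CDHDR TYPE STANDARD TABLE OF CDHDR,
      LS_CDHDR TYPE CDHDR.

* One database round trip instead of one SELECT SINGLE per row.
IF IT_CDPOS IS NOT INITIAL.
  SELECT * FROM CDHDR INTO TABLE LT_CDHDR
    FOR ALL ENTRIES IN IT_CDPOS
    WHERE OBJECTCLAS = 'BANF'
      AND OBJECTID   = IT_CDPOS-OBJECTID
      AND CHANGENR   = IT_CDPOS-CHANGENR
      AND UDATE      IN R_DATE.
ENDIF.
SORT LT_CDHDR BY OBJECTID CHANGENR.

LOOP AT IT_CDPOS INTO WA_CDPOS.
  READ TABLE LT_CDHDR INTO LS_CDHDR
    WITH KEY OBJECTID = WA_CDPOS-OBJECTID
             CHANGENR = WA_CDPOS-CHANGENR
    BINARY SEARCH.
  IF SY-SUBRC = 0.
    MOVE-CORRESPONDING WA_CDPOS TO WA_CDPOS1.
    WA_CDPOS1-UDATE = LS_CDHDR-UDATE.
    APPEND WA_CDPOS1 TO IT_CDPOS1.
  ENDIF.
ENDLOOP.
```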
This means we need to get the OBJECTID from CDPOS first. The other function modules mentioned on some sites (CHANGEDOCUMENT_READ_HDRS_ONLY, CHANGEDOCUMENT_READ_HEADERS etc.) have yet to be evaluated.
The direct way to fine-tune the above SELECT would be an index table like you mentioned. But how can we use a periodic job to keep the records in the Z table in sync with CDPOS?
Could you share some information?
‎2011 Oct 11 4:33 PM
This is a one-year-old thread ...
I have my doubts that you will get the info you requested
Volker
‎2011 Nov 16 9:06 PM
Hi Raj,
Some other ideas: OSS note 646106 talks about options for reading these tables. OSS notes 513454, 706836, and 1257133 may be helpful to a Basis admin or DBA. I would ask whether all 800,000 records are needed and whether active archiving of the data is occurring; note 1257133 may help reduce the volume.
Another option is to create a tablespace on EFD (Enterprise Flash Drive) / SSD and place the key tables there. These drives are optimized for reads. EMC has storage with a mixture of EFD, Fibre Channel and SATA drives. This would require effort from the DBA and storage folks and should be a last resort if all other ideas fail. 800,000 records are not a lot; access should take minutes, not hours.
Good Luck.
Allan Stone
SAP Solutions Architect
EMC Solutions Design Center
Edited by: Rob Burbank on Nov 16, 2011 4:08 PM
‎2012 Jan 05 6:20 PM
Hi,
First get the records from CDHDR using the object class and object ID.
The object ID contains whatever document number you require.
Based on the CDHDR records, then read CDPOS using object class, object ID and change number.
This will improve performance when retrieving change documents.
‎2012 Jan 25 10:21 PM
Hello,
If you are doing everything right, passing all possible key fields in the SELECT statement, and still facing issues, then I would suggest you try using a CURSOR. Please find below a link with more details on using cursors:
[http://help.sap.com/saphelp_470/helpdata/en/fc/eb3b23358411d1829f0000e829fbfe/content.htm]
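For illustration, cursor processing with PACKAGE SIZE might look like this (the selection values and package size are examples only):

```abap
* Illustrative: process CDPOS in packages of 10,000 rows so the
* whole result set never has to fit into memory at once.
DATA: lv_cursor TYPE cursor,
      lt_pack   TYPE STANDARD TABLE OF cdpos.

OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM cdpos
    WHERE objectclas = 'BANF'.   "example selection

DO.
  FETCH NEXT CURSOR lv_cursor
    INTO TABLE lt_pack
    PACKAGE SIZE 10000.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  " Process the current package here (aggregate, write to a Z-table,
  " append to the output, ...), then release the memory.
  CLEAR lt_pack.
ENDDO.

CLOSE CURSOR lv_cursor.
```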