2009 Nov 05 7:23 AM
Hello Gurus,
I need some help.
The code below is used in a BI-FIGL custom user exit to read from table BSIS.
ABAP Code:
* Get cost center data from table BSIS
SELECT bukrs          "Company Code
       gjahr          "Fiscal Year
       monat          "Fiscal Period
       hkont          "General Ledger Account
       prctr          "Profit Center
       budat          "Posting Date in the Document
       gsber          "Business Area
       zuonr          "Assignment Number
       belnr          "Accounting Document Number
       kostl          "Cost Center
  FROM bsis
  INTO TABLE pt_kostl_acc
  FOR ALL ENTRIES IN pt_gl_data
  WHERE bukrs EQ pt_gl_data-bukrs
    AND gjahr EQ pt_gl_data-gjahr
    AND monat EQ pt_gl_data-monat
    AND hkont EQ pt_gl_data-hkont
    AND prctr EQ pt_gl_data-prctr
    AND budat EQ pt_gl_data-budat
    AND gsber EQ pt_gl_data-gsber
    AND zuonr EQ pt_gl_data-zuonr
    AND belnr EQ pt_gl_data-belnr.
The extractor hangs at the above SELECT statement.
Please advise how to solve it.
I created a secondary index, and it worked fine in QA. When I moved it to production, it stopped working. Did I do anything wrong?
Let me know. Thanks.
2009 Nov 05 8:17 AM
Hi Senthil,
From what I understand from the post, I assume pt_gl_data must be initial (empty) in production. In that case, FOR ALL ENTRIES fetches all entries from the database table.
If that is the scenario, run the query only after checking the table for an initial value:
IF pt_gl_data IS NOT INITIAL.
  SELECT ...
ENDIF.
Regards,
Jemin Tanna
2009 Nov 05 1:53 PM
Thanks for the reply. I already check that the internal table is not initial before reaching this SELECT statement. This is a BI extractor, and its first statement checks that the internal table is not initial before doing any processing.
Thanks
Senthil
2009 Nov 05 1:56 PM
Hi Senthil,
The only way to avoid this, then, is to fetch in parts. There are two ways of doing it:
1. Split the internal table into subsets of, say, 1000 rows at a time and run the query once per subset.
2. Use UP TO n ROWS in the SELECT statement, then modify the next query to exclude the rows already fetched.
Option 1 is more apt, in my opinion.
This should help solve the problem.
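A rough sketch of option 1 in ABAP, assuming pt_gl_data and pt_kostl_acc have the types implied by the original post; the package size of 1000 and the local variable names are illustrative only:

```abap
* Sketch only: process the driver table in packages of 1000 rows,
* so FOR ALL ENTRIES never runs with one huge (or empty) driver table.
DATA: lt_subset LIKE pt_gl_data,   " one package of driver rows
      lv_lines  TYPE i,
      lv_from   TYPE i VALUE 1,
      lv_to     TYPE i.

DESCRIBE TABLE pt_gl_data LINES lv_lines.

WHILE lv_from <= lv_lines.
  lv_to = lv_from + 999.
  IF lv_to > lv_lines.
    lv_to = lv_lines.
  ENDIF.

  REFRESH lt_subset.
  APPEND LINES OF pt_gl_data FROM lv_from TO lv_to TO lt_subset.

  IF lt_subset IS NOT INITIAL.   " never run FOR ALL ENTRIES on an empty table
    SELECT bukrs gjahr monat hkont prctr budat gsber zuonr belnr kostl
           FROM bsis
           APPENDING TABLE pt_kostl_acc
           FOR ALL ENTRIES IN lt_subset
           WHERE bukrs EQ lt_subset-bukrs
             AND gjahr EQ lt_subset-gjahr
             AND monat EQ lt_subset-monat
             AND hkont EQ lt_subset-hkont
             AND prctr EQ lt_subset-prctr
             AND budat EQ lt_subset-budat
             AND gsber EQ lt_subset-gsber
             AND zuonr EQ lt_subset-zuonr
             AND belnr EQ lt_subset-belnr.
  ENDIF.

  lv_from = lv_to + 1.
ENDWHILE.

* FOR ALL ENTRIES drops duplicates only within one package,
* so de-duplicate across packages once at the end.
SORT pt_kostl_acc.
DELETE ADJACENT DUPLICATES FROM pt_kostl_acc.
```

Note the final de-duplication step: FOR ALL ENTRIES removes duplicate result rows automatically, but only within a single SELECT, so running it package-wise can return the same row more than once.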
Regards,
Jemin Tanna
2009 Nov 05 2:03 PM
Hi,
SELECT ... ENDSELECT would be a better option for BSIS than FOR ALL ENTRIES IN. We came across this issue before, and replacing FOR ALL ENTRIES IN with SELECT ... ENDSELECT worked out much faster.
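For reference, a sketch of this variant, assuming the line type of pt_kostl_acc matches the field list from the original post (work area names are illustrative):

```abap
* Sketch only: loop over the driver table and fetch matching BSIS rows
* one at a time with SELECT ... ENDSELECT instead of FOR ALL ENTRIES.
DATA: ls_gl    LIKE LINE OF pt_gl_data,
      ls_kostl LIKE LINE OF pt_kostl_acc.

LOOP AT pt_gl_data INTO ls_gl.
  SELECT bukrs gjahr monat hkont prctr budat gsber zuonr belnr kostl
         FROM bsis
         INTO ls_kostl            " one row per pass of the SELECT loop
         WHERE bukrs EQ ls_gl-bukrs
           AND gjahr EQ ls_gl-gjahr
           AND monat EQ ls_gl-monat
           AND hkont EQ ls_gl-hkont
           AND prctr EQ ls_gl-prctr
           AND budat EQ ls_gl-budat
           AND gsber EQ ls_gl-gsber
           AND zuonr EQ ls_gl-zuonr
           AND belnr EQ ls_gl-belnr.
    APPEND ls_kostl TO pt_kostl_acc.
  ENDSELECT.
ENDLOOP.
```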
regards,
nilesh.
2009 Nov 05 2:07 PM
Hi Nilesh,
That will work too, but the table-split proposal will be faster: SELECT ... ENDSELECT loops and fetches one record at a time, while splitting into packages of 1000 fetches 1000 rows at a time, which is faster than looping 1000 times.
In any case, both are faster than the current approach Senthil described.
Cheers
Jemin
2009 Nov 05 2:11 PM
hi jemin,
This was just a suggestion. Both of them would work fine. In our case, after analyzing the ST05 trace, we decided to go with SELECT ... ENDSELECT. It is up to the user to decide which type of query suits their system best.
regards,
nilesh.
2009 Nov 05 6:31 PM
Thanks to all.
Here is my problem. I had the same issue in QA a few weeks back. QA was refreshed with PRD data, so the number of records is almost the same: around 13.1 million. This is a user exit in a standard BI extractor, and it used to hang at the sequential read from BSIS. I created a secondary index, moved it to QA, and updated the statistics. It worked fine: I could extract almost two quarters of data to BW without any issue.

I moved all the transports to production a couple of days back and started the extractor again. It got stuck at the same BSIS SELECT. I have the same index in production and have also updated the statistics there. I don't have ST05 access, so I have asked Basis to run a trace to find out which index the database is using to select from BSIS.

I can change the code according to the suggestions above, but in BI each packet is 15K records, so the SELECT runs once per packet; I don't think 15K records is too high. I thought of asking Basis to look into this issue, since it works fine in QA. Do you guys have any suggestions? Again, thanks for your help.
Edited by: Senthil Esakkiappan on Nov 5, 2009 7:31 PM