2007 Mar 14 3:03 AM
Hi,
I'm trying to join the following FI tables into 1 table:
bkpf
kna1
knb1
bsid
and these into another table:
bkpf
kna1
knb1
bsad
...so is there any efficient way of joining these? The current code uses inner joins, but it's causing performance issues.
I'm thinking of using FOR ALL ENTRIES...
Any comments on which method, and why that method? If I use FOR ALL ENTRIES,
what is the order I should start with?
Thanks.
2007 Mar 14 3:14 AM
The amount of data in these tables does cause the performance issue.
The only way to increase the performance is to use all key fields of the tables when extracting data.
If this SELECT is called many times, then instead fetch all the required data from the tables into 4 different internal tables once, and do READ TABLE statements with a key on them.
This method still applies if you just read the data from BKPF, BSAD and BSID into internal tables and use READ on them.
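A minimal sketch of that "select once, READ many" pattern (the field lists, literals and variable names here are illustrative assumptions, not the poster's actual code):

```abap
* Illustrative sketch only: buffer the table once, then read in memory.
TYPES: BEGIN OF ty_knb1,
         kunnr TYPE knb1-kunnr,
         bukrs TYPE knb1-bukrs,
         busab TYPE knb1-busab,
       END OF ty_knb1.

DATA: lt_knb1 TYPE SORTED TABLE OF ty_knb1
                   WITH UNIQUE KEY kunnr bukrs,
      ls_knb1 TYPE ty_knb1.

* One selection up front instead of repeated SELECTs inside a loop...
SELECT kunnr bukrs busab
  FROM knb1
  INTO TABLE lt_knb1
  WHERE bukrs IN s_bukrs
    AND kunnr IN s_kunnr.

* ...then cheap in-memory reads; on a SORTED table a full-key
* READ is a binary search automatically.
READ TABLE lt_knb1 INTO ls_knb1
     WITH TABLE KEY kunnr = '0000012345' bukrs = '1000'.
IF sy-subrc = 0.
  WRITE: / ls_knb1-busab.
ENDIF.
```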
Regards,
Amit
Reward all helpful replies.
2007 Mar 14 3:26 AM
I have answered you in another post, here it is
You first need to understand what is contained in the secondary index tables as compared to the main FI document posting table BKPF. Then you can make a better decision in joining.
The index tables BSID (only customers' open items) and BSAD (only customers' cleared items, which pile up over time into a huge table) are much smaller in size compared to BKPF, which also contains postings to the general ledger, vendor postings and so on. BSID and BSAD together are a subset of the customer items behind BKPF (open vs. cleared items respectively); an item moves from BSID to BSAD when it is cleared.
KNA1, on the other hand, is a master data table and should be smaller still (because for each customer there may be many FI postings).
Also the MOST IMPORTANT point is which fields are available to you for selection? For example, if you have customer number, first join KNA1 with KNB1 and then with BSAD or BSID, then join with BKPF.
So tell us the selection fields that are available to you to seed the query
2007 Mar 14 3:57 AM
Thanks for all the input. I'll reward after I try them out.
I'm also taking this information into consideration:
total no of records in tables:
kna1: 2,356,681
knb1: 6,430,912
bkpf: 30,489,104
bsid: 507,874
bsad: 18,075,629
some fields include:
bukrs "Company code, mandatory, usually single value selection
kunnr "Customer No, mandatory, may be single, range or selecting all
busab "Accounting clerk, mandatory, may be single or range values
ktokd "Customer Account Group, mandatory, usually single value selection
budat "13 months selection, mandatory, range, fixed range based on current date
zuonr " not mandatory
2007 Mar 14 1:36 PM
Why not keep kna1 and knb1 in one table and bkpf, bsid and bsad in another? It will help reduce the database load.
2007 Mar 14 5:41 PM
Hi Charles,
Depending on your requirement, there are multiple ways you can try to solve this issue:
1) Try joining KNA1 & KNB1 and use FOR ALL ENTRIES (or loop and select) against BKPF and BSAD or BSID separately, using an index (as they hold huge data volumes).
2) Join KNA1, KNB1 and BKPF, and use the result against BSAD and BSID separately, as the first result set is common to both tables.
3) If indexes exist, try reading the tables separately and then joining them in memory.
Try all the methods and check the performance.
Hope you find a solution here..
Let me know if you have further questions..
BR
Rakesh
2007 Mar 16 8:08 AM
I'm thinking of the following steps:
1) inner join kna1, knb1
2) access bsid with FOR ALL ENTRIES (kna1/knb1 result)
3) access bsad with FOR ALL ENTRIES (kna1/knb1 result)
4) append the bsad results to the bsid itab
5) access bkpf with FOR ALL ENTRIES (bsid itab)
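Steps 2 to 4 could look roughly like this (a sketch only: the field lists are trimmed, and itab, i_bsid, backdate_13 and p_r_date are assumed to be declared as in the later posts):

```abap
* Sketch of steps 2-4. Two FOR ALL ENTRIES caveats to remember:
* - an empty driver table selects the WHOLE table, so guard it;
* - duplicate result rows are removed implicitly, so the field
*   list must include the full key (bukrs belnr gjahr buzei).
IF itab IS NOT INITIAL.
  SELECT bukrs belnr gjahr kunnr buzei budat dmbtr
    FROM bsid
    INTO TABLE i_bsid
    FOR ALL ENTRIES IN itab
    WHERE bukrs = itab-bukrs
      AND kunnr = itab-kunnr
      AND budat BETWEEN backdate_13 AND p_r_date.

  SELECT bukrs belnr gjahr kunnr buzei budat dmbtr
    FROM bsad
    APPENDING TABLE i_bsid        " step 4: one combined itab
    FOR ALL ENTRIES IN itab
    WHERE bukrs = itab-bukrs
      AND kunnr = itab-kunnr
      AND budat BETWEEN backdate_13 AND p_r_date.
ENDIF.
```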
The problem is I can't really make use of indexes for accessing bsid and bsad,
which is causing the slowness.
Accessing bsid has the same selection criteria and WHERE condition as bsad,
but the WHERE fields use IN with ranges, so the index cannot be used:
"select with quite a number of fields....
WHERE bukrs IN s_bukrs
AND kunnr IN s_kunnr
AND budat BETWEEN backdate_13 AND p_r_date
AND blart IN r_blart
AND zuonr IN szuonr.
2007 Mar 16 9:06 AM
Hi Charles,
refer to this blog related to FI tables:
/people/rob.burbank/blog/2006/02/07/performance-of-nested-loops
regards,
madhu
2007 Mar 16 10:59 AM
What is wrong with this SQL statement? An ABAP dump is generated.
Checking through ST11, I found this:
ERROR => max. statement length (65536) exceeded
there are
114 records in r_blart
44 records in itab
in this selection statement
SELECT
bukrs
belnr
gjahr
kunnr
blart
buzei
budat
bldat
mansp
shkzg
xblnr
bschl
dmbtr
zuonr
INTO TABLE i_bsid
FROM bsid
FOR ALL ENTRIES IN itab
WHERE bukrs = itab-bukrs
AND kunnr = itab-kunnr
AND budat BETWEEN backdate_13 AND p_r_date
AND blart IN r_blart
AND zuonr IN szuonr.
2007 Mar 16 11:03 AM
Hi
It seems there is not enough space for the WHERE condition; it probably depends on the number of entries in R_BLART.
Also check how you have defined the table i_bsid: it should have only the fields indicated in the SELECT.
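For reference, a declaration for i_bsid that matches that SELECT exactly, field for field and in the same order, would look like this (a sketch; a STANDARD table is assumed, as the poster says later):

```abap
* Structure mirroring the SELECT field list on BSID, in order.
TYPES: BEGIN OF ty_bsid,
         bukrs TYPE bsid-bukrs,
         belnr TYPE bsid-belnr,
         gjahr TYPE bsid-gjahr,
         kunnr TYPE bsid-kunnr,
         blart TYPE bsid-blart,
         buzei TYPE bsid-buzei,
         budat TYPE bsid-budat,
         bldat TYPE bsid-bldat,
         mansp TYPE bsid-mansp,
         shkzg TYPE bsid-shkzg,
         xblnr TYPE bsid-xblnr,
         bschl TYPE bsid-bschl,
         dmbtr TYPE bsid-dmbtr,
         zuonr TYPE bsid-zuonr,
       END OF ty_bsid.

DATA i_bsid TYPE STANDARD TABLE OF ty_bsid.
```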
Max
2007 Mar 16 11:10 AM
But how come there is no dump if there are only
43 records in itab?
So the dump is something to do with the length of the SQL statement,
as mentioned in ST11?
2007 Mar 16 12:44 PM
Hi Charles,
can you post how itab is populated, and its declaration too?
regards,
madhu
2007 Mar 16 2:56 PM
Yes, I suspect it's due to the length of the SQL after it is translated into native SQL.
I wonder if there is any way to solve this?
I'm doing performance tuning for a report which does an inner join on
kna1, knb1, bkpf and bsid.
The report later does another inner join on
kna1, knb1, bkpf and bsad with the same criteria, and combines both itabs into one big itab.
These 2 inner joins can take several hours for large data volumes (13 months' selection),
so I'm trying to break this up into more efficient statements.
itab is a standard table.
SELECT
knb1~bukrs
knb1~kunnr
knb1~busab
kna1~ktokd
kna1~adrnr
kna1~brsch
kna1~name1
INTO TABLE itab
FROM knb1 INNER JOIN kna1
ON knb1~kunnr = kna1~kunnr
WHERE knb1~bukrs IN s_bukrs "Company code
AND knb1~kunnr IN s_kunnr "Customer Number
AND knb1~busab IN s_busab "Accounting clerk
AND kna1~ktokd IN s_ktokd. "Customer Account Group
2007 Mar 18 2:05 PM
Hi Charles,
your problem occurs because the ranges (IN clauses) cannot be split into smaller pieces by the database interface of the SAP kernel, so the size of the generated SQL statement sooner or later exceeds the database limit. If you rewrite it with an internal table and FOR ALL ENTRIES, the kernel will split the SQL into suitable pieces (depending on the rsdb/max_blocking_factor and rsdb/max_in_blocking_factor profile parameters).
If your ranges contain real intervals (low - high, which translate into BETWEEN), you should split them yourself to suit the database limits.
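One way to apply this to the failing SELECT above: FOR ALL ENTRIES accepts only one driver table, so the 44 itab keys and the 114 single values in r_blart can be combined into one driver table, which the kernel then splits into blocks for the database. A sketch (field lists trimmed; assumes r_blart holds only single values, sign 'I' / option 'EQ'):

```abap
* Build one combined driver table (company code, customer, doc type)
* so FOR ALL ENTRIES can split the statement into database-sized blocks.
TYPES: BEGIN OF ty_drv,
         bukrs TYPE bsid-bukrs,
         kunnr TYPE bsid-kunnr,
         blart TYPE bsid-blart,
       END OF ty_drv.

DATA: lt_drv   TYPE STANDARD TABLE OF ty_drv,
      ls_drv   TYPE ty_drv,
      ls_itab  LIKE LINE OF itab,
      ls_blart LIKE LINE OF r_blart.

LOOP AT itab INTO ls_itab.
  LOOP AT r_blart INTO ls_blart
       WHERE sign = 'I' AND option = 'EQ'.
    ls_drv-bukrs = ls_itab-bukrs.
    ls_drv-kunnr = ls_itab-kunnr.
    ls_drv-blart = ls_blart-low.
    APPEND ls_drv TO lt_drv.
  ENDLOOP.
ENDLOOP.

IF lt_drv IS NOT INITIAL.
  SELECT bukrs belnr gjahr kunnr blart buzei budat dmbtr
    FROM bsid
    INTO TABLE i_bsid
    FOR ALL ENTRIES IN lt_drv
    WHERE bukrs = lt_drv-bukrs
      AND kunnr = lt_drv-kunnr
      AND blart = lt_drv-blart
      AND budat BETWEEN backdate_13 AND p_r_date.
ENDIF.
```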
Regards
Ralph Ganszky
2007 Mar 16 6:04 PM
Here is the deal.
In your case the inner join is working against you. You will be far better off breaking the inner join down into separate sorted internal tables. Let me know if you need help with that.
Also consider reading the data in packages rather than all at once, maybe 25,000 to 50,000 rows at a time. Build them into one big table if it is for a report; if it is for a file, you are better off sending the combined string directly to the file; if it is ALV, then you have no choice but to build a big internal table; if you are doing SAPscript or WRITE statements, then consider putting the report logic inside the selection loop.
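Package-wise reading could be sketched like this (the package size and field list are illustrative only):

```abap
* Sketch: read BSID in packages instead of one huge SELECT.
* SELECT ... PACKAGE SIZE runs as a loop and must end with ENDSELECT;
* lt_pack is replaced on each pass.
TYPES: BEGIN OF ty_item,
         bukrs TYPE bsid-bukrs,
         belnr TYPE bsid-belnr,
         gjahr TYPE bsid-gjahr,
         buzei TYPE bsid-buzei,
         dmbtr TYPE bsid-dmbtr,
       END OF ty_item.

DATA: lt_pack TYPE STANDARD TABLE OF ty_item,
      lt_all  TYPE STANDARD TABLE OF ty_item.

SELECT bukrs belnr gjahr buzei dmbtr
  FROM bsid
  INTO TABLE lt_pack
  PACKAGE SIZE 25000
  WHERE bukrs IN s_bukrs
    AND kunnr IN s_kunnr.
  " process each package here, or collect into one big table
  APPEND LINES OF lt_pack TO lt_all.
ENDSELECT.
```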
Let me know if you need any assistance in any areas I have described.
2007 Mar 17 9:06 AM
Hi Richard,
This is just a normal report with a total of almost 30 variants, not using ALV,
but it requires calculations, so there is no choice but to combine everything into one big itab.
The problem is with the inner join of the 4 tables,
and this is done 2 times: first with bsid, then with bsad, and then both itabs are combined.
I realised that if I break up these 2 inner join statements into smaller statements and use FOR ALL ENTRIES, I may encounter other problems.
E.g. if I start by inner joining kna1 and knb1,
some variants will result in a huge amount of customer data, so FOR ALL ENTRIES would have a problem there.
2007 Mar 23 2:26 AM
hi,
How about the logical database BRF?
Maybe that way is easier for you.