Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

SQL performance: Joining FI tables

Former Member

Hi,

I'm trying to join the following FI tables into 1 table:

bkpf

kna1

knb1

bsid

and these into another table:

bkpf

kna1

knb1

bsad

...so is there an efficient way of joining these? The current code uses inner joins, but it's causing performance issues.

I'm thinking of using FOR ALL ENTRIES.

Any comments on which method, and why? If I use FOR ALL ENTRIES, which table should I start with?

Thanks.

21 Replies

amit_khare
Active Contributor

The amount of data in these tables is what causes the performance issue.

The only way to increase performance is to use all key fields of each table when extracting the data.

If this SELECT is called many times, then instead fetch all the required data from the tables into four internal tables up front and use READ TABLE statements on them.

This method still applies if you just read the data from BKPF, BSAD and BSID into internal tables and use READ on them.
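
A minimal sketch of this buffering idea, assuming BKPF's primary key fields (BUKRS, BELNR, GJAHR) and the selection-screen names used elsewhere in this thread (s_bukrs, backdate_13, p_r_date); the actual field lists will differ:

```abap
* Sketch only: pre-fetch the headers once, then READ the sorted
* internal table inside the loop instead of issuing a SELECT per row.
DATA: lt_bkpf TYPE SORTED TABLE OF bkpf
        WITH UNIQUE KEY bukrs belnr gjahr,
      ls_bkpf TYPE bkpf,
      lt_bsid TYPE STANDARD TABLE OF bsid,
      ls_bsid TYPE bsid.

SELECT * FROM bkpf INTO TABLE lt_bkpf
  WHERE bukrs IN s_bukrs
    AND budat BETWEEN backdate_13 AND p_r_date.

LOOP AT lt_bsid INTO ls_bsid.
  " Keyed read on the sorted table is a fast binary search,
  " not a linear scan.
  READ TABLE lt_bkpf INTO ls_bkpf
    WITH TABLE KEY bukrs = ls_bsid-bukrs
                   belnr = ls_bsid-belnr
                   gjahr = ls_bsid-gjahr.
  IF sy-subrc = 0.
    " matching document header found; process it here
  ENDIF.
ENDLOOP.
```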

Regards,

Amit

Reward all helpful replies.

Former Member

I have answered you in another post; here it is again.

You first need to understand what the secondary index tables contain compared to the main FI document header table BKPF. Then you can make a better decision about the join.

The index tables BSID (open customer items only) and BSAD (cleared customer items only, which pile up over time) are much smaller than BKPF, which also covers postings to the general ledger, vendor postings and so on. So BSID and BSAD each cover only a subset of the documents in BKPF.

KNA1, on the other hand, is a master data table and should be smaller still, because for each customer there may be many FI postings.

Also, the MOST IMPORTANT point is: which fields are available to you for selection? For example, if you have the customer number, first join KNA1 with KNB1, then with BSAD or BSID, and only then join with BKPF.

So tell us the selection fields that are available to you to seed the query.

Former Member

Thanks for all the input. I'll reward points after I try them out.

I'm also taking the following information into consideration.

Total number of records in the tables:

kna1: 2,356,681

knb1: 6,430,912

bkpf: 30,489,104

bsid: 507,874

bsad: 18,075,629

some fields include:

bukrs "Company code, mandatory, usually single value selection

kunnr "Customer No, mandatory, may be single, range or selecting all

busab "Accounting clerk, mandatory, may be single or range values

ktokd "Customer Account Group, mandatory, usually single value selection

budat "13 months selection, mandatory, range, fixed range based on current date

zuonr " not mandatory

Former Member

Any more input? Thanks!


Why not keep KNA1 and KNB1 in one internal table and BKPF, BSID and BSAD in another? It will help reduce the database load.

Former Member

Hi Charles,

Depending on your requirement, there are several ways you could solve this issue:

1) Try joining KNA1 and KNB1, then use FOR ALL ENTRIES (or a loop with SELECTs) on BKPF and on BSAD/BSID separately, using an index (as they hold huge volumes of data).

2) Join KNA1, KNB1 and BKPF, and use that result for BSAD and BSID separately, since the first result set is common to both tables.

3) If suitable indexes exist, try reading the tables separately and then joining the results.

Try all the methods and compare the performance.
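
Option 1 could be sketched roughly as below, using the selection-screen names quoted earlier in the thread (s_bukrs, s_kunnr, s_busab, s_ktokd, backdate_13, p_r_date); the field lists are illustrative only:

```abap
* Sketch of option 1: join the two small master-data tables first,
* then hit the big index table with FOR ALL ENTRIES.
DATA: BEGIN OF ls_cust,
        bukrs TYPE knb1-bukrs,
        kunnr TYPE knb1-kunnr,
      END OF ls_cust,
      lt_cust LIKE TABLE OF ls_cust,
      lt_bsid TYPE STANDARD TABLE OF bsid.

SELECT knb1~bukrs knb1~kunnr
  INTO TABLE lt_cust
  FROM knb1 INNER JOIN kna1
    ON knb1~kunnr = kna1~kunnr
  WHERE knb1~bukrs IN s_bukrs
    AND knb1~kunnr IN s_kunnr
    AND knb1~busab IN s_busab
    AND kna1~ktokd IN s_ktokd.

* Guard against an empty driver table: FOR ALL ENTRIES with an
* empty table would select the whole of BSID!
IF lt_cust IS NOT INITIAL.
  SELECT * FROM bsid INTO TABLE lt_bsid
    FOR ALL ENTRIES IN lt_cust
    WHERE bukrs = lt_cust-bukrs
      AND kunnr = lt_cust-kunnr
      AND budat BETWEEN backdate_13 AND p_r_date.
ENDIF.
```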

Hope you find a solution here..

Let me know if you have further questions..

BR

Rakesh

Former Member

Any more input? Thanks!

Former Member

I'm thinking of the following steps:

1) inner join KNA1 and KNB1

2) access BSID with FOR ALL ENTRIES (on the KNA1/KNB1 result)

3) access BSAD with FOR ALL ENTRIES (on the KNA1/KNB1 result)

4) append the BSAD rows to the BSID itab

5) access BKPF with FOR ALL ENTRIES (on the BSID itab)

The problem is that I can't really make use of indexes when accessing BSID and BSAD, which is what is causing the slowness.

Accessing BSID uses the same selection criteria and WHERE condition as BSAD, but the WHERE fields use IN with ranges, so the index cannot be used effectively:


"select with quite a number of fields....
      WHERE bukrs IN s_bukrs 
        AND kunnr IN s_kunnr
        AND budat BETWEEN backdate_13 AND p_r_date  
        AND blart IN r_blart 
        AND zuonr IN szuonr. 
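
Steps 4) and 5) of the plan could look roughly like this, assuming lt_bsid and lt_bsad were filled with the same field list (BSID and BSAD have near-identical structures) and that table names are as in the steps above:

```abap
* Sketch of steps 4) and 5): combine the BSID and BSAD hits, then
* read each document header exactly once.
DATA: lt_keys LIKE lt_bsid,
      lt_bkpf TYPE STANDARD TABLE OF bkpf.

APPEND LINES OF lt_bsad TO lt_bsid.

* Deduplicate the driver table first: FOR ALL ENTRIES removes
* duplicate result rows anyway, but a small driver table keeps the
* generated SQL short.
lt_keys = lt_bsid.
SORT lt_keys BY bukrs belnr gjahr.
DELETE ADJACENT DUPLICATES FROM lt_keys COMPARING bukrs belnr gjahr.

IF lt_keys IS NOT INITIAL.
  SELECT * FROM bkpf INTO TABLE lt_bkpf
    FOR ALL ENTRIES IN lt_keys
    WHERE bukrs = lt_keys-bukrs
      AND belnr = lt_keys-belnr
      AND gjahr = lt_keys-gjahr.
ENDIF.
```

Since BUKRS, BELNR and GJAHR form BKPF's full primary key, this access path should use the primary index.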


Hi Charles,

refer to this blog on nested-loop performance:

/people/rob.burbank/blog/2006/02/07/performance-of-nested-loops

regards,

madhu


ok thanks

Former Member

What is wrong with this SQL statement? An ABAP dump is generated.

Checking through ST11, I found this:

ERROR => max. statement length (65536) exceeded

There are:

114 records in r_blart

44 records in itab

in this SELECT statement:


    SELECT
      bukrs
      belnr
      gjahr
      kunnr
      blart
      buzei
      budat
      bldat
      mansp
      shkzg
      xblnr
      bschl
      dmbtr
      zuonr
      INTO TABLE i_bsid
      FROM bsid
      FOR ALL ENTRIES IN itab
      WHERE bukrs = itab-bukrs
        AND kunnr = itab-kunnr
        AND budat BETWEEN backdate_13 AND p_r_date
        AND blart IN r_blart
        AND zuonr IN szuonr.


Hi

It seems there is not enough room for the WHERE condition; it probably depends on the number of entries in R_BLART.

Also check how you have defined the table i_bsid: it should have only the fields listed in the SELECT.

Max

Former Member

But how come, when there are 43 records in itab, there is no dump?

So the dump is something to do with the length of the SQL statement, as mentioned in ST11?


Hi Charles,

can you post how itab is populated, and its declaration too?

regards,

madhu


Yes, I suspect it's due to the length of the SQL after it is translated into native SQL, i.e. something to do with the SQL statement length. I wonder if there is any way to solve this?

I'm doing performance tuning for a report which does an inner join of KNA1, KNB1, BKPF and BSID.

The report later does another inner join of KNA1, KNB1, BKPF and BSAD with the same criteria, and combines both itabs into one big itab.

These two inner joins can take several hours for large data volumes (a 13-month selection), so I'm trying to break them up into more efficient statements.

itab is a standard table.


  SELECT
    knb1~bukrs
    knb1~kunnr
    knb1~busab
    kna1~ktokd
    kna1~adrnr
    kna1~brsch
    kna1~name1
    INTO TABLE itab
    FROM knb1 INNER JOIN kna1
      ON knb1~kunnr = kna1~kunnr
    WHERE knb1~bukrs IN s_bukrs   "Company code
      AND knb1~kunnr IN s_kunnr   "Customer Number
      AND knb1~busab IN s_busab   "Accounting clerk
      AND kna1~ktokd IN s_ktokd.  "Customer Account Group


Hi Charles,

your problem occurs because the ranges cannot be split into smaller pieces by the database interface of the SAP kernel, so the size of the generated SQL statement sooner or later exceeds the database limit. If you rewrite the query with an internal table and FOR ALL ENTRIES, the kernel will split the SQL into suitably sized pieces (depending on the rsdb/max_blocking_factor and rsdb/max_in_blocking_factor profile parameters).

If your ranges contain real intervals (low to high, which translate into BETWEEN), you should split those ranges yourself to fit the database limits.
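
One way to apply this, sketched under the assumption that r_blart holds only single values (SIGN 'I', OPTION 'EQ') and that itab has the structure shown earlier in the thread, is to move the r_blart values into the FOR ALL ENTRIES driver table so the kernel can block the statement:

```abap
* Sketch: build a cross-product driver table (44 itab rows x
* 114 document types = ~5,000 rows) and let FOR ALL ENTRIES
* split it into blocks instead of one oversized IN list.
DATA: BEGIN OF ls_drv,
        bukrs TYPE bsid-bukrs,
        kunnr TYPE bsid-kunnr,
        blart TYPE bsid-blart,
      END OF ls_drv,
      lt_drv   LIKE TABLE OF ls_drv,
      ls_itab  LIKE LINE OF itab,
      ls_blart LIKE LINE OF r_blart.

LOOP AT itab INTO ls_itab.
  LOOP AT r_blart INTO ls_blart.
    ls_drv-bukrs = ls_itab-bukrs.
    ls_drv-kunnr = ls_itab-kunnr.
    ls_drv-blart = ls_blart-low.
    APPEND ls_drv TO lt_drv.
  ENDLOOP.
ENDLOOP.

IF lt_drv IS NOT INITIAL.
  SELECT *                        " use the original field list here
    INTO TABLE i_bsid
    FROM bsid
    FOR ALL ENTRIES IN lt_drv
    WHERE bukrs = lt_drv-bukrs
      AND kunnr = lt_drv-kunnr
      AND blart = lt_drv-blart
      AND budat BETWEEN backdate_13 AND p_r_date
      AND zuonr IN szuonr.
ENDIF.
```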

Regards

Ralph Ganszky

Former Member

Here is the deal.

In your case the inner join is working against you. You will be far better off breaking the inner join down into separate sorted internal tables. Let me know if you need help with that.

Also consider reading the data in packages rather than all at once, maybe 25,000 to 50,000 rows at a time, and building the result into one big table if it is for a report. If it is for a file, you are better off sending the combined string directly to the file; if it is ALV, you have no choice but to build one big internal table; if you are doing SAPscript or WRITE statements, then consider putting the report logic inside the selection loop.

Let me know if you need any assistance in any of the areas I have described.
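
The package idea could be sketched with SELECT ... PACKAGE SIZE, which delivers the result set in chunks; the table, chunk size and selections below are illustrative only:

```abap
* Sketch: process BSAD 50,000 rows at a time instead of loading
* 18 million rows into memory at once. Each pass through the
* SELECT ... ENDSELECT body sees the next chunk in lt_chunk.
DATA: lt_chunk TYPE STANDARD TABLE OF bsad,
      ls_chunk TYPE bsad.

SELECT * FROM bsad
  INTO TABLE lt_chunk
  PACKAGE SIZE 50000
  WHERE bukrs IN s_bukrs
    AND kunnr IN s_kunnr.

  LOOP AT lt_chunk INTO ls_chunk.
    " aggregate this chunk into the report totals here
  ENDLOOP.

ENDSELECT.
```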


Hi Richard,

This is just a normal report with almost 30 variants in total, not using ALV, but it requires calculations, so there is no choice but to combine everything into one big itab.

The problem is the inner join of the four tables, and this is done twice: first with BSID, then with BSAD, and then both itabs are combined.

I realised that if I break these two inner join statements up into smaller statements and use FOR ALL ENTRIES, I may run into other problems. E.g. if I start by inner joining KNA1 and KNB1, some variants will produce a huge amount of customer data, so FOR ALL ENTRIES would have a problem there.

Former Member

Will OPEN CURSOR help in this case?

Former Member

hi,

How about the logical database BRF?

Maybe that way is easier for you.


?