Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Determining performance during development

Former Member

Dear all,

We are creating a new report that selects data from BSEG and two other tables. There is very little data available on the development server. In such a situation, will the results of an SQL trace run on the development server be indicative of how the report will perform in production?

We intend to run several tests using joins/views. Can we say that the variant with the best performance here will also be the best-performing variant when the tables hold a huge amount of data?

Thanks,

Michael

1 ACCEPTED SOLUTION

ThomasZloch
Active Contributor

You could compare two SE30 traces with Siegfried's tool Z_SE30_COMPARE

/people/siegfried.boes/blog/2008/01/15/a-tool-to-compare-runtime-measurements-zse30compare

and find out runtime differences and also whether you have problematic coding which would cause an exponential increase in runtime.

However, certain problems will only surface once the code hits a huge database, e.g. different access paths for complex joins due to different value distribution.

Thomas

14 REPLIES

Former Member

Yes, you are right: performance can only be analysed properly when there is a large volume of data.

But you can still run the SQL trace and see how much time, on average, your query takes to pull data from the database. It will give you an idea in case you are not hitting the table with proper keys in the WHERE clause. Based on the runtime in DEV, you can always extrapolate the results for initial performance testing.



Rui_Dantas
Active Contributor

If the tables already exist in production, one thing you can do is get the query from an ST05 trace (in DEV) and then enter that statement in ST05 (in PRD) to see what the actual explain plan will be. From there you can see whether the access will be good or not.


naveen_inuganti2
Active Contributor

Yes, and that is the reason we should follow proper performance guidelines while writing the code itself.

As a developer you should know which SELECT statements, which kinds of data declarations, condition checks and calculations in a report can affect its performance.

Sometimes, even if you write everything well with respect to performance, the program may still cause problems in a higher-level system. This can happen because memory consumption is also part of the program, independent of its runtime and performance in our environment. For this, we should clear local variables wherever they are no longer needed, refresh internal tables, and make use of the FREE statement to release their memory.

In another case, a program may run properly both performance-wise and memory-wise, but still cause problems for other programs and transactions; locking and unlocking of database tables, or the handling of text symbols, are a few examples of this kind of issue.

So always write the best code you can.

All the best

Naveen Inuganti.
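A minimal sketch of the memory hygiene Naveen describes (the data objects here are purely illustrative):

```abap
REPORT z_memory_demo.

DATA: lt_items TYPE STANDARD TABLE OF bseg,
      lv_total TYPE p DECIMALS 2.

START-OF-SELECTION.
  " ... fill and process lt_items, accumulate lv_total ...

  " CLEAR resets the variable to its initial value; CLEAR on a table
  " empties the body but keeps the allocated memory for reuse.
  CLEAR lv_total.

  " FREE additionally releases the table's memory back to the system,
  " which matters once the program no longer needs the mass data.
  FREE lt_items.
```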


Former Member

Hi,

Performance measured with a large volume of data can be considered a benchmark for the production server. But this report deals with BSEG, which contains an enormous amount of data on the production server, and that will lead to performance issues.

In my opinion, you should look into the other tables available alongside BSEG that can be used instead of it, such as BSIS, BSAS, BSID, BSAD, BSIK and BSAK.

Using these tables will help a lot in enhancing the performance of the report.
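As a hedged sketch of that idea (the selection parameters p_bukrs and p_lifnr are hypothetical), reading open vendor items from BSIK instead of BSEG might look like:

```abap
" BSIK (open vendor items) is a transparent table keyed by company
" code and vendor, so the database can use an index access here,
" which is not possible on the cluster table BSEG.
DATA lt_items TYPE STANDARD TABLE OF bsik.

SELECT bukrs belnr gjahr buzei lifnr dmbtr
  FROM bsik
  INTO CORRESPONDING FIELDS OF TABLE lt_items
  WHERE bukrs = p_bukrs
    AND lifnr = p_lifnr.
```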

Regards,

Brajvir


Former Member

It is nice that the tool is recommended, but it is not really intended for that purpose.

Database problems cannot be found if the database tables are very small. There the optimizers do too much interpretation: they will choose a lot of full table scans, for example, which they will not do when the database tables are large. Be aware that the cursor cache must be refreshed when you increase the data gradually, otherwise you will continue to use the same access paths also for the large tables.

The tool is intended for problems with internal tables, and also for problems on large database tables. There you can have performance problems when you process mass data, i.e. 10,000 or more table lines. These problems are normally invisible if you run the program with 100 lines. With the tool you can see that some program parts grow too fast when tables increase from 100 to 500, for example. These parts are the problems at 10,000 lines.

Siegfried
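The kind of faster-than-linear growth Siegfried is describing typically comes from a linear table read inside a loop. A sketch of the usual fix (table and field names are illustrative):

```abap
DATA: lt_bkpf TYPE STANDARD TABLE OF bkpf,
      ls_bkpf TYPE bkpf,
      lt_bseg TYPE STANDARD TABLE OF bseg,
      ls_bseg TYPE bseg.

" A plain READ TABLE ... WITH KEY inside the loop scans linearly,
" so the whole loop scales O(n*m) and explodes at 10,000 lines.
" Sorting once and using BINARY SEARCH makes each lookup O(log m).
SORT lt_bkpf BY bukrs belnr gjahr.

LOOP AT lt_bseg INTO ls_bseg.
  READ TABLE lt_bkpf INTO ls_bkpf
       WITH KEY bukrs = ls_bseg-bukrs
                belnr = ls_bseg-belnr
                gjahr = ls_bseg-gjahr
       BINARY SEARCH.
  IF sy-subrc = 0.
    " ... process the matched header/item pair ...
  ENDIF.
ENDLOOP.
```

This is exactly the sort of difference the SE30 comparison makes visible: at 100 lines both versions look fine, while at 500 lines the linear variant already grows noticeably faster.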


Former Member

BSEG is a very large database table, and you also cannot create a secondary index on it.

So to improve performance, I think you should add more conditions to the WHERE clause in the program.


Former Member

If you use proper performance tuning techniques, it shouldn't matter. For SELECTing from BSEG, see [Quickly Retrieving FI document Data from BSEG|/people/rob.burbank/blog/2007/11/12/quickly-retrieving-fi-document-data-from-bseg]

Rob


former_member207438
Participant

A word of caution. BSEG is a cluster table. The actual database table name is RFBLG.

Only use BUKRS, BELNR, GJAHR fields in your SELECT statements with BSEG.

Also, when selecting from BSEG, make sure you do not run out of memory. I had to use PACKAGE SIZE with my SELECT to limit the number of records held in memory at a time.
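A sketch of the PACKAGE SIZE approach (the package size and the selection parameters p_bukrs and p_gjahr are illustrative):

```abap
DATA lt_bseg TYPE STANDARD TABLE OF bseg.

" Fetch BSEG in chunks of 10,000 rows; lt_bseg is refilled on each
" pass, so the full result set is never held in memory at once.
SELECT * FROM bseg
  INTO TABLE lt_bseg
  PACKAGE SIZE 10000
  WHERE bukrs = p_bukrs
    AND gjahr = p_gjahr.

  " ... process the current package of lt_bseg here ...

ENDSELECT.
```

Note that with PACKAGE SIZE the SELECT becomes a loop and must be closed with ENDSELECT.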



> Only use BUKRS, BELNR, GJAHR fields in your SELECT statements with BSEG.

That's incorrect. It may not mean much from a performance standpoint, but if you know BUZEI, use it in the SELECT to get correct results.

Rob
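A sketch of Rob's point, assuming all key values (held here in hypothetical lv_* variables) are already known:

```abap
DATA ls_bseg TYPE bseg.

" With the complete key supplied, including the line item number
" BUZEI, SELECT SINGLE returns exactly the intended line item.
" Leaving BUZEI out would return an arbitrary line of the document.
SELECT SINGLE * FROM bseg
  INTO ls_bseg
  WHERE bukrs = lv_bukrs
    AND belnr = lv_belnr
    AND gjahr = lv_gjahr
    AND buzei = lv_buzei.
```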


former_member207438
Participant

Rob,

I just read your blog on retrieving FI docs.

On my company's SAP installation, BUZEI is not part of the primary key of BSEG at the database level. Even though it is shown as a primary key field of the BSEG table in the dictionary, the actual database table RFBLG does not contain BUZEI in its primary key.



I don't understand your argument about BUZEI. From a developer point of view, I'm dealing with the DDIC definition of BSEG and not with the physical cluster RFBLG.

Did you measure any negative impact when including BUZEI in the selection on BSEG, provided all other primary key fields are supplied as well?

Thomas


former_member207438
Participant

Agree. Specifying BUZEI (provided all other key fields are also specified) is faster than leaving it out and doing subsequent processing.

I guess it's my Oracle self telling my ABAP self, "BUZEI is not really part of the key".


Former Member

Thanks everyone.