Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Performance tuning SELECTs: Can I force a TRUE db read every time?

former_member210148
Participant
0 Kudos

Good day everyone!

I've been programming in ABAP for almost 6 years now, and I'd like to begin learning more about performance tuning with respect to database performance.

More specifically, I'm testing some things in a particular SELECT in a report program we have that is timing out in the foreground because of the SELECT. When I first run the program, the SELECT goes against the database, as we all know. Subsequent runs, however, use the buffered data, so the response is a lot quicker and doesn't really reflect that first, initial database read.

Am I correct in assuming that I should be testing my various approaches and collecting performance runtimes against that initial, "true" database read? If that's the case, is there any way I can force the system to actually read the database instead of the buffered data? For those experienced with this kind of performance analysis and tuning, what's the best approach for someone very new to this area such as myself?

Thank you,

Dave

1 ACCEPTED SOLUTION

Former Member
0 Kudos

You can add BYPASSING BUFFER to the SELECT. But SAP recommends that you do multiple SELECTs and take the lowest time for your analysis.
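For illustration, a minimal ABAP sketch of the BYPASSING BUFFER addition (the table ZSALES and the WHERE condition are hypothetical examples):

```abap
* BYPASSING BUFFER skips the SAP table buffer, so this SELECT
* always goes to the database, even for a buffered table.
DATA: lt_sales TYPE TABLE OF zsales.

SELECT * FROM zsales
       BYPASSING BUFFER
       INTO TABLE lt_sales
       WHERE bukrs = '1000'.
```

Note that this only bypasses the SAP table buffer; the database may still serve the blocks from its own cache.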

Rob

14 REPLIES


0 Kudos

So the recommendation is to take a few test run readings and get an average?

Thanks Rob!

0 Kudos

No - the lowest. I think you'll find this documented somewhere.
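One way to collect those readings is with GET RUN TIME, keeping the lowest value. A sketch (ZSALES and the WHERE condition are, again, hypothetical):

```abap
* Run the same SELECT several times and keep the lowest runtime.
DATA: lt_sales TYPE TABLE OF zsales,
      lv_start TYPE i,
      lv_stop  TYPE i,
      lv_time  TYPE i,
      lv_min   TYPE i VALUE 2147483647.

DO 5 TIMES.
  GET RUN TIME FIELD lv_start.
  SELECT * FROM zsales INTO TABLE lt_sales
         WHERE bukrs = '1000'.
  GET RUN TIME FIELD lv_stop.
  lv_time = lv_stop - lv_start.
  IF lv_time < lv_min.
    lv_min = lv_time.
  ENDIF.
ENDDO.

WRITE: / 'Lowest runtime in microseconds:', lv_min.
```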

Rob

0 Kudos

Gotcha. I was reading too much into your original reply.

Thanks again, Rob -- this is a big help!

Dave

0 Kudos

A better solution is to use transaction ST05. Look at the stickies at the top of this forum to get some more ideas.

Rob

former_member182566
Active Contributor
0 Kudos

Hi Dave and Rob,

Just my two cents (and yes, I know this is already answered, but..).

I think you might be confusing two things: one is SAP table buffering, and the other is caching at other levels (database, operating system, etc.).

From what I understood, Rob was talking mainly about SAP buffering. In that context it is true that a first execution that loads the buffers (for example, with some not-so-small fully buffered tables) is an atypical execution and should be discarded. In real life you will never see execution times like those, except maybe on the very first execution on a Monday morning.

Another thing is database caching. If you execute a report twice with exactly the same parameters then you might not be actually making physical reads in the second execution. This second execution will be very fast, but that will not be simulating real life: no user wants a report to be fast the second time you execute it with exactly the same parameters.

To avoid this in Oracle you can flush the so-called SGA, but that is not very useful and will probably not get you closer to what happens in real life.

So what to do? In doubt, measure it several times, with different parameters, and probably exclude the extreme values.

Regards,

Rui Dantas

0 Kudos

Yeah - I knew that; I was just trying to keep it simple.

Rob

0 Kudos

Okay, I think my concern was with the database CACHING. That is, the first read takes much longer than subsequent reads. I suppose I could just do a "junk" run with different parameters so that my next test run wouldn't use the cached data. Does that sound feasible?

0 Kudos

The "explain" function in ST05 should give you results independent of caching. Please check those articles and blogs.

Rob

0 Kudos

> I suppose I could just do a "junk" run with different parameters so that my next test run wouldn't use the cached data.

This also won't make sure everything is out of the cache (but again, you don't necessarily want that).

I would still advise you to measure several times and with different parameters.

Rob is right that you should check the Explain Plan to see the access path the database is using (if you don't know how to read an explain plan, you really should learn). Do keep in mind, though, that the explain plan shows estimated costs, and the estimate might not be correct (that is, a query might be estimated as cheaper but end up running slower).

It also depends on whether you are tweaking one specific query or comparing two completely different approaches.

(sorry if I am just making things more confusing, but they are really not that simple)

former_member194613
Active Contributor
0 Kudos

It is good that the difference between the table buffer and caching (mainly DB caching) is explained here.

Generally, we measure executions with cached data and not the physical reads. That is the only data which is reliable and repeatable, and the only data which can be optimized properly.

The first execution will also improve when the subsequent executions improve. But you cannot change the speed of the physical read without changing the hardware.

0 Kudos

Hi,

fully agreed. What our DB colleagues do, if the first execution (physical I/Os) matters, is reorganize the data on the hardware (e.g. sort tables) in order to achieve better access times through better clustering.

Kind regards,

Hermann

0 Kudos

Hello colleagues,

there are other technical measures that may help here. For example, you can make sure that the blocks of a certain DB table are always in the DB cache. Increasing the cache itself can also be a good idea.

But yes, the topic is tricky.

I would actually start with minimizing the physical reads. Here, measures like adding the selected fields from the table to the index (to avoid the table access) may help reduce the number of buffer gets and, subsequently, the physical reads.
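As an illustration of the index idea, a hedged sketch (ZSALES, its fields, and the index are hypothetical): if a secondary index contains all the fields the statement selects, the database can satisfy the query from the index alone and never touch the table blocks.

```abap
* Assuming a secondary index on ZSALES over BUKRS and NETWR:
* selecting only those fields allows an index-only read.
DATA: BEGIN OF ls_sum,
        bukrs TYPE zsales-bukrs,
        netwr TYPE zsales-netwr,
      END OF ls_sum.
DATA: lt_sum LIKE TABLE OF ls_sum.

SELECT bukrs netwr FROM zsales
       INTO TABLE lt_sum
       WHERE bukrs = '1000'.
```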

So we come back to the optimization of the execution plan of an SQL statement.

Cheers,

Yuri

0 Kudos

Hi Yuri,

nice to see you here.

And yes, I agree with you.

First tune the SQL (as far as possible), then optimize storage or change the configuration of the caches (if necessary).

Kind regards,

Hermann