Database resources: how/what to measure?

Former Member

We are running some processes overnight, and we've seen some odd behavior.

We have one process that runs a really big SELECT statement, then exports the data to a comma-delimited file. The export does not use SQL Anywhere's UNLOAD functionality, but a custom-written piece of software that fetches the data row by row (we will change that, but I am not even sure this detail is relevant).
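
For reference, the built-in alternative we would presumably switch to looks something like this (the table name and output path are just placeholders):

UNLOAD
SELECT * FROM big_table
TO 'c:\\temp\\big_table.csv';  -- comma-delimited by default; note the doubled backslashes, and that the server writes the file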

The thing is, on a database that has just started, this process takes 4 minutes. On a database that has been running a couple of days in production, it can take 40 minutes or longer. It's not that the process is stuck; it does eventually finish correctly, but it takes a very, very long time. We suspect we might be doing something that is leaking resources on the database. It might very well not even be this process, but something else that happens during normal business whose effects we don't see until we try to run this process.

What resources does a SQL Anywhere 11 database use? What statistics can/should we look at, and where?
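
For example, would a dump of the server and database properties, before and after things slow down, be a reasonable baseline? Something like this (assuming I have the system procedures right):

SELECT * FROM sa_eng_properties();  -- server-wide statistics (cache, disk and temp file activity, etc.)
SELECT * FROM sa_db_properties();   -- per-database statistics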

Thanks for helping!

Accepted Solutions (0)

Answers (3)

Former Member

On my blog you'll find a whitepaper that discusses diagnosing performance issues, entitled "Diagnosing Application Performance Issues with SQL Anywhere":

http://iablog.sybase.com/paulley/whitepapers/

The whitepaper documents how one can use SQL Anywhere's Application Profiling feature to profile an application and diagnose where the performance issues may lie. For your example specifically, I would look for execution plan differences between the query running on a "cold" server and on a "hot" one. But there may be other factors (blocking, for example) that account for the difference in elapsed time.
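
For a quick first look, the textual plans can be captured in dbisql before and after; the query string below is just a placeholder for the real statement:

SELECT PLAN( 'SELECT * FROM big_table WHERE ...' );           -- long text plan
SELECT GRAPHICAL_PLAN( 'SELECT * FROM big_table WHERE ...' ); -- XML plan, viewable in dbisql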

MCMartin
Participant

We have had a similar problem in the past. You should have a look at the query plans directly after a restart of the database and again after some workload has been performed; I expect you will see a difference. Directly after startup the cache is still empty, so the query optimizer prefers plans that don't rely on cached data. In our case this led to the effect that, directly after startup, it used an index to find the affected rows and was fast. With a warm cache the optimizer chose a sequential scan, relying on a full cache, but in reality it was not as fast as estimated 😉

VolkerBarth
Contributor

@Martin: Interesting effect - have you been able to improve this behaviour (by rewriting your query, using an index hint or the like)?

Breck_Carter
Participant

@Volker: I have seen that behavior on rare occasions, more often with older versions of SQL Anywhere, and yes, forcing an index has worked. That may be why Glenn suggested the "application profiling" feature: capturing the plan on the fly in situations like this is otherwise rather a challenge. Plans are not frozen at compile time; they are built at run time.

MCMartin
Participant

@Volker: It was with 9.0.2, and using an index hint helped, but a patch from Sybase regarding the statistics estimation solved that particular problem for me.
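
For reference, the hint syntax in SQL Anywhere looks roughly like this (the table and index names are invented):

SELECT *
FROM big_table FORCE INDEX ( ix_big_table_date )  -- override the optimizer's access path choice
WHERE date_col >= '2009-01-01';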

Breck_Carter
Participant

One thing that restarting the database does is flush and recreate the "database temporary file".

One thing that can happen to the temporary file over time, as more and more connections are started and complex queries are run, is that it grows in size... and becomes physically fragmented.

I'm not saying this is your problem, but... you can test it as follows: Wait until your Query From Hell slows down, and then defragment just the temporary file to see if that helps.

Step by step (leave the database running while you do this)...

(1) Download Microsoft contig from http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx

(2) Run this dbisql query to find where the temporary file is:

SELECT DB_PROPERTY ( 'TempFileName' );

DB_PROPERTY('TempFileName')
'C:\DOCUME~1\bcarter\LOCALS~1\Temp\sqla0001.tmp'

(3) Copy and paste the dbisql cell value into a contig command and run it:

C:\download\sysinternals\contig_1_55\contig.exe C:\DOCUME~1\bcarter\LOCALS~1\Temp\sqla0001.tmp

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\bcarter>C:\download\sysinternals\contig_1_55\contig.exe C:\DOCUME~1\bcarter\LOCALS~1\Temp\sqla0001.tmp

Contig v1.55 - Makes files contiguous
Copyright (C) 1998-2007 Mark Russinovich
Sysinternals - www.sysinternals.com

Summary:
     Number of files processed   : 1
     Number of files defragmented: 1
     Average fragmentation before: 13 frags/file
     Average fragmentation after : 1 frags/file

C:\Documents and Settings\bcarter>

(4) Determine if the Query From Hell runs any faster. If it does, then we can look for the reason(s) the temporary file is getting messed up... post another question.

If not, then just down-vote THIS answer 🙂
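
And if the temporary file does turn out to be involved, a query like this should show which connections are consuming temporary file pages (assuming the TempFilePages connection property, which I believe is available in 11):

SELECT Number AS connection_id, Value AS temp_file_pages
FROM sa_conn_properties()
WHERE PropName = 'TempFilePages'
ORDER BY CAST( Value AS INT ) DESC;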

Former Member

I will try this. Since we restarted the db last night, it will take a few days to get results. Regardless of the result, the posts on this question get thumbs up, not down, from me. 🙂

Breck_Carter
Participant

You should also do application profiling, now and after things slow down... THAT will take a bit more effort than running contig, but it may find the problem. If you have questions about application profiling, post them as new questions here.
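
In version 11 the profiling data can be collected from Sybase Central, or kicked off manually with diagnostic tracing; a minimal sketch:

ATTACH TRACING TO LOCAL DATABASE;  -- start collecting diagnostic/profiling data
-- ... reproduce the slow run ...
DETACH TRACING WITH SAVE;          -- stop tracing and save the data for analysis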