Commit blocking all database connections

Former Member

We are using an SQL Anywhere database (v12.0.1) and have 2-3 open connections. When one of the connections executes a COMMIT, the other connections (which are performing reads on unrelated tables) are blocked until the commit completes.

In fact, even opening an Interactive SQL session to the database and executing a command such as CALL sa_conn_info(); blocks the client until the commit on the other connection completes. This makes it quite difficult to debug the issue, as the whole database appears to be blocked for the duration of the commit.

Configuration parameters are set to their default values. Some key parameters that I have checked are:

isolation level = 0
cooperative_commits = On
cooperative_commit_timeout = 250ms
delayed_commits = Off

Does anybody have any idea of any database configuration options or anything else which may be causing this issue?
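
For reference, here is roughly how I have been checking those settings and looking for lock waits from Interactive SQL. This is only a sketch: I am assuming the option values are exposed through CONNECTION_PROPERTY() under these names and that sa_conn_info() includes a BlockedOn column, so the property names may need checking against the docs.

    -- effective option settings for the current connection
    SELECT CONNECTION_PROPERTY( 'isolation_level' ),
           CONNECTION_PROPERTY( 'cooperative_commits' ),
           CONNECTION_PROPERTY( 'cooperative_commit_timeout' ),
           CONNECTION_PROPERTY( 'delayed_commits' );

    -- is any connection actually blocked on a row lock?
    SELECT Number, Name, BlockedOn
      FROM sa_conn_info()
     WHERE BlockedOn <> 0;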

Breck_Carter
Participant

That is an unexpected symptom... even wild guesses like "are you running with dbsrv12 -m?" wouldn't (shouldn't) cause it.

Here's another wild guess: Is wait_for_commit = 'On' for the connection doing the COMMIT?
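
For example, running something like this on the connection that issues the COMMIT should show it (assuming the option is visible as a connection property under that name):

    SELECT CONNECTION_PROPERTY( 'wait_for_commit' );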

How long does the COMMIT take? They usually don't take very long because they aren't actually doing much of anything (ROLLBACK is the real killer).

What else is going on, at or near the point the COMMIT is executed? Try running a debugger to see if the commit is actually the culprit... an application-level debugger if that's where the COMMIT is, or the stored procedure debugger if it is inside a stored procedure... the idea is to see what the code is actually doing when the other connections get stuck.

justin_willey
Participant

What is the client? Is it possible that connection settings are getting overwritten and the troublesome connection is in fact operating at a higher isolation level? Are you able to check the actual level being used by the connection in Sybase Central (maybe you are - I wasn't quite sure from the question)?
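
For what it's worth, if Sybase Central doesn't show it, a query along these lines from Interactive SQL should list what each connection is really using; I'm assuming here that the option appears in sa_conn_properties() under the name isolation_level:

    SELECT Number, Value
      FROM sa_conn_properties()
     WHERE PropName = 'isolation_level';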

Breck_Carter
Participant

...but why would the effects of a high isolation level only show up when a COMMIT is executed, rather than earlier, when the locks were first obtained?

And why would an independent connection doing sa_conn_info() be affected?

...this is a puzzler, worthy of Sherlock Holmes 🙂

justin_willey
Participant

Very true - it can't be something as simple as that.

Breck_Carter
Participant

...no, it's something even simpler ...doh! 🙂

Accepted Solutions (1)

Former Member

Thanks for the comments/suggestions from everybody! It turns out that the solution was very simple.

The problem was that the database was not using a transaction log. As I understand it, this results in a checkpoint being performed on every commit. In this particular case that had a very severe performance impact: commits were sometimes taking up to 60 seconds to complete.

In our case we are using the DBTools C API to create an (embedded) database, and a transaction log is not created by default unless one is explicitly specified.
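
For anyone else hitting this: the fix on our side was simply to pass an explicit transaction log name in the a_create_db structure. Below is a rough sketch of what our creation code now does. Treat it as an outline only; the exact field names, callback signature and init/fini calls should be checked against your dbtools.h, and the file names are just placeholders.

    #include <string.h>
    #include "dbtools.h"

    /* simple message/error callback for DBTools */
    static short msg_callback( const char *msg )
    {
        return 0;   /* ignore (or log) informational messages */
    }

    short create_embedded_db( void )
    {
        a_dbtools_info  dbt;
        a_create_db     db;
        short           ret;

        memset( &dbt, 0, sizeof( dbt ) );
        dbt.errorrtn = msg_callback;
        DBToolsInit( &dbt );

        memset( &db, 0, sizeof( db ) );
        db.version  = DB_TOOLS_VERSION_NUMBER;
        db.dbname   = "mydb.db";
        db.logname  = "mydb.log";   /* the important part: leaving this NULL or empty
                                       creates the database WITHOUT a transaction log,
                                       so every COMMIT forces a checkpoint */
        db.errorrtn = msg_callback;
        db.msgrtn   = msg_callback;

        ret = DBCreate( &db );

        DBToolsFini( &dbt );
        return ret;
    }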

Breck_Carter
Participant

THANK YOU for posting the solution! It should be a real head-slapping moment for people (like me) who didn't suggest that answer before, because all the so-called experts should remember that running without a transaction log makes commits slowwwww, as in point 19 here:

http://sqlanywhere.blogspot.ca/2008/04/how-to-make-sql-anywhere-slow.html

The DBTools behavior sounds like a bug because the docs for a_create_db say this... http://dcx.sybase.com/index.html#sa160/en/dbprogramming/programming-sadbtoolscpp-a-create-db-str.htm...

logname const char * New transaction log name. Equivalent to dbinit -t option.

That implies it should work like dbinit, where the default is to create a transaction log. With dbinit you have to specify -n to suppress the log, and that's not even possible with a_create_db.
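
That is, from the command line (switches recalled from memory, so verify them against your dbinit docs):

    dbinit demo.db
        creates demo.db plus a default transaction log (demo.log) next to it
    dbinit -t e:\logs\demo.log demo.db
        creates demo.db with the log at an explicitly chosen location
    dbinit -n demo.db
        creates demo.db with no transaction log at all (the slow-commit case)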

If it's not a bug, it certainly violates The Watcom Rule: "Watcom does things the way they should be done."

VolkerBarth
Contributor

Just for clarification:

Have you tested this a_create_db behaviour, or is this based on the API doc?

Breck_Carter
Participant

Me? No, I am responding to Yukiko's statement... according to the API docs there is no way to specify whether a default log file should be created or not; there is only a field for giving an explicit log file spec. I assume Yukiko left that field empty (a safe assumption IMO), and he is reporting that this is the reason the log file was not created.

Perhaps it is worth an independent test ... hint ... hint ... Volker 🙂

VolkerBarth
Contributor

I hope John can tell what the actual code does (yes, that's another hint...:)

Breck_Carter
Participant

John's a busy guy... he might not read this far down in the comments... but he would read a new post by... hint... hint... 🙂

I've tested this and can confirm that calling a_create_db with a NULL or empty string for logname results in a database without a transaction log. It seems that dbinit fills in the default log name itself before making the API call.

VolkerBarth
Contributor

@Mikel: I'd suggest adding this to the API documentation (with a CAVEAT), since the current documentation apparently makes readers (at least Breck and me) expect a different behaviour...

The current documentation certainly makes it look like 'logname' is the exact equivalent of the -t option on dbinit. I've added a comment to the dcx page for now, but I'll see if we can get the page updated to be more descriptive of the actual behavior in the future.

Former Member

Thanks for the follow-up from everybody! As concluded, the documentation for a_create_db is not clear and should be updated. I suppose there should also be a way to create a database without a transaction log, which is probably what the NULL/empty value was intended (but not documented) to do.

johnsmirnios
Employee

I checked and Mikel is correct: a NULL or empty logname creates a database without a log. Note that there is no field in a_create_db that specifies whether you want a log or not, so the desired behaviour is inferred from the value of the logname field. Some magic value or values need to mean "no log".

Personally, I generally don't like magic values like these, and in this case it also means that the code for generating the default log name has to be replicated outside of the dbtools API. If I had written the API, I might have had a separate boolean to indicate whether a log was wanted or not. I don't think it's worth changing the interface, but the doc could be improved.
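
Purely as an illustration of what I mean (a hypothetical structure, not the real a_create_db):

    typedef struct my_create_db_options {
        const char *logname;    /* explicit log file name, or NULL                        */
        int         want_log;   /* nonzero: create a default log even if logname is NULL  */
    } my_create_db_options;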

Answers (1)

Former Member
Breck_Carter
Participant

The "similar incident" is not similar at all; it talks about checkpoints which are completely different from commits.