on 2012 Feb 28 4:25 AM
After moving our product from SA 10.0.1 to SA 12 (12.0.1 EBF 3311 on Linux; 3324 on Windows), we encountered a problem that did not appear in SA 10. While a long SELECT (a report that runs for a few hours) is reading from table A, a LOAD TABLE statement that loads a file into the same table A cannot proceed and waits until the SELECT finishes. The SELECT (report) is issued from a Java server application connecting to the database via jConnect (JDBC 3), or an INSERT ... FROM SELECT is issued from a T-SQL stored procedure.
Question: what changed between SA 10 and SA 12 in the table locking mechanism, and what can be done to solve this problem?
LOAD TABLE has always required an exclusive lock on a base table -- that is not new behaviour in SA 12. You could use LOAD TABLE to put the data into a temporary table and then use INSERT ... FROM SELECT to put it into the base table, but that is extra processing and requires temporary space proportional to the size of the file you are loading.
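The staging approach described above might look something like this (table, column, and file names here are invented for illustration, not taken from the original poster's schema):

```sql
-- Sketch of the staging approach: LOAD TABLE's exclusive lock then
-- only affects the temporary table, which no other connection can see.
CREATE LOCAL TEMPORARY TABLE staging_a (
    id  INTEGER,
    val VARCHAR(100)
) ON COMMIT PRESERVE ROWS;

LOAD TABLE staging_a FROM '/data/a_export.txt';

-- Ordinary row-level locking applies to this INSERT, so concurrent
-- readers of table_a are not blocked the way LOAD TABLE would block them.
INSERT INTO table_a ( id, val )
SELECT id, val FROM staging_a;

COMMIT;
```

The trade-off, as noted above, is the extra pass over the data and the temporary space for the staging copy.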
Oh, you can also use INSERT ... FROM OPENSTRING rather than using a temp table. INSERT will add locks and have the undo-log and redo-log overhead of regular inserts, but it doesn't require an exclusive schema lock.
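A hedged sketch of the OPENSTRING alternative (again, the table, column, and file names are assumptions for illustration):

```sql
-- The file is read as a rowset via OPENSTRING and inserted with
-- ordinary row locks and redo/undo logging; no exclusive schema
-- lock on table_a is required, unlike LOAD TABLE.
INSERT INTO table_a ( id, val )
SELECT id, val
FROM OPENSTRING( FILE '/data/a_export.txt' )
     WITH ( id INTEGER, val VARCHAR(100) ) AS src;

COMMIT;
```

This avoids the temporary-table copy, at the cost of full per-row logging for the load.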
Does your app use different isolation levels with v10 and v12? Or different transaction processing (fewer commits, longer transactions)?
As to the docs, the exclusive table lock used by the LOAD TABLE statement (AFAIK!) should not prevent a transaction at isolation level 0 from reading that table - but it would block transactions running at higher isolation levels, and vice versa...
OK, then I stand corrected: even at isolation level 0, another connection would be unable to read from that table. So my suggestions above are moot...
FWIW: The LOAD TABLE statement docs are not that clear. To cite:
LOAD TABLE places a write lock on the whole table.
...
Requires an exclusive lock on the table.
That (and the fact that LOAD TABLE doesn't make schema changes) had made me think it would just use an exclusive table lock and no exclusive schema lock.
Could this issue be related to the following thread in the forum: http://sqlanywhere-forum.sap.com/questions/5946/v12-isolation-levels-and-row-locking
You're right that the doc is not very explicit about the types of locks acquired. Even the SA doc referenced by the link you provided with your first comment is poorly worded:
"An exclusive table lock prevents any other transaction from accessing the table for any operation (reads, ..."
followed by
"transactions executing at isolation level 0 can still read the rows in a table whose table lock is held exclusively. "
Those are contradictory. The first statement should use "locking" rather than "accessing" but the subtleties of schema vs table locks still aren't incredibly clear with that change.
Well, it's all about exclusive schema locks - and that means the lock is used to prevent other connections from reading a table that is about to be altered.
That's clearly a necessity for ALTER TABLE. What I do not understand is why LOAD TABLE would need that type of lock (instead of an exclusive table lock) - but John will know.
LOAD TABLE uses an exclusive schema lock because it uses page-level undo. If the LOAD fails, modified table & index pages are restored from the checkpoint log to return them to how they were at the beginning of the LOAD. The pages are restored in arbitrary order as they come out of the checkpoint log, and even an isolation level 0 scan couldn't cope with that.
As this is starting to become a "Locking" class, would snapshot isolation be of help here?
I.e. would a SELECT on table A with isolation level "snapshot" (or its variants) still block the LOAD TABLE and vice versa?
Again, the docs are not very clear on this advanced topic, as they primarily explain that snapshot isolation prevents read locks...
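For concreteness, a reader running under snapshot isolation might be set up roughly like this (a sketch only - it assumes the database option `allow_snapshot_isolation` has been set to 'On'; whether this actually avoids blocking against LOAD TABLE's exclusive schema lock is exactly the open question being asked here, since snapshot isolation is documented as removing read locks, not schema locks):

```sql
-- Enable snapshot reads for this connection only.
SET TEMPORARY OPTION isolation_level = 'snapshot';

-- The report reads a committed snapshot of table_a and takes no
-- read locks on the rows it scans.
SELECT COUNT(*) FROM table_a;
```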
Well, if "quick data insert" refers to a not-too-big amount of data - rather than a huge amount that would take too long to insert via INSERT - then I would strongly follow John's advice (see his answer) to LOAD into a temporary table and then INSERT into the real table from the temporary one. That won't block any readers unnecessarily, and you won't have to drop connections...