
What are best Practices for Unit-of-Work Lock Management

glenn_barber
Participant
5,076

We have complex single transactions in our powerbuilder client-server application that can involve 20 or more subordinate tables in a complex hierarchy. We would like to control access to the entire set via some lock management scheme which allows inquiry and reporting access but that prevents more than one authorized user updating or inserting records in the set.

We would also like to be able to see all such locks system wide (across tables) and release them in case of a system restart or user lockup.

Right now we are looking at implementing our own separate record-locking table which records the lock by table and key value, along with the userid, time locked, application, and connection.

Are there already well worked out schemes for such management - or are there built-in features of SQL Anywhere which could facilitate this?

Accepted Solutions (0)

Answers (4)


Former Member

For the trigger: let's assume that your lock table will contain a set of rows with each row constituting a lock. In that row you'll store the connection id, user name, timestamp, possibly some other meta data. When the lock is acquired (the row is updated for that user/connection) a COMMIT makes the lock durable.

Then, whenever an IUD is performed, an IUD trigger queries the lock table ensuring the connection ID of the current connection is identical to that held in the lock table. If not, the trigger raises an exception as the update transaction doesn't hold the required lock to perform the update. You'll want to give yourself an "out" by having the trigger disregard lock checking for a sysadmin connection, which will permit you to modify the database as necessary to correct problems without issuing a bunch of DDL to disable or modify the trigger.
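A minimal sketch of what that could look like, assuming a hypothetical uow_lock table and an order_header table standing in for one member of the hierarchy (all names and columns here are illustrative, not from the original posts):

```sql
-- Hypothetical lock table: one row per held unit-of-work lock.
CREATE TABLE uow_lock (
    table_name  VARCHAR(128) NOT NULL,
    pk_value    VARCHAR(64)  NOT NULL,
    conn_id     INTEGER      NOT NULL,
    user_name   VARCHAR(128) NOT NULL,
    locked_at   TIMESTAMP    NOT NULL DEFAULT CURRENT TIMESTAMP,
    PRIMARY KEY ( table_name, pk_value )
);

-- Sketch of an update trigger on one table in the hierarchy.
CREATE TRIGGER check_uow_lock BEFORE UPDATE ON order_header
REFERENCING OLD AS old_row
FOR EACH ROW
BEGIN
    -- Give the sysadmin an "out": skip the check for the DBA.
    IF CURRENT USER <> 'DBA' THEN
        IF NOT EXISTS ( SELECT 1 FROM uow_lock
                        WHERE table_name = 'order_header'
                          AND pk_value   = CAST( old_row.order_id AS VARCHAR(64) )
                          AND conn_id    = CONNECTION_PROPERTY( 'Number' ) ) THEN
            RAISERROR 99999 'Row is not locked by this connection';
        END IF;
    END IF;
END;
```

Matching INSERT and DELETE triggers would follow the same pattern, each checking the lock row that covers the top of the hierarchy.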

glenn_barber
Participant
0 Kudos

What we are currently experimenting with is somewhat application driven - ensuring that the user cannot get update access without successfully inserting a lock record for the table and key (enforced via alternate-key uniqueness). We commit the lock record and test for success before the user's application can access the entire transaction. We also create a lock on insertion of new transactions (while the user remains in the edit process), and remove the lock on deletion (that could be managed by a trigger) and when the user completes their work. We have system event triggers that clean up a connection's locks on disconnect, and another that truncates the lock table on startup. As the implementation allows an administrator to query the lock, the administrator can also override it. A nice feature is that a user encountering an unexpected conflict is advised as to who is working on the transaction set that they have requested.
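The acquisition step and the two cleanup events described above might be sketched like this (assuming the hypothetical uow_lock table; the key value '10042' is purely illustrative):

```sql
-- Acquiring the lock: the INSERT fails if another user already
-- holds it, because ( table_name, pk_value ) must be unique.
INSERT INTO uow_lock ( table_name, pk_value, conn_id, user_name )
VALUES ( 'order_header', '10042',
         CONNECTION_PROPERTY( 'Number' ), CURRENT USER );
COMMIT;

-- Release this connection's locks when it disconnects.
CREATE EVENT cleanup_uow_locks TYPE DISCONNECT
HANDLER
BEGIN
    DELETE FROM uow_lock
    WHERE conn_id = EVENT_PARAMETER( 'ConnectionID' );
    COMMIT;
END;

-- Clear all stale locks when the database starts.
CREATE EVENT reset_uow_locks TYPE DatabaseStart
HANDLER
BEGIN
    TRUNCATE TABLE uow_lock;
END;
```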

I can see how the implementation of the IUD trigger on any tables in the set of transactions might alert us to uncontrolled access to the tables via rogue processes; however, coordinating the trigger on the table and row involved may require a rather complicated query to determine whether the affected row is in the set controlled by the lock.

Former Member
0 Kudos

I would agree with you, Glenn, that the process could get complex. In particular, the more fine-grained the lock implementation (or the intent of the implementation to be fine-grained), the greater the possibilities of either (1) a transaction blocking on a lock query or update for a different lock - which could be mitigated through the blocking_timeout option - or (2) deadlock caused by concurrent update transactions attempting to acquire the same locks but in different orders.
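For reference, the mitigation mentioned above is a connection-level option; roughly:

```sql
-- Fail fast instead of blocking indefinitely on a contested lock.
-- blocking_timeout is in milliseconds; 0 means wait forever.
SET TEMPORARY OPTION blocking_timeout = '5000';

-- Alternatively, refuse to block at all on this connection:
SET TEMPORARY OPTION blocking = 'Off';
```

With either setting, a blocked request returns an error that the application can catch and report, rather than tying up a worker while it waits.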

Former Member

Answers can vary depending on precisely what you want the semantics to be.

Your desire for only one updater, thus serializing update transactions, can be accomplished in a couple of ways. You could create a new table expressly for the purpose of controlling the serialization of update transactions and use the LOCK TABLE statement to lock it (all other attempts will fail until the owner issues a COMMIT or ROLLBACK). Or you could go finer-grained and do so at the row level using UPDATE statements, again causing other updaters to block. Keep in mind that in SQL Anywhere each blocked transaction will tie up a worker for the duration.
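Both variants can be sketched in a few statements, assuming a hypothetical single-row control table called update_gate:

```sql
-- Coarse-grained: serialize all updaters on one control table.
-- Other attempts block (or fail, depending on the blocking
-- option) until this transaction issues COMMIT or ROLLBACK.
LOCK TABLE update_gate IN EXCLUSIVE MODE;

-- Finer-grained: take a long-term row lock on one control row.
-- A same-value update acquires the write lock without actually
-- changing any data.
UPDATE update_gate SET owner = owner WHERE gate_id = 1;
```

In both cases the lock disappears at the end of the system transaction, which is exactly the limitation discussed later in this thread.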

With that - what semantics do you want read transactions to have? Do you want them to have a consistent view of the entire hierarchy? Do you want to permit dirty reads? What kind of blocking are you comfortable with?

glenn_barber
Participant
0 Kudos

Thank you Glenn

As the owner might issue one or several commits while working on the set of records, and others may want unrestricted access to other records in the same table, it doesn't look like LOCK TABLE is a candidate.

Using a row-level lock on the top record in each set might prevent others from maintaining the set of records, if we require that they acquire a lock before any maintenance activity. However, this doesn't in any way protect the entire transaction set, nor prevent insertions into the set from other applications unless they also require the row lock.

If I do use the row level locking feature, is there an easy way for other applications to know what locks are being held, by whom and for how long?
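On the visibility question: SQL Anywhere does ship a system procedure, sa_locks, that reports the locks currently held, by connection, user, and table (though as far as I know it does not report how long a lock has been held):

```sql
-- List all locks currently held in the database: connection id,
-- user, table, and the lock class/type/duration for each.
CALL sa_locks();

-- Narrow the report to the locks held by one connection
-- (connection id 5 here is illustrative):
CALL sa_locks( 5 );
```

Tracking "for how long" would still require your own timestamp, e.g. a locked_at column in an application lock table.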

t1950
Participant
0 Kudos

I've had to deal with the same scenario in a PB app. I don't have the problem of 20 tables (4 tables with up to 1200 rows per table). My solution was to do one commit at the end of all of the updates.

glenn_barber
Participant
0 Kudos

We also implemented a unit-of-work update with a commit at the end - however the user can decide to Save (and therefore commit) without leaving maintenance in that application. At one point we thought we would require the user to reinitiate the edit (and reacquire a lock), but that proved to be an unpopular behavior.

Also, being an MDI application, the user could open another window and issue a commit on the same connection without leaving the first window.

There are also issues in complex applications where there must be controlled behavior in related apps. For example you might want to prevent a Contract from being edited at the same time another user is generating invoices for that contract.

No matter how we slice it, there always seem to be unworkable limitations when depending on Record Locking to control access in Unit of Work situations - especially because of the inability to limit the impact of a Commit.

We are revisiting the old physical-lock scheme for these complicated unit-of-work cases - where the existence of a row in a specialized table (identified by user, application, table, key, connection id) represents the hard lock on the unit of work. This can be messy because of hotspots in the locking table; then there is the cleanup that needs to be done when connections drop or on DB initialization, and finally some mechanism is needed for managing contention.

While we are working this out we were interested in what other approaches others had taken to deal with locking schemes in complex unit of work situations.

Former Member
0 Kudos

Right. This is a fairly common requirement of complex transactions, and I'm certainly familiar with them from my background in both individual and group insurance (seems like a lifetime ago now).

One technique you can use to ensure that other applications "play ball" with your UOW locking scheme is to create INSERT-UPDATE-DELETE statement triggers that verify the existence of a row in the "lock" table. You'll have to code that check carefully to ensure the trigger doesn't block on any other (valid) connection. You can ensure only the appropriate UOW logic manages the lock table by using a stored procedure that executes as a different user - the only user with permission to modify the table.
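The procedure-ownership part might look roughly like this (lock_owner, the table, and the parameter names are all hypothetical). In SQL Anywhere, procedures execute with their owner's rights, so callers need only EXECUTE permission:

```sql
-- All lock-table modifications funnel through a procedure owned
-- by a dedicated user; only lock_owner can touch uow_lock directly.
CREATE PROCEDURE lock_owner.acquire_uow_lock(
    IN tname  VARCHAR(128),
    IN pkval  VARCHAR(64) )
BEGIN
    INSERT INTO lock_owner.uow_lock( table_name, pk_value, conn_id, user_name )
    VALUES ( tname, pkval,
             CONNECTION_PROPERTY( 'Number' ), CURRENT USER );
    COMMIT;
END;

GRANT EXECUTE ON lock_owner.acquire_uow_lock TO PUBLIC;
```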

You still have to conjure a way to get yourself out of difficulty if any update transaction "hangs" or waits, because you'll need to commit the update to the lock table independently of other UOW sub-transactions. You could implement that using an event that looks for any updating business transaction taking a significant amount of time, and if that time is exceeded then kill the connections for that user and remove the lock.
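A scheduled event along those lines might be sketched as follows, assuming the hypothetical uow_lock table with its locked_at column and a 30-minute cutoff (both arbitrary):

```sql
-- Watchdog: every minute, drop connections whose lock is more
-- than 30 minutes old, then remove the orphaned lock rows.
CREATE EVENT uow_lock_watchdog
SCHEDULE watchdog_sched
    START TIME '00:00' EVERY 1 MINUTES
HANDLER
BEGIN
    FOR stale_loop AS stale_cur CURSOR FOR
        SELECT conn_id AS stale_conn FROM uow_lock
        WHERE locked_at < DATEADD( MINUTE, -30, CURRENT TIMESTAMP )
    DO
        EXECUTE IMMEDIATE 'DROP CONNECTION ' || stale_conn;
    END FOR;
    DELETE FROM uow_lock
    WHERE locked_at < DATEADD( MINUTE, -30, CURRENT TIMESTAMP );
    COMMIT;
END;
```

In practice you would likely want the cutoff to be configurable, and perhaps to log the killed connections before dropping them.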

One thing we did at Great-West Life was use "staging" or "scratchpad" databases that roughly mirrored production but permitted lengthy UOW transactions. Only when a tx was complete did the user then "submit" the complete business transaction, and a background process copied the scratchpad to production. Perhaps this is what you're already doing.

glenn_barber
Participant
0 Kudos

Thanks Glenn

Thank you for your ideas on the IUD triggers on the lock table - I would certainly appreciate more detail on how that mechanism worked.

The scratchpad approach harkens back to the old mainframe transaction posting concept where the transactions were fed in from an external interface (sometimes via tape) and posted in batch. This approach still has value when considering the many complications in interactive applications that it avoids.

The Batch concept fit the idea of posted financial transactions which were never subsequently edited, but with complex sets that need to undergo regular revision, one needs to check the transaction set out of the system back to the scratchpad for editing. This allows for extensive workflow review of the changes before they arrive back in the system to update the "posted" version. Unfortunately, while the transaction set is being edited in the scratchpad area, one still has to protect both the Posted version and the Scratchpad version to prevent others from also editing.

One benefit of this approach is that it allows you to keep a log of the transactions which update the posted version - providing a historical record of change which is particularly helpful with SOX compliance. Another is that it allows for unimpeded reporting based on the "Posted" version while edits in the scratchpad are ongoing.

This scheme may also be useful when refactoring client-server apps as Web Apps, as the resultant unit-of-work transaction can be pushed to the remote server as an XML package.

Former Member
0 Kudos

You wrote:

Using a row-level lock on the top record in each set might prevent others from maintaining the set of records, if we require that they acquire a lock before any maintenance activity. However, this doesn't in any way protect the entire transaction set, nor prevent insertions into the set from other applications unless they also require the row lock.

Exactly; if you want to implement your own application-level, unit-of-work locking scheme that spans COMMITs, then all of your applications have to "play ball" and work in concert.

If you want to rely on the server's concurrency control model, then the limitation that you have to deal with is that the server releases all long-term locks on (either) COMMIT or ROLLBACK; so each system transaction is atomic, but the server doesn't help you with business transactions that span across system transactions.

If the intent is to ensure that your application sees a "consistent" view of the data before it starts updating anything, then snapshot isolation might be worth investigating.
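For the reporting side of the original question, enabling snapshot isolation is a two-step configuration (the table name below is illustrative):

```sql
-- Enable snapshot isolation database-wide (it is off by default),
-- then request it for the reporting connection.
SET OPTION PUBLIC.allow_snapshot_isolation = 'On';
SET TEMPORARY OPTION isolation_level = 'snapshot';

-- Reads in this transaction now see a committed snapshot of the
-- data and do not block behind concurrent writers.
SELECT * FROM order_header WHERE order_id = 10042;
```

That would let inquiry and reporting users see a consistent view of the whole hierarchy without participating in the application lock scheme at all.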