on 2015 Oct 01 3:58 AM
Dear Experts,
Recently we came across error 1204 with severity 17, state 2, and the description "ASE has run out of locks. Re-run your command when there are fewer active users, or contact SA to reconfigure ASE with more locks."
Can anyone explain why we received this error even though we have a large number of locks available and the percentage of active locks is less than 1%?
Output of sp_monitorconfig 'locks' pertaining to our environment is below:
1> sp_monitorconfig 'locks'
2> go
Usage information at date and time: Sep 30 2015 7:32AM.
Name Num_free Num_active Pct_act Max_Used Reuse_cnt Instance_Name
------------------------- ----------- ----------- ------- ----------- ----------- ------------------------------
number of locks 14991870 8130 0.05 15000000 0 NULL
(1 row affected)
(return status = 0)
Regards,
Dilip Voora
The number of locks used is very volatile; a single process can accumulate and release a large number of locks in a short period of time. Once the 1204 is hit, the process rolls back and the locks are released, which can happen quite quickly, particularly if the locks are shared locks. What matters is not the conditions now, but what the conditions were just before the 1204 was raised.
You can configure ASE to take a configurable shared memory dump (CSMD) on the 1204 error. Then the next time this happens, the resulting dump can be examined to see what used all the locks.
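A hedged sketch of such a setup, assuming the standard sp_shmdumpconfig syntax (the dump directory is a placeholder; check the Troubleshooting guide for your ASE version before enabling it):

-- Take at most one configurable shared memory dump when error 1204 is raised
-- ("/sybase/dumps" is a placeholder path; point it at a filesystem with room
-- for a full shared-memory image)
sp_shmdumpconfig "add", "error", 1204, 1, "/sybase/dumps"
go
-- Running sp_shmdumpconfig with no parameters shows the conditions configured
sp_shmdumpconfig
go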
Hi Dilip
At my workplace we poll the MDA table monProcessActivity to zero in on SPIDs holding a high number of locks, using the LocksHeld column. In fact we do not allow more than 1M locks for a SPID. Any SPID holding more than 1M locks is terminated during the poll and DBAs are notified with the SPID + KPID + login details.
We have third-party tools collecting SQL, so we use those to get the actual SQL/plan if need be.
But the same can also be extracted from monProcessSQLText or good old dbcc sqltext(spid) before the SPID is sent to a better place!!
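As a rough illustration of that kind of poll (not our exact job; the 1,000,000 threshold and the SPID value are placeholders), assuming the standard monProcessActivity/monProcess columns:

-- Flag SPIDs holding more than 1,000,000 locks
select pa.SPID, pa.KPID, p.Login, pa.LocksHeld
from master..monProcessActivity pa, master..monProcess p
where p.SPID = pa.SPID
and p.KPID = pa.KPID
and pa.LocksHeld > 1000000
go
-- Capture the offender's current SQL text before killing it
dbcc traceon(3604)
go
dbcc sqltext(123)  -- replace 123 with the suspect SPID
go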
What I have observed is that the number of locks climbs extremely rapidly when users run "select into" to create temporary tables while extracting data from a very active table that uses the datarows locking scheme and has a huge number of rows. Most probably the server cannot escalate to a table-level lock because of the high activity, so the number of locks keeps increasing until it crosses the configured limit.
I would suggest not increasing the lock limit, but rather getting end users to agree on some reasonable number of locks and then using that as a hard limit to terminate a SPID. If you provide enough evidence, end users normally agree, because no one wants to see the worst-case scenario of recycling a server during a busy day.
In summary MDA tables are your friends. Use them positively to help you.
HTH
Avinash
Along with the other suggestions, you might also want to post the values of the following configuration options:
page lock promotion HWM
page lock promotion LWM
page lock promotion PCT
row lock promotion HWM
row lock promotion LWM
row lock promotion PCT
lock scheme
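A quick sketch, in case it helps: sp_configure with a partial option name will report all of these in one call.

-- Lists all page/row lock promotion HWM/LWM/PCT parameters
sp_configure "lock promotion"
go
-- Server-wide default lock scheme
sp_configure "lock scheme"
go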
If you identify a transaction running against a large table (e.g. >15,000,000 rows) then you could perhaps change the escalation settings so that a full table lock is taken out, or use the granularity of the locking system to switch that table to DPL (data-pages locking) instead of DRL (data-rows locking), if that is the issue.
Tables can be changed to different locking schemes, even if the server-wide lock scheme is set differently.
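A hedged sketch of both per-table approaches against a hypothetical big_table (the table name and threshold values are illustrative; verify the syntax on your ASE version):

-- Lower the row lock promotion thresholds for just this table so ASE
-- escalates to a table lock sooner (LWM, HWM, PCT values are illustrative)
sp_setrowlockpromote "table", big_table, 200, 200, 50
go
-- Or switch the table from datarows to datapages locking
alter table big_table lock datapages
go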
Chris
I disagree slightly here. If you have identified a transaction that you now know is going to lock a large number of rows, I would first consider using the LOCK TABLE command to ensure the transaction uses only a single table lock (as it will be locking most of the table anyway).
The lock escalation settings are of more use for unpredictable ad-hoc user queries. The main issue with lock escalation is that if there is even a single incompatible lock on another row in the table, ASE will not be able to escalate to a table lock, which makes the settings irrelevant.
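A minimal sketch of that approach, with big_table standing in for the table in question:

begin transaction
go
-- Take one exclusive table-level lock up front instead of millions of row locks;
-- give up after 30 seconds if the lock cannot be granted
lock table big_table in exclusive mode wait 30
go
-- ... run the large update/delete here ...
commit transaction
go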
-bret
sp_monitorconfig shows you've hit the maximum number of locks:
Name Num_free Num_active Pct_act Max_Used Reuse_cnt Instance_Name
------------------------- ---------------- ----------------- ---------- --------------- --------------- ------------------------------
number of locks 14991870 8130 0.05 15000000 0 NULL
Max_Used of 15,000,000 is the entire configured lock pool, so at some point every lock was in use. And 15,000,000 is a lot of locks.
Check that transactions aren't running too long and that you're not doing table scans on updates or incorrect joins.
Or maybe you've just got a large system with lots of concurrent activity.
sp_lock will show you the locks in use, but once the lock count gets large it's a very slow process.
You can also use the MDA tables to look at locks, but they're hideously slow to select from once you're over a million locks.
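For instance, a rough sketch of counting locks per SPID from monLocks (with the caveat above that this gets very slow once millions of locks are held; monitoring must be enabled for the MDA tables to be populated):

-- Locks held per SPID, busiest first
select SPID, count(*) as lock_count
from master..monLocks
group by SPID
order by 2 desc
go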