Can you temporarily disable the transaction log when deleting rows?

Former Member

On a daily basis, we have to delete large volumes of data. Our delete statements can take a long time to run. We've had some success using the MERGE statement, but there is one table in particular where that won't work. I think the delete statements are slow because the server constantly writes to the transaction log while they run. Is there a way to speed this up by not writing to the log (i.e., disabling it briefly and then turning it back on)? We can't use TRUNCATE TABLE because the delete only removes a subset of the total rows in the given table. Thanks, Tom

VolkerBarth
Contributor

Make sure you commit quite frequently during the delete - otherwise the rollback log will become huge, as it has to store all deleted rows until the transaction is committed or rolled back. So I would try to split the delete into smaller chunks, e.g. by using "DELETE TOP n ... ORDER BY <yourpreferredorder>; COMMIT;" in a loop until all wanted rows are deleted.
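In Watcom SQL, such a loop might look like the following sketch (the table MyTable, the filter column client_id, the order column pk_col, and the batch size of 5000 are all placeholders - adjust them to your schema):

    BEGIN
        DECLARE rows_deleted INT;
        del_loop: LOOP
            -- delete one chunk; the ORDER BY makes TOP deterministic
            DELETE TOP 5000 FROM MyTable
                WHERE client_id = 'X'
                ORDER BY pk_col;
            SET rows_deleted = @@ROWCOUNT;
            -- commit each chunk so the rollback log stays small
            COMMIT;
            IF rows_deleted = 0 THEN
                LEAVE del_loop;
            END IF;
        END LOOP del_loop;
    END;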

Former Member

Volker - I tried this today using a simple test procedure. I didn't notice any improvement in performance -- the looped version took just as long to run in ISQL as the single delete. But you think this will cut back on log growth? That in itself would be good -- our log grows by over 1 GB every day until it is restarted during our backup process. Tom

VolkerBarth
Contributor

@Tom: The split approach won't reduce the size of the translog, since the contents of each deleted row still have to be stored there. That's necessary for recovery purposes. It should reduce the size of the rollback log (which itself is part of the translog, AFAIK), which contains all uncommitted changes and is dropped on rollback/commit. But the reduced size of the rollback log is only a temporary effect and as such should not impact the overall translog file size.

VolkerBarth
Contributor

@Tom: Whether the split approach will speed up your deletes significantly depends on the actual query. Are there lots of locks held during the delete (or might the delete have to wait for other connections to commit)? - It might be helpful if you could present your query (with plan) here so that experts like Glenn can give advice...

Breck_Carter
Participant

Did you find any answer? If so, please post it here. Thanks!

Former Member

Here is a standard delete for one of our client uploads:

DELETE FROM gt_payments WHERE fd_entity = '0048'; COMMIT;

The gt_payments table has just shy of 1 million rows; about 50,000 of them are for client 0048. I just ran this statement in ISQL and it took 12 minutes to complete. The table has a primary key of 3 columns and has 3 indexes -- none of which I created. Per Ron's point below, how can I determine if one or more of these indexes is unnecessary?

I created a plan. What's the best way to get it uploaded here? Thanks, Tom
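For reference, I generated the plan text in ISQL with something along these lines (an approximation - this assumes the PLAN() function accepts DML statements in our version; GRAPHICAL_PLAN() would return the XML form instead):

    SELECT PLAN('DELETE FROM gt_payments WHERE fd_entity = ''0048''');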

VolkerBarth
Contributor

Just to correct that older statement: AFAIK now, the rollback log is part of the system dbspace, not of the translog - to quote from the 12.0.1 docs:

Whenever modifications such as inserts, updates, or deletes are made to the database, entries are added to the rollback log, which is stored within the system dbspace. If many of these operations are performed before a commit is executed, the rollback log can become very large and may increase the size of the database.

Accepted Solutions (0)

Answers (3)

Former Member

The transaction log is critical for recovery -- so allow me to restate your question as "How can I improve the performance of my DELETEs?"

My first course of action would be what Volker stated in his comment, which is to loop through a series of smaller deletes, committing each time. I don't know exactly why this works, but it does for me -- I assume it's because each chunk holds fewer locks, again as Volker suggests.

Another thing to consider is that when you delete records, every related index has to be updated and all dependencies have to be checked. So make sure you don't have any unnecessary indexes defined, and that you don't have any unnecessary foreign key relationships defined either; the catalog queries below can help you take inventory of both.
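A sketch using the compatibility views (SYSINDEXES has one row per index, SYSFOREIGNKEYS one row per foreign key relationship; exact view and column names can vary by version):

    -- indexes defined on the table
    SELECT iname, indextype
        FROM SYS.SYSINDEXES
        WHERE tname = 'gt_payments';

    -- foreign key relationships in which the table participates
    SELECT foreign_tname, primary_tname, columns
        FROM SYS.SYSFOREIGNKEYS
        WHERE primary_tname = 'gt_payments'
           OR foreign_tname = 'gt_payments';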

Former Member
0 Kudos

You also want to make sure you're not running any delete triggers. (Probably obvious, but maybe not for everyone.)
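A quick way to check - a sketch against the SYSTRIGGERS compatibility view (column names may differ in your version):

    SELECT trigname, event, trigtime
        FROM SYS.SYSTRIGGERS
        WHERE tname = 'gt_payments';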

VolkerBarth
Contributor

Just as a further pointer (without knowing if the actual problem is still unsolved...):

Nick Elson [Sybase iAnywhere] has listed a lot of factors that may make a difference between the performance of a SELECT statement and a corresponding DELETE statement based on the same SELECT.

So it's basically focused on the question "Why is a delete much slower than the corresponding select?".

Don't know if this is the actual question here - but I'd like to quote the answer anyway. As it is in some kind of home-brewed table format, I hope I have quoted it correctly. :)

Deletes need to

  • modify table and index pages
  • update all indexes on the table
  • allocate rollback log pages and fill them with insert operations (which can grow the file, too)
  • possibly use much more cache around check constraints & publications
  • log every operation
  • possibly update statistics
  • set off trigger actions, RI actions, and cascade actions
  • take exclusive locks, so they block more readily and are exposed to more contention
  • cause disk reads and writes [and writes are going to be more expensive than reads]
  • visit all pages associated with all columns, especially extras: CLOBs, BLOBs, long types, ...
  • contend with replication, which could even be playing a role here

Selects only need to

  • access only the pages required
  • visit only the parts of indexes used by the plan
  • write to the rollback log only if done via an updatable cursor
  • update statistics only occasionally
  • run no triggers or checks
  • typically take only shared locks
  • at low isolation levels there may be little or no contention
  • if the cache is warmed, a select may have no I/O to do except when returning results to the client
  • selects may skip some of the extra stuff like BLOBs
  • replication is not a big factor

Not to mention the potential for system resource limitations leading to swapping to the paging file, I/O contention with other processes, etc.

Much depends upon the state of the server, the specific schema, the data distribution, the state of the table and indexes in the database, and other activity on other connections and in applications.

Source: news://forums.sybase.com/sybase.public.sqlanywhere.general, thread "Slow deletes vs selects" starting 2010-03-31.

thomas_duemesnil
Participant

You could restart the engine in bulk operations mode (the -b server switch). But for a production environment this will not be a good idea.
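For example (a sketch - the engine executable name depends on your version, and IIRC bulk operations mode restricts the server to a single connection, so it is only suitable for an offline maintenance window):

    dbeng12 -b c:\data\mydb.db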

Regards, Thomas