on 2010 Dec 09 1:24 PM
We have a setup with two DB instances, connected via SQL Remote and FTP.
In one table we had about 6 million rows, and we then deleted ~5 million of them on the remote site in blocks of ~500,000 rows (this took ~20 minutes).
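For illustration, a minimal sketch of that chunked-delete pattern (Python with SQLite standing in for SQL Anywhere; the table name, block size, and id-based cutoff are all hypothetical, scaled down from the real numbers):

```python
import sqlite3

# Toy version of deleting rows in fixed-size blocks, as described above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, payload TEXT)")
cur.executemany("INSERT INTO readings (payload) VALUES (?)",
                [("row",)] * 10_000)

BLOCK = 2_500  # stand-in for the ~500,000-row blocks in the question
deleted = 0
while True:
    # Delete one block of matching rows; repeat until nothing is left.
    cur.execute("DELETE FROM readings WHERE id IN "
                "(SELECT id FROM readings WHERE id <= 8000 LIMIT ?)",
                (BLOCK,))
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount
    conn.commit()  # commit per block keeps each transaction small

remaining = cur.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(deleted, remaining)  # 8000 2000
```

Committing per block keeps each transaction small, which is presumably also why the deletes ended up spread across so many SQL Remote messages: each committed block becomes replicated work of its own.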
Now we have ~2,700 messages flowing back from the remote to the consolidated database.
Our problem is that it now takes about 50 minutes per FTP message to apply these deletes from the remote side, so applying all messages will take around 96 days... And of course all other "real" updates/inserts etc. are just queued up behind them in more messages...
We don't see much I/O traffic from the database, but the database uses 100% of one CPU core (out of 8 available).
What could we do to speed up applying these messages? We have already added the useful indexes, but that did not improve performance.
Is there a chance to make this process faster?
We simply can't afford to wait 96 days for the queue to clear up...
Or any other ideas how to speed up dbremote (using more than one CPU/core, for example)?
André
Lesson learnt from this question:
The strong recommendation that all replicated tables should have a primary key is not only about avoiding inconsistent data (see doc note below) but also about avoiding miserable performance...
I wasn't aware of this problem - though, fortunately, I have never used SQL Remote without fitting PKs.
For a table without a primary key or unique constraint, replicated updates reference all columns in the WHERE clause.
When two remote databases make separate updates to the same row and replicate the changes to the consolidated database, the first changes to arrive on the consolidated database are applied; changes from the second database are not applied.
As a result, databases become inconsistent. All replicated tables should have a primary key or a unique constraint and the columns in the constraint should never be updated.
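The performance side of that doc note can be sketched with a toy example (SQLite here, hypothetical table names): without a primary key, each replicated operation has to match on every column, and with no suitable index that means one full table scan per replicated row — consistent with the 100%-CPU, low-I/O symptom described above.

```python
import sqlite3

# Illustrative only: compare the query plan for a primary-key match
# (what SQL Remote can generate when a PK exists) against an
# all-columns match (the no-primary-key case from the doc note).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# With a PK, a replicated delete can be:  DELETE FROM t WHERE id = ?
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, b TEXT)")
pk_plan = cur.execute(
    "EXPLAIN QUERY PLAN DELETE FROM t WHERE id = ?", (1,)).fetchall()

# Without a PK it must match every column, and with no usable index
# on that combination the engine scans the whole table each time.
cur.execute("CREATE TABLE t2 (id INTEGER, a TEXT, b TEXT)")
scan_plan = cur.execute(
    "EXPLAIN QUERY PLAN DELETE FROM t2 WHERE id = ? AND a = ? AND b = ?",
    (1, "x", "y")).fetchall()

print(pk_plan)    # a SEARCH using the integer primary key
print(scan_plan)  # a SCAN of the whole table, once per replicated row
```

Multiply one scan per row by millions of deleted rows and the ~50 minutes per message becomes plausible.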
I have now done the following:
So the databases are now in sync and replicating again. What is still left to do is to deduplicate the unloaded data and then reload it into the table.
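The remaining deduplicate-and-reload step could look roughly like this (a sketch only — the staging/target table names and the DISTINCT-based reload are my assumptions, not André's actual commands; SQLite again stands in for SQL Anywhere):

```python
import sqlite3

# Hypothetical: "staging" holds the unloaded rows, duplicates included;
# we reload exactly one copy of each row into the real table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE staging (a TEXT, b INTEGER)")
cur.executemany("INSERT INTO staging VALUES (?, ?)",
                [("x", 1), ("x", 1), ("y", 2), ("y", 2), ("y", 2), ("z", 3)])

# The target now gets a primary key, per the lesson learnt above.
cur.execute("CREATE TABLE target (a TEXT, b INTEGER, PRIMARY KEY (a, b))")
cur.execute("INSERT INTO target SELECT DISTINCT a, b FROM staging")
conn.commit()

n = cur.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(n)  # 3 distinct rows reloaded from 6 staged rows
```

Giving the reloaded table a primary key up front also avoids hitting the same replication problem again.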
Thanks Volker for your help.
André