on 2011 Sep 05 6:48 AM
Hi
Is something like the following viable, in a replicated scenario:
Take a backup of the consolidated DB, then use this backup to extract client sites (instead of using the main running DB instance)?
Thanks, Ivan
I guess you are referring to the technique described here:
"Creating multiple remote databases".
It does use a copy of the consolidated database to extract the data from there, and it explains how to take care that the log offsets fit.
The point is that the subscriptions are created and started against the original database (as usual), so the log offsets are fine. And when extracting several remotes, it's the multiple REMOTE RESET statements with one final COMMIT that take care that the offsets can be set in one single transaction (REMOTE RESET itself does no auto-commit, in contrast to START SUBSCRIPTION).
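For illustration, a minimal sketch of that pattern (the remote user names rem1, rem2 and rem3 are made up for this example):

    -- Run against the original consolidated database.
    -- Each REMOTE RESET restarts all subscriptions for one remote user
    -- at the current log offset, but does not commit by itself.
    REMOTE RESET rem1;
    REMOTE RESET rem2;
    REMOTE RESET rem3;
    -- One final COMMIT, so all offsets are set in one single transaction.
    COMMIT;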
The "make a backup immediately after starting the subscriptions" rule makes sure that the extracted data itself fits to the state of the data as to the according log offsets. - So you're somewhat saying "My extract does guarantee that the data fits the log offsets...".
FWIW, we have used this technique for several years - until we switched to a custom extraction process which was wayyyy faster than DBXTRACT and allowed us to extract from the production database directly.
Ok, just one more bit of clarity I still need here. Scenario:
- start the consolidated DB copy (which was made immediately after starting the subscriptions)
- add or change a record
- extract a new DB
Now this extract contains the changed or added record. What happens?
I am not suggesting I would do this (it is easy not to), but I'm trying to understand the mechanics better. Thanks
The newly extracted remote db will contain the changed/added record whereas the production db will not. So data will be inconsistent between the consolidated and the remote.
In contrast, adding data on the consolidated while the extract is being made from the copy will generate the corresponding SQL Remote messages for the "remote under construction". Once the remote is created and starts replicating, it will apply the already existing messages and will thereby receive the changes from the consolidated.
To make it short: the copy should only be used to extract data, not to change it in any way. Once the extraction is done, it should be deleted.
Ok, so the key thing for me then is that the log file of the copy DB is essentially not a problem: changes logged to it may move its local offset, but that offset is only important in the live DB.
Naturally no changes should be made to any "user data" in the copy DB, because of course that data would then no longer be consistent with the live DB.
Yes, the transaction log of the copy database is not used at all - SQL Remote has to run against the "live" database and scan its log. So in the end you might even be able to use the copy db without its log to do the extraction. (Note that I haven't tested this myself.)
All that being said, I would still recommend checking whether a custom extraction process (against the live database) may help to speed up the extraction compared to DBXTRACT. If that is feasible, I would prefer such an approach because the particular steps to make and use the database backup are, well, somewhat complex and error-prone...
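Just to sketch the idea of such a custom extraction (table, column and value are made up; in a real setup you would use your own publication predicates, and the subscription handling shown above still applies):

    -- Run against the live consolidated database: unload only the rows
    -- that belong to the particular remote being extracted.
    UNLOAD
      SELECT * FROM Customers WHERE Territory = 'North'
      TO 'customers_north.dat';
    -- The file can then be loaded into the freshly created remote db:
    -- LOAD TABLE Customers FROM 'customers_north.dat';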