on 2013 Feb 10 10:48 PM
Using SQL Anywhere 12, we run a full dbbackup daily, with the -x option to truncate the active transaction log. The backup runs on the same server, is compressed, and is FTP'd to a secondary server.
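For concreteness, the daily job might look like the following dry-run sketch. The connection string and backup directory are placeholders, and the script only prints the command it would run rather than executing it:

```shell
#!/bin/sh
# Dry-run sketch of the daily full backup. The server name, credentials,
# and backup directory below are placeholders, not from the original post.
BACKUP_DIR="/backup/daily"
CONN="UID=dba;PWD=sql;ENG=myserver"

# -x backs up the active transaction log, then truncates and restarts it;
# -y replaces existing backup files without prompting.
CMD="dbbackup -c \"$CONN\" -x -y \"$BACKUP_DIR\""

# Print the command instead of running it.
echo "$CMD"
```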
We want to add a continuous live backup to the secondary server, to cover the period between full backups.
When the full backup runs, the live backup stops [EDIT: the live backup continues, and the live log is truncated and restarted] (because the active log is truncated).
Is there a way to automatically restart the live log backup? [EDIT: It restarts automatically. How can we avoid losing the content of the live log when it is truncated? Surely we want to keep it until the new full backup reaches the secondary server?]
Thanks!
The live backup is working exactly as designed here. A live backup's job is to keep a live copy of the active transaction log. When you perform your full backup (and, I assume, rename and restart the active transaction log), that will cause the transaction log being maintained by the live backup to also be truncated. There is no option to delay the truncation of the transaction log being maintained by the live backup while an external process does "something", which in this case is compressing and FTPing the backup.
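For reference, starting a live backup on the secondary machine might look like the following dry-run sketch. The host name, credentials, and live log path are assumptions, and the script only prints the command:

```shell
#!/bin/sh
# Dry-run sketch of starting a live backup from the secondary machine.
# Host, credentials, and the output path are placeholder assumptions.
CONN="UID=dba;PWD=sql;LINKS=tcpip(HOST=primary)"
LIVE_LOG="/backup/live/mydb.log"

# -l keeps a continuously updated copy of the active transaction log;
# the dbbackup process runs for as long as the live backup is active.
CMD="dbbackup -c \"$CONN\" -l \"$LIVE_LOG\""
echo "$CMD"
```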
Options:
1. Run the full backup from the secondary machine instead of the primary.
2. Write the full backup to a different disk on the same machine.
3. Stop the live backup before the full backup, and restart it after the backup has been compressed and transferred.
Reg
Reg, thank you for your answer. I realize the live backup transaction log must be truncated if the active transaction log is truncated.
We restart but do not rename the active transaction log.
Here's the output of the live backup log on the secondary machine while a full backup on the active server takes place:
...
(529 of estimated 529 pages, 100% complete)
(529 of estimated 529 pages, 100% complete)
Transaction log truncated by backup -- restarting ...
(0 of estimated 1 pages, 0% complete)
(1 of estimated 1 pages, 100% complete)
Live backup of transaction log waiting for next page...
Regarding the options:
Full backup from secondary: This looks like the best option for keeping the secondary up to date. I expect it would be slower and use more bandwidth, since the compressed backup we currently FTP is about half the size of the uncompressed data that would travel over the connection, but that might not be a problem.
Different disk: If the disk holding the operating system failed after a full backup but during compression or FTP, we would lose everything since the previous full backup on the secondary, because the active server would be unusable, the secondary log would have been truncated, and the latest full backup would not have reached the secondary. Is that fair to say?
Stop live backup before full backup: This looks good. In the event of failure we would stand to lose only the changes that occurred during the backup + compress + FTP window.
Would it be possible for the live backup to do option 3 automatically? I.e., just before the live backup log is truncated, instead of discarding the data, save it to a timestamped file. If the active server died, the timestamped log plus the live log could be applied to the secondary backup. Wouldn't this be like incremental backups?
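The manual version of that idea can be sketched with plain file operations: just before the full backup runs, copy the live-backup log aside under a timestamped name, and keep the copy until the next full backup reaches the secondary. Every path and file name here is illustrative:

```shell
#!/bin/sh
# Sketch of archiving the live-backup log before it gets truncated.
# All paths are demo placeholders; a fake live log is created so the
# sketch is runnable end to end.
LIVE_LOG="/tmp/livelog_demo/mydb.log"
ARCHIVE_DIR="/tmp/livelog_demo/archive"

mkdir -p "$(dirname "$LIVE_LOG")" "$ARCHIVE_DIR"
printf 'transaction log pages...\n' > "$LIVE_LOG"

# Timestamped copy; keep it until the new full backup has arrived.
STAMP=$(date +%Y%m%d%H%M%S)
ARCHIVED="$ARCHIVE_DIR/mydb_$STAMP.log"
cp "$LIVE_LOG" "$ARCHIVED"
echo "archived $LIVE_LOG as $ARCHIVED"
```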
Sorry for my limited understanding, and thanks for your advice. It's helpful because we were seeing 'device has a bad block' errors on a disk a few days ago, so we moved to a different server and are reconsidering our backup strategy.
@Mark, have you also thought about installing a High Availability solution, as is available since SQL Anywhere 10?
If the drive where the OS resides fails during the compress/FTP process, you don't need to recover from the secondary machine; you would recover from the full backup that was just taken and written to the other hard drive on the same machine. Your data loss would only be the transactions that occurred on the live database during the compress/FTP process.
There is currently no way for the live backup to do this automatically. I think it's a good suggestion, though: if the "-r -n" switches were added to dbbackup -l, it would keep an archive of old log files on the other machine as well.
No need to apologize. It's much better answering questions about backup and recovery as opposed to answering questions on why a customer can't recover their database after a disk failure.
@Dmitri: The basis of our SA On Demand product relies on the SQL Anywhere HA capabilities. A significant amount of work was put into v1201 (and v16) for HA. If you liked the concept of HA but were running into a few issues, it might be worth your time to revisit the feature on the latest v1201 EBF or in the forthcoming v16 release.
It's much better answering questions about backup and recovery as opposed...
That gave me a smile :) And I guess lots of folks share the "we were seeing <whatever> errors on a disk ... and are reconsidering our backup strategy" way of gaining insight...
@Volker, thanks, we probably do need to consider it.
Live backup looks great because it offers an almost up-to-the-second replica in a different data centre that could be recovered within minutes, and it's reasonably simple to set up.
I think it's a good suggestion, though: if the "-r -n" switches were added to dbbackup -l, it would keep an archive of old log files on the other machine as well.
What about something like the "delete_old_logs" option, as used by SQL Remote and MobiLink clients? I.e. an option to tell the live backup to preserve old logs for n days or the like?
FWIW multiple separate live backups can be created; e.g., local and offsite, in case of zombie apocalypse and the like.
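A dry-run sketch of what two independent live backups might look like. The connection string and destination paths are placeholders; each dbbackup -l instance runs until it is stopped, so both would normally be started under a job scheduler or service manager:

```shell
#!/bin/sh
# Dry-run sketch of two independent live backups (local and offsite).
# Host, credentials, and paths are placeholder assumptions.
CONN="UID=dba;PWD=sql;LINKS=tcpip(HOST=primary)"

LOCAL_CMD="dbbackup -c \"$CONN\" -l /backup/live/mydb.log"
OFFSITE_CMD="dbbackup -c \"$CONN\" -l /mnt/offsite/mydb.log"

# Print the commands instead of launching the long-running processes.
echo "$LOCAL_CMD"
echo "$OFFSITE_CMD"
```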
Restoring from log offset: So we could periodically stop the live log backup on the secondary, apply it to the full backup, and restart the live log backup?
EDIT: I have turned that previous comment on the question into an answer, and I hope experts like Reg will confirm the following:
In my understanding, you should be able to do the following at the secondary site:
So, that way you could "freshen" your secondary site during the day.
Note, however, that for a second "refresh" you will have to start with the full backup again; i.e., it should not be possible to "freshen" the database from step 7 with a more current live backup log, as that log fits the active database, not the restored and already "freshened" one.
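In dry-run form, the general shape of such a refresh might be the following. The file names and the dbeng12 engine name are assumptions, and the script only prints each step rather than executing it (dbeng12's -a option applies a named transaction log to the database during recovery):

```shell
#!/bin/sh
# Dry-run sketch of "freshening" a restored copy of the full backup
# with the live-backup log. All file names are placeholder assumptions.
STEP1="cp /backup/daily/mydb.db /restore/mydb.db"
STEP2="cp /backup/live/mydb.log /restore/mydb.log"
# -a applies the given transaction log to the database for recovery.
STEP3="dbeng12 /restore/mydb.db -a /restore/mydb.log"

# Print the steps instead of running them.
printf '%s\n' "$STEP1" "$STEP2" "$STEP3"
```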
So, in case the primary site goes down, you now have three options:
Option 1 is the normal usage of a live backup log, so I can't tell whether options 2 or 3 would be useful for you...
No, I have never used that technique myself.