Posted on 2016 Jan 07, 10:14 AM
We are looking at using the dbtools API to kick off a "live" log backup.
The documentation is all pretty clear and we think we see how to use the DBBackup method and the associated a_backup_db structure (putting the required log file name into the hotlog_filename member). However, what we can't see is how to stop the backup later when that is needed. Other types of backup simply terminate when they are finished, but by its nature a live log backup continues indefinitely.
Obviously we could use a Windows API call to kill the process, but that seems rather crude and maybe runs the risk of a damaged log file?
Is there a clean way of terminating a live log backup process started by the DBBackup method?
Many thanks
v16.0.0.2018
What you need to do is the following:
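If I remember the v16 header correctly, a_backup_db has a backup_interrupted member, and setting it to a non-zero value while DBBackup is blocked makes the live backup shut down in an orderly way, rather than you killing the process. A rough, untested sketch follows; the connection string, paths, exact member set, and callback signature are all assumptions to check against your own dbtools.h:

```c
/* Untested sketch only: start a live log backup through the dbtools
 * API and stop it cleanly later.  Member names (hotlog_filename,
 * backup_interrupted) and the callback signature are from dbtools.h
 * as I recall it for v16 -- verify against your own header.          */
#include <stdio.h>
#include <string.h>
#include "dbtools.h"

static a_backup_db    backup;   /* shared with the stopping routine */
static a_dbtools_info info;

/* dbtools reports progress/errors through MSG_CALLBACKs */
static short my_msgrtn( const char *msg )
{
    if( msg != NULL ) {
        printf( "%s\n", msg );
    }
    return 0;
}

/* Run on a worker thread: DBBackup blocks while a live backup runs. */
void run_live_log_backup( void )
{
    memset( &info, 0, sizeof( info ) );
    info.errorrtn = my_msgrtn;
    DBToolsInit( &info );

    memset( &backup, 0, sizeof( backup ) );
    backup.version         = DB_TOOLS_VERSION_NUMBER;
    backup.connectparms    = "UID=DBA;PWD=sql;Server=live"; /* placeholder */
    backup.output_dir      = "c:\\standby";                 /* placeholder */
    backup.backup_logfile  = 1;
    backup.hotlog_filename = "c:\\standby\\live.log";  /* => live backup */
    backup.no_confirm      = 1;
    backup.errorrtn        = my_msgrtn;
    backup.msgrtn          = my_msgrtn;
    backup.statusrtn       = my_msgrtn;

    DBBackup( &backup );    /* returns only once stopped or failed */
    DBToolsFini( &info );
}

/* Call from the controlling thread to end the live backup cleanly. */
void stop_live_log_backup( void )
{
    /* Assumption: DBBackup polls this member and winds the live
     * backup down in an orderly fashion (no damaged log) when it
     * becomes non-zero.                                            */
    backup.backup_interrupted = 1;
}
```

If you'd rather not involve a second thread, the flag could presumably also be set from inside one of the callbacks, since those fire while the backup is running - again, worth testing on your build.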
HTH
It's rude to answer a question with a question, but why would you need to stop (or use) a live backup that is still running?
Generally speaking, a live backup is only "needed" after the database fails, and when the database fails, the live backup stops automatically because it has lost its connection.
That gap can be minimized by running more frequent (but remote) incremental backups in tandem . . . if that helps any.
But, then again, if you are not doing incrementals ... what exactly do you mean by 'truncate'?
If you are truncating the log as part of your full backup, then you may be playing this 'a little too close to the edge of the cliff'.
Thanks Nick - I mean using the -x option with dbbackup
The problem is that I have to truncate the log (on the live server) at some point because it will just get bigger and bigger. When that truncation happens my live log backup gets trashed, there being no option to keep the old one and start a new one, as discussed here: http://sqlanywhere-forum.sap.com/questions/15476/how-to-run-a-live-backup
Are you suggesting that I could do a full but non-truncating backup, leaving the live backup running, and then do a separate transaction log backup with truncation? That would certainly minimize the exposure time.
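In other words, something like this, with paths and the connection string invented for illustration:

```
rem 1) Full backup with NO log truncation - the live backup keeps running:
dbbackup -c "UID=DBA;PWD=sql;Server=live" -y c:\backup\full

rem 2) Only after step 1 has been copied off safely, a quick log-only
rem    backup that truncates:
dbbackup -c "UID=DBA;PWD=sql;Server=live" -t -x -y c:\backup\logs
```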
Sorry, I'll try to explain more clearly:
Two servers in different locations, Live & Standby.
On the Standby Server there is a full copy of the database made earlier and a "live backup" running. Periodically (say every 30 minutes) we take a copy of the "live backup" log and apply it to the Standby database. The live backup log gets steadily longer during the day, so the log application gets a bit slower as more and more of it is skipped.
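Each 30-minute cycle is essentially just the following (names are placeholders; with -a the engine applies the given log and then shuts itself down):

```
rem Snapshot the live-backup log, then apply it to the Standby copy:
copy c:\standby\live.log c:\standby\apply.log
dbeng16 c:\standby\standby.db -a c:\standby\apply.log -o c:\standby\apply_out.txt
```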
At some point in the 24 hours we start the Standby database in read-only mode and validate it.
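That validation step looks roughly like this (server name and credentials are placeholders, and the engine would be started in its own session):

```
rem Start the Standby copy read-only, validate it, then shut it down again:
dbeng16 -n standby_check -r c:\standby\standby.db
dbvalid -c "UID=DBA;PWD=sql;Server=standby_check"
dbstop -c "UID=DBA;PWD=sql;Server=standby_check" -y
```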
At some point (could be daily, weekly, etc.) we want to replace the standby copy with a new copy of the Live database. Not strictly necessary from my point of view, but customers and their auditors seem to insist. Either way we do have to truncate the log file of the live database (and therefore the "live backup" log) as they will grow too large to handle.
When the truncate log instruction is issued to the Live Server (either by doing a full back up or log backup only) the "live backup" log is immediately deleted by dbbackup and a new one started.
If we successfully retrieve the full backup (db & log file) from the Live Server then we have no problem - we apply the backed up log to the backed up database and then we can continue applying copies of the "live backup" log.
However, if we lose the Live Server while the full backup is being copied across (which could take hours with a 200 GB database), there are transactions that happened between our last copy of the "live" transaction log and the truncation that we don't have anywhere. Even though the new "live backup" log has the transactions between the truncation and the loss of the Live Server, we can't use them because of the missing ones in the gap.
But if I understand it properly, Nick's point is that I can minimise the gap by doing the log backup with truncation separately from the main backup. That certainly seems like a good idea.
Hope that makes more sense 🙂
Are you suggesting that I could . . . leaving the live backup running; then do a separate transaction log backup with truncation?
Yes indeed ... In fact, that has been part of my standing suggestion on how to implement a cold/semi-warm stand-by configuration for the better part of 20 years now.
I prefer renaming (-r) combined with matching (-n) over truncation (-x) myself. That gives you extra convenience with the backup identification, protects synchronization and replication scenarios and does give you an extra degree of redundancy via the offline logs (which you will want to age off the system through a different mechanism if delete_old_logs is not operating).
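For reference, the log-only step with renaming rather than truncation would look something like this (connection string and paths are placeholders):

```
rem -r renames the online log and starts a new one; -n gives the backup
rem copy the day-based yymmddxx.log name:
dbbackup -c "UID=DBA;PWD=sql;Server=live" -t -r -n c:\backup\logs
```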
Many thanks for this Nick.
I hadn't realised that -r would also restart the "live backup" log in the same way as -x does. So if I use that with -n and carry out the backup directly to my Standby Server, the "live backup" won't be truncated until I've successfully received the one from the Live Server, which is great. (And I'll have an extra copy on the Live Server.)
The only thing I'll have to work out then is how to manage the build-up of old logs on the Live Server. I'm not doing any replication and setting the delete_old_logs option doesn't seem to have any effect. The answer is probably a database procedure that I call from time to time to clear them out - that way I won't have to grant any folder rights on the Live Server to the user on the Standby Server.
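Something along these lines might do it (untested sketch; it assumes xp_cmdshell is available to the server's account, and the folder name and 14-day retention are made up - adjust to taste):

```sql
-- Daily event on the Live Server: delete renamed logs older than 14 days.
CREATE EVENT purge_old_logs
SCHEDULE purge_sched START TIME '03:00' EVERY 24 HOURS
HANDLER
BEGIN
    -- forfiles is standard on Windows; /d -14 selects files > 14 days old
    CALL xp_cmdshell(
        'forfiles /p c:\backup\logs /m *.log /d -14 /c "cmd /c del @path"',
        'no_output' );
END;
```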