How to run a live backup?


Using sqlanywhere 12, we run a full dbbackup daily, with -x option to truncate the active log. The backup is run on the same server, compressed, and ftp'd to a secondary server.
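
For illustration, the daily full backup command looks something like this (a sketch only; the server name, credentials and backup directory are placeholders, not our actual values):

    rem daily full backup: -x deletes and restarts the active transaction log, -y overwrites without prompting
    dbbackup -c "ENG=prod;DBN=mydb;UID=DBA;PWD=***" -x -y c:\backup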

We want to add a continuous live backup to the secondary server, to cover the period between full backups.
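
The live backup is started on the secondary along these lines (again a sketch; -l names the live log file being maintained, and the connection parameters are placeholders):

    rem run on the secondary: keeps a continuously updated copy of the primary's transaction log
    dbbackup -l d:\livebackup\mydb.log -c "ENG=prod;DBN=mydb;UID=DBA;PWD=***;LINKS=tcpip(HOST=primary)"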

When the full backup runs, the live backup stops [EDIT: the live backup continues, and the live log is truncated and restarted] (because the active log is truncated).

Is there a way to automatically restart the live log backup? [EDIT: It restarts automatically. How can we avoid losing the content of the live log when it is truncated? Surely we want to keep it until the new backup reaches the secondary server?]

Thanks!

Accepted Solutions (1)


regdomaratzki
Product and Topic Expert

The live backup is working exactly as designed here. A live backup's job is to keep a live copy of the active transaction log. When you perform your full backup (and, I assume, rename and restart the active transaction log), that will cause the transaction log being maintained by the live backup to also be truncated. There is no option to delay the truncation of the transaction log maintained by the live backup while an external process does "something", which in this case is compressing and FTPing the backup.

Options:

  1. Run your full backup from the secondary server. The backed up database and log will be on another machine when the backup completes, at the time that the active transaction log (and transaction log being maintained by the live backup) are renamed.
  2. Take the full backup on the same machine, but to a different hard drive on the machine. This will at least keep you protected from media failure on a single device for the time that the backup is compressed and ftp'd to the secondary server.
  3. Your backup process could be to shut down the live backup, do your full backup, compress and FTP the backup, and then restart the live backup. When you stop dbbackup -l, the transaction log will be left behind, so it should still be useful for recovery.
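
A rough sketch of option 3, assuming the live backup is stopped by terminating the dbbackup -l process on the secondary (all names and paths are placeholders):

    rem 1. on the secondary: stop the live backup, e.g. taskkill /im dbbackup.exe
    rem    (its transaction log file is left behind and stays usable for recovery)
    rem 2. on the primary:   dbbackup -c "ENG=prod;UID=DBA;PWD=***" -x -y c:\backup
    rem 3. on the primary:   compress c:\backup and ftp it to the secondary
    rem 4. on the secondary: restart the live backup:
    rem      dbbackup -l d:\livebackup\mydb.log -c "ENG=prod;UID=DBA;PWD=***;LINKS=tcpip(HOST=primary)"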

Reg


Reg, thank you for your answer. I realize the live backup transaction log must be truncated if the active transaction log is truncated.

We restart but do not rename the active transaction log.

Here's the output of the live backup log on the secondary machine while a full backup on the active server takes place:

...

(529 of estimated 529 pages, 100% complete)

(529 of estimated 529 pages, 100% complete)

Transaction log truncated by backup -- restarting ...

(0 of estimated 1 pages, 0% complete)

(1 of estimated 1 pages, 100% complete)

Live backup of transaction log waiting for next page...

Regarding the options:

  1. Full backup from secondary: This looks like the best option for keeping the secondary up to date (see the sketch after this list). I expect it would be slower and use more bandwidth, because the backup is about half the size when compressed, but that might not be a problem.

  2. Different disk: If the disk holding the operating system failed after a full backup but during compression or FTP, we would lose everything since the previous full backup on the secondary: the active server would be unusable, the secondary's live log was truncated, and the latest full backup had not reached the secondary. Is that fair to say?

  3. Stop live backup before full backup: This looks good. In the event of failure we would stand to lose only the changes that occurred during backup + compress + FTP.
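
A sketch of option 1, run from the secondary and assuming it can reach the primary over TCP/IP (host name, credentials and paths are placeholders):

    rem dbbackup connects to the primary over the network, so the backed-up
    rem database and log are written directly to the secondary's disk
    dbbackup -c "ENG=prod;DBN=mydb;UID=DBA;PWD=***;LINKS=tcpip(HOST=primary)" -x -y d:\fullbackup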

Would it be possible for the live backup to do option 3 automatically? I.e., just before the live backup log is truncated, instead of discarding the data, save it to a timestamped file. If the active server died, the timestamped log plus the live log could be applied to the secondary backup. Wouldn't this be like incremental backups?

Sorry for my limited understanding, and thanks for your advice. It's helpful because we were seeing 'device has a bad block' errors on a disk a few days ago, so we moved to a different server and are reconsidering our backup strategy.

VolkerBarth
Contributor

We tried HA for some time, and I can't say I'm impressed ;). Currently we do full and live backups on the secondary server, so if the primary dies, all I need to do is apply the live backup log to the latest full backup and start the server on the secondary.

regdomaratzki
Product and Topic Expert
  1. If the drive where the OS resides fails during the compress/FTP process, you don't need to recover from the secondary machine; you would recover from the full backup that was just taken and written to the other hard drive on the same machine. Your data loss would only be the transactions that occurred on the live database during the compress/FTP process.

  2. There is not a way for the live backup to do this automatically right now. I think it's a good suggestion, though, that if you added the "-r -n" switches to dbbackup -l, it would keep an archive of old log files on the other machine as well.
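
For context, a regular (non-live) log backup already supports those switches; a sketch of what they do today (the connection string is a placeholder):

    rem -t backs up the transaction log only, -r renames and restarts the active log,
    rem -n renames the backed-up copy to the yymmddxx.log convention, so old logs accumulate as an archive
    dbbackup -t -r -n -c "ENG=prod;UID=DBA;PWD=***" c:\logarchive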

No need to apologize. It's much better answering questions about backup and recovery as opposed to answering questions on why a customer can't recover their database after a disk failure.

regdomaratzki
Product and Topic Expert

@Dmitri: The basis of our SA On Demand product relies on the SQL Anywhere HA capabilities. A significant amount of work was put into v12.0.1 (and v16) for HA. If you liked the concept of HA but were running into a few issues, it might be worth your time to revisit the feature on the latest v12.0.1 EBF or in the forthcoming v16 release.

VolkerBarth
Contributor

It's much better answering questions about backup and recovery as opposed...

That gave me a smile :) And I guess lots of folks share the "we were seeing <whatever> errors on a disk ... and are reconsidering our backup strategy" way of gaining insight...

  1. I see, the backup is safe on the good drive. We need to find out how long it would take to access that drive again.

  2. Archived logs on secondary would be great!

Thanks Reg.


@Volker, thanks, we probably do need to consider it.

Live backup looks great because it offers an almost up-to-the-second replica in a different data centre that could be recovered within minutes, and it's reasonably simple to set up.

VolkerBarth
Contributor

I think it's a good suggestion, though, that if you added the "-r -n" switches to dbbackup -l, it would keep an archive of old log files on the other machine as well.

What about something like the "delete_old_logs" option, as used by SQL Remote and MobiLink clients? I.e. an option to tell the live backup to preserve old logs for n days or the like?
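
For reference, that option is set with a SQL statement on the database whose old logs should be cleaned up; something like the following (the '3 days' value is just an example, and I'm assuming the n-days form here):

    SET OPTION PUBLIC.delete_old_logs = '3 days';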

Breck_Carter
Participant

FWIW multiple separate live backups can be created; e.g., local and offsite, in case of zombie apocalypse and the like.

Answers (1)


VolkerBarth
Contributor

Restoring from a log offset: So we could periodically stop the live log backup on the secondary, apply it to the full backup, and restart the live log backup?

EDIT: I have turned that previous comment on the question into an answer, and I hope the experts like Reg will confirm the following:

In my understanding, you should be able to do the following at the secondary site:

  1. Have the full backup available (with "truncate log" mode).
  2. Run a live backup from the secondary against the active database on the primary site.
  3. Stop the live backup and make a copy of the log (say, as liveCopy.log).
  4. Restart the live backup.
  5. Start a secondary server with a copy of the full backup - it will recover automatically to the end time of the full backup - and shut down that server.
  6. Start the secondary server again to apply the liveCopy.log (-a option) - and shut down that server. (Steps 5 and 6 are sketched after this list.)
  7. Now you should have a new "up-to-date" backup consisting of the database and log file of the secondary server at the time of the stopped liveCopy.log.
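
As a sketch, steps 5 and 6 might look like this on the secondary (dbeng12 and the file names are placeholders for your actual engine and paths):

    rem step 5: start on the copy of the full backup; automatic recovery runs to the end of the backup
    dbeng12 d:\restore\mydb.db
    rem         ...then shut that server down (e.g. via dbstop)
    rem step 6: apply the copied live log; shut the server down again once the log has been applied
    dbeng12 d:\restore\mydb.db -a d:\restore\liveCopy.log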

So, that way you could "freshen" your secondary site during the day.

Note, however, that for a second "refresh", you will have to start with the full backup again; i.e. it should not be possible to "freshen" the database from step 7 with a more current live backup log, as that log fits the active database, and not the restored and already "freshened" one.

So, in case the primary site goes down, you now have three options:

  • use the full backup from step 1 with the current live backup log to restore the database (with possibly some recovery time for today's operations)
  • use the freshened database from step 7, which is immediately available (and accept losing the contents from after the "refresh")
  • use the freshened database from step 7 and use the current live backup to filter out the most recent contents (say, by running DBTRAN against the log, limited to the time after the "refresh"; see the sketch below), then run those operations against the database to make it up to date... (requiring skills and some time to recover/apply operations).

Option 1 is the normal usage of a live backup log, so I can't tell whether options 2 or 3 would be useful for you...
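
For the third option, the DBTRAN step could look roughly like this (file names and the cut-off time are placeholders, and I'm assuming -j is the right switch to limit the output to operations after a given time):

    rem translate only operations logged after the given time into a SQL script
    dbtran -j "2013/01/15 12:00" d:\livebackup\mydb.log d:\restore\since_refresh.sql
    rem review the script, then run it against the freshened database (e.g. with dbisql)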


No, I have never used that technique myself.


How best to stop a live backup?

You can combine steps 5 & 6 by using -ad to recover with multiple logs.
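
I.e., something like this (the directory is a placeholder; the server applies all logs found there in the correct order):

    dbeng12 d:\restore\mydb.db -ad d:\restore\logs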

The process sounds close to how incremental backups work.

Thanks Volker, your comments are helpful.

VolkerBarth
Contributor

How best to stop a live backup?

I'd recommend asking that as a separate question... and no, I don't have an answer...