
Simple archiving setup for SQL Anywhere 10

Former Member

I'm running a SQL Anywhere 10 app where a couple of tables are frequently updated. I'm logging these changes via triggers into log tables (one head record for each changed record and n child records for each changed column).
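For illustration, the trigger logic looks roughly like this (table, column, and trigger names are simplified placeholders):

    -- Simplified logging trigger; log_head.id is an autoincrement column.
    CREATE TRIGGER trg_log_customer AFTER UPDATE ON customer
    REFERENCING OLD AS o NEW AS n
    FOR EACH ROW
    BEGIN
        DECLARE log_id INTEGER;

        -- one head record per changed row
        INSERT INTO log_head (table_name, pk_value, log_date)
            VALUES ('customer', n.customer_id, CURRENT TIMESTAMP);
        SET log_id = @@identity;

        -- one child record per changed column
        IF o.name <> n.name THEN
            INSERT INTO log_detail (head_id, column_name, old_value, new_value)
                VALUES (log_id, 'name', o.name, n.name);
        END IF;
    END;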

The problem is that these log tables are growing quite fast and are now consuming almost half of the database size. I've already set up an archive database at the customer's site, where we occasionally unload the log tables from the main db, reload them into the archive db, and then truncate the log tables on the main db. In order to reclaim the space used, we have to rebuild the main db completely. This is a tedious and error-prone task, so I'm looking for a simpler way to archive the changes. I've already tried to let the triggers write directly into the remote archive db (the remote server is set up and accessible), but I'm running into problems with the identity column, which does not seem to work correctly for remote databases.
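For reference, the remote data access setup I tried looks roughly like this (server, DSN, and table names are placeholders):

    -- Remote server pointing at the archive db:
    CREATE SERVER archive_srv CLASS 'SAODBC' USING 'DSN=archive_db';

    -- Proxy table mapped onto the remote log table:
    CREATE EXISTING TABLE log_head_remote AT 'archive_srv..dba.log_head';

The triggers then insert into log_head_remote, and that is where the identity/autoincrement values stop behaving as expected.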

My question to you experts out there is: Is there a simple setup for archiving tables? I've started to look into QAnywhere, but that seems oversized for this issue. Isn't there any simpler way?

Accepted Solutions (1)

Former Member

We use the same solution here. What we do is: every day, the log db connects to the prod db, inserts all log records prior to today (select * from logtable where date < current date) into the log db, and then deletes those records from the prod db.

With this approach, the prod db will not fail if the log db is offline.
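A minimal sketch of that daily job, run on the log db via a scheduled event (server, DSN, table, and column names are placeholders):

    -- One-time setup on the log db:
    CREATE SERVER prod_srv CLASS 'SAODBC' USING 'DSN=prod_db';
    CREATE EXISTING TABLE prod_logtable AT 'prod_srv..dba.logtable';

    -- Daily event: copy yesterday's and older records, then delete them.
    CREATE EVENT archive_logs
    SCHEDULE START TIME '1:00AM' EVERY 24 HOURS
    HANDLER
    BEGIN
        INSERT INTO logtable
            SELECT * FROM prod_logtable WHERE log_date < CURRENT DATE;
        DELETE FROM prod_logtable WHERE log_date < CURRENT DATE;
        COMMIT;
    END;

Since the job runs from the log db's side, an offline log db just means the transfer is skipped that day.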

VolkerBarth
Contributor

That approach with frequent DELETEs will also prevent the prod db from growing that large, so the rebuild step (to reclaim free space) won't be necessary.

Former Member

This seems like a viable approach. I tried it locally and it works like a charm. Thx!

Answers (1)

Former Member

I think the flaw in your original implementation was the belief that you needed to rebuild the database in order to reclaim the space. While the database file wouldn't shrink when you deleted the logging data, the pages would be freed up within the database file and would then be reused by the database. Rebuilding the database would make the database file smaller, but then the database would just have to grow again until the next time you archived your data. If you stuck with deleting the archive data without rebuilding the db file, it would reach a "normal" size and reuse the free pages within the file, growing only as much as your dataset itself grows.
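One way to watch this happen (a sketch, assuming you own the tables): check the page usage and file size before and after a large DELETE, and you should see the freed pages remain available for reuse while the file size stays put.

    -- Pages used vs. free, per table:
    CALL sa_table_page_usage();

    -- Database file size in pages:
    SELECT DB_PROPERTY('FileSize');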

I would suggest that the biggest reason to rebuild the database would be to pre-allocate space in the db file once you know how big a database you are likely to need. The benefit of pre-allocating space is to minimize file fragmentation on disk.

As others have said, archiving more frequently will reduce the maximum db file size you require, but that is independent of any misconception that you needed to rebuild the file in order to reclaim space.

VolkerBarth
Contributor

@Rob: Good point! However, w.r.t. the "biggest reason to rebuild the database would be to then pre-allocate space" - wouldn't an "ALTER DBSPACE ADD..." be enough (and easier) in such a case?
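For instance, something like the following would grow the main dbspace in one contiguous chunk without a rebuild (the size is just an example):

    -- Pre-allocate 200 MB in the system dbspace:
    ALTER DBSPACE system ADD 200 MB;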