on 2018 May 03 6:44 AM
I would like to compress a LONG BINARY column in a table (email .msg files, PDFs, JPGs). We have replicating DBs.
If I apply ALTER TABLE ... ALTER ... COMPRESSED at the consolidated DB (with passthrough), will I have to dbunload -ar each remote DB and the consolidated DB to reclaim disk space?
Yes and no.
If "reclaim disk space" means "reduce the size of the *.db file", then yes: recreating the *.db file from scratch (dbinit, dbunload, reload, etc.) is the only way to do that, and that is not specific to COMPRESS.
If "reclaim disk space" means "reuse free space for new or expanding rows", then no: freshly created free space in a table data page can be used for new data in the same table, and free pages can be used for anything, without having to recreate the *.db file.
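As a rough sketch of the two steps being discussed (SQL Anywhere syntax; the table and column names `Documents` / `doc_blob` are hypothetical placeholders, not from the original question):

```sql
-- Enable compression on the blob column at the consolidated DB
-- (run via passthrough so it replicates to the remotes).
-- Note: only rows written after this point are stored compressed;
-- existing rows are compressed as they are rewritten.
ALTER TABLE Documents ALTER doc_blob COMPRESSED;

-- To actually shrink the *.db file afterwards, each database would
-- need a full rebuild, e.g. from the command line:
--   dbunload -c "UID=dba;PWD=...;DBF=remote.db" -ar
```

The `-ar` switch rebuilds and replaces the database file in place; as the answer notes, a rebuild is the only way to shrink the file itself, whether or not COMPRESS is involved.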
FWIW you might also consider a deduplication algorithm if that is a common condition (same object stored in multiple rows).
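One way to sketch that deduplication idea (an illustrative schema, not something from the thread): store each distinct object once, keyed by a content hash, and have the document rows reference it.

```sql
-- Hypothetical dedup layout: one copy of each distinct blob,
-- referenced by hash from the rows that use it.
CREATE TABLE BlobStore (
    blob_hash  VARCHAR(64) PRIMARY KEY,  -- e.g. a SHA-256 digest of the content
    content    LONG BINARY COMPRESSED
);

CREATE TABLE Documents (
    doc_id     INTEGER PRIMARY KEY,
    blob_hash  VARCHAR(64) NOT NULL
               REFERENCES BlobStore (blob_hash)
);
```

On insert, the application computes the hash, inserts into BlobStore only if the hash is new, and then inserts the Documents row; identical attachments stored in many rows then cost one copy instead of many.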
Just to add:
Whether your compressed data will really require fewer table pages than before (or even more) should be checked via the "sa_column_stats" system procedure. I tested with "medium length" strings a while ago (256 bytes each), only to find they were not compressed at all, which led to that DCX remark...
That being said, I would expect that for typical BLOBs like PDFs and images, compression should actually take place.
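A minimal check along those lines, assuming the same placeholder table/column names as above, might look like:

```sql
-- Report stored-size statistics for the blob column, before and after
-- enabling compression, to see whether page usage actually dropped.
CALL sa_column_stats(
    tab_name = 'Documents',
    col_name = 'doc_blob' );
```

Comparing the average/maximum stored column lengths from the two runs shows whether COMPRESSED is paying off for your particular data.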