on 2015 Jan 27 3:14 PM
This is a follow-up to this question, which was about page size and the performance of validation when using a SAN.
The current page size is 4K. The supplier of the hosted environment is insistent that we should try rebuilding the database with a 32K page size to get better I/O over the FCoE link to the EMC SAN for the validation / backup process.
80% of our rows are under 4K in size (90% of those in the critical core tables). Given all the warnings elsewhere about using too large a page size, what are the actual dangers for normal day-to-day usage of the system?
Any pointers would be helpful.
If each row in your table is 256 bytes long, each 4K page can contain 16 rows. With a 32K page, each page can contain up to 128 rows.
So when a sample query uses data that is spread over 500 pages, the cache memory needed to hold those pages grows from roughly 2 MB to 16 MB.
So for the same data you need more memory.
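A quick back-of-the-envelope sketch of that arithmetic (the 256-byte row and 500-page figures are just the illustrative assumptions from above, not measurements):

```python
# Cache arithmetic for the example above; all numbers are
# illustrative assumptions, not measurements.
ROW_SIZE = 256            # assumed average row size in bytes
PAGES_TOUCHED = 500       # assume the query touches 500 pages either way

for page_size in (4 * 1024, 32 * 1024):
    rows_per_page = page_size // ROW_SIZE
    cache_mb = PAGES_TOUCHED * page_size / (1024 * 1024)
    print(f"{page_size // 1024:>2}K pages: {rows_per_page:>3} rows/page, "
          f"~{cache_mb:.0f} MB of cache for {PAGES_TOUCHED} pages")
```

This prints roughly 2 MB for 4K pages and 16 MB for 32K pages.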
At the same time, an update to the table is written in page-sized chunks. So if you update a page you lock many more rows at the same time. This comes into play when you have a high rate of updates on a table where many connections create and update data.
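To see why the coarser granularity can matter under concurrency, here is a rough sketch with purely hypothetical numbers: if two connections each update one random row of the same table, the chance that they hit the same page grows in proportion to the rows per page.

```python
# Chance that two concurrent single-row updates land on the same page,
# for a uniformly accessed table. Table size and row size are assumptions.
N_ROWS = 1_000_000        # hypothetical table size
ROW_SIZE = 256            # assumed average row size in bytes

for page_size in (4 * 1024, 32 * 1024):
    n_pages = N_ROWS / (page_size // ROW_SIZE)
    # the second update lands on the first one's page with probability 1/n_pages
    print(f"{page_size // 1024:>2}K pages: P(same page) = {1 / n_pages:.6f}")
```

With these numbers the collision probability is eight times higher with 32K pages, simply because each page covers eight times as many rows.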
HTH
> So for the same data you need more memory.
I guess this is only true under the assumption that the same data is spread over the same number of pages (as you have stated, of course). As each 16K page can hold at least four times as many rows as a 4K page, the same data would usually be spread over far fewer pages, so the typical increase in required cache memory should be less than a factor of four (but may still be significant).
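A small sketch of that point (all numbers hypothetical): how much extra cache the larger page costs depends entirely on how densely the needed rows are packed.

```python
# Contrast the best case (needed rows stored contiguously) with the
# worst case (every needed row on a different page). Hypothetical workload.
ROW_SIZE, ROWS_NEEDED = 256, 8000

for page_size in (4 * 1024, 16 * 1024):
    rows_per_page = page_size // ROW_SIZE
    dense_pages = -(-ROWS_NEEDED // rows_per_page)    # ceiling division
    sparse_pages = ROWS_NEEDED
    print(f"{page_size // 1024:>2}K pages: dense ~{dense_pages * page_size // 1024} KB, "
          f"sparse ~{sparse_pages * page_size / (1024 * 1024):.0f} MB")
```

In the dense case both page sizes need the same ~2 MB of cache; only when the needed rows are scattered does the larger page size multiply the memory requirement.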
> So if you update a page you lock many more rows at the same time.
Are you sure? I'd have thought (but don't know) that SQL Anywhere would usually apply row locks here unless they are escalated to page locks...
FWIW, in case the same data modifications apply to the same number of rows (or at least to more than a fourth of what would fit on a 4K page), I'd think that the size of the checkpoint log and the time required for a restore would also increase...
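To put rough numbers on that checkpoint-log guess (a sketch with hypothetical figures, assuming the checkpoint log keeps a pre-image of each page changed since the last checkpoint, so a scattered single-row update costs one full page image):

```python
# Worst-case checkpoint-log growth: every updated row dirties a
# different page, so each update adds one page-sized pre-image.
ROWS_UPDATED = 1000       # hypothetical scattered single-row updates

for page_size in (4 * 1024, 32 * 1024):
    log_mb = ROWS_UPDATED * page_size / (1024 * 1024)
    print(f"{page_size // 1024:>2}K pages: checkpoint log grows by ~{log_mb:.0f} MB")
```

That is roughly 4 MB vs. 31 MB for the same thousand updates.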
(All just guesses; I've never had the need for huge page sizes...)
A big advantage, agreed :)
But wasn't there something like "lock escalation" in former versions? (I can't find it in the current or older (v8) docs; they only list this for DB2 as a MobiLink consolidated database...)
Even if so, I guess it would turn row locks into table locks, so it would not matter for the page size issue either.