on 2021 Jul 26 6:58 PM
we have an ASA 17 database which has been live for a year. The database file size keeps growing, like 2-3 GB a day. I understand that some data is loaded into the DB every day, but if I list the tables and their sizes, the numbers don't add up. I.e., this query gives me a total of 25G:

SELECT SUM ( table_page_count * DB_PROPERTY ( 'PageSize' ) ) / 1024 / 1024
  FROM SYSTAB
 WHERE creator = 1;

and I see the biggest table is just 5G. So why is the database file almost 400G now?
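As an aside, the arithmetic in that query divides by 1024 twice, which yields megabytes; one more division gives gigabytes. Here is a small Python sketch of the pages-to-size conversion; the 4096-byte page size and the per-table page counts are hypothetical examples, not values from the actual database:

```python
# Sketch of the size arithmetic behind the question's query:
# pages * page size, divided down to gigabytes.
# PAGE_SIZE and the page counts below are made-up example values;
# the real page size comes from DB_PROPERTY('PageSize').
PAGE_SIZE = 4096  # bytes

def pages_to_gb(page_count, page_size=PAGE_SIZE):
    """Convert a page count to gigabytes."""
    return page_count * page_size / 1024 / 1024 / 1024

# Hypothetical per-table main-page counts summing to 25 GB
table_page_counts = [1_310_720, 5_242_880]
total_gb = sum(pages_to_gb(p) for p in table_page_counts)
print(round(total_gb, 1))  # 25.0
```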
dbinfo -u or sa_table_page_usage() would provide a better understanding of where the space is being used.
Chris answered the question "How do I find where all the space is going?"... run dbinfo.
Here's an answer to the question "How do I use the system catalog to find where all the space is going?"
SYSTAB.table_page_count doesn't account for all the used pages in the database file... there's also SYSTAB.ext_page_count and SYSPHYSIDX.leaf_page_count, as well as free pages.
SELECT ( SELECT SUM ( SYSTAB.table_page_count ) FROM SYSTAB ) AS USED_main_table_pages,
       ( SELECT SUM ( SYSTAB.ext_page_count ) FROM SYSTAB ) AS USED_blob_extension_pages,
       ( SELECT SUM ( SYSPHYSIDX.leaf_page_count ) FROM SYSPHYSIDX ) AS USED_index_leaf_pages,
       used_main_table_pages + used_blob_extension_pages + used_index_leaf_pages AS total_USED_pages;

SELECT CAST ( DB_PROPERTY ( 'FreePages' ) AS BIGINT ) AS free_FILE_pages,
       CAST ( DB_PROPERTY ( 'FileSize' ) AS BIGINT ) AS total_FILE_pages,
       CAST ( total_file_pages - free_file_pages AS BIGINT ) AS unfree_FILE_pages;

USED_main_table_pages USED_blob_extension_pages USED_index_leaf_pages total_USED_pages
--------------------- ------------------------- --------------------- ----------------
               523719                     21264                 40962           585945

free_FILE_pages total_FILE_pages unfree_FILE_pages
--------------- ---------------- -----------------
           3471           589911            586440
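The accounting in those two queries can be double-checked with a quick Python sketch, using the page counts from the result sets above:

```python
# Reproduce the page accounting from the two queries above,
# using the numbers reported in the result sets.
main_table_pages = 523_719      # SUM(SYSTAB.table_page_count)
blob_extension_pages = 21_264   # SUM(SYSTAB.ext_page_count)
index_leaf_pages = 40_962       # SUM(SYSPHYSIDX.leaf_page_count)
total_used_pages = main_table_pages + blob_extension_pages + index_leaf_pages

free_file_pages = 3_471         # DB_PROPERTY('FreePages')
total_file_pages = 589_911      # DB_PROPERTY('FileSize')
unfree_file_pages = total_file_pages - free_file_pages

print(total_used_pages)                       # 585945
print(unfree_file_pages)                      # 586440
print(unfree_file_pages - total_used_pages)   # 495 pages unaccounted for ("the VAT")
```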
I don't know how to explain the difference between total_USED_pages and unfree_FILE_pages (but I don't care... maybe it's the VAT 🙂).
Also, the physical file may actually shrink in size when the database is shut down. When the database is running, the checkpoint log data is appended to the end of the physical file, and that space is freed when a CHECKPOINT is executed on shutdown.
However, that doesn't affect free space.
Sooooo, the "missing space" may be going to extension (blob) pages, index pages, and free pages.
Thank you, Chris. I was able to get this from the query:
SELECT ( SELECT SUM ( SYSTAB.table_page_count ) FROM SYSTAB WHERE table_name = 'my_big_table' ) AS USED_main_table_pages,
       ( SELECT SUM ( SYSTAB.ext_page_count ) FROM SYSTAB WHERE table_name = 'my_big_table' ) AS USED_blob_extension_pages,
       ( SELECT SUM ( SYSPHYSIDX.leaf_page_count ) FROM SYSPHYSIDX WHERE table_id = 7537 ) AS USED_index_leaf_pages,
       used_main_table_pages + used_blob_extension_pages + used_index_leaf_pages AS total_USED_pages;

USED_main_table_pages USED_blob_extension_pages USED_index_leaf_pages total_USED_pages
--------------------- ------------------------- --------------------- ----------------
               533255                  47172559                126571         47832385
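Using the counts from that result set, the blob share of the table's space works out as below. The 8 KB page size is an assumption (check DB_PROPERTY('PageSize')), but with it the total implied size is roughly in the ballpark of the ~400 GB file:

```python
# Share of this table's pages taken by blob extension pages, and the
# implied size, using the counts from the query result above.
# The 8 KB page size is an assumption; check DB_PROPERTY('PageSize').
PAGE_SIZE = 8192  # bytes

main_pages = 533_255
ext_pages = 47_172_559
index_pages = 126_571
total_pages = main_pages + ext_pages + index_pages  # 47832385

blob_share = ext_pages / total_pages
total_gib = total_pages * PAGE_SIZE / 1024**3

print(round(blob_share * 100, 1))  # 98.6 -> extension pages dominate
print(round(total_gib))            # ~365 GiB, assuming an 8 KB page size
```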
so the space is mostly used by the blob column. I got it! Thank you!
so the space is mostly used by the blob column
Are you using blobs intensively in this table? Extension pages are also used for other kinds of "fragmented rows", i.e. when you insert a row with mostly "empty" data (such as many NULLed columns) and later update those columns with real data. The new contents will usually occupy different space and may therefore need many more pages than would have been required if the data had been filled in "right away" during the INSERT, and these additional pages are also counted as extension pages.
Cf. that help article.
Volker is correct... pages allocated by non-blob row splits are indeed counted as "extension pages".
Here is a demo where a null DECIMAL column is updated with 30 digits of data.
CREATE TABLE t (
   pkey INTEGER NOT NULL PRIMARY KEY,
   data DECIMAL ( 30 ) NULL );

INSERT t ( pkey )
SELECT row_num FROM sa_rowgenerator ( 1, 100000 );
COMMIT;
CHECKPOINT;

SELECT ext_page_count FROM SYSTAB WHERE table_name = 't';

ext_page_count
--------------
             0

UPDATE t SET data = 123456789012345678901234567890;
COMMIT;
CHECKPOINT;

SELECT ext_page_count FROM SYSTAB WHERE table_name = 't';

ext_page_count
--------------
           648
Here's what the row split numbers look like:
SELECT SUBSTR ( SYSUSER.user_name, 1, 10 ) AS owner_name,
       SUBSTR ( SYSTAB.table_name, 1, 10 ) AS table_name,
       CAST ( ( SYSTAB.table_page_count + SYSTAB.ext_page_count )
              * CAST ( DB_PROPERTY ( 'PageSize' ) AS INTEGER ) AS INTEGER ) AS bytes,
       CAST ( sa_table_fragmentation.rows AS INTEGER ) AS rows,
       CAST ( sa_table_fragmentation.row_segments AS INTEGER ) AS row_segments,
       row_segments - rows AS row_splits,
       CAST ( sa_table_fragmentation.segs_per_row AS DECIMAL ( 11, 3 ) ) AS segs_per_rows,
       IF segs_per_rows NOT IN ( 0, 1 )
          THEN 'Imperfect'
          ELSE ''
       END IF AS "Imperfect?"
  FROM SYSTAB
       INNER JOIN SYSUSER ON SYSUSER.user_id = SYSTAB.creator,
       LATERAL ( sa_table_fragmentation ( SYSTAB.table_name, SYSUSER.user_name ) ) AS sa_table_fragmentation
 WHERE ( SYSUSER.user_id = 1
         OR SYSUSER.user_id BETWEEN 101 AND 2147483647 )  -- DBA or non-system
   AND SYSTAB.server_type <> 3   -- not a proxy table
   AND SYSTAB.table_type <> 21   -- not a VIEW
   -- AND rows >= 1000           -- optionally exclude empty and tiny tables
 ORDER BY owner_name, table_name;

owner_name table_name bytes    rows    row_segments row_splits segs_per_rows Imperfect?
---------- ---------- -------- ------- ------------ ---------- ------------- ----------
dba        t           3932160  100000       196556      96556         1.966 Imperfect
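The segs_per_rows figure is simply row_segments divided by rows, and row_splits is the difference between them. A quick Python sketch of that arithmetic, using the demo output above:

```python
# Row-split arithmetic from the sa_table_fragmentation output above.
rows = 100_000          # rows in table t
row_segments = 196_556  # segments those rows occupy

row_splits = row_segments - rows
segs_per_row = row_segments / rows

print(row_splits)              # 96556 -> almost every row was split once
print(round(segs_per_row, 3))  # 1.966

# The "Imperfect?" flag in the query fires when segs_per_row is not 0 or 1
print(segs_per_row not in (0, 1))  # True: fragmentation is present
```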