on 2021 Jul 26 6:58 PM
We have an ASA 17 database that went live about a year ago. The database file keeps growing, roughly 2-3 GB a day. I understand that some data is loaded into the DB every day, but when I list the tables and their sizes, the numbers don't add up: the query below gives me a total of about 25 GB, and the biggest table is only 5 GB, so why is the database file almost 400 GB now?

```sql
SELECT SUM ( table_page_count * DB_PROPERTY ( 'PageSize' ) ) / 1024 / 1024
  FROM SYSTAB
 WHERE creator = 1;
```
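A minimal sketch of a fuller accounting, assuming the standard SQL Anywhere 17 catalog and the DB_PROPERTY names 'FileSize' and 'FreePages' (worth verifying on your build): the query above counts only table_page_count, so blob/extension pages, index pages, and free pages inside the file are all invisible to it.

```sql
-- Hedged sketch: compare the file size against a fuller accounting, all in MB.
-- Assumes DB_PROPERTY ( 'FileSize' ) and DB_PROPERTY ( 'FreePages' ) report
-- the main file size and free page count in pages, respectively.
SELECT CAST ( DB_PROPERTY ( 'FileSize' ) AS BIGINT )
          * CAST ( DB_PROPERTY ( 'PageSize' ) AS BIGINT ) / 1024 / 1024 AS file_mb,
       ( SELECT SUM ( table_page_count + ext_page_count )
           FROM SYSTAB
          WHERE creator = 1 )
          * CAST ( DB_PROPERTY ( 'PageSize' ) AS BIGINT ) / 1024 / 1024 AS table_and_ext_mb,
       CAST ( DB_PROPERTY ( 'FreePages' ) AS BIGINT )
          * CAST ( DB_PROPERTY ( 'PageSize' ) AS BIGINT ) / 1024 / 1024 AS free_mb;
```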
Thank you Chris. I was able to get this from the query below:
```sql
SELECT ( SELECT SUM ( SYSTAB.table_page_count )
           FROM SYSTAB
          WHERE table_name = 'my_big_table' ) AS USED_main_table_pages,
       ( SELECT SUM ( SYSTAB.ext_page_count )
           FROM SYSTAB
          WHERE table_name = 'my_big_table' ) AS USED_blob_extension_pages,
       ( SELECT SUM ( SYSPHYSIDX.leaf_page_count )
           FROM SYSPHYSIDX
          WHERE table_id = 7537 ) AS USED_index_leaf_pages,
       used_main_table_pages
          + used_blob_extension_pages
          + used_index_leaf_pages AS total_USED_pages;

USED_main_table_pages USED_blob_extension_pages USED_index_leaf_pages total_USED_pages
--------------------- ------------------------- --------------------- ----------------
               533255                  47172559                126571         47832385
```
So the space is mostly used by the blob column. I got it! Thank you!
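For reference, the same breakdown can be generalized across all tables without hardcoding a table_id, by correlating SYSPHYSIDX on SYSTAB.table_id — a sketch, assuming the standard SQL Anywhere 17 catalog:

```sql
-- Hedged sketch: per-table page breakdown, sorted so blob-heavy tables
-- (large ext_page_count) surface first. Assumes tables owned by user_id 1.
SELECT st.table_name,
       st.table_page_count                                 AS main_pages,
       st.ext_page_count                                   AS blob_extension_pages,
       ( SELECT COALESCE ( SUM ( pi.leaf_page_count ), 0 )
           FROM SYSPHYSIDX pi
          WHERE pi.table_id = st.table_id )                AS index_leaf_pages
  FROM SYSTAB st
 WHERE st.creator = 1
 ORDER BY st.ext_page_count DESC;
```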
> so the space is mostly used by the blob column
Are you using blobs intensively in this table? Because extension pages are also used for other kinds of "fragmented rows", i.e. when you insert a row with mostly "empty" data (such as many NULLed columns) and later update those columns with real data: the new contents will usually occupy different space and may therefore need many more pages than would have been required if the data had been filled in "right away" during the INSERT, and these additional pages are also counted as extension pages.
Cf. that help article.
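One way to reduce such row splits, sketched below under the assumption that ALTER TABLE accepts a PCTFREE clause on this build (the 40 is purely illustrative): reserving free space in each table page gives rows inserted "empty" room to grow in place when the real data arrives later.

```sql
-- Hedged sketch: reserve 40% free space per page so later UPDATEs of
-- initially-NULL columns can often grow the row in place instead of
-- splitting it onto extension pages. Affects newly allocated pages.
ALTER TABLE my_big_table PCTFREE 40;
```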
Volker is correct... pages allocated by non-blob row splits are indeed counted as "extension pages".
Here is a demo where a null DECIMAL column is updated with 30 digits of data.
```sql
CREATE TABLE t (
   pkey INTEGER NOT NULL PRIMARY KEY,
   data DECIMAL ( 30 ) NULL );

INSERT t ( pkey )
SELECT row_num FROM sa_rowgenerator ( 1, 100000 );
COMMIT;
CHECKPOINT;

SELECT ext_page_count FROM SYSTAB WHERE table_name = 't';

ext_page_count
--------------
             0

UPDATE t SET data = 123456789012345678901234567890;
COMMIT;
CHECKPOINT;

SELECT ext_page_count FROM SYSTAB WHERE table_name = 't';

ext_page_count
--------------
           648
```
Here's what the row split numbers look like:
```sql
SELECT SUBSTR ( SYSUSER.user_name, 1, 10 )                 AS owner_name,
       SUBSTR ( SYSTAB.table_name, 1, 10 )                 AS table_name,
       CAST ( ( SYSTAB.table_page_count + SYSTAB.ext_page_count )
              * CAST ( DB_PROPERTY ( 'PageSize' ) AS INTEGER )
              AS INTEGER )                                 AS bytes,
       CAST ( sa_table_fragmentation.rows AS INTEGER )     AS rows,
       CAST ( sa_table_fragmentation.row_segments
              AS INTEGER )                                 AS row_segments,
       row_segments - rows                                 AS row_splits,
       CAST ( sa_table_fragmentation.segs_per_row
              AS DECIMAL ( 11, 3 ) )                       AS segs_per_rows,
       IF segs_per_rows NOT IN ( 0, 1 )
          THEN 'Imperfect'
          ELSE ''
       END IF                                              AS "Imperfect?"
  FROM SYSTAB
       INNER JOIN SYSUSER
               ON SYSUSER.user_id = SYSTAB.creator,
       LATERAL ( sa_table_fragmentation ( SYSTAB.table_name,
                                          SYSUSER.user_name ) ) AS sa_table_fragmentation
 WHERE ( SYSUSER.user_id = 1
         OR SYSUSER.user_id BETWEEN 101 AND 2147483647 )  -- DBA or non-system
   AND SYSTAB.server_type <> 3                            -- not a proxy table
   AND SYSTAB.table_type <> 21                            -- not a VIEW
-- AND rows >= 1000                       -- optionally exclude empty and tiny tables
 ORDER BY owner_name,
          table_name;

owner_name table_name bytes       rows        row_segments row_splits  segs_per_rows Imperfect?
---------- ---------- ----------- ----------- ------------ ----------- ------------- ----------
dba        t              3932160      100000       196556       96556         1.966 Imperfect
```
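Once row splits like these are identified, SQL Anywhere's REORGANIZE TABLE statement can defragment the rows — a sketch, not a full remedy: freed pages go back to the database's free list for reuse, but the file itself does not shrink (that would take a rebuild, e.g. unload/reload).

```sql
-- Hedged sketch: defragment the demo table's rows, then re-check the count.
REORGANIZE TABLE t;
COMMIT;
CHECKPOINT;
SELECT ext_page_count FROM SYSTAB WHERE table_name = 't';
```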