
Checking dump sizes - data & log separately

Former Member

Hello,

Is there any way to check how much space is needed for data and log before loading a dump file?

This information would be useful when creating the database for the load, so that it doesn't have to be altered with more space after hitting an error.

Any answers would be appreciated.

former_member188958
Active Contributor

The "with headeronly" option to load database gives you the size of the database you need to create (in pages) without actually loading the dump. (You can use any database name in the command, as it won't actually be loaded.)

You can also work out the order and size of the data vs. log fragments from the sysusages information that this command outputs. A segmap of 3 is system (1) + default (2) [aka "data"], 4 is log, and 7 is data and log.
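Since segmap is a bitmask, any value can be read by testing the individual bits. A small Python sketch (not from the thread; the bit names follow the mapping above):

```python
# Decode a sysusages segmap bitmask into data and/or log.
SYSTEM, DEFAULT, LOG = 0x1, 0x2, 0x4

def classify(segmap):
    kinds = []
    if segmap & (SYSTEM | DEFAULT):  # system + default segments = "data"
        kinds.append("data")
    if segmap & LOG:                 # log segment
        kinds.append("log")
    return "+".join(kinds) or "user-defined segments only"

print(classify(0x3))  # data
print(classify(0x4))  # log
print(classify(0x7))  # data+log
```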

Example:

1> load database test from "/work/test.dmp" with headeronly
2> go
Backup Server session id is: 5. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 6.28.1.1: Dumpfile name 'test153090D1FE   ' section number 1
mounted on disk file '/work/test.dmp'
General:
        This is a full database dump
        Name: test
        Id: 7
        Dump date: Nov  5 2015  2:55PM
        Database page size:  4096.
        Number of pages: 5120
        Checkpoint RID: (2625, 1)
        Next object id: 992003534
        Sort order id: 25
        Status: 128
        Charset id: 190
        Database durability: UNDEFINED
        Compression level: N/A
        Database encrypted: NO
        Database Encryption Key: N/A
Versions:
        Adaptive Server version: Adaptive Server Enterprise/15.7/EBF 23814 SMP SP134 /P/Sun_svr4/OS 5.10/asecarina/3924/64-bit/FBO/Sa
        Backup Server version: Backup Server/15.7 SP131/EBF 23462/P/Sun_svr4/OS 5.10/ase157sp131x/4751/32-bit/OPT/Mon Nov  3 15:18
        Database log version: 7
        Database upgrade version: 35
Dump timestamps:
        Full: 0x0000 00000000
        Cumulative: 0x0000 00000000
        Delta: 0x0000 00000000
Sysusages:
        segmap: 0x00000003 lstart=0 vstart=[2, 18432]  lsize=1536 unrsvd=630
        segmap: 0x00000003 lstart=1536 vstart=[2, 57344]  lsize=1024 unrsvd=1020
        segmap: 0x00000004 lstart=2560 vstart=[2, 59392]  lsize=512 unrsvd=494
        segmap: 0x00000004 lstart=3072 vstart=[2, 72704]  lsize=2048 unrsvd=2040
        Number of logical pages: 5120 (20 MB)
        Number of physical pages: 5120 (20 MB)
Sysdevices:
        devtype=0, vdevno=2, devname=datadev1,
                physname=/work/releases/rel1572sp130/data/datadev1,
                devsize=754688, dbdevsize=5120, vstart=0,
                devstat=2, devstat2=1
Sysdatabases:
        status=0x0005, crdate=Sep 23 2015  3:10PM,
                dumptrdate=Sep 23 2015  5:37PM, status2=0x0000,
                audflags=0x0000, deftabaud=0, defvwaud=0, defpraud=0,
                def_remote_type=0, status3=0x20000, status4=0x4000,
                durablility=1, lobcomp_lvl=0, inrowlen=0,
                dcompdefault=1 status5=0x0000
Sysattributes:
                flmode: 0x0004
Features found in the database or transaction dump header:
  ID =  4 : 15.7.0.000  : Database has system catalog changes made in 15.7 GA
  ID =  7 : 15.7.0.020  : Database has system catalog changes made in 15.7 ESD#02
  ID = 11 : 15.7.0.100  : Database has the Sysdams catalog
The database you are attempting to LOAD was DUMPed under a different sort order
ID (25) or character set ID (190) than the ones running on this server (SortOrd
= 50, CharSet = 1). If the sort orders differ, at least one of them is
non-binary.
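The data and log totals can then be worked out from the Sysusages lines above by summing lsize per segment type and multiplying by the page size. A rough Python sketch (not an official tool; the regex and the hard-coded page size are assumptions tied to this particular output):

```python
import re

PAGE_SIZE = 4096  # "Database page size" from the header output above

header_output = """
        segmap: 0x00000003 lstart=0 vstart=[2, 18432]  lsize=1536 unrsvd=630
        segmap: 0x00000003 lstart=1536 vstart=[2, 57344]  lsize=1024 unrsvd=1020
        segmap: 0x00000004 lstart=2560 vstart=[2, 59392]  lsize=512 unrsvd=494
        segmap: 0x00000004 lstart=3072 vstart=[2, 72704]  lsize=2048 unrsvd=2040
"""

totals = {"data": 0, "log": 0}
for segmap, lsize in re.findall(r"segmap: (0x[0-9a-fA-F]+).*?lsize=(\d+)", header_output):
    bits, pages = int(segmap, 16), int(lsize)
    if bits & 0x3:            # system/default bits -> data
        totals["data"] += pages
    if bits & 0x4:            # log bit
        totals["log"] += pages

for kind, pages in totals.items():
    print(f"{kind}: {pages} pages = {pages * PAGE_SIZE // (1024 * 1024)} MB")
# data: 2560 pages = 10 MB
# log: 2560 pages = 10 MB
```

The totals agree with the header's "Number of logical pages: 5120 (20 MB)" for this dump.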

If you have dump history configured, you can use the "with create_sql" option to actually generate the scripts to create the database.

Former Member

Thanks a lot. It would be even nicer if it could sum all the lsizes with the same segmap and multiply by the page size. But maybe I want too much.