What's the size of your biggest database?

Former Member
6,147

Size: 66.3 GB
Growing rate: 50 MB/day
OS: Windows Server 2003 x64
SA Version: 9.0.2

Breck_Carter
Participant
0 Kudos

The original reply was spam, and so it remains after my edit 🙂

Breck_Carter
Participant

A downvote? Does that mean you want the original spam reply back? Or you don't like Monty Python? Gadzooks! Who doesn't love Monty Python? 🙂

Chris_Kleisath
Participant
0 Kudos

Perhaps the downvote was made to cause this answer to go to the bottom in a "vote sorted" list of answers.

VolkerBarth
Contributor
0 Kudos

I would ask that as a new question...

0 Kudos

@Volker, will do!

Accepted Solutions (1)


Former Member

Only 11 GB, but it's not a production database, it's a test database for recommender system applications.

Answers (7)


Former Member

Our largest one is 255GB and is growing at about 3-5GB a month. We have several others that range from 70 to 200 GB on our ASP servers.

ASA 9.0.2.3508 on business-class hardware.

The primary bulk is from blobs containing images, but we do have some tables with millions of rows and some with over 100 million rows (but these are fairly 'narrow' rows).

Performance is fine. We do spend a fair amount of time making sure that queries perform well before adding them to the system. We also have built a tool that automatically extends the database to minimize fragmentation and also automatically defragments tables and indexes as needed.
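For readers curious what such a tool does under the hood, the two core operations can be approximated with plain SQL Anywhere statements. This is only a sketch, not Alex's actual tool, and the dbspace, table, and index names are made up:

```sql
-- Pre-grow the system dbspace in one large chunk so the file is
-- allocated contiguously, instead of letting it grow a little at a
-- time and fragment on disk.
ALTER DBSPACE system ADD 2 GB;

-- Rebuild a fragmented table, and one of its indexes, in place.
REORGANIZE TABLE DBA.orders;
REORGANIZE TABLE DBA.orders INDEX orders_by_date;
```

A real tool would presumably query the fragmentation statistics first and only reorganize tables that need it, since REORGANIZE TABLE is an expensive operation on large tables.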

Don't have metrics on the number of queries/updates per day at the database, but do for EAServer function calls (the busiest server does over a million a day). All database access goes through EAServer.

Alex

Former Member

I've got a 25 GB database replicating most everything to a 35 GB database via dbremote. Works great! Hasn't got much spam in it.

Former Member

My comment made more sense before the list was re-sorted... it used to be below Breck's waitress dialog. I guess you should never trust the sequence of things in life without an explicit ORDER BY.

justin_willey
Participant

About 120GB, a lot of it is blob data (docs and photos). Product enhancement suggestions to optimize dealing with important but rarely accessed (individually) blobs is something I've been meaning to post on for a while.

With databases this size we have found validation time is the biggest issue. Generally performance is fine with appropriate (but not silly) hardware.
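For reference, a validation run of the kind being discussed can be scripted with the dbvalid utility, in the same style as the other command lines in this thread. The engine name, credentials, and install path here are hypothetical, not Justin's actual setup:

```
"%ASANY9%\win32\dbvalid.exe" ^
   -c "ENG=myserver;DBN=mydb;UID=dba;PWD=sql" ^
   -o C:\logs\dbvalid_out.txt
```

With no table list given, dbvalid validates every table in the database, which is exactly why the run time grows with database size.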

Breck_Carter
Participant
0 Kudos

So ask your questions re: important but rarely accessed blobs, and validation time! And tell us about your hardware: is it consumer or business grade? If you can tell us.

Breck_Carter
Participant
0 Kudos

This isn't a contest, but... Justin is currently number one on the leader board of real databases.

justin_willey
Participant
0 Kudos

I get the impression from the newsgroups that there are much, much larger ones out there - into the multiple hundreds of GB.

reimer_pods
Participant

Biggest customer DB

Size: 22 GB

Growing about 2 MB/day

HW: Dual Xeon 2.4 GHz, 4 GB RAM

OS: W2K3 Server 32 Bit

SA Version: 9.0.2

Breck_Carter
Participant

600G

Alas, I cannot talk publicly about client databases, but at this moment my newer R&D server is set up with a fairly large database for Foxhound testing. Right now, it's empty, and it doesn't have the 600G secondary dbspace set up. The box was locally-assembled by my good friends at Tsunami Technology. All the bits and pieces are consumer-grade because of my natural bias towards helping people save money (including me). Increasing the RAM is on my to-do list.

Database: SQL Anywhere 11.0.1.2276

OS: Windows Vista Ultimate SP1 64-bit

CPU: 4x Intel Core 2 Quad Q9450 2.66GHz

RAM: 4GB (3GB allocated to SQL Anywhere cache)

HDD: system dbspace on 931G D: drive, translog on 465G E: drive, translog mirror on 465G C: drive, temporary on 931G F: drive... in the stores these are called "1T" and "500G" drives.

Foxhound says this...

[Foxhound monitor screenshot]

Here are the Windows command lines that go from empty drives to a pair of dbisql client sessions. You can read about the options here: dbinit, dbspawn, dbsrv11 and dbisql.

"%SQLANY11%\\bin64\\dbinit.exe" ^
   -dbs 600G ^
   -et ^
   -m C:\\data\\big\\big.log ^
   -p 8192 ^
   -s ^
   -t E:\\data\\big\\big.log ^
   D:\\data\\big\\big.db

"%SQLANY11%\\bin64\\dbspawn.exe" ^
   -f "%SQLANY11%\\bin32\\dbsrv11.exe" ^
   -c 3G ^
   -ch 3G ^
   -dt F:\\data\\big ^
   -o D:\\data\\big\\dbsrv11_log_big.txt ^
   -oe D:\\data\\big\\dbsrv11_log_fatal_big.txt ^
   -os 10M ^
   -x tcpip ^
   -ze ^
   -zl ^
   -zp ^
   -zt ^
   D:\\data\\big\\big.db

"%SQLANY11%\\bin32\\dbisql.com" ^
   -c "ENG=big;DBN=big;UID=dba;PWD=sql;CON=big-1"

"%SQLANY11%\\bin32\\dbisql.com" ^
   -c "ENG=big;DBN=big;UID=dba;PWD=sql;CON=big-2"
Former Member

Currently 53GB, with about 800MB/day in growth.

Our situation is a little unusual: an external program cycles binary PDF data out of the database after 15 days, and the data is re-entered on request through a secondary database and remote procedures. Without this system, the database would be at least 600 GB. Non-binary data accounts for about 65 MB/day of growth.
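The aging half of a scheme like that can be sketched as a scheduled event inside the database. This is only an illustration of the idea, not the poster's actual implementation (which uses an external program), and the event, table, and column names are invented:

```sql
-- Nightly job: remove PDF blobs older than 15 days from the main
-- database. In the real system the blobs are re-fetched on demand
-- through remote procedures on a secondary database.
CREATE EVENT purge_old_pdfs
SCHEDULE START TIME '02:00' EVERY 24 HOURS
HANDLER
BEGIN
    DELETE FROM DBA.pdf_store
    WHERE stored_on < DATEADD( day, -15, CURRENT DATE );
END;
```

Doing the purge inside the server keeps it transactional, though an external program (as described above) makes it easier to copy the blobs elsewhere before deleting them.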

Database: SQL Anywhere 9.0.2.3249

OS: Windows Server 2003 R2 32-bit

CPU: 4x Intel Xeon Dual Core 3.00GHz

RAM: 64GB (60GB Allocated to SQL Anywhere cache)

HDD: DB on RAID5 SCSI-320 (15k RPM - 4 Disks), TempFile/TranLog on RAID5 SCSI-320 (15k RPM - 4 Disks)

Former Member

Size: 13G

Database: SQL Anywhere 9.0.2.3804

OS: Windows Server 2008 64-bit running on VMWare vSphere running on HP BL480c 2xX5470

Allocated: 4 vCPUs, 8G of RAM

Storage: EMC Clariion 4-120 with 30x300G fibrechannel disks