on 2014 Dec 03 9:56 AM
Hello Experts,
We have an MS SQL Server 2008 R2 database with 4 data files of 900 GB each. The database is growing, and only about 300 GB of free space remains.
So I would like your opinion on whether:
1) it is a good idea to add one more data file of 900 GB, or
2) we should restructure the database and redistribute the data from these 4 data files across, say, 8 new data files with more space?
Whichever option we choose, what would the impact be on the performance and the indexing of the database?
If you have any other option, please suggest it. Thanks in advance.
Regards,
Mohit
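(For anyone facing the same decision: before choosing, it helps to see how full each data file actually is. A minimal T-SQL sketch, run inside the database in question; the output columns are in MB:)

```sql
-- Per-file allocated size and free space, in MB (8 KB pages / 128 = MB).
SELECT name,
       physical_name,
       size / 128 AS size_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files;
```

If free space is very uneven across the four files, that also tells you how skewed the data distribution already is.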
Hello Srinivasan,
Thanks for the message.
Do you suggest adding a new data file?
As I stated before, the data files are already large at 900 GB each. Adding one more 900 GB file would mean all new data goes only into that new file, so the data would be distributed unevenly, and I don't think that would be good for the database.
What do you suggest?
Regards,
Mohit
Hi Mohit
You can perform database compression as described in the SAP Note below; it typically frees around 20 to 50 % of the space:
1488135 - Database compression for SQL Server
BR
SS
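(Before running the compression from the Note, you can estimate the expected savings per table with SQL Server's built-in procedure. A sketch; the schema and table name here are hypothetical placeholders:)

```sql
-- Estimates current vs. compressed size for one table (PAGE compression).
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',          -- hypothetical schema
     @object_name      = 'MyLargeTable', -- hypothetical table name
     @index_id         = NULL,           -- NULL = all indexes
     @partition_number = NULL,           -- NULL = all partitions
     @data_compression = 'PAGE';
```

Running this against your biggest tables first gives a realistic idea of how close you will get to the 20–50 % figure.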
Hello SS,
Do you mean this compression is different from the shrinking that we perform on the database?
For example, we use the command
DBCC SHRINKFILE (N'SIDDATA1', 14750)
I thought this compresses the data in both rows and columns.
If your approach compresses the data further, please confirm.
Regards,
Mohit
Mohit,
That's right, SS is not suggesting SHRINKFILE, he is suggesting MSSCOMPRESS. It's very different. This doesn't shrink the file, it actually compresses the data, and it's quite effective. Have a look at the Note he linked earlier. SAP and Microsoft both recommend using database compression from SQL Server 2008 and up. This should create more free space in your existing data files.
You could also add another drive with another data file, but as you pointed out this will not help your I/O much, as all new data will just end up going into the new data file. It is possible to use the MSSCOMPRESS process to migrate some of the data to the new file during the compression process, and that's one way you could mitigate this. The best option, however, would generally be to export your database with SWPM, add the new drive and data file, and then re-import it across all the data files. This will spread the data evenly. However, for a large database like yours, this could involve quite a lot of downtime, so you would definitely want to test it on a copy of production first to get an idea of how long it will take.
Either way, you're almost certainly going to want compression.
Regards,
Matt
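(For reference: the MSSCOMPRESS report drives SQL Server's native data compression. Outside of the SAP report, the equivalent plain T-SQL looks roughly like the sketch below; the table name is a hypothetical placeholder, and in an SAP system you would use MSSCOMPRESS rather than issuing this directly:)

```sql
-- Rebuild the heap/clustered index and all other indexes with PAGE compression.
-- Unlike DBCC SHRINKFILE, this physically re-stores the rows in compressed form.
ALTER TABLE dbo.MyLargeTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);

ALTER INDEX ALL ON dbo.MyLargeTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```

The rebuild frees space inside the data files; the files themselves stay the same size until (and unless) you shrink them afterwards.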
Hi Mohit,
We already use MSSCOMPRESS; it reduced our database from 250 GB to about 80 GB, so it will give your system much more available free space, and it is recommended by both SAP and Microsoft.
Note: you can run the MSSCOMPRESS program and compress table by table. If the transaction log grows during compression, try to shrink it.
Alternatively, you can export the database with SWPM and import it again.
BR
Said Shepl
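(On the note about the transaction log: compression rebuilds are logged, so the log can grow considerably. A minimal sketch of checking and then shrinking it; the log file's logical name `SIDLOG1` and the 4096 MB target are hypothetical and must match your own system:)

```sql
-- Show log usage per database, then shrink the log file back down.
DBCC SQLPERF(LOGSPACE);

-- Take a log backup first (full recovery model), otherwise the shrink
-- cannot release the active portion of the log.
DBCC SHRINKFILE (N'SIDLOG1', 4096);  -- target size in MB
```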