
I have just joined a new company, and we have one database that was created with the standard initial size and autogrowth parameters for its data and log files (1 MB autogrowth for data, 10% for the log).

The database is 70 GB right now, and I suspect the data file is spread all over the disk in 1 MB chunks, so sequential reads get penalized. There is just one RAID array for the OS and another RAID 5 array for the database files (data and logs together, unfortunately).

Will a backup and restore help store the data file in a better way, physically speaking? Will there be an improvement for long sequential reads? Otherwise, what can I do?

Thanks.

EDIT: Actually, I have just discovered another database on the same server with the same problem, but with a size of 500 GB.

2 Answers

A restore will not correct the page fragmentation. The best option would be to take an outage and defragment the entire database using DBCC INDEXDEFRAG or DBCC DBREINDEX. For the OS-level file fragmentation, if you simply detach the database, move the files to a different physical drive, and then move them back, you should be good.
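
For reference, a minimal sketch of the commands named above, assuming SQL Server 2005-era syntax; the database, table, and index names (MyDatabase, dbo.Orders, IX_Orders_Date) are placeholders:

    -- Full offline rebuild of all indexes on one table (90 = fill factor).
    DBCC DBREINDEX ('dbo.Orders', '', 90);

    -- Lighter-weight defrag of a single index; can run while the table is in use.
    DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', 'IX_Orders_Date');

    -- Both commands are deprecated; the SQL Server 2005+ equivalents are:
    ALTER INDEX ALL ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);
    ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;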

Indexes are taken care of daily with a maintenance plan. Of course, the backups are stored in a different place than the data files, so I don't see any difference between detaching/attaching the database, as you suggest, and doing a backup/restore from a different physical drive, or even from tape (which is actually my case).
– olmed0, Dec 5 '11 at 13:28

Correct, I think the operations would be equivalent. I have a preference for detach/move/attach over restore, but as long as you are removing the files from the disk and then putting them back, you will accomplish the same goal of handling the fragmentation at the OS level.
– Jason Cumberland, Dec 6 '11 at 18:21
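
A rough sketch of that detach/move/attach cycle, assuming SQL Server 2005+; the database name and file paths below are placeholders:

    -- Detach the database (requires exclusive access; take an outage).
    EXEC sp_detach_db @dbname = N'MyDatabase';

    -- Copy MyDatabase.mdf / MyDatabase_log.ldf to another drive and back,
    -- so the files are rewritten contiguously on disk.

    -- Reattach from the copied-back files.
    CREATE DATABASE MyDatabase
        ON (FILENAME = N'D:\SQLData\MyDatabase.mdf'),
           (FILENAME = N'D:\SQLData\MyDatabase_log.ldf')
        FOR ATTACH;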

I might not be experienced enough to answer this, but I'll give it a shot.

If the database size is 70 GB but the actual data file is still only 1 MB (unlikely), then you probably just need to truncate and shrink the logs. I can't remember exactly because I only have SQL Express installed right now, but in SQL Server (2005+) there is an option when right-clicking a database to basically chop those logs down to size. Check this article, although it doesn't explain how to do it with SSMS.
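
As a rough illustration (not taken from the linked article), a minimal T-SQL sketch of shrinking an oversized log file; the names MyDatabase and MyDatabase_log are placeholders you would look up first:

    USE MyDatabase;

    -- Find the logical file names and current sizes (size is in 8 KB pages).
    SELECT name, size * 8 / 1024 AS size_mb
    FROM sys.database_files;

    -- Shrink the log file down to roughly 1 GB (target is given in MB).
    DBCC SHRINKFILE (MyDatabase_log, 1024);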

However, if your actual database has grown to ~70 GB, then I would assume it is still in one huge .mdf file, no? I'm not sure what you mean by "the datafile is spread all over the disk in 1 MB chunks"; I don't think this is ever the case. If your actual database file is that large, then I would investigate partitioning the database or removing old/unused data. Check this ServerFault question for more details.

Perhaps a simple filesystem defrag would help (run it during off-peak hours, of course).

1 MB is the autogrowth value, so the data file is 70 GB, but it has been growing in pieces of 1 MB rather than reserving the 70 GB up front (as it would have if they had specified that as the "initial size"). So what I am saying is that, presumably, the data file has been written here and there on the disk, not sequentially, and I would like to fix that in order to get better performance on long sequential reads. I hope I am explaining it clearly now.
– olmed0, Dec 2 '11 at 16:15
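
The underlying cause described in this comment is the tiny 1 MB growth increment. A hedged sketch of setting saner autogrowth going forward, assuming SQL Server 2005+; the database and logical file names are placeholders:

    -- Logical file names (MyDatabase_data, MyDatabase_log) are placeholders;
    -- check the real ones in sys.database_files first.
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_data, FILEGROWTH = 512MB);

    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 256MB);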

Oh, I understand now. I thought you meant it was actually thousands of little 1 MB files spread over your C: drive; you're talking about the actual data layout on the hard disk/RAID. I'm thinking a defrag might do exactly this... or, as you mentioned, a restore into a database with its initial size set to 70 GB.
– MikeMurko, Dec 2 '11 at 16:17