Database Administrators Stack Exchange is a question and answer site for database professionals who wish to improve their database skills and learn from others in the community.

Sadly, I can't change the setup and use of this database much (thanks to some internal politics).

It's running on SQL Server 2008 R2.

It's only been live for 5 days and has grown from 20 GB to upwards of 120 GB in that time. (Essentially most of the data gets deleted and then re-imported, but as I say, I can't control that side of things.)

I would love to run nightly jobs to shrink the database and reorganise the indexes, but I know that's a long way from best practices and could lead to more problems than I've already got!

QUESTIONS

What's the best way to handle a database that's rapidly increasing in size?

Should I be looking at moving the file group around to keep the physical size on disk down?

Is there any way to stop the server running out of space within a month?

Can you tell us the recovery model you are using? Also, can you expand on what you mean by "deleted and then imported"?
–
Liam Confrey Jun 4 '13 at 13:43

Simple recovery model, and by "deleted and imported" I mean that instead of using UPDATE it's DELETE and INSERT. I know it's bad, terribly terribly bad, but it's not something I can change!
–
James Sinclair Jun 4 '13 at 14:44

3 Answers

I don't recommend shrinking your database files unless you are completely sure that the space will never be needed again. Why shrink every night only for the files to grow again every day? You'll hit the pain points of shrinking your data files, and you'll take a performance hit whenever the files need to grow during the day.

If it ramped up to 120 GB because of initial space requirements, is it safe to say that the database will not be larger than that (obviously with some buffer)? Size your database appropriately.

Otherwise, if the database will continuously grow, and you don't have any input on the data growth that happens (in other words, if you can't archive data off of the database) then you will simply need to ensure that there is ample drive space for the required growths.

Do you know which database files have been growing? Different activity causes data files to grow than causes the transaction log to grow (or hold its size). If the transaction log is the main consumer of space because of heavily logged transactions, you can consider more frequent transaction log backups to ensure the log is reused more often (there are some blockers here, such as open transactions, so this might not be possible across the board).
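To find out which files are actually growing, you can query the standard DMV sys.database_files; this is a minimal sketch that breaks down size and free space per file (run it in the context of the database in question):

```sql
-- Size and free space per file in the current database.
-- size is in 8 KB pages, hence * 8 / 1024 to convert to MB.
SELECT
    name        AS logical_name,
    type_desc   AS file_type,   -- ROWS = data file, LOG = transaction log
    size * 8 / 1024 AS size_mb,
    FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
FROM sys.database_files;
```

If used_mb stays flat while size_mb climbs, you are looking at the "mostly empty space" situation the OP describes, rather than genuine data growth.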

More information surrounding what is growing and the scenario is needed to get more specific, but I don't recommend scheduled shrinking of your database files.

+1. The OP is clearly in the category that hates giving databases enough space to breathe. Hint - in most large setups there is no shrinking and expanding; things are preconfigured and get reconfigured regularly, but never automatically, not daily, and never shrinking for the sake of looking better in the morning.
–
TomTom Jun 4 '13 at 14:14

@TomTom I have no worries at all in giving the DB as much space as required; my concern is that in 5 days of use the DB has grown 5x - what happens in 10 days' time, or 100 days' time? The data has remained fairly stable (i.e. the number of rows hasn't changed much); it's just that because almost every row is altered (and altered using DELETE and INSERT, not UPDATE), I have a DB that is four-fifths empty space and one-fifth data after just 5 days.
–
James Sinclair Jun 4 '13 at 14:51


Just wait a few more days. Run a reorg if needed - not of the database, of the tables. Just do not expand/shrink the files. Find the REASON for this behavior - we (the others) clearly do not see it.
–
TomTom Jun 4 '13 at 14:54


@TomTom thanks, I'll give this a go as well - I suspect the reason is lazy/inept developers but that's sadly out of my control.
–
James Sinclair Jun 4 '13 at 15:16

SADLY that will be on your desk permanently then. SQL is tricky - do something stupid, do not properly test, and there you go, wasting tons of precious resources.
–
TomTom Jun 4 '13 at 15:22

Based on the information you have given, I would say the best way to manage this database is to sit down with the core users and determine the expected growth rate at 1, 3, 5 and 7 years. Pre-size the data files to the expected 3-year size, and allow auto-growth in manageable increments up to the 5-year size, on the off chance the database reaches the 3-year size quicker than expected. Make sure you have enough drive and storage space to handle the 7-year size, including all of your backup strategies and DR plans.
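As a sketch of that pre-sizing, the database name (YourDB), logical file name (YourDB_Data), and the sizes below are all placeholders - substitute your own projections:

```sql
-- Pre-size the data file to the projected 3-year size, with a fixed
-- growth increment (fixed MB/GB increments are more predictable than
-- percentage growth on large files).
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_Data, SIZE = 200GB, FILEGROWTH = 1GB);

-- Optionally cap auto-growth at the 5-year estimate:
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_Data, MAXSIZE = 400GB);
```

Pre-sizing up front means growth events (and their I/O stalls) happen on your schedule rather than in the middle of the business day.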

I will add a caveat: since you are on R2, you might want to look at installing a Utility Control Point for the database instance. This will let you track the space used in the data file(s). If the initial input of data was truly 5x greater than needed, you might be able to confirm this, re-size the data files smaller, and then initiate the 1-7 year growth plan.

You mention that records are "altered using DELETE and INSERT not UPDATE". If that's the case then can you confirm that the growth is in the database files alone and not the log files as well?

If you have a lot of throughput and every UPDATE is doubled into a DELETE/INSERT, then you will probably see higher than usual log growth - that should be fine so long as your backups are done regularly and the log gets truncated nicely.

I would start a spreadsheet and log data/log file sizes each day to see how long you have before you run out of practical room. (Perhaps you'll get a nice chart with a trend line heading towards your maximum disk space.) You don't specify the maximum space available, but even with 1 TB+ of storage, you might see disk usage causing I/O issues if it's particularly "busy".
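Instead of a spreadsheet, you could capture the same snapshots inside the instance itself; this is one possible shape for it (the table and its name are my own invention, and the INSERT would be scheduled as a daily SQL Agent job step):

```sql
-- One-off setup: a table to hold daily file-size snapshots.
CREATE TABLE dbo.FileSizeHistory (
    capture_date DATE         NOT NULL DEFAULT (CONVERT(DATE, GETDATE())),
    logical_name SYSNAME      NOT NULL,
    file_type    NVARCHAR(60) NOT NULL,
    size_mb      INT          NOT NULL,
    used_mb      INT          NULL
);

-- Daily snapshot: record allocated vs. used space per file.
INSERT INTO dbo.FileSizeHistory (logical_name, file_type, size_mb, used_mb)
SELECT name,
       type_desc,
       size * 8 / 1024,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024
FROM sys.database_files;
```

A week or two of rows gives you the trend line, and a concrete date to put in front of management.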

If you can show that you are going to reach capacity very soon, it might make for a good argument for management ;)

Some things will lessen the growth (slightly). Converting datetime to datetime2, and nvarchar to varchar etc. where appropriate, will shave some bytes off the data (and off the indexes on that data), but you will probably need downtime to do this. Run http://www.brentozar.com/blitzindex/ on key tables; you may find indexes with 0 read counts that can be safely dropped if nobody intends to use them.
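If you'd rather check for unread indexes directly, the DMV sys.dm_db_index_usage_stats gives you the raw counts sp_BlitzIndex works from; a minimal sketch:

```sql
-- Nonclustered indexes with no reads recorded since the last service
-- restart. Verify against a full business cycle (month-end reports etc.)
-- before dropping anything - the DMV resets on restart.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
       ON s.object_id   = i.object_id
      AND s.index_id    = i.index_id
      AND s.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND (s.index_id IS NULL
       OR s.user_seeks + s.user_scans + s.user_lookups = 0);
```

An index that is written (user_updates) but never read is paying storage and maintenance cost for nothing, which matters doubly when every row is churned via DELETE/INSERT.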