
2 Answers
My first suggestion would be to use intelligent re-indexing rather than blindly re-indexing everything; this helps avoid hitting the logs unnecessarily. Ola Hallengren offers one such solution: see http://ola.hallengren.com/
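As a sketch of what "intelligent re-indexing" looks like with Ola Hallengren's solution, his `IndexOptimize` procedure takes fragmentation thresholds so that lightly fragmented indexes are skipped, moderately fragmented ones are reorganized, and only heavily fragmented ones are rebuilt (parameter names as per his documentation; adjust thresholds to your workload):

```sql
-- Skip indexes under 5% fragmentation, reorganize between 5% and 30%,
-- rebuild (online where possible) above 30%.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,  -- below Level1: do nothing
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30;
```

Because unfragmented indexes are left alone, far less log is generated than by a blanket rebuild of everything.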

External storage on a SAN that uses either cheap disks (SATA) or dedupe technology is more affordable than expensive SAN disks.

Increasing the frequency of the log backups, and of the copy and restore operations, makes sense to me. Ensure the maintenance runs out of hours, so that any learning curve around capacity causes minimal pain.
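A minimal sketch of a more frequent log backup (database name and backup path are placeholders; in practice you would schedule this as an Agent job running every few minutes):

```sql
-- Back up the transaction log frequently to keep it from growing
-- during index maintenance. COMPRESSION also shrinks what must be
-- copied and restored on the secondary.
BACKUP LOG [YourDatabase]
TO DISK = N'\\backupserver\logs\YourDatabase_log.trn'
WITH COMPRESSION, CHECKSUM;
```

The more often the log is backed up during the re-indexing window, the smaller each backup is and the less the copy/restore jobs fall behind.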

If you incur heavy log overhead because of rebuilds, I would advise a strategy where you change to reorganize instead of rebuild.

Yes, it's true that at higher fragmentation reorganize is less efficient than rebuild, but the great thing about reorganize is that you can stop it, and the next time you start it, it will continue where it left off.

So you could schedule regular intervals of reorganizing, spread across the day and night, so that you equalize the load instead of creating huge peaks.
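The approach above can be sketched as: check fragmentation with `sys.dm_db_index_physical_stats`, then reorganize only the indexes that need it (table and index names below are placeholders):

```sql
-- Find indexes in the current database above a fragmentation threshold.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND i.name IS NOT NULL;

-- For each qualifying index, reorganize. REORGANIZE is always online,
-- is logged in small increments, and can be stopped mid-run without
-- rolling back the compaction work already done.
ALTER INDEX [IX_YourIndex] ON [dbo].[YourTable] REORGANIZE;
```

Running a short, time-boxed pass like this several times a day keeps log generation smooth instead of producing one large rebuild spike.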

Technically, a reorganize doesn't continue where it left off. But say it has done the first 10% of the pages: the next time it starts, it will fly past that first 10% very quickly, since it doesn't need to rearrange those pages again.
– Edward Dortland, Aug 29 '12 at 18:42