Over the past Christmas holiday week, when the website I work on was experiencing very low traffic, we saw several "Request timed out" exceptions (one on each of 12/26, 12/28, 12/29, and 12/30) on several pages that require user authentication. We rarely saw "Request timed out" exceptions before this very low-traffic week.

We believe the timeouts were due to the application's database being "spun down" on the SQL Server and taking longer to spin back up when a request came in.

There are two databases on the server (SQL Server 2005): one specifically for this application, and the other for the public-facing website and for authentication. So when no users were logged into the application (which could easily have been the case for several hours at a time over Christmas week), the application database probably received no requests at all. We think SQL Server then reallocated resources to the other database, and when a request finally came in, the extra time needed to spin the application database back up caused the timeout.

Is there a way to tell SQL Server to give a minimum amount of resources to a database at all times?

4 Answers

Check the database's Auto Close option. If it is on, then during times of low traffic SQL Server basically takes the database offline. The first query against it brings it back online, but depending on the size, that might take a while.
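You can check and turn off this setting from T-SQL as well as from the database properties dialog. A minimal sketch, assuming the application database is named `AppDB` (substitute your actual database name):

```sql
-- Check whether Auto Close is enabled (is_auto_close_on = 1 means on)
SELECT name, is_auto_close_on
FROM sys.databases
WHERE name = 'AppDB';

-- Turn it off so the database stays online between requests
ALTER DATABASE AppDB SET AUTO_CLOSE OFF;
```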

EDIT: If that's not the case, then a few other things to check:

Does your SQL Server run I/O-intensive virus scans or similar maintenance during times of low activity? If so, make sure to exclude your SQL data folders from scanning.

Any other pieces of your app that might be responsible here? For example, if you are using IIS with a .NET web app, recycling the application pool forces pages to recompile the next time they are called.

Beyond this, the only thing I can think of (short of upgrading to SQL Server 2008 Enterprise Edition with Resource Governor) is to optimize your queries so that they perform acceptably even when the tables aren't cached.
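For reference, a minimal Resource Governor sketch (SQL Server 2008 and later only). The pool, group, and function names below are placeholders, and the classifier function must live in `master`; note that Resource Governor classifies incoming sessions, so this routes connections that target a given database rather than reserving resources for the database itself:

```sql
-- Reserve a share of CPU and memory for a workload
CREATE RESOURCE POOL AppPool
    WITH (MIN_CPU_PERCENT = 20, MIN_MEMORY_PERCENT = 20);
CREATE WORKLOAD GROUP AppGroup USING AppPool;
GO
-- Classifier: route sessions that connect to AppDB into AppGroup
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF ORIGINAL_DB_NAME() = 'AppDB'
        RETURN 'AppGroup';
    RETURN 'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```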

Is there anything like this in SQL Server 2005?
– Jeff Widmer, Jan 9 '10 at 2:00

Not really. I know of projects that have used the Service Broker activation mechanism to control resource allocation (via max_queue_readers, using a mechanism similar to rusanu.com/2009/08/05/asynchronous-procedure-execution), but it is complex to implement, it is difficult to get the result set back, and it only controls the number of queries that can execute simultaneously, not the resources those queries use.
– Remus Rusanu, Jan 9 '10 at 3:15

In SSMS (Management Studio), right-click the server name in Object Explorer to open Server Properties, then select Memory in the "Select a page" section. There you can set the minimum and maximum memory available to SQL Server. Set aside enough for the OS and related items (I typically leave 4 GB) and make the rest available to SQL Server. By "spinning down" and "spinning up" I assume you mean caching (procedure and buffer caches), which could be the issue.
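The same settings can also be changed with `sp_configure`. The values below (in MB) are only examples, and as noted in the comments they apply to the whole instance, not to a single database:

```sql
-- Advanced options must be visible before memory settings can be changed
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Pin minimum and maximum server memory, in MB
EXEC sp_configure 'min server memory (MB)', 1024;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```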

But that is for the entire SQL Server instance. How can I specify that SQL Server allocate a minimum set of resources to a single database on the server? DB1 should get a minimum of XX resources, and DB2 can get everything else.
– Jeff Widmer, Jan 9 '10 at 2:05

You cannot. Update to 2008 and use the Resource Governor - the only realistic choice.
– TomTom, Apr 18 '10 at 5:31

Hi mfinni, do you have a suggestion on what to look for to prove my theory?
– Jeff Widmer, Jan 9 '10 at 2:01

You have two tiers - web and database. Did you do anything to rule out the web tier? After a period of low load, the website might need to reconnect to the DB, reload code from disk after an app pool recycle, or page things back into RAM that the OS had paged out. Do you have perfmon or anything else instrumenting either tier?
– mfinni, Jan 10 '10 at 21:57