TempDB

I did some research a while ago on tempdb optimization and answered my own question on Stack Overflow. Here is what I found out.

To optimize tempdb performance, pay attention to the physical disk configuration, the file configuration, and some settings within the database itself.

Physical disk configuration

tempdb should reside on its own dedicated physical disks. That separates tempdb I/O from the I/O of the other volumes on the SQL Server.

To move tempdb to a new disk drive, ALTER DATABASE is the key T-SQL command. Microsoft offers a good example in SQL Server 2005 Books Online: the article is ALTER DATABASE (Transact-SQL), and the specific section is 'G. Moving tempdb to a new location.'
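The pattern from that Books Online section looks roughly like the following sketch; the logical names tempdev and templog are the defaults, and the E:\SQLData\ path is a placeholder for your dedicated disk:

```sql
-- Point tempdb's files at the new dedicated disk (paths are placeholders).
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
GO
-- The files are created at the new location the next time the
-- SQL Server service restarts; the old files can then be deleted.
```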

tempdb is a very write-heavy database, so a RAID 5 array isn't the right place for it. Put tempdb on a RAID 1 or RAID 10 array instead, as those are optimized for high-write applications. If you can afford an additional RAID 1 or RAID 10 array for each physical tempdb file, you'll get even better performance.

Database files

You should have one physical data file per CPU core in the server. So if you have a dual-socket, dual-core server, you should have four physical data files for the tempdb database. When adding more files, it's important to configure them all with the same initial size and the same growth settings; that way, SQL Server will write the data across the files as evenly as possible.
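For the four-core example above, adding the extra files could look like this sketch; the file names, sizes, and the E:\SQLData\ path are illustrative only:

```sql
-- Add three more data files so tempdb has four in total, all with
-- identical initial size and growth so writes spread evenly.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\SQLData\tempdb2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'E:\SQLData\tempdb3.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'E:\SQLData\tempdb4.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```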

Database file size

The size of the tempdb database can affect the performance of a system. For example, if the size defined for tempdb is too small, part of the system-processing load may be taken up with autogrowing tempdb to the size required to support the workload every time you restart the instance of SQL Server. You can avoid this overhead by increasing the sizes of the tempdb data and log files.
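Pre-sizing the existing files is one way to avoid that autogrow overhead; a minimal sketch, with placeholder sizes you'd replace with values matched to your workload:

```sql
-- Pre-size tempdb's default files so autogrow rarely fires after a restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 5120MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1024MB, FILEGROWTH = 256MB);
```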

Determining the appropriate size for tempdb in a production environment depends on many factors, including the existing workload and the SQL Server features that are used. Microsoft recommends analyzing the existing workload in a SQL Server test environment and sizing tempdb accordingly.

tempdb sizing isn't really dependent on the size of the environment, but on how large it needs to be. If tempdb needs to be 100 GB, make it 100 GB. If it needs to be 5 GB, make it 5 GB. Starting off too small will just hurt performance in the long run. Don't forget tempdb is deleted and recreated each time the SQL Server service is restarted.
– mrdenny, Jul 10 '09 at 6:40

Sorry, I asked this question as an unregistered user and now can't mark it as answered. I am waiting to hear back from the admins to see if they can attach my account to the question again.

To give some more insight: this server will not be under huge load; it's actually a source-control backend. Buying a server plus a DAS or an iSCSI SAN would have been overkill, and the cost would have crushed the project's chances. I work for an SMB of about 100 people, so our IT budget is tight, especially right now.

@mrdenny

We did ask the question of the software vendor, who recommended this configuration. We bought it, and then when we posed the question I asked here, they said to put the logs, databases, and tempdb all on the RAID 10 partition. I am not a big DB guy (really??), but this sounded fishy, as pretty much everyone knows not to mix your LDFs and MDFs on the same spindles.

@splattne - Thanks for the insight on tempdb; this will come in handy for this and future SQL installs.

@SuperCoolMoss - I have talked to some people in my own IT friend network and they agree with you on this: OS, pagefile, and LDFs on RAID 1; tempdb and MDFs on RAID 10.

Thanks to everyone else who commented.

From my reading, these are the basic rules I think you should follow when deploying SQL Server, as far as disks are concerned. In my opinion they should be followed in this order; let me know if you agree or disagree.

How much performance do you need? It's difficult to come up with an answer based on "here's some disks". Also, SQL Server performs much better if it can keep your data in cache, so make sure you've got plenty of RAM in that thing.

I would go for OS and pagefile on your 2x74GB volume. If you get into a situation where you have heavy log traffic, you can split your RAID 10 into a pair of RAID 1s and put your logs there.

Obviously there are a lot of religious wars over the "best" layout, but you don't really have enough disks to move around. There's no point partitioning your data drives, because what you really care about is the number of spindles, and you're limited by what you can fit in the server.

You might be better off getting a PowerVault or some such, recycling the 146GB drives into it (along with as many other drives as will fit), and splitting that into data, log, and system+tempdb volumes.

I would start by collecting some I/O statistics from the old system, for example with the Performance Dashboard Reports: http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en These will give you the I/O statistics for your databases, so you can make an informed decision rather than follow 'one size fits all' generic advice. Of course, the old server may have different I/O characteristics per database than the new one, especially if a different amount of RAM is available. Still, it's better than a pure guess.
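If you'd rather query the statistics directly than install the reports, the sys.dm_io_virtual_file_stats DMV (available since SQL Server 2005) gives the same per-file picture; a small sketch:

```sql
-- Per-file read/write counts and stall times since the last restart,
-- useful for deciding which files belong on which array.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_writes DESC;
```

Note these counters reset on service restart, so collect them after the old server has been running its normal workload for a while.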

Ideally you aim to separate the higher-load I/O files onto their own devices, and you also don't want to mix the random I/O of MDF access with the sequential writes of LDF access. In your case there simply aren't many combinations; you really only have two locations (the 2 disks in RAID 1 and the 4 in RAID 10). You can sum up the I/O per database as collected from the old server and then split the file locations so the I/O is distributed between the two destinations at a ratio of roughly 1/5 on the RAID 1 and 4/5 on the RAID 10 (to account for the OS and pagefile I/O on the RAID 1). Hope you don't have any write-intensive application (high sequential LDF write load)...

That isn't really a very fast disk system. Full credit to splattne for an awesome answer, but with a disk setup this modest I'm not sure how much trouble it's worth going to.

The two-disk RAID 1 gives you little or no performance increase over a single unraided disk. The four-disk RAID 10 gives you about the speed of a two-disk RAID 0, which is fast but not stunningly so. It's not a slow server, but it's not an obvious candidate for running a really demanding SQL database.

If it were my server, I would find two more 146GB disks and make a six-disk RAID 5. OK, I'm going to get screamed at for putting tempdb on the same spindles as the data files, but the PERC controllers are very good at RAID 5, and a six-disk RAID 5 will be fast enough that you'll (probably) get better performance than having tempdb on a RAID 1 and only four disks in your RAID 5. I say "probably" because, as with everything, it depends on how you load it.