Trace flag 834 causes SQL Server to use Microsoft Windows large-page allocations for the memory that is allocated for the buffer pool. The page size varies depending on the hardware platform, but the page size may be from 2 MB to 16 MB. Large pages are allocated at startup and are kept throughout the lifetime of the process. Trace flag 834 improves performance by increasing the efficiency of the translation look-aside buffer (TLB) in the CPU.
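Worth noting: trace flag 834 is a startup-only flag, enabled by adding -T834 to the SQL Server startup parameters rather than with DBCC TRACEON at runtime, and large-page allocation also requires the Lock Pages in Memory privilege for the service account. A quick way to check whether the flag actually took effect after a restart:

```sql
-- Check whether trace flag 834 is active instance-wide (Status = 1 means on).
-- 834 must be supplied as a startup parameter (-T834); it cannot be enabled at runtime.
DBCC TRACESTATUS (834, -1);
```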

First, a better explanation of what the TLB is, how its efficiency can suffer, and how allocating large pages helps:

In the CPU, there’s a translation table mapping pages to their locations in memory called the Translation Lookaside Buffer. If you have more pages than will fit in the table, only the most recently used addresses are kept in the CPU, and the full table lives elsewhere, in main memory. Just like data served out of SQL Server’s buffer pool is accessed much more quickly than data served off a disk, pages in the TLB are accessed much more quickly than addresses that have to be retrieved from the full table in main memory. The fewer memory pages you have, the more likely they will all fit in the buffer in the CPU, avoiding those costly trips out to the full translation table. The 834 trace flag tells SQL Server to allocate larger pages for its buffer pool (2 MB to 16 MB, depending on your hardware, rather than the standard 8 KB) so that there will be fewer of them.
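To make the arithmetic concrete (the 64 GB buffer pool here is just an assumed example size): with standard 8 KB pages, a 64 GB buffer pool needs over eight million page-table entries, while 2 MB large pages cut that to a few tens of thousands.

```sql
-- Hypothetical 64 GB buffer pool: page counts at standard vs. large page sizes
SELECT
    64 * 1024 * 1024 / 8 AS pages_at_8kb,   -- 8,388,608 entries to translate
    64 * 1024 / 2        AS pages_at_2mb;   -- 32,768 entries
```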

Sounds intriguing! Would my high performance system benefit from this? The information out there is pretty slim, but here’s what I dug up.

This Usenet posting by a SQL MVP suggests that you should only use it if your system is CPU-bound rather than IO-bound and your signal to resource wait times are high. To see if your signal to resource wait times are high, shimmy on over to the sys.dm_os_wait_stats DMV:
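In case it helps, here’s one way I’d sketch that check: compare signal wait time (time spent waiting for a CPU after the resource became available) to total wait time across all wait types. A high signal-wait percentage suggests CPU pressure rather than IO pressure.

```sql
-- Signal waits vs. resource waits: a high signal-wait percentage hints at CPU pressure
SELECT
    SUM(signal_wait_time_ms)                AS signal_waits_ms,
    SUM(wait_time_ms - signal_wait_time_ms) AS resource_waits_ms,
    100.0 * SUM(signal_wait_time_ms)
          / NULLIF(SUM(wait_time_ms), 0)    AS signal_wait_pct
FROM sys.dm_os_wait_stats;
```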

Mine are all zero, and my production cluster sees about 1500 batch requests per second during its peak use.

Monitoring for translation lookaside buffer misses is also mentioned, but I can’t find any way to do this on a Windows system. There are a few Intel Pentium 4 manuals and an O’Reilly system tuning book that mention the existence of the TLB, but there are no perfmon counters to monitor. The O’Reilly book does mention a tool for Solaris.

One of the exciting new SQL Server 2008 features for those of us deploying SQL changes to multiple servers is the new Multiple Server Query Execution. We immediately found that opening a SQL file against a server group fails if you use Windows authentication to connect to your database servers. The query window shows that all connections are disconnected, and a login failure shows up on every SQL Server in the group.

Workaround: Open a blank query window, and paste in the contents of the SQL script you want to run.

Can SSMS 2005 connect to SQL Server 2008? Yes it can, provided that you have kept your 2005 installation up to date!

We’re in the process of upgrading our environment to SQL Server 2008. Some folks have been unable to install SSMS 2008 for various exciting reasons related to an inability to apply Visual Studio 2008 SP 1 (a documented Microsoft bug). This isn’t really a problem — you can use SSMS 2005 to connect to SQL Server 2008, you just won’t get the new SSMS features.

Occasionally, though, a particularly laggardly developer who is just now trying for the first time to connect to our development 2008 instance will email me a screencap of the following dialog box and testily inform me that it is NOT TRUE that you can use 2005 to connect to 2008 (they’re fond of capslock, those developers).

The problem is on the workstation end — a failure to keep SSMS 2005 properly patched. Simply apply SQL Server 2005 SP 2 (it came out in 2007, for crying out loud!), and you’ll have no problems using SSMS 2005 to connect to SQL Server 2008.

The March issue of SQL Server magazine has a blurb about a new SQL Server stress-testing tool called SqlQueryStress. It lets you plug in a query and (optionally) parameterize it from the results of another query. For example, if your query takes a user ID as a parameter, the tool can fill that parameter with values from a second query that selects some subset of your users. Very cute.

Anyway, the fun part of this is that the network admin and I have been trying to devise ways to send lots of traffic to a recalcitrant SQL Server which seems to be suffering from a TCP chimney bug with HP’s NICs, for which the fix is to turn off a TCP chimney setting. The server randomly drops offline, so we want to reproduce the error, turn off the setting, and then verify that we can no longer break it. We’d been trying combinations of copying several large files and running queries, without success, so today I installed SqlQueryStress on fifteen servers, set it to spawn 200 threads that each ran 24,000 queries (which should have taken about an hour), and let it rip. Within three minutes, we had to shut them all down because we’d flooded the network with so much traffic that our secondary office building down the street could no longer use the internet.
