SQL cursors are bad, evil, wicked things… a scourge on performance and a pox on the skills of any good SQL author. Anyone worth their salt will scoff indignantly at the mere sight of the CURSOR keyword in production code. It’s clear to those experts that a set-based operation will always outperform a crude cursor-based approach.

… and then, summoning all my years of SQL skill, I compute a running total as follows:
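(The query below runs against a temp table the original doesn’t show; a minimal setup might look something like this — the five rows of 100.00 are an assumption, chosen to match the results shown further down:)

```sql
-- Hypothetical setup (not shown in the original post)
CREATE TABLE #test (id int PRIMARY KEY, value decimal(10, 2));

INSERT INTO #test (id, value)
VALUES (1, 100.00), (2, 100.00), (3, 100.00), (4, 100.00), (5, 100.00);
```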

SELECT
    t1.id, SUM(t2.value) AS Total
FROM #test t1
INNER JOIN #test t2
    ON t2.id <= t1.id
GROUP BY t1.id
ORDER BY t1.id

The magic is that little <= clause creating a sliding ‘window’ and yielding results like this:

id    Total
1     100.00
2     200.00
3     300.00
4     400.00
5     500.00

Awesome, that totally works and I am a genius… but being the diligent SQL guru that I am, I take a little gander at the ol’ Query Plan.

Yeah, that plan looks amazin… wait, what?!?

And I just crapped my pants… that innocent query against 10,000 rows just blew up with… wait for it… over 50 MILLION rows!

That’s your little friend known as an Eager Spool. For each row that needs a running total, SQL Server pulls that row and every row before it… over and over. To get the first value you read one row; for the second, you read two (three reads so far); for the third, three more (six total), and so on.

SQL Server is faithfully doing exactly what that join asked for, and the row count explodes accordingly. This is really a sum of sequential integers (1 + 2 + 3 = 6)… a summation calculable by the formula ((N^2) + N) / 2. Plug in 10,000 and you get… lo and behold… 50,005,000 rows. Exactly what our eager little buddy is showing.
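A quick sanity check of that arithmetic, right in T-SQL:

```sql
-- Total rows read for a running total over N rows: ((N^2) + N) / 2
DECLARE @N bigint = 10000;
SELECT (@N * @N + @N) / 2 AS TotalRowsRead;  -- 50,005,000
```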

You can imagine that, as the number of records goes up, your query performance will go dramatically down… and your imagination would be right. This little baby goes pear-shaped real quick.

The only solution is to calculate a result, store it, and then add the next record, store that again, and so on… in other words, don’t look back at every record before you, but rather store an accumulating result.
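Ironically, that accumulator approach is exactly what the much-maligned cursor does. A sketch against the same #test table (one forward pass, one running variable, N reads instead of N²):

```sql
-- Running total via cursor: single pass, accumulate as you go
DECLARE @id int, @value decimal(10, 2), @running decimal(18, 2) = 0;
DECLARE @results TABLE (id int, Total decimal(18, 2));

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT id, value FROM #test ORDER BY id;

OPEN c;
FETCH NEXT FROM c INTO @id, @value;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @running = @running + @value;  -- don't look back; just add to the stored result
    INSERT INTO @results (id, Total) VALUES (@id, @running);
    FETCH NEXT FROM c INTO @id, @value;
END
CLOSE c;
DEALLOCATE c;

SELECT id, Total FROM @results ORDER BY id;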

How can you do that? Well, you can wait for SQL Server 2012 and the new SUM() OVER (ORDER BY) capability, which performs exactly this optimization. It’s an extension of the windowing functions added in SQL Server 2005. Prior to SQL Server 2012, however, ORDER BY in the OVER clause is only supported for the ranking functions (ROW_NUMBER, RANK, and friends), not for aggregates.
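On SQL Server 2012 and later, the whole running total collapses to this:

```sql
-- Windowed aggregate: each row extends an accumulator instead of
-- re-reading every preceding row
SELECT id,
       SUM(value) OVER (ORDER BY id
                        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Total
FROM #test
ORDER BY id;
```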

Ye Gods! How could this possibly happen?!? Thousands of years of mathematics and physics turned on its head from one simple SELECT statement!

I’ll tell you how it happens… SQL Server has to break this query into a step-by-step plan… and, in its nearly infinite wisdom, it may decide to do things in an order you did not expect.

For example, it may decide to filter on the SalesDate part first, then do all the joins, then the division, and THEN filter on the TotalQuantity clause. If that’s the plan it chooses, and you have zeroes for TotalQuantity… BLAMMO! You just got yourself a divide-by-zero before you ever reached the last filter step that would have saved you.

The ONLY way to guarantee this doesn’t happen is to use a CASE expression in your SELECT. It’s ugly, but safe.
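The original query isn’t shown here, but the pattern looks something like this (the Sales table and the TotalPrice/AvgPrice columns are illustrative; SalesDate and TotalQuantity come from the scenario above):

```sql
-- Guard the division inside the SELECT itself, so no plan reordering
-- can evaluate it against a zero before the filter runs
SELECT s.SalesDate,
       CASE WHEN s.TotalQuantity = 0
            THEN NULL  -- or 0, whichever your report expects
            ELSE s.TotalPrice / s.TotalQuantity
       END AS AvgPrice
FROM Sales s
WHERE s.SalesDate >= '2011-01-01'
  AND s.TotalQuantity > 0;
```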

Did you know that ADO.NET leaks isolation levels across pooled connections?

This is particularly bad if you’re trying to use Snapshot isolation in SQL Server 2005+.

So, let’s say you create an ADO.NET connection (say, from Entity Framework) and do an update… you’re fierce about your data integrity, so you set your isolation level to Serializable… or you use TransactionScope and don’t bother to specify an isolation level and you get the default of Serializable (ye gods!)

When you’re done, that connection goes back into the pool.

Now you open a new connection and try to read data from a snapshot database with a simple SELECT statement. The connection you just got came off the pool… with the Serializable isolation level still set!

Now, rather than the beautiful clean, non-blocking read you were hoping for, you get a blocked read… exactly the thing you hoped snapshot isolation would prevent!

The problem, essentially, is that sp_reset_connection does not reset the isolation level. You need to reset it manually, or use different connection strings for updates and reads so you don’t grab the same connections off the pool.
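The manual fix is a single statement issued right after the connection comes off the pool (shown here as the raw T-SQL; how you send it from ADO.NET — command, interceptor, etc. — is up to you):

```sql
-- sp_reset_connection won't do this for you: explicitly put the
-- pooled connection back to the default isolation level
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```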