January 27, 2016

Remember that functions – including those found in default constraints – are not executed simultaneously. This can sneak up on you whenever you have multiple function calls in a single statement or multiple default constraints in a single table.

I recently found a flaky unit test that involved datetime columns. And as Martin Fowler says, “Few things are more non-deterministic than a call to the system clock.”

But the cause can be subtle. Two columns with the same default of SYSDATETIME() can have different values in the same row. To demonstrate, consider this example.
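The original demo didn't survive here, so what follows is a minimal sketch of the idea (the table and column names are my own). Each default constraint calls SYSDATETIME() separately, so the two "identical" defaults can land on different clock ticks:

```sql
-- Two columns with the same default expression; each default is evaluated
-- independently, so they can receive different values in the same row.
CREATE TABLE dbo.TimestampTest
(
    Id INT IDENTITY PRIMARY KEY,
    CreatedAt1 DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    CreatedAt2 DATETIME2 NOT NULL DEFAULT SYSDATETIME()
);

-- Insert many rows to increase the chance of straddling a clock tick
INSERT dbo.TimestampTest DEFAULT VALUES;
GO 10000

-- Look for rows where the two "identical" defaults disagree
SELECT Id, CreatedAt1, CreatedAt2
FROM dbo.TimestampTest
WHERE CreatedAt1 <> CreatedAt2;
```

On most runs at least a few rows come back, which is exactly the kind of nondeterminism that makes a unit test flaky.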

January 24, 2016

I want to use extended events to store recent deadlock graphs. And I want to create an asynchronous file target instead of using the existing system_health session. I don’t like the system_health target for a few reasons: it’s too slow to query, it rolls over too quickly, and it disappears after server restarts.

So I searched the web for a solution, and when I couldn’t find one, I wrote my own, tested it, and decided to blog about it.

Guess what? Apparently I “reinvented the wheel”. The extended events session I created is equivalent to one that Jeremiah Peschka wrote two years ago in Finding Blocked Processes and Deadlocks using SQL Server Extended Events. The embarrassing thing is that in Jeremiah’s article, he references a tool I wrote. And the first comment was written by yours truly.

So go read Jeremiah’s article; it’s really well written. What follows is my solution. The only difference is that mine focuses only on deadlocks, while Jeremiah’s covers both deadlocks and blocked processes.

Create The Session

Here’s the session that I use. It:

- has five rollover files, so a couple of server restarts don’t lose any recent deadlock graphs
- uses an asynchronous_file_target, which I prefer over the ring buffer
- cleans itself up over time, so I don’t need a maintenance job to remove ancient data
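The session definition itself is along these lines — a sketch, with the session name and file path being my assumptions, so adjust them for your server:

```sql
-- Capture deadlock graphs to rollover files that survive restarts.
CREATE EVENT SESSION capture_deadlocks ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.asynchronous_file_target
(
    SET filename = N'D:\XEvents\capture_deadlocks.xel',
        max_file_size = 10,       -- MB per file
        max_rollover_files = 5    -- old files are deleted as new ones roll over,
                                  -- so the session cleans itself up over time
)
WITH (STARTUP_STATE = ON);        -- start automatically after a server restart

ALTER EVENT SESSION capture_deadlocks ON SERVER STATE = START;
```

With STARTUP_STATE = ON and five rollover files, a couple of restarts won’t lose any recent deadlock graphs.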

January 20, 2016

The DMV sys.dm_exec_query_stats doesn’t track stats for OPEN CURSOR statements. This is a problem because the OPEN statement is the one that “runs” your query, and if you rely on these stats to monitor performance, then cursor performance is hidden from you.

Cursors have a bad reputation, probably well-deserved. When I see a cursor, I see someone trying to use SQL as a programming language. It’s not what SQL is good at and there’s often a better way.

The pragmatist in me doesn’t care too much. If a cursor is performing well and not causing too much trouble, then fixing it won’t be a priority. But my monitoring solution doesn’t show me how expensive those cursors are! I realize I have no idea what my cursors are doing or how much they cost.

Cursor Statements

Developers use a number of SQL statements when writing cursors: DECLARE, OPEN, and FETCH. Performance-wise, the DECLARE CURSOR statement takes no time. The OPEN statement runs the query and puts the results in a temporary table. And each FETCH statement reads the next row from that table.

If a cursor’s query is untuned, it’s the OPEN statement that consumes the most resources.

Example

The OPEN statement is missing from sys.dm_exec_query_stats. I want to demonstrate that. Run the following on a dev box.
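The demonstration script didn’t survive here, so this is a sketch of the kind of thing I mean (cursor and query are my own; DBCC FREEPROCCACHE is for dev boxes only):

```sql
-- Dev box only: clear cached stats so we start fresh
DBCC FREEPROCCACHE;
GO

DECLARE my_cursor CURSOR FOR
    SELECT TOP (1000) a.object_id
    FROM sys.objects a
    CROSS JOIN sys.objects b;   -- a deliberately non-trivial query

OPEN my_cursor;                 -- this is where the query actually runs

DECLARE @id INT;
FETCH NEXT FROM my_cursor INTO @id;

CLOSE my_cursor;
DEALLOCATE my_cursor;
GO

-- Look for the cursor's statements in the DMV.
-- The DECLARE shows up (with query_hash = 0x0), but the OPEN does not.
SELECT st.text, qs.query_hash, qs.total_worker_time, qs.total_logical_reads
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%my_cursor%';
```

The expensive work happens at OPEN, yet that’s exactly the statement the DMV has no row for.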

Workarounds

If your cursors are defined inside a procedure, you can inspect sys.dm_exec_procedure_stats. This is not an option when cursors are run as ad hoc SQL (outside a procedure). Remember that you’ll only get the performance numbers for the entire execution of the procedure; this view doesn’t tell you which statements inside the procedure are expensive.
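For example, something like this sketch surfaces the procedure-level totals (the procedure name is hypothetical):

```sql
-- Procedure-level totals include the cursor's OPEN cost,
-- but aggregated: no per-statement breakdown.
SELECT OBJECT_NAME(ps.object_id) AS procedure_name,
       ps.execution_count,
       ps.total_worker_time,
       ps.total_logical_reads
FROM sys.dm_exec_procedure_stats ps
WHERE ps.object_id = OBJECT_ID('dbo.MyCursorProc');
```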

There’s good news if your monitoring solution is based on extended events or SQL Trace. You’ll be able to monitor cursors correctly.

If you plan to use Query Store, the new feature in SQL Server 2016, then you’ll be able to see statistics for the OPEN query. Query Store doesn’t store statistics for the DECLARE statement, but that’s acceptable because DECLARE statements don’t use any resources.
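Assuming Query Store is enabled on the database, a query along these lines (the LIKE filter is my own shortcut) finds the OPEN statement’s stats:

```sql
-- Query Store keeps runtime stats for the cursor's OPEN query.
SELECT qt.query_sql_text,
       rs.count_executions,
       rs.avg_duration
FROM sys.query_store_query q
JOIN sys.query_store_query_text qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p        ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
WHERE qt.query_sql_text LIKE '%my_cursor%';
```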

Summary

Use the following to keep everything straight.

    Monitoring method                           DECLARE CURSOR           OPEN CURSOR       FETCH CURSOR
    ------------------------------------------  -----------------------  ----------------  ----------------
    dm_exec_query_stats                         Yes (query_hash = 0x0)   No                Yes
    dm_exec_procedure_stats (if run as sproc)   Yes (aggregated)         Yes (aggregated)  Yes (aggregated)
    SQL Trace (e.g. Profiler)                   Yes                      Yes               Yes
    SET STATISTICS IO ON                        No                       Yes               Yes
    Show Plan                                   No *                     Yes               Yes
    Query Store (2016)                          No *                     Yes               Yes

* This is acceptable because we don’t care about the performance of DECLARE statements.

January 13, 2016

Some years ago, a friend of mine told me I should check out the company he worked for. There was a position that was focused solely on SQL Server. At the time I didn’t think of myself as a database developer, I was a software developer with a knack for SQL. But I applied and it wasn’t long before I signed on.

It’s been over eight years and I still work for D2L, a vendor best known for its Learning Management System educational software.

I get to show up to a different job every day. The variety is amazing. Officially though I’ve only had positions with four distinct teams.

Job 1: In House Consultant

When I started at D2L my position was Senior Software Developer, but really I just wanted to be known as the database guy. The first couple of years were about learning SQL Server and building a reputation. A number of things helped with my reputation at D2L.

A friend of mine, Richard, left D2L, leaving a gap behind. Richard was known by many as the developer to talk to for answers. After he left, people began wondering who to talk to, and I saw that as an opportunity. I tried to fill those shoes, at least for database issues.

Firefighting. Unfortunately, putting out a fire is more visible than preventing one in the first place. But I had enough opportunities to do both.

Eventually I felt some sort of obligation to give back to the database community and so I started this blog. The act of blogging can actually help clarify fuzzy ideas.

As D2L grew, so did the separation of duties. DBAs do DBA work and developers develop. But as a vendor, developers retain a lot of the control and responsibilities typically associated with DBAs. For example, at D2L, developers are responsible for indexing a table properly. It’s part of the product. We also have a large say in deployment, performance, scalability and concurrency.

So I had several years of fantastic on-the-job training facing new scalability challenges gradually. And as time went on, I worked on more preventative and proactive efforts.

Job 2: Business Intelligence

Then I chose to work with an analytics team. Everyone was talking about “Big Data” which was the new buzzword and I was excited about the opportunity to learn how to do it right.

It was a project based in part on Kimball’s approach to data warehousing. I worked with a great team and faced a number of challenges.

My reputation as the database guy still meant that I was getting asked to tackle problems on other teams. But I didn’t see them as interruptions. Eventually those “distractions” just showed me that I missed the world of relational data. So a year later, I changed jobs again.

Job 3: Project Argon*

So I joined a team called Argon. Our job was to overhaul the way we deliver and deploy our software. It was exciting and we enjoyed a lot of successes. One friend, Scott MacLellan, writes a lot more about what we did on his own blog; see, for example, “Deploys Becoming Boring”.

For my part, I had fun writing:

- A tool that asserted schema alignment for any given database against an expected version of our product.
- A tool that could efficiently cascade deletes along defined foreign keys in batches (giving up atomicity for that privilege).

I still found myself working as an internal consultant. Still helping out with production issues and still having a blast doing it.

*Fun fact: Argon is the third project in my career with a noble gas for a codename, the other two being Neon and Xenon.

Job 4: Samurai Team

Then at the end of 2015 I changed jobs again. This is where it gets good. All that internal consulting I’ve been doing? That’s my full-time job now.

A couple months ago I joined a brand new team with an initial mandate to “make our product stable”. We’re given the autonomy to determine what that means and how best to carry it out. I’m focused on the database side of that and I’m having a great time.

It’s still mainly technical work, but anything that increases software stability is in scope.

If internal training is needed, we can provide that. That’s in scope.

If increased DevOps is needed (blurring the lines, or increasing collaboration, between devs and DBAs), we do that too.

What’s fun and what’s effective don’t often overlap, but at the moment they do.

Continued Blog Writing!

And I get to write and draw about it as I have been for over five years! Speaking of which, I got some news January 1st:

January 6, 2016

Regular Blog Posts

You’re going to see more content coming out of this site. Most of my posts are technical and they’re based on SQL lessons learned in a very busy OLTP SQL Server environment. I do my best to make each one accessible for everyone without shying away from tricky topics.

If the posts come too frequently, you’re going to be tempted to “mark all as read”. But I think most readers will easily be able to keep up with one post a week.

In 2016, you can count on a blog post every Wednesday. But how do you want to get them?

Via Twitter
If you found my site via Twitter, consider subscribing if you want to keep up with this site.

Via RSS
If you’ve already subscribed to the RSS feed, you’re going to continue to get them as you always have, but the world seems to be moving away from RSS.

Via email (new!)
And if you want to get these posts in your inbox, I’ve set up a mailing list. There’s a link at the top of my web site for that. (The mailing list is new; I set it up with tips from Kendra Little.)

Continued Illustrations

Those familiar with this site know that I like to draw. It’s fun to combine that hobby with the blog. And I’m going to continue to include illustrations when I can.

Now Using SVG
One change is that I’m going to start including the illustrations as SVG files instead of PNG. Basically, I’m switching from raster to vector illustrations. The file sizes are slightly larger, but they’re still measured in KB. If you do have trouble viewing an illustration, let me know (include your device and browser version).

Have fun zooming! If you do, you get to see lots of detail (while I get to notice the flaws).