Wednesday, October 22, 2014

I just got confirmation today that I'll be speaking at SQL Saturday #350 in Winnipeg. If you're up that way, stop in and get some of that fancy learnin'. This will mark my first trip to Canada and now makes me an international speaker. So I've got that going for me, which is nice. :)

Friday, October 17, 2014

This morning as I was going through my morning reading, I came across a post that I knew would be relevant to my interests. Specifically, Aaron Bertrand wrote about a script that he had written to drop and re-create all the foreign keys in a given database. And this time, he explicitly put out an invitation for powershell solutions to the same problem (so I don't feel too bad about it this time). So here I am and here we go.
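Here's a sketch of what that looks like. The server name, database name, and output paths below are my own assumptions; adjust them for your environment.

```powershell
# Sketch: script out DROP and CREATE statements for every foreign key
# in a database. Assumes a local default instance and C:\temp paths.
Import-Module SQLPS -DisableNameChecking

$dropFile   = 'C:\temp\DropForeignKeys.sql'
$createFile = 'C:\temp\CreateForeignKeys.sql'

# Delete the output files if they're already there
foreach ($f in $dropFile, $createFile) {
    if (Test-Path $f) { Remove-Item $f }
}

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
$db = $server.Databases['AdventureWorks2012']

# One ScriptingOptions object per output script
$dropOptions = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$dropOptions.ScriptDrops   = $true
$dropOptions.FileName      = $dropFile
$dropOptions.AppendToFile  = $true
$dropOptions.ToFileOnly    = $true

$createOptions = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$createOptions.FileName      = $createFile
$createOptions.AppendToFile  = $true
$createOptions.ToFileOnly    = $true

# The actual meat: script every foreign key twice
foreach ($table in $db.Tables) {
    foreach ($fk in $table.ForeignKeys) {
        $fk.Script($dropOptions)
        $fk.Script($createOptions)
    }
}
```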

As you can see, most of the code is preparations (deleting the files if they're already there and setting up the ScriptingOptions objects). The actual meat of it is just a couple of lines long. If you run this script, you'll end up with two T-SQL files that you can run: one to drop the foreign keys and one to re-create the foreign keys. Enjoy!

Tuesday, September 30, 2014

In this morning's SQL Server Central newsletter, there was a link to an article from Aaron Bertrand about how to change whether a computed column is persisted or not. He presented a T-SQL method for doing so. Here, I'll present a powershell method for doing the exact same thing.
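A minimal sketch, assuming a local default instance and a hypothetical computed column name; swap in your own names.

```powershell
# Sketch: flip the PERSISTED flag on a computed column via SMO.
# 'localhost' and 'MyComputedColumn' are placeholders.
Import-Module SQLPS -DisableNameChecking

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
$table  = $server.Databases['AdventureWorks2012'].Tables['Product', 'Production']

$column = $table.Columns['MyComputedColumn']   # hypothetical column name
$column.IsPersisted = -not $column.IsPersisted # toggle persisted/not persisted
$column.Alter()                                # push the change to the server
```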

Seriously, that's it. If you want to be fancy (or just want a chance to review the script before executing it), use ScriptAlter() instead of Alter() and pass it a ScriptingOptions object. I leave this as an exercise for the reader. With nothing against Aaron, I find the above to be much more readable. To quote Larry Wall, though: "TMTOWTDI". So pick your favorite and go with it.

I'll be giving a presentation at SQL Saturday #332 on just this sort of thing. If you want to learn a little powershell yourself, consider attending my session!

Wednesday, July 23, 2014

I don't think it's a secret that I loves me some powershell. I think it's one of those "force multiplier" tools that let you do fairly simply what would otherwise be very tedious. Case in point: in this StackOverflow question, the poster asked if there was an easy way to change a user-defined function that was used in multiple default constraints in his database.

If you've ever used a user-defined function for a default constraint and tried to change it, you're well acquainted with the error message that essentially tells you "you can't change the definition of that function because it's being used!". So, in order to change the function, you need to drop all of the default constraints that reference the function, alter the function, and then re-create all of the previously dropped constraints. Depending on how widely used the function is, that could be a lot of work!

The below script will create two T-SQL scripts: one to drop all of the constraints that reference the function and one to add them all back. In practice, you'd run the drop script, alter your function's definition, and then run the create script. As you can see below, I used AdventureWorks2012 for my database and getdate for my function. Change those as appropriate for your situation and you should be good to go.
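Here's a sketch of that script. The server name and output paths are assumptions; adjust as needed.

```powershell
# Sketch: script out DROP and CREATE statements for every default
# constraint whose definition references a given function.
# Assumes a local default instance and C:\temp paths.
Import-Module SQLPS -DisableNameChecking

$dropFile   = 'C:\temp\DropDefaults.sql'
$createFile = 'C:\temp\CreateDefaults.sql'
foreach ($f in $dropFile, $createFile) {
    if (Test-Path $f) { Remove-Item $f }
}

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
$db = $server.Databases['AdventureWorks2012']
$functionName = 'getdate'   # the function you want to change

$dropOptions = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$dropOptions.ScriptDrops  = $true
$dropOptions.FileName     = $dropFile
$dropOptions.AppendToFile = $true
$dropOptions.ToFileOnly   = $true

$createOptions = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$createOptions.FileName     = $createFile
$createOptions.AppendToFile = $true
$createOptions.ToFileOnly   = $true

# Find every default constraint whose text mentions the function
foreach ($table in $db.Tables) {
    foreach ($column in $table.Columns) {
        $dc = $column.DefaultConstraint
        if ($dc -and $dc.Text -match $functionName) {
            $dc.Script($dropOptions)
            $dc.Script($createOptions)
        }
    }
}
```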

Wednesday, July 2, 2014

People seem to be scared of replication in general. I can't say that I blame them; its reputation for being fragile and non-intuitive is (in my opinion) well-deserved. However, there are some nice things. One of my favorite "costs almost nothing" features is tracer tokens.
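If you haven't used them, posting a tracer token and checking on it is just a couple of stored procedure calls. A quick sketch, with a hypothetical publication name (run these at the publisher, in the publication database):

```sql
-- Drop a tracer token into the replication stream
EXEC sys.sp_posttracertoken @publication = N'MyPublication';

-- Later, see the tokens that have been posted and pick a tracer_id...
EXEC sys.sp_helptracertokens @publication = N'MyPublication';

-- ...then check the latency measurements for that token
-- EXEC sys.sp_helptracertokenhistory
--     @publication = N'MyPublication',
--     @tracer_id   = <id from the previous call>;
```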

Monday, March 10, 2014

I'll be the first to admit that I don't know everything about T-SQL. But I do like to think that I've had broad enough exposure to know of the existence of certain things so that when their use is appropriate I know where in TFM to R.

Case in point: I was working with one of my developers recently who had an interesting problem. Of the columns that we were already capturing via CDC, he was only interested in a given change if any of a subset of those columns were changed. His initial approach was to use fn_cdc_has_column_changed which was very slow (and the documentation even suggests against using it for that very reason!).

By way of example, let's say that I have CDC set up against the Production.Product table in AdventureWorks2012 that captures change for all columns in the table. But, for whatever reason, we have a process that only cares about changes to the Name, StandardCost, and ListPrice columns. If any (or all) of those columns are updated, we want to know. If none of those columns are updated, we don't want our query to return a row for that change.

I came up with a solution involving fn_cdc_get_column_ordinal and fn_cdc_is_bit_set that I thought was a little clever. Plus, it marks the first time that I've ever used the ANY operator in actual code. Here goes:
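Here's a sketch of the query. The capture instance name Production_Product is what CDC generates by default for that table; adjust it (and the LSN range) if yours differs.

```sql
-- Sketch: return net changes only when at least one "interesting"
-- column was updated. Assumes the default capture instance name.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('Production_Product'),
        @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

WITH InterestingColumns AS (
    -- Table value constructor: the columns we care about
    SELECT ColumnName
    FROM (VALUES ('Name'), ('StandardCost'), ('ListPrice')) AS v(ColumnName)
),
ColumnOrdinals AS (
    -- Ordinal position of each column in the CDC change table
    SELECT ColumnName,
           sys.fn_cdc_get_column_ordinal('Production_Product', ColumnName) AS Ordinal
    FROM InterestingColumns
)
SELECT c.*
FROM cdc.fn_cdc_get_net_changes_Production_Product(@from_lsn, @to_lsn, 'all with mask') AS c
WHERE c.__$operation = 4  -- update rows only
  AND 1 = ANY (SELECT sys.fn_cdc_is_bit_set(o.Ordinal, c.__$update_mask)
               FROM ColumnOrdinals AS o);
```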

Let's break it down. First, I've got two common table expressions. The first just takes the names of the columns we're interested in and pumps them through a table value constructor to produce a table. The second CTE takes the results of the first and figures out each column's ordinal position in the CDC change table. For instance, we find that the Name column is column number 2.

Next, we use the cdc.fn_cdc_get_net_changes_... function to, well, get the net changes for that capture instance. The magic comes in the WHERE clause. The first predicate just says that we're only interested in "after update" rows. The second predicate looks at the update mask that comes along with the CDC record: for each column we care about, it returns 1 if that column was updated and 0 otherwise. That result set is fed to the ANY operator, so if any of the columns we care about was updated, the predicate evaluates as true and the row shows up in the result set.