Another excellent post over on Barclay Hill’s blog. This time he’s showing how to use pre- and post-deployment scripts to manage data when a deployment might result in data loss. I am jazzed for part 2 of this series to see what I should have been doing all along.

Barclay Hill, Program Manager for Visual Studio Team System Database Edition at Microsoft, has just launched a new blog that I’m excited about. He’s going to be blogging on best practices for VSTS:DB. His initial post outlines the topics he hopes to address, and it’s an impressive list. I responded immediately with an offer to help, both because I’m using the tool and struggling a bit to get it to do everything we need, and because I really like all that VSTS:DB has done for me and my team already. Anyone else interested in pitching in should go over there and get involved.

Here’s what I sent as an initial message:

I saw your blog post and request for interest and participation and decided to pitch in. The company where I work has been using VSTS:DB through several iterations for about two years now. We have several automated mechanisms for deployments and some standards in place for how best to use & configure the tool. But we need more, so I’m pretty excited about your project. The three most interesting topics are the three that we’ve wrestled with the most: Build Automation and Build Management, Deployment Automation and Continuous Integration, and Targeting Multiple Database Environments. The biggest problem we face today is with incremental deployments where we want to retain the data. Clearly builds of this type require manual intervention, but unfortunately, at this point, with VSTS:DB, we can’t then automate the deployment but must deploy manually.

Anyway, I look forward to your future posts and please feel free to bounce things off of me. I can’t guarantee I can help, but I’ll sure try.

This is great news. I’ve been working with the CTPs for several months now, telling the other DBAs on my team that they had to wait until it was completely ready for release. It’s been a long wait, but I’m sure it’s worth it.

My congratulations to the team. I met several of you at the PASS Summit. I really appreciate the work you’ve put into this great tool. It really makes a difference in how we develop and deploy databases. The changes in the GDR are making a great utility even better. Thanks, Mr. Drapers. Thanks also to Jamie Laflen, especially for helping validate some of the ideas I presented at PASS. Thanks to all the rest of the team, whose names I don’t recall, especially the guy who took so long to explain to me what the “literal” was for in the reference page. Excellent work everyone. You guys should be proud.

Second round of testing. Instead of associating with a project, I tried creating a reference to a .dbschema file. Same error. This time, I’m going to clear out everything. I tried creating it initially on top of the code from CTP 17.

So, another chance to try out the reverse engineer process of “Import Database Schema.” Works great. New AdventureWorks database inside Visual Studio with a tested deployment faster than you can spit.

Created a new server project. Added a login, just to give it something to do. It deployed fine.

On to the compound project. Let’s see what happens. Just to see, I did a build and deploy before I added references or objects or anything. It worked great. Whatever that means. I’m taking it a step at a time, to try to see where it breaks, if it breaks. So, I’ve added just the AdventureWorks database project (not the .dbschema file). Trying a build & deploy now. Build went through. No issues. Deploying… Damn. I’m hitting a collation error. However, that’s good because I never got that far before… It worked! On to adding the server as a second reference and then deploying that… Whoops, there it is. Failure. Remove, reset, and redeploy without the Server project.

Time for some more experimentation…

It looks like I was getting an error in the server project. I rearranged a few things and now I have the whole thing working again.

Using the Team Edition of Visual Studio for Databases (VSDB) enables you to build a database out of source control. This means you can treat your database like code, as much as you can. The fact is, because of persistence, a database just isn’t code. Because you have to keep all the data previously entered, when you deploy a new version of your database to production, you don’t simply get to replace the database like you do with the code. You have to run scripts that alter what can be altered but preserve the existing data everywhere. That’s just how it is in production. You have to do the work necessary to protect your data.
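The pre- and post-deployment script approach from Barclay’s post is one way to do that data protection. A rough sketch of the pattern, with a hypothetical dbo.Address table and columns of my own invention, might look like this (the deployment script runs pre-deployment, the generated schema changes, and post-deployment in one session, so a temp table can carry the data across):

```sql
-- Pre-deployment script (hypothetical table): stash the rows the
-- schema change would otherwise destroy.
IF OBJECT_ID('tempdb..#PreserveAddress') IS NOT NULL
    DROP TABLE #PreserveAddress;

SELECT AddressID, City, PostalCode
INTO #PreserveAddress
FROM dbo.Address;

-- ...the generated deployment script rebuilds dbo.Address here...

-- Post-deployment script: put the data back into the new structure.
INSERT INTO dbo.Address (AddressID, City, PostalCode)
SELECT AddressID, City, PostalCode
FROM #PreserveAddress;

DROP TABLE #PreserveAddress;
```

This is only a sketch of the idea, not the exact pattern from his post; part 2 of his series should show the real mechanics.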

Not so in Development. Development (and QA, Test, Financial Test, Performance Test) is the place where you can, for a time, treat your database like code. Do a full tear down and replace. I’m pushing this because I’m working with two extremes these days. Most of our new projects have been using tear down and rebuild from day one. Because of this, we know everything there is to know about the objects inside the database. There are no integrity issues, old code, missing keys, etc. We know this because every time we deploy, we get to rebuild the database. I also have a few older databases. These have not been rebuilt from scratch, ever. Ever.

I’ve just started going through and rebuilding one of them for the first time. There are broken stored procedures everywhere. These are procedures that got deployed to production years ago. Over time the structures they referenced were changed or even dropped entirely, yet the procedures were left in place. Because they had already been stored by SQL Server and they’re not being referenced by application code (replaced completely by a deployment, after all), they’re just sitting out there, waiting for the unwary. There are missing constraints on tables, found during data loads. There are all kinds of problems, usually identified quickly and easily when the database is rebuilt with each deployment.
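If you’re on SQL Server 2008, one quick way to flush out at least some of those broken procedures before attempting a rebuild is to ask the dependency catalog which modules reference objects that no longer resolve (a sketch, not part of the VSTS:DB toolset, and it won’t catch everything):

```sql
-- Modules whose referenced entities could not be bound to an existing
-- object; a NULL referenced_id usually means the target was dropped
-- or renamed out from under the module.
SELECT OBJECT_NAME(d.referencing_id) AS referencing_module,
       d.referenced_entity_name      AS unresolved_reference
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id IS NULL
  AND d.is_ambiguous = 0
ORDER BY referencing_module;
```

It’s a rough net, but it surfaces a lot of the “waiting for the unwary” procedures without executing any of them.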

Following on to my adventures in creating multi-environment deployment processes with the new version of Data Dude (DBPro, VSTS Database Edition, whatever we’re calling it this week).

I’ve created a new configuration, copying all the settings from the Debug configuration. I’m adding a new Deployment configuration file and making a change. The deploy worked. Woo hoo! Now to get really funky. I’ll create a new “Sql command variables file:” and add a variable for setting the data directory. Now to deploy and… Urk! Failed. It’s not recognizing my variable.
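For anyone following along, the variables in that file end up as ordinary sqlcmd variables in the generated deployment script. A hypothetical DataDir variable (the name is mine, not one of the tool’s defaults) would surface in the script roughly like this:

```sql
-- The variable is defined once at the top of the generated script...
:setvar DataDir "D:\SQLData"

-- ...and referenced with $(DataDir) wherever a path is needed,
-- e.g. adding a file to a filegroup. $(DatabaseName) is one of the
-- built-in deployment variables.
ALTER DATABASE [$(DatabaseName)]
ADD FILE (
    NAME = N'Archive2008',
    FILENAME = N'$(DataDir)\Archive2008.ndf'
) TO FILEGROUP [Archive];
```

The filegroup and file names here are invented for illustration; the point is just the :setvar definition and the $(…) substitution.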

Now I’m stuck. I’ve checked the syntax. It’s right. I double checked it all and reran deploy. Now it works… Color me confused. Whatever. Successful test. Time to create another configuration, simulating a QA server… Got that working too. I don’t know what went south yesterday, but it’s all working now. Now to quantify it all and begin to create some standards.

I’m also going to check this all into source control to see how the connection information is stored. In 2005/2008 (not GDR), the connection info was stored in the .user file. We finally started checking our .user files in with the rest of the project, to the abject horror of every MS developer who heard about it. More on this tomorrow.

I’ve mentioned it before and I think it’s worth mentioning again, Microsoft Connect really works. I’ve seen bugs and enhancements listed there receive enough attention that they were in the next release or service pack of the product in question.

That brings me to DBPro. We use DBPro for all our database development. It’s a great tool. However, it’s still a bit too geared toward the individual user and not the team, despite its moniker (Visual Studio Team Edition for Database Professionals). One thing that really does bother me is how it stores some settings, such as Target Connection, in the .user file within a project. This means that each individual sets the connection for the project each time they check it out after another user has had it. Another option is to actually check in the .user file with the project, checking it out and overwriting your own .user file each time you need to work on the project. This apparently gives the MS guys fits, but it’s what we’re doing and it works. However, I’d like to see these project-level settings be stored with the project. So I’ve put forward a Connect Enhancement Request. Here’s the link. Please, go and vote for it, and rate it high. The more votes we can get on this, the more likely it’ll be in the next SP or the next release. Thanks.