John Sansom (Blog | Twitter) is a Microsoft Certified Master (MCM) of SQL Server and publisher of the free SQL community ebook DBA JumpStart, an inspiring collection of advice for Data Professionals, written by 20 SQL Server experts. Awarded the Microsoft Community Contributor (MCC) award, John is a prolific blogger and can be found regularly writing about SQL Server and Professional Development over at www.johnsansom.com.

This post is part of the T-SQL Tuesday Meme, this month hosted by Jason Brimhall. His chosen theme: come up with a story where you “stood firm” and relate it to one of these words: resolve, resolution, or resolute.

Regular readers will know and tell you that I’m resolute in my philosophy that the Best DBAs Automate Everything. So when I was presented with an opportunity to dish out a dose of unadulterated #SQLWinning recently, the temptation was simply too great for me to resist.

We use SQL Server Replication quite a bit in the current shop. For this one particular system a waterfall replication topology was in play. That’s a Publisher replicating to a Subscriber, which then re-publishes the same Articles to a number of Subscribers. The very first Publisher is the recipient of a rather large ETL process and the data is then replicated throughout the waterfall replication topology.

A Simplified Waterfall (Re-Published) Replication Topology

The system had been working away happily for years until one not so fine day, when all manner of replication-related performance problems began to occur. Just for kicks, some of these included excessive Transaction Log growth, Log Reader Agent issues (scans taking hours), distribution database bloat, distribution database clean-up job issues, subscription expiration, and the list goes on. The ins and outs of the various issues and their troubleshooting are beyond the scope of a T-SQL Tuesday post, but suffice to say many a DBA hour was lost in the battle. The point is that the current system process was grinding to a halt due to the huge volume of replicated commands.

Taking on a Challenge

Something obviously needed to be done to resolve the situation. Enter the politicians. One of the really interesting aspects of working for a larger organisation is that sometimes it can be quite fun trying to identify who actually owns a given system, particularly one that is a number of years old and straddles multiple business domains.

Yes, you guessed it: nobody stepped forward to own the problem or, more importantly, the development of a solution for it. Meanwhile, each time the current system borked, DBA resource was needed to get things moving again.

Personally I enjoy taking on a tough problem; it’s good for the soul (not to mention your future bank balance), so I pitched management on the idea that I would like to spend some time developing a prototype solution to our woes. In their eyes I was offering the prospect of a resolution to a high-profile and sensitive issue, for the trade-off of a few hours. I was putting myself forward and offering to take ownership of the problem. Of course I got the go-ahead!

What’s The Problem You’re Trying to Solve?

Before embarking on any noble quest it’s important to understand why we’re doing it, what’s the point or, in business speak, What is the business problem that we are trying to solve? As it turns out, the requirements for this system were quite simply to get some data from the source location and to make it available at a number of target locations, with no interruption to service.

It’s easy to see why replication was the weapon of choice for the original solution considering the requirements, but given the growth of the data it was no longer the right tool for the job.

Using the Right Tool for the Job

“Tools, some of them useful”

From experience I knew that this is exactly the sort of thing that SSIS is perfect for. A classic ETL solution mixed in with a little DBA abstraction magic to take care of the minimal interruption to service requirement (the inspiration for which was taken from a SQLCAT article that I had previously used to solve a different problem). The solution also had to be non-invasive, that is, to plug into the existing environment with no changes required to the application tier.

There was scope to really go to town on the solution design, leveraging things like SQL Partitioning or the use of schemas rather than separate databases, but this needed to be a version-one solution. Quick and effective. Something that could be developed and shipped in the shortest possible time and get the job done.

Digression: Solution Overview

I’ve brushed over a lot of details here because this post is not about the solution design; however, I wanted to give you at least an overview.

SSIS Solution Server Architecture

An SSIS package was built and deployed to a Job Server. The package transfers the data of interest from the source location directly to the target locations.

Each target location hosts 3 databases relevant to the solution. A “Shell” database (that’s the original database and would have been the Subscriber db in the Replication topology) and two “Base” databases (Base A and Base B).

The base databases contain the actual table data. The Shell database contains only Synonyms that point to either of the two base databases, but only one at a given point in time. Which database currently holds the “live” role is recorded in a table within the Shell database.
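To make the synonym-switching idea concrete, here is a minimal sketch of what the Shell database might contain. All object names here (BaseA, BaseB, dbo.Orders, dbo.LiveRole) are hypothetical illustrations, not the objects from the actual solution:

```sql
-- In the Shell database: a synonym redirects queries to the live Base db.
-- The application queries ShellDb.dbo.Orders and never knows the difference.
CREATE SYNONYM dbo.Orders FOR BaseA.dbo.Orders;

-- Record which Base database currently holds the "live" role.
CREATE TABLE dbo.LiveRole (BaseDb sysname NOT NULL);
INSERT INTO dbo.LiveRole (BaseDb) VALUES (N'BaseA');

-- After the ETL has loaded fresh data into BaseB, swap the roles.
-- Synonym DDL is cheap, so the switch is near-instant.
BEGIN TRANSACTION;
    DROP SYNONYM dbo.Orders;
    CREATE SYNONYM dbo.Orders FOR BaseB.dbo.Orders;
    UPDATE dbo.LiveRole SET BaseDb = N'BaseB';
COMMIT TRANSACTION;
```

The appeal of this design is that the load happens entirely in the offline Base database, and going live is just a metadata change: drop and recreate one synonym per table, inside a transaction.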

High Level Solution Overview

TIP: If you want to create an SSIS ETL solution that is fast, then The Data Loading Performance Guide is essential reading for you. It contains instructions on how to take advantage of Bulk Load methods and how to engineer your solution to leverage Minimal Logging with these methods. Implementing these techniques can significantly improve the overall execution time of your SSIS solutions.
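As a rough illustration of the minimal-logging prerequisites the guide describes (the database, table, and source names below are assumptions for the sake of the example, not the real solution):

```sql
-- Minimal logging requires the SIMPLE or BULK_LOGGED recovery model.
ALTER DATABASE BaseB SET RECOVERY SIMPLE;

-- A bulk insert into a heap taken with a table lock qualifies for
-- minimal logging, so the transaction log records allocations rather
-- than every row.
INSERT INTO BaseB.dbo.OrdersStage WITH (TABLOCK)
SELECT OrderId, CustomerId, Amount
FROM SourceDb.dbo.Orders;
```

In SSIS itself, the equivalent is the OLE DB Destination in fast-load mode with the Table lock option checked; the same recovery-model and target-table rules apply.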

In this case, I was able to use SSIS to transfer all required data from source to target location in less than 5 minutes, including making the database data live. When you consider that the same data would take hours to transfer through the replication topology previously used, this was a significant improvement.

Big Picture Thinking

This year I encourage you to reach out and take ownership of more problems in your shop.

I’ll often hear mediocre Data Professionals talking about how “it’s not their job”, “team super dev should deal with that” or “I don’t have the time”. That kind of thinking will get you nowhere fast. If you take a good, solid solution proposal to your manager, they will make the time for you. Especially come annual review.

What issues or problems are being put up with in your environments instead of taken care of? Resolve now to take ownership of more problems this year.