January 17, 2018

Takeaway: It’s been frowned on for a while, but SHA1 is definitely broken for security purposes.

In October of 2010, Michael Coles created a contest on his blog called “Find a Hash Collision, Win $100”. The contest was part of a discussion at the time about whether the SHA1 hash was useful for detecting changes. For what it’s worth, I still think SHA1 is valuable as a consistency check, if not for security.

In the first draft of this series, this post didn’t exist. I wanted to show a really simple example of a column switch and include it in the Blue-Green (Details) post. I planned for something simple. But I ran into some hiccups that I thought were pretty instructive, so I turned them into the post you see here.

The Plan

For this demo, I wanted to use the WideWorldImporters database. In table Warehouse.ColdRoomTemperatures I wanted to change the column

ColdRoomSensorNumber INT NOT NULL,

into

ColdRoomSensorLabel NVARCHAR(100) NOT NULL,

because maybe we want to track sensors via some serial number or other code.

The Blue-Green plan would be simple: create the new column and keep it in sync with the old one during the pre-migration phase, switch the application over to the new column, then drop the old column during the post-migration phase.

The Trouble

But nothing is ever easy. Even SQL Server Data Tools (SSDT) gives up when I ask it to make this change, showing this error dialog:

There are two things going on here (and one hidden thing):

The first two messages point out that a procedure references the column ColdRoomSensorNumber with schemabinding. It uses schemabinding because it’s a natively compiled stored procedure, and that tells me that the table Warehouse.ColdRoomTemperatures is an In-Memory table. That’s not all. I noticed another wrinkle: the procedure takes a table-valued parameter whose table type contains a column called ColdRoomSensorNumber. We’re going to have to replace that type too. Ugh. Part of me wanted to look for another example.

One last thing to worry about is an index on ColdRoomSensorNumber, which should be replaced with an index on ColdRoomSensorLabel. SSDT didn’t warn me about that one because it can handle index changes pretty gracefully.

So now my plan becomes:

Blue: the original schema

Aqua: after the pre-migration scripts are run
An extra step is required here to update the new column and keep the new and old columns in sync.

Green: after the switch, we clean up the old objects and our schema change is finished:

Without further ado, here are the scripts:

Pre-Migration (Add Green Objects)

In the following scripts, I’ve omitted the IF EXISTS checks for clarity.
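Roughly, the green objects might look like this (a sketch: I’m assuming WideWorldImporters’ column definitions, and the _v2 and constraint names are my own):

-- (1) the new column; the table already has rows, so it needs a default
ALTER TABLE Warehouse.ColdRoomTemperatures
    ADD ColdRoomSensorLabel NVARCHAR(100) NOT NULL
        CONSTRAINT DF_ColdRoomTemperatures_SensorLabel DEFAULT (N'');

-- (2) the new index (memory-optimized tables use ALTER TABLE ... ADD INDEX)
ALTER TABLE Warehouse.ColdRoomTemperatures
    ADD INDEX IX_ColdRoomTemperatures_ColdRoomSensorLabel
    NONCLUSTERED (ColdRoomSensorLabel);

-- (3) a new table type containing the new column
CREATE TYPE Website.SensorDataList_v2 AS TABLE
(
    SensorDataListID INT IDENTITY(1,1) NOT NULL,
    ColdRoomSensorLabel NVARCHAR(100) NULL,
    RecordedWhen DATETIME2(7) NULL,
    Temperature DECIMAL(18,2) NULL,
    PRIMARY KEY NONCLUSTERED (SensorDataListID)
)
WITH (MEMORY_OPTIMIZED = ON);

-- (4) a new natively compiled procedure, Website.RecordColdRoomTemperatures_v2,
--     that accepts Website.SensorDataList_v2; its body mirrors the original
--     procedure but references ColdRoomSensorLabel (omitted here for brevity)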

Pre-Migration (Populate and Keep in Sync)

Normally, I would use triggers to keep the new and old column values in sync like this, but you can’t do that with In-Memory tables. So I altered the procedure Website.RecordColdRoomTemperatures to achieve something similar. The only alteration I made is to set the ColdRoomSensorLabel value in the INSERT statement:
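The altered INSERT might look something like this (a sketch; natively compiled modules have a restricted T-SQL surface area, so treat this as the shape of the change rather than the literal code):

-- inside ALTER PROCEDURE Website.RecordColdRoomTemperatures:
INSERT Warehouse.ColdRoomTemperatures
    (ColdRoomSensorNumber, ColdRoomSensorLabel, RecordedWhen, Temperature)
SELECT sr.ColdRoomSensorNumber,
       N'HQ-' + CAST(sr.ColdRoomSensorNumber AS NVARCHAR(10)), -- keep the new column in sync
       sr.RecordedWhen,
       sr.Temperature
  FROM @SensorReadings AS sr;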

That keeps the values in sync for new rows. But now it’s time to update the values for existing rows. In my example, I imagine the sensors’ initial labels are “HQ-1”, “HQ-2”, and so on.
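A sketch of that one-time update (borrowing the same “HQ-” convention):

UPDATE Warehouse.ColdRoomTemperatures
   SET ColdRoomSensorLabel = N'HQ-' + CAST(ColdRoomSensorNumber AS NVARCHAR(10));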

Eagle-eyed readers will notice that I haven’t dealt with the history table here. If the history table is large, use batching to update it. Or better yet, turn off system versioning and then turn it back on immediately using a new, empty history table (if feasible).
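That second option looks roughly like this (a sketch; Warehouse.ColdRoomTemperatures_Archive_New is a hypothetical pre-created, empty copy of the history table):

ALTER TABLE Warehouse.ColdRoomTemperatures SET (SYSTEM_VERSIONING = OFF);

-- while versioning is off, batch-update the old history table
-- or simply leave it behind and start fresh:

ALTER TABLE Warehouse.ColdRoomTemperatures
    SET (SYSTEM_VERSIONING = ON
        (HISTORY_TABLE = Warehouse.ColdRoomTemperatures_Archive_New));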

Post-Migration

After a successful switch, the green application is only calling Website.RecordColdRoomTemperatures_v2. It’s time now to clean up. Again, remember that order matters.
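A sketch of that cleanup, in an order that respects the schemabound dependencies (the index name is an assumption):

-- the schemabound procedure goes first, since it references
-- both the old column and the old table type
DROP PROCEDURE Website.RecordColdRoomTemperatures;
DROP TYPE Website.SensorDataList;

-- then the blue index and the blue column
ALTER TABLE Warehouse.ColdRoomTemperatures
    DROP INDEX IX_ColdRoomTemperatures_ColdRoomSensorNumber;
ALTER TABLE Warehouse.ColdRoomTemperatures
    DROP COLUMN ColdRoomSensorNumber;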

Using the Blue-Green deployment method, database changes are decoupled from application changes. That leaves us with one last challenge to tackle: the schema changes have to be performed while the application is online. It’s true that you can’t always write an online script for every kind of schema change you want.

The challenge of writing online schema changes is essentially a concurrency problem and the guiding principle I follow is: Do whatever you need to do, but avoid excessive blocking.

Locks Are Hot Potatoes

You can’t hold them for long. This applies to schema changes too: logically, if you don’t hold a lock long, you can’t block activity for long. One exception is the SCH-M lock, which can participate in blocking chains even when it’s held only briefly:

SCH-M locks

There are two main kinds of SQL queries. SELECT/INSERT/UPDATE/DELETE statements are examples of Data Manipulation Language (DML). CREATE/ALTER/DROP statements are examples of Data Definition Language (DDL).

With schema changes (DDL) we have the added complexity of the SCH-M (schema modification) lock. It’s a kind of lock you don’t see with DML statements, which take and hold schema stability (SCH-S) locks on the tables they need. This can cause interesting blocking chains between the two types, where new queries can’t start until the schema change succeeds.
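Picture three sessions against a hypothetical dbo.BigTable:

-- session 1: a long-running query takes and holds a schema stability (SCH-S) lock
SELECT COUNT(*) FROM dbo.BigTable;

-- session 2: this DDL now waits behind session 1 for its SCH-M lock
ALTER TABLE dbo.BigTable ADD NewColumn INT NULL;

-- session 3: even this quick query queues up behind session 2's pending SCH-M
SELECT TOP (1) * FROM dbo.BigTable;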
Some suggestions:

Don’t rebuild indexes while changing schema

Rely on the OLTP workload, which has many short queries. In an OLTP workload, the lead blocker shouldn’t stay a lead blocker for long. Contrast that with an OLAP workload full of long-running, overlapping queries; OLAP workloads can’t tolerate table changes without delays or interruptions.

When using Enterprise Edition, use WITH (ONLINE = ON) for indexes. It takes and holds a SCH-M lock only briefly.

Changes to Big Tables

Scripts that change schema are one-time scripts. If the table has fewer than 50,000 rows or so, I write a simple script and move on.

If the table is larger, look for metadata-only changes. For example, these are metadata-only changes:
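(A few examples against a hypothetical dbo.BigTable; the NOT NULL example requires Enterprise Edition on SQL Server 2012 or later.)

-- adding a nullable column
ALTER TABLE dbo.BigTable ADD NewNullableColumn INT NULL;

-- adding a NOT NULL column with a constant default (2012+, Enterprise Edition)
ALTER TABLE dbo.BigTable
    ADD IsActive BIT NOT NULL CONSTRAINT DF_BigTable_IsActive DEFAULT (1);

-- widening a variable-length column
ALTER TABLE dbo.BigTable ALTER COLUMN Notes NVARCHAR(400) NULL;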

If a table change is not a metadata-only change, then it’s a size-of-data change, and it’s time to get creative. Look to the other posts in this series for an example of batching and an example of a column switcheroo.

Pragmatism Example

If you think “good enough” is neither, you may want to skip this section. There are some schema changes that are still very difficult or impossible to write online. With some creativity, we’ve always been able to mitigate these issues with shortcuts, and I want to give an example that I think is pretty illustrative.

When a colleague asked for a rowversion column on a humongous table, we avoided that requirement by creating a datetime column called LastModifiedDate instead. Since SQL Server 2012, adding a new column with a constant default value is a metadata-only change. So we added the column with a constant default, then changed the default to something more dynamic:
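A sketch of both steps (the table and constraint names are made up):

-- online, metadata-only: add the column with a constant default
ALTER TABLE dbo.HugeTable
    ADD LastModifiedDate DATETIME NOT NULL
        CONSTRAINT DF_HugeTable_LastModifiedDate DEFAULT ('20180101');

-- then swap the constant default for a dynamic one (affects new rows only)
ALTER TABLE dbo.HugeTable DROP CONSTRAINT DF_HugeTable_LastModifiedDate;
ALTER TABLE dbo.HugeTable
    ADD CONSTRAINT DF_HugeTable_LastModifiedDate
        DEFAULT (GETUTCDATE()) FOR LastModifiedDate;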

So now for the nitty gritty. In my last post, Blue-Green Deployment, I talked about replacing old blue things with new green things as an alternative to altering them. But the standard Blue-Green switch doesn’t work for databases, so I introduced the Blue-Aqua-Green method, which helps keep databases and other services online 24/7.

The Aqua Database

What does the Aqua database look like? It’s a smaller version of Blue-Green, but only for those database objects that are being modified. Borrowing some icons from Management Studio’s Object Explorer, here’s what one Blue-Aqua-Green migration might look like:

Start with a database in the original blue state:

After the pre-migration scripts run, the database is in the aqua state: the new green objects have been created and are ready for traffic from the green application servers. Any type of database object can use the Blue-Green method, even objects as granular as indexes or columns.

Finally when the load has switched over to the green servers and they’re nice and stable, run the post-migration steps to get to the green state.

Blue-Green for Database Objects

How is the Blue-Green method applied to each kind of database object? With care. Each kind of object has its own subtle differences.

PROCEDURES:
Procedures are very easy to Blue-Green. Brand new procedures are added during the pre-migration phase. Obsolete procedures are dropped during the post-migration phase.

If the procedure is changing but is logically the same, then it can be altered during the pre-migration phase. This is common when the only change to a procedure is a performance improvement.

But if the procedure is changing in other ways (for instance, a parameter is added or dropped, or the resultset is changing), then use the Blue-Green method to replace it: during the pre-migration phase, create a new version of the procedure. It must be named differently, and the green version of the application has to be updated to call the new procedure. The original blue version of the procedure is deleted during the post-migration phase. It’s not always elegant calling a procedure something like s_USERS_Create_v2, but it works.
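For example (everything here, from dbo.USERS to the new parameter, is made up):

-- pre-migration: create the green version alongside the blue one
CREATE PROCEDURE dbo.s_USERS_Create_v2
    @DisplayName NVARCHAR(100),
    @EmailAddress NVARCHAR(255) -- the new parameter
AS
BEGIN
    INSERT dbo.USERS (DisplayName, EmailAddress)
    VALUES (@DisplayName, @EmailAddress);
END
GO

-- post-migration: the green application no longer calls the blue version
DROP PROCEDURE dbo.s_USERS_Create;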

VIEWS:
Views are treated the same as procedures with the exception of indexed views.
That SCHEMABINDING keyword is a real thorn in the side of Blue-Green and online migrations in general. If you’re going to use indexed views, remember that you can’t change the underlying tables as easily.

INDEXES:
The creation of other indexes is nice and easy if you have Enterprise Edition, because you can use WITH (ONLINE = ON). But if you’re on Standard Edition, you’re a bit stuck. In SQL Server 2016 SP1, Microsoft brought a whole bunch of Enterprise features to Standard, but online index builds didn’t make the cut.

If necessary, the Blue-Green process works for indexes that need to be altered too. The blue index and the green index will exist at the same time during the aqua phase, but that’s usually acceptable.
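A sketch of that replacement (hypothetical names; ONLINE = ON requires Enterprise Edition):

-- pre-migration: create the green index online
CREATE INDEX IX_USERS_EmailAddress_v2
    ON dbo.USERS (EmailAddress) INCLUDE (DisplayName)
    WITH (ONLINE = ON);

-- post-migration: drop the blue index
DROP INDEX IX_USERS_EmailAddress ON dbo.USERS;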

CONSTRAINTS:
Creating constraints like CHECKs and FOREIGN KEYs can be tricky because they require size-of-data scans, which can block activity for the duration of the scan.

My preferred approach is to use the WITH NOCHECK syntax. The constraint is created and enabled, but existing data is not looked at. The constraint will be enforced for any future rows that get updated or inserted.

That seems kind of weird at first, and the constraint is marked internally as not trusted. For peace of mind, you could always run your own query against the existing data.
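For example, with a hypothetical foreign key between ORDERS and USERS:

-- add the constraint without scanning existing rows
ALTER TABLE dbo.ORDERS WITH NOCHECK
    ADD CONSTRAINT FK_ORDERS_USERS
        FOREIGN KEY (UserId) REFERENCES dbo.USERS (UserId);

-- for peace of mind, look for pre-existing violations yourself, online
SELECT o.UserId
  FROM dbo.ORDERS AS o
  LEFT JOIN dbo.USERS AS u ON u.UserId = o.UserId
 WHERE u.UserId IS NULL;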

TABLES:
The creation of tables doesn’t present any problems: it’s done in the pre-migration phase. Dropping tables is done in the post-migration phase.

What about altering tables? Does the Blue-Green method work? Replacing a table while online is hard because it involves co-ordinating changes during the aqua phase. One technique is to create a temporary table, populate it, keep it in sync, and cut over to it during the switch. It sounds difficult, and it is: it requires time, space, triggers, and an eye for detail. Some years ago, I implemented this strategy on a really complicated table and blogged about it, if you want to see what that looks like.

If this seems daunting, take heart. A lot of this work can be avoided by going more granular: When possible, Blue-Green columns instead.

COLUMNS:
New columns are created during the pre-migration phase. If the table is large, then the new columns should be nullable or have a default value. Old columns are removed during the post-migration phase.

But sometimes it’s not easy. When altering columns on a large table, it may be necessary to use the Blue-Green technique to replace a column. Then you have to use triggers and co-ordinate the changes with the application, but the process is much easier than doing it for a whole table. Test well and make sure each step is “OLTP-Friendly”. I will give an example of the Blue-Green method for a tricky column in the post “Stage and Switch”.

Persisted computed columns can be challenging: creating them on large tables can lock the table for too long. Sometimes indexed views fill the same need.

DATA:
Technically, data changes are not schema changes, but migration scripts often require data changes too, so it’s important to keep those online as well. See my next post, “Keep Changes OLTP-Friendly”.

Automation

Easy things should be easy and hard things should be possible, and this applies to writing migration scripts. Steve Jones asked me on Twitter about “some more complex idempotent code”. He would like to see an example of a migration script that is re-runnable when making schema changes. I have the benefit of some helper migration procedures we wrote at work, so a migration script I write might look something like this:
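(I can’t show our real helper here, so the call below is purely hypothetical; the procedure’s actual parameters are unknown to me.)

EXEC dbo.s_INDEX_AlterOrCreateNonClustered_Online
    @SchemaName      = N'dbo',
    @TableName       = N'USERS',
    @IndexName       = N'IX_USERS_EmailAddress',
    @KeyColumns      = N'EmailAddress',
    @IncludedColumns = N'DisplayName';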

We’ve got these helper scripts for most standard changes. Unfortunately, I can’t share the definition of s_INDEX_AlterOrCreateNonClustered_Online because it’s not open source. But if you know of any products or open source scripts that do the same job, let me know. I’d be happy to link to them here.

Where To Next?

So that’s Blue-Green, or more accurately, Blue-Aqua-Green. Decoupling database changes from application changes allows instant cut-overs. In the next post, Keep Changes OLTP-Friendly, I talk about which migration scripts are safe to run concurrently with busy OLTP traffic.

The Blue-Green technique is a really effective way to update services without requiring downtime. One of the earliest references I could find for the Blue-Green method is in a book called Continuous Delivery by Humble and Farley. Martin Fowler also gives a good overview of it at BlueGreenDeployment. Here’s a diagram of the typical blue green method (adapted from Martin Fowler).

When using the Blue-Green method, basically nothing gets changed. Instead everything gets replaced. We start by setting up a new environment – the green environment – and then cut over to it when we’re ready. Once we cut over to the new environment successfully, we’re free to remove the original blue environment. The technique is all about replacing components rather than altering components.

Before I talk about the database (or databases), notice a couple of things. We need a router: load balancers are used to distribute requests, but they can also be used to route requests, and that’s what enables the quick cut-over. The web servers or application servers have to be stateless as well.

What About The Database Switch?

The two databases in the diagram really threw me for a loop the first time I saw this. The whole thing only works if you can replace the database on a whim. I don’t know about you, but that simply doesn’t work for us. The Continuous Delivery book suggests putting the database into read-only mode, but implementing a temporary read-only mode for applications is difficult and rare (I’ve only ever heard of Stack Overflow doing something like this successfully).

But we don’t do that. We want our application to be 100% online for reads and writes. We’ve modified the Blue-Green method to work for us. Here’s how we change things:

Modified Blue-Green: The Aqua Database

Leave the database where it is and decouple the database changes from the application changes. Make database changes ahead of time so that the database can serve either blue or green servers. We call this forward-compatible database version “aqua”.

The changes that are applied ahead of time are “pre-migration” scripts. The changes we apply afterwards are “post-migration” scripts. More on those later. So now our modified Blue-Green migration looks like this:

Start with the original unchanged state of a system:

Add some new green servers:

Apply the pre-migration scripts to the database. The database is now in an “aqua” state:

Do the switch!

Apply the post-migration scripts to the database. The database is now in the new “green” state:

Then remove the unused blue servers when you’re ready:

We stopped short of replacing the entire database. That “aqua” state for the database is the Blue-Green technique applied to individual database objects. In my next post, I go into a lot more detail about this aqua state, with examples of what these kinds of changes look like.

Ease in!

It takes a long time to move to a Blue-Green process. It took us a few years. But it’s possible to chase some short-term intermediate goals which pay off early:

Start with the goal of minimizing downtime. For example, create a pre-migration folder. This folder contains migration scripts that can be run online before the maintenance window. The purpose is to reduce the amount of offline time. New objects like views or tables can be created early, new indexes too.

Process changes are often disruptive, and the move to Blue-Green is no different. It’s good then to change the process in smaller steps, each step with its own benefits.

After adding the pre-migration folder, continue adding folders. Each new folder involves a corresponding change in process. So over time, the folder structure evolves:

The original process has all changes made during an offline maintenance window. Make sure those change scripts are checked into source control and put them in a folder called offline: (offline)

Then add a pre-migration folder as described above: (pre, offline)

Next add a post-migration folder which can also be run while online: (pre, offline, post)

Drop the offline step to be fully online: (pre, post)

Safety

Automated deployments allow for more frequent deployments. Automated tools and scripts are great at taking on the burden of menial work, but they’re not so good at thinking on their feet when troubleshooting unexpected problems. That’s where safety comes in. By safety, I just mean that as many risks as possible are mitigated. For example:

Re-runnable Scripts
If things go wrong, it should be easy to get back on track. This is less of an issue if each migration script is re-runnable. By re-runnable, I just mean that the migration script can run twice without error. Get comfortable with system tables and begin using IF EXISTS everywhere:

-- not re-runnable:
CREATE INDEX IX_RUN_ONCE ON dbo.RUN_ONCE (RunOnceId);

-- re-runnable:
IF NOT EXISTS (SELECT *
                 FROM sys.indexes
                WHERE name = 'IX_RUN_MANY'
                  AND OBJECT_NAME(object_id) = 'RUN_MANY'
                  AND OBJECT_SCHEMA_NAME(object_id) = 'dbo')
BEGIN
    CREATE INDEX IX_RUN_MANY ON dbo.RUN_MANY (RunManyId);
END

Avoid Schema Drift
Avoid errors caused by schema drift by asserting the schema before a deployment. Unexpected schema definitions lead to one of the largest classes of migration script errors: errors that surprise us, like “What do you mean there’s a foreign key pointing to the table I want to drop? That didn’t happen in staging!”
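One way to assert that before deploying (a sketch; dbo.TABLE_TO_DROP stands in for the real table):

IF EXISTS (SELECT *
             FROM sys.foreign_keys
            WHERE referenced_object_id = OBJECT_ID(N'dbo.TABLE_TO_DROP'))
BEGIN
    THROW 50000, N'Schema drift: unexpected foreign keys reference dbo.TABLE_TO_DROP.', 1;
END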

Schema drift is real and almost inevitable if you don’t look for it. Tools like SQL Compare are built to help you keep an eye on what you’ve got versus what’s expected. I’m sure there are other tools that do the same job. SQL Compare is just a tool I’ve used and like.

Schema Timing
When scripts are meant to be run online, duration becomes a huge factor, so it needs to be measured.

When a large number of people contribute migration scripts, it’s important to keep an eye on how long those scripts take. We’ve set up a nightly restore and migration of a sample database to measure their durations. If any script takes a long time and deserves extra scrutiny, it’s better to find out early.
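The measurement itself can be as simple as this sketch (the wrapper procedure and logging table are hypothetical):

DECLARE @start DATETIME2(3) = SYSUTCDATETIME();

EXEC dbo.s_MIGRATION_AddSensorLabel; -- one migration script under test

INSERT dbo.MIGRATION_TIMINGS (ScriptName, DurationMs)
VALUES (N's_MIGRATION_AddSensorLabel',
        DATEDIFF(MILLISECOND, @start, SYSUTCDATETIME()));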

Measuring the duration of these migration scripts helps us determine whether they are “OLTP-Friendly” which I elaborate on in Keep Changes OLTP Friendly.

Tackle Tedious Tasks with Automation

That’s a lot of extra steps, and it sounds like a lot of extra work. It certainly is, and the key here is automation. Remember that laziness is one of the three great virtues of a programmer: the “quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs…”. That idea is still true today.

There’s increasing pressure to keep software services available all the time. But there’s also pressure to deploy improvements frequently. How many of us would love to reduce the duration of migration windows, or better yet, eliminate them entirely? It’s always been challenging to make changes safely without interrupting availability, especially database schema changes.

And when responsibilities are split between different groups – DBAs responsible for availability, developers responsible for delivering improvements – it causes some tension:

So I’m beginning a series describing the “Blue-Green” continuous delivery technique. It’s a technique that works really well where I work: it helps us manage thousands of databases on hundreds of servers with monthly software updates and zero downtime.

We actually don’t follow the book perfectly. I’ll describe what we do differently, and why.

OLTP-Friendly
With some effort and creativity, we can break our database migrations into small chunks and deploy them while the application is still live.

Many schema changes will lock tables for longer than we can tolerate. Often, a schema change needs to take a brief SCH-M lock on certain objects, so this technique works best with OLTP workloads (workloads that don’t send many long-running queries).

I explore ways to make schema changes that run concurrently with an OLTP workload. What kinds of changes are easy to deploy concurrently and what kind of changes are enemies of concurrency?

Co-ordination is Key

This series is meant to help people investing in automation and other process improvements. It takes careful co-ordination between those responsible for uptime and those responsible for delivering improvements. So in your organization, if Dev Vader and DBA Calrissian are on the same team (or even the same person) then this series is especially for you.