Migrating a Database to SQL Azure using SSDT

Update July 16, 2015: With the latest update to Azure SQL Database we have closed the gap dramatically between the features supported in Azure SQL Database and SQL Server. As a result migration for many databases will require little more than moving the database. This migration cookbook has the latest and best advice on how to migrate to Azure SQL DB – we’re sure you will be impressed at how easy and straightforward this has become! The original article’s content is preserved below.

With current interest in cloud computing, migration to SQL Azure is a hot topic. Migrating a database to the cloud is a significant undertaking, with many architectural challenges to be considered. High among these is that some features of SQL Server are not, or not fully, supported in SQL Azure, requiring you to identify and resolve any dependencies your database has on these features.

If there are no such dependencies then moving the database itself is largely a database management scenario. With SSMS in SQL 2012 you can extract a dacpac (schema only) from the database and then deploy it to SQL Azure, or, if you want to deploy the data with the schema, you can use the Deploy Database to SQL Azure feature, new in SQL 2012, which under the hood exports a bacpac and then imports it to SQL Azure. You could also use SSDT to deploy just the schema but if you’re not making any changes it adds little value. One thing all these methods have in common is a validation pipeline that blocks deployment if SQL Azure compatibility issues are detected.
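The SSMS tasks mentioned here sit on top of the DAC framework, which also ships a command-line tool, SqlPackage.exe, so the same export/import round trip can be scripted. A minimal sketch (server names, database names, and credentials are placeholders):

```
:: Export the source database (schema + data) to a bacpac
SqlPackage.exe /Action:Export ^
    /SourceServerName:localhost ^
    /SourceDatabaseName:AdventureWorksLT2008 ^
    /TargetFile:AdventureWorksLT2008.bacpac

:: Import the bacpac to create the database in SQL Azure
SqlPackage.exe /Action:Import ^
    /SourceFile:AdventureWorksLT2008.bacpac ^
    /TargetServerName:myserver.database.windows.net ^
    /TargetDatabaseName:AdventureWorksLT2008 ^
    /TargetUser:myadmin /TargetPassword:********
```

As with the SSMS wizard, the export step validates the schema and will fail if SQL Azure compatibility issues are detected.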

If the database won’t deploy without change then you’re into a development scenario and SSDT has plenty to offer. You will need to identify any unsupported features the database depends on, and then adapt the design of the database (and any affected applications) accordingly. SSDT lets you work on the schema offline in a database project. With SQL Azure designated as the target platform, building the project highlights any compatibility issues, which you can then work through, correcting or removing the offending object definitions. Once the project builds you can deploy and test the database locally using Local DB, the lightweight sandbox database service that comes with SSDT, and then deploy it to SQL Azure to complete your testing in the cloud.

Let’s work through an example to see how this works and explore some interesting options. We’ll use the Adventure Works LT 2008 database, which is small enough to work through in a few minutes but has enough problems to illustrate the process nicely.

To follow along you’ll need access to SQL Server 2008 or later. You also need SSMS and SSDT from SQL Server 2012; or, if you want to skip the SSMS parts of the walkthrough, you can just download and install SSDT (which is free). Next, download and install the AdventureWorksLT2008 sample database. You’ll also need a SQL Azure account if you want to deploy the database to the cloud. You can open a free trial account here.

First let’s see whether the Adventure Works database we’ve downloaded is SQL Azure compatible. One way to do this is to try to deploy it. Open SSMS, locate the database, and try Tasks > Deploy Database to SQL Azure. This is one of several DAC-related tasks you can perform on the database. In SQL Server 2012, SSMS and SSDT support a unified DAC infrastructure, so both tools can process dacpacs created by either tool.

Go through the deployment wizard and pick a minimum-sized Web edition database on your SQL Azure account for this experiment – don’t worry about incurring costs at this point, as the operation will fail.

While you get a lengthy error list (above right), it’s just a report from the task and can’t be used interactively. If we switch over to SSDT, however, we can see how we can leverage its integrated SQL Azure validation to help us correct these errors. We’ll come back to SSMS later.

First let’s import the database into a SQL Server Database project. SSDT provides two ‘import’ entry points – you can either create a SQL Server Database Project in the Solution Explorer and then Import into it from a database or, my favorite, you can locate the database in SQL Server Object Explorer (SSOX) and choose Create New Project (below).

Either way the outcome is the same – a project that fully describes the Adventure Works database. Once you have populated the project you need to change its target platform to SQL Azure on the Project Settings tab of the Project Properties dialog.

With the target platform changed, Build the project to validate the database schema. At this point you will get the same long list of errors that you saw when attempting to deploy the database in SSMS, only now you can get started on fixing them. Many errors are systemic – SQL Azure does not support Extended Properties, for example, and as Adventure Works uses these widely this accounts for many of the errors. Double-clicking on an error in the Error List takes you to the offending code so that you can fix it. Fixing errors this way is very deliberate as you can review and consider each issue individually. While fixing these errors is an important one-time task which needs to be done thoroughly, it can also be very time consuming. Let’s look at another way to build the project that can resolve many of the issues more quickly.

Instead of using Import you can use Schema Compare to populate the project selectively from the database. Schema Compare compares any two schemas and can update one from the other. Let’s use it to compare the Adventure Works database schema to an empty project. If you just Compare and then Update Target, Schema Compare will act like Import and produce the same outcome. However, Schema Compare supports a couple of ways in which you can influence the outcome. You can review the comparison results and exclude individual actions from the update, which would let you exclude the Extended Properties from each object, for example. While this is a little easier than deleting them from script after import, it is still a lengthy exercise. More interesting is Schema Compare’s ability to exclude all objects of a specific type from the comparison, and thus the update. Let’s see how that works.

First create a new comparison using SQL > Schema Compare > New Schema Comparison. Then open the comparison Options dialog and on the Object Types tab uncheck the object types you want to exclude. This lets you exclude, for example, Extended Properties on all objects. With a single click you’ll cut out hundreds of errors! Of course it needn’t stop with Extended Properties – there are many object types that SQL Azure doesn’t support and which can be excluded by Schema Compare. You can see some of these in the Options dialog below with a full list of objects to exclude at the end of this post.

Given how many objects you need to uncheck and that you might want to repeat the process, it’s a good idea to save the comparison after configuring the object list but before setting source and target schemas. This file can then be used as a migration comparison ‘profile’. (I suggest working with a copy of the file as it’s easy to overwrite it unintentionally). Now let’s see how we can use our new file.

First create an empty database project. Then, using a copy of the .scmp file we just created, select the Adventure Works database as the source schema and the new project as the target, and Compare the two. The comparison will show a series of Add actions that will be applied if you press Update. (If you select some of the tables in the results grid – without unchecking the action – you can see in the script display at the bottom that there are no Extended Properties). When you’re ready press Update and the project will be populated without any of the unsupported objects.

If you now set the project’s Target Platform to SQL Azure and build you will see many fewer errors. While filtering out unsupported objects cut out most of the problems, there are still some unsupported properties on a few objects that you will have to address. You will need to fix the following (while the quick fixes below ‘solve’ the problems, the impact of each change would need to be considered and accommodated in your overall solution long term):

Delete the User [NT AUTHORITY\NETWORK SERVICE] which cannot be used in SQL Azure.

Remove all uses of ROWGUIDCOL on UniqueIdentifier columns. (To fix, go to the first occurrence, select the string and use Find and Replace (Ctrl + H); press Replace All to fix all occurrences in the project).

Remove all uses of NOT FOR REPLICATION.

Remove the view [vProductModelCatalogDescription] from the build as it uses XQuery. (Change the Build Action property on the file from ‘Build’ to ‘None’ then Build the project again. Notice how this preserves the definition to be fixed later.)

Modify the datatype of column ProductModel.CatalogDescription to be simply XML, removing the reference to the XML Schema Collection.
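Applied to a live copy of the database rather than to the project scripts, the equivalent changes would look roughly like the following T-SQL (the exact column definitions and the SalesLT schema name are taken from AdventureWorksLT and shown for illustration; in the project you make the same edits directly in the CREATE TABLE scripts):

```sql
-- Windows accounts cannot be used in SQL Azure:
DROP USER [NT AUTHORITY\NETWORK SERVICE];

-- ROWGUIDCOL and NOT FOR REPLICATION simply come off the column definitions,
-- for example:
--   before: [rowguid] UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
--   after:  [rowguid] UNIQUEIDENTIFIER NOT NULL

-- Untype the XML column, removing the XML Schema Collection reference:
ALTER TABLE [SalesLT].[ProductModel]
    ALTER COLUMN [CatalogDescription] XML NULL;
```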

At this point you need to build the project again. This will expose one remaining error caused by the table dbo.BuildVersion not having a clustered index, as required by SQL Azure. If you mark the SystemInformationID column as the primary key this will be fixed. Build again and you should get no errors. Inasmuch as we have ‘fixed’ the problems, our schema conversion is complete (you would clearly need to address the storage and handling of catalog description data holistically later). If you want to see a summary of all that has changed you can open a new Schema Compare and compare the project with your original database.
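Marking the column as the primary key satisfies the requirement because a primary key constraint creates a clustered index by default. Expressed as T-SQL against a live copy, the fix would look like this (the constraint name is illustrative):

```sql
ALTER TABLE [dbo].[BuildVersion]
    ADD CONSTRAINT [PK_BuildVersion]
    PRIMARY KEY CLUSTERED ([SystemInformationID]);
```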

You could press F5 at this point to deploy and test the schema locally using Local DB, or Publish it to SQL Azure to test. If you don’t want to deploy the data from the original database, your schema migration task is effectively complete.

An alternative at this point is to update your original database to make it SQL Azure compliant and then deploy it to the cloud with its data using SSMS and the bacpac mechanism that we tried earlier. Let’s step through that approach as it’s rather more interesting and brings us full circle.

First, reset the project’s Target Platform to its original value (the version of SQL Server hosting your Adventure Works database). This ensures the database will be validated correctly and can be deployed to that target, and also illustrates how switching the target platform can be used to ensure a design is platform neutral.

Next, Publish your changed schema back to the Adventure Works database. Publishing will update your original database (dropping several objects from the schema and losing some data) so you might want to do this to a copy of the database. To make such sweeping changes to a database you will need to override two of the default deployment/update options:

You’ll find these options by pressing the Advanced… button on the Publish dialog (below).

If you open the Data Tools Operations window after publishing you can look at the Preview, which summarizes the changes, and the script actually used. (Publish also lets you generate a script that you might hand-off to a DBA if you don’t have permission to update the database directly.)

Once the Adventure Works database is updated you should be able to deploy it to SQL Azure with its data. Go back to SSMS and try the Deploy Database to SQL Azure task again on the updated database, and it should now work like a charm!

Summary

Migrating a database to the cloud is a significant architectural change, often requiring changes to the database schema; SSDT and SSMS can provide valuable support during the process.

SSDT lets you bring a database schema offline into a project where you can change it to be SQL Azure compatible.

Schema Compare can be used to filter out incompatible objects when creating the project.

Setting the target platform to SQL Azure causes SSDT to report compatibility errors which can be fixed and tested locally using Local DB.

SSDT can deploy the schema to the cloud or update the database so that it can be deployed with its data using SSMS.

Join the conversation

I am using AdventureWorks2012 as the sample database. I have followed all the steps mentioned in the article and I am getting the following errors when I try to publish the schema changes from the Azure-compatible database back to the 2012 source database. I am using the VS 11 developer preview and a SQL 2012 Evaluation copy.

The object [PRIMARY] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.

The object [AdventureWorks2012_Copy_Data] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.

The object [AdventureWorks2012_Copy_Log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.

The type for column Resume in table [HumanResources].[JobCandidate] is currently XML(CONTENT [HumanResources].[HRResumeSchemaCollection]) NULL but is being changed to NVARCHAR (MAX) NULL. Data loss could occur.

The type for column AdditionalContactInfo in table [Person].[Person] is currently XML(CONTENT [Person].[AdditionalContactInfoSchemaCollection]) NULL but is being changed to NVARCHAR (MAX) NULL. Data loss could occur.

The type for column Demographics in table [Person].[Person] is currently XML(CONTENT [Person].[IndividualSurveySchemaCollection]) NULL but is being changed to NVARCHAR (MAX) NULL. Data loss could occur.

The type for column CatalogDescription in table [Production].[ProductModel] is currently XML(CONTENT [Production].[ProductDescriptionSchemaCollection]) NULL but is being changed to NVARCHAR (MAX) NULL. Data loss could occur.

The type for column Instructions in table [Production].[ProductModel] is currently XML(CONTENT [Production].[ManuInstructionsSchemaCollection]) NULL but is being changed to NVARCHAR (MAX) NULL. Data loss could occur.

The type for column Demographics in table [Sales].[Store] is currently XML(CONTENT [Sales].[StoreSurveySchemaCollection]) NULL but is being changed to NVARCHAR (MAX) NULL. Data loss could occur.

The object [dbo].[uspSearchCandidateResumes] must be changed in order for deployment to succeed; however, [dbo].[uspSearchCandidateResumes] only exists in the target database and cannot be changed without introducing unintended side-effects.

Only the last item in your list is an error; the others are warnings. The first three are references to database files that will not be dropped, and the remaining items (all but the last) are warnings about data loss from type conversion. Changing the XML content type to nvarchar(max) is only a temporary 'fix' – if doing this for real you would need to take steps ahead of time to preserve this data – I mention this in my post.

The final issue (with uspSearchCandidateResumes) looks like the only one that has stopped deployment. I suggest that you take this one procedure out of the build (set the Build Action property on the file in the project to "None") and try again. This will cause the procedure to be dropped from the database. In my post I described doing this for a view which uses XQuery, which I suspect may be the case here also. Again, in a real-world case you'd need to examine procedures like this carefully to determine how you will handle their function/purpose in your final database.

Hello – I need to create bacpac files to restore to test environments and to Azure for dev/test needs. However, I want to use extended properties on my actual database to store information that is used to generate my data dictionary.

Is there a way to keep the extended properties on my Db, but ignore them when creating the bacpac?

Thanks for posting, but I really have not got the time to spend implementing something this complex just to move a database.

Microsoft ought to jolly well provide the standard mechanism to move an on-premise database into SQL Azure with a single click – we know it, you know it, and they know it!

Until they do, my company, our clients, their clients and pretty much everyone else I know will stay either on-premise or create our own virtual machines running SQL Server at Amazon or Google or Rackspace.

With the latest update to Azure SQL Database we have closed the gap dramatically between the features supported in Azure SQL Database and SQL Server. As a result migration for many databases will require little more than moving the database. As a great example, the AdventureWorksLT2012 database that I used in the post above now migrates to Azure with NO CHANGES required once you remove the user [NT AUTHORITY\NETWORK SERVICE] (which, as it refers to a built-in Windows account, is not applicable/supportable in Azure).

You can download the new versions of SSMS and SSDT, which provide built-in support for this upgrade. If no changes are required you can use SSMS to deploy a SQL Server database to Azure directly, or to export a bacpac and then import it to create a database in Azure. The SQL Server tooling in VS (aka SSDT) and the techniques described in the post above still apply, but only if you need to make changes.

You can find out more about the Azure SQL Database update by following the links below:
