I hate to use the old developers' refrain, but... it worked on my db. There were several constraints which had been disabled, and they were again afterwards. I will re-check this, but I'm pretty certain it does this already.

Hmmm... clearly some brain melt going on from my end. Serves me right for checking this on my phone whilst out drinking! I'm going to amend the script to grab include columns (I thought about doing them before, as I recall, but since the DB I was working on had none I didn't bother, to save time, but it'll annoy me if I don't!) and yes, the constraints aren't coming up disabled/untrusted. Again, an omission on my part. Oh well, never said it was complete or perfect!! Neither is an insurmountable thing for a competent developer to add, but I'll re-post the script once I have the time to make my amendments.

Thought I'd update this since I have re-visited the original script quite a lot recently and made some improvements. The major change is that when I ran this on real-world databases instead of test databases (i.e. databases of many tens of GB rather than ones which were practically empty) it took a VERY long time. In fact I stopped it running in most cases. The problem was that trying to change lots of columns in a table with tens of millions of rows is pretty slow and requires acres of log space. As a result, tables are now created afresh with the suffix '_new' and data is then transferred across, the original table is dropped and the new one renamed. This process also takes care of explicitly casting data, identity inserts and so on. As a result, a database which we gave up on after some 15 or 16 hours now takes around 50 minutes to convert.
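The rebuild pattern described above looks roughly like this for a single table. This is a hypothetical sketch with made-up table and column names; the real script generates the equivalent statements from the system views:

```sql
--Illustrative example only: table and column names are placeholders.
CREATE TABLE [dbo].[Orders_new] (
    [OrderId]   INT IDENTITY(1,1) NOT NULL,
    [Reference] NVARCHAR(50) NOT NULL,   --was VARCHAR(50)
    [Notes]     NVARCHAR(MAX) NULL       --was TEXT
);

--Explicitly cast the data across, preserving identity values.
SET IDENTITY_INSERT [dbo].[Orders_new] ON;
INSERT INTO [dbo].[Orders_new] ([OrderId], [Reference], [Notes])
SELECT [OrderId], CAST([Reference] AS NVARCHAR(50)), CAST([Notes] AS NVARCHAR(MAX))
FROM [dbo].[Orders];
SET IDENTITY_INSERT [dbo].[Orders_new] OFF;

--Swap the new table in place of the old one.
DROP TABLE [dbo].[Orders];
EXEC sp_rename 'dbo.Orders_new', 'Orders';
```

Because the INSERT...SELECT into a fresh heap is minimally logged under BULK_LOGGED recovery, this avoids the huge log growth that repeated ALTER COLUMN statements cause on large tables.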

Other enhancements include:

Database recovery model is changed to BULK_LOGGED at the start and reverts to its original value afterwards.

If the database and/or log are set to no auto-growth and/or fixed size, this is changed to allow auto-growth and unlimited size, with these values reverting to their original state at the end.

Handles multiple file groups to ensure replacement tables end up on the correct filegroup (indexes etc already had this).

Adds include columns to indexes where they exist.
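For reference, the recovery-model and file-growth changes listed above boil down to plain ALTER DATABASE commands along these lines. The database and logical file names here are placeholders; the script derives the real ones from sys.databases and sys.database_files:

```sql
--Switch to BULK_LOGGED for the duration of the conversion...
ALTER DATABASE [MyDatabase] SET RECOVERY BULK_LOGGED WITH NO_WAIT;

--...and allow a fixed-size, no-growth data file to auto-grow without limit.
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 10%, MAXSIZE = UNLIMITED);

--Afterwards both settings are put back to their original values, e.g.:
ALTER DATABASE [MyDatabase] SET RECOVERY FULL WITH NO_WAIT;
```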

Anyway, here it is - hope it's useful. Will try and take account of any mistakes or omissions pointed out when I get the time.

First and foremost YOU MUST ENSURE THAT YOU BACK UP THE DATABASE FIRST AND THEN VERIFY THAT YOU CAN SUCCESSFULLY RESTORE THAT BACKUP. This phrase will be repeated several times. If you do not do this and the process goes wrong you will have NO rollback plan at all. This script involves commands that should only be used in extremis and only when you have a solid, verifiable backout position which demonstrably works.

This script is designed to allow all TEXT, NTEXT, CHAR and VARCHAR columns within a database to be changed en masse to NVARCHAR. To accomplish this in the simplest way, we must drop all foreign keys, primary keys, indexes, default constraints, unique constraints and check constraints before altering the columns and then recreating all the dropped objects. The purpose of this script is to generate, based on the system views of a database, all the necessary DROP, ALTER and CREATE statements for this operation. All keys, indexes etc. will be re-created exactly as they were in terms of column order, data sorting direction and so on, and foreign keys will also be re-created WITH NOCHECK if they were previously disabled (as will check constraints, although if check constraints were previously enabled and trusted then a statement will be generated to ensure the constraint is set that way again). Schema names are automatically prepended to the appropriate object names according to ownership. Indexes, primary keys and unique constraints will be assigned to the appropriate filegroups automatically.
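In outline, the generated script follows this drop/alter/recreate shape for each affected object. All names below are illustrative, not actual script output:

```sql
--Drop dependent objects first...
ALTER TABLE [dbo].[OrderLines] DROP CONSTRAINT [FK_OrderLines_Orders];
ALTER TABLE [dbo].[Orders] DROP CONSTRAINT [PK_Orders];
DROP INDEX [IX_Orders_Reference] ON [dbo].[Orders];

--...change the column types...
ALTER TABLE [dbo].[Orders] ALTER COLUMN [Reference] NVARCHAR(50) NOT NULL;

--...then recreate everything exactly as it was, on the correct filegroup.
ALTER TABLE [dbo].[Orders]
    ADD CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED ([OrderId]) ON [PRIMARY];
CREATE INDEX [IX_Orders_Reference] ON [dbo].[Orders] ([Reference]) ON [PRIMARY];
--A foreign key that was disabled beforehand comes back WITH NOCHECK.
ALTER TABLE [dbo].[OrderLines] WITH NOCHECK
    ADD CONSTRAINT [FK_OrderLines_Orders] FOREIGN KEY ([OrderId])
    REFERENCES [dbo].[Orders] ([OrderId]);
```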

LIMITATIONS

1) This script will only work on SQL Server 2005 and above as it uses system views in the sys schema, and not INFORMATION_SCHEMA views or the old SQL 2000-style system tables.

2) Foreign keys that are disabled are re-created as disabled. However, if after this you simply re-enable the key this does not mean the optimiser will use it, as the is_not_trusted field may not have been reset. To counter this, either drop the key and re-create it WITH CHECK, or use the ALTER TABLE <tableName> CHECK CONSTRAINT ALL command. Avoid using the DBCC CHECKCONSTRAINTS command as this has been deprecated after SQL 2005.
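To see the difference between the two re-enable forms, compare the following (table and key names are placeholders):

```sql
--Re-enables the constraint, but existing rows are not re-validated,
--so is_not_trusted may remain 1 and the optimiser cannot rely on the key:
ALTER TABLE [dbo].[OrderLines] CHECK CONSTRAINT [FK_OrderLines_Orders];

--Re-enables AND re-validates existing rows, resetting is_not_trusted to 0:
ALTER TABLE [dbo].[OrderLines] WITH CHECK CHECK CONSTRAINT ALL;

--You can confirm the state afterwards:
SELECT [name], [is_disabled], [is_not_trusted]
FROM sys.foreign_keys
WHERE [parent_object_id] = OBJECT_ID(N'dbo.OrderLines');
```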

3) This script was designed to work with a specific database and thus only encompasses those objects which would be a barrier to changing the string type columns in that database. In theory at least, this script should work on ANY database; however it may not take into account all objects which could potentially affect a column, and therefore it may require additional script blocks for those objects. Once again, it will NOT work on SQL 2000.

USAGE

1) Ensure that all users are disconnected from the database. However, do not put the database into single user mode as, if something goes wrong, it can be difficult to get it out again, and if you back up a database in single user mode, it will restore in single user mode.

2) As always, before making any structural or data changes to any database YOU MUST ENSURE THAT YOU BACK UP THE DATABASE FIRST AND THEN VERIFY THAT YOU CAN SUCCESSFULLY RESTORE THAT BACKUP. Do not assume that just because you have performed a backup you are protected; always verify that the backup will restore correctly before you proceed. Once you have a functioning backup, it is strongly recommended that you make a dry run of this procedure by trying it on a restored copy of the database before running it against the final target, especially if the final target is a production platform.

3) Ensure that you replace DBNAME_HERE in the USE statement near the top with the name of the database that you wish to alter.

4) Ensure that you output results to text and set the column width to 8192. (In SSMS go to Tools -> Options and select the Query Results node in the treeview on the left. Change the 'Default destination for results' drop-down to 'Results to text'. Now expand the 'Query Results' node and then the 'SQL Server' node below it, and finally select the 'Results To Text' node. Set the 'Maximum number of characters displayed in each column' field to 8192 (this is the largest allowed value). Now click OK.) These changes will only be applied to new query windows so, if needs be, open a new query window and then re-open this script.

5) Run this script (it should not take very long).

6) Copy the output from this script to a new query window and then parse the query to verify that the syntax is correct (this is the blue tick button on the SSMS toolbar next to the Execute button).

7) Once you are satisfied that the query will parse, you can execute it. The length of time it takes to execute will vary depending upon the size of the database and the capability of the hardware, but it may take anything up to 10 or 15 minutes or more. You can see the progress of the script in the results as the script will print out which section it is executing.

8) If you encounter any errors you may need to refer to the script author to include any extra object types which prove to be a barrier to making column alterations. DO NOT ATTEMPT TO EDIT THIS SCRIPT YOURSELF UNLESS YOU ARE COMPLETELY FAMILIAR WITH SQL SERVER SYSTEM CATALOG VIEWS AND HOW TO DROP AND CREATE OBJECTS. Also, avoid editing the output from this script.

************************************************************************************************************************/
SET NOCOUNT ON

--Declare some variables to hold the existing growth and max size values for the system files. If the values for either file
--are set to 0 (i.e. no growth) or the max size is not set to -1 (i.e. unlimited) then those values will be set arbitrarily to
--10 and -1 then reset back to what they were afterwards. Don't care if growth is in percent or pages so long as it can grow.
DECLARE @dbGrowth INT
DECLARE @logGrowth INT
DECLARE @dbMaxSize BIGINT
DECLARE @logMaxSize BIGINT
DECLARE @dbFileName SYSNAME
DECLARE @logFileName SYSNAME

--Get ID of this database. Need this as we execute the first few commands of the generated script in the context of the master db.
--Need recovery model so we can revert to it at the end.
SELECT @dbId = d.[database_id], @dbRecoveryModel = d.[recovery_model_desc] FROM sys.databases d WHERE d.[name] = DB_NAME()

--Get the max size and growth information for the first data file you can find. Doesn't matter which data file it is,
--but at least one MUST have autogrowth switched on for this script to work. This also applies to log files in that
--at least one must be able to autogrow. Original settings are restored afterwards.

--Generate statements to create new copies of tables which require alteration, copy data across, drop old tables and rename new ones.
--Altering data types on columns in tables which have lots of rows (millions of rows are not uncommon) is deeply inefficient. For the
--purposes of generating scripts which can run without interference the simplest and most efficient alternative is to make copies of
--the tables with the new data types in place, copy and transform data into them, then drop the originals and rename the new ones. It's
--still not what you'd call lightning fast but you can't have everything.

--Now we want column definitions and various permutations of the column lists. This will be used with the data extracted above to form
--CREATE TABLE statements with corrected datatypes (NVARCHAR and NCHAR).
INSERT INTO @columns ([tableId], [systemTypeId], [width], [orderId], [colName], [colDef])
SELECT tt.[tableId], c.[system_type_id], c.[max_length], c.[column_id], c.[name],

WHEN c.[system_type_id] = 231 THEN
    --Check for SYSNAME amongst NVARCHAR fields.
    --SYSNAME has a system_type_id of 231, same as NVARCHAR, as it is a synonym for a specific size of NVARCHAR field.
    --It is identified by the user_type_id of 256.
    CASE WHEN c.[user_type_id] = 256 THEN
        '[SYSNAME] '
    ELSE
        '[NVARCHAR] '
    END

WHEN c.[system_type_id] = 231 THEN
    --NVARCHAR and SYSNAME fields stay as they are. SYSNAME does not require a width declaration so check if the field is an
    --NVARCHAR in the user_type_id and set its length as needed. Note that c.[max_length] must be divided by 2 for N fields.
    -- -1 means NVARCHAR(MAX)
    CASE WHEN c.[user_type_id] = 231 THEN
        CASE WHEN c.[max_length] = -1 THEN '(MAX) '
        ELSE '(' + CAST(c.[max_length]/2 AS NVARCHAR(4)) + ') '
        END
    END

WHEN c.[system_type_id] = 239 THEN
    --NCHAR stays as it is. Note that c.[max_length] must be divided by 2 for N fields.
    '(' + CAST(c.[max_length]/2 AS NVARCHAR(4)) + ') '

WHEN c.[system_type_id] = 35 OR c.[system_type_id] = 99 THEN
    --TEXT and NTEXT become NVARCHAR(MAX)
    '(MAX) '
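If you want to see the max_length halving for yourself, a quick read-only check against sys.columns makes it clear (this query is illustrative and is not part of the generated script):

```sql
--For Unicode types, sys.columns.max_length is in BYTES, so an NVARCHAR(50)
--column reports 100 and must be halved to recover the declared length.
--A value of -1 means (MAX); TEXT (35) and NTEXT (99) have no length to halve.
SELECT c.[name],
       c.[system_type_id],
       c.[max_length],
       CASE WHEN c.[max_length] = -1 THEN -1 ELSE c.[max_length] / 2 END AS [declared_length]
FROM sys.columns c
WHERE c.[system_type_id] IN (231, 239);  --NVARCHAR/SYSNAME and NCHAR
```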

--Save column list and select list for later use (that way we don't have to go through and build it up a second time).
INSERT INTO @columnLists (tableId, columnList, selectList) VALUES(@tableCount, @columnList, @selectList)

--Generate shrink logfile commands.
--Have you created a backup of the original database? YOU MUST ENSURE THAT YOU BACK UP THE DATABASE FIRST
--AND THEN VERIFY THAT YOU CAN SUCCESSFULLY RESTORE THAT BACKUP. These commands are the kind that are used only
--in extremis and if they go wrong and you have no backup you are in trouble.
--Belt and braces approach: Use the TRUNCATEONLY option with SHRINKDATABASE then perform a SHRINKFILE on the log, just in case.
--We use TRUNCATEONLY instead of specifying a percentage of free space to be left for performance reasons.
SELECT 'PRINT ''Shrinking log file...'''
UNION ALL
SELECT 'ALTER DATABASE [' + DB_NAME() + '] SET RECOVERY SIMPLE WITH NO_WAIT;' + @crlf +