Blog

“Enterprise Edition was installed for SQL Server, but it turns out that we only have a license for Standard Edition. Is that an easy change?”

I see this question a lot. The answer is well documented by Microsoft, but it seems to be really hard for folks to find! If you’d like to go straight to the source, everything I’m going to highlight here comes from the MSDN page Supported Version and Edition Upgrades.

Protip: The Edition Upgrade GUI lets you see and copy the current license key for that instance of SQL Server. (No, I’m not showing a screenshot with my key in it!)

You can also do this from the command line using the SKUUPGRADE parameter (and back in SQL Server 2005 and prior, that was your only option).

Changing the edition causes some downtime, but it’s a simple procedure. The fact that it’s relatively simple isn’t an excuse to skip testing: always run through this outside of production first so you know exactly what to expect. And always, always, always take your backups and make sure they’re on separate storage before you start. Document everything you need to know about your configuration just in case something goes wrong and you’ve got to reinstall.

It’s pretty simple. Except when it’s not supported.

What Goes Up Does Not Necessarily Come Down

The way I usually remember the rules is that you can typically change from a cheaper edition to a more expensive edition. But you can’t necessarily go from a more expensive edition to a cheaper one.

So if you have SQL Server Enterprise Edition and you want to change to Standard Edition, a simple SKUUPGRADE isn’t going to work for you. (If you have the “Evaluation” Enterprise Edition, you’re probably OK though!) Check the chart for what you want to do to make sure.

Clusters are Special. (Not in a good way in this case.)

A lot of the confusion is around SQL Servers installed on failover clusters. You have to scroll waaaaay down on that page to see this:

The fine print on upgrading failover clusters

Ouch! Changing the edition of a clustered SQL Server is not a simple thing.

What if I Don’t Know What Edition I Need?

Typically the answer here is to use Evaluation Edition. But if you’re running a failover cluster, be careful– as you can see above, you can’t easily change from Enterprise Evaluation to Standard Edition.

Will Changing the Edition Reset My Service Packs?

I believe this used to be true on SQL Server 2005– if you changed editions, you’d have to reapply service packs and cumulative updates afterward.

I just ran a test on SQL Server 2012 and upgraded from Developer Edition to Enterprise Edition on a test instance, and I still had version 11.0.3431 (Service Pack 1, Cumulative Update 10) after the upgrade.
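If you want to verify the edition and patch level for yourself before and after a change, a quick way is to query SERVERPROPERTY (the values in the comments are just examples of what you might see):

```sql
-- Check the edition and patch level of the instance you're connected to.
SELECT SERVERPROPERTY('Edition')        AS edition,          -- e.g. Enterprise Edition (64-bit)
       SERVERPROPERTY('ProductVersion') AS product_version,  -- e.g. 11.0.3431
       SERVERPROPERTY('ProductLevel')   AS product_level;    -- e.g. SP1
```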

But like I said, test this out with your version, even if it’s just in a quick virtual environment that you don’t keep after the change has completed successfully. There are other real perks to doing this as well, such as making sure that your license keys really work and are for the edition you think they are!

What If My Change Isn’t Supported By the GUI / Upgrade Installer?

In this case, you need to uninstall and reinstall SQL Server. It’s going to take longer and cause more downtime. You’ll have to reconfigure everything and reinstall service packs. (It’s not actually that much extra work, because you were going to take those backups and document all the special configuration just in case, right?)

What if I Can’t Take Much Downtime?

If downtime is a real issue, don’t make this change in place. Set up a new SQL instance, test the heck out of it, get it into monitoring and plan a way to migrate to it with limited downtime using something like Database Mirroring. (If you’re considering this, read that link– it mentions that mixing editions between database mirroring partners isn’t supported by Microsoft. You can’t set it up through the GUI, you have to use TSQL. If that makes you paranoid, you could do the migration with log shipping.)
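For the mirroring route, the edition-mixing caveat means SSMS will refuse the setup, but the T-SQL works. A rough sketch, assuming the endpoints already exist and the database plus a log backup have been restored WITH NORECOVERY on the mirror (server and database names here are made up):

```sql
-- On the mirror server: point the restored database at the principal.
ALTER DATABASE MyAppDb
    SET PARTNER = 'TCP://principal.example.com:5022';

-- On the principal server: complete the pairing.
ALTER DATABASE MyAppDb
    SET PARTNER = 'TCP://mirror.example.com:5022';
```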

I have successfully downgraded multiple SQL 2008 R2 and SQL 2012 Enterprise Edition servers by following the uninstall and reinstall process. Before I uninstalled, though, I made a note of the SP and CUs installed. I also stopped the SQL services and saved the system DB files (MDFs and LDFs for Master and MSDB) to a separate location. Once I reinstalled the correct edition (Developer or Standard Edition, depending on the server’s role), I then updated it to match the SP/CU level that previously existed. Stopping the service, swapping in the MDFs and LDFs from the previous installation, and then restarting the service restored all my databases, logins, jobs, etc… to the now-correct edition instance.
One caveat that I want to mention: I did this on VMs, and I took a snapshot before I started, so if things really went south, I could just recover to that point in time.

The idea of swapping in the MDFs and LDFs makes me feel queasy. Like, literally, I actually feel queasy just thinking about it.

The reason it makes me uncomfortable is that I picture something really weird happening three months later, possibly totally unrelated, and being on a call with Microsoft support and having to explain what I did. If I was them, I’d make me do a totally fresh install and start over before I passed go, unless it was something REALLY obvious. (And even then, I might make me do it for fun. OK, not really. But I’d be tempted!)

I know people also talk about taking copies of system databases for emergency purposes, but I just can’t get on board with it. It’s like playing Dr Frankenstein to me, I guess.

I went through the hell of doing that same technique of mdf/ldf swapping after an uninstall/reinstall on hundreds of servers, and I still remember that queasy feeling! It worked though, even if it took years off my life.

Kendra, so I can’t upgrade a 2008r2 Standard that a former DBA installed to 2008r2 Enterprise (which we were licensed for) to use the remainder of the memory on the physical machine in a FCI (minus 10%)? Or is that one of the few scenarios that is supported, the documentation doesn’t say it is unsupported so I should be ok? I’m not adding any features per the KB you posted, just want to use the enterprise engine features like online rebuilds and compression.

If you install a new 2014 Standard installation, you can restore databases to it that were backed up from Enterprise Edition, unless they contain Enterprise-only features. You can check whether you’re using any of those with sys.dm_db_persisted_sku_features. (http://msdn.microsoft.com/en-us/library/cc280724.aspx)
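To illustrate, the check is just a query against that DMV, run inside the database you plan to move:

```sql
-- Run in the database you want to restore onto Standard Edition.
-- Each row is a persisted Enterprise-only feature (e.g. COMPRESSION,
-- PARTITIONING); an empty result means the restore should succeed.
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;
```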

I have been migrating a large collection of databases off a SQL Server 2008 R2 Enterprise Edition server to new servers as part of a SAN migration project. The previous twenty-some migrations were moved to an identical but clustered SQL Server 2008 R2 instance. But the current batch of databases is different because this particular vendor application doesn’t support clustering and therefore has to move into a standalone and cheaper SQL Server 2012 Standard instance. Unfortunately now I’ve run into a new problem.

After two databases failed to restore to the new instances, I learned that the error message means they are using Enterprise features not supported by Standard edition. The DMV query shows that the compression feature is enabled on the 2008 R2 server for almost every database there, and the new instance error log indicates that is the problem with these two databases. A specific table is named for each database.

How would I disable that compression on this particular set of databases and tables so they can be restored or copied and attached into the downgraded 2012 instance?

The Enterprise backups with this characteristic restore onto the new instance in Suspect mode, which suggests to my novice brain that they need to leave compression behind before they’re moved from the Enterprise instance. Or is there a way to do that in the new instance and redeem them from Suspect status? I haven’t found those answers yet in spite of multiple searches through this site, SQLServerCentral, MSSQLTips, and Google.

But even after running the command to disable vardecimal storage on the offending table and then at the database level, the DMV still reports that data compression is in use.

I tried the SSMS table storage wizard, changing compression settings (which were “Page”), and generated an ALTER statement using these instructions (https://msdn.microsoft.com/en-us/library/hh710073%28v=sql.110%29.aspx), and the DMV still reports compression in use on the database, if not the table. Nothing seems to shut that flag off for the database, though.

I’m at my wits’ end about what to try next, since so far I’ve only succeeded in trashing the test vendor database. I dare not experiment on the other production ones after this misadventure without more information.

Could you help me understand the roadblock and how to get past this problem?

You do need to remove the data compression on each index while it’s still on the Enterprise Edition instance– as you’ve learned, the database can’t be restored (and sadly it checks at the end of the restore).

To remove compression you have to rebuild each of the indexes that are compressed individually with data_compression=NONE.

Most people typically script this out with a command on an index-by-index basis so that the work can be done in smaller chunks. (Rebuilding all the indexes on a large table can sometimes fill up transaction logs and be painful.)
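One way to script that out is to generate the commands from the catalog views. This is a sketch, not a one-size-fits-all script; note that compressed heaps (index_id = 0) need ALTER TABLE … REBUILD instead:

```sql
-- Generate one ALTER INDEX ... REBUILD per compressed index so the
-- work can be run in chunks. Heaps (index_id = 0) need
-- ALTER TABLE ... REBUILD WITH (DATA_COMPRESSION = NONE) instead.
SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' REBUILD WITH (DATA_COMPRESSION = NONE);' AS rebuild_command
FROM sys.partitions AS p
JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id] AND p.index_id = i.index_id
JOIN sys.tables AS t
    ON i.[object_id] = t.[object_id]
JOIN sys.schemas AS s
    ON t.[schema_id] = s.[schema_id]
WHERE p.data_compression <> 0
  AND p.index_id > 0;
```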

Oh, also, I re-read your comment (sorry, long day), and if you’ve already tried the ALTER TABLE rebuilds, it sounds odd that it’s still reporting the feature in use. However, I couldn’t tell you the last time I tested it!

The first thing I would do in that case would be to query sys.partitions for the database and make sure that the data_compression column really does say NONE for each row in the table. If not, you can use that to track down the culprit.
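That check could look something like this:

```sql
-- Anything still compressed in the current database shows up here;
-- an empty result means the rebuilds removed all compression.
SELECT OBJECT_NAME(p.[object_id]) AS table_name,
       p.index_id,
       p.partition_number,
       p.data_compression_desc
FROM sys.partitions AS p
WHERE p.data_compression <> 0;
```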

I Googled another hour or so last night after sending that question and discovered that the log error wasn’t listing all the tables with compression, maybe just the first one it ran into. I expect that’s why the database reported the feature in use when I’d only known to modify one table per database.

There are in fact 33 affected tables in the test and training databases, and 1221 in the production database with data compression enabled. So compression appears to be on all the production tables and only a few of the test environment ones. But there’s not much data in test or training, so even the largest tables are pretty insignificant in size.

I’m working on the larger-scale T-SQL solution today to alter the index compression, so I won’t be right-clicking tables all night in SSMS! In the meantime, I have restored an old copy of the test vendor database back to the original SQL Server 2008 R2 instance to clean this up and test my solution.

Does that stored procedure (especially run through a nice script like this one from SQL Fool–http://sqlfool.com/2011/06/estimate-compression-savings/) give a close enough guesstimate on the uncompressed sizes for me to update the new server’s storage capacity with those results? Or would I be just as far ahead to wait until the databases’ compression has been disabled on the old server and then gather file size statistics?

Thank you again for your help, I really appreciate you taking time to answer!

I wouldn’t use the estimates; I’d rebuild to remove the compression and then look at the space. Even if the estimates were accurate, you’re going to need some empty space to do the rebuilds, so relying on the estimates could lead to some miscalculations.

Thank you again for pointing out those resources, Kendra; everything seems to be working well now. Of twelve application databases, it turned out only three vendor databases and one utility database were using data compression.

I modified my test and utility databases and tested that they restore and attach properly to the 2008 R2 Standard instance now from fresh decompressed backups. I have disabled data compression on the last two databases and will test more with the application team before moving those to the new instance.

You’re absolutely right that altering the tables in smaller sets would help the system load. I didn’t have that luxury and saw the production database deadlocking the application server at times, but the process finished in under 10 minutes.

The production log file is huge at the moment (I’ll check it after the full tonight’s backup) but the data file only used an additional 21% of space.

I have a task to move 12 separate servers from Enterprise to Standard in the next few months, and in my research I’ve come across a method that looks too good to be true. Essentially, the advice is to simply edit the registry directly rather than uninstalling/re-installing SQL Server on the correct edition:

Just to make sure– you’re testing everything out and doing that restore outside of production, right? Totally different servers, etc?

Restoring a SQL Server 2005 database to a SQL Server 2014 instance is supported, and it does specify that in Books Online here: https://msdn.microsoft.com/en-us/library/ms143393.aspx. It will change the compatibility level as part of the process. What do you mean when you say the restore is “suspended”? Are you seeing any errors, or is it just slow?

Excuse my English, I’m French.
The backup is from SQL 2005 Enterprise, restoring to SQL 2012 Enterprise Evaluation.
When I perform the restore, it writes the data file, then it’s suspended.
In the messages, we don’t see:
Converting database ‘db’ from version 661 to the current version 706.

It’s been suspended for more than 2 days.

That’s why I think the upgrade can’t be performed with the Evaluation version.

A few things that can make a restore that slow:

* Storage/network latency where the backup lives and for the drives you’re restoring to
* Whether instant file initialization is enabled where you’re doing the restore (that can help a lot)
* Whether you have a high number of virtual log files in the source database, which could make the restore slower (http://www.brentozar.com/blitz/high-virtual-log-file-vlf-count/)
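For the VLF check, one quick way on those versions is DBCC LOGINFO, which returns one row per virtual log file in the current database:

```sql
-- One row per virtual log file; thousands of rows means a high VLF
-- count, which can slow down restores and crash recovery.
DBCC LOGINFO;  -- run in the source database
```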