Cache frequently used data on SSDs. You can specify an SSD (or an SSD array) to be used to extend memory. SQL Server 2014 will automatically cache data there with zero risk of data loss. (Only clean pages, not dirty pages, are stored there.) The best use case is for read-heavy OLTP workloads. This works with local SSDs in clusters, too – each node can have its own local SSDs (just like you would with TempDB) and preserve the SAN throughput for the data and log files. SSDs are cheap, and they’re only getting cheaper and faster. Here are the questions you’ll want to ask before you use this feature:

Is your total actively queried data set bigger than you can fit in memory? Note that I didn’t say all data: you might have archive or history or audit tables in your databases that are never queried, and there’s no sense in caching those.

Have you already maxed out the memory on the server? If not, start there first – memory can be used for more than just caching clean pages.

Do business requirements force you to use shared storage or magnetic local storage? If not, consider moving the data to local SSD entirely.

Does your server have room for locally attached PCI Express or SAS/SATA solid state drives?

If the answer to all of those questions is yes, an SSD buffer pool extension may be for you. Honestly, Microsoft could stop there and I’d probably still recommend the new version for most of my clients, because that’s a killer performance benefit.
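For the curious, the CTP exposes this through ALTER SERVER CONFIGURATION. Here’s a sketch based on the CTP syntax – the file path and size are made up, and the details could shift before RTM:

```sql
-- Hypothetical: point the buffer pool extension at a file on your local SSD.
-- Early guidance suggests sizing it at a multiple of max server memory.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\MyCache.BPE', SIZE = 64 GB);

-- And to turn it back off:
-- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;
```

You can check the current state in the sys.dm_os_buffer_pool_extension_configuration DMV.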

More online maintenance operations. Got big data in a partitioned table? Is nobody giving you any time to do maintenance? Just no time to stop and get away cause you work so hard to make it every day? Well, with SQL 14, you can rebuild a single partition’s index online, and you can switch partitions in/out using DBA-specified lock priorities. For 24/7 workloads, this gives the DBA the ability to do maintenance with lower locking, CPU, and memory overhead. There’s also new Extended Events stuff you can use to monitor who’s getting blocked and killed. Here’s how the syntax works:

PARTITION = 3 – you can pick the specific partition you want to rebuild, and you can do it online.

WAIT_AT_LOW_PRIORITY – just hang out when you need the schema mod lock.

MAX_DURATION = 5 – wait for up to 5 minutes.

ABORT_AFTER_WAIT = BLOCKERS – this takes a few different values. If BLOCKERS, SQL Server will abort (kill) the queries blocking your index rebuild. If SELF, your index rebuild will give up and let user queries keep going. If NONE, everybody just keeps waiting, doin’ the neutron dance. NONE matches the current behavior in SQL Server 2012, and it’ll be the default.
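Put together, a single-partition online rebuild with those options looks something like this (table and index names are made up):

```sql
-- Rebuild only partition 3, online, waiting up to 5 minutes at low priority
-- for the schema mod lock, then killing whoever is blocking us.
ALTER INDEX IX_SalesDetail_OrderDate
ON dbo.SalesDetail
REBUILD PARTITION = 3
WITH (ONLINE = ON (
        WAIT_AT_LOW_PRIORITY (
            MAX_DURATION = 5 MINUTES,
            ABORT_AFTER_WAIT = BLOCKERS
        )
     ));
```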

AlwaysOn Availability Groups get more secondaries. If you really need to scale out your reads, SQL 14 gives you up to 8 secondaries (up from 4). Of course, you’ll be paying for Enterprise Edition licensing on these, but if you were already going to replicate data out to various reporting or BI servers, now your life is easier.

AlwaysOn AG readable secondaries will be more reliable. In SQL 2012, if your primary drops offline or the cluster loses quorum, the readable replica databases drop offline. (This is hilarious because it’s right when you really want to be able to query the secondaries.) No way to control it – it’s totally automatic. In SQL 14, the secondaries remain online and readable when the primaries aren’t available. However, keep in mind that typical AlwaysOn AG connections go through the AG listener name, and then fetch the list of readable replicas from the primary. This just means that in order to keep your report queries online, you can’t use the AG listener – you have to connect directly to the replica’s server name. I like using a separate set of DNS records for readable replicas, like readonly.mydomainname.com, and have my report servers point at those.
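For example, instead of pointing reports at the listener, the connection string points at the replica’s DNS alias directly (database name is made up; ApplicationIntent=ReadOnly is needed if the replica only allows read-intent connections):

```
Server=readonly.mydomainname.com;Database=SalesDB;ApplicationIntent=ReadOnly;Integrated Security=SSPI;
```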

Initializing the replica means a full database backup/restore from on-premise up to the Azure VMs, too, so this isn’t a great solution for big databases with limited bandwidth.

Connectivity from on-premise to the Azure VM requires a VPN appliance from your datacenter to the Azure datacenter, and today that means a hardware appliance, so there are still some expenditures required. Still way cheaper than buying hardware for a colo, though, and much more flexible.

If you’re really going to use it for disaster recovery, you need a Windows Domain Controller up in Azure as well. Without that, when your primary site dies, none of your Windows machines will be able to log on, which wouldn’t be very useful. SSMS doesn’t automate the deployment of a DC (nor does it alert you if you didn’t think this through).

Failover Cluster Support for Cluster Shared Volumes. With regular volumes, only one node can own the volume at any given time. It owns the entire volume, and no other node can see/read/write files on that volume. However, Windows Server clusters have a type of drive volume called Cluster Shared Volumes with much more flexibility. Multiple cluster nodes can be connected to the same volume at the same time, but each node can access different files on the drive independently. Windows and Hyper-V have supported this for a while (see the BOL section on the benefits), and now SQL Server supports it too. The big benefit here is that if one node of the SQL Server cluster loses connectivity to the storage, it can still read and write data over the network to a different node’s SAN connection.

Smart Backup to Azure. SQL Server 2012 CU2 already lets you back up databases to Azure storage. I hear a lot of people ask me, “Brent, how can I make my backups slower and less predictable?” Those folks loved backing up from on-premise databases over their Internet connections, but that wasn’t enough. They wanted even less predictability, so now they get Smart Backups. With this feature, SQL Server figures out whether it should do a full or differential backup, how often it should do a transaction log backup, and more. Humor aside, this makes sense for people who host their servers in VM providers with very fast Internet connections that don’t pay for bandwidth – specifically, people hosting SQL Server in Windows Azure VMs. Both of those guys are going to be thrilled.
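The plumbing underneath is the backup-to-URL support that showed up in 2012 CU2. Here’s a sketch – the storage account, container, and credential names are all made up:

```sql
-- Create a credential holding your Azure storage account name and access key.
CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',
    SECRET = '<storage access key>';

-- Back up straight to blob storage over the wire.
BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION, STATS = 10;
```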

On-premise SQL Server with data/log files in Azure storage. Great news for those of you who really like juggling chainsaws! Now you get the best of everything:

Expensive on-premise licensing

Expensive bandwidth costs to the cloud

Paying for data storage at Microsoft

Slow backups (because your data has to come down from Azure storage to local on-premise memory then back out to wherever you want it stored, and heaven forbid you be dumb enough to send it back up to Azure storage and pay TWICE for bandwidth in and out)

Hekaton: specialized in-memory OLTP tables. If your application is facing serious concurrency issues with thousands of simultaneous connections trying to lock data, Hekaton offers an intriguing solution. I’m not even going to try to type out an explanation here, but I’ll point out a few challenges with it:

You’ll also probably need to change your code. Hekaton works best with stored procedures, and specifically stored procs that it can compile into native code.

It’s memory-only. If you experience sudden data growth in your Hekaton tables, that means you can cache less of your other tables. If you run out of memory – well, let’s just say you’d better pray you don’t run out of memory, because hello, downtime.
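To give you a taste of those code changes, here’s roughly what a memory-optimized table and a natively compiled proc look like in the CTP. The names and bucket count are made up, and the syntax may shift before RTM:

```sql
-- Assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.ShoppingCart (
    CartId      INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procs require schemabinding and an atomic block.
CREATE PROCEDURE dbo.InsertCart
    @CartId INT, @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
    VALUES (@CartId, @UserId, SYSUTCDATETIME());
END
GO
```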

My Analysis

The Pointer Sisters music probably gave away my true feelings here, but really, folks, I’m so excited. There are very real improvements in here for everybody. If you’re a DBA on a multi-terabyte database, you’re going to love the SSD buffer pool extensions and the granular index rebuilds. If you’re BI-curious, you’re going to be experimenting with the clustered column store indexes. If you’re a software-as-a-service vendor with lots of clients, you’re going to love failover cluster support for CSVs and query performance improvements. And if you’re a developer who works with a SQL Server back end, you’ve got all kinds of new tricks to scale.

I know some DBAs were worried that Microsoft was “all in” with the cloud, and that they’d stop improving the box product. SQL2014 shows that Microsoft is still bringing the awesome.

Now, can somebody just get us a release date and some pricing? I was talking to a PR guy who almost spilled the beans, but he’s so shy.

Well, given the number of bugs I faced with the Windows 2008 RHS service and MSCS clusters, I am not surprised MS now calls AG the new HA solution. MSCS’s old single-storage clustering topology is becoming obsolete compared to AG, which is much more robust and versatile as far as I could see.

For me, the prospect of being able to move away from Replication in SQL Server is very appealing. The complexity that could be removed (distributed systems) is significant. We’ve had some “fun” times together but I’m now ready to move on…..

We’re also looking into Hekaton. With the App/Dev folks already super keen on in-memory database technology, the ability to transition existing environments easily to Hekaton, versus porting to a complete new DBMS could be a deal winner in terms of cost. Too early to say for sure….

Cecil – I’d go ahead and do the MCM. The MCM exams aren’t usually available for about a year after the new version comes out, and so it’ll be quite some time before the MCM has 2014-specific questions.

Todd – well, Microsoft needs to release new versions every 2-3 years in order for Software Assurance to make financial sense for customers. Because of that, I’d say it’s reasonable to expect it within 2-3 years after the release of SQL Server 2012.

SAinCA – my guess (and this is just totally a guess) is that the SSD caching would be an edition-sensitive feature because it increases the amount of effective RAM. Since Microsoft has been using RAM as a throttling limit in Standard Edition, I would bet that they’d limit this feature to Enterprise Edition.

I think it’s probably too early for them to announce anything about deprecation, but I’ll turn that into a question for you: have there been any significant investments in SQL Server’s replication capabilities in the last few versions?

Good question, Brent. No, I have not personally seen any major feature enhancements in replication (sure, there were some, like SubscriptionStreams, Peer-to-Peer, and others). I would assume there won’t be, which means it might be deprecated going forward, but it’s too early to say.

Interesting – Kalen Delaney’s Hekaton whitepaper mentions that replication is not supported. This may of course be a CTP limitation; however, if Microsoft is pushing this as the main driver for upgrading, then the death knell should be audible soon.

One minor point of a semantic / pedantic nature…the abbreviated “SQL 14” is a slightly confusing moniker, if only because versions are referred to variously by the year (name of the product) and the version number, as in, version 9 / 2005, version 10 and 10.5 / 2008 and 2008R2, and 11 / 2012. If we start referring to SQL 12 for example it will be confusing to understand whether we are referring to 11.0 / 2012 or 12.0 / 2014.

WOW, SQL 2014!!! But I just started my certifications for SQL 2012, and yes, it is expensive to get to MCSE on SQL 2012 (5*150 = 750, plus any retakes at $150 each)!!! I do not want to see SQL 2014 with 5 more exams… :( or maybe I do… I am just gonna go ahead with my SQL 2012 cert prep.

Shankar – well, that’s kind of a trick question. I’m not a big fan of doing Hadoop with SQL Server. There’s a ton of free ways to do Hadoop, and I’d really rather not spend SQL Server licensing ($7k/core) to work with that. I understand why some folks do, but it’s just not my cup of tea.

That’s true, but there are some very interesting things going on between SQL and Hadoop (based on Linux or Windows/HDInsight), like PolyBase, where you can query and join Hadoop unstructured data with SQL data in a single SQL query – but this is only available in SQL PDW. Any updates or news on this coming to other editions of SQL?
Also, HDInsight provides a Windows-friendly way to interact with Hadoop – any news there? Especially with the possibility of on-premises HDInsight clusters.

I don’t know of any major form of replication being marked as deprecated. (Updating subscribers in transactional repl hit the “to be removed” list recently, of course.) I would be incredibly surprised if it were removed completely because of the flexibility it offers in terms of allowing a subset of the data to be sent and separate indexes put on it.

Now, would I really recommend a peer to peer topology at this point, even if it isn’t deprecated? Perhaps not because of the interoperability and management limitations that I think will be around for a looong time. But do I personally predict a long future for transactional replication? Certainly. (That’s not based on any insider knowledge, it’s just based on how widely used I’ve found it to be and the fact that it’s a mature feature with an established niche.)

At this point what’s enterprise vs standard edition for 2014 is all speculation. We just don’t have info on that one.

Thanks for the summary on SQL 2014 (I particularly like the vNext updateable columnstore indexes and the controllable online partition switching I have dreamt of since 2005). Furthermore, it looks like MS is taking serious initiative over the current inadequacy of IO subsystems (namely SAN systems) in coping with data growth. This move will clearly encourage people to move toward peer-to-peer hybrid local storage. The trend seems to confirm the end of SAN topology as an economically and technically viable option for database storage.

Dearest Brent and crew,
I really, really wanted to read this blog post, but was unable to sit still enough once the Pointer Sisters started. Perhaps you should warn of these potential side effects prior to linking to freakin’ amazing songs.

Having just finished moving off of SQL 2000, I’m more worried about how quickly SQL 2005 is going to go out of support. With a release every 2-3 years, I’m hoping they extend support to 4 versions rather than the old 3 (assuming that R2 didn’t really count).

I guess this will drop me yet another version behind the power curve :(. I am still trying to get major front office applications to move off of SQL 2000 SE on Windows 2000 SE (even if they stay in 8.0 compatibility mode).
I at least have the back office apps headed to SQL 2008 (some in 8.0 compat) and hope to complete that before SQL 2014 is released (HA, HA).

LondonDBA – you know, it’s an interesting question. I hear the RAC-type request a lot, and then I ask to look at the client’s system. I’ve never seen a SQL Server environment that actually needed that type of setup – we’ve always been able to get to their concurrency/scalability goals by doing index and query tuning.

I suppose that if money was no object and you’d rather throw money at the problem than improve code & design, then I can see how the RAC-style setup would have an appeal. I just haven’t seen that type of client yet though.

Awesome post! Any idea if memory-optimized tables could use a column with a sequence default instead of an identity PK? That’s too bad about the problem of an in-memory table running out of memory and just refusing inserts. Perhaps a view with some clever INSTEAD OF INSERT trigger action could be a workaround.

As usual Microsoft tries to catch up with Oracle. DB Cache on SSD drives? So 2009…
Rebuilding partitioned indexes online? Implemented 2 versions ago in Oracle.
Failover/availability features on RAC won’t be matched even in 2014.
Well, I’m still buying Microsoft stocks, because of new X-box 🙂

Aleksey – yep, and they’re catching up on licensing costs too. Microsoft’s new core-based licensing is catching up to where Oracle was years ago, but from what I hear, still hasn’t quite caught up yet. 😉

OK, here’s the scenario: you’re putting together a design for a new data model and want to ease any later transition to the use of in-memory tables. One regular practice for clustered indexes, as you know, would be to use a sequential key to avoid fragmentation and hence give an ordering to row inserts so they occur in approximately the same physical location. Also, wouldn’t using a GUID significantly bump up the memory overhead per row, which would be even more of an issue for in-memory tables?

Great question. Design is about trade offs. If you’re building a vehicle that may need to hold 8 people, you build a minivan. If you need to go 200mph, you build a small sports car. If you need to take 8 people at 200mph, you build a plane.

But if you’re building a car that needs to go 200mph today, and may need to hold 8 people a couple of years from now, you don’t build a sports car with wings and say, “The wings get me closer to having a plane.” They don’t.

I’m not saying you can’t plan for the future, but Hekaton is a plane. If you don’t need that kind of power, don’t tack wings on your app and hope it will make an easy transition later. Hekaton solves a very specific problem – high concurrency with latch waits. It’s for apps that deal with tens or hundreds of thousands of batch requests per second.

I’m a bit mystified about why Microsoft would see a need to drop IDENTITY for Hekaton at all – surely we could grow our own on such fast tables: store the good ol’ next available value in a Hekaton table and use it? But then, so could Microsoft as a SQL Server feature. I give it fair odds they’ll find a way to shoehorn it back into SQL 2014, and if not, how about SQL 2016!? I mean, yes, it may have some impact and cut a Hekaton table back to a mere Dodekaton or something, but hey.

Sure, but that’s what they said about columnar indexes too and they’ve now come around on that and I think it’s a good decision.

For better or worse, integer/bigint identity columns are a ubiquitous feature of SQL Server logical and physical database designs. And the overhead is tiny: just checking a bit to see if it’s a table with identity, and that’s done at compile time. At runtime you check a spinlock to see if it’s a table with a *busy* identity, and only then do you get into anything with real overhead.

So if you gotta have full ramming speed you don’t use them, the plan sees that and everyone wins.

Does 2014 address the first run performance hit stored procs have always endured due to compilation? (e.g. something like Oracle’s binary version of the stored proc PLX, instead of parsing text when the sproc is first called)

Nick – for typical stored procedures, no. If that initial hit presents a serious problem for you, what some folks do is create a SQL Agent job to execute stored procedures and populate the buffer pool upon startup.
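Another option – swapping the Agent job for a startup procedure, with hypothetical proc names – is a warm-up wrapper marked with sp_procoption:

```sql
-- Startup procedures must live in master; this one runs automatically
-- every time the instance starts.
USE master;
GO
CREATE PROCEDURE dbo.WarmUpPlanCache
AS
    -- Call your hot procedures once so their plans get compiled up front.
    EXEC MyAppDB.dbo.usp_GetCustomer @CustomerId = 1;
    EXEC MyAppDB.dbo.usp_GetRecentOrders @CustomerId = 1;
GO
EXEC sp_procoption @ProcName   = N'dbo.WarmUpPlanCache',
                   @OptionName = 'startup',
                   @OptionValue = 'on';
```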

The only sections that have changed so far are “RDBMS Manageability”, where Buffer Pool Extensions are new and marked as EE-only, and “Reporting Services”, where a few items and one restriction seem to have been removed.

SQL Server 2014 CTP1 has been introduced with the in-memory concept. Can you please help me: I have a 16-core CPU with 16 GB of memory, my production DB size is 50 GB, and currently I am using Windows Server 2008 with SQL Server 2008 R2. If I go for SQL Server 2014 CTP1, is any extra memory upgrade required, since SQL Server 2014 is memory-intensive?

Subir – if your bottleneck is memory, I’d suggest adding more memory. 16GB isn’t much. Standard Edition will use up to 64GB, so might as well put 64GB in there. It’s a lot cheaper than you might think (probably around $500), and way faster than waiting for the next version of SQL Server. Hope that helps!

Regarding Always On, we too had reservation that having 2014 as primary and 2012 as secondary may not work. But right now we are facing challenge even with setting up 2012 as primary and 2014 as secondary.

If you’re in the TAP, be careful – your company has probably signed a non-disclosure agreement, so you’re not supposed to be talking about this publicly. Good news though – this also means you can get quick support from Microsoft on this during your weekly TAP call with them.

Lukas – not off the top of my head, but of course there’s an entirely new storage engine in the form of Hekaton, and you’ll want to learn how that affects the way you code. Not all commands or datatypes are supported, so it’s sort of a backwards step for now, but it can pay off with much faster performance if your code fits that smaller feature subset.

Microsoft seems to be very quiet about a release date for SQL 2014. Does anyone have any SOLID information they can share? Please skip the rumors, things a friend of a guy I met at the bar said and any things some random person tweeted about last Tuesday at 3 AM.

I just talked to a Microsoft rep about licensing on SQL 2012, and he mentioned that SQL 2014 is coming out in April ’14 and that I should wait until then. He said the licensing model is the same as 2012 as well.

Brent – mind telling me if there is a theoretical memory limit to SQL Server 2014? I read of something like 64GB, but as I go through a lot of documentation from MS, there is a lot of talk about 1TB… is that a daydream or something that we can test and measure?

“Replication is supported if each SQL Server endpoint is within two major versions of the current version of SQL Server. Consequently, SQL Server 2014 does not support replication to or from SQL Server 2005”

Note: for completeness, the same link, “Pointer Sisters music” also seems to exist at the bottom of the post. Pointing out, just in case maintainer has an interest in keeping links current; if not, not a problem.

We have an application where we need to handle approx 1000 requests per sec and each request involves calling a stored proc which selects and inserts from the same table. So we have only one table that will grow huge and is transaction heavy. Does this in-memory OLTP solves our problem? Or are there any other approaches to handle such load.

Currently we are on SQL Server 2008 R2 and it peaks out at just 150 requests / sec.

Amit – 1,000 requests per second isn’t that big of a deal for modern database platforms – regular tables can handle that just fine – so let me ask about the second part of your question: what do you mean by “grow huge”?

We are upgrading all our applications from SQL 2005 to SQL 2014.
Is there any document or something which would give me a case study on SQL 2014 HADR strategies?
Your help is highly appreciated, thanks in advance