SQL Server, Columnstore, Data Platform & Community

Hekaton 2014 – The Saviour vs The Lost Cause

With the release of SQL Server 2014, Microsoft has finally unleashed its long-in-the-works In-Memory solution (codenamed XTP for eXtreme Transaction Processing, previously codenamed Hekaton).
Note: I will keep using the name Hekaton, since Dr DeWitt asked people to do so at the 2013 PASS Summit. I can't deny that I do listen to one of the fathers of the feature :)

As recently as a couple of weeks ago, at the SQLPort meeting in Oporto, I was presenting SQL Server 2014 and said a couple of nice things about Hekaton. After the meeting one attendee came up to me: 'Are you mad? Hekaton is a useless feature that has failed everyone's expectations!', etc, etc, etc.
I have heard and seen a lot of people talking, blogging and tweeting about Hekaton's failures.

We need to set the record straight.
There is no need to panic and burn the product or the feature.
There is no need to be overjoyed either, believing that Hekaton by itself will automagically improve each and every workload there is.

The critical points

Not every T-SQL construct and functionality is supported – correct, but that will logically change in the following releases. Nobody ships a perfect Version 1.0.
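To make that point concrete, here is a sketch of a few constructs the 2014 release rejects inside natively compiled procedures (most of them were added later, in SQL Server 2016). The table and column names (`dbo.Orders`, `dbo.Customers`, etc.) are hypothetical, invented purely for illustration:

```sql
-- Rejected by natively compiled procedures in SQL Server 2014:

SELECT o.OrderId, c.Name            -- OUTER JOIN: not supported in 2014
FROM dbo.Orders o
LEFT JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;

SELECT OrderId FROM dbo.Orders
WHERE Status = 1 OR Status = 2;     -- OR (and IN) in the WHERE clause: not supported

SELECT DISTINCT CustomerId          -- DISTINCT: not supported
FROM dbo.Orders;
```

On the table side, memory-optimized tables in 2014 also lacked FOREIGN KEY and CHECK constraints and LOB types such as VARCHAR(MAX) – again, gaps that the next release largely closed.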

It is impossible to alter tables or stored procedures – this is the first release of a whole new architecture; do not expect it to be perfect. Some competing products had serious limitations in the 90s, but right now nobody remembers that.

The actual timings of some native operations sometimes go into a different time dimension, incomparably slow compared to traditional solutions – that is quite unfortunate.

Supposedly every previous release was a major one with every feature stable right from the beginning, whereas Hekaton does not look like a 1.0 release even after 5 years in the making – really? Install SQL Server 2000 without any patches and try it out :) Play with 2005 Reporting Services & Mirroring without any patches :) I won't even continue.
Every release is a beta until it is patched. Look into the mirror.

The bigger picture

Let us review Hekaton's architectural design points:

In-Memory – keeping all Hekaton tables 100% in memory, always available. Sounds like an extremely logical solution, especially since you can buy a couple of TB of RAM for less money than you would spend on licensing just 4 cores of your SQL Server.
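For readers who have not tried it yet, here is a minimal sketch of what declaring such a table looks like in SQL Server 2014. The database, filegroup, path and table names are hypothetical placeholders:

```sql
-- A memory-optimized filegroup must exist before the first in-memory table:
ALTER DATABASE MyDb ADD FILEGROUP MyDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyDb ADD FILE (NAME = 'MyDb_mod', FILENAME = 'C:\Data\MyDb_mod')
    TO FILEGROUP MyDb_mod;

-- The table itself lives entirely in memory:
CREATE TABLE dbo.Orders
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

DURABILITY = SCHEMA_AND_DATA means the row data is still persisted and recovered after a restart; only the access paths change.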

Not storing the indexes on disk – makes huge sense; indexes are rebuilt in memory, so writes get faster because index changes never have to be persisted.

Transaction log writes have been minimised – the right way to go, no doubt. I believe the right next step would be a separate transaction log for Hekaton, since the log will remain the boiling point for any heavy update activity. I do recognise that creating a separate transaction log would be extremely difficult.
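One option for pushing log writes to the absolute minimum already exists: non-durable tables. A sketch, with a hypothetical staging table:

```sql
-- SCHEMA_ONLY: no data is logged or persisted at all;
-- the schema survives a restart, the contents do not.
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload NVARCHAR(4000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

For staging or session-state workloads that can tolerate losing the data on restart, this removes the transaction log from the picture entirely.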

Natively Compiled Stored Procedures – sounds very right; we want our logic to run natively, as close to the cores as possible.
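The declaration itself is short; the 2014 release requires SCHEMABINDING, an EXECUTE AS clause other than CALLER, and an ATOMIC block. A minimal sketch, reusing the hypothetical dbo.Orders table from above:

```sql
CREATE PROCEDURE dbo.GetOrder
    @OrderId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- The whole body is compiled down to machine code at CREATE time.
    SELECT OrderId, CustomerId, OrderDate
    FROM dbo.Orders
    WHERE OrderId = @OrderId;
END;
```

The procedure is compiled into a DLL when it is created, which is exactly why the ALTER limitation mentioned above exists in this release.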

Helping to solve latching contention – yep, we need it.

Storage – I guess there is no doubt that 8 KB pages belong to another decade.

There is no doubt that not every workload needs Hekaton – there is still so much one can solve with traditional solutions – but in my humble opinion the future belongs to solutions of the Hekaton type.

The critical point

Can Microsoft deliver?
Can they really implement and improve the envisioned functionality to make it work "faster than lightning"?

I say yes. I am a believer.
We have seen unfinished products and unreleased features before – yes, but that is life. It happens in every development team, in every company.

Microsoft is heading in the right direction; it might take some time to figure out the details, but the bigger picture is clear. The development team needs more resources and time – I know that, as does every developer who has to deliver.