This month, I have just made some minor improvements to most of the query sets, mainly in the comments and documentation. I have also been working on the Azure SQL Database version of the queries lately.

I have a T-SQL script that you can use to check whether your instance of SQL Server has been patched to mitigate the Spectre/Meltdown CPU vulnerabilities. This works for SQL Server 2008 through SQL Server 2017, for on-premises and cloud-based VM (IaaS) usage. You can get the query for this here.
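The script at the link does the real work; the basic idea is to retrieve your build information and compare it against the patched build numbers listed in Microsoft's guidance, along these lines:

```sql
-- Illustrative sketch only (not the actual script from the link above):
-- pull the version details to compare against Microsoft's list of
-- patched build numbers for your major version of SQL Server
SELECT SERVERPROPERTY('ProductVersion') AS [Product Version],
       SERVERPROPERTY('ProductLevel') AS [Product Level],          -- RTM or SPx
       SERVERPROPERTY('ProductUpdateLevel') AS [Update Level],     -- CUx (NULL on older builds)
       @@VERSION AS [Version Details];
```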

I often make additional minor updates to the queries periodically during the month, so if you are in doubt, downloading the latest version is always a good idea.

Rather than having a separate blog post for each version, I have just put the links for all ten major versions in this single post. There are two separate links for each version. The first is the actual diagnostic query script, and the second is the matching blank results spreadsheet, with labeled tabs that correspond to each query in the set.

Here are links to the most recent versions of these scripts for SQL Server 2012 and older:

Since SQL Server 2012 and older are out of Mainstream support from Microsoft (and because fewer of my customers are using these old versions of SQL Server), I am not going to be updating the scripts for these older versions of SQL Server every single month going forward. I started this policy a while ago, and so far, I have not heard any complaints.

The basic instructions for using these queries are that you should run each query in the set, one at a time (after reading the directions for that query). It is not really a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of these queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (and my comments on how to interpret the results), and think about the emerging picture of what is happening on your server as you go through the complete set. I have quite a few comments and links in the script on how to interpret the results after each query.

After running each query, you need to click on the top left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the blank results spreadsheet.

About half of the queries are instance-specific and about half are database-specific, so you will want to make sure you are connected to a database that you are concerned about instead of the master system database. Running the database-specific queries while connected to the master database is a very common mistake that I see people make when they run these queries.
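A quick way to avoid this mistake is to confirm your database context before running the database-specific queries (YourUserDatabase below is a placeholder for your actual database name):

```sql
-- Confirm which database you are currently connected to
SELECT DB_NAME() AS [Current Database];

-- Switch context if the result is 'master'
USE YourUserDatabase;
```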

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect. I am not planning on moving these to Github any time soon.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases in 80 compatibility mode, which breaks many DMV queries, or that someone is running an incorrect version of the script for their version of SQL Server.
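You can quickly check for the first problem with a simple query against sys.databases (compatibility level 80 corresponds to SQL Server 2000):

```sql
-- Find any databases still in 80 compatibility mode,
-- which breaks many DMV queries
SELECT name, compatibility_level
FROM sys.databases
WHERE compatibility_level = 80;
```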

It is very important that you are running the correct version of the script that matches the major version of SQL Server that you are running. There is an initial query in each script that tries to confirm that you are using the correct version of the script for your version of SQL Server. If you are not using the correct version of these queries for your version of SQL Server, some of the queries are not going to work correctly.
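The exact check varies by script, but it is along these lines (shown here for SQL Server 2017, which is major version 14; this is an illustrative sketch, not the exact query from the scripts):

```sql
-- Illustrative sketch of the initial version check
IF NOT (CONVERT(varchar(128), SERVERPROPERTY('ProductVersion')) LIKE '14.%')
    PRINT 'You are NOT running SQL Server 2017. Get the correct version of this script!';
```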

(Glenn’s Tech Insights… used to be part of our bi-weekly newsletter but we decided to make it a regular blog post instead so it can get more visibility. It covers interesting new hardware and software developments that are generally relevant for SQL Server).

For affected Intel processors, you will need OS patches plus microcode updates (BIOS updates) from your hardware vendor. You may want to consider disabling hyper-threading on affected processors. Microsoft has updated their guidance on this subject here:

Microsoft has already released an updated PowerShell script that you can use to check your current OS and hardware status regarding these exploits. This article walks you through how to download the PowerShell script and run it to check your patching status:

As Computex Taipei 2019 gets closer (May 27), there are an increasing number of leaks and rumors about the exact specifications and features of the upcoming AMD Ryzen 3000 series desktop processors. This family of 7nm mainstream desktop processors will supposedly have SKUs starting with 6C/12T, going up to 16C/32T. Ryzen 3000 series processors will also have PCIe Gen 4.0 support. These processors are supposed to work in most existing 300 and 400 series AM4 socket motherboards. There will also be new 500 series motherboards that will offer additional features.

Figure 2: 2019 AMD Client Lineup

The main unknowns at this point are the exact specifications in terms of base and max boost clock speeds, and how much instructions per clock (IPC) improvement we will see compared to the existing AMD Ryzen 2000 series processors. Depending on the answers, we may see these processors actually having better single-threaded CPU performance than Intel. If that happens, it will further establish AMD as a viable competitor to Intel from nearly every perspective in this market segment. That would be great for consumers.

You might be thinking that this is interesting, but what does it have to do with SQL Server? If the Ryzen 3000 series performs as expected, and is successful in the marketplace, it will be a good precursor to the upcoming 7nm AMD EPYC “Rome” server processors. It will give us some hint about the IPC and clock speed increases that we can expect from the Zen 2 architecture. We should also get much more detail about the Rome processors at Computex.

From the exploded view in Figure 1, it appears that you might be able to disassemble the Samsung enclosure and swap in your own M.2 NVMe drive (which I am sure would void your warranty). This would let you put in any M.2 NVMe SSD that you wanted. I am not 100% sure this is possible though.

Figure 1: Exploded View of Samsung Portable SSD X5

You will also need a machine with a Thunderbolt 3 port, preferably with PCIe 3.0 x4 bandwidth, so that you get the full performance that the drive can deliver. Figure 2 shows the CrystalDiskMark results for this drive in my recent HP Spectre x360 13 AP0023DX laptop, which has a TB3 PCIe 3.0 x4 port.

Figure 2: 500GB Samsung Portable SSD X5 in TB3 PCIe 3.0 x4 port

With Windows 10 version 1809 or later, it is also very important that you set the write-caching policy to what you want it to be for that drive. The new default for external drives is Quick removal, which is safer, but disables write caching in Windows. If you want better write performance, you should enable write caching for the drive as you see in Figure 3.

Figure 3: Windows 10 Write-Caching Policy

Another important factor is exactly what type of Thunderbolt 3 port and PCIe 3.0 interface you have in your laptop or desktop machine. I have a two-year-old Dell Precision 5520 laptop that only has a PCIe 3.0 x2 interface for its USB-C Thunderbolt 3 port. This effectively cuts your maximum sequential performance in half compared to a PCIe 3.0 x4 interface. You can see these results in Figure 4.

Figure 4: Performance Effect of PCIe 3.0 x4 Interface

Figure 5 shows the CrystalDiskMark results for a 1TB Samsung 970 EVO Plus M.2 NVMe drive in my HP Spectre x360 laptop. That drive is an incredible value right now, giving great performance for less than $250.00. Flash NAND SSD prices have been in steep decline over the past year. I vividly remember paying $620.00 for a 1TB Samsung 960 PRO M.2 NVMe drive in November 2017.

Figure 5: 1TB Samsung 970 EVO Plus M.2 NVMe SSD in HP Spectre x360

Figure 6: Samsung Portable SSD X5

This drive is still somewhat pricey, and it does get warm under a heavy load, which happens with all M.2 drives. The built-in heatsink in the enclosure should help with that, compared to an M.2 drive inside a laptop.

Still, if you want TB3-level performance from an external drive and you have a new enough machine to support it, it is a nice solution.

As SQL Server 2019 gets closer to GA status sometime during 2019, I have an “ask” for the decision makers at Microsoft who do the final analysis and actually decide what the hardware-based license limits will be for SQL Server 2019 Standard Edition. My ask is that these limits be raised for SQL Server 2019.

SQL Server 2017 Standard Edition License Limits

SQL Server 2017 Standard Edition has a 128GB limit (per instance) for the Database Engine, plus an additional 32GB per database for in-memory OLTP, and an additional 32GB per instance for columnstore index usage. These license limits were the same for SQL Server 2016 with Service Pack 1.
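You can confirm which edition you are running, and how much physical memory the machine actually has, with a quick query (physical_memory_kb is available in SQL Server 2012 and newer):

```sql
-- Check the edition and visible physical memory for this instance
SELECT SERVERPROPERTY('Edition') AS [Edition],
       physical_memory_kb / 1024 AS [Physical Memory (MB)]
FROM sys.dm_os_sys_info;
```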

SQL Server 2017 Standard Edition is also limited to the lesser of four sockets or 24 physical cores on non-virtualized instances. On virtualized instances, it is limited to the lesser of four sockets or 24 virtual cores (which may map to logical or physical cores).

SQL Server Standard Edition Licensing History

These Standard Edition license limits have slowly risen over the years. SQL Server 2008 Standard Edition was limited to four sockets, but could use the operating system limit for RAM. SQL Server 2008 was still using processor licensing, so there were no core-based license limits. SQL Server 2008 R2 Standard Edition was also limited to four sockets, but was limited to 64GB of RAM (per instance). SQL Server 2008 R2 still used processor licensing, so there were no core-based license limits.

SQL Server 2012 Standard Edition was limited to 64GB of RAM (per instance). It was also limited to the lesser of four sockets or 16 physical cores. Microsoft raised the memory limit for SQL Server 2014: SQL Server 2014 Standard Edition was limited to 128GB of RAM (per instance). It was still limited to the lesser of four sockets or 16 physical cores.

Microsoft raised the core limit for SQL Server 2016. SQL Server 2016 RTM Standard Edition was still limited to 128GB of RAM (per instance). It was limited to the lesser of four sockets or 24 physical cores. There were no license limit increases for SQL Server 2017 Standard Edition. This means we haven’t had a license limit increase for SQL Server Standard Edition since June 2016.

Recent Hardware Advances

In June 2016, Intel Broadwell processors had a maximum of 24 physical cores, which meant that you could hit the SQL Server 2016 core license limit on one socket. It was also very easy to exceed the SQL Server 2016 core license limit with many lower core count SKUs on a two-socket server, which was a more common scenario.

In May 2019, we now have Intel Cascade Lake-SP processors that have a maximum of 28 physical cores. We will soon have AMD EPYC “Rome” processors that have a maximum of 64 physical cores. These hardware advances mean that there is a good argument that the core license limit should be increased to a higher number. If Microsoft increased the core limit to 64, you would be able to use a two-socket server with two flagship Intel 28-core processors. You would also be able to use a one-socket server with one flagship 64-core AMD EPYC “Rome” processor.

I also believe that the per-instance memory limit should be raised to a higher value. My suggestion would be to go to 256GB, which would double the current limit. Server-class DDR4 memory prices have gone down to about $7.00/GB for 32GB DIMMs, so it is feasible and affordable to purchase more RAM than people might have used in the past.

SQL Server 2019 Enterprise Edition Value Proposition

The obvious argument for why Microsoft should NOT raise these hardware license limits is that it might cause more people to use SQL Server 2019 Standard Edition rather than SQL Server 2019 Enterprise Edition, which would reduce Microsoft’s revenues. I think this change would actually encourage more people to finally upgrade from legacy versions of SQL Server Standard Edition to SQL Server 2019 Standard Edition. Being able to use higher core count processors and more memory would be an attractive combination which would increase revenues from SQL Server 2019 Standard Edition.

Microsoft could (and really should) do a better job of demonstrating the value of SQL Server 2019 Enterprise Edition for performance, scalability, and useful Enterprise-only features. They could use KB articles, whitepapers, and blog posts to discuss the low-level optimizations that are already in Enterprise Edition but not in Standard Edition. Putting all of this information together in one place would be extremely valuable.

For example, running a common benchmark like HammerDB on SQL Server 2019 Standard Edition, and then doing an Edition Upgrade and re-running the benchmark on the exact same system, would be a pretty simple way to confirm performance increases. This could be extended by making easy configuration or database/code changes to fully leverage Enterprise Edition and rerunning the benchmark.

Doing a good job of explaining the value of Enterprise Edition would protect Enterprise Edition sales, and actually encourage more people to do Edition upgrades. I have done some work in this area, such as this article:

I don’t have any illusions that Microsoft will just do what I say here. I really do hope they raise these license limits for SQL Server 2019 in order to help create an even more compelling upgrade story. What do you think? Is 128GB of RAM and 24 cores enough?


SSMS 18.0 Goes GA

SQL Server Management Studio 18.0 became generally available on April 24, 2019. This means that it is the final release version (as opposed to being a preview or release candidate version). It is Build 15.0.18118.0. The Release Notes detail all of the new features, improvements and bug fixes in SSMS 18.0.

Starting with SQL Server 2019 preview, SQL Server big data clusters allow you to deploy scalable clusters of SQL Server, Spark, and HDFS containers running on Kubernetes. These components are running side by side to enable you to read, write, and process big data from Transact-SQL or Spark, allowing you to easily combine and analyze your high-value relational data with high-volume big data.

Figure 2: Kubernetes Cluster

There are also improvements in sys.dm_exec_query_plan_stats, including a new database scoped configuration option that lets you control whether last query plan statistics are available at the database level (as opposed to instance-wide with TF 2451).
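As a sketch of how the new database scoped configuration option and the DMF fit together:

```sql
-- Enable last actual execution plan capture for the current database
ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;

-- Retrieve the last actual execution plan for cached queries
SELECT qs.sql_handle, qps.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan_stats(qs.plan_handle) AS qps;
```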

I think Microsoft must be pushing pretty hard to get SQL Server 2019 to GA status by July 9, 2019, which is when SQL Server 2008 and 2008 R2 fall out of extended support. This is just my own common sense speculation. That’s what I would be pushing for if I were in charge!

AMD Ryzen 7 2700X 50th Anniversary Edition

AMD is releasing a special 50th Anniversary Ryzen 7 2700X desktop processor. This processor will have the same exact specifications as a normal AMD Ryzen 7 2700X processor, but will come in special gold colored packaging. It will also have a laser engraved signature from AMD President and CEO Dr. Lisa Su on the heat spreader as shown in Figure 3. This will be covered up when you install the processor.

Figure 3: 50th Anniversary AMD Ryzen 7 2700X Processor

To be fair, AMD is including a few other things to help justify the cost of this processor compared to a typical Ryzen 7 2700X processor. These include a coupon for an AMD 50th Anniversary T-shirt, and a special AMD sticker signed by Dr. Su.

Intel has four main segments in their Second Generation Intel Xeon Scalable Processor Family. These include Xeon Bronze, Xeon Silver, Xeon Gold, and Xeon Platinum. The reason why there are six boxes in Figure 1 is because Intel further segments this with two Xeon Platinum product segments and two Xeon Gold product segments. This is typical over-segmentation by Intel, but it is just the tip of the iceberg as far as their product segmentation goes. That is a topic for another blog post!

Figure 1: Second Generation Intel Xeon Scalable Processor Segments

Modern SQL Server Licensing

Since SQL Server 2012, Microsoft has used core-based licensing rather than socket-based licensing. For non-virtualized servers, you buy core licenses in two-packs, with a minimum of four processor core licenses per physical CPU. These correspond to physical processor cores, not logical processor cores, so Intel HT and AMD SMT are a free benefit. This can give you roughly 25% more overall CPU capacity (certainly not 100%, as you might assume by just counting total logical cores).

With virtualized servers, you buy core licenses in two-packs, with a minimum of four processor core licenses per virtual machine. These correspond to virtual processor cores, which might map to logical cores or physical cores, depending on whether Intel HT or AMD SMT is enabled on the host machine. If you license all of the physical cores on the virtualization host and you have Microsoft Software Assurance, you can forget about counting vCPUs in your VMs, and you can configure them any way you want to.

A key fact here is that Microsoft does not change the license pricing based on the actual performance of a processor core. It doesn’t matter how old or new the processor is, it doesn’t matter how slow or fast it is, the price per core is exactly the same. You need to be aware of this, and keep it in mind as you select the exact processor to use for SQL Server. Otherwise, you will be losing performance and scalability, or paying for too many SQL Server core licenses, or some combination of both. Each SQL Server 2017 Enterprise Edition core license is $7,128.00.
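As a worked example of why the processor choice matters so much, here is the Enterprise Edition core license cost for a two-socket server with two 28-core processors:

```sql
-- 2 sockets x 28 physical cores x $7,128.00 per core license
DECLARE @Sockets int = 2,
        @CoresPerSocket int = 28,
        @CostPerCore money = 7128.00;

SELECT @Sockets * @CoresPerSocket * @CostPerCore AS [Total License Cost]; -- 399168.00
```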

Intel Xeon Gold Processor Family

The Intel Xeon Gold Processor Family is divided between the entry-level two- or four-socket Gold 5200 family and the enhanced two- or four-socket Gold 6200 family. According to Intel,

“With support for the higher memory speeds, enhanced memory capacity, and four-socket scalability, Intel® Xeon® Gold processors deliver significant improvement in performance, advanced reliability, and hardware-enhanced security. It is optimized for demanding mainstream data center, multi-cloud compute, and network and storage workloads. With up-to four-socket scalability, it is suitable for an expanded range of workloads.”

I think the Intel Gold Processor Family is the sweet spot for most SQL Server workloads. It lets you choose from between 4 and 22 physical cores, with frequency-optimized, lower core count SKUs being available. This lets you get higher single-threaded CPU performance while minimizing your physical core counts, which lets you minimize your SQL Server license costs.

“The Intel® Xeon® Bronze processors delivers entry performance for small business and basic storage servers. Hardware-enhanced reliability, availability, and serviceability features designed to meet the needs of these entry solutions.”

Processor Shootout

Let’s compare the Intel Xeon Silver 4208 processor to the Intel Xeon Gold 6244 processor. These are both Second Generation Intel Xeon Scalable processors (Cascade Lake-SP), using the same microarchitecture, so we can directly compare them.

The 14nm Intel Xeon Silver 4208 processor has 8 cores/16 threads with a Base clock speed of 2.10 GHz and a Max Turbo clock speed of 3.20 GHz. It has an 11MB L3 cache and supports DDR4-2400 memory. It has two Ultra Path Interconnect (UPI) links and can scale up to two sockets. It does not support Intel Optane DC Persistent memory. Its thermal design power (TDP) is 85 watts and it has one AVX-512 FMA unit. It has a recommended customer price of $417.00.

The 14nm Intel Xeon Gold 6244 processor has 8 cores/16 threads with a Base clock speed of 3.60GHz and a Max Turbo clock speed of 4.40GHz. It has a 25MB L3 cache and supports DDR4-2933 memory. It has three Ultra Path Interconnect (UPI) links and can scale up to four sockets. It does support Intel Optane DC Persistent memory. Its thermal design power (TDP) is 150 watts and it has two AVX-512 FMA units. It has a recommended customer price of $2,925.00.

You might be thinking “Wow, that is a big difference in the cost of the processors. I could save over $5,000.00 in a two-socket server by choosing the Silver processor instead of the Gold processor”. That is true, but let’s take a look at what you would lose by doing that.

If your server has a typical CPU load, most of your processor cores will be running at their base clock speed most of the time. This makes the base clock speed a very important metric for SQL Server usage. There is a huge difference (71.4%) between 2.10 GHz and 3.60 GHz. The base clock speed of the Gold 6244 is actually higher than the Turbo clock speed of the Silver 4208. If your power management is properly configured for best performance, you will always be running a higher clock speed on all cores of the Xeon Gold 6244 than the Xeon Silver 4208 can ever run, even on a single core at full turbo boost speed. The Turbo clock speed difference between the two processors is 37.5%.

The L3 cache of the Gold 6244 is more than twice the size of the Silver 4208. This means there is a much better chance of finding the data you need for a query in the L3 cache rather than in your main DRAM memory. If you do have to go out to main DRAM, your DDR4-2933 memory with the Xeon Gold 6244 will be noticeably faster than the DDR4-2400 memory with the Xeon Silver 4208.

Comparing Estimated TPC-E Scores

So far, there has been one official TPC-E benchmark score on a system with Second Generation Intel Xeon Scalable processors. It is for a two-socket Lenovo ThinkSystem SR650, with two Intel Xeon Platinum 8280 processors. This system had a TPC-E Throughput score of 7,012.53. This is a decent measure of the CPU capacity of this system. We can use this actual score as a baseline for comparison against other Intel processors from the same generation.

Since this system has a total of 56 physical cores, we can divide 7012.53 by 56 to get a score/core of 125.22, which is a good measure of the single-threaded performance of this processor. Remember, this submission is for a system running SQL Server 2017, so it is not just a synthetic benchmark.

For a simple but relatively accurate comparison, we can adjust this score to account for the difference in the number of physical cores and the difference in the base clock speeds between the processor used for the actual TPC-E submission and any other processor from that same generation.

For a two-socket Xeon Silver 4208 system, the core count ratio is 16/56 and the base clock speed ratio is 2.10/2.70 (versus the Platinum 8280’s 2.70 GHz base clock). So 7,012.53 × (16/56) = 2,003.58, then 2,003.58 × (2.10/2.70) = a total estimated score of 1,558.34.

Dividing this total estimated score of 1,558.34 by the 16 total physical cores in the system gives an estimated score/core of 97.40.
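The whole estimate can be reproduced with a couple of lines of arithmetic:

```sql
-- Scale the actual Platinum 8280 TPC-E score by the core count
-- ratio and the base clock speed ratio for the Silver 4208
DECLARE @BaselineScore decimal(9,2) = 7012.53;  -- 2 x Xeon Platinum 8280 (56 cores, 2.70 GHz base)

SELECT @BaselineScore * (16.0 / 56.0) * (2.10 / 2.70) AS [Estimated Score],           -- ~1,558
       @BaselineScore * (16.0 / 56.0) * (2.10 / 2.70) / 16.0 AS [Estimated Score/Core]; -- ~97.4
```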

Conclusion

These results show that we would have significantly more capacity and much better single-threaded CPU performance with the Intel Xeon Gold 6244 compared to the Intel Xeon Silver 4208. From a SQL Server perspective, the $5000.00 lower hardware cost would be a minor consideration.

This methodology does not account for factors such as total L3 cache size, L3 cache size per core, memory speed, and number of UPI links. If these were factored in, the Intel Xeon Gold 6244 would do even better compared to both the Intel Xeon Platinum 8280 and the Intel Xeon Silver 4208.

The difference is so stark that I would consider dropping down to the four-core Intel Xeon Gold 5222 processor, which would save me $57,024.00 in SQL Server 2017 Enterprise Edition license costs on a two-socket system. Unfortunately, Intel has not released a six-core Intel Xeon Gold 6228 processor to replace the old Intel Xeon Gold 6128 processor, so we have a big gap between the Xeon Gold 5222 and the Xeon Gold 6244.
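That savings figure comes straight from the core license math: dropping from two eight-core Gold 6244s to two four-core Gold 5222s means eight fewer Enterprise core licenses:

```sql
-- 8 fewer physical cores x $7,128.00 per Enterprise core license
DECLARE @CoresSaved int = (8 * 2) - (4 * 2),
        @CostPerCore money = 7128.00;

SELECT @CoresSaved * @CostPerCore AS [License Savings]; -- 57024.00
```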

Another alternative would be to have just one Intel Xeon Gold 6244 processor in a two-socket system. If you did that, you would lose half your total memory capacity and half of your total PCIe 3.0 lanes.

A recent conversation on Twitter prompted this post. I would love to hear your thoughts on this either in the comments section or on Twitter. I am GlennAlanBerry on Twitter.

Earlier this week, I bought a new HP Spectre x360 13-AP0023DX convertible laptop at Best Buy. I have often criticized Best Buy as a bad place to buy a computer, but in this case I ignored my own advice for some good reasons. First, this was a great deal for $1050.00. This particular laptop has an Intel Core i7-8565U “Whiskey Lake” processor, 16GB of RAM, a 512GB Toshiba XG5 M.2 NVMe SSD, a 13.3” IPS 4K touchscreen, Intel UHD 620 integrated graphics, two USB-C Thunderbolt 3 ports, and one USB-A 3.0 port. Second, I was planning on swapping out the 512GB Toshiba M.2 NVMe drive for a bigger and faster 1TB Samsung 970 EVO Plus.

The Best Buy computer buying experience has dramatically improved over what it was several years ago. Back then, if you bought a computer, they would doggedly insist that a “Geek Squad” tech needed to unbox your machine, power it on, and “configure” it for you. That service might have been well suited for a non-technical person, but since I am my own Geek Squad, I didn’t need or want it back then.

The main remaining problem with buying a laptop from Best Buy is that their machines will have whatever bloatware the OEM decided to add to their standard Windows 10 Home image. You can try to uninstall everything you don’t want, or you can just install a fresh copy of Windows 10 Professional. If you choose the latter route, you will also need to download and install all of the HP and Intel-specific drivers from the HP Support website, and you will probably need to update your main system BIOS and any other firmware that is out of date. You can avoid most of this hassle if you buy a laptop from a Microsoft Store, where they use a very clean, bloatware-free image on their machines, called Microsoft Signature Edition.

Performance

After getting everything reinstalled and fully updated, I ran a few quick performance tests. This machine is pretty speedy from a CPU and storage perspective. Since it has two PCIe 3.0 x4 Thunderbolt 3 ports, I can use some very fast external storage if I need to. I do wish it had 32GB of RAM.

The purpose of this machine is to be a backup for my main work laptop (a 15” Dell Precision 5520), just in case I ever have problems with it when I am on the road. It only weighs 2.8 pounds, and it came with a touch pen that you can use to draw with when using it as a tablet. It also has a 12-hour battery life, which is very handy. This machine is actually faster than my two-year-old Dell Precision 5520 with an Intel Xeon E3-1505M v6 processor.

Figure 1: Intel Core i7-8565U Information

This processor compares pretty well to the old (Q3 2015) Intel Core i7-6700K desktop processor, which is pretty impressive for a mobile processor with only a 15W TDP. I have confirmed that it is using Intel Speed Shift in combination with Windows 10, which means that it ramps up its clock speed much more quickly.

Figure 2: Intel Core i7-8565U Benchmark Results

It was pretty easy to get to the SSD after removing six small Phillips screws that are hidden under two rubber strips on the bottom of the machine. After swapping out the OEM Toshiba SSD for the 1TB Samsung 970 EVO Plus SSD, I ran CrystalDiskMark, with the results shown below.


April Release of Azure Data Studio

On April 18, 2019, Microsoft released Azure Data Studio 1.6.0, which you can download here. The release notes are here. The highlights include some UI changes, such as renaming the Servers tab to Connections, and moving the Azure Resource Explorer as an Azure Viewlet under Connections. There are also a number of improvements for SQL Notebooks and 78 bug fixes on GitHub.

Azure Data Studio is stable and mature, and Microsoft (and open source contributors) seem very serious about improving it and maintaining a regular release schedule. They are regularly adding interesting new features, such as Jupyter Notebook support. If you haven’t started playing around with Azure Data Studio, you probably should try it out pretty soon.

SQL Server 2014 Cumulative Updates

On April 16, 2019, Microsoft released SQL Server 2014 SP3 CU3 (12.0.6259.0) and SQL Server 2014 SP2 CU17 (12.0.5632.1). These have four and three public hotfixes, respectively, so they are not big cumulative updates. You should be on the SP3 branch by now, but if not, you should plan on getting there as soon as you can. It is my understanding that there won’t be a Service Pack 4 release for SQL Server 2014, which makes sense given the time constraints.

Remember, SQL Server 2014 falls out of mainstream support on July 9, 2019, which means no more Service Packs or Cumulative Updates. Going by the calendar, there should be one more cumulative update for each branch before July 9, 2019.

If you are going to be stuck on SQL Server 2014 for a while, you should make a concerted effort to get on the latest SP and CU as they become available, and you are able to do your own pre-deployment testing.

AMD Ryzen 3000 Series Rumors

AMD is said to be introducing the 7nm Ryzen 3000 Series CPUs during AMD CEO Dr. Lisa Su’s COMPUTEX keynote address on May 27, 2019. The rumor mill has been working overtime over the past several months, and there is huge anticipation among hardware enthusiasts about what the ultimate truth will be.

Figure 1: AMD Ryzen 3000 Prototype

These processors will work in most existing AM4 socket motherboards, with a BIOS update. There will also be new 500-series chipset motherboards that will better support new features, such as PCIe 4.0.

The bottom line is that you might want to hold off on building a new desktop system, especially a dedicated gaming rig, until after this new generation is released and available. We are already seeing deep discounts of previous generation Ryzen 1000 and 2000 series processors as we get closer to the Ryzen 3000 release date.

You may have heard of Intel Optane Technology, but perhaps you aren’t quite sure what that term actually refers to, and whether it is relevant for SQL Server. Unfortunately, Intel Optane is an overloaded marketing term that covers several different product categories and specific products. Intel also has Optane product offerings for the consumer market, which further confuses the issue.

All of these different products use 3D-XPoint (pronounced 3D Cross Point) technology in different ways for different purposes. First, we have their consumer products.

Their consumer products include system accelerators that act as a cache layer in front of a magnetic HDD or a slow SATA NAND SSD. These include the Intel Optane Memory and Intel Optane Memory M10 series of products. They are useful for their intended purpose, but some system vendors are making dubious marketing claims about them. You will see new systems that claim to have 24GB of “Memory”, which turns out to actually be 16GB of Intel Optane Memory and 8GB of DDR4 DRAM. This is confusing to a typical consumer, and somewhat deceptive in my opinion. Figure 1 shows an example of this.

Figure 1: New Laptop with 24GB of “Memory”

The Intel Optane Memory H10 with Solid State Storage series are hybrid M.2 2280 devices that combine Optane SSD storage, acting as a cache, in front of QLC NAND SSD storage on a single card. They have 256GB, 512GB, or 1TB of usable storage capacity. These should give close to Optane SSD performance for less intense workloads, at a lower cost than a 100% Optane SSD.

There are also pure Optane SSD storage offerings such as the 800P, 900P, and 905P that give the best storage performance from the consumer line. I have a couple of Intel Optane 900P PCIe NVMe storage cards in two of my personal desktop systems, and I have been very impressed with them over the past 18 months. Both the 900P and newer, faster 905P series products are a great choice for an OS drive for a developer or DBA desktop workstation. They also work very well in gaming rigs.

Data Center Products

Intel also has a number of different data center product lines under the Optane umbrella.

Intel has a pyramid diagram that they like to show to explain where these data center products fit in the modern data access hierarchy.

Figure 2: Intel Data Access Pyramid

Here are some more details about Intel’s Optane Data Center products.

Intel Optane SSD DC P4800X Series

These are extremely high performance block storage devices that come in 375GB, 750GB, and 1.5TB capacities. They are available in HHHL AIC and U.2 15mm form factors. They all have a PCIe 3.0 x4 interface and use the NVMe protocol. Most existing servers will be able to use the HHHL AIC form factor in an available PCIe 3.0 x4 expansion slot. It is common to use two of these cards in a Storage Spaces RAID 1 array for redundancy. They are also well-suited for Availability Group (AG) nodes.

These can be used with any version of SQL Server and any relatively recent version of Windows Server. You will want to make sure that you use the Intel Datacenter NVMe driver rather than the generic Microsoft NVMe driver with these drives.

Once you have a couple of these cards, you can use them for pretty much anything you want for SQL Server usage. For example, you can have your tempdb files here, or perhaps your transaction log files. I have had some clients simply move all of their data, log, and tempdb files to Intel Optane SSD DC P4800X arrays. These cards currently run about $4.00-5.00/GB, which is more expensive than most enterprise NAND flash storage, but not outrageously so.

They offer excellent random read and write performance at low queue depths, extremely low latency, and predictable, steady performance under load, along with greater write endurance than NAND-based flash. They also do not lose any performance as they fill up.

Intel Optane SSD DC P4801X Series

These are PCIe NVMe M.2 22110 (110mm) cards that range from 100GB to 375GB in capacity. They have the same technology and main specifications as the larger form factor Intel Optane SSD DC P4800X Series cards. Not as many existing servers have PCIe M.2 slots, although an increasing number of new servers do. As long as your server supports this form factor, you can use these cards the same way you would the Intel Optane SSD DC P4800X Series cards. You can also get M.2 to PCIe expansion slot adapters that will let you use these M.2 cards in older servers.

Intel Optane SSD DC D4800X Series

These are Optane SSD drives that have dual-port controllers for better redundancy. These were just announced at the Intel Data-Centric Innovation Day on April 2, 2019. So far, they are not in the Intel ARK database, and they don’t appear to be readily available yet.

Intel Optane DC Persistent Memory

This is the product line that is newer and less familiar to many people. These are persistent memory modules that also use 3D-XPoint technology. They fit in DDR4 memory slots on servers with selected 2nd Generation Intel Xeon Scalable Processors (Cascade Lake-SP). They are available in 128GB, 256GB, and 512GB capacities. If you have the requisite processor and supported operating system or hypervisor, you can use Optane DC PM modules in a system along with conventional DDR4 DRAM modules. You can have up to six persistent memory modules per processor, but you have to have at least one DRAM module per processor.

You can use Intel Optane DC Persistent Memory in one of three modes: Memory Mode, App Direct Mode, and Storage over App Direct Mode.

Memory Mode

In Memory Mode, you increase the total visible memory of the system by using larger-capacity Intel Optane DC Persistent Memory Modules in place of some of your DDR4 DRAM DIMMs. You use some (up to half) of your RAM slots to hold Intel Optane PMEM DIMMs, and put regular DDR4-2933 DRAM in the remaining memory slots; that DRAM is then invisible to the OS. Intel Optane PMEM is less expensive per GB than 128GB DDR4-2933 DIMMs, and it is available in higher capacities than you can get with DDR4 DRAM.

In this mode, the DDR4 DRAM is “near memory” which is used as a write-back cache. The Optane PMEM is the “far memory”, which actually shows up as the amount of memory visible to the operating system. The ratio of the near/far memory can vary. A common recommendation from Intel is a 4:1 capacity ratio. So for example, you could have six 128GB PMEM modules and six 32GB DDR4 DRAM modules per socket, which would give you 768GB of capacity from the PMEM, with 192GB of DRAM cache in front of it.
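The capacity arithmetic for that example can be sketched as follows. This is just the math from the paragraph above expressed as code; the function name is mine, not anything from Intel:

```python
# Memory Mode capacity sketch, per socket. In Memory Mode, only the
# Optane PMEM ("far memory") capacity is visible to the OS; the DRAM
# ("near memory") acts as an invisible write-back cache in front of it.

def memory_mode_capacity(pmem_modules, pmem_gb, dram_modules, dram_gb):
    """Return (visible_memory_gb, cache_gb, far_to_near_ratio)."""
    far = pmem_modules * pmem_gb    # visible to the OS
    near = dram_modules * dram_gb   # cache, invisible to the OS
    return far, near, far / near

# Intel's 4:1 example: six 128GB PMEM modules, six 32GB DRAM modules
far, near, ratio = memory_mode_capacity(6, 128, 6, 32)
print(far, near, ratio)  # 768 192 4.0
```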

No application changes are required to use Memory mode. In this mode, the PMEM is volatile, which means that the data is cleared when you cycle power (just like DRAM).

App Direct Mode

In App Direct Mode, a PMEM-aware application is required. This mode adds a new tier between volatile memory and block mode storage, and it is byte-addressable just like memory. With SQL Server 2019 on Linux, you can host any or all of your database files on DAX volumes that are built on Intel Optane DC PMEM modules in App Direct Mode. You can also use the new Hybrid Buffer Pool feature in SQL Server 2019 with App Direct Mode.

Storage over App Direct Mode

Storage over App Direct Mode exposes PMEM as block mode storage, using traditional read/write instructions that work with existing file systems. You must have an NVDIMM driver for this mode to be supported. It has higher latency than App Direct Mode, but it doesn’t require any application changes, which means legacy versions of SQL Server should be able to use this mode.

Hybrid Buffer Pool

In a traditional system without persistent memory, SQL Server caches data pages in the DRAM buffer pool. With Hybrid Buffer Pool, SQL Server skips copying the page into the DRAM-based portion of the buffer pool, and instead references the page directly in the database file that lives on a PMEM DAX volume.

Access to data files in PMEM for Hybrid Buffer Pool is performed using memory-mapped I/O, also known as enlightenment. This brings performance benefits from avoiding the copy of the page into DRAM, and from skipping the I/O stack of the operating system when accessing the page on the persistent memory storage volume. Only clean pages can be referenced directly on a PMEM device. When a page becomes dirty, it is kept in DRAM, and is eventually written back to the PMEM device once it has been flushed.
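The clean/dirty page behavior described above can be sketched as a toy model. To be clear, the class and method names below are invented for illustration; this is not the actual SQL Server implementation, just the policy it describes:

```python
# Toy model of the Hybrid Buffer Pool policy: clean pages are referenced
# directly in the PMEM-backed data file (modeled here as a dict); a page
# that gets modified is held in DRAM until it is flushed back to PMEM.

class HybridBufferPool:
    def __init__(self):
        self.dram_pages = {}  # page_id -> data, only for dirty pages

    def read_page(self, page_id, pmem_file):
        if page_id in self.dram_pages:     # dirty: must be served from DRAM
            return self.dram_pages[page_id]
        return pmem_file[page_id]          # clean: reference PMEM directly

    def modify_page(self, page_id, data):
        self.dram_pages[page_id] = data    # dirty pages live in DRAM

    def flush_page(self, page_id, pmem_file):
        # Write the page back to PMEM; it is clean again, so drop the DRAM copy
        pmem_file[page_id] = self.dram_pages.pop(page_id)

pmem = {1: "original"}
bp = HybridBufferPool()
assert bp.read_page(1, pmem) == "original"  # clean read: no DRAM copy made
bp.modify_page(1, "updated")                # page is now dirty, cached in DRAM
bp.flush_page(1, pmem)                      # written back; DRAM entry dropped
assert pmem[1] == "updated" and 1 not in bp.dram_pages
```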

Microsoft recommends that you use the largest allocation unit size available for NTFS (2MB in Windows Server 2019) when formatting your PMEM device on Windows, and that you ensure the device has been enabled for DAX (Direct Access). This feature is available for both SQL Server 2019 on Windows and SQL Server 2019 on Linux. With SQL Server 2019 CTP 2.1, you need to turn on startup trace flag 809 to enable this feature.

Optane Issues

If you use any Intel Optane DC Persistent Memory Modules in your system (in any of the three modes), they run at 2666 MHz, and your regular DDR4-2933 DRAM will also drop to the slower 2666 MHz speed. Intel Optane DC PMEM performs better for reads than for writes. Sequential read latency is about 170ns, while sequential write latency is about 320ns. Sequential read bandwidth is about 7.6 GB/sec per DIMM, while sequential write bandwidth is only about 2.3 GB/sec per DIMM. These figures are all significantly worse than modern DDR4-2933 DRAM. Intel Optane DC PMEM in Memory Mode is faster than anything lower in the data access hierarchy, but it simply does not compare to modern DRAM performance.

Intel Optane DC PMEM is less expensive per GB compared to 128GB DDR4 DRAM modules, but not compared to lower capacity 32GB DDR4 DRAM modules. The price per GB of Intel Optane DC PMEM goes up as the capacity increases, but not as steeply as with the highest capacity DRAM modules.

After all of this, where are we with Intel Optane regarding SQL Server usage? This will depend on the Intel Optane product, your workload, and your current/desired operating environment.

Intel Optane SSD P4800X

I am a big fan of the Intel Optane SSD DC P4800X series of drives for on-premises SQL Server usage. They just work, on any version of SQL Server, on any recent operating system, on any server with PCIe 3.0 support, and they don’t require 2nd Generation Intel Xeon Scalable Processors. The only real drawbacks are limited availability and a relatively high price per GB of capacity.

Intel Optane DC Persistent Memory

Intel Optane DC Persistent Memory gets a more mixed verdict. I don’t think Memory Mode is going to be a good fit for most SQL Server workloads. Using the example from above (with twelve 32GB DDR4-2933 DIMMs and twelve 128GB Optane PMEM DIMMs in a two-socket server), you would have 384GB of near memory cache in front of 1,536GB of Optane in Memory Mode, all running at 2666 MHz. Once your working set exceeds 384GB, you will be hitting the much slower Optane far memory. The current pricing for this configuration works out to about $2,700 for twelve 32GB DDR4-2933 DIMMs and about $9,600 for twelve 128GB Intel Optane DIMMs, or about $12,300 total.

In most cases, you would be better off with twenty-four 32GB DDR4-2933 DIMMs in a two-socket server, running at full speed. This configuration would give you 768GB of very fast DRAM for your buffer pool, at a cost of about $5,400 at current prices. Saving about $7,000 is nice (but insignificant compared to your SQL Server core licensing costs). What is more important is the likely much better performance for most workloads from having nothing but fast DRAM, rather than a mixture of DRAM and PMEM in Memory Mode.
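The cost arithmetic behind that comparison works out like this. The per-DIMM price of roughly $225 is simply the quoted $2,700 divided by twelve; everything else comes straight from the figures above:

```python
# Rough cost comparison of the two two-socket memory configurations
# discussed above, using the approximate street prices quoted in the text.

mixed_mode = 2700 + 9600   # 12x 32GB DDR4-2933 (~$2,700) + 12x 128GB Optane PMEM (~$9,600)
all_dram = 24 * 225        # 24x 32GB DDR4-2933 at roughly $225 per DIMM
savings = mixed_mode - all_dram

print(mixed_mode, all_dram, savings)  # 12300 5400 6900
```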

One bad scenario (for SQL Server) that I hope we don’t see is heavy Memory Mode usage on virtualization hosts. Imagine a two-socket virtualization host that has twelve 512GB Optane PMEM DIMMs and twelve 32GB DDR4-2933 DRAM DIMMs. This host would have 6,144GB of PMEM capacity, with only 384GB of DRAM cache in front of it. That might be great for web-server VMs, but probably not so great for SQL Server VMs with a significant workload.

App Direct Mode is more interesting. I think the Hybrid Buffer Pool feature may work very well (much better than the old Buffer Pool Extension feature), and I like the fact that it is available for both SQL Server 2019 on Windows and SQL Server 2019 on Linux. You should also be able to use the persistent log buffer feature from SQL Server 2016 with App Direct Mode on both Windows and Linux. SQL Server 2019 on Linux will be “fully enlightened,” which means you will be able to store any type of SQL Server database file on a DAX volume that is using Optane PMEM in App Direct Mode.

Storage over App Direct Mode also looks very useful. It will let you use Optane PMEM as very fast block mode storage with older versions of SQL Server, older versions of Windows Server, and possibly older versions of your favorite hypervisor. All you need is an NVDIMM driver. It will still require a server with 2nd Generation Intel Xeon Scalable Processors, though.

This month, I have just made some minor improvements to most of the query sets, mainly in the comments and documentation. I have been trying to fix the broken links to many Microsoft blog posts that resulted from Microsoft’s recent massive web site reorganization.

The basic instructions for using these queries are to run each query in the set, one at a time (after reading the directions for that query). It is not a good idea to simply run the entire batch in one shot, especially the first time you run these queries on a particular server, since some of the queries can take some time to run, depending on your workload and hardware. I also think it is very helpful to run each query, look at the results (along with my comments on how to interpret them), and think about the emerging picture of what is happening on your server as you work through the complete set. The script includes quite a few comments and links after each query on how to interpret the results.

After running each query, click on the top-left square of the results grid in SQL Server Management Studio (SSMS) to select all of the results, then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then paste the results into the matching tab of the blank results spreadsheet.

About half of the queries are instance-specific and about half are database-specific, so you will want to make sure you are connected to a database that you are concerned about, rather than to the master system database. Running the database-specific queries while connected to the master database is a very common mistake that I see people making when they run these queries.

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect. I am not planning on moving these to GitHub any time soon.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have some of their user databases at compatibility level 80, which breaks many DMV queries, or where someone is running an incorrect version of the script for their version of SQL Server.

It is very important that you run the version of the script that matches the major version of SQL Server you are running. There is an initial query in each script that tries to confirm that you are using the correct script for your version of SQL Server. If you are not, some of the queries are not going to work correctly.
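The idea behind that initial check can be sketched like this. Note that this is illustrative Python, not the actual T-SQL at the top of the scripts; the function name is invented, and only the well-known mapping of SQL Server major build numbers to product names is assumed:

```python
# Illustrative sketch of the version-check idea: compare the script's
# target major version against the server's reported build version.
# (The real check is a T-SQL query at the top of each diagnostic script.)

MAJOR_VERSION_NAME = {10: "SQL Server 2008 / 2008 R2", 11: "SQL Server 2012",
                      12: "SQL Server 2014", 13: "SQL Server 2016",
                      14: "SQL Server 2017", 15: "SQL Server 2019"}

def script_matches_server(script_major, server_version_string):
    """Compare a script's target major version with an @@VERSION-style build string."""
    server_major = int(server_version_string.split(".")[0])
    return script_major == server_major

assert script_matches_server(14, "14.0.3045.24")       # 2017 script on a 2017 server
assert not script_matches_server(13, "14.0.3045.24")   # 2016 script on a 2017 server: mismatch
print(MAJOR_VERSION_NAME[14])  # SQL Server 2017
```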