From what comes after


Monthly Archives: April 2015

In my professional history with SQL Server I have seen this topic frequently misunderstood. I have seen consultants blame NUMA for every single evil on certain systems and proactively turn it off, reverting to SMP, while others never actually knew anything about NUMA and paid no attention to it. NUMA is important, and blaming it for all your SQL Server issues without evidence is wrong; on the other hand, you should be aware of it and of how it affects SQL Server performance.

Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users.

Before the introduction of NUMA, things were arguably simpler: the architecture in use was pure SMP. Symmetric multiprocessing (SMP) involves a symmetric multiprocessor system hardware and software architecture in which two or more identical processors connect to a single, shared main memory, have full access to all I/O devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes.

This is how SMP vs. NUMA look logically:

This is how it actually looks in terms of motherboard hardware:

SMP, note single memory bus for access

NUMA, note each CPU has its own memory:

Modern CPUs operate considerably faster than the main memory they use. In the early days of computing and data processing, the CPU generally ran slower than its own memory. The performance lines of processors and memory crossed in the 1960s with the advent of the first supercomputers. Since then, CPUs increasingly have found themselves “starved for data” and having to stall while waiting for data to arrive from memory. Many supercomputer designs of the 1980s and 1990s focused on providing high-speed memory access as opposed to faster processors, allowing the computers to work on large data sets at speeds other systems could not approach.

Limiting the number of memory accesses provided the key to extracting high performance from a modern computer. For commodity processors, this meant installing an ever-increasing amount of high-speed cache memory and using increasingly sophisticated algorithms to avoid cache misses. But the dramatic increase in size of the operating systems and of the applications run on them has generally overwhelmed these cache-processing improvements. Multi-processor systems without NUMA make the problem considerably worse. Now a system can starve several processors at the same time, notably because only one processor can access the computer’s memory at a time.[2]

NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to address the same memory. For problems involving spread data (common for servers and similar applications), NUMA can improve the performance over a single shared memory by a factor of roughly the number of processors (or separate memory banks).[3] Another approach to addressing this problem, used mainly by non-NUMA systems, is the multi-channel memory architecture, in which multiple memory channels increase the number of simultaneous memory accesses.

The main benefit of NUMA is scalability. The NUMA architecture was designed to surpass the scalability limits of the SMP architecture. With SMP, all memory access is posted to the same shared memory bus. This works fine for a relatively small number of CPUs, but not when you have dozens, even hundreds, of CPUs competing for access to the shared memory bus. NUMA alleviates these bottlenecks by limiting the number of CPUs on any one memory bus and connecting the various nodes by means of a high speed interconnection.

SQL Server and NUMA

SQL Server groups schedulers to map to the grouping of CPUs, based on the hardware NUMA boundary exposed by Windows. For example, a 16-way box may have 4 NUMA nodes, each node having 4 CPUs. This allows for a greater memory locality for that group of schedulers when tasks are processed on the node. With SQL Server you can further subdivide CPUs associated with a hardware NUMA node into multiple CPU nodes. This is known as soft-NUMA. Typically, you would subdivide CPUs to partition the work across CPU nodes.

When a thread running on a specific hardware NUMA node allocates memory, the memory manager of SQL Server tries to allocate memory from the memory associated with the NUMA node for locality of reference. Similarly, buffer pool pages are distributed across hardware NUMA nodes. It is more efficient for a thread to access memory from a buffer page that is allocated on the local memory than to access it from foreign memory. For more information, see Growing and Shrinking the Buffer Pool Under NUMA.
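You can observe this distribution yourself. As a sketch, the following query counts buffer pool pages per NUMA node using sys.dm_os_buffer_descriptors (note that scanning this DMV on a large buffer pool can be expensive, so try it on a test system first):

```sql
-- Distribution of buffer pool pages across NUMA nodes.
SELECT numa_node,
       COUNT(*) AS page_count,
       COUNT(*) * 8 / 1024 AS buffer_mb   -- database pages are 8 KB each
FROM sys.dm_os_buffer_descriptors
GROUP BY numa_node
ORDER BY numa_node;
```

On a healthy NUMA system you would expect the pages to be spread fairly evenly across the nodes.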

Each NUMA node (hardware NUMA or soft-NUMA) has an associated I/O completion port that is used to handle network I/O. This helps distribute the network I/O handling across multiple ports. When a client connection is made to SQL Server, it is bound to one of the nodes. All batch requests from this client will be processed on that node.

I mentioned above the idea of memory that is "local" to a scheduler\CPU versus "remote" memory. Not only is remote access more expensive; when Microsoft SQL Server 2012 requires memory in non-uniform memory access (NUMA)-enabled environments, Windows may allocate too much memory from a remote node before allocating a page of local memory. This issue is fixed in Cumulative Update package 3 for SQL Server 2012 Service Pack 1. The workaround without the fix is to disable NUMA by enabling trace flag 8015 when the SQL Server service starts. This directs the SQL Server memory manager to treat all memory as a single node, turning off NUMA in SQL Server, just as I had to do a few times in the past.
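If you go this route, you can verify whether the flag took effect after the restart. A quick sketch (the flag itself must be added as a -T8015 startup parameter via SQL Server Configuration Manager; it takes effect only at service startup):

```sql
-- Check whether trace flag 8015 (treat all memory as a single NUMA node)
-- is enabled globally; status = 1 means active.
DBCC TRACESTATUS (8015, -1);
```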
How would we actually see an issue using SQL Server performance counters?

Performance Counter: Buffer Node: Local node page lookups/sec
Description: Database page reads and writes that are satisfied from the local buffer pool node.
Note: This does not indicate whether the memory for the page is local or remote to the node. It is possible the physical memory allocation is remote/away/foreign but assigned to the node in a steady state, so the server is still doing a remote memory access.

Performance Counter: Buffer Node: Remote node page lookups/sec
Description: Database page reads and writes that are satisfied from the remote buffer pool node. This is normally a physical, remote memory access because of the sorting that SQL Server performs.
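These counters can also be pulled from T-SQL via the sys.dm_os_performance_counters DMV; as a sketch (here instance_name carries the buffer node number):

```sql
-- Compare local vs. remote page lookups per buffer node.
SELECT [object_name],
       counter_name,
       instance_name AS buffer_node,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Node%'
  AND counter_name IN ('Local node page lookups/sec',
                       'Remote node page lookups/sec')
ORDER BY instance_name, counter_name;
```

A consistently high ratio of remote to local lookups on a node is the kind of signal worth investigating further.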

You can use the SysInternals Coreinfo.exe utility to check NUMA node alignment to CPUs. Download coreinfo.exe and run it in a command window. It is best to pipe the result into a text file which can be opened with Notepad, e.g. coreinfo > structure.txt. The program execution should take just a few seconds:

The symbol '*' marks a Logical Processor thread that is part of the processor group. The sign '-' marks a Logical Processor thread that is not part of the processor group.

Processor Groups that are uneven in terms of CPU resources, combined with applications that are not Processor Group aware and with Windows' random assignment of such applications to one of the processor groups, can cause non-deterministic behavior in those applications. Restarting such an application might get it assigned to a different Processor Group with more or fewer CPU resources than before, and hence cause different behavior under the workload applied to it.

In order to get deterministic performance for SQL Server, we recommend configuring Windows Server 2008 R2 to create Processor Groups with the same number of Logical Processor threads per Processor Group.

On Windows 2008 R2 with a large number of processors, the easiest way to get an even number of Logical Processor threads per group is to apply a QFE released by Microsoft in March 2011. The QFE and the related Knowledge Base article can be found here: http://support.microsoft.com/kb/2510206.

You can see more on your NUMA nodes in SQL Server using the sys.dm_os_schedulers and sys.dm_os_nodes DMVs.
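For example, the following sketch maps schedulers to their parent NUMA nodes:

```sql
-- How many visible schedulers (and therefore CPUs) belong to each
-- NUMA node, along with the node's current state.
SELECT n.node_id,
       n.node_state_desc,
       n.online_scheduler_count,
       COUNT(s.scheduler_id) AS visible_schedulers
FROM sys.dm_os_nodes AS n
LEFT JOIN sys.dm_os_schedulers AS s
       ON s.parent_node_id = n.node_id
      AND s.status = 'VISIBLE ONLINE'
GROUP BY n.node_id, n.node_state_desc, n.online_scheduler_count
ORDER BY n.node_id;
```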

In my previous blog entry I profiled Hadoop HDFS; now I will go one step higher in the Hadoop architecture and introduce YARN. YARN was added for efficient resource management and scheduling relatively recently, in Hadoop 2.0; previously, MapReduce was the only layer used in that role.

As you can see above, this is a relatively large change, with YARN now taking its place as the "data OS" on top of HDFS and MapReduce becoming just another framework on top of YARN, like many others.

YARN stands for “Yet-Another-Resource-Negotiator”. It is a new framework that facilitates writing arbitrary distributed processing frameworks and applications.

YARN provides the daemons and APIs necessary to develop generic distributed applications of any kind, handles and schedules resource requests (such as memory and CPU) from such applications, and supervises their execution.

YARN’s execution model is more generic than the earlier MapReduce implementation. YARN can run applications that do not follow the MapReduce model, unlike the original Apache Hadoop MapReduce.

Note that there is a single point of failure here: the JobTracker. The JobTracker is the "master" component over the TaskTrackers. Clients submit MapReduce jobs to the JobTracker, which distributes tasks to the TaskTrackers. TaskTrackers run on Data Nodes and perform the actual MapReduce work.

With the advent of YARN, there is no longer a single JobTracker to run jobs and a TaskTracker to run tasks of the jobs. The old MapReduce 1.0 framework was rewritten to run within a submitted application on top of YARN. This application was christened MR2, or MapReduce version 2. It is the familiar MapReduce execution underneath, except that each job now controls its own destiny via its own ApplicationMaster taking care of execution flow (such as scheduling tasks, handling speculative execution and failures, etc.). It is a more isolated and scalable model than the MR1\Map Reduce 1.0 system where a singular JobTracker does all the resource management, scheduling and task monitoring work.

The ResourceManager has two main components: Scheduler and ApplicationsManager:

The Scheduler is responsible for allocating resources to the various running applications subject to familiar constraints of capacities, queues, etc. The Scheduler is a pure scheduler in the sense that it performs no monitoring or tracking of application status. It also offers no guarantees about restarting failed tasks, whether they fail due to application failure or to hardware failure. The Scheduler performs its scheduling function based on the resource requirements of the applications; it does so based on the abstract notion of a resource Container, which incorporates elements such as memory, CPU, disk, and network. In the first version, only memory is supported.

The ApplicationsManager is responsible for accepting job submissions, negotiating the first container for executing the application-specific ApplicationMaster, and providing the service for restarting the ApplicationMaster container on failure.

With the advent of YARN, you are no longer constrained by the simpler MapReduce paradigm of development, but can instead create more complex distributed applications. In fact, you can think of the MapReduce model as simply one more application in the set of possible applications that the YARN architecture can run, in effect exposing more of the underlying framework for customized development.

MRV2 maintains API compatibility with previous stable release (hadoop-1.x). This means that all Map-Reduce jobs should still run unchanged on top of MRv2 with just a recompile.

So how do I configure my YARN cluster settings and size my cluster correctly?

YARN configuration options are stored in the /opt/mapr/hadoop/hadoop-2.x.x/etc/hadoop/yarn-site.xml file and are editable by the root user. This file contains configuration information that overrides the default values for YARN parameters; the default values themselves are defined in the yarn-default.xml file.
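As a sketch, a minimal yarn-site.xml memory-sizing override might look like the following. The property names are the standard Hadoop 2.x ones; the values are purely illustrative, not recommendations for any particular cluster:

```xml
<configuration>
  <!-- Total memory on each NodeManager available to containers. -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <!-- Smallest container the ResourceManager will allocate. -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <!-- Largest container a single request may ask for. -->
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
</configuration>
```

Container requests outside the minimum/maximum bounds are rounded up or rejected, so these three values together effectively size your cluster's containers.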

Recently one of my customers ran into some challenges with intermittent connectivity failures from application to SQL Server data store\backend. In scenarios such as this the SQL Server Error Log is your friend, as it can be configured to record failed login attempts for an instance, however there is another useful tool – a lesser known SQL DMV sys.dm_os_ring_buffers.

Note that you use this particular DMV at your own peril considering Books Online states that:

In other words, don't make this DMV an integral part of your monitoring solution, but rather be mindful of its existence as a possible tool for ad-hoc use. The DMV sys.dm_os_ring_buffers maintains approximately 1,000 records before wrapping around and replacing the oldest entries first. It exposes four columns, but we're primarily interested in only one of them, record, which contains XML data. There is a lot of data there; it can be used to look at CPU, memory, etc., but in my case I will filter by the connectivity ring buffer to look at connectivity issues:

SELECT CAST(record as xml) AS record_data
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY'

Executing the statement above will produce output similar to the following:

Clicking an XML fragment hyperlink will open the contents in a more readable format in a new tab in SSMS, similar to below, showing a basic ring buffer connectivity error record. Fields that will be of particular interest to you:

SniConsumerError – the error number that was sent to the SNI consumer

RemoteHost – the remote host responsible for the connection

This of course isn't really handy, as no one can look through 1,000 XML records to find the "needle in a haystack". Therefore we need a query:
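A sketch of such a query, using XQuery to shred the record XML into columns (the element names follow the ConnectivityTraceRecord schema as exposed in SQL Server 2012; adjust them if your build differs):

```sql
-- Shred connectivity ring buffer records into a tabular form.
WITH conn AS
(
    SELECT CAST(record AS xml) AS record_data
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY'
)
SELECT r.value('(ConnectivityTraceRecord/RecordType)[1]', 'varchar(50)')  AS record_type,
       r.value('(ConnectivityTraceRecord/Spid)[1]', 'int')                AS spid,
       r.value('(ConnectivityTraceRecord/SniConsumerError)[1]', 'int')    AS sni_consumer_error,
       r.value('(ConnectivityTraceRecord/RemoteHost)[1]', 'varchar(100)') AS remote_host,
       r.value('(ConnectivityTraceRecord/RemotePort)[1]', 'varchar(10)')  AS remote_port
FROM conn
CROSS APPLY record_data.nodes('Record') AS t(r);
```

This turns the haystack into a sortable, filterable result set; for example, you can group by sni_consumer_error and remote_host to see which client is generating the failed connection attempts.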