Troubleshooting SQL Server Memory Consumption

Recently, one of our staging servers had a memory consumption issue: even
32 GB of RAM was looking insufficient, and application performance was being
impaired. Initially I thought the cause would be poorly designed queries, but
after a thorough investigation this turned out not to be the case. I noticed
that whenever the Re-indexing and Update Statistics jobs executed against the
database, memory consumption increased considerably. The database was 155 GB
in size, and the server had 32 GB of RAM and 8 CPUs.

While investigating the system, I decided to test the performance of the
Re-indexing and Update Statistics tasks separately.

The T-SQL used to perform the Re-indexing of the database is shown below.
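
A minimal sketch of that job, assuming it was a single sp_MSForEachtable call
(the StagingDB name is a placeholder for the real database):

USE StagingDB;  -- placeholder database name
GO

-- Run DBCC DBREINDEX against every table; "?" is replaced with each table name
EXEC sp_MSForEachtable @command1 = 'DBCC DBREINDEX (''?'')';
GO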

Note from the above T-SQL that the task uses the undocumented system stored
procedure sp_MSForEachtable along with the DBCC DBREINDEX ("?") command.
Re-indexing the entire database this way was taking around 54 minutes, and as
soon as the T-SQL was executed, memory utilization on the server shot up
rapidly. Since DBCC DBREINDEX is deprecated and will be removed in a future
release of SQL Server, and sp_MSForEachtable is an undocumented stored
procedure, I decided to change the logic of the Re-indexing task as follows:
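
A sketch of the replacement logic, driven by sys.dm_db_index_physical_stats.
The 5% floor below which an index is left untouched is an extra assumption on
my part; the decision that matters here is the 30% threshold:

DECLARE @SchemaName SYSNAME,
        @TableName  SYSNAME,
        @IndexName  SYSNAME,
        @Frag       FLOAT,
        @Sql        NVARCHAR(MAX);

DECLARE IndexCursor CURSOR FOR
    SELECT s.name, t.name, i.name, ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.[object_id] = ps.[object_id] AND i.index_id = ps.index_id
    JOIN sys.tables AS t
      ON t.[object_id] = i.[object_id]
    JOIN sys.schemas AS s
      ON s.[schema_id] = t.[schema_id]
    WHERE ps.index_id > 0                         -- skip heaps
      AND ps.avg_fragmentation_in_percent >= 5.0; -- assumed floor: ignore barely fragmented indexes

OPEN IndexCursor;
FETCH NEXT FROM IndexCursor INTO @SchemaName, @TableName, @IndexName, @Frag;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Reorganize below 30% fragmentation, rebuild above it
    SET @Sql = N'ALTER INDEX ' + QUOTENAME(@IndexName)
             + N' ON ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName)
             + CASE WHEN @Frag < 30.0 THEN N' REORGANIZE;' ELSE N' REBUILD;' END;

    EXEC sp_executesql @Sql;

    FETCH NEXT FROM IndexCursor INTO @SchemaName, @TableName, @IndexName, @Frag;
END

CLOSE IndexCursor;
DEALLOCATE IndexCursor;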

As seen from the above T-SQL, only those indexes whose fragmentation level is
below 30% are reorganized, whereas those whose fragmentation level is above
30% are rebuilt. The script also eliminates both the undocumented
sp_MSForEachtable stored procedure and the DBCC DBREINDEX command.

Once this logic was implemented, the Re-indexing task completed in just 18
minutes, and memory consumption on the server was a mere 5.29 GB.

The logic initially used for performing the Update Statistics task was as
follows.
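
In essence this was another sp_MSForEachtable one-liner; the exact wrapper is
an assumption on my part, but the FULLSCAN is the point:

-- Update statistics on every table with a full scan of its rows
EXEC sp_MSForEachtable @command1 = 'UPDATE STATISTICS ? WITH FULLSCAN';
GO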

The majority of the tables in the database had columns of the NTEXT and IMAGE
data types.

Note from the above T-SQL that UPDATE STATISTICS with FULLSCAN is performed
for each and every table in the database, even those with columns of data
types such as NTEXT, IMAGE, and VARBINARY. This is very inefficient and can
kill the performance of your server.

If a column has the NTEXT data type, its data is stored as a LOB structure.
For a LOB column, when statistics are created, the first 100 bytes and the
last 100 bytes of each value are read to generate the statistics. If we do
not specify a sample rate for the UPDATE STATISTICS command, SQL Server
calculates the sample rate based on the number of rows and the number of
pages used by the table. The number of BLOB pages, however, is not
considered, so it is possible that when we run UPDATE STATISTICS a large
volume of data is sampled and 200 bytes of every BLOB row are read, which
generates large I/O requests and increases memory consumption. Therefore, if
a table has columns of the TEXT, NTEXT, IMAGE, or VARBINARY data types, it is
always best to perform the UPDATE STATISTICS operation without a FULLSCAN. I
changed the T-SQL involved in performing the UPDATE STATISTICS operation as
follows:
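
A sketch of the revised job, reusing the cursor pattern from the Re-indexing
script; the essential change is that UPDATE STATISTICS now runs with the
default sample rate instead of FULLSCAN:

DECLARE @SchemaName SYSNAME,
        @TableName  SYSNAME,
        @Sql        NVARCHAR(MAX);

DECLARE TableCursor CURSOR FOR
    SELECT s.name, t.name
    FROM sys.tables AS t
    JOIN sys.schemas AS s
      ON s.[schema_id] = t.[schema_id];

OPEN TableCursor;
FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- No FULLSCAN: SQL Server picks the sample rate, so the LOB
    -- columns are not scanned end to end
    SET @Sql = N'UPDATE STATISTICS '
             + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N';';

    EXEC sp_executesql @Sql;

    FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName;
END

CLOSE TableCursor;
DEALLOCATE TableCursor;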

Once the above logic was implemented, the Update Statistics task completed
within 6 minutes and without straining memory resources.

My suggestion to database developers is to avoid the NTEXT data type as much
as possible in their development work; it is deprecated, and NVARCHAR(MAX) is
the recommended replacement.
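
A migration of an existing NTEXT column could look like the following; the
table and column names are hypothetical:

-- Change the column type (assumes the column allows NULLs)
ALTER TABLE dbo.Documents ALTER COLUMN Body NVARCHAR(MAX) NULL;

-- Rewrite existing values so that data small enough to fit is stored in-row
UPDATE dbo.Documents SET Body = Body;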

This was one approach I used to fine-tune the performance of a system that
was badly impacted by the Re-indexing and Update Statistics tasks. If you
have any suggestions, please do let us know; alternatively, you can write to
me at singhsatnam84@yahoo.com.