1. How can I improve DBCC performance in SQL Server?
The DBCC (DataBase Consistency Checker) utilities are a collection of programs used to verify the integrity of a SQL Server database. They are conceptually similar to file system checking programs such as CHKDSK in MS-DOS, Windows 95, and Windows NT, and fsck in UNIX. Like file system checking programs, DBCC can take a significant amount of time to run on large data sets. This article discusses ways to improve DBCC performance. It focuses primarily on SQL Server 6.0, although in some cases the information also applies to version 4.2x.
Overview of DBCC
Most file systems and database systems provide integrity checking utilities to periodically verify data structure consistency. For example, the Rdb Management Utility (RMU) performs functions similar to DBCC on the Oracle Rdb database.
Integrity checking utilities generally have a long running time when used on large data sets, and DBCC is no different. However, there are several actions that can maximize the performance and minimize the impact of DBCC.
First, a review of the DBCC utilities most commonly used for integrity checking: NEWALLOC and CHECKTABLE/CHECKDB. CHECKDB performs the same checks as CHECKTABLE, but for each table in the database in succession.
NEWALLOC checks the integrity of all table and index page chains, and checks that each page is correctly allocated. It checks the integrity of all extent chains in the database, and verifies that each chain is correctly linked. NEWALLOC also checks for object ID consistency between the page, the extent, and sysindexes. In general, NEWALLOC checks things at a page and inter-page level.
CHECKTABLE/CHECKDB by contrast spends most of its time checking things at an intra-page level. In addition to checking the page chain, it checks that indexes are in properly sorted order, that the data information inside each page is reasonable, that page offsets are reasonable, that the sysindexes entries for the table are correct, and that the data row count equals the leaf row count for non-clustered indexes.
DBCC NEWALLOC and CHECKDB perform largely complementary checks and should generally both be run to get a comprehensive check of the database.
By necessity, DBCC is I/O-intensive. For NEWALLOC, running time is roughly proportional to the database space in use, because most of NEWALLOC's time is spent doing sequential reads, which scale linearly with the size of the database. For the same reason, NEWALLOC can be significantly faster on version 6.0, because the parallel read-ahead facility increases the scanning rate.
Running time for CHECKDB is more dependent on the number of indexes in the database and the number of rows per table, so the time cannot be predicted from the database size alone. Version 6.0 uses parallel threads for CHECKTABLE, one to check each index. When combined with the 6.0 read-ahead facility, on computers with sufficient memory, CHECKTABLE/CHECKDB can be significantly faster on 6.0 than on 4.2x. The amount of memory needed to ensure this is approximately the amount at which the SQL page cache (visible with DBCC MEMUSAGE) is larger than the largest individual table in the database. On computers with less memory than this, you may want to experiment with disabling read ahead for the CHECKTABLE/CHECKDB session by using trace flag 653. You can disable parallel CHECKTABLE by using trace flag 2508. Setting the sp_configure 'RA worker threads' parameter to 0 disables both read ahead and parallel CHECKTABLE. See the version 6.0 online documentation for details on how to use trace flags.
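As a sketch of the options just described (DBCC TRACEON syntax per the version 6.0 documentation; verify the flag behavior on your installation before relying on it):

DBCC TRACEON(653)    -- disable read ahead for this session
DBCC TRACEON(2508)   -- disable parallel CHECKTABLE for this session

-- Or disable both server-wide:
EXEC sp_configure 'RA worker threads', 0
RECONFIGURE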
Tests indicate that SQL Server 6.0 can run DBCC NEWALLOC on a 2 GB database in less than 30 minutes, when running on a typical industry standard symmetric multiprocessor (SMP) platform.
As for the concurrency impact of DBCC, NEWALLOC and CHECKDB are both online checks, in that the database can be in use while they run. There is a concurrency impact, but in many cases it is acceptable. The impact of NEWALLOC is primarily the I/O and CPU load from the check. In addition to the I/O and CPU load, CHECKDB places a shared table lock on each table while it is being checked. While allowing SELECTs, this prevents modification of the table until CHECKDB moves on to the next table in the database. The share lock is necessary to achieve a reliable check.
It is usually best to run NEWALLOC when update activity in the database is at a minimum, since there is a possibility of spurious errors caused by in-progress changes during the check.
Specific Steps to Minimize DBCC Impact and Increase Performance
General Advice:
1. Since DBCC is I/O-intensive, use hardware with plenty of I/O capacity. One good way to increase I/O capacity is by placing the database on a large RAID disk array, using as many physical drives as possible.
2. Use hardware with plenty of RAM. The ideal case would be sufficient RAM such that the largest table can be contained in the SQL Server page cache. This is especially important on version 6.0 where it will allow full utilization of parallel index checking and read ahead.
3. Ideally you should run DBCC NEWALLOC, CHECKDB, CHECKCATALOG, and TEXTALL (if you have text/image data) in conjunction with your database dumps. This verifies the state of the database at the time of the dump. However, if time does not permit all of these checks, the best single check to run is NEWALLOC. It is typically faster than CHECKDB, and provides a good overall check of the database. It is better to run NEWALLOC alone than to run no checks at all.
4. If your database has a nightly interval where update activity is low, rather than setting the database to single-user mode for NEWALLOC, just run it and watch for any errors. On the optimistic principle that most of the time the low activity will not cause spurious errors, you then need to schedule a NEWALLOC in single-user mode only when you see errors.
5. Although it is best to run both NEWALLOC and CHECKDB together, if circumstances require it (say the database is extremely large), you could consider running NEWALLOC as the more frequent check, owing to its non-blocking nature and quicker running time. Then, on a less frequent basis, supplement this with CHECKDB.
6. Run CHECKDB with the NOINDEX option. The most time-consuming aspect of CHECKDB is checking non-clustered indexes. These checks can be bypassed with this syntax. For example:

DBCC CHECKDB(MYDB, NOINDEX)

7. Run DBCC on a “hot backup” server. Sites with close to 24x7 uptime requirements often have a hot backup computer on which transaction logs are loaded frequently. Otherwise a failure on the main server would preclude 24x7 availability. If you have a hot backup computer, it’s possible to run DBCC on that computer without impacting the main server. A DBCC run on the backup computer is just as valid and effective a test as when run on the main server. If you do not have a hot backup server, the combined benefits of the safety net it provides plus the DBCC solution can be strong arguments for getting one.
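The general advice above might be combined into a scheduled maintenance batch along the following lines (MYDB is a placeholder database name; drop or add checks according to the time available and whether the database contains text/image data, and check the exact DUMP syntax and dump device name for your site):

DBCC NEWALLOC (MYDB)
GO
DBCC CHECKDB (MYDB, NOINDEX)   -- NOINDEX skips the costly non-clustered index checks
GO
DBCC CHECKCATALOG (MYDB)
GO
DBCC TEXTALL (MYDB)            -- only if text/image data is present
GO
DUMP DATABASE MYDB TO MYDB_DUMP_DEVICE   -- dump after the checks pass; device name is hypothetical
GO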

How to Run CHECKTABLE While In Production:
Sometimes you may need to check a specific table or its indexes. On large memory computers, there is a technique that can sometimes allow this with very little concurrency impact, even while the computer is at modest production activity levels. It basically consists of pre-loading the page cache with the table and index contents using a non-blocking SELECT with the NOLOCK optimizer hint, then checking a single index at a time to minimize the time the table is share locked. In some cases, it’s possible to check a 200 MB table in less than 60 seconds using this technique. The following are the steps needed to achieve this:
1. Verify activity level on the computer is modest. This will help ensure that pre-loaded cache pages are not stolen by concurrent requests while DBCC runs. Here is a good query to help check this. Run it several times and observe what queries are active.
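One such query (a sketch against the master..sysprocesses system table; the system procedure sp_who reports similar information):

SELECT SPID, STATUS, HOSTNAME, CMD, BLOCKED
FROM MASTER..SYSPROCESSES
WHERE STATUS <> 'sleeping'

A short, stable result set across several runs suggests activity is low enough to proceed.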

2. Verify the table and indexes you want to check will fit into available page cache by comparing the table and index size to the page cache size. Table and index size in 2048-byte pages can be seen by issuing this query, where TAB1 represents your table name:

SELECT NAME, INDID, DPAGES
FROM SYSINDEXES
WHERE ID = OBJECT_ID('TAB1')

Execute DBCC MEMUSAGE and inspect the size of the returned “Page Cache” value. Only if the sum of your table and index size is less than the page cache size is it possible to use this technique. Ideally, page cache should be significantly larger to allow servicing of other requests during execution of DBCC, without depleting the cache holding the table being checked.
3. Pre-load the page cache by issuing SELECTs that return no results, using NOLOCK and INDEX optimizer hints to ensure a non-blocking SELECT and that each index is covered. This could take several minutes, depending on table size and computer I/O bandwidth.
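As a sketch, assuming a table named TAB1 whose SYSINDEXES rows show a clustered index (INDID 1) and one non-clustered index (INDID 2), and using the version 6.0 optimizer hint syntax (verify the exact hint form against your documentation):

SELECT COUNT(*) FROM TAB1 (INDEX = 0 NOLOCK)   -- scan all data pages into cache
SELECT COUNT(*) FROM TAB1 (INDEX = 2 NOLOCK)   -- scan non-clustered index 2 into cache

Using COUNT(*) forces a full scan while returning only a single row to the client, so the cache is loaded without the cost of transferring the table contents.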

4. While the above query runs, use PerfMon to watch I/O vs. CPU activity. Watch all instances of the "% Disk Time" counter of the "LogicalDisk" object. Also watch the "% Total Processor Time" counter of the "System" object. To see valid disk performance information, you must have previously turned on the Windows NT DISKPERF setting by issuing "diskperf -Y" from a command prompt. See the Windows NT documentation for more details.
After all the data is pre-loaded, you can usually re-scan the entire table with one of the above commands within a few seconds, during which CPU should be high and I/O almost 0. Do this to verify the data is still in cache.
5. Run DBCC CHECKTABLE on a single index of the table at a time. During this interval, a share lock will block attempts to update the table, but (depending on the computer speed and table size) CHECKTABLE will finish checking a single index within 30 seconds to two minutes. For example:

DBCC CHECKTABLE(TAB1, 2)

Then proceed and check the other indexes on the table.
6. While CHECKTABLE is running, watch closely for any blocking you may be causing. If you miscalculate and CHECKTABLE causes more blocking than you anticipate, you can usually abort CHECKTABLE using the Transact-SQL KILL command. Here is a good query to watch the amount of blocking:
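One such query (a sketch; in master..sysprocesses, a non-zero BLOCKED column holds the spid of the process holding the blocking lock):

SELECT SPID, BLOCKED, STATUS, CMD, HOSTNAME
FROM MASTER..SYSPROCESSES
WHERE BLOCKED <> 0

If rows persist here while CHECKTABLE runs, consider aborting it with KILL and rescheduling the check.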

Conclusion:
Integrity checking programs for file systems or databases tend to have long running times when applied to large data sets. Performance of SQL Server 6.0 DBCC has been significantly improved via read ahead and parallel index checking. When SQL Server 6.0 is run on an adequately equipped platform, and the techniques mentioned in this article are used, it is usually possible to reduce the impact of DBCC to a reasonable level. Specific techniques include running only the necessary DBCC utilities, understanding the concurrency impact of each utility and hence when during the day it can be run, running DBCC on a "hot backup" server, and pre-loading the page cache prior to checking a specific table.
2. Can I use the Microsoft Access Upsizing Tools with SQL Server v. 6.0?
When you try to upsize a Microsoft Access version 2.0 database to Microsoft SQL Server version 6.0, you may receive an error message such as the following:
ODBC-call failed. [Microsoft][ODBC SQL Server Driver][SQL Server] Column 'D.status' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. [#8120]
The Microsoft Access Upsizing Tools are not designed for use with Microsoft SQL Server version 6.0. They are designed only for Microsoft SQL Server version 4.x for Windows NT and for Microsoft SQL Server version 4.x for OS/2.
3. How are TCP/IP and Windows Sockets supported in SQL Server?
Microsoft SQL Server version 6.0 supports client communication for Windows- or Windows NT-based clients, using standard Windows Sockets as the IPC method across the TCP/IP protocol. The Windows Sockets Net-Libraries have been extensively tested on the supported platforms for connecting to Microsoft SQL Server. Using these Net-Libraries with other TCP/IP protocol stacks should work if those stacks properly support Windows Sockets; however, such use is not guaranteed. The protocol provider should test and state its support policy.
Third party 16-bit TCP/IP products (other than those provided with Windows for Workgroups) which properly support the Windows Sockets specifications should work properly with the Win16 TCP/IP Sockets Net-Library (DBMSSOC3.DLL). While not officially tested and supported, products that properly implement the specification should work with the Net-Library.
MS-DOS-based applications running with the Microsoft Windows TCP/IP protocol stack must use the Named Pipes Net-Library, DBNMPIPE.EXE. There is no TCP/IP sockets support in MS-DOS.
Microsoft SQL Server version 4.21 ships with Windows Sockets client Net-Libraries. The Windows Sockets Net-Library for Windows is supported on Windows for Workgroups version 3.11 with Microsoft TCP/IP for Windows for Workgroups version 1.0, and the Windows 3.1 subsystem (WOW) of Windows NT version 3.1 and Windows NT Advanced Server version 3.1. The Windows Sockets Net-Library for Windows NT is supported on Windows NT version 3.1 and Windows NT Advanced Server version 3.1.
The Microsoft TCP/IP socket Net-Library for MS-DOS is supported for MS-DOS-based applications running on Windows for Workgroups 3.11 with Microsoft TCP/IP for Windows for Workgroups version 1.0, but is not compatible with the MS-DOS environment of Windows NT.
For more information about this topic and other related issues, see the Microsoft Knowledge Base article Q107647, “Connecting to SQL Server from TCP/IP Sockets Clients”.
4. What memory allocations are suggested for SQL Server?
Microsoft SQL Server allows the use of up to 2048 MB of virtual memory. Windows NT provides each 32-bit Windows application a 4-gigabyte (GB) virtual address space, the lower 2 GB of which is private per process and available for application use. The upper 2 GB is reserved for system use.
The 4-GB address space is mapped to the available physical memory by the Windows NT Virtual Memory Manager (VMM). The available physical memory can be up to 4 GB, depending on hardware platform support.
A 32-bit Windows application such as SQL Server only perceives virtual or logical addresses, not physical addresses. How much physical memory an application uses at a given time (the working set) is determined by available physical memory and the VMM. The application cannot directly control memory residency.
Virtual address systems such as Windows NT allow the over-committing of virtual memory, such that the ratio of virtual to physical memory exceeds 1:1. As a result, larger programs can run on machines with a variety of physical memory configurations. However, in most cases, using significantly more virtual memory than the combined average working sets of all the processes will result in poor performance.
For more information about this topic and other related issues, see the Microsoft Knowledge Base article Q110983, "Recommended SQL Server for Windows NT Memory Configurations."
5. How should performance optimization for SQL Server be approached?
To most effectively optimize Microsoft SQL Server performance, you must identify the areas that will yield the largest performance increases over the widest variety of situations, and focus analysis on these areas. Otherwise, you may expend significant time and effort on topics that may not yield sizable improvements.
Experience shows that the greatest benefit in SQL Server performance can be gained from the general areas of logical database design, index design, and query design. Conversely, the biggest performance problems are often caused by deficiencies in these same areas. If performance is a concern, you should concentrate on these areas first, since very large performance improvements can often be achieved with a relatively small time investment.
While other system-level performance issues, such as memory, cache buffers, hardware, and so forth, are certainly candidates for study, experience shows that the performance gain from these areas is often of an incremental nature. SQL Server manages available hardware resources in a largely automatic fashion, reducing the need (and thus, the benefit) of extensive system-level hand-tuning.
For more information about this topic and other related issues, see the Microsoft Knowledge Base article Q110352, "Optimizing SQL Server Performance."
6. What are the causes and ramifications of the Transaction Log filling up?
The SQL Server transaction log can become full, which prevents further UPDATE, DELETE, or INSERT activity in the database, including CHECKPOINT.
This is usually seen as error 1105:
Can't allocate space for object 'syslogs' in database 'dbname' because the 'logsegment' segment is full. If you ran out of space in syslogs, dump the transaction log. Otherwise use ALTER DATABASE or sp_extendsegment to increase the size of the segment.
This can happen on any database, including master or tempdb.
Several difficult-to-predict factors can account for variation in log space consumption, such as:
1. A large atomic transaction, especially a bulk update, insert, or delete.
2. An uncommitted transaction.
3. Checkpoint handler truncation bandwidth exceeded.
4. Truncation threshold exceeded.
5. Interactions between any of the previously described conditions.
6. Transactions marked for publication but not read by the log reader.

For more information about this topic and other related issues, see the Microsoft Knowledge Base article Q110139, "Causes of SQL Transaction Log Filling Up."
7. What could cause the SQLMonitor Service to fail?
The SQLMonitor Service on Windows NT uses the SQL Server System Administrator (SA) account to access SQL Server for Windows NT. If the SA account’s password is changed from the default, which is none or NULL, any attempt to start the SQLMonitor service results in the following error:
Error 2186 The service is not responding to the control function.
Could not start the SQLMonitor service on \\Server.
To resolve this problem, you must update the SQL Monitor password parameter in the Windows NT Registry.
For more information about this topic and other related issues, see the Microsoft Knowledge Base article Q104919, "SQLMonitor Service Terminates if SA Password Changes."
8. How are UPDATES and DELETES affected in transactions in which a table scan is required?
UPDATE and DELETE statements that require a table scan result in an exclusive lock held on the modified table.
Updates and deletes that do not use an index to search for the rows that require modification also require an exclusive table lock. This lock is not generated as a result of page lock promotion; rather, an exclusive table lock is taken immediately.
For more information about this topic and other related issues, see the Microsoft Knowledge Base article Q125770, "Locking Behavior of Updates and Deletes in SQL Server."
9. What causes a discrepancy between the DBCC page count and the count reflected in SYSINDEXES?
It is not unusual for DBCC to find a discrepancy between the actual page count and the count reflected in SYSINDEXES DPAGES for the SYSLOGS table. This discrepancy occurs because the page count in SYSINDEXES (DPAGES) is not updated every time something is logged; that would cause too much overhead. Instead, the changes are saved until a CHECKPOINT is executed.
The discrepancy does not cause problems because the value in SYSINDEXES is used only for reporting space allocation, not for enforcing it. Also, the occasionally erroneous value in SYSINDEXES never affects the choice of access strategy because queries are never run on SYSLOGS.
For more information about this topic and other related issues, see the Microsoft Knowledge Base article Q39113, "DBCC Reports Page Count Discrepancy."
10. What types of communication errors could be encountered on SQL Server?
A variety of Microsoft SQL Server communication-related errors are possible. In general, these do not indicate a problem with SQL Server, but rather a network, network configuration, or client application problem. On both the client and server sides, SQL Server and its applications mostly exist above the ISO (International Organization for Standardization) network layer. The responsibility for establishing and maintaining a reliable network connection belongs to the network and system layers below SQL Server.
Possible errors include:
Server-Side Errors
17832 Unable to read login packet(s)
17825 Unable to close server-side connection
17824 Unable to write to server-side connection
10058 Can’t send after socket shutdown
10054 Connection reset by peer
10053 Software caused connection abort
1608 A network error was encountered while sending results to the front end
232 The pipe is being closed
109 The pipe has been ended
Client-Side Errors
10008 Bad token from SQL Server: datastream processing out of sync
10010 Read from SQL Server failed
10018 Error closing network connection
10025 Write to SQL Server failed
For more information on this topic, refer to SQL Server Books Online, Administrator's Companion, or see the Microsoft Knowledge Base article Q109787, "SQL Communication Errors 17832, 17824, 1608, 232, and 109."
11. How does Microsoft SQL Server handle encryption?
Microsoft SQL Server version 6.0 allows for encryption of data "over the wire" for both 16-bit and 32-bit clients with the encryption option of the Multi-Protocol Net-Library.
SQL Server relies on the Microsoft Windows NT RPC API to do the encryption of network traffic. Windows NT RPC uses 40-bit RC4 encryption, which is the maximum allowed for export, so there are no differences between US and International versions.
For more information about this topic and other related issues, see the Microsoft Knowledge Base article Q132224, "Encryption Algorithm in the Multi-Protocol Net Library."
12. How do I install and use Microsoft SQL Server 6.0 replication?
An article available in the Microsoft Knowledge Base provides a reference for some of the most common issues customers can encounter when installing and using Microsoft SQL Server 6.0 Replication. This article is not a replacement for the SQL Server 6.0 documentation.
All of the concepts, terms, and topics below are documented in the SQL Server Administrator's Companion, Part 6, "Replication" (both printed and in Books Online). This section of the manual is the most complete reference for replication issues, questions, and troubleshooting. Anyone who is planning to install and use SQL Server 6.0 replication should thoroughly read all chapters in this section of the manual.
For additional information on how to install and use Microsoft SQL Server 6.0 replication, see the "SQL Server 6.0 Replication Concepts," "SQL Server 6.0 Setting Up Replication," and "SQL Server 6.0 Advanced Replication" whitepapers on TechNet.
