Tuesday, June 24, 2014

If you have gone through Windows services, specifically SQL Server services, either through SQL Server Configuration Manager or the Windows Services applet, you might have noticed that some services can be paused in addition to being stopped or restarted. What happens when the service is paused?

If you pause the SQL Server service, existing connections are not affected, but new connections are not allowed. Users who have been working with databases can continue until they disconnect. This is generally useful when you need to shut down the service without disturbing existing users’ activities. Since a paused service does not allow new connections, no user can establish a new connection, and once all existing connections are disconnected, the server can be safely shut down.
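As a quick sketch, the service can be paused and resumed from an elevated command prompt using the standard Windows service commands (the service name MSSQLSERVER below assumes a default instance; a named instance would use MSSQL$InstanceName):

```
:: Pause: existing connections keep working, new connections are refused
net pause MSSQLSERVER

:: Resume accepting new connections
net continue MSSQLSERVER

:: Safe shutdown once all users have disconnected
net stop MSSQLSERVER
```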

Tuesday, June 17, 2014

This is a common question I get in my classes and workshops. Do we really get any benefit from a database with multiple data files on one physical drive? Obviously there are no performance benefits, but in certain situations it does give you some benefits;

Taking partial backups, file backups and file-group backups
Although the files are not distributed across multiple physical drives, partial backups, file backups and file-group backups are still possible. Assume that your database is very large and some tables are infrequently updated; you can then separate the tables into two file groups, using multiple data files: one group for tables that are frequently updated and the other for tables that are infrequently updated (or read-only tables). This allows you to have a time-saving and efficient backup strategy, setting different backup plans for the two file groups.
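As a minimal sketch of this idea (the database name SalesDB, the file group name Archive and the file paths below are hypothetical):

```sql
-- Add a second file group with its own data file on the same drive
ALTER DATABASE SalesDB ADD FILEGROUP Archive;
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_Archive', FILENAME = 'D:\Data\SalesDB_Archive.ndf')
    TO FILEGROUP Archive;
GO

-- Back up only the frequently-updated PRIMARY file group;
-- the infrequently-updated Archive file group can follow a slower schedule
BACKUP DATABASE SalesDB
    FILEGROUP = 'PRIMARY'
    TO DISK = 'E:\Backup\SalesDB_Primary.bak';
```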

Restoration onto smaller multiple physical drives
Assume that you maintain a database with a single 100GB data file. What if the existing drive crashes and you get two 50GB physical drives for restoring the database from one of the backups? As you know, it is not possible, but if the database is maintained with multiple data files, even on a single physical drive, a scenario like this can be easily addressed.
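A sketch of such a restore, assuming the database was built with two data files (all names and paths below are hypothetical), placing each file on a different drive with the MOVE option:

```sql
-- Restore a multi-file database, redirecting each file to a separate 50GB drive
RESTORE DATABASE SalesDB
    FROM DISK = 'E:\Backup\SalesDB_Full.bak'
    WITH MOVE 'SalesDB_Data1' TO 'F:\Data\SalesDB_Data1.mdf',
         MOVE 'SalesDB_Data2' TO 'G:\Data\SalesDB_Data2.ndf',
         MOVE 'SalesDB_Log'   TO 'F:\Log\SalesDB_Log.ldf',
         RECOVERY;
```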

Moving files onto another drive
For various reasons, either the entire database or part of the database may need to be moved to a different physical drive. Some of the reasons could be: moving from a test environment to a production environment, moving some file groups to optimized physical drives, or moving the entire database to a different location. If the database is constructed with multiple files, moving files in the above scenarios can be done with less effort.
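A sketch of relocating a single data file (names and paths are hypothetical; the file must be copied at the operating-system level while the database is offline):

```sql
-- Take the database offline before moving the physical file
ALTER DATABASE SalesDB SET OFFLINE;

-- Copy SalesDB_Archive.ndf to the new drive using the OS, then point
-- the database at the new location:
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = 'SalesDB_Archive', FILENAME = 'F:\Data\SalesDB_Archive.ndf');

ALTER DATABASE SalesDB SET ONLINE;
```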

Thursday, June 12, 2014

You might have noticed that the data type of object names, such as database names, is “sysname”, which is not listed in the data type list when creating a table via the table designer. This data type can be clearly seen when creating objects using templates (e.g., the CREATE DATABASE template). What is this sysname data type? Can we use it? Should we use it?

Sysname is a built-in system data type defined as nvarchar(128). In older versions (up to 6.5, if I am not mistaken; to be verified) it was equivalent to varchar(30). It is a non-nullable data type mainly used to store object names. It is not listed in the data-type list that appears when creating tables via the table designer; however, it can be used to define columns when tables are created using T-SQL. Have a look at the following code;

-- using sysname for variables
DECLARE @TestVariable sysname = 'Test Value'
SELECT @TestVariable
GO

-- using sysname for creating tables
CREATE TABLE dbo.TestTable
(
    Id int PRIMARY KEY
    , Name sysname
)
GO

If your system is configured with case-sensitive collation, make sure you type the word in lowercase.

As you can see, it can be used in our code. However, many recommend not using it because: 1) nvarchar(128) can simply be used instead; 2) its definition could change in a future version. Therefore, even though it is possible to use the sysname data type in T-SQL, it is better to avoid it.
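As a quick check, the definition of sysname on the current instance can be inspected through the sys.types catalog view:

```sql
-- sysname appears as a distinct system type
SELECT name, max_length, is_nullable
FROM sys.types
WHERE name = 'sysname';
-- max_length 256 corresponds to nvarchar(128) (2 bytes per character),
-- and is_nullable 0 confirms it is non-nullable
```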

While trying to get some records from a database configured on a Windows Azure virtual machine (NOT SQL Azure), I noticed that the connection can be made using Power Query but not Power Pivot. Power Query has an option for connecting to a Windows Azure SQL Database using “From Windows Azure SQL Database”, as shown in the images below;

However, there is no such option with Power Pivot. If you try to connect using “From SQL Server”, you will get an error as below;

“Unable to retrieve list of databases. Reason: Failed to connect to the server. Reason: Login timeout expired A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. Named Pipes Provider: Could not open a connection to SQL Server [53]. ”

The error indicates that Excel tries to establish the connection using the Named Pipes protocol, which is not valid in this context. Though this can be considered the reason for the error, there is no option for changing the protocol to TCP/IP, which makes establishing the connection impossible. However, I found a workaround for connecting to the Windows Azure SQL Database: instead of the default provider, “SQL Server Native Client 11.0”, use the “Microsoft OLE DB Provider for SQL Server”. All we have to do is open the Advanced window by clicking the “Advanced” button and change the Provider to Microsoft OLE DB Provider for SQL Server.
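For reference, the resulting connection string follows this general shape (the server name, database and credentials below are placeholders, not real values):

```
Provider=SQLOLEDB.1;Data Source=yourserver.cloudapp.net;Initial Catalog=YourDatabase;User ID=YourLogin;Password=YourPassword;
```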

I still cannot figure out the exact reason for this and how to connect using SQL Server Native Client. If you have done it, please share.

Tuesday, June 10, 2014

As you know, some components, such as Management Studio (SSMS) and Configuration Manager, are shared components of SQL Server instances, and they are upgraded when an in-place upgrade is performed. However, they are not supposed to get upgraded by a side-by-side upgrade unless a certain criterion is met.

In-place and side-by-side upgrades
An in-place upgrade refers to an installation of a newer version of SQL Server that completely replaces an existing instance, including shared components. The key advantage is the lower requirement for additional hardware resources; the key disadvantage is the inability to roll back if the installation fails. A side-by-side upgrade refers to an installation of a newer version of SQL Server as a new instance, without replacing the older version installed. This is the safest way of upgrading a system because it reduces the risk associated with upgrades, leaving the original system in place and intact. However, there are a few disadvantages, such as having to copy user-defined databases manually and having to configure security and jobs manually.

A side-by-side upgrade is not supposed to replace the shared components of the older version installed. However, a replacement can happen if the major build number of the old instance and the major build number of the new instance are identical. This condition rarely occurs, but it has been seen with SQL Server 2008 and 2008 R2. If you perform an installation of SQL Server 2008 R2 (version 10.50) as a side-by-side upgrade for SQL Server 2008 (version 10.00), though it is not an in-place upgrade, the new installation will replace all shared components of the SQL Server 2008 instance.
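Before planning a side-by-side upgrade, the version of the existing instance can be checked with SERVERPROPERTY, for example:

```sql
-- The first number of ProductVersion is the major build number
-- (e.g. 10.0.x for SQL Server 2008, 10.50.x for 2008 R2)
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;
```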

Monday, June 9, 2014

Are you trying to get the total number of records in one of your tables? If yes, what is the best, or what is the quickest way of getting the number?

Generally, we use the COUNT aggregate function for retrieving the record count of a table. This does not take much time if the table contains a low number of records. However, when COUNT is run against a table that contains millions of records, it can take a noticeable time. In a situation like that, if the table is a clustered table, the required result can be retrieved in less time using the sys.sysindexes system catalog view.

The following query runs against a table called dbo.FactSales in the ContosoRetailDW database (one of Microsoft’s sample databases). This table is a clustered table and it contains about 3.4 million records. Have a look at the query and the result;

USE [ContosoRetailDW]
GO

DECLARE @CurrentTime AS datetime = GetDate()

SELECT Count(*) NumberOfRecords
FROM dbo.FactSales;

SELECT Cast(DateDiff(SECOND, @CurrentTime, GetDate()) AS varchar)
    + ','
    + Cast(DateDiff(MILLISECOND, @CurrentTime, GetDate()) AS varchar)
    + ','
    + Cast(DateDiff(MICROSECOND, @CurrentTime, GetDate()) AS varchar)
    AS ExecutionTime
GO

DECLARE @CurrentTime AS datetime = GetDate()

SELECT [rows] NumberOfRecords
FROM sys.sysindexes
WHERE id = OBJECT_ID('dbo.FactSales') AND indid = 1

SELECT Cast(DateDiff(SECOND, @CurrentTime, GetDate()) AS varchar)
    + ','
    + Cast(DateDiff(MILLISECOND, @CurrentTime, GetDate()) AS varchar)
    + ','
    + Cast(DateDiff(MICROSECOND, @CurrentTime, GetDate()) AS varchar)
    AS ExecutionTime

As you can see, the second method produces the result in virtually no measurable time, while the first method takes about 76 milliseconds. This shows that querying sys.sysindexes is more efficient than COUNT when querying a clustered table. However, there is a chance of seeing an inaccurate number with this method if statistics are not up to date. In addition, if the table is a heap, the same performance may not be seen with the second method.
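Note that sys.sysindexes is a backward-compatibility view. On current versions, the same row count can be read from sys.partitions; a sketch, assuming the same FactSales table:

```sql
-- Works for both clustered tables (index_id = 1) and heaps (index_id = 0);
-- SUM handles partitioned tables, which have one row per partition
SELECT SUM(p.rows) AS NumberOfRecords
FROM sys.partitions p
WHERE p.object_id = OBJECT_ID('dbo.FactSales')
  AND p.index_id IN (0, 1);
```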

Friday, June 6, 2014

If you have a requirement for connecting to a SQL Server virtual machine configured on Windows Azure, there are a few things to be done. The following link explains, step by step, the tasks necessary for connecting via a local Management Studio.

Tuesday, June 3, 2014

The LRU, or Least Recently Used, algorithm determines the data pages to be dropped from the cache when the Data Cache section, which is the largest part of the Buffer Pool, needs space.

The Data Cache section is used for caching database data pages. In order to satisfy users’ requests, SQL Server loads data pages into this section and then fulfills requests such as SELECT, INSERT, UPDATE and DELETE. No modifications are performed on the data files directly. Once pages are cached, all requests other than the initial one are satisfied using the cached pages too.

In addition to the Data Cache section, the Buffer Pool comprises two more sections: Free Pages and Stolen Pages. The first section contains pages that are not used yet and are kept to handle new requests. The second section holds pages that are used by other SQL Server components.

The LRU policy determines how long data pages stay in the Data Cache after they have been flushed to disk by the Checkpoint process. The process that drops pages from the cache is called the Lazy Writer. The decision to drop a page is based on a counter maintained in the page header, which is decreased if the page has not been scanned for a while. Pages with the lowest counter are always dropped by the Lazy Writer.
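As a sketch, the pages currently held in the Data Cache can be inspected per database through the sys.dm_os_buffer_descriptors dynamic management view:

```sql
-- Approximate data-cache usage per database (pages are 8 KB each)
SELECT DB_NAME(database_id) AS DatabaseName,
       COUNT(*)             AS CachedPages,
       COUNT(*) * 8 / 1024  AS CachedMB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY CachedPages DESC;
```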

“As per MSDN, the maximum memory that can be consumed by SQL Server is 8TB. Will SQL Server surely use 8TB if the box is configured with 8TB?” This was a question raised by an enthusiast.

“It will, but currently it is impossible” was my answer. Do you agree with me? Let me explain.

Virtual Address Space, commonly called VAS, refers to the total amount of memory that can be used by a Windows process or an application such as SQL Server. The size of the VAS varies according to the processor architecture (32-bit or 64-bit) and the edition of the operating system. If the processor architecture is 32-bit, the VAS is 4GB. Under a standard installation, 2GB is reserved for the system (kernel mode address space) and 2GB for the application (user mode address space). This proportion can be changed by adding the /3GB switch to boot.ini, setting 3GB for the user mode address space.

This is a bit different when a 32-bit version of SQL Server is installed on a 64-bit version of Windows. This installation provides the full 4GB to SQL Server using the WOW64 emulation technology.

The amount “8TB” comes with the 64-bit processor architecture. This architecture offers a full 8TB of user mode address space to applications, hence a 64-bit version of SQL Server running on 64-bit Windows should be able to use 8TB without any issue. However, this is not possible yet because no Windows operating system supports that much memory; the maximum amount of RAM supported by Windows Server 2012 is 4TB. Therefore, we need to wait to see SQL Server using the full 8TB of user address space :).
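As a sketch, the physical memory and virtual address space visible to the current instance can be checked with sys.dm_os_sys_info (these column names apply to SQL Server 2012 and later):

```sql
-- Physical memory on the box vs. the virtual address space available to the process
SELECT physical_memory_kb / 1024 AS PhysicalMemoryMB,
       virtual_memory_kb / 1024  AS VirtualAddressSpaceMB
FROM sys.dm_os_sys_info;
```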


About Me

Dinesh Priyankara (MSc IT) is an MVP – Data Platform (Microsoft Most Valuable Professional) in Sri Lanka with 16 years’ experience in various aspects of database technologies including business intelligence. He has done many local and foreign business intelligence implementations and has worked as a subject matter expert on various database and business intelligence related projects. He is the Founder/Principal Architect of dinesQL (Pvt) Ltd and he consults, teaches and runs training programs on data related solutions and subjects.