I have seen plenty of articles and blog posts out there for how to setup and implement table partitioning, but very few for removing or undoing it. So I thought I would cover a few ways to accomplish this while still preserving the data.

There could be many reasons for removing partitioning: no longer needed, need to change partitioned tables, etc. But for our example, we just want to completely remove it from all tables. It would be nice if we could just drop the partition function and partition schema and SQL Server would handle all the rest, but it just isn’t that simple and that’s why DBAs were created.

The Problem – We have two partitioned tables (PartitionTable1 & PartitionTable2) split across four filegroups. We need to remove partitioning from the tables, remove the four files and filegroups, and then move all data to the PRIMARY filegroup without losing any data.

Sample Database – Start by creating a test database with a few filegroups and add some data files to those filegroups.

USE master;
GO

-- Create a test database.
CREATE DATABASE PartitionTest
ON PRIMARY (
    NAME = N'PartitionTest'
    ,FILENAME = N'D:\MSSQL11.TEST1\MSSQL\DATA\PartitionTest.mdf'
    ,SIZE = 25MB, FILEGROWTH = 25MB)
LOG ON (
    NAME = N'PartitionTest_log'
    ,FILENAME = N'D:\MSSQL11.TEST1\MSSQL\DATA\PartitionTest_log.ldf'
    ,SIZE = 25MB, FILEGROWTH = 25MB);
GO

USE PartitionTest;
GO

-- Add four new filegroups to the PartitionTest database.
ALTER DATABASE PartitionTest ADD FILEGROUP PartitionFG1;
GO
ALTER DATABASE PartitionTest ADD FILEGROUP PartitionFG2;
GO
ALTER DATABASE PartitionTest ADD FILEGROUP PartitionFG3;
GO
ALTER DATABASE PartitionTest ADD FILEGROUP PartitionFG4;
GO

-- Add one file to each filegroup.
ALTER DATABASE PartitionTest
ADD FILE (
    NAME = PartitionFile1,
    FILENAME = 'D:\MSSQL11.TEST1\MSSQL\DATA\PartitionFile1.ndf',
    SIZE = 25MB, MAXSIZE = 100MB, FILEGROWTH = 5MB
) TO FILEGROUP PartitionFG1;
GO

ALTER DATABASE PartitionTest
ADD FILE (
    NAME = PartitionFile2,
    FILENAME = 'D:\MSSQL11.TEST1\MSSQL\DATA\PartitionFile2.ndf',
    SIZE = 25MB, MAXSIZE = 100MB, FILEGROWTH = 5MB
) TO FILEGROUP PartitionFG2;
GO

ALTER DATABASE PartitionTest
ADD FILE (
    NAME = PartitionFile3,
    FILENAME = 'D:\MSSQL11.TEST1\MSSQL\DATA\PartitionFile3.ndf',
    SIZE = 25MB, MAXSIZE = 100MB, FILEGROWTH = 5MB
) TO FILEGROUP PartitionFG3;
GO

ALTER DATABASE PartitionTest
ADD FILE (
    NAME = PartitionFile4,
    FILENAME = 'D:\MSSQL11.TEST1\MSSQL\DATA\PartitionFile4.ndf',
    SIZE = 25MB, MAXSIZE = 100MB, FILEGROWTH = 5MB
) TO FILEGROUP PartitionFG4;
GO

Create our partition function and then our partition scheme.

-- Creates a partition function called myRangePF1 that will partition a table into four partitions.
CREATE PARTITION FUNCTION myRangePF1 (int)
AS RANGE LEFT FOR VALUES (500, 1000, 1500);
GO

-- Creates a partition scheme called myRangePS1 that applies myRangePF1 to the four filegroups created above.
CREATE PARTITION SCHEME myRangePS1
AS PARTITION myRangePF1
TO (PartitionFG1, PartitionFG2, PartitionFG3, PartitionFG4);
GO

Create the partitioned tables on the partition scheme; one (PartitionTable1) with a clustered index and one (PartitionTable2) with a non-clustered index.

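The CREATE TABLE statements would look something like this. This is a sketch that assumes each table has a single int column, col1, used as the partitioning key, with index names matching those referenced later in this post:

```sql
-- Creates a partitioned table called PartitionTable1 with a clustered index.
CREATE TABLE dbo.PartitionTable1 (col1 int NOT NULL)
ON myRangePS1 (col1);
GO
CREATE CLUSTERED INDEX [PK_col1]
ON dbo.PartitionTable1 (col1)
ON myRangePS1 (col1);
GO

-- Creates a partitioned table called PartitionTable2 with a non-clustered index.
CREATE TABLE dbo.PartitionTable2 (col1 int NOT NULL)
ON myRangePS1 (col1);
GO
CREATE NONCLUSTERED INDEX [IX_col1]
ON dbo.PartitionTable2 (col1)
ON myRangePS1 (col1);
GO

-- Load 2000 rows into each table, which yields 500 rows per partition
-- given the boundary values 500, 1000, and 1500.
;WITH n AS (
    SELECT TOP (2000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.PartitionTable1 (col1) SELECT i FROM n;

INSERT INTO dbo.PartitionTable2 (col1)
SELECT col1 FROM dbo.PartitionTable1;
GO
```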

Here we can see both PartitionTable1 and PartitionTable2 are evenly split with 500 rows in each of the four partitions and each in a separate filegroup.
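The layout can be confirmed with a query along these lines, which joins through sys.destination_data_spaces to map each partition to its filegroup:

```sql
-- Row counts per partition, with the filegroup each partition lives on.
SELECT
     o.name
    ,p.partition_number
    ,p.rows
    ,fg.name AS filegroup_name
FROM sys.partitions p
JOIN sys.objects o ON p.object_id = o.object_id
JOIN sys.indexes i ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN sys.destination_data_spaces dds
    ON i.data_space_id = dds.partition_scheme_id
   AND p.partition_number = dds.destination_id
JOIN sys.filegroups fg ON dds.data_space_id = fg.data_space_id
WHERE o.name IN ('PartitionTable1', 'PartitionTable2')
ORDER BY o.name, p.partition_number;
```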

Within SSMS, you can also see each table is showing the partition scheme and the four partitions.

Solution for PartitionTable1 – This table has a clustered index which makes our solution pretty easy.

Since we have a partitioned clustered index, we can remove partitioning from this table by executing a single statement: CREATE INDEX with the DROP_EXISTING option, specifying a different filegroup. This drops the current partitioned index (which includes the data) and recreates it on the PRIMARY filegroup, all within a single command.

-- Quick and easy way to unpartition the table and move it.
CREATE CLUSTERED INDEX [PK_col1]
ON [dbo].[PartitionTable1] ([col1])
WITH (DROP_EXISTING = ON)
ON [PRIMARY];
GO

Now query the sys.partitions catalog view again and you will see that PartitionTable1 no longer shows up; only PartitionTable2 remains.

Once again in SSMS, you can see that PartitionTable1 now resides on the PRIMARY filegroup and its data remains intact.

Solution for PartitionTable2 – We can't use the previous index trick on this table because it doesn't have a clustered index. For this solution, we'll need a few ALTER commands: MERGE RANGE, NEXT USED, SPLIT RANGE, and SWITCH.

First, we need to use the ALTER PARTITION FUNCTION MERGE RANGE command to combine the four partitions into a single partition. MERGE RANGE removes the boundary point between the specified partitions.

-- Merge all partitions into a single partition.
ALTER PARTITION FUNCTION myRangePF1() MERGE RANGE (500);
GO
ALTER PARTITION FUNCTION myRangePF1() MERGE RANGE (1000);
GO
ALTER PARTITION FUNCTION myRangePF1() MERGE RANGE (1500);
GO

Query the sys.partitions catalog view again, and you will see that all 2000 rows have been merged into a single partition that now resides on the PartitionFG4 filegroup.

Next, we need to use ALTER PARTITION SCHEME NEXT USED to mark the PRIMARY filegroup as the one that will hold the next partition.

-- Create the next partition on PRIMARY.
ALTER PARTITION SCHEME myRangePS1 NEXT USED [PRIMARY];
GO

Then we need to use ALTER PARTITION FUNCTION SPLIT RANGE with a boundary value larger than the maximum value of your partition column. In our example, since we're using a RANGE LEFT partition function, any value greater than or equal to 2000 will do the trick. The SPLIT RANGE command creates a new boundary in the partitioned table.

-- Split the single partition into two separate ones to push all data to the PRIMARY filegroup.
ALTER PARTITION FUNCTION myRangePF1() SPLIT RANGE (2000);
GO

Query the sys.partitions catalog view once again. You can see that PartitionTable2 is still split into two partitions, but all 2000 rows now reside in the PRIMARY filegroup.

At this point we're only halfway done. Now we need to create a non-partitioned table on the PRIMARY filegroup that matches PartitionTable2 exactly: same columns, data types, indexes, constraints, etc. This new table will only be used as a temporary holding location for the data.
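For example, a sketch assuming the same single-column schema used above; the key point is that the table and its index must mirror PartitionTable2, minus the partitioning:

```sql
-- Temporary, non-partitioned holding table on the PRIMARY filegroup.
CREATE TABLE dbo.NonPartitionTable (col1 int NOT NULL)
ON [PRIMARY];
GO
CREATE NONCLUSTERED INDEX [IX_col1]
ON dbo.NonPartitionTable (col1)
ON [PRIMARY];
GO
```

Placing the table on PRIMARY matters: ALTER TABLE SWITCH requires the target table to live on the same filegroup as the source partition, which is why we pushed all the data to PRIMARY first.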

Next we’ll use the ALTER TABLE SWITCH command to move the 2000 rows of data into the NonPartitionTable.

-- Switch the partitioned data into the temporary table.
ALTER TABLE PartitionTable2 SWITCH PARTITION 1 TO NonPartitionTable;
GO

Query the sys.partitions catalog view again to see that there are now zero rows in PartitionTable2.

The SWITCH command is very efficient because it only makes a metadata change. Under the covers, no data is actually moved; it simply reassigns the partition_id of PartitionTable2 to the NonPartitionTable object_id. If you want to see the action up close, run this script before and after the SWITCH command: the 2000 rows of data never leave their partition_id. In our case, the data never left partition_id 72057594040156160.

SELECT
     o.name
    ,o.object_id
    ,p.index_id
    ,p.partition_id
    ,p.partition_number
    ,p.rows
FROM sys.objects o
JOIN sys.partitions p ON o.object_id = p.object_id
WHERE o.name IN ('PartitionTable2', 'NonPartitionTable')
ORDER BY o.name, p.partition_number;
GO

Before:

After:

Now that all the data has been moved to the temporary table, we can drop PartitionTable2 and rename the temporary table back to the original name.

-- Drop the partitioned table.
DROP TABLE PartitionTable2;
GO

-- Rename the temporary table to the original name.
EXEC sp_rename 'dbo.NonPartitionTable', 'PartitionTable2', 'OBJECT';
GO

At this point, PartitionTable2 is no longer partitioned.

Partitioning has now been completely removed from both PartitionTable1 and PartitionTable2. We can drop the remaining pieces (partition scheme, partition function, files, and filegroups) to complete the cleanup.

-- Remove the partition scheme, function, files, and filegroups.
DROP PARTITION SCHEME myRangePS1;
GO
DROP PARTITION FUNCTION myRangePF1;
GO

ALTER DATABASE [PartitionTest] REMOVE FILE PartitionFile1;
ALTER DATABASE [PartitionTest] REMOVE FILE PartitionFile2;
ALTER DATABASE [PartitionTest] REMOVE FILE PartitionFile3;
ALTER DATABASE [PartitionTest] REMOVE FILE PartitionFile4;
GO

ALTER DATABASE [PartitionTest] REMOVE FILEGROUP PartitionFG1;
ALTER DATABASE [PartitionTest] REMOVE FILEGROUP PartitionFG2;
ALTER DATABASE [PartitionTest] REMOVE FILEGROUP PartitionFG3;
ALTER DATABASE [PartitionTest] REMOVE FILEGROUP PartitionFG4;
GO

What we're left with is a completely unpartitioned database, with all rows of data in each table intact.

During your career as a DBA, you'll run across articles by SQL experts or other DBAs that spark an idea. A few years ago I found an article, or more precisely a poster, that explains SQL Server performance counters. It was written by Kevin Kline, Brent Ozar, Christian Bolton, Bob Ward, Rod Colledge, and Raoul Illyaos.

I thought this would be a nice poster for any DBA to hang up in their cubicle. But as I read down near the bottom, I saw they also mentioned getting performance counters from within SQL Server via sys.dm_os_performance_counters. Aha! My light bulb turned on!

In an earlier post, I discussed how to get performance counters from within SQL Server, so for this post I want to share a stored procedure that pulls the counters described in the poster, as well as a few more.

The procedure is sp_PerformanceCounters, and it aggregates the overall performance data available in the sys.dm_os_performance_counters DMV. The metrics are gathered for various counters and objects and displayed in seven columns:

PerformanceObject – The counter category.

CounterName – Name of the counter.

InstanceName – The specific instance of the counter; often the database name.

TimeFrame – The timeframe of the counter.

ActualValue – The value of the counter.

IdealValue – A generalized optimal value for the counter.

Description – A brief description of the counter.

The most important thing to understand about the output is the timeframe of each metric and how it impacts the actual value. There are three possible TimeFrames: Current, Total since SQL startup, and Avg since SQL startup.

The "Current" TimeFrames are counter type 65792: absolute values that need no special interpretation. For example, you can select the number of database pages in the buffer pool, and the value returned is the current value.

SELECT * FROM sys.dm_os_performance_counters
WHERE object_name = 'MSSQL$TEST1:Buffer Manager'
AND counter_name = 'Database Pages';

The second TimeFrame, "Total since SQL startup", is also easy to interpret. This counter is also of type 65792; it's just the accumulated total since SQL Server was started. For example, selecting Log Growths from the Databases object tells you the total number of log growths for all databases since SQL Server was last started.

SELECT * FROM sys.dm_os_performance_counters
WHERE object_name = 'MSSQL$TEST1:Databases'
AND counter_name = 'Log Growths'
ORDER BY instance_name;

The last and most common TimeFrame is "Avg since SQL startup", used for counters of type 272696576. The value of these counters is incremented every time the corresponding event fires, and they are mostly "per second" counters. To figure out the per-second value, we have to divide by the total uptime (in seconds) of SQL Server. This gives us the output displayed in the ActualValue column, and it's also one reason why the SQL Server startup time is the first row displayed in the output. The startup time is taken from the sys.dm_os_sys_info DMV.

SELECT * FROM sys.dm_os_performance_counters
WHERE object_name = 'MSSQL$TEST1:SQL Statistics'
AND counter_name = 'Batch Requests/sec';
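Putting the two pieces together, dividing the cumulative counter by the seconds of uptime gives the average. This sketch assumes SQL Server 2008 or later, where sys.dm_os_sys_info exposes sqlserver_start_time:

```sql
-- Average batch requests per second since startup:
-- cumulative counter value divided by total uptime in seconds.
SELECT p.cntr_value * 1.0
       / DATEDIFF(SECOND, i.sqlserver_start_time, GETDATE()) AS [Avg Batch Requests/sec]
FROM sys.dm_os_performance_counters p
CROSS JOIN sys.dm_os_sys_info i
WHERE p.object_name = 'MSSQL$TEST1:SQL Statistics'
AND p.counter_name = 'Batch Requests/sec';
```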

Most counter names will match exactly what you see in sys.dm_os_performance_counters; however, a few are calculated as a ratio. One example is Page Lookups / Batch Requests, which shows the average number of page lookups that occurred per batch request. Both of these individual counters are of type 272696576, which means they are "per second" counters. For this ratio, however, we don't care about the per-second values; we just want the total number of Page Lookups divided by the total number of Batch Requests.

DECLARE @TempValue1 DECIMAL(25,5), @TempValue2 DECIMAL(25,5);

SELECT @TempValue1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name = 'MSSQL$TEST1:Buffer Manager'
AND counter_name = 'Page lookups/sec';

SELECT @TempValue2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name = 'MSSQL$TEST1:SQL Statistics'
AND counter_name = 'Batch Requests/sec';

-- This is to avoid divide by zero.
IF @TempValue2 <> 0
    SELECT @TempValue1 / @TempValue2 AS 'Page lookups/Batch Requests';
ELSE
    SELECT 0;

All aggregated data is stored in a temporary table, #PerformanceCounters, and then displayed at the end. It's best viewed using the "Results to Grid" output in SSMS.

EXEC master.dbo.sp_PerformanceCounters;
GO

Most of the ideal values and descriptions were taken from the poster mentioned above. My disclaimer is that the real ideal value will always depend on your specific application and setup. This stored procedure was created to be used as an overall health check for a server. It's especially useful if another DBA just handed you a new SQL Server to support and you need a quick way to see a brief history of its performance.

The stored procedure was written for SQL Server 2005 and 2008. It will work on SQL Server 2012, but there were a lot of changes to the counters, so I will have an updated, officially supported version for 2012 soon.

For years, actually more like a decade, I've been saying that I'm going to get certified in SQL Server. Well, after years of saying it, I finally did. I passed my first exam, 70-432, this past week. I have to say it was both easier and harder than I thought it would be. Easier, because the content covered in the test was already second nature to me; I had been doing this stuff for years. And harder, because of the way Microsoft asks the questions. Even if you know the concepts behind a particular feature, you still have to know exactly how to use it. Now I'm on to my next exam, 70-450, Designing and Maintaining a Database Administrative Solution Using Microsoft SQL Server 2008.

You may be wondering why I'm spending time getting certified in 2008 instead of going straight for 2012. With the certification retirement date for the 2008 exams fast approaching on July 31, 2013, I thought it would be best to get certified in 2008 and then move on to 2012. It's too late for me to get certified in 2000 or 2005, and I want to make sure I have certifications from multiple versions.

After I told my wife that I passed this exam, she asked, "What is your end goal?" My answer was to earn as many of the SQL Server certifications as I can; I want to be certified at the highest level in SQL Server. It would be extremely satisfying to reach the architect level, though I may not even make it to the master level. Only time will tell. No matter what level I finally obtain, I know I'll keep learning more and more about SQL Server along the way, and that's the REAL end goal.

Have you seen this type of informational message in your SQL Server errorlog before? If so, I hope the timestamp associated with it was not during the middle of your peak processing time. This message is a good indication that someone has been tinkering with the SQL Server configuration settings.

SQL Server offers a lot of wonderful configuration options, many of which can be changed dynamically using sp_configure without restarting the SQL Server service. The downside to dynamic changes is that they can sometimes have a detrimental effect on performance, and that's what this "informational" message is telling you. Let's look at an example.

First, let’s get our starting numbers for the cachestores.

SELECT COUNT(*) AS 'Total Cached Plans'
FROM sys.dm_exec_cached_plans;
GO

SELECT
     name AS 'Cache Name'
    ,single_pages_kb
    ,multi_pages_kb
    ,entries_count
FROM sys.dm_os_memory_cache_counters
WHERE name IN ('Object Plans', 'SQL Plans', 'Bound Trees');
GO

As you can see, we have 160 total plans in cache, along with the single-page and multi-page memory allocations and entry counts for each cachestore.

Now let's say your system administrator came to you today and said he hot-added extra RAM to the SQL Server box because of how much you had been complaining about memory shortages. You're so excited that you log in to the server and execute the following query to bump SQL Server's maximum memory up to 4GB.
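A statement along these lines would make that change (sp_configure takes the value in MB, and 'max server memory (MB)' is an advanced option, so it must be made visible first):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Set max server memory to 4 GB (4096 MB).
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
GO
```

As soon as RECONFIGURE runs, the following messages appear in the errorlog: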

SQL Server has encountered 1 occurrence(s) of cachestore flush for the 'Object Plans' cachestore (part of plan cache) due to some database maintenance or reconfigure operations.
SQL Server has encountered 1 occurrence(s) of cachestore flush for the 'SQL Plans' cachestore (part of plan cache) due to some database maintenance or reconfigure operations.
SQL Server has encountered 1 occurrence(s) of cachestore flush for the 'Bound Trees' cachestore (part of plan cache) due to some database maintenance or reconfigure operations.

Let’s run our cachestore query again and look at the results.

Ah, proof that we just cleared the entire procedure cache. Having these cachestores cleared means you just lost all compiled plans for stored procedures, functions, triggers, views, ad hoc plans, prepared plans, etc. The next time any query runs, its plan will have to be recompiled, costing extra CPU cycles and potentially degrading performance.

As it turns out, for certain configuration settings, executing the RECONFIGURE command essentially does what amounts to DBCC FREEPROCCACHE.

The whole procedure cache is cleared if one of the following server options is changed by the RECONFIGURE statement:

cross db ownership chaining

index create memory (KB)

remote query timeout (s)

user options

max text repl size (B)

cost threshold for parallelism

max degree of parallelism

min memory per query (KB)

query wait (s)

min server memory (MB)

max server memory (MB)

query governor cost limit

This is a very good example of why you should NOT make configuration changes during peak processing times. You wouldn’t run DBCC FREEPROCCACHE during the middle of your peak processing, so why would you play around with configuration settings during that same time?