Dimant DataBase Solutions

Wednesday, May 14, 2014

Recently I worked on a package that loops through many text files, inserts the data into a staging table, and then moves it from staging into the original table, which has only five columns, each defined as VARCHAR(50). Pretty simple, no? The text files contain a lot of data, and the staging table the client defined has many columns (90+) with a very wide data type (VARCHAR(3000)). He actually needs only a few of those columns, but who cares, right? We observed that memory usage grew rapidly during package execution, and very quickly the SQL Server service became unavailable. You can imagine that during this spike nobody could work, and the local DBA started getting complaints from the end users. I should mention that the server is quite powerful (4 CPUs, 28 GB RAM)... So what's the problem?

At the source component, the estimated size of a row is determined by the maximum sizes of all columns returned by the query (remember, we have 90+ columns defined as VARCHAR(3000)). This is where the performance problem resides.

After some investigation we dropped the wide staging table and created a new one with exactly the same structure as the original table, meaning all columns are VARCHAR(50). We also changed the job schedule to run more frequently, so each run deals with a smaller data set. Performance improved drastically: memory no longer grows without being released, and there are no more complaints from the end users.
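To make the change concrete, here is a sketch of the two staging tables. All table and column names are hypothetical; the point is the contrast between the wide definition the client had and the narrow one that mirrors the destination:

```sql
-- Before: wide staging table the client defined.
-- With 90+ VARCHAR(3000) columns, SSIS estimates a huge row size
-- and allocates buffers accordingly, which is what ate the memory.
CREATE TABLE dbo.StageWide (
    Col1  VARCHAR(3000),
    Col2  VARCHAR(3000),
    /* ... 90+ more VARCHAR(3000) columns ... */
    Col95 VARCHAR(3000)
);

-- After: staging table with exactly the same structure as the
-- destination table -- only five columns, at their real width.
CREATE TABLE dbo.StageNarrow (
    Col1 VARCHAR(50),
    Col2 VARCHAR(50),
    Col3 VARCHAR(50),
    Col4 VARCHAR(50),
    Col5 VARCHAR(50)
);
```

The narrow table shrinks the estimated row size by orders of magnitude, so the data flow buffers stay small.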

Please read this article if you work with SSIS packages, especially ones that handle large data sets.
http://technet.microsoft.com/en-us/library/cc966529.aspx

Wednesday, January 1, 2014

We know that a heap is a table without a clustered index. It may have many nonclustered indexes (NCIs), but it is still considered a heap. So consider a huge heap table that is fragmented. How would you fix it? Technically, you cannot defragment a heap. But remember, we have the ALTER TABLE ... REBUILD command, which works very well on heaps. There is one big "but", though: if the table has many NCIs, this command rebuilds all of them at once. What does that mean? It can produce transaction-log bloat, which hurts overall performance. Think about any process that scans the log, or perhaps a Log Shipping job that needs to copy the log file to a remote server. All of this affects performance.

So, to answer the question, we need to create a clustered index on that table. That is the simple answer.
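The two options above can be sketched like this (table and column names are hypothetical):

```sql
-- Option 1: rebuild the heap in place. This removes fragmentation,
-- but it also rebuilds every nonclustered index on the table at once,
-- which can bloat the transaction log on a big table.
ALTER TABLE dbo.BigHeap REBUILD;

-- Option 2 (the answer): give the table a clustered index. Building it
-- reorganizes the data pages, and the table is no longer a heap, so the
-- fragmentation does not come back the same way.
CREATE CLUSTERED INDEX CIX_BigHeap_Id
    ON dbo.BigHeap (Id);
```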

Friday, May 10, 2013

Hi friends,
I recently had a client with a table of more than 90 columns, and his requirement was to return all columns that are NOT NULL. Starting with straightforward T-SQL, you would need to filter each column for NULLs, like WHERE col1 IS NOT NULL OR col2 IS NOT NULL..., or perhaps even use aggregation to eliminate the NULLs. But I found a pretty nice solution, so take a look at the DDL below. We have a table with a number of columns (Day1...Dayn) to be checked for NULLs. I used a simple UNPIVOT, which rotates the columns of a single row into multiple rows and eliminates the NULLs along the way.
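The original DDL did not survive in this copy of the post, so here is a minimal reconstruction of the idea, with hypothetical table and column names (three DayN columns instead of 90+). UNPIVOT skips NULL values by design, which is exactly the filtering the client wanted:

```sql
-- Hypothetical sample table: some DayN columns are NULL per row.
CREATE TABLE dbo.Schedule (
    Id   INT,
    Day1 VARCHAR(50),
    Day2 VARCHAR(50),
    Day3 VARCHAR(50)
);

INSERT INTO dbo.Schedule (Id, Day1, Day2, Day3)
VALUES (1, 'Gym', NULL, 'Pool');

-- UNPIVOT rotates the DayN columns into rows and, as a side effect,
-- drops the NULLs: only the non-NULL columns come back.
SELECT Id, DayName, DayValue
FROM dbo.Schedule
UNPIVOT (DayValue FOR DayName IN (Day1, Day2, Day3)) AS u;
-- Returns: (1, 'Day1', 'Gym') and (1, 'Day3', 'Pool')
```

With 90+ columns the query is the same shape, just with a longer IN (...) list; no per-column IS NOT NULL predicates are needed.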

Monday, April 23, 2012

Hi everyone. Today I successfully upgraded our production database to the new version, SQL Server 2012. Everything went OK: after running Upgrade Advisor, I restored the database onto a new server. The "challenge" was to upgrade the existing SSRS reports and SSIS packages. What I would recommend is to create a new project in SQL Server Data Tools (yes, BIDS is gone) and add the reports to the project one by one. SQL Server upgraded them automatically the first time I ran them. Another thing is that SSRS is now much easier to configure, even if you chose not to configure it during installation. I have not noticed any performance degradation since we moved from SQL Server 2005. So let's enjoy the new features that were introduced, and happy working with SQL Server 2012.

Tuesday, March 6, 2012

Microsoft announced that SQL Server 2012 has been released to manufacturing (RTM). Customers and partners can download an evaluation of the product today (http://www.microsoft.com/sqlserver/en/us/default.aspx) and can expect general availability to begin on April 1.

About Me

The goal of this blog is to share my knowledge as a SQL Server DBA/Developer. Many years ago I started collecting tips and tricks from SQL Server experts around the world, as well as scripts I developed myself, and I really want to share them with you. My intent is to help beginners as well as experienced people, as I have been doing on the Microsoft forums. So please do not hesitate to write me comments or questions.