Friday, December 26, 2008

I think this is my first blog post that does not relate to SQL Server, though why not? I wish SQL Server consultants lots of contracts helping businesses in 2009; I wish SQL Server DBAs plenty of interesting work, and may their databases run smoothly; I wish SQL Server developers efficient queries and solved challenges; and to everybody and their families in 2009 I wish health, happiness and wealth. If you look forward to a good year ahead, spread happiness with these wonderful New Year wishes.

Thursday, December 18, 2008

As you know, SQL Server creates an execution plan for a query or stored procedure and stores it in the procedure cache so it can be reused the next time you run the query. But more and more execution plans in memory will also hurt performance, since we do not have direct control over the procedure cache's size. So what would you do? Let's look at the scenario below, which I ran on a DEV machine (DO NOT run this on production). First of all I ran DBCC FREEPROCCACHE, which clears the procedure cache and causes ad hoc queries to be recompiled. Next I ran what is essentially the same query three times with different parameters.
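The usecounts/size output shown below comes from the plan cache DMVs; a sketch of the kind of query used to produce it (column names are from sys.dm_exec_cached_plans):

```sql
-- Inspect the procedure cache: one row per cached plan, with its use count,
-- size in bytes, and the text of the statement that produced it.
SELECT cp.usecounts,
       cp.size_in_bytes,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```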

/*
usecounts  size_in_bytes  text
1          49152          DECLARE @i INT SET @i = 518 SELECT * FROM Production.WorkOrder WHERE ProductID = @i
1          24576          SELECT * FROM Production.WorkOrder WHERE ProductID=737
1          24576          SELECT * FROM Production.WorkOrder WHERE ProductID=522
*/

As you see, SQL Server created THREE execution plans. Now let's wrap the query in a stored procedure and see what happens.

/*
usecounts  size_in_bytes  text
3          40960          CREATE PROCEDURE spTest @i INT AS SELECT * FROM Production.WorkOrder WHERE ProductID = @i
*/

Wow, we get a single execution plan that SQL Server reused three times. I also recommend reading Tony's blog about the subject: http://sqlblogcasts.com/blogs/tonyrogerson/archive/2007/07/07/procedure-cache-tuning-sizing-from-1gbyte-to-768kbytes-increase-the-size-of-usable-data-cache.aspx

Tony writes in his blog, and I completely agree with him: "Surprised? It's one of the reasons DBA's keep bleating on about using stored procedures, it forces the mindset to use the procedure cache more effectively. By parameterising, but preferably using stored procedures we get plan reuse which means a) less compiles thereby reducing CPU load, b) more available pages for the data cache thereby reducing physical disk IO and c) DBA’s are always right ;)."

PS. Besides using stored procedures, we can use sp_executesql with parameters, as below.
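A sketch of the parameterized call, reusing the WorkOrder query from the example above:

```sql
-- Parameterized ad hoc query: the plan is cached once and reused
-- for every distinct @ProductID value.
EXEC sp_executesql
     N'SELECT * FROM Production.WorkOrder WHERE ProductID = @ProductID',
     N'@ProductID INT',
     @ProductID = 518;
```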

Thursday, December 4, 2008

OK, I'm talking about temporary tables, and yes, it can happen to you even when you think the temporary table no longer exists. Consider the script below, where a stored procedure inserts data of the wrong type, causing a conversion error (error 245). As you probably know, we should never name the constraints on local temporary tables (as opposed to permanent tables), because if we run the following statement from two connections we get something like this:
/*
Msg 2714, Level 16, State 4, Line 1
There is already an object named 'PK_tmp1' in the database
*/
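A minimal repro of the named-constraint collision (the table and constraint names are just for illustration). Run this from two sessions at the same time; the second one fails with Msg 2714, because an explicit constraint name, unlike the temp table name itself, is not made unique per session:

```sql
-- Session 1 and Session 2 both run this:
CREATE TABLE #tmp1
(
    id INT NOT NULL
        CONSTRAINT PK_tmp1 PRIMARY KEY  -- explicit name: collides across sessions
);
```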

Wednesday, November 12, 2008

We have discussed this issue many times, and there is an open request to Microsoft to add this feature in the next release. Recently I was visiting our client, and we tried to do something about it.

1) CREATE a new SQL login 'John'
2) CREATE a user named 'John' in the master database
3) GRANT CREATE DATABASE to John
4) While impersonating John, create a database called 'demo'
5) REVOKE CREATE DATABASE permission from John
6) REVOKE VIEW ANY DATABASE permission from PUBLIC
7) Register this server as John
8) From the 'John' session, expand the database tree. Now you should see only master, tempdb and demo
9) GRANT VIEW ANY DATABASE to PUBLIC
10) From the 'John' session, you should now see all the databases.
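The first six steps can be sketched in T-SQL (the password is an assumption):

```sql
-- Steps 1-6: create the login and master user, let John create a database
-- under impersonation, then revoke the permissions again.
CREATE LOGIN John WITH PASSWORD = 'StrongP@ssw0rd!';
USE master;
CREATE USER John FOR LOGIN John;
GRANT CREATE DATABASE TO John;

EXECUTE AS LOGIN = 'John';
CREATE DATABASE demo;
REVERT;

REVOKE CREATE DATABASE FROM John;
REVOKE VIEW ANY DATABASE FROM PUBLIC;
```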

However, this works perfectly if the user is the owner of the database.

Wednesday, October 22, 2008

If you want to know who alters, drops or creates tables, views, stored procedures and so on, I would like to share the following script with you. As you know, DDL triggers (introduced in SQL Server 2005) work much like DML triggers, but the details of the event that fired the trigger are available only in XML format.

It won't help if I get only the SPID of the session, because in many cases all users log in with a single defined login, or even with 'sa'. So I need the IP address of those workstations, and thus I added a Session_IPAddress column. Now, let's create a database trigger to capture the info.

Well, I won't bother to record CREATE STATISTICS events, so there is an IF block to skip that event. I get the IP address from the sys.dm_exec_connections DMV, which has a client_net_address column. Now create/drop/alter a table, for example (also via SSMS), and query the DDL_ChangeEvents table to see what happened.
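The original script is not shown here, so this is a sketch of how such a trigger might look; the audit table's shape and the trigger name are my assumptions, while the DDL_ChangeEvents table name and the Session_IPAddress column come from the post:

```sql
-- Assumed audit table.
CREATE TABLE dbo.DDL_ChangeEvents
(
    EventDate         DATETIME    NOT NULL DEFAULT GETDATE(),
    LoginName         SYSNAME     NOT NULL,
    Session_IPAddress VARCHAR(48) NULL,
    EventData         XML         NOT NULL
);
GO
CREATE TRIGGER trg_CaptureDDL
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @data XML;
    SET @data = EVENTDATA();

    -- Skip statistics events, as described in the post.
    IF @data.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)')
       IN ('CREATE_STATISTICS', 'UPDATE_STATISTICS', 'DROP_STATISTICS')
        RETURN;

    -- Record who did it, from which IP address, and the full event XML.
    INSERT INTO dbo.DDL_ChangeEvents (LoginName, Session_IPAddress, EventData)
    SELECT ORIGINAL_LOGIN(), c.client_net_address, @data
    FROM sys.dm_exec_connections AS c
    WHERE c.session_id = @@SPID;
END
```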

Sunday, October 5, 2008

Hi everybody. It seems I am seeing more and more inquiries from our clients asking for help solving performance issues related to rebuilding indexes. All of them (or almost all of them) have been using the Maintenance Plan Rebuild/Reorganize Index task. We have lots of clients who have pretty big databases (>200GB) and have not yet hired a DBA :-). They used to run this task against all tables in all databases; moreover, one client used to run such tasks in the middle of the work day. That leads to locks on tables and decreased performance. I would also advise you not to cancel the task, as SQL Server will roll back the whole transaction and you will have to wait a long time; just let the task complete. Instead of running the task, I suggested first identifying fragmented indexes on tables that have more than 1000 pages.
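The SELECT referred to below is not shown here; a sketch of what it might look like (the 1000-page floor is from the post, the 30% fragmentation threshold is my assumption):

```sql
-- Build ALTER INDEX ... REORGANIZE statements for indexes that are
-- more than 30% fragmented and span more than 1000 pages,
-- then execute the generated batch.
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ps.object_id))
    + N'.' + QUOTENAME(OBJECT_NAME(ps.object_id))
    + N' REORGANIZE;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 30
  AND ps.page_count > 1000
  AND i.name IS NOT NULL;  -- skip heaps

EXEC sp_executesql @sql;
```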

The above SELECT generates a simple script to REORGANIZE (change it to REBUILD if you prefer) the indexes and EXECUTES the dynamic SQL. As you probably know, this script has to be run on SQL Server 2005/2008, and do not forget about a really great feature such as rebuilding indexes ONLINE. For more details please see BOL.

Monday, September 15, 2008

All of us are aware that restoring a database with a different collation may cause a headache. That is what happened to one of our databases, where a developer created the database without paying attention to what kind of data he was going to deal with. OK, we can set the COLLATION even per column, but what if you have lots of tables to be altered with the new COLLATION? Well, we can use import/export, an SSIS package, or perhaps write some T-SQL script to do the job; however, I'd like to tell you how easy it is to get a new database with the desired COLLATION by using a Red Gate tool.
1) CREATE DATABASE dbname COLLATE 'your desired collation'
2) Open the Red Gate (SQL Compare) tool and move the structure of the source db to the destination db.
That's all.
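For completeness, this is what changing the collation of a single column looks like (table, column and collation here are hypothetical), which quickly becomes tedious when many tables are involved:

```sql
-- Per-column collation change; repeat for every affected column.
ALTER TABLE dbo.Customers
ALTER COLUMN CustomerName NVARCHAR(100)
    COLLATE Hebrew_CI_AS NOT NULL;
```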

It took only a few minutes even though we had a 25GB database. I'd strongly recommend having a look at this great tool (www.red-gate.com).

Monday, September 1, 2008

Hi everybody. I'd like to share with you how important it is to define a computed column as PERSISTED. As you know from BOL:
/*
For columns specified as PERSISTED, the SQL Server 2005 Database Engine physically stores the computed values in the table and updates the values when any other columns on which the computed column depends are updated. By marking a computed column as PERSISTED, you can create indexes on computed columns defined on expressions that are deterministic, but not precise.
*/
Two days ago I visited a client who had been experiencing a performance issue with one of their very important queries. One big table contains a computed column that a SELECT statement returns to the client. We saw a very high number of logical reads and TWO Compute Scalar iterators in the plan. For obvious reasons we defined the computed column as PERSISTED, and performance increased dramatically. Moreover, create an index on the computed column and see how performance increases even more.
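A minimal sketch of the idea (the table and columns are hypothetical, not the client's schema):

```sql
-- The PERSISTED computed column is stored on disk instead of being
-- recalculated by a Compute Scalar iterator on every read.
CREATE TABLE dbo.Orders
(
    OrderID   INT            NOT NULL PRIMARY KEY,
    Qty       INT            NOT NULL,
    UnitPrice DECIMAL(10, 2) NOT NULL,
    LineTotal AS (Qty * UnitPrice) PERSISTED
);

-- And, as suggested above, index the computed column as well.
CREATE INDEX IX_Orders_LineTotal ON dbo.Orders (LineTotal);
```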

Tuesday, August 19, 2008

Hi folks. I named this article after this pretty famous error message; I'm sure everybody has seen it at least once. This week I visited a client who asked me a very interesting question. They have a table with many columns, most of them defined with the VARCHAR(n) datatype. One of their queries had been failing with the error below.

"Msg 8152, Level 16, State 14, Line 1
String or binary data would be truncated.
The statement has been terminated."

Well, we knew for sure that the user supplied a string that does not fit the column's datatype, but the real question is: WHICH of the many columns? As I said, they have more than 50 columns, and it was pretty difficult to identify on which column it failed. My point is that it would be nice if this error message carried more information about which column is failing, and I hope that MS will do something about the matter.
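A minimal repro (table and values made up); note that the message does not say which column overflowed:

```sql
CREATE TABLE dbo.Truncation (a VARCHAR(5), b VARCHAR(5));

-- The second value exceeds VARCHAR(5): Msg 8152 is raised,
-- but it does not name the offending column.
INSERT INTO dbo.Truncation (a, b)
VALUES ('ok', 'this string is too long');
```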

Monday, July 28, 2008

Hi folks. If you are using SQL Server 2005 (SP) and have built Maintenance Plans, especially with subplans added, then you have probably seen the following error message:
/*Drop failed for Job ‘jobname’. (Microsoft.SqlServer.Smo)*/

So if you create a Maintenance Plan, SQL Server will create a job and an SSIS package referenced by the subplan, and will insert data into system tables in the msdb database (sysmaintplan_subplans, sysjobs_view, sysjobschedules).

Interestingly, if you execute a job, it makes more inserts, into a log table called sysmaintplan_log. All of these tables are linked through FK and PK relationships. The problem is that when you try to delete a job, you get foreign key errors until you manually remove those entries from the system tables.
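A hedged sketch of how to see which subplan rows hold the references that block the drop (column names are from the msdb tables mentioned above):

```sql
-- List subplans in msdb together with the jobs they reference.
USE msdb;
SELECT sp.subplan_name,
       j.name AS job_name
FROM dbo.sysmaintplan_subplans AS sp
LEFT JOIN dbo.sysjobs_view AS j
  ON j.job_id = sp.job_id;
```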

Please see the post by Jonas Kempas, http://gudenas.com/2007/04/20/sql-server-2005-delete-maintenance-plan-error/, which explains step by step how to delete these orphaned jobs.

Wednesday, July 2, 2008

This question was raised by one of the developers while we were working at a client's site. They use SQL Server 2005 (SP2), and they remembered that in SQL Server 2000 we could open a table via Enterprise Manager and then specify (All rows, Top rows...). Do you remember?

I rarely use SSMS to open or edit table data, and it seems that MS just removed this option. In SQL Server 2005 we have the TABLESAMPLE clause, which can be used to open a table with lots of rows. We only needed to see what kind of data this specific table holds, without opening the entire table or using the TOP clause.

This example returns an approximate percentage of rows and generates a random value for each physical 8-KB page in the table.

SELECT * FROM tablename TABLESAMPLE SYSTEM (5 PERCENT)

I'd really advise you to read BOL about this great feature: ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/8868e8fd-6c42-4171-9eab-a0e38cb1bfd3.htm

Thursday, June 19, 2008

My latest post was about maintaining the MSDB database; today I would like to share with you some thoughts about maintaining SharePoint databases.

Last week I visited a client who has been working with SharePoint (MOSS 2007) and started complaining about SQL Server performance. I identified long-running queries as well as a significant number of deadlocks happening every hour in the databases belonging to the SharePoint product. However, when I recommended adding some indexes to the tables, people almost started crying, begging me not to do that, as they had been told that SharePoint databases are a self-managed product and a DBA should not touch them at all. It looked very strange to me, but this was my first experience with MOSS, and I decided to do some searching on the internet.

I found a couple of documents (some even published by MS) suggesting that SharePoint SQL Server performance can be managed by shrinking and defragmenting the DB. I also asked some SharePoint people, and they say that accessing the database directly, or changing anything in their databases aside from what is provided out of the box, is not supported unless you do it through the SharePoint API. Hmm... looks strange, doesn't it?

Finally, I ended up with a sample script to identify heavily fragmented indexes and ran ALTER INDEX index_name ON tablename REORGANIZE;
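A sketch of that kind of identification query (the 30% cutoff is my assumption); each index it reports would then get a REORGANIZE:

```sql
-- Report fragmentation for every index in the current database.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;  -- skip heaps
```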

PS. I could not imagine a customer being unwilling to create whatever indexes are necessary to ensure reasonable performance of a production SharePoint system.

Sunday, May 18, 2008

Last week I visited a client who has pretty big databases and performs BACKUP LOG operations on almost all user databases. Now, one of the most critical databases got corrupted, and the DBA was pretty confident that he would not lose any data (as he had log backups) and could bring the database back from backup within 10-12 minutes.

They also have a very well written stored procedure that performs a RESTORE DATABASE based on the name of the database and the number of log files to be restored. They ran the stored procedure, and it had been running for almost 5 hours until the DBA canceled the process. What happened? Why did it take so much time? I thought about it and asked him a question: "Have you ever cleared the backup history?" He replied that he hadn't. Then we checked the backupset system table, which contained more than one million rows!!!!

I remember that SQL Server MVP Geoff N. Hiten wrote a blog post about the issue, and I even posted a comment on it. Please check the following article: http://weblogs.sqlteam.com/geoffh/archive/2008/01/21/MSDB-Performance-Tuning.aspx

It took only 3 minutes to run a script that creates the indexes, and about 15 minutes to run:

USE msdb
GO
DECLARE @OldestDate DATETIME
SET @OldestDate = GETDATE() - 100
EXEC sp_delete_backuphistory @OldestDate

Now that it has finished, our RESTORE command took only 12 minutes to complete. I'd like to point out how important it is to clear the backup history (fortunately, in SQL Server 2005 we have a built-in task to do the job) so that at time 'X' you will successfully restore the database you need.

Sunday, May 4, 2008

Hi folks. I'd like to share with you a technique to get the next value from a table, i.e., to create your own sequence mechanism. As we all know, an Identity property may have gaps, so use the script below to retrieve the next value. We create a table with only one row and only one column, which holds the last used sequence value.
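A sketch of the mechanism (the table and column names are illustrative):

```sql
-- One-row, one-column table holding the last used sequence value.
CREATE TABLE dbo.Sequence (val INT NOT NULL);
INSERT INTO dbo.Sequence (val) VALUES (0);
GO
-- Increment and capture the next value in a single UPDATE,
-- so concurrent callers never read the same number.
DECLARE @next INT;
UPDATE dbo.Sequence
SET @next = val = val + 1;
SELECT @next AS next_value;
```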

Sunday, April 20, 2008

There are lots of articles and techniques on the internet about how to deal with delimited parameters. Last Wednesday I visited a client who asked me to write a quick query to return data based on delimited values. Here we go.
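The client's query is not reproduced here; this is a sketch of one common SQL Server 2005 approach, turning the delimited string into XML and shredding it (the parameter and element names are mine):

```sql
-- Split a comma-delimited parameter into rows via the XML nodes() method.
DECLARE @list VARCHAR(100);
DECLARE @x XML;
SET @list = '518,737,522';
SET @x = CAST('<i>' + REPLACE(@list, ',', '</i><i>') + '</i>' AS XML);

SELECT n.v.value('.', 'INT') AS id
FROM @x.nodes('/i') AS n(v);
```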

Monday, March 17, 2008

Hi folks. I would like to share with you the following simple script showing a job duration report in SQL Server 2005. I manipulated the INTEGER values stored by SQL Server, converting them into DATETIME/CHAR(8) datatypes to present the data. Thanks to SQL Server MVP Peter Ward, who provided me with the StartTime calculation.
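The original script is not shown here; a sketch of the same idea, formatting msdb's packed integers into readable times:

```sql
-- msdb stores run_date as yyyymmdd and run_time/run_duration as hhmmss
-- packed into INTs; STUFF re-inserts the ':' separators.
SELECT j.name AS job_name,
       h.run_date,
       STUFF(STUFF(RIGHT('000000' + CAST(h.run_time AS VARCHAR(6)), 6),
                   5, 0, ':'), 3, 0, ':') AS start_time,
       STUFF(STUFF(RIGHT('000000' + CAST(h.run_duration AS VARCHAR(6)), 6),
                   5, 0, ':'), 3, 0, ':') AS duration_hhmmss
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
  ON h.job_id = j.job_id
WHERE h.step_id = 0  -- job outcome rows only
ORDER BY h.run_date DESC;
```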

Sunday, March 9, 2008

What happens if you install a SQL Server instance with a collation that is different from a database's collation? We had just started testing our production application, and everything seemed to work well; however, one of our stored procedures inserts Hebrew characters into a temporary table, and after some operations the data gets inserted into a real table. Guess what we saw in the database? Right, we saw '????' symbols. Sure, if you do not use a temporary table and insert the data directly into a permanent table, you will see the right characters. Someone said that we should stop using the temporary table and insert the data into a 'temporary' permanent table instead. Another guy said that we should run an ALTER DATABASE tempdb command to change the COLLATION, but as we know you cannot run this statement on system databases. The error is:
Msg 3708, Level 16, State 5, Line 1
Cannot alter the database 'tempdb' because it is a system database.

So what is the solution? Well, use the REBUILDDATABASE option of Setup.exe, or re-install the instance. Fortunately, the whole story happened on a development machine, and we did not forget to install the PRODUCTION server with the right collation :-).

I just wanted to note how important it is to choose the 'right' collation when installing a production server.

Sunday, March 2, 2008

I recently visited a client who uses SQL Server 2005 Express Edition for his business. As you know, that edition ships without SQL Server Agent (which MSDE had). So the question is: how do you schedule a backup of the database in that case? I would like to thank Jasper Smith, who wrote these great articles.
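The stored procedure referenced below is not reproduced here; a sketch of what a two-parameter backup procedure might look like (the parameter names match the call in the post; the file-naming scheme is my assumption):

```sql
-- Back up the named database to a date-stamped file under @backup_path.
CREATE PROCEDURE dbo.usp_BackupDatabases
    @dbname      SYSNAME,
    @backup_path VARCHAR(256)
AS
BEGIN
    DECLARE @file VARCHAR(400);
    SET @file = @backup_path + '\' + @dbname + '_'
              + CONVERT(CHAR(8), GETDATE(), 112) + '.bak';

    BACKUP DATABASE @dbname TO DISK = @file WITH INIT;
END
```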

The above is a simple stored procedure that accepts two parameters and performs a BACKUP of the database. Now that we have created the stored procedure, let's put the call to it into a text file (I use userdbfullbackup.sql) and place the file on the filesystem:
usp_BackupDatabases @dbname='$(DB)', @backup_path='$(BACKUPFOLDER)'

Sunday, February 24, 2008

Hi folks. Actually, it is a pretty old trick/feature I would like to show you. Let's say we have a table called 'Test1' with one column of type VARCHAR(10) which allows NULLs. Well, we want to change this to NOT allow NULLs, and I'm also going to change the datatype of the column at the same time; just for testing, let's assume that 'Test1' does not contain alphabetic values.
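A minimal sketch of the trick (the column name is illustrative):

```sql
-- Change datatype and nullability in one statement; this succeeds only
-- because every existing value is numeric and non-NULL.
CREATE TABLE Test1 (c1 VARCHAR(10) NULL);
INSERT INTO Test1 (c1) VALUES ('100');

ALTER TABLE Test1
ALTER COLUMN c1 INT NOT NULL;
```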

Tuesday, February 12, 2008

Hi folks. I have seen that many, many businesses today have already upgraded to SQL Server 2005, and I would like to share with you some of the new features this product gives us. As you know, SQL Server 2005 provides lots of dynamic management views (DMVs) that give access to the internal behaviour of processes and commands. If I remember well, one of the MVPs has already published something about this feature, but anyway...

Take a look at this script. Run, for example, a BACKUP DATABASE command, and in another session run the script below. It provides you with a start time column, the percent complete, and the estimated completion time (estimated_completion_time). COOL, right?

SELECT TOP 2 start_time, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
ORDER BY start_time DESC

Monday, January 21, 2008

I'd like to share with you this script that returns the names of tables that do not have a clustered index defined on them. You may ask: why do we need to create a clustered index on a table at all? Well, I think that is a subject for another blog post; my experience is that every table should have a clustered index.

--SQL Server 2005 (SP2)

SELECT name AS object_name
FROM sys.objects
WHERE name NOT IN
      (SELECT o.name
       FROM sys.indexes i
       JOIN sys.objects o ON i.object_id = o.object_id
       WHERE OBJECTPROPERTY(o.object_id, 'IsMSShipped') = 0
         AND i.type = 1 ---Clustered Index
      )
  AND OBJECTPROPERTY(sys.objects.object_id, 'IsMSShipped') = 0 ---Only user objects
  AND OBJECTPROPERTY(sys.objects.object_id, 'IsTable') = 1     ---Only tables
ORDER BY name

Tuesday, January 8, 2008

Happy New Year to everyone. Once someone asked me how to find out which columns are affected by an UPDATE statement. As you probably know, in SQL Server 2005 we can do this in a more elegant way, but for SQL Server 2000 I decided to do some testing.

---UPDATE Test SET c2='D', c3=2000 WHERE c1=1
--Updated columns are: 2, 3
We see that the second and the third columns were affected, according to INFORMATION_SCHEMA.COLUMNS ordered by ORDINAL_POSITION (check it out). It is not convenient to see only those numbers, but with a little effort you can also get the column names from INFORMATION_SCHEMA.COLUMNS.
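A sketch of the SQL Server 2000 approach using COLUMNS_UPDATED() in a trigger (the table matches the comment above; the trigger name is mine). The bitmask walk follows the documented layout: byte n covers column ordinals 8(n-1)+1 through 8n:

```sql
CREATE TABLE Test (c1 INT, c2 CHAR(1), c3 INT);
GO
-- COLUMNS_UPDATED() returns a varbinary bitmask in which each bit
-- corresponds to a column's ORDINAL_POSITION.
CREATE TRIGGER trg_Test_Updated ON Test
AFTER UPDATE
AS
BEGIN
    DECLARE @pos INT;
    SET @pos = 1;
    WHILE @pos <= 3
    BEGIN
        IF (SUBSTRING(COLUMNS_UPDATED(), (@pos - 1) / 8 + 1, 1)
            & POWER(2, (@pos - 1) % 8)) > 0
            PRINT 'Updated column ordinal: ' + CAST(@pos AS VARCHAR(10));
        SET @pos = @pos + 1;
    END
END
```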

I suggest you not run/use this on a production server, as it may hurt performance.

About Me

The goal of this blog is to share my knowledge as a DBA/developer in the SQL Server area. Many years ago I started collecting tips and tricks from many SQL Server experts around the world, as well as scripts I developed myself, and I really want to share them with you. My intent is also to help beginners as well as experienced people, as I have been doing on the Microsoft forums. So please do not hesitate to write me comments or questions.