
Posts Tagged ‘T-SQL’

In this edition of T-SQL Tuesday, Jorge Segarra (Blog | Twitter) asks us what our favorite new feature of SQL 2008 or 2008 R2 is. I’ve decided to focus on the T-SQL and query writing enhancements of 2008. Before I do, let me preface this by noting that in no way do I believe these changes are the biggest improvements or best new things in SQL 2008, but things like data compression are bound to be covered by several others. Also, while writing this post I noticed that most of this has already been covered better than I could hope to by Itzik Ben-Gan in his white paper (Link). Please refer to that for more information on the new features of 2008.

Your New Home for One-Stop Variable Declaration

In the past, you always had to declare variables on one line and then assign them on the next, like so:

DECLARE @MyInt int

SET @MyInt = 44

Now, you can do this in one statement.

DECLARE @MyInt int = 44

That might seem small, but it’s significant when you’re dealing with large numbers of variables.

SQL++ … almost

While you still can’t do something like:

SET @MyInt++

You can do the slightly longer version of:

SET @MyInt += @MyInt

-- OR

SET @MyInt += 5

Instead of :

SET @MyInt = @MyInt + @MyInt

-- OR

SET @MyInt = @MyInt + 5

This applies to:

+= (plus equals)

-= (minus equals)

*= (multiplication equals)

/= (division equals)

%= (modulo equals)

This one isn’t too huge a deal in my opinion, but it’s a nice shortcut for those used to using it in other programming languages.

Values of all rows, Union of None

In 2008, you can create multiple rows of data using a single VALUES clause. This one is really handy when I’m doing code examples in blog posts, on forums, in presentations, etc. It serves as an excellent replacement for the UNION ALL or repeated INSERT/VALUES pairs you used to have to use when supplying sample data.

Say you have a simple table:

CREATE TABLE #Cake(

SomeInt int,

SomeChar char(5)

)

You want to provide some sample data for that table. The most common ways prior to now were either:

INSERT INTO #Cake(SomeInt,SomeChar)

VALUES(1,'AAA')

INSERT INTO #Cake(SomeInt,SomeChar)

VALUES(2,'BBB')

INSERT INTO #Cake(SomeInt,SomeChar)

VALUES(3,'CCC')

-- OR

INSERT INTO #Cake(SomeInt,SomeChar)

SELECT 1,'AAA' UNION ALL

SELECT 2,'BBB' UNION ALL

SELECT 3,'CCC'

Now, you can use the much cleaner:

INSERT INTO #Cake(SomeInt,SomeChar)

VALUES(1,'AAA'),

(2,'BBB'),

(3,'CCC')

-- Or, on one line:

INSERT INTO #Cake(SomeInt,SomeChar)

VALUES(1,'AAA'),(2,'BBB'),(3,'CCC')

Lack of Intellisense

No conversation on the topic of coding enhancements would be complete without mentioning Intellisense in some way. So… it’s there. It works (kinda). It could use a whole lot more fine tuning and configuration options than are there right now, but if you don’t have a copy of SQL Prompt, it’s better than not having anything (sometimes). If you *do* have a copy of SQL Prompt and want to use a couple of the cool things SQL Intellisense has that SQL Prompt does not, you can use the hybrid approach that I’ve gone with. This allows me to get the () highlighting and error underlines from SQL Intellisense without overriding the much more configurable (and in my opinion less annoying) suggestions from SQL Prompt.

If you go to Tools > Options > Text Editor > Transact-SQL and turn Intellisense on, but turn auto list members off (under General), you can have what is (in my own opinion) the best of both worlds.

That about wraps up what I wanted to point out this go-round; hopefully you found something new that you didn’t know about before. Don’t forget to check out all the other T-SQL Tuesday posts (click the image at the top for a link to the others) that will no doubt point out many of the much bigger improvements in SQL Server 2008 and R2.

Glenn Berry (Blog) writes a lot of queries to extract information from the system DMVs. One of them in particular I found extremely helpful in fixing some of the issues in my system. I took his query (the CTE at the top) and added some text manipulation to generate the CREATE statements for you, to save you some time. I had much grander plans for this, but I’ve been meaning to post it for over a month now and simply haven’t had time to get back to it. Rather than let it go by the wayside and never post it, I figured I’d post what I have now and possibly post an update in the future if I ever finish it.

A couple of the known problems right now are:

Index names could already be taken; nothing here checks that they are unique against the other indexes in your database.

No compression options are taken into account.

That said, I still found this fairly useful, and hopefully somebody else will as well. Thanks again to Glenn for all his excellent work creating queries to pull information from the DMVs.

This post is a T-SQL Tuesday Entry, hosted this week by Aaron Nelson on the topic of “Reporting”. (It got a little long. Ordinarily I’d have broken this up into a series and fleshed out individual pieces a bit better, but this touches on most of the general points)

I like babysitter reports. What is a "babysitter" report? It’s a report that you schedule to run on a recurring basis that checks things for you. I call them babysitter reports because they can monitor things without you having to worry about it. Every environment has different things that they need to look for. Maybe a certain value found its way into a certain table and you need to take action because of it. Maybe a certain query is on a rampage again and you need to kill it. There are all kinds of things that you know you should keep an eye on that you don’t always remember to do. Instead of putting that burden on your memory or calendar, these automated reports do the work for you.

Here I will show you how to create one simple babysitter report. I intentionally chose one of the more complicated ones (CPU Threshold) to note how far it could be expanded upon, but more basic things would not require this level of customization. Here are a few examples of things that you could create babysitter reports for:

Long Running Queries

Blocking SPs

Index fragmentation

Log Size

System Info

Specific entries into tables

The sky is the limit. The same strategies can be used to get information to your users when rare events occur that require immediate action if your system doesn’t already provide a means to get this information to them in a timely manner. There are certain reports in my environment that can run for *days* if the wrong parameters are sent to them… and while ideally these would be fixed in other ways, it’s good to identify the situations that occur in the interim and take action until that can be accomplished.

Here are a few sample queries for finding queries with abnormally high CPU usage. There are two basic parts to these. The first is the data driven subscription. You want this to be as streamlined as possible. This is the piece that will be run repeatedly to see if a problem exists, and because it could run hundreds of times before its criteria is met once, you want it to be as efficient as possible.

/*
=============================================================================================
CREATE DATE:   04/12/2010
LAST MODIFIED: 04/12/2010
CREATED BY:    SETH PHELABAUM
PURPOSE:       Data Driven Subscription that monitors for queries using high CPU.
ISSUES:        Will notify you repeatedly.
NOTES:         This can be expanded upon quite a bit. For instance, you could also:
               - Set up a logging table / set of tables to control how often this notifies you
                 (to stop you from getting multiple emails overnight)
               - Set up a list of excluded SPs
               - Set up a list of different actions depending on the time of day
                 (you could also change the schedule in Reporting Services)
               - Much more...
=============================================================================================
*/
CREATE PROCEDURE DDS_HighCPU
AS
SELECT DISTINCT spid, 'youremailaddress@yourdomain.com' Notify
FROM sys.sysprocesses
WHERE [dbid] > 4 -- May need to filter out additional databases here for your setup
  AND cpu > 10000 -- Adjust to whatever you consider worth knowing about.
  AND cmd <> 'AWAITING COMMAND' -- Don't want to be notified about these.
  AND spid IN (SELECT spid
               FROM sys.dm_exec_connections DMV_EC
               WHERE DMV_EC.last_read > DATEADD(mm,-2,GETDATE())
                  OR DMV_EC.last_write > DATEADD(mm,-2,GETDATE())) -- Another filter to hopefully stop some excess emails

The second part is the actual report query. This can be a bit more in depth and contain all kinds of information that helps you take action based on the event that transpired.

As mentioned in the headers, you would ideally keep a log of when you were notified about things. Different alerts could be scheduled to have a different frequency. Perhaps you only want to be notified about certain things once a week, but other things you want to be notified about once an hour until they are taken care of. This is where a logging table comes in. I won’t go into that here, but wanted to mention it.
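The logging-table idea above can be sketched roughly like this (the table and column names here are illustrative assumptions of mine, not from the original post):

```sql
-- Hypothetical notification log; names are my own invention.
CREATE TABLE NotificationLog (
    AlertName  varchar(50) NOT NULL,
    NotifiedAt datetime    NOT NULL DEFAULT GETDATE()
)

-- In the subscription query, you could then suppress rows when a recent
-- notification already exists, e.g. no more than one HighCPU email per hour:
-- AND NOT EXISTS (SELECT 1 FROM NotificationLog
--                 WHERE AlertName = 'HighCPU'
--                   AND NotifiedAt > DATEADD(hh, -1, GETDATE()))
```

Each alert would insert a row into the log whenever it fires, giving you one place to tune how chatty each babysitter report is allowed to be.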

Now that we have the queries, we need to set up the report. I’m going to assume that you already have Reporting Services set up. Here is a Screenshot of a very basic report that I created to pull in the data.

Once you deploy this report, there are a couple more things you need to do before you can create a data driven subscription for it. The first is to set up a shared schedule. Log into your report server (http://localhost/reports) and go to Site Settings at the top. Then click on Schedules at the left and New Schedule. For this one I’m just going to create a basic 15-minute recurring schedule.

Next we need to modify the credentials of the shared data source used for the report. My data source name for the report is SS2K8CL100. To modify it, I go back to Home –> Data Sources –> SS2K8CL100. The below screenshot shows me modifying it to use a windows account.

Now, we’re ready to create the data driven subscription. Rather than explain it in text, I’ve taken screenshots of each step of creating a data driven subscription.

Click Finish and you have your report.

In closing, I’ll note that I had a lot of problems getting Reporting Services to function correctly on my Windows 7 installation, so this isn’t as polished as I would have liked. I didn’t get the email working, and I forgot to include SPID anywhere on the report (a pretty useful piece of information to have).

Someone asked a question in the forums the other day and I realized it would make a pretty decent blog post to explain the differences in functionality between these two and provide some code for working with them.

Differences

fn_get_sql returns the last *statement* executed, whereas DBCC INPUTBUFFER returns the first statement in the batch. See the example below for a better idea of what that means.

fn_get_sql returns a TEXT field containing the statement that ran; DBCC INPUTBUFFER returns only a varchar(255). This is important, as you often won’t get the full line… and unfortunately there’s not a lot you can do about that. To my knowledge, DBCC INPUTBUFFER is still your only means of getting the first statement in the batch; however, it returns an nvarchar(4000) in 2005/2008.

fn_get_sql requires a sql_handle (a binary value) that has to be pulled out of the sysprocesses table; DBCC INPUTBUFFER only requires the SPID as a parameter.

fn_get_sql will return the actual creation text for an SP (or trigger, etc.) that is executing, instead of just the name of the object with its parameters, which is what DBCC INPUTBUFFER returns.

Note that fn_get_sql is available in SQL 2000 if you have SP3 and above. You may actually have it with SP2 as well if you have the correct hotfix applied. An easy way to tell whether it will work on your system is to run a simple SELECT sql_handle FROM sysprocesses. If that works, so will fn_get_sql. If it doesn’t, you need to patch. There are a couple of other specifics about fn_get_sql that are mentioned in the article linked at the bottom by Andrew Novick.

Sample Code

Both of these often require insertion into a table to work with. Here is some sample code that will create the tables for you and insert the rows of the currently running SPID.
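The sample code itself doesn’t appear in this copy of the post, so here is a minimal sketch of the idea for SQL 2000 (the temp-table layout and variable names are my own assumptions, not the original script):

```sql
-- Capture both versions of the current SPID's command text.
DECLARE @spid int, @handle binary(20), @cmd varchar(100)
SELECT @spid = @@SPID
SELECT @handle = sql_handle FROM master..sysprocesses WHERE spid = @spid

-- Last *statement* executed, via fn_get_sql:
SELECT [text] FROM ::fn_get_sql(@handle)

-- First statement in the batch, via DBCC INPUTBUFFER,
-- captured into a table so it can be queried:
CREATE TABLE #InputBuffer (EventType nvarchar(30), Parameters int, EventInfo nvarchar(4000))
SET @cmd = 'DBCC INPUTBUFFER(' + CAST(@spid AS varchar(10)) + ')'
INSERT INTO #InputBuffer EXEC (@cmd)
SELECT EventInfo FROM #InputBuffer
```

Run both against a multi-statement batch and you can see the difference described above: fn_get_sql shows the statement currently/last executing, while the input buffer shows how the batch began.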

Now that I have several posts on what you can do with a Tally table, I figured I’d share my favorite way to create one inline. I still prefer a physical tally table (usually in a Utility database that can be accessed from anywhere and doesn’t need to be created in each individual database) for permanent code, but for times when you need one on the fly, this is my preferred method. I can’t really take credit for this query; the base construct is based on something I’ve seen attributed to Itzik Ben-Gan. I’ve modified it a bit and changed up the formatting to be the way I like it. For anything over a few thousand rows I’d probably use a physical tally table, but on small numbers you shouldn’t see much of a performance hit with this script.

-- Tally Table CTE script (SQL 2005+ only)
-- You can use this to create many different numbers of rows... for example:
-- You could use a 3-way cross join (t3 x, t3 y, t3 z) instead of just a 2-way to generate a different number of rows.
-- The # of rows this would generate for each is noted in the X3 comment column below.
-- For most common usage, I find t3 or t4 to be enough, so that is what is coded here.
-- If you use t3 in 'Tally', you can delete t4 and t5.
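The CTE body itself didn’t survive in this copy of the post. The following is a sketch of the cascaded-CTE construct attributed to Itzik Ben-Gan that the comments above describe; the exact formatting and row counts of the original may have differed:

```sql
;WITH
t1 AS (SELECT 1 N UNION ALL SELECT 1),  -- 2 rows
t2 AS (SELECT 1 N FROM t1 x, t1 y),     -- 4 rows
t3 AS (SELECT 1 N FROM t2 x, t2 y),     -- 16 rows     (X3: 64)
t4 AS (SELECT 1 N FROM t3 x, t3 y),     -- 256 rows    (X3: 4,096)
t5 AS (SELECT 1 N FROM t4 x, t4 y),     -- 65,536 rows
Tally AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) N FROM t5)
SELECT N FROM Tally WHERE N <= 1000
```

Because each level is just a cross join of the level below, you can stop at t3 or t4 for small row counts, exactly as the comments suggest.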

Dealing with delimited lists (Usually separated by a comma) in SQL is a problem easily handled by a simple function and a Tally Table. (Tally tables are also often referred to as Numbers tables or spt_values tables. If you still don’t know what that is, please see this excellent article on Tally tables written by my friend and SSC heavyweight Jeff Moden.) This particular implementation is somewhat specific in nature but can give you an alternative to Dynamic SQL when you want to pass in a list as a parameter and do an IN in a Stored Procedure. The following function will take your delimiter and string and parse it into a table so you can do your IN. (I’m leaving my standard header on the function in this case because there are some good notes in there.)

/*
=============================================================================================
CREATE DATE: 02/27/2010
LAST MODIFIED: 02/27/2010
CREATED BY: SETH PHELABAUM
PURPOSE: Splits a string based on a passed in delimiter and returns a table.
ISSUES: Strings with extra 's will break this function; handle that on the end that calls it.
Notes: To make it a simpler function, I removed the piece that trimmed spaces around commas. Do
this before or after calling it.
Revision History:
Date By Change Made
-------- --- -------------------------------------
=============================================================================================
GRANT SELECT ON TVF_TallySplit TO [Somebody]
SELECT * FROM TVF_TallySplit(',','Orange,Apple,Banana,Pear,Watermelon,Grape')
SELECT * FROM TVF_TallySplit('*','Orange*Apple*Banana*Pear*Watermelon*Grape')
DROP FUNCTION TVF_TallySplit
*/
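The function body itself is missing from this copy of the post. The following is a minimal tally-based splitter consistent with the header above; it is my own reconstruction under those assumptions, not necessarily the original implementation:

```sql
CREATE FUNCTION TVF_TallySplit (@Delimiter char(1), @String varchar(8000))
RETURNS TABLE
AS
RETURN
    -- Pad the string with delimiters, find each delimiter position with the
    -- Tally table, and slice out the element that follows it.
    SELECT SUBSTRING(@Delimiter + @String, N + 1,
                     CHARINDEX(@Delimiter, @Delimiter + @String + @Delimiter, N + 1) - N - 1) AS Item
    FROM Util..Tally -- physical Tally table covering at least 1..8001
    WHERE N <= LEN(@String) + 1
      AND SUBSTRING(@Delimiter + @String, N, 1) = @Delimiter
```

With the sample calls in the header, each fruit in the list comes back as one row in the Item column.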

What to do with this
Let’s say you have a table containing names of your favorite fruits. (In case you were wondering… No, these aren’t my favorite fruits; they were just ones that immediately came to mind when writing this. I don’t even like half of these.)
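For illustration, here is a quick usage sketch (the fruit table is my own stand-in for the one described):

```sql
CREATE TABLE #Fruits (FruitName varchar(20))
INSERT INTO #Fruits VALUES ('Orange')
INSERT INTO #Fruits VALUES ('Apple')
INSERT INTO #Fruits VALUES ('Kiwi')

DECLARE @List varchar(100)
SET @List = 'Orange,Apple,Banana,Pear'

-- Returns Orange and Apple; no dynamic SQL required.
SELECT FruitName
FROM #Fruits
WHERE FruitName IN (SELECT Item FROM TVF_TallySplit(',', @List))

DROP TABLE #Fruits
```

This is the pattern that lets a stored procedure accept a delimited list as a parameter and still use a plain IN.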

Note that because I wanted to keep the function somewhat simple, it does not handle extra spaces around the commas. Single quotes within the string will also break it which limits its usage somewhat. If this is a concern for your implementation, you either need to replace the single quotes on both sides or use a different method. Despite the fact that the example above uses a list of strings, in real life situations I use this mainly for lists of uniqueidentifiers or numbers where single quotes/spaces are never an issue.

In my last post, I noted that one of the biggest differences between ISNULL and COALESCE is that ISNULL attempts to convert the second parameter to the data type of the first parameter, whereas COALESCE converts according to the data type precedence table. A reader requested that I go into more detail on what that means. At first I wasn’t sure what I was going to explain; there didn’t seem to be a lot to talk about once I linked the BOL article on Data Type Precedence (which I meant to do in my initial post but apparently never did). After thinking about it for a while, I realized that one thing that isn’t really pointed out in the BOL page is what these implicit conversions can do to performance if you aren’t paying attention. This post got a bit long.

This isn’t a topic that I’m really familiar with, so I had to do some research and tests of my own to write this. I’ve had to fix the varchar/nvarchar one several times, but others I can’t truly explain. Why does a char->varchar comparison not trigger an implicit conversion? Honestly, I’m not sure. My guess is that the optimizer is simply smart enough not to do it, but as I said, that’s just a guess. While attempting to find the answer online, I stumbled across a brilliant script written by Jonathan Kehayias that focuses on finding implicit conversions in the plan cache.

The examples below focus on non-numeric conversions. I did a good amount of testing on different numeric conversions, and although I’ve read that SQL 2000 had specific issues, I was not able to easily duplicate this with the numeric types in any compatibility level with my 2K8 installation (so I left those examples out). If anyone has any good examples of this behavior with numeric data types, I’d be happy to add them.

Test Setup: (Note that because I am dropping/creating a real table and user defined types, you should be careful which database you execute this against. I would suggest creating a new one and executing it there)

--- Drop and Re-Create Test Table
IF EXISTS (SELECT * FROM sys.objects WHERE name = 'DTP' AND TYPE = 'U')
    DROP TABLE DTP

SELECT TOP 100000 -- If you use too few rows, the performance differences aren't as apparent.
    CAST(NEWID() AS nvarchar(60)) NVCCol,
    CAST(NEWID() AS varchar(60)) VCCol,
    CAST(NEWID() AS char(60)) CCol,
    CAST(NEWID() AS sql_variant) SQLVCVarCol
INTO DTP
FROM Util..Tally -- I keep a Tally table in a Utility database named Util.
-- Either change to the location of your tally table or use the other FROM statement below.
-- FROM master..spt_values A CROSS JOIN master..spt_values B CROSS JOIN master..spt_values C

CREATE INDEX IX_NVCCol ON DTP(NVCCol)
CREATE INDEX IX_VCCol ON DTP(VCCol)
CREATE INDEX IX_CCol ON DTP(CCol)
CREATE INDEX IX_SQLVCVarCol ON DTP(SQLVCVarCol)
GO

Tests:

-- Test 1: Compare varchar and nvarchar against varchar column.
-- These will show a massive difference because it must convert the varchar column VCCol to nvarchar to compare them.
-- In the execution plan, you will see that the first uses an index seek and the second uses an index scan.
SELECT VCCol FROM DTP WHERE VCCol = 'A'
SELECT VCCol FROM DTP WHERE VCCol = N'A'

-- Test 2: Compare varchar and nvarchar against nvarchar column.
-- These show only a very small difference because nvarchar is higher in the precedence list, so it only has to convert 'A'
-- to an nvarchar rather than the entire column.
SELECT NVCCol FROM DTP WHERE NVCCol = 'A'
SELECT NVCCol FROM DTP WHERE NVCCol = N'A'

-- Test 3: Compare varchar and nvarchar against sql_variant column.
-- Although this may seem very similar to Test 1, you won't see much of a difference here. This is because sql_variant
-- is near the top of the data type precedence list (higher than varchar/nvarchar), so you only have to convert the single value.
SELECT SQLVCVarCol FROM DTP WHERE SQLVCVarCol = 'A'
SELECT SQLVCVarCol FROM DTP WHERE SQLVCVarCol = N'A'

-- Test 4: Compare varchar and sql_variant against varchar column.
-- Here you get the massive difference you would expect, because you must convert the entire column to a sql_variant.
-- This one actually has a different plan altogether, not just an index scan.
DECLARE @a sql_variant
SET @a = 'A'
SELECT VCCol FROM DTP WHERE VCCol = 'A'
SELECT VCCol FROM DTP WHERE VCCol = @a

-- Test 5: Compare user-defined types (with bases of varchar and nvarchar) against varchar column.
-- This behaves exactly as Test 1 did. Conversions seem to be handled as they would be if the data types were the base types.
DECLARE @a UDVC
DECLARE @b UDNVC
SET @a = 'A'
SET @b = 'B'
SELECT VCCol FROM DTP WHERE VCCol = @a
SELECT VCCol FROM DTP WHERE VCCol = @b

-- Test 6: Compare char and varchar against char column.
-- There is not any performance loss here. It seems like there should be, but at least in my tests there is not.
DECLARE @a varchar(60), @b char(60)
SET @a = 'A'
SET @b = 'A'
SELECT CCol FROM DTP WHERE CCol = @b
SELECT CCol FROM DTP WHERE CCol = @a

The varchar/nvarchar conversions can be especially painful and tricky when the code is being passed in from elsewhere. One of the places I’ve seen issues with this is LINQ to SQL. It is entirely possible that it was just our setup that was mismatched (I wasn’t involved in that end of it), but I figured I’d throw it out there anyway.

One of the most common mistakes made in T-SQL is thinking that these behave identically. I’ve personally opened up a forum topic on it because I didn’t know what the difference was. This post will join a small army of other places on the net devoted to correcting this misunderstanding.

They aren’t completely dissimilar; they behave exactly as you would expect them to… with the exception of NULLs. Because nothing EQUALS NULL (dependent upon settings, see below), the difference in the internal logic matters. Gail Shaw initially explained this to me when I asked the question on the forums and I wanted to use her explanation here, but I can’t seem to find it; so here’s my own version of an explanation:

When you use IN, you’re really saying "WHERE myvalue = 'A' OR myvalue = 'B' OR myvalue = NULL". Your NULLs won’t cause the entire statement to fail because it’s only an OR.

When you use NOT IN, you’re really saying "WHERE myvalue <> 'A' AND myvalue <> 'B' AND myvalue <> NULL". This is where the problem arises. Since a NULL in SQL is an unknown value, you can’t test = or <> against it, and you get no results. Without the NULL, you’d be fine.

This issue is further complicated by the ANSI_NULLS setting. While I believe most people have this turned ON, the fact that it is an option introduces another variable into the mix. NOT IN will not fail in the same way if you have ANSI_NULLS set to OFF. (Try the above example again after changing ON to OFF)
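The behavior described above can be demonstrated with a minimal example (the table and values here are my own, assuming ANSI_NULLS ON):

```sql
SET ANSI_NULLS ON

CREATE TABLE #Vals (myvalue char(1) NULL)
INSERT INTO #Vals VALUES ('A')
INSERT INTO #Vals VALUES ('C')

-- Returns 'A': the NULL in the list is just one more OR branch.
SELECT myvalue FROM #Vals WHERE myvalue IN ('A', 'B', NULL)

-- Returns nothing: myvalue <> NULL evaluates to UNKNOWN,
-- so the ANDed predicate can never be true for any row.
SELECT myvalue FROM #Vals WHERE myvalue NOT IN ('A', 'B', NULL)

DROP TABLE #Vals
```

Remove the NULL from either list and both queries behave exactly as intuition suggests.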