Some time ago, I wrote an article to help determine the use of a SQL Server 2008 instance, since SQL Server 2008 was reaching End of Life. In that article, I shared a reasonable use of a server side trace to capture all of the logon events to the server. After all, you have to find out the source of connections and who is using the server if you need to migrate it to a newer SQL Server version. You can read that article here.

Soon after, from various sources, I received requests on how to perform a logon audit using the preferred, robust, venerable, awesome tool called Extended Events (XEvents). In response, I would share a login audit session with each person. In this article, I will share my login audit solution and give a brief explanation. I use a solution like this on more than 90% of my client servers and find it highly useful.

Auditing Logons

The events needed to capture SQL Server logon/logoff activity were not part of the original release of XEvents in 2008. The requisite events did not become available until SQL Server 2012. Why mention SQL Server 2008 at all, given that it has reached End of Life, you may ask? Well, as luck would have it, there are still plenty of 2008/R2 instances out there in the world that have yet to be upgraded, so it remains useful to point this out. This session, and this information, does NOT apply to anything prior to SQL Server 2012.

The events in XEvents that we need are called sqlserver.login and sqlserver.logout. Here is a little more info on those events, along with a query that can retrieve the details shown here.
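The query referenced here did not survive the page extraction; a query along these lines, against the standard XE metadata views, returns the same event descriptions:

```sql
-- Retrieve metadata for the login/logout events
-- (a sketch of the query referenced above)
SELECT xp.name AS package_name
	, xo.name AS event_name
	, xo.description
FROM sys.dm_xe_objects xo
	INNER JOIN sys.dm_xe_packages xp
		ON xo.package_guid = xp.guid
WHERE xo.object_type = 'event'
	AND xo.name IN ('login', 'logout');
```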

As shown in the preceding image, there are two events of interest for auditing logon events – sort of. These events only capture successful connections (including connection pooling reuse) and the logoff events. To take this a step further and capture the failed logins, we can also use an event that was available in 2008: error_reported. The logout event is optional in this case but could be of use during troubleshooting under certain circumstances.

Focusing on the login and error_reported events, I have two very useful XEvent sessions I like to use on my client servers. The first part of the audit logon solution is the audit of successful logins with this XEvent session.
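The session script displayed at this point in the original did not carry over. A minimal sketch of a successful-login session, patterned after the failed-login session that follows (the session name and file path here are placeholders of my own), would look something like this:

```sql
USE master;
GO

-- Sketch of the successful-login audit session; name and path are placeholders
CREATE EVENT SESSION [Audit_Logon] ON SERVER
ADD EVENT sqlserver.login (
	SET collect_options_text = (1)
	ACTION (
		package0.event_sequence
		, sqlserver.client_app_name
		, sqlserver.client_hostname
		, sqlserver.client_pid
		, sqlserver.nt_username
		, sqlserver.server_principal_name
		, sqlserver.session_nt_username
		, sqlserver.username
	)
)
ADD TARGET package0.event_file
	(SET filename = N'C:\Database\XE\Audit_Logon.xel')
WITH (MAX_MEMORY = 4096KB
	, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
	, MAX_DISPATCH_LATENCY = 3 SECONDS
	, TRACK_CAUSALITY = ON
	, STARTUP_STATE = ON);
GO
```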

Next, I like to use a separate session for the failed logins. This makes it easy to keep the audit files separate and only share the necessary files to others (e.g. only the failed logins instead of giving them both successful and failed which may be way too much information). In addition, I like this method because it makes searching the audit files easier and more focused.

failed logins (Transact-SQL):

USE master;
GO

-- Create the Event Session
IF EXISTS (SELECT *
		FROM sys.server_event_sessions
		WHERE name = 'Audit_FailedLogon')
	DROP EVENT SESSION Audit_FailedLogon
	ON SERVER;
GO

EXECUTE xp_create_subdir 'C:\Database\XE\';
GO

CREATE EVENT SESSION [Audit_FailedLogon] ON SERVER
ADD EVENT sqlserver.error_reported (
	ACTION (
		package0.event_sequence
		, sqlserver.client_app_name
		, sqlserver.client_hostname
		, sqlserver.client_pid
		, sqlserver.nt_username
		, sqlserver.server_principal_name
		, sqlserver.session_nt_username
		, sqlserver.transaction_sequence
		, sqlserver.username
	)
	WHERE ([severity] = (14) AND [error_number] = (18456) AND [state] > (1)) )
ADD TARGET package0.event_file
	(SET filename = N'C:\Database\XE\Audit_FailedLogon.xel')
WITH (MAX_MEMORY = 4096KB
	, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
	, MAX_DISPATCH_LATENCY = 3 SECONDS
	, MAX_EVENT_SIZE = 0KB
	, MEMORY_PARTITION_MODE = NONE
	, TRACK_CAUSALITY = ON
	, STARTUP_STATE = ON
	);

In this second session, I have the predicate defined such that I will only get the failed login events. Funny thing about this: I can often catch 3rd party vendors trying to log in to client systems in very odd ways (including putting the password into the user name box, which in turn causes the password to be stored in clear text in the error log).

Could these sessions be combined into a single all-purpose session? Sure! That is up to your discretion. I prefer the separate sessions for ease of management and review.

The Extra Mile

Let’s say we wanted to go ahead and add the logout event to our session. We can proceed with a slight modification to the successful login session so that it would look something like the following.
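That modified script was shown as an image in the original; the gist is to add one more event to the successful-login session, along these lines (the session name here is a placeholder):

```sql
-- Add the logout event to an existing login-audit session
-- (session name is a placeholder for whatever your session is called)
ALTER EVENT SESSION [Audit_Logon] ON SERVER
ADD EVENT sqlserver.logout (
	ACTION (
		package0.event_sequence
		, sqlserver.client_app_name
		, sqlserver.client_hostname
		, sqlserver.server_principal_name
		, sqlserver.username
	)
);
```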

And, to show what might occur with this session, I can see data similar to this for the logout event.

See how easy that is!

The Wrap

This article has shown how to audit the logon events for SQL Server 2012 and beyond through the use of XEvents. It also happens to be an excellent follow up to another recent article – here. An essential function of any DBA is to have a clear understanding of the activity (logons) that is occurring on the servers. This audit session will help you do exactly that!

Through the power of XEvents, we can accomplish a great many things and reach extensive insights into our database footprint. This post is just one of many in the XE Series, of which you can read more – here.

In addition, this post is similar to many in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Cannot resolve the collation conflict between “pick a collation” and “pick another collation” in the equal to operation.

This kind of error seems pretty straightforward, and it is safe to say that it generally happens in a query. When you know what the query is and you get this error, it is pretty easy to spot and fix the problem. At least you can band-aid it well enough to get past the error with a little use of the COLLATE clause in your query.

But what if the error you are seeing is popping up when you are trying to use Management Studio (SSMS)? The error is less than helpful and can look a little something like this.

And for a little context in how that error message popped up, here is a little better image.

As you can see here, the error message popped up just trying to get into the Extended Events Sessions folder in SSMS. Just trying to expand the folder to see a list of the sessions on the server throws this error. So what is really happening?

Background

First, let’s cover a little background on this problem. This server was poorly configured. There was considerable "set it and forget it" action with this server – meaning all defaults and a lot of bad choices. In an effort to update the instance to meet corporate standards, the client attempted to change the server collation to "Latin1_General_100_CI_AS_SC".

Changing collations on a server is a rare occurrence in the grand scheme of things. Done at the right time, all will go well. Done on a poorly configured server, there is a little more risk. In this case, the collation change had failed for the server. To figure out whether the server collation change had failed, we can run a few tests. First though, let’s see what the query looks like that was causing the error from within SSMS.

The query here is one case where Profiler can actually come in handy (not that I would rely on it too much) to help retrieve it, given that there is a collation issue with Extended Events (XE). We can now take this a step further and start to show that the server collation change failed. Let’s check the collations on each of those objects and then validate the server collation.
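The collation checks were shown as screenshots in the original; queries along these lines, against the standard catalog views and server properties, return the same details:

```sql
-- Server-level collation
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

-- Collation of master and every user database
SELECT name
	, collation_name
FROM sys.databases
ORDER BY database_id;
```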

As we can see from the image, the collation for the master database was indeed changed (it is done before the user databases), but the change for the server collation failed. As it turns out, due to the sequence of events, if there is a failure changing the collation in a user database, the collation change for the server fails, but the master database will indeed change. We can further confirm this from the output of the attempted collation change. Here is a snippet from a failed change (sadly it does not show the user database change failure, due to the extent of the output).

So, how do we go about fixing it?

The Fix

Well, since we know where the failure occurs, the fix becomes pretty apparent. It may be a lot of work – but it isn’t too bad. The fix is to detach all user databases, attempt the collation change again, and then re-attach all databases. To help with user database detach and re-attach, I would recommend using a script that can generate the necessary detach and attach statements for all databases. It will save a bit of time.
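No such generator script was included in the original; a rough sketch, built on the catalog views (review the generated statements before running them, and note that databases with unusual file paths may need hand editing), could look like this:

```sql
-- Generate detach statements for all user databases
SELECT 'EXECUTE sp_detach_db @dbname = N''' + name + ''';' AS DetachCmd
FROM sys.databases
WHERE database_id > 4;

-- Generate the matching attach statements, one FILENAME clause per file
SELECT 'CREATE DATABASE ' + QUOTENAME(d.name) + ' ON '
	+ STUFF((SELECT ', (FILENAME = N''' + mf.physical_name + ''')'
			FROM sys.master_files mf
			WHERE mf.database_id = d.database_id
			ORDER BY mf.file_id
			FOR XML PATH('')), 1, 2, '')
	+ ' FOR ATTACH;' AS AttachCmd
FROM sys.databases d
WHERE d.database_id > 4;
```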

Once reattached, I can try to look at the XE sessions from SSMS with much different results (shown here).

You see? It is as easy as that.

Final Thoughts

The default collation for SQL Server is a pretty bad idea. Sure, it works but so does SQL Server 7. When you have the opportunity to update to more current technologies, it is a good idea. Sometimes though, that upgrade can come with some pain. This article shows how to alleviate one such pain point by fixing problems related to collation conflicts and XE.

This is yet another tool in the ever popular and constantly growing library of Extended Events. Are you still stuck on Profiler? Try one of these articles to help remedy that problem (here and here)

The Extended Events library has just about something for everybody. Please take the time to explore it and become more familiar with this fabulous tool!

Tracing a query is a common task for a DBA. The methods employed are pretty varied. I seem to encounter a new method here and there throughout my journeys and projects.

One method is surprisingly easy and I don’t recall ever paying it any attention until recently. Upon discovering this method, I was appalled that there is still no equivalent method within Extended Events (XE). In this article, I am going to share that method and a viable equivalent for XE until an appropriate GUI integration is created for SSMS.

Query Tracing

First things first, how do we find this supremely easy tracing method? When do we want to use it? Let’s answer that second question first. This is a method we will want to use whenever we have a query that we have just been handed and we want/need to trace the query to figure out things such as resource utilization. To get to this method, we simply right click in the query pane and select “Trace Query in SQL Server Profiler” from the context menu. The following image illustrates that menu option.

After selecting that menu option to trace the query, the next step is amazingly simple too – just execute the query. That is fabulous – if you want to use a tool as inefficient and outdated as Profiler. How do we do this in XEvents? First, we need to capture a bit more detail from this Profiler style trace.

Let’s delve into the properties for that trace session we just started (from the previous image).

Once we have the properties open, the next step is to click the “Column Filters…” button as shown in the following image.

After clicking the “Column Filters…” button, a new window will open, revealing any filters that were defined for that query we wanted to trace. In this case, the SPID for the query window is transferred to the query trace. Thus, in theory, this Profiler trace will only capture data related to the SPID in question. With the filter in hand, and also noting the events being trapped from the properties window, we have adequate information to create an XEvent session to perform the same functionality.

We can easily setup a session in XE through the GUI using the Standard template shown here:

And then modify it to include the missing events as shown here:

Or, one could use the easy button and take advantage of a script. The script option provides a much more robust option while also being far less repetitive than the GUI.

Trace like from menu (Transact-SQL):

USE master;
GO

-- Create the Event Session
IF EXISTS (SELECT *
		FROM sys.server_event_sessions
		WHERE name = N'TraceSQLMenu')
	DROP EVENT SESSION TraceSQLMenu
	ON SERVER;
GO

EXECUTE xp_create_subdir 'C:\Database\XE\';
GO

DECLARE @SessionId VARCHAR(32) = '59'
	, @SQL VARCHAR(MAX);

SET @SQL = 'CREATE EVENT SESSION [TraceSQLMenu]
ON SERVER
ADD EVENT sqlserver.existing_connection
	(SET collect_options_text = (1)
	ACTION
		(
		sqlserver.session_id
		, sqlserver.sql_text
		)
	WHERE ([sqlserver].[session_id] = (' + @SessionId + '))
	)
, ADD EVENT sqlserver.login
	(SET collect_options_text = (1)
	ACTION
		(
		sqlserver.session_id
		, sqlserver.sql_text
		)
	WHERE ([sqlserver].[session_id] = (' + @SessionId + '))
	)
, ADD EVENT sqlserver.logout
	(ACTION
		(
		sqlserver.session_id
		, sqlserver.sql_text
		)
	WHERE ([sqlserver].[session_id] = (' + @SessionId + '))
	)
, ADD EVENT sqlserver.rpc_starting
	(ACTION
		(
		sqlserver.session_id
		, sqlserver.sql_text
		)
	WHERE ([sqlserver].[session_id] = (' + @SessionId + '))
	)
, ADD EVENT sqlserver.sp_statement_completed
	(ACTION
		(
		sqlserver.session_id
		, sqlserver.sql_text
		)
	WHERE ([sqlserver].[session_id] = (' + @SessionId + '))
	)
, ADD EVENT sqlserver.sp_statement_starting
	(ACTION
		(
		sqlserver.session_id
		, sqlserver.sql_text
		)
	WHERE ([sqlserver].[session_id] = (' + @SessionId + '))
	)
, ADD EVENT sqlserver.sql_batch_starting
	(ACTION
		(
		sqlserver.session_id
		, sqlserver.sql_text
		)
	WHERE ([sqlserver].[session_id] = (' + @SessionId + '))
	)
, ADD EVENT sqlserver.sql_statement_starting
	(SET collect_statement = (1)
	ACTION
		(
		sqlserver.session_id
		, sqlserver.sql_text
		)
	WHERE ([sqlserver].[session_id] = (' + @SessionId + '))
	)
ADD TARGET package0.event_file
	(SET filename = N''C:\Database\XE\TraceSQLMenu'')
WITH (MAX_MEMORY = 4096KB
	, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
	, MAX_DISPATCH_LATENCY = 3 SECONDS
	, MAX_EVENT_SIZE = 0KB
	, MEMORY_PARTITION_MODE = NONE
	, TRACK_CAUSALITY = ON
	, STARTUP_STATE = ON);

ALTER EVENT SESSION TraceSQLMenu ON SERVER
STATE = START;'

PRINT @SQL;
EXECUTE (@SQL);
GO

This session is ready to roll (without a GUI access point, obviously) simply by entering the SPID of the query window in question into the @SessionId variable. Once the SPID value is entered and the script is executed, we can easily watch the live data for the SPID in question (if we wish).

So, what happens when I need to trace a different SPID? That’s easy! I just change the value of the @SessionId variable to the SPID in question and then run the script again. The script will drop and recreate the session with all of the appropriate filters in place and pointing to the correct SPID.

Final Thoughts

The ability to quickly and easily trace a query is important to database professionals. This script provides one useful alternative to trace a specific spid similar to the method of using the context menu to create the trace within SSMS and Profiler.


With SQL Server 2008 and 2008R2 limping along and becoming terminally ill (End of Life Support was July 9, 2019), it should come as no surprise that it is recommended to migrate/upgrade affected databases/servers to newer technology.

Planning to upgrade/migrate requires a fair amount of prep work. Some of that prep work involves auditing your server for any users that may still be using the instance.

Where does one even begin in order to audit those logon events in SQL 2008 or 2008R2? Some may say to use a SQL Server Audit, but that feature is Enterprise-only in 2008 and R2. If we were on a newer version of SQL Server, Extended Events would be the easy choice. Unfortunately, XE was not adequately mature in 2008 and R2: it simply doesn’t have the appropriate events in those versions of SQL Server. What’s left?

How about a server side trace? Yes, that was a bit difficult to say. A server side trace does have just the right events for us to discover who is using the server and which principals can be eradicated. A big benefit here is that a server side trace does not require a service outage, as other methods would.

Server Side Trace

How do we go about creating a trace to capture these events? It is common knowledge that using a script to create a server side trace is not very intuitive. It is also not very plausible to run a Profiler session and leave it running for weeks while you do your due diligence. There is a shortcut available that allows us to run a server side trace but it does require the use of Profiler – for just a tiny bit. You can start here to find how to do that.

Great, we have a method to create the script. What needs to go into this session? Let’s take a look at that. Ignoring the initial steps to start a profiler session (other than to use a blank template), let’s jump to the event selection tab. From there, we will make two selections (Logon and Login Failed in the Security Audit section) as shown in the next image.

Once the events are selected, go ahead and click run. From there you may stop the session and then continue with the instructions from the previous link on how to script a profiler session.

After scripting this session, I get a script that looks like the following.
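The scripted output itself did not survive the extraction. Trimmed heavily, it follows this pattern; the event IDs are the documented ones (14 = Audit Login, 20 = Audit Login Failed), the column IDs shown are a representative subset, and the trace file path is a placeholder:

```sql
-- Sketch of the scripted server side trace (heavily trimmed; the full
-- script sets many more columns per event).
DECLARE @TraceID INT
	, @maxfilesize BIGINT = 25
	, @on BIT = 1;

-- SQL Server appends .trc to the supplied file name
EXECUTE sp_trace_create @TraceID OUTPUT, 0
	, N'C:\Database\Trace\LogonAttempts', @maxfilesize, NULL;

-- Representative columns: 8 = HostName, 10 = ApplicationName,
-- 11 = LoginName, 14 = StartTime
EXECUTE sp_trace_setevent @TraceID, 14, 8, @on;   -- Audit Login
EXECUTE sp_trace_setevent @TraceID, 14, 10, @on;
EXECUTE sp_trace_setevent @TraceID, 14, 11, @on;
EXECUTE sp_trace_setevent @TraceID, 14, 14, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 8, @on;   -- Audit Login Failed
EXECUTE sp_trace_setevent @TraceID, 20, 10, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 11, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 14, @on;

-- Start the trace
EXECUTE sp_trace_setstatus @TraceID, 1;
```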

I created this script from SSMS 18.0 and find it interesting that the script says “Created by: SQL Server 2019 CTP2.4 Profiler”. Despite the very recent version of SSMS used to create this script, this script will work perfectly fine on SQL Server 2008 or R2.

Once I start the trace on a server, I am ready to do just a little bit more. I want to verify what this script means. After all, it is a bunch of numeric values. Let’s look at that with this next query and then compare it to the initial script used to create the trace. The following query requires SQL 2012 or later.

check script (Transact-SQL):

SELECT t.id AS TraceId
	, t.event_count
	, te.name AS EventName
	, te.trace_event_id
	, tc.name AS ColumnName
	, tc.trace_column_id
FROM sys.traces t
	CROSS APPLY sys.fn_trace_geteventinfo(t.id) gei
	INNER JOIN sys.trace_events te
		ON gei.eventid = te.trace_event_id
	INNER JOIN sys.trace_columns tc
		ON gei.columnid = tc.trace_column_id
WHERE t.path LIKE '%LogonAttempts%'
ORDER BY te.trace_event_id, tc.trace_column_id;

This produces output similar to the following.

As you scroll through the list, you can see the Event Name along with the column name for all events/columns that have been added to the trace. I have highlighted the first event / column mappings to illustrate this relationship.

Cool! Now, I know the session does indeed contain the requisite data that I wanted so it is time to start checking to see what is happening on the server.

Now you are all set to start figuring out which logins (if any) are still connecting to the server and using databases on that server.
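To actually review the captured activity, a query of this shape reads the trace file directly (the file path is a placeholder for wherever the trace was created):

```sql
-- Read the captured logon events back out of the trace file
SELECT te.name AS EventName
	, tg.LoginName
	, tg.HostName
	, tg.ApplicationName
	, tg.StartTime
FROM sys.fn_trace_gettable(N'C:\Database\Trace\LogonAttempts.trc', DEFAULT) tg
	INNER JOIN sys.trace_events te
		ON tg.EventClass = te.trace_event_id
ORDER BY tg.StartTime DESC;
```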

The Wrap

This article has shown how to audit the logon events for a SQL 2008/R2 instance. It also happens to be an excellent follow up to another recent article – here. As you begin to plan your migration off of the SQL 2008 dinosaur, it is essential to baseline the activity and use of the server. This audit session will help you do exactly that!

What do you do when a developer comes to you and asks, “Where did the database go? The database was there one minute, and the next it was not.” Only one thing could be worse than the feeling of losing a database on your watch, and that would be losing a production database. It’s like magic—it’s there, and then it disappears. To compound the issue, when asking people if they know what might have happened, all will typically deny, deny, deny.

What do you do when you run into that missing database situation and the inevitable denial that will ensue? This is when an audit can save the day. Through an audit, you can discover who dropped the database and when it happened. Then you have hard data to take back to the team to again ask what happened. Taking the info from a previous article of mine, we can alter the script I published there and re-use it for our needs here.

default trace (Transact-SQL):

DECLARE @DBName sysname = 'AdventureWorks2014'
	, @d1 DATETIME
	, @diff INT;

SELECT ObjectName
	, ObjectID
	, DatabaseName
	, StartTime
	, EventClass
	, EventSubClass
	, ObjectType
	, ServerName
	, LoginName
	, NTUserName
	, ApplicationName
	, CASE EventClass
		WHEN 46
			THEN 'CREATE'
		WHEN 47
			THEN 'DROP'
		--WHEN 164
		--	THEN 'ALTER'
		END AS DDLOperation
INTO #temp_trace
FROM sys.fn_trace_gettable(CONVERT(VARCHAR(150),
		(SELECT REVERSE(SUBSTRING(REVERSE(path),
				CHARINDEX('\', REVERSE(path)), 256)) + 'log.trc'
			FROM sys.traces
			WHERE is_default = 1)), DEFAULT) T
WHERE EventClass IN (46, 47)
	AND EventSubclass = 0
	AND ObjectType = 16964 -- i just want database related events
	AND DatabaseName = ISNULL(@DBName, DatabaseName);

SELECT @d1 = MIN(StartTime)
FROM #temp_trace;

SET @diff = DATEDIFF(hh, @d1, GETDATE());

SELECT @diff AS HrsSinceFirstChange
	, @d1 AS FirstChangeDate
	, sv.name AS obj_type_desc
	, tt.ObjectType
	, tt.DDLOperation
	, tt.DatabaseName, tt.ObjectName, tt.StartTime
	, tt.EventClass, tt.EventSubClass
	, tt.ServerName, tt.LoginName, tt.NTUserName
	, tt.ApplicationName
	, (DENSE_RANK() OVER (ORDER BY ObjectName, ObjectType)) % 2 AS l1
	, (DENSE_RANK() OVER (ORDER BY ObjectName, ObjectType, StartTime)) % 2 AS l2
FROM #temp_trace tt
	INNER JOIN sys.trace_events AS te
		ON tt.EventClass = te.trace_event_id
	INNER JOIN sys.trace_subclass_values tsv
		ON tt.EventClass = tsv.trace_event_id
		AND tt.ObjectType = tsv.subclass_value
	INNER JOIN master.dbo.spt_values sv
		ON tsv.subclass_value = sv.number
		AND sv.type = 'EOD'
ORDER BY StartTime DESC;

DROP TABLE #temp_trace;

This script will now query the default trace to determine when a database was dropped or created. I am limiting this result set through the use of this filter: ObjectType = 16964. In addition to that, I have also trimmed the result-set down to just look for drop or create events.

This is the type of information that is already available within the default trace. What if you wished to not be entirely dependent on the default trace for that information? As luck would have it, you don’t need to be solely dependent on the default trace. Instead you can use the robust tool called extended events. If you would like to be able to take advantage of Extended Events to track this information, I recommend you read my follow-up article here.

This has been a republication of my original content first posted here.
