During the research I faced two problems:
1. I had to wait several seconds to get the full result set.
2. I had to scroll down and shuffle through 115 different data sets to find the counter I wanted.

To eliminate both of these problems, all of these different counters have to be in the same table/data set.
That will make research easier, and the data can be stored or compared with a baseline.

So, in order to achieve that, we have to parse the results of "DBCC MEMORYSTATUS".

I did a quick search on the Internet and found several parsing solutions, but all of them are based on PowerShell.
I have no problem with PowerShell, but the whole idea seems wrong.
It is as if, when asking for driving directions, I could only get the answer in Chinese.

Out of frustration, I decided to make it available in native SQL.
So, here is the script:
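A minimal sketch of the approach the script takes is below. The temporary table name and the fixed-width split positions are illustrative assumptions and may need adjustment for your instance; TRY_CAST also requires SQL Server 2012 or later:

```sql
-- Sketch: capture DBCC MEMORYSTATUS output into one data set.
-- Assumes xp_cmdshell is enabled; split positions are illustrative.
IF OBJECT_ID('tempdb..#MemoryStatus') IS NOT NULL
    DROP TABLE #MemoryStatus;
CREATE TABLE #MemoryStatus (Line NVARCHAR(4000));

-- -E: trusted connection; -S localhost: default local server;
-- -h -1: suppress column headers in the output.
INSERT INTO #MemoryStatus (Line)
EXEC xp_cmdshell 'SQLCMD -E -S localhost -Q "DBCC MEMORYSTATUS" -h -1';

-- Split each "Counter <spaces> Value" line into a name/value pair.
SELECT
    RTRIM(LEFT(Line, 45)) AS CounterName,
    TRY_CAST(LTRIM(SUBSTRING(Line, 46, 60)) AS MONEY) AS Value
FROM #MemoryStatus
WHERE Line IS NOT NULL
  AND TRY_CAST(LTRIM(SUBSTRING(Line, 46, 60)) AS MONEY) IS NOT NULL;
```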

Side notes:

1. The script uses the "xp_cmdshell" command, which may be disabled on your server. If you have gotten to the point of using the "DBCC MEMORYSTATUS" command, you should have enough privileges to turn it on and off. Also, there are plenty of resources on the Internet on how to enable "xp_cmdshell".

2. Within the "xp_cmdshell" command I use a trusted connection ("-E") and the default local server ("-S localhost"). These parameters can be adjusted for your instance/environment.

3. I've tested it on SQL Server 2014, but I'm pretty sure the script should work on versions from 2008 to 2016. For SQL Server 2005 it would require some adjustments.

4. For the Value column I've used the "MONEY" data type, which limits the amount of "Available Virtual Memory" to 838 TB. It is possible to use BIGINT instead, but "Memory Broker Clerk (Buffer Pool)" returns decimal values, and that might be a problem. If you'd like, you can declare the Value column as "DECIMAL(38,6)", but that, I think, would be for an extreme case you might want to catch.

Saturday, August 27, 2016

Usually I write my blog posts after visiting SQL Saturday events, but I want to change that tradition and talk about one before the actual event.

In this post I want to talk about the sessions I have a personal interest in, and I understand that it might draw people away from other sessions. However, it will definitely draw some audience in, and I hope those people will thank me for the additional information, suggestions, and guidance.

Slot #1. 9:00 A.M. - 10:00 A.M.

In that slot there is ONLY ONE session I'm going to visit. That is my own session, "Inside of Indexes". I'll be talking about:
- What indexes are;
- How SQL Server uses different types of indexes to retrieve data in the best possible way;
- What page splits are and how they impact performance;
and some more.
I hope my session will be very interesting, but I'm afraid the people interested in indexing will be split between my session and Jeffrey Garbus' session "Indexing for performance".

If I didn't have my session in the very first slot I'd definitely go to Aaron Nelson's session "Achieve More in Less Time by Using the New SQL PowerShell".

Slot #2. 10:15 A.M. - 11:15 A.M.

In the second slot the winner, without a doubt, is Adam Machanic's "SQL Server 2016 "Hidden Gems"". The room has to be big enough to accommodate all the attendees. If you desperately want to attend his session, I'd suggest coming to the previous session in that room, Angela Henry's "Lightning Talks", to secure your seat.

Even though Adam Machanic will be the shining star of that slot, I won't go to his session because I've already attended it at another event.

The second session in priority is Rick Heiges' "Winning with Columnstore - A Customer Story". Since Microsoft nailed down columnstores in SQL Server 2016, and because of Rick's HIP (High Influential Power), that would be the session to visit.

Despite all the advantages of Rick's presentation, I won't go there either. The trick is: I want to attend his session two weeks later at SQL Saturday #545 in Pittsburgh.
So, I'll have a tough choice between Rushabh Mehta's "Mastering and Enhancing Company Data" and William Assaf's "SQL Server Permissions and Security Principals".

Slot #3. 11:30 A.M. - 12:30 P.M.

The best and "must go" session in the third slot is Geoff Hiten's "What’s your Imperative doing in my Declarative?". The name is not very descriptive, but that session can have an inventor's light-bulb effect on any beginner SQL developer who just graduated from college with most of their experience in Java and C#. I guarantee there will be an "AHA!" moment during that session, which will shift the paradigm of your programming experience for life.

So, because I already know all of that, I won't go to that session.
Next in line is Tom Norman's "Encrypting Data within Sql Server". I'm from the DC region, and "Security" here is always the #1 priority.

Slot #4. 1:45 P.M. - 2:45 P.M.

The after-lunch session. There will be a lot of other performance tuning topics I like, but my choice will be Brian Carrig's session "Managing Very Large Databases with SQL Server".

Slot #5. 3:00 P.M. - 4:00 P.M.

Kevin Feasel's "APPLY Yourself", about the art of using the "APPLY" operator, should be a very good session.
Another wonderful session is Jonathan Buller's "Tips and Tricks for the SSIS Catalog", about the new abilities for managing SSIS packages directly from SQL. That topic is not yet fully covered in the SQL community, and I'm waiting for the time when we will discuss parallel execution of multiple queries using the SSIS catalog, with SSIS packages handling their own scheduling by manipulating SSISDB.

Because it will be the last slot of sessions, and everybody, including me, will feel pretty tired, I'll probably go and relax listening to "Replication v. AlwaysOn, the great debate", hosted by Chris Skorlinski and Curt Mathews. They encourage you to bring your own questions to that discussion, so I assume several very good blog posts might be delivered based on that conversation alone.

See you in Charlotte!
And don't hesitate to smile when I take your picture.

#1 - INSTANCE and #3 - WINDOWS are the most common authentication types: they are simply SQL and Windows authentication.

I will review the more exotic ones: #0 - NONE and #2 - DATABASE.

SQL Server User with "NONE" authentication:

"TestUser0" with NONE authentication is just a database user without a SQL Server login.
That means the user can't authenticate to SQL Server. Because of that, it can't be assigned to any person or application using SQL Server.

That user account can be used within "EXECUTE AS" in functions, stored procedures, and triggers.
It can own schemas and have privileges and permissions, which allows a restricted user, through stored procedures or functions, to access objects and perform actions he or she is not directly granted.

For example, I can try to do a select from a newly created table as user "TestUser1".

However, if I create a stored procedure with "EXECUTE AS" user "TestUser0" and grant "TestUser1" access to that procedure, then that user will be able to successfully see the content of the new table:
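The setup can be sketched as follows; the table and procedure names are illustrative assumptions:

```sql
-- Sketch: TestUser1 has no rights on dbo.TestTable, but can read it
-- through a procedure executing as the login-less user TestUser0.
-- Table and procedure names are illustrative.
GRANT SELECT ON dbo.TestTable TO [TestUser0];
GO
CREATE PROCEDURE dbo.usp_ReadTestTable
WITH EXECUTE AS 'TestUser0'
AS
    SELECT * FROM dbo.TestTable;
GO
GRANT EXECUTE ON dbo.usp_ReadTestTable TO [TestUser1];
GO
-- TestUser1 cannot run "SELECT * FROM dbo.TestTable" directly,
-- but "EXEC dbo.usp_ReadTestTable" succeeds.
```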

Creating a user with NONE authentication is pretty simple; you just specify that you want to create a user without a SQL Server login:

USE TestDB;
GO
CREATE USER [TestUser0] WITHOUT LOGIN;
GO

SQL Server User with "DATABASE" authentication:

That is an absolutely unique type of authentication.
At first, when I tried to connect to SQL Server as "TestUser2", I got an error:

Then I've tried to go to additional connection "Options":

And explicitly specified the name of the database I want to connect to:

That did the trick. I was able to connect and see my database:

Wow! I can see ONLY the one database I'm assigned to.

I've tried to see what else I can see:

SELECT * FROM sys.databases;

And here is the list of databases I can really see:

Along with "TestDB", I'm exposed to "master" and "tempdb".
I can see data from tables in "master" and I can create "#Temp" tables, but I can write directly only to "TestDB".

Now, here is how to set up a "DATABASE" authenticated user. It is pretty simple; just specify a user name in the current database with a password:

USE TestDB;
GO
CREATE USER [TestUser2] WITH PASSWORD = 'TestUser2';
GO

Actually, that is not all. Before you are able to set up a database-authenticated user, you have to switch your database into partial containment mode, and to do so, you have to enable containment at the server level first:
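The two steps above look like this (run the server-level option first, then switch the database):

```sql
-- Enable contained database authentication at the server level:
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO
-- Switch the database into partial containment mode:
ALTER DATABASE TestDB SET CONTAINMENT = PARTIAL;
GO
```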

Looks cool?
Yep, it is really cool that you can COMPLETELY separate a user from ANY OTHER database on your server!

BUT. There is always a "BUT". There is no "Silver Bullet" and no "Free Cheese".
Partially contained databases cannot use replication, change data capture, change tracking, and some other features.

The good news: you are good to go with AlwaysOn.

Conclusion:

Because contained database users can easily coexist with all other types of users, and they are highly restricted within the boundaries of one database, since SQL Server 2014 they can be a very good addition to your tool set of security features.

I will try to fix that problem the conventional way, by trying to shrink the database to 20% of its current size:

DBCC SHRINKDATABASE('TestFF',20);

As a result, I got the following message:

DBCC SHRINKDATABASE: File ID 1 of database ID 5 was skipped because the file does not have enough free space to reclaim.
DBCC SHRINKDATABASE: File ID 2 of database ID 5 was skipped because the file does not have enough free space to reclaim.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

What is that??? I have 85% of free space in my File 1 !!! What is the problem?

I will try to target an individual file and reduce its size to ~2 GB:

DBCC SHRINKFILE('TestFF',2000);

After waiting for almost 2 minutes, I got the following error:

Msg 0, Level 11, State 0, Line 144
A severe error occurred on the current command. The results, if any, should be discarded.

Then I tried to reduce the file size by whatever I could:

DBCC SHRINKFILE('TestFF');

Almost immediately I got the following result:

After checking the size again, I see very strange behavior: the size of the file has not been reduced, but the amount of USED space went down by 300 MB:

Have you ever been in this situation before, where no matter what you do, you can't shrink your database?

So, now it is time to see WHY SQL Server can't shrink our file.
I use the following script to see object allocations/fragmentation in a data file:
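The core idea can be sketched with the undocumented DMV sys.dm_db_database_page_allocations (SQL Server 2012 and later; the exact script I used also visualizes the result, which is omitted here):

```sql
-- Sketch: per-table page ranges in the current database.
-- Tables whose pages sit near the end of the file appear first;
-- those are the ones blocking a shrink.
SELECT
    OBJECT_NAME(object_id)      AS TableName,
    allocated_page_file_id      AS FileID,
    MIN(allocated_page_page_id) AS FirstPage,
    MAX(allocated_page_page_id) AS LastPage,
    COUNT(*)                    AS Pages
FROM sys.dm_db_database_page_allocations(DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE object_id IS NOT NULL
GROUP BY object_id, allocated_page_file_id
ORDER BY LastPage DESC;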

As you can see, I definitely have a lot of free space, but my data are spread so widely across the file, and especially close to its end, that there is no way to make the file smaller.

If we zoom in at the very tail, we can figure out the names of the tables at the very end of the file which prevent it from shrinking:

After I performed an index rebuild for all user tables, the page allocation picture changed to this:
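A quick way to rebuild everything at once, using the undocumented but widely known sp_MSforeachtable procedure (a sketch, not necessarily the exact command I ran):

```sql
-- Rebuild all indexes on every user table; rebuilt pages tend to be
-- reallocated toward the front of the data file.
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD;';
```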

As you can see, most of the tables were moved to the beginning of the database, but some are still wandering somewhere in the middle. Let's zoom in and figure out which tables might prevent shrinking of the file:

Unfortunately, it is a system table, and we can do nothing about it.

However, I will try to shrink the file again:

DBCC SHRINKFILE('TestFF',2000);

This time I was luckier: not only did I not get an error, but my file actually shrank:

Here is how fragmentation looked after the Shrink:

The main conclusion of this exercise: if you struggle with shrinking your database, look at your index fragmentation; it can cause errors and the inability to shrink files.

Monday, August 22, 2016

It might happen that you can't access SQL Server's shared drives, but you desperately need to copy a file from the server to another location.

Here is the solution I have:

Step 1. Reading a file.

First, you have to read the file you want to copy into SQL Server. You have to choose a database to perform that action: it can be a test database, a new database created for this purpose, or even TempDB. There are only two requirements for the database:
- It must not be a production database;
- It should have enough space to accommodate the file you want to copy.

So, as the first step of copying, we create a table in TempDB and read the backup file into that table:
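A sketch of this step, reading the file as a single BLOB via OPENROWSET (the file path is illustrative; the table and column names match the BCP prompts in Step 2):

```sql
USE tempdb;
GO
CREATE TABLE tbl_StoredFile (StoredFile VARBINARY(MAX));
GO
-- Read the whole file as one binary value:
INSERT INTO tbl_StoredFile (StoredFile)
SELECT BulkColumn
FROM OPENROWSET(BULK N'E:\Backup\TestDB.bak', SINGLE_BLOB) AS f;
GO
```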

Step 2. Extract a File.

From this point you leave SSMS and run NT shell commands.
First press Win+R, type "CMD", and press Enter.
That opens a new "CMD.EXE" window.
You have to choose a folder on your computer where you will extract your file.
You also have to provide the SQL Server name and credentials for it:
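The command looks roughly like this (server name, user name, and output path are placeholders for your environment; with no format switches, BCP walks you through the prompts described below):

```shell
bcp tempdb.dbo.tbl_StoredFile out C:\Temp\TestDB.bak -S MyServer -U MyUser
```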

- After you fill in the server name and user name and press Enter, the BCP utility will ask for your password.
- Then it will ask you to "Enter prefix-length of field StoredFile [8]:". By default it is "8", but you have to enter "0" and press Enter.
- Then it will ask "Enter length of field StoredFile [0]:"; just press Enter.
- Then it will ask "Enter field terminator [none]:"; just press Enter.
- Then it will ask if you want to save this format information; answer "n" and press Enter.

As a result, the file will be extracted to your file system.

Step 3. Cleanup.

Just clean up the space in your database by dropping the table:

USE tempdb
GO
DROP TABLE tbl_StoredFile;

As you can see, it is extremely easy to copy files using SQL Server; you just need enough free space in a database to load the file.

That means ALL our code which uses the OLD linked server name now points to the new SQL Server.
Be very careful with that tool. You can potentially create a cross-reference like this:

Linked Server "SQL2016_01" is now referring to actual SQL Server "SQL2016_02" and Linked Server "SQL2016_02" is now referring to actual SQL Server "SQL2016_01".
That is a complete mess, which can lead not only to loss or discrepancy of data, but easily to the loss of a job.

However, it generates the following error:
Msg 229, Level 14, State 5, Procedure xp_readerrorlog, Line 1 [Batch Start Line 39]
The EXECUTE permission was denied on the object 'xp_readerrorlog', database 'mssqlsystemresource', schema 'sys'.

To give the right to see the Error Log, the DBA has to create a user in the "master" database and grant rights to execute "xp_readerrorlog".

Step #2. Giving rights to execute "xp_readerrorlog".

USE [master]
GO
CREATE USER [MrConsultant] FOR LOGIN [MrConsultant]
GO
GRANT EXECUTE ON xp_readerrorlog TO [MrConsultant]
GO

Then the consultant tries to execute several commands to get an idea of what is going on on the server:

SELECT * FROM sys.dm_exec_connections;
SELECT TOP 10 * FROM sys.dm_exec_cached_plans;
SELECT TOP 10 * FROM sys.dm_exec_query_stats;
SELECT * FROM sys.dm_io_virtual_file_stats(NULL, NULL);
SELECT * FROM sys.dm_server_services;
SELECT * FROM sys.dm_os_sys_info;
SELECT * FROM sys.dm_os_windows_info;
SELECT * FROM sys.dm_os_buffer_descriptors;
SELECT * FROM sys.dm_os_sys_memory;
SELECT * FROM sys.dm_os_process_memory;
SELECT * FROM sys.dm_os_performance_counters;

However, he/she immediately gets the following error for each of these selects:
Msg 300, Level 14, State 1, Line 3
VIEW SERVER STATE permission was denied on object 'server', database 'master'.
Msg 297, Level 16, State 1, Line 3
The user does not have permission to perform this action.

So, SQL DBA has to solve the problem.

Step #3. Giving "VIEW SERVER STATE" permissions.

To give "VIEW SERVER STATE" permissions, the DBA has to execute the following statement:

USE [master]
GO
GRANT VIEW SERVER STATE TO [MrConsultant];
GO

To see the list of databases, their sizes, file names, and locations, the consultant has to run the following query:

SELECT * FROM sys.master_files

However, there is nothing in return.
The DBA has to grant permissions to see the databases' definitions:

Step #4. Granting databases' definition access.

USE [master]
GO
GRANT VIEW ANY DEFINITION TO [MrConsultant];
GO

The main production database in the system is "AdventureWorks2014". The consultant tries to access that database, but gets the following error:
Msg 916, Level 14, State 1, Line 20
The server principal "MrConsultant" is not able to access the database "AdventureWorks2014" under the current security context.

Step #5. Giving database access.
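The DBA maps the existing login to a user inside the database (the same statement appears in the combined script at the end of this post):

```sql
USE [AdventureWorks2014]
GO
CREATE USER [MrConsultant] FOR LOGIN [MrConsultant];
GO
```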

In order to troubleshoot a query, Mr. Consultant runs the following statement:

DBCC SHOW_STATISTICS('Person.Person','PK_Person_BusinessEntityID');

And gets the following errors:
Msg 229, Level 14, State 5, Line 34
The SELECT permission was denied on the object 'Person', database 'AdventureWorks2014', schema 'Person'.
Msg 2557, Level 16, State 7, Line 34
User 'MrConsultant' does not have permission to run DBCC SHOW_STATISTICS for object 'Person.Person'.

The consultant needs to read objects within the database.

Step #6. Granting Selection access.

The administrator has to grant the consultant "SELECT" rights in the database:

USE [AdventureWorks2014]
GO
GRANT SELECT TO [MrConsultant]
GO

Sorry, but the contractor has to be able to see your underlying data. If you need to keep the data secured, encrypt it.

Now Mr. Consultant wants to see the actual query plan generated by a query:

SELECT * FROM Person.Person;

And gets an error:
Msg 262, Level 14, State 4, Line 30
SHOWPLAN permission denied in database 'AdventureWorks2014'.
The DBA has to allow him to see an execution plan.

Step #7. Granting Showplan access.

USE [AdventureWorks2014]
GO
GRANT SHOWPLAN TO [MrConsultant]
GO

From this point the consultant has most of the rights needed for performance and error troubleshooting.
He/she does not have any rights to:
- Insert/Update/Delete any record;
- Create/Alter/Drop any database object;
- Create/Drop a login or user;
- Grant/Deny/Revoke any permissions;
- Run DBCC commands such as "CHECKDB", "FREEPROCCACHE", "DROPCLEANBUFFERS", "PAGE", etc.

That is the minimal level of trust you can maintain as a DBA with an external contractor while still allowing him/her to perform the required research.

If the contractor is asked to make any changes, the DBA can assist with object creation or grant individual permissions on the specific objects.

At the end, I will combine all the scripts into one. If the contractor needs access to more than one database, you have to repeat the last three steps for every database he/she needs access to:

USE [master]
GO
CREATE USER [MrConsultant] FOR LOGIN [MrConsultant];
GO
GRANT EXECUTE ON xp_readerrorlog TO [MrConsultant];
GO
GRANT VIEW SERVER STATE TO [MrConsultant];
GO
GRANT VIEW ANY DEFINITION TO [MrConsultant];
GO
USE [AdventureWorks2014]
GO
CREATE USER [MrConsultant] FOR LOGIN [MrConsultant];
GO
GRANT SELECT TO [MrConsultant];
GO
GRANT SHOWPLAN TO [MrConsultant];
GO

Step #Last One. After Consultant is done...

After the consultant has helped your company solve a problem, do not forget to clean up after him/her:
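A minimal cleanup sketch, mirroring the grants above (repeat the database-level DROP USER for every database where access was granted):

```sql
USE [AdventureWorks2014]
GO
DROP USER [MrConsultant];
GO
USE [master]
GO
DROP USER [MrConsultant];
GO
DROP LOGIN [MrConsultant];
GO
```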

Solution #1. Copy Log file content into tempdb table or into another database:

In this sample code I copy the content into a temp table, a permanent table in tempdb, and a permanent table in TestDB. In doing this you have to be aware of the following:
1. Never copy Transaction Log data into the same database you are doing research in. By doing that, you will change the log itself.
2. Be aware of the size of the data. It might take a long while to extract the whole log, and the amount of space required for the table can exceed the amount of space used by the log file.

If you go down this path, you can easily select the needed data using a WHERE clause.

Solution #2. Extract only needed columns.

The function "fn_dblog" returns about 130 columns. Obviously, you do not need all of them.
You can limit the output by selecting only certain columns:
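For example, a handful of the most commonly inspected columns (the column names below exist in fn_dblog's output; the Operation filter is just an illustration):

```sql
SELECT [Current LSN],
       Operation,
       Context,
       AllocUnitName,
       [Transaction ID]
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS';
```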

This time I want to show a query that reports the top 10 SQL Server queries in your system.

I will start with a diagram:

At first sight there is nothing interesting, until you learn how to interpret the diagram.

The diagram is reporting three main parameters of SQL Server queries stored in the query plan cache:

1. The horizontal coordinate reports the amount of CPU used by a particular query. It goes from left to right. On this diagram the winner is query #1: its execution time is way higher than for any other query. Obviously, query #1 is my CPU eater.

2. The vertical coordinate reports the amount of I/O used by a particular query. It goes from bottom to top. On this diagram the winner by I/O is query #2: its I/O usage is higher than for any other query. That is my biggest I/O consumer.

3. The size of a circle indicates the number of query executions. Here my winner is query #3. That parameter might not represent resource usage well, but it is something I have to be aware of.

Besides the diagram, the query produces a list of the most crucial query characteristics:

As you can see, query #1 consumed more than 12.5 seconds of my CPU, query #2 used more than 2 GB of I/O, and query #3 has been executed more than 1K times.

As you can guess, and as you can see in the query, all parameters are presented on a logarithmic scale to keep sizes and coordinates in a manageable format.
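The aggregation behind the three parameters can be sketched like this (a simplified version of the idea, not the full reporting query; total_worker_time is in microseconds):

```sql
-- Sketch: the three diagram parameters per cached query,
-- on a base-10 logarithmic scale.
SELECT TOP 10
    query_hash,
    LOG10(NULLIF(SUM(qs.total_worker_time), 0))   AS CpuCoordinate,  -- horizontal axis
    LOG10(NULLIF(SUM(qs.total_logical_reads), 0)) AS IoCoordinate,   -- vertical axis
    LOG10(SUM(qs.execution_count))                AS CircleSize      -- bubble size
FROM sys.dm_exec_query_stats AS qs
GROUP BY query_hash
ORDER BY SUM(qs.total_worker_time) DESC;
```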

What Next?

Once we have identified the most annoying queries in our system, we can run another query to get the actual queries' texts and plans. You just have to use the value from the "query_hash" column to identify the query you want to research:
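A sketch of that lookup query (the hash literal is an illustrative placeholder; substitute the value you found):

```sql
-- Retrieve the text and cached plan for one query by its query_hash.
DECLARE @QueryHash BINARY(8) = 0x1A2B3C4D5E6F7081; -- illustrative value
SELECT qs.query_hash,
       st.text        AS QueryText,
       qp.query_plan  AS QueryPlan,
       qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.query_hash = @QueryHash;
```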