Over the past few weeks, I saw on social media that many of my #sqlfamily members were not attending the PASS Member Summit conference this year. It made me want to blog about why I am attending this year.

Many years ago, I heard that the PASS Member Summit conference could change your career as a data professional. I thought that statement was a great marketing pitch until I attended for the first time in 2011. These days, I get more excited for other people than myself. With that said, here are three reasons why I am attending this year’s PASS Summit.

Local Grass Roots

Being a chapter leader, I always want to do everything I can to help my local user group members, and I love seeing them succeed and grow. In fact, it’s been fun watching some of the members grow professionally and in the community. One of Austin’s finest SQL Server presenters, Lance Tidwell, will be doing a full session at 3:30 pm on Friday on Parameter Sniffing: the Good, the Bad, and the Ugly. There is not another session I look forward to seeing more this year (yes, I am also presenting, but luckily not at the same time as Lance).

The Dream I Never Had

After coming back from my very first PASS Summit in 2011, I had all kinds of thoughts on how my career would evolve. I had dreams of making six figures, working from home, and being my own boss. I even had mentors in the PASS community who helped me realize all those dreams were possible. I never imagined that I would hire an employee directly from conversations I had with a local user group member. I tried to help Angela Tidwell (yes, Lance’s wife) break into the IT field a few times. After several conversations, I learned we could help each other out, so this week Angela became my second employee. Angela is at the PASS Member Summit this week as a first timer. I hope I can do as good a job introducing her to everyone as Tom LaRock did for me when I was a first timer.

If you are at the PASS Member Summit and you see Angela, please say hello. Just please don’t do it during the middle of Lance’s session on Friday.

The Speaker That Almost Never Presented

Many years ago, I had a boss who I knew would be a great speaker in the PASS Community. Like most people, he was afraid of public speaking. I had to dare him to go to the local Pittsburgh SQL Server User Group with me and co-present. When I say co-present, I mean he just had to stand next to me and share some real-world stories while I did the demos. Now he is speaking at the PASS Member Summit for the second time in a row. I love being able to say I knew him when. Now he is a superstar, and I look forward to watching him succeed and continue to grow in the SQL Server Community.

[Update 9:48 PST]

Why are you attending this year’s PASS Member Summit? I would love to hear your reasons. If you couldn’t make it this year, you can still watch parts of the conference live on the internet.

Today, I want to focus on how we can monitor wait statistics in an Azure SQL Database. In the past, I blogged about how you should benchmark wait stats with the box product. That process will give you misleading data in Azure SQL Database. You will want to focus on wait stats that are specific to your database, because you are using shared resources in Azure SQL Database.

Finding Database Wait Statistics

Query, we are talking to you!

With an instance of SQL Server, regardless of whether it runs on IaaS or on-premises, you would want to focus on all the waits occurring in your instance because the resources are dedicated to you.

With database as a service (DaaS), Microsoft gives you a special DMV that makes troubleshooting performance in Azure easier than any other competitor. That feature is the dm_db_wait_stats DMV. This DMV lets us get the details behind why our queries are waiting within our database specifically, not across the shared environment. Once again, it is worth repeating: these are wait statistics for our database in a shared environment.

The following script is a stored procedure I use to collect wait statistics for my Azure SQL Databases. I hope it is a helpful benchmarking tool for you when you need to troubleshoot performance in Azure SQL Database.
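
Here is a minimal sketch of the idea; the table and procedure names are placeholders rather than my exact production code:

CREATE TABLE dbo.WaitStatsBenchmark
(
    CaptureTime         DATETIME2(0) NOT NULL DEFAULT (SYSUTCDATETIME()),
    wait_type           NVARCHAR(60) NOT NULL,
    waiting_tasks_count BIGINT       NOT NULL,
    wait_time_ms        BIGINT       NOT NULL,
    signal_wait_time_ms BIGINT       NOT NULL
);
GO

CREATE PROCEDURE dbo.CollectWaitStats
AS
BEGIN
    SET NOCOUNT ON;

    -- sys.dm_db_wait_stats is scoped to this database only,
    -- unlike sys.dm_os_wait_stats which covers the whole (shared) instance.
    INSERT dbo.WaitStatsBenchmark (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_db_wait_stats
    WHERE wait_time_ms > 0;
END
GO

Schedule the procedure on an interval and diff consecutive captures to see which waits are growing over time.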

Special Wait Statistics Types

The following are wait statistics you will want to focus on specifically in Azure SQL Database. If you made it this far, I strongly encourage you to read how DTU is measured. That blog post will help you understand exactly why these waits can be signs of DTU pressure.

IO_QUEUE_LIMIT: Occurs when the asynchronous IO queue for the Azure SQL Database has too many IOs pending. Tasks trying to issue another IO are blocked on this wait type until the number of pending IOs drops below the threshold. The threshold is proportional to the DTUs assigned to the database.

LOG_RATE_GOVERNOR: Occurs when the database is waiting for quota to write to the log. Yes, Azure SQL Database caps your transaction log writes to keep you within your DTU allocation.

SOS_SCHEDULER_YIELD: Occurs when a task voluntarily yields the scheduler so other tasks can execute. During this wait, the task sits in the runnable queue waiting to get back on a scheduler. If CPU usage is what drives your DTU consumption, you will typically see these waits.

Want More Azure Articles?

If you enjoyed this blog post, I think you will also enjoy the following related blog posts.

A few months ago, I posted a question over on ask.sqlservercentral.com. In a nutshell, it was: how do you measure DTU? How could you pull the data shown in the Azure Portal graphs with T-SQL? No one answered, so this motivated me to get off my butt and answer my own question. In doing so, I wanted to share the results with all my readers as well.

I started thinking like a detective. If I was DTU how would I want to be found with T-SQL?

It was the Azure Portal in the Query Store

Knowing that both “sys.dm_db_resource_stats” and “sys.resource_stats” hold the data used to calculate DTU, I decided to leverage Query Store on an Azure SQL Database to see if I could quickly see how DTU is calculated. Behold, I was right.

Query Store

The Secret Sauce

The whole query is below. Right now, let’s just focus on the secret sauce: how the DTU percentage gets calculated. In a nutshell, the maximum of the CPU, Data IO, and Log Write percentages determines your DTU percentage. What does this mean to you? Your biggest consumer limits you. You can be using 1% of your IO and still be slowed down because CPU could be your maximum consumer resource.
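
Stripped down to just the secret sauce, a sketch of that calculation against sys.dm_db_resource_stats looks like this (the columns are the real DMV columns; taking the maximum of the three percentages as the DTU percentage is the point described above):

SELECT  end_time,
        avg_cpu_percent,
        avg_data_io_percent,
        avg_log_write_percent,
        -- DTU % is driven by whichever of the three resources is the biggest consumer
        (SELECT MAX(v)
         FROM (VALUES (avg_cpu_percent),
                      (avg_data_io_percent),
                      (avg_log_write_percent)) AS d(v)) AS estimated_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;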

A Future Blog Post

Now that we can calculate DTU, we could trend this data and automatically make changes, like moving up and down from the current performance tier. I would love to wrap this into an Azure SQL Database alert, but a process in PowerShell might be a great starting point.

I had the pleasure of catching up with John Parkinson, one of my mentors and a great friend of mine. It feels like yesterday that John, the Greater Wheeling Chapter of AITP, and I brought two SQL Saturdays to Wheeling, WV (#36 and #80).

John reached out to me to do a podcast with him talking about my SQL Saturday experiences so people in the WordPress community could learn about other all-day training events. The result is the 10-minute video which is provided below.

Unfortunately, with Azure SQL Database you are not able to take an existing SQL Server backup and restore it on an Azure SQL Database server. We are talking about Microsoft’s database as a service (DaaS) offering, not Azure VMs, also known as infrastructure as a service (IaaS). The only current way to migrate an existing database is to move the schema and the data, period. We have some good tools that can make this seamless, especially with smaller databases and outage windows. You can easily use SQLPackage.exe (my recommended tool), the SSMS wizard, or importing/exporting bacpacs.

Migrating Existing Databases to Azure SQL Database

This is great, except for the case I want to talk about today. What if you need to do a live migration of a bigger database with as little downtime as possible (the business wants no downtime)? For example, say an existing 50 GB to 500 GB database? Your only option today is a good old friend of mine called transactional replication. You see, you can configure transactional replication, let the snapshot occur, and have all the data in your current production system syncing live with your Azure SQL Database until it’s time to cut over, which makes your cutover downtime as short as possible.

Below, I will give you step-by-step instructions on how to configure your subscriber, which will be your Azure SQL Database. The publisher will be your existing production database, which could be either on-premises or on an Azure VM.

Prereqs

To make this blog post as consumable as possible, we will assume a few prereqs have been completed. Links are provided in the reference section in case you want help with the following prereqs.

The current production database has a primary key on every table that needs to be migrated. This is a requirement for transactional replication.

You have a SQL authenticated account configured for your Azure SQL Database.

A publication currently exists.

Step One:

Below, we will start creating our subscription by utilizing an existing publication. For this article, we will use AdventureWorks. I assume you have an existing publication configured on your primary server. We will use the same server for the distribution agent and the distribution database.

Next, we will walk through the wizard.

Step Two:

Select the publication you want to use for adding the new subscriber, which will be the empty Azure SQL Database. Remember, in this case we’re going to use this subscriber to migrate the schema and data to a new Azure SQL Database.

Step Three:

Now, you will need to configure the distribution properties. With Azure SQL Database you have access only to the database, not the OS, so you will need to create a push subscription. A push subscription is a nice way of saying that all the agents that initialize the data sync, capture changes, and send data are configured on the distributor server.

NOTE: If you plan on having a lot of publications, it is recommended to use a separate SQL Server to isolate your distribution databases.

Step Four:

Now we will select where our subscriber will reside. The subscriber will be the Azure SQL Database that we are migrating to.

Now you will be able to select your database. In this case, we are going to use the SQL2016ctp3 database. This is the empty database that was created in the prereqs.

Step Five:

Now that our subscriber database is selected we need to define how we are going to connect to our subscriber.

We will use SQL Server authentication with an account on your Azure SQL Database that can create the schema and move the data. You will also need to update your Azure SQL firewall to include your existing distribution SQL Server, as mentioned in the prereq section.

For the simplicity of this guide, we are going to run the distribution agent under the SQL Server Agent service account. Ideally, you would want a separate account that has only the security needed for transactional replication.

Next, we will define how often we will synchronize the data. We will select run continuously. You could also have this run on a schedule.

Now we select when to initialize the subscription. Initializing the subscription will execute the snapshot agent (process), which builds the schema and bulk inserts data from the publisher to the distributor; that data is then sent on to the subscriber.

NOTE: The snapshot process requires a schema lock on the published database, so make sure you initialize at a time when taking that schema lock is acceptable. In this article, we will initialize immediately.

Step Six:

Now the hard work is done. We will go through the final steps of adding the new subscription with the wizard provided in SSMS.

Finally, once the New Subscription Wizard succeeds, you will see your subscription as shown in the image below. You can then monitor the synchronization just like you would with any on-premises transactional replication configuration.
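
If you prefer scripting the subscription instead of clicking through the wizard, a rough T-SQL sketch of the same push subscription looks like the following, run on the publisher in the published database. The publication, server, database, login, and password values are placeholders you would replace with your own:

-- Add the Azure SQL Database as a push subscriber to the existing publication
EXEC sp_addsubscription
    @publication       = N'AdventureWorks_Pub',
    @subscriber        = N'yourserver.database.windows.net',
    @destination_db    = N'SQL2016ctp3',
    @subscription_type = N'Push',
    @sync_type         = N'automatic',
    @article           = N'all';

-- Create the distribution (push) agent job; Azure SQL Database requires SQL authentication
EXEC sp_addpushsubscription_agent
    @publication              = N'AdventureWorks_Pub',
    @subscriber               = N'yourserver.database.windows.net',
    @subscriber_db            = N'SQL2016ctp3',
    @subscriber_security_mode = 0,
    @subscriber_login         = N'repl_login',
    @subscriber_password      = N'StrongPasswordHere';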

Conclusion

Configuring an Azure SQL Database as a subscriber is a lot easier than many IT professionals would initially think. It’s also the best current option for reducing the downtime required to synchronize data during a migration of an existing database. The data sync is done in advance, and data changes are kept in sync.

Once data is synchronized, you can stop access to the current production server and remove the subscriber and let the users access the new Azure SQL Database.

Keep in mind, as of the time of publishing this blog post, an Azure SQL Database can only be a subscriber, not a publisher, in transactional replication. What does this mean to you? Do your performance and acceptance testing, because if you decide to move away from Azure SQL Database you would have to generate a bacpac, export it, and import it.

Today, I had a SQL Someday moment, and I am excited to share some news. Microsoft has chosen me to receive the Microsoft “MVP” award. I am completely shocked, as this is my first time receiving this award.

It is a tremendous honor to be given this award. Words cannot describe how humbling it is just to be nominated for this award by peers within the Data Platform community. I am blessed to be part of the community. I am thankful to work in a career that supports the ability for everyone to connect, share and learn. I look forward to doing the same as a new member of the MVP community.

Who Are Microsoft MVPs?

The following excerpt comes directly from the Microsoft MVP website.

“Microsoft Most Valuable Professionals, or MVPs, are community leaders who’ve demonstrated an exemplary commitment to helping others get the most out of their experience with Microsoft technologies. They share their exceptional passion, real-world knowledge, and technical expertise with the community and with Microsoft.”

How My Journey Started

I will never forget, and will always be thankful to, Adolph Santorine, my boss at the time, who told me, “John, I think you should get involved in our local Association for Information Technology Professionals (AITP) chapter.” He shared how it helped him personally and how he thought it could be beneficial for my career. Adolph was right. I learned so much from some well-experienced IT leaders. The Greater Wheeling Chapter of AITP is where my community involvement started, and I was lucky to start with a community that has been connecting, sharing and learning since the 1960s. When I decided to get my local community involved in the SQL Server community, my fellow AITP members had my back and helped make our event a success. From that moment, I was introduced to PASS and the SQL Server Community. I have been connecting, sharing and learning ever since.

How Will Things Change?

They will not change, and to me, that is the greatest part of this award. I still plan to connect, share and learn with as many people as possible. The data platform is evolving. I plan on being involved in the community and building relationships with individuals who are interested in seeing the Microsoft Data Platform grow and succeed.

Special Thanks

I would like to thank my family, friends, and mentors. Without their help and support, I would never be the person who I am today or tomorrow.

I would also like to thank everyone I have worked with in my career. Thank you to everyone at Deloitte, Schedule Star, Orrick Herrington & Sutcliffe, RDX, Dell and Linchpin People. I am truly blessed to have a lot of great mentors and friends. Thank you for giving me the opportunity to learn, grow, and have a career doing what I love to do.

DON’T FORGET TO NOMINATE

Finally, you might not know this, but anyone can nominate someone for a Microsoft MVP Award. You don’t have to be an MVP or a Microsoft employee. I have personally been nominating people for years and will continue to do so as an MVP. If you know of anyone who you think is deserving of the Microsoft MVP Award, nominate them.

When you add a database to an availability group, the data must be synchronized to the replicas before the database can be joined on them. In the past, data initialization has been done with mirroring, log shipping, and backups and restores. Personally, I have been a big fan of log shipping for the initial data synchronization of VLDBs, especially when you need more than two replicas. Here is how I added a 60 TB (yes, TB, not GB) database to an availability group that spanned multiple data centers.

Automatic seeding is a feature that has been in Azure SQL Database for a while; it is how the initial data synchronization occurs for geo-replication of Azure SQL Databases. Automatic seeding utilizes a VDI backup to take a copy-only backup and send it over the endpoint network to seed the replicas and then join the databases on the replicas. This eliminates the need to manually take full and log backups from the primary replica and restore them to all the secondary replicas. It will also join the database on the replicas for you.
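
For an on-premises or IaaS availability group, automatic seeding is turned on per replica. A minimal sketch (the availability group and replica names are placeholders) looks like this:

-- On the primary replica: seed this secondary automatically instead of via backup/restore
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2' WITH (SEEDING_MODE = AUTOMATIC);

-- On each secondary replica: allow the availability group to create the seeded database
ALTER AVAILABILITY GROUP [MyAG] GRANT CREATE ANY DATABASE;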

Bonus Feature of Automatic Seeding

There is also a bonus feature of automatic seeding for DBAs and information technology professionals. Even if you decide not to use automatic seeding, I recommend testing this feature, because automatic seeding can be a great way to stress your endpoint network and validate its throughput.

Background Information

This availability group has been configured with a separate 10Gbps network dedicated to endpoint traffic. Nothing else is active on the network or the replicas during the time of testing.

Setup

I configured the following performance monitor counters.

Bytes Received/sec on Secondary replicas

Bytes Sent/sec on Primary replica.

I also configured the following extended event session to monitor seeding activity on the primary and secondary replicas. We will focus on the “hadr_physical_seeding_progress” event today. We will talk about others in a future blog post.
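
A minimal session that captures just that event might look like the following sketch; the session name is a placeholder, and the file target name lines up with the autoseed* path that the script below reads from the SQL Server log directory:

-- Capture automatic seeding progress on the primary and each secondary replica
CREATE EVENT SESSION [AutoseedProgress] ON SERVER
ADD EVENT sqlserver.hadr_physical_seeding_progress
ADD TARGET package0.event_file (SET filename = N'autoseed')
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [AutoseedProgress] ON SERVER STATE = START;
GO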

The following T-SQL script is then used to read the results once your seeding process has completed. We will talk about the results in the results section below.

DECLARE @XFiles VARCHAR(300) = 'S:\MSSQL13.MSSQLSERVER\MSSQL\Log\autoseed*'
;WITH cXEvent
AS (
SELECT object_name AS event
,CONVERT(XML,event_data) AS EventXml
FROM sys.fn_xe_file_target_read_file(@XFiles, NULL,NULL,NULL)
where object_name like 'hadr_physical_seeding_progress')
SELECT
c1.value('(/event/@timestamp)[1]','datetime') AS time
,c1.value('(/event/@name)[1]','varchar(200)') AS XEventType
,c1.value('(/event/data[@name="database_id"]/value)[1]','int') AS database_id
,c1.value('(/event/data[@name="database_name"]/value)[1]','sysname') AS [database_name]
,c1.value('(/event/data[@name="transfer_rate_bytes_per_second"]/value)[1]','float') AS [transfer_rate_bytes_per_second]
,(c1.value('(/event/data[@name="transfer_rate_bytes_per_second"]/value)[1]','float')*8)/1000000.00 AS [transfer_Mbps]
,c1.value('(/event/data[@name="transferred_size_bytes"]/value)[1]','float') AS [transferred_size_bytes]
,c1.value('(/event/data[@name="database_size_bytes"]/value)[1]','float') AS [database_size_bytes]
,(c1.value('(/event/data[@name="transferred_size_bytes"]/value)[1]','float') / c1.value('(/event/data[@name="database_size_bytes"]/value)[1]','float'))*100.00 AS [PctCompleted]
,c1.value('(/event/data[@name="is_compression_enabled"]/value)[1]','varchar(200)') AS [is_compression_enabled]
,c1.value('(/event/data[@name="total_disk_io_wait_time_ms"]/value)[1]','bigint') AS [total_disk_io_wait_time_ms]
,c1.value('(/event/data[@name="total_network_wait_time_ms"]/value)[1]','int') AS [total_network_wait_time_ms]
,c1.value('(/event/data[@name="role_desc"]/value)[1]','varchar(300)') AS [role_desc]
,c1.value('(/event/data[@name="remote_machine_name"]/value)[1]','varchar(300)') AS [remote_machine_name]
,c1.value('(/event/data[@name="internal_state_desc"]/value)[1]','varchar(300)') AS [internal_state_desc]
,c1.value('(/event/data[@name="failure_code"]/value)[1]','int') AS [failure_code]
,c1.value('(/event/data[@name="failure_message"]/value)[1]','varchar(max)') AS [failure_message]
FROM cXEvent
CROSS APPLY EventXml.nodes('//event') as t1(c1)

Results

The 600 GB database took about 66 minutes to seed across the network from the primary replica to the secondary replica. I noticed 1.4 Gbps of consistent throughput during the seeding process. This makes a lot of sense, as it caps out around what the storage system can deliver in this environment.

The first thing I would look at for benchmarking network throughput is the bytes sent per second from the primary replica and the bytes received per second on the secondary replicas.

AG Seeding VLDB Primary Replica – Bytes Sent per Second

AG Seed VLDB Secondary Replica – Bytes Received per Second

I am seeing an average of around 1.4 Gbps. Normally, just looking at bytes sent and bytes received will be good enough for you to measure your throughput, especially when nothing else is utilizing the dedicated endpoint network. In the field, I usually do not see dedicated networks for endpoint traffic, so I wanted to take this a step further and monitor with some of the new extended events for automatic seeding.

Here is a look at the raw data from the extended event capture showing the progress and throughput of the seeding.

The following is a nice Excel graph showing the throughput in Mbps. I added an extra row with zero throughput just to show the rise and fall of network usage as seeding starts and completes.

I love Azure and I love PowerShell. They are a perfect marriage. It’s amazing how much automation you can do inside of Azure with some PowerShell scripting. In fact, you hardly see me using the Azure Portal these days.

Automation is great until manual processes are required. I have been able to automate a lot of things in Azure. One thing I struggled with was automating the process of logging into Azure. Well, I have figured it out so I wanted to share it with everyone.

Let me explain: for every single script I have, I need to log in to Azure in order to run it. This requires us to execute “Login-AzureRmAccount.” While this seems trivial, by default it forces the popup screen shown below.

For me, the goal of automation is the ability to have things run without human intervention. Therefore, this popup is a huge pain in the butt. I thought I had to live with this until my friend Shawn Weisfeld at SQL Saturday Houston mentioned to me that he thought this was possible to automate. We exchanged a few emails and this blog post is the result of them.

What is Azure Resource Manager?

Let’s take a step back: what is Azure Resource Manager, and why should you care about it? Azure Resource Manager is the new recommended way of interacting with Azure. You can deploy, manage, and monitor all of the resources for a solution inside of a resource group. This makes life so much easier than having to implement and configure each piece of your infrastructure for a deployment separately. Also, for testing purposes, when you are done you can just delete your resource group and everything in it is removed. Resource Manager is the future, and it makes our lives in Azure better. This is the hub where most of our automation will reside.
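
For example, spinning up and tearing down a sandbox is just a couple of lines; the resource group name and location here are placeholders:

# Create a throwaway resource group, deploy into it, then remove everything in one shot
New-AzureRmResourceGroup -Name "DemoRG" -Location "South Central US"
# ... deploy whatever resources you need inside DemoRG ...
Remove-AzureRmResourceGroup -Name "DemoRG" -Force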

Now back to automating the login. Azure has implemented Active Directory services, and that got me thinking we might be able to leverage it for automating the login process. Could we create a user, or something similar to a group managed service account, that would allow us to authenticate without human intervention?

Let The Games Begin

Step one starts with logging in. Yes, we’re going to do this manually by running the code below and providing our login and password credentials. Hopefully, this will be your last time 😉

We are going to want to grab our TenantId. You should see something similar to the screenshot below with the actual values. The TenantId will be used in our final step.

Login-AzureRmAccount
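
If you would rather grab the TenantId into a variable than copy it from the output, something like this works (assuming the account has a single subscription):

# Capture the TenantId for the non-interactive login in the final step
$tenantId = (Get-AzureRmSubscription).TenantId
$tenantId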

Step Two

Now that we are logged into Azure Resource Manager (RM), we will start by creating our Azure RM Active Directory application. This creates a relationship that identifies our application with Active Directory.

Here we will also provide the password and display name we will use for the application entity in Active Directory. The IdentifierUris parameter is how you find this application entry as well.
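
A sketch of this step is below; the display name, identifier URI, and password are placeholders, and depending on your AzureRM module version the -Password parameter may require a SecureString instead of plain text:

# Create the Azure AD application entry that our automation will log in as
$app = New-AzureRmADApplication `
    -DisplayName "AutomationLogin" `
    -HomePage "https://automationlogin.example.com" `
    -IdentifierUris "https://automationlogin.example.com" `
    -Password "YourStrongPasswordHere"

# Keep the ApplicationId handy; it becomes part of the login in the final step
$app.ApplicationId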

Step Three

Next, we will create a service principal for our application. This will end up being an instance of the application that needs to access other resources. We will utilize the application identifier established by our previous step.
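
Roughly, that looks like the sketch below. Note that the role assignment is an extra step I am calling out here: the service principal needs a role (Contributor in this example) before it can actually manage resources:

# Create the service principal for the application created in step two
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# Example only: grant the service principal rights so it can do work once it logs in
New-AzureRmRoleAssignment -RoleDefinitionName "Contributor" -ServicePrincipalName $app.ApplicationId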

Step Five

Hopefully, by now you might be wondering some of the following things. How the heck am I using Active Directory? I have never set one up in Azure so how would I connect? What is the domain name?

The account that you use to connect to the Azure Portal is pulled into an Azure Active Directory when it is created. Humor me: go log in to the Azure Portal. Now hover your mouse over your login in the top right corner. This should be over your login name (john.sterrett@linchpin…) in the screenshot below.

If you look at the last line, it will say Domain: something.onmicrosoft.com. That is your domain name, and it will be used as part of your login in our final step.

Step Six

Time to pull everything together so we can now automate the process of logging in with Login-AzureRmAccount command.

First, we will need to create our credential, which consists of a login and a password. The login is the ApplicationId unique identifier and the domain name combined together, like “11c67bbc-16h4-8ka5-6y58-f346baou123@something.onmicrosoft.com”. The ApplicationId can be found in step three. We also show it if you run all the scripts together in the script below. Our password comes from step two above. We will talk about securing it a little later.

Finally, in step one, when we originally connected by manually typing our username and password, PowerShell output our TenantId. We will need this as well, as it is required when using credentials with Login-AzureRmAccount.
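
Putting it all together, a sketch of the non-interactive login looks like this. The password and domain are the placeholders described above; note that on newer AzureRM module versions you may need the -ServicePrincipal switch with just the ApplicationId as the user name:

# Build the credential: ApplicationId combined with your *.onmicrosoft.com domain
$securePassword = ConvertTo-SecureString "YourStrongPasswordHere" -AsPlainText -Force
$login          = "$($app.ApplicationId)@something.onmicrosoft.com"
$credential     = New-Object System.Management.Automation.PSCredential($login, $securePassword)

# Non-interactive login using the TenantId captured in step one
Login-AzureRmAccount -Credential $credential -TenantId $tenantId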

Final Thoughts

Automating the process of logging in was difficult for me to understand at first. I know of some others who have given up. I hope this blog post makes it easier for everyone who wants to automate the process of logging into Azure Resource Manager with PowerShell.

In this process, your password is in plain text and not secured. I wanted to use Azure Key Vault, as I thought it would be perfect for this. Unfortunately, you have to be logged into Azure to use Key Vault, so it is not ideal for storing the password you need to log in to Azure Resource Manager. Shawn came up with a good solution to encrypt and decrypt your password locally.

I have a client that wants to run on the latest and greatest version of SQL Server. We have been testing on the RCs (Release Candidates), and we are in the process of upgrading non-production environments to the RTM GA (General Availability) release, which came out today, June 1st, 2016. When SQL Server 2014 RTM’ed (is that a word?), upgrades from the latest RC were supported. I expect the same with SQL Server 2016. Currently, this isn’t specified in the SQL 2016 Release Notes. I will update the post when it’s confirmed. I am not a SQL Server MVP, so I have one sending a message to their message board to see if we can get confirmation on whether this upgrade will be supported. Obviously, I wouldn’t recommend doing this with your live production instances of SQL Server until it is confirmed. Even if this isn’t supported, I have an automated install process, so it wouldn’t take us much time to start fresh.

Read The Release Notes

Everyone should validate whether they need to apply KB 3138367. msvcr120.dll should be version 12.0.40649.5 or higher.
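
A quick way to check what version is installed (the path assumes the default Windows directory):

# Check the file version of the runtime DLL called out in the release notes
(Get-Item "$env:windir\System32\msvcr120.dll").VersionInfo.FileVersion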

Here are screenshots validating that even a Windows Server 2012 R2 server with SQL 2016 RC3 needs the update.

Here I am applying KB 3138367.

This is restart number one for anyone who is counting at home.

This validates that msvcp120.dll is at the version 12.0.40649.5 recommended by the SQL Server 2016 Release Notes (seriously, please read them).

Questions You Should Ask Before Upgrading

Does the business need or want to upgrade? Will this upgrade even be possible? Just because it was allowed in SQL 2014 doesn’t mean it will be possible with SQL Server 2016. Will it be supported? Can I change editions, for example from Evaluation to Developer (now free in SQL 2016), Enterprise, or Standard?

Upgrading SQL Server 2016 RC3 to GA

The following are the screenshots. I will add more detailed text for each step tonight.

I am using the developer edition for testing here. It’s cool that I can flip from evaluation to developer mode.

Here we can see that it is possible to upgrade RC3, as it is detected as an existing upgradable version of SQL Server for SQL 2016 GA.

The instance ID changes even though we are able to keep the same named instance. This sets up a separate root folder for the SQL 2016 GA install.

The server configuration caught me off guard, as I expected to see the database engine and SQL Agent services listed as well.

Finally, you can see the SQL 2016 RC3 Evaluation Edition was able to be upgraded to SQL Server 2016 GA. I was also able to change this from Evaluation Edition to Developer Edition as well.

The Talks

This was the first time I presented my Azure Databases for DBAs talk at a conference. I got some great feedback, and it was a lot of fun. I ended up using videos for the demos to make sure they would fit within the timeline of the presentation. Running the High Availability and Disaster Recovery Virtual Chapter, I am always thinking of backup strategies; that even includes my own demos 😉

At the end of the day I gave my “Why Did My Plan Change: Intro to Query Store” presentation. I was able to tell some real-world stories about lessons learned while monitoring plan changes and explain how upgrades can greatly improve performance when you find the few queries that run slower after an upgrade and fix them.

THE CAR THAT WOULDN’T START

Leaving the event, I thought I was on my way to the after party when I realized my 1990 Volvo wouldn’t start. Some people would be upset and frustrated, but looking back a few days later, I was blessed. Let me explain. Instead of being stranded by myself, Jamey Johnston stayed to help. He made sure I would be okay. The bonus in this for me was that I was able to chat with Jamey and get to know him better. He is an amazing guy. I wish we had a lot more people like Jamey in our SQL Server Community!