Just like last year, I wanted to write a blog post about how the past 12 months went. But this year, all of a sudden, everything changed for me. Let’s start at the beginning…

Beginning of the year
The beginning of 2017 was pretty relaxed. After the really busy year I had (2016), I decided for myself that I needed to turn it down a notch. The immense amount of hours I spent in the office and working from home in the evenings and weekends, the stress and the workload: it all became too much in December of 2016. For 2017 I was planning to work a bit smarter instead of harder, and that should give me the opportunity to take a step back and see the rainbow through the clouds.

In the first few months this actually worked for me. It was quite surprising to be honest, but my stress started to decrease, and I could focus on the stuff that needed my attention. For example, just a few of the things I did in Q1 of 2017: Worked on our ISO 27001 certification process with a team of people, had some security calls with HUGE corporates (which were really nice), made preparations for knowledge sessions, etc.

There are a few things that do deserve an honorary mention. One of the things that started last year was a quarterly hacklab with Deloitte, in which they tested our digital and physical security with different approaches. Just a few of the things they tried: pen-tests (penetration tests of our software and infrastructure), phishing, social engineering, media-baiting, etc. It was really fun to make this happen together with them.

Another big project that started was our migration from a hybrid platform to Azure. This started in February, and it was a really cool project: discovering new things in Azure, and a challenge to pull it off with the least possible amount of downtime.

We also started with knowledge sessions within the company. Together with a colleague (Pascal) we hosted the first session in March.

Q2: The dark times began
The second quarter of the year started out really well. Together with Pascal (not only my colleague, but also my best friend and partner-in-crime in most cases), his wife Aga and their beautiful baby girl Emily we went to Poland for a holiday, and the baptism of the little princess. Even though some things didn’t work out the way we planned, I did enjoy my stay there. Great people, great food, beautiful country. Oh yeah, and some drinks here and there…

But without knowing it, my life would change all of a sudden right after I came back from that holiday. Before my flight left Warsaw I called home. My parents asked me to stop by on the way home, because they had something they wanted to tell me. This conversation over the phone felt somewhat uncomfortable, and I was soon to discover why. So as soon as I landed at Amsterdam Schiphol, I drove to my parents’ house.

The message wasn’t sugarcoated in any way. They told me my mom received the results of some tests from the hospital: terminal lung cancer. Even when I write this now, I immediately feel that sinking feeling again like I felt on the 19th of April. Just like it was yesterday.

In the weeks that followed she needed to go to the hospital over and over again. And every talk we had with doctors changed the situation. It went from inoperable to “we can slow it down with chemo and medicine”, to “we think it’s curable”, to terminal. The final diagnosis was somewhere between 8 weeks and 8 months to live. I was in the room when the doctor said that. And even though my world collapsed, I didn’t show that. I remained strong, for her.

I’ll skip some months, because it’s hard to talk about all the things I’ve seen and experienced. All I can say is that it left its marks on my soul forever.

Eventually she went to a hospice in the village we live in (which was really nice, since I could spend all my time there without losing any time on traveling somewhere). The reason for that was that she needed more care than we could give her at home, mainly because the care consisted of medical treatment and giving her shots for pain medication. But looking back now, it was only for a short period of time, not even a week…

Just 2 months and 20 days (or 80 days in total) after the first diagnosis, she passed away. That’s when the darkness began for me. It’s hard for me to describe how this feels, but I know the lyrics of a song come fairly close to that feeling:

There’s an emptiness tonight,
A hole that wasn’t there before.

And I keep reaching for the light,
But I can’t find it anymore.

There’s an emptiness tonight,
A heavy hand that pulls me down.

They say it’s gonna be alright,
But can’t begin to tell me how.

‘Cause I’m just sitting in the dark,
In disbelief that this is real.

It’s hard to express the feelings to someone who hasn’t experienced it themselves. That’s what I found out by talking to friends who experienced similar situations. Even after 4 months and 26 days, I still can’t find the words to describe it.

Another description I can relate to is “survival mode”. Nothing mattered anymore; only the care for my mom mattered. And even now, it’s hard to see the true value of things, because sometimes I think I’m still on auto-pilot. It’s so much easier to see the negative side of things when all you feel is pain. I didn’t see the things that happened around me. All I could feel was emptiness, anger, feeling lost, feeling left alone, frustration, powerlessness, and I can go on. My heart was broken (and still is).

There were fun times though
But even though this was happening at home, I did have nice moments as well. In May I traveled with 2 friends to Seattle (my first time to the US!) and attended BUILD 2017. This was a really nice event, we saw a lot of great (new) things, and talked to a lot of interesting and smart people.

After BUILD, we stayed for a couple of days because we had some appointments at the Microsoft HQ in Redmond. Most of the appointments were arranged by Julie Koesmarno (Website | @mssqlgirl), and we talked to a number of people in the SQL Server building. We discussed some issues, ongoing development within Azure (for both Roadmap and Microsoft), we talked to the tools team (and I can now FINALLY talk about Project Carbon!), etc. Great times!

From cloud-first to cloud-only company
Over the last few weeks, we spent time on migrating the last on-premise (or private-cloud) assets into Azure. This concluded a long-awaited change that we’ve been planning since I joined Roadmap (which is 2 years already tomorrow). This means we’ve changed from a cloud-first company in 2016 to a cloud-only, Azure-only company in 2017.

This was a long-term goal we set ourselves around 2 years ago, and I can’t express how happy I am that we made this happen this year. By doing this we went from a very busy on-call rotation (where you were called about every single night), to an on-call rotation where you get incidental calls (my last on-call was 1 phone call on a Saturday morning).

I can honestly state that this brought us and the company forward in an incredible way. So also kudos to you Julie, for helping us out when we needed it!

On to 2018
So I don’t know what 2018 will bring me, but hopefully it’s going to be a great year. A few goals are on the list again, and hopefully I can tick some of those boxes next year (presenting at SQL Saturday(s) and other events (so: public speaking), attending PASS Summit 2018, etc.).

Hopefully you had a wonderful year, and together we can make 2018 even better.

In my previous blog post I explained how we are going to leverage the power of transactional replication, to move your on-premise databases to Azure SQL databases (PaaS). This works like a charm, but you do end up with a bunch of system-generated objects in your database. So how do you get rid of these?

What “mess” is left behind after disabling replication
After you’ve completed your migration to Azure SQL DB, and cut the strings on your transactional replication, you end up with a bunch of system-generated objects. These are used by SQL Server to “replay” the changes on the subscriber (inserts, updates and deletes) that were made on the publisher. This is just an example of one of our smaller databases:

As you can see, there are 3 stored procedures per table (ins, upd, del), and some other objects used by replication.

Cleanup
Fortunately the SQL Server team gave us a way to easily clean up these objects: sp_subscription_cleanup (that, even though MS Docs says it won’t, WILL work on Azure SQL DB).

This stored procedure accepts the following input:

@publisher: The name of the SQL Server instance of the publisher
@publisher_db: The name of the publishing database
@publication: The name of the publication (this was configured in the setup/configuration of replication)

By running this stored procedure on your subscriber, it cleans up all remaining transactional replication components. But where can you find the input for this procedure?
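As a sketch, the call on the subscriber looks like this (the server, database and publication names below are placeholders; use the values from your own replication setup):

```sql
-- Run this on the subscriber (the Azure SQL database).
-- All three values are placeholders for your own configuration.
EXEC sp_subscription_cleanup
    @publisher    = N'MyOnPremServer',
    @publisher_db = N'MySourceDatabase',
    @publication  = N'MyPublication';
```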

First I was looking at MSreplication_objects, but this doesn’t work on Azure SQL db (MS DOCS says so, and I’ve tested it). So my guess was that the other system objects mentioned in related articles wouldn’t work either.

So what do you do when you can’t find the answer yourself after extensive Googling/Bing-ing? Yes, you call in reinforcements (in this case William Durkin (Website | @sql_williamd), who knows a lot about replication):

But as you can tell from the reply to William’s response, I quickly found the answer by just trying something: MSreplication_subscriptions. You can query this table on your subscriber to retrieve the publisher- and publication-properties. Again: even though MS DOCS says it won’t work, it does work on Azure SQL db.

So by using the output of MSreplication_subscriptions on your subscriber, you can quickly find the information needed to run sp_subscription_cleanup.
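A minimal query along these lines gives you exactly the three values the cleanup procedure asks for:

```sql
-- Run on the subscriber: retrieve the input values for sp_subscription_cleanup
SELECT publisher, publisher_db, publication
FROM dbo.MSreplication_subscriptions;
```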

Automating the script with C#
Running the script on 1 or 2 databases can be done manually. But because I migrated an entire data platform with replication, we’re talking about A LOT of databases. So instead of doing this all manually (you need to connect to every single database to check this), I decided to create a small tool to help me with this.

In all honesty, my C#-skills aren’t that great, but my response to that is: “If it looks stupid but works it ain’t stupid”.

The tool connects to the instance you provided, returns a list of database names, and uses this list to connect to every database individually to check for replication components:

The tool can be used to connect to any Azure SQL db instance, as long as you have 1 login/user that can connect to all databases on that server. If your server supports that (it depends on the configuration you use of course), you’ll see a list that provides you with the following information:
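In essence, the tool does something like this (a minimal C# sketch of the idea; the server name, credentials and the check itself are placeholders, not the actual tool):

```csharp
// Sketch: list the databases on a server, then check each one
// for leftover replication components. Names and credentials are placeholders.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class ReplicationChecker
{
    static void Main()
    {
        var server = "myserver.database.windows.net"; // placeholder
        var masterConn = $"Server={server};Database=master;User ID=...;Password=...;";

        // Step 1: retrieve the list of database names
        var databases = new List<string>();
        using (var conn = new SqlConnection(masterConn))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT name FROM sys.databases WHERE name <> 'master'", conn);
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    databases.Add(reader.GetString(0));
        }

        // Step 2: connect to every database and look for replication objects
        foreach (var db in databases)
        {
            var dbConn = $"Server={server};Database={db};User ID=...;Password=...;";
            using (var conn = new SqlConnection(dbConn))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM sys.tables " +
                    "WHERE name = 'MSreplication_subscriptions'", conn);
                var found = (int)cmd.ExecuteScalar() > 0;
                Console.WriteLine($"{db}: replication components {(found ? "found" : "not found")}");
            }
        }
    }
}
```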

One of the downsides of using Azure Automation runbooks is the visibility of the runbook-outcomes. That’s one of the few downsides I could actually find when I wrote my “Running database maintenance on Azure SQL DB with Azure Automation” post. But because we execute the runbooks against our production environment, visibility is of high importance to us.

To get some more insight into this, I’ve implemented a try-catch block in every runbook we have, which sends an email to our ITOps team when a runbook fails. So how did we do this?

Create an email runbook
To keep it a bit modular (and I know it could be set up even better), I’ve made a runbook that is only used to send an email to our IT Operations team. This email includes the runbook name and the error that occurred.

Because we are going to include sensitive account information in an Azure Automation runbook, it’s a wise idea to store the username and password of your Office365 account in a “Credential” in Azure Automation. This safely stores your account information, and allows you to use it in your runbook later. This option can be found in the Automation Account’s left menu.

This runbook sends an email from “no-reply@domain.com” to “ITOPS@domain.com”, and includes the input parameters (RunbookName and MessageBody) in the subject and body of the email. You can customize this based on your preferences.
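A minimal sketch of such an email runbook could look like this (the credential name, addresses and SMTP settings are assumptions; adjust them to your own environment):

```powershell
workflow Send-FailureEmail
{
    param (
        [Parameter(Mandatory = $true)]
        [string] $RunbookName,

        [Parameter(Mandatory = $true)]
        [string] $MessageBody
    )

    # Retrieve the Office365 account stored as a Credential in the Automation Account
    # ("Office365Account" is a placeholder name)
    $Credential = Get-AutomationPSCredential -Name "Office365Account"

    # Send the failure notification to the IT Operations team
    Send-MailMessage `
        -From "no-reply@domain.com" `
        -To "ITOPS@domain.com" `
        -Subject "Runbook failure: $RunbookName" `
        -Body $MessageBody `
        -SmtpServer "smtp.office365.com" `
        -Port 587 `
        -UseSsl `
        -Credential $Credential
}
```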

Call the runbook
The way we use this runbook is by calling this from another runbook that runs our database maintenance. This runbook is copied from this post, except for the try-catch block.

By adding the $ErrorActionPreference and try-catch block, it’s possible to receive an email if a runbook fails:

I’ve included a bunch of “Write-Output” lines in there, so that if I test it through the Azure portal I can see the status of the runbook. You can remove those if you want to have a clean runbook of course.
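The shape of the maintenance runbook then becomes something like the sketch below (the maintenance call itself is elided, and the child runbook name Send-FailureEmail is a hypothetical email runbook; in PowerShell Workflow you can invoke a published runbook from the same Automation Account inline by name):

```powershell
workflow Execute-DatabaseMaintenance
{
    # Stop on any error, so the catch block is actually reached
    $ErrorActionPreference = "Stop"

    try
    {
        Write-Output "Starting database maintenance..."
        # ... the actual maintenance call goes here ...
        Write-Output "Database maintenance finished"
    }
    catch
    {
        Write-Output "Runbook failed, sending email..."
        # Call the (hypothetical) email runbook inline with the error details
        Send-FailureEmail `
            -RunbookName "Execute-DatabaseMaintenance" `
            -MessageBody $_.Exception.Message
    }
}
```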

Conclusion
By using PowerShell to automate this type of work, it’s easy to extend your jobs like we did. Adding an email component is just as easy as altering the payload in your runbook. This extensibility and modular approach is the way to go if you want to deliver a solution that is easy to maintain and scalable.

And let me tell you something, that PowerShell is going to be big someday, mark my words! 😉

In one of my previous posts I wrote about Azure Automation jobs to run your Azure SQL DB maintenance processes. This is a very easy way to schedule your maintenance, but there are some shortcomings when you look at the scheduler options and job outcome visibility.

In this blog post I’ll show you how you can schedule your runbooks through webhooks. Then you’re not bound to the minimum schedule of once per hour, which is a limit set for Azure Automation schedules.

Start with a runbook and webhook
In this post I’m using the runbook that I previously created for running database maintenance. Before I created a webhook, I removed the schedule on the runbook itself. This is to prevent confusion on which schedule started the runbook.

Now that the basics are set, click on the “Webhooks” button, and click “Add Webhook” in the blade that opens:

A new blade opens up in the portal, and this blade allows you to create a new webhook. Give the webhook a name and an expiration date (which is mandatory, even though you don’t want it to expire), and copy the URL. You need to store this in a secure place, because it will only be visible in this window while creating the webhook. It can’t be found anywhere else if you lose this URL:

The webhook URL will look like this (this specific URL is changed, and won’t work): “https:// s2events.azure-automation.net/webhooks?token=eT0%0bKvfBcMeusbZw00RAwP0ZKGqxEUnThBIHAEB0Eq0%0d”

The webhook is ready to use, so now we can move on to creating a schedule.
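You can verify that the webhook works by sending it an HTTP POST yourself, for example from PowerShell (the URL below is a placeholder):

```powershell
# Trigger the runbook by POSTing to the webhook URL (placeholder URL)
$webhookUrl = "https://s2events.azure-automation.net/webhooks?token=..."

# Azure Automation responds with the ID(s) of the job(s) it started
$response = Invoke-RestMethod -Method Post -Uri $webhookUrl
$response.JobIds
```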

Create a webhook scheduler
A webhook alone will never run the job. In order to do that, we need to create a “Scheduler Job Collection”. In the Azure portal you need to go to “Scheduler Job Collection”, click on “Add”, and give your Scheduler Job Collection a name. If you already have a Job Collection you can add this one to it, or create a new one. Now go to “Action settings” to configure the webhook URL for this scheduler:

Make sure you set the action to “HTTPS”, and the method to “Post”.

Under “Schedule” you can configure the actual schedule. As mentioned earlier, the schedule of an Azure Automation runbook can only be configured in hours. The schedule of a Job Scheduler can be configured in minutes, so this allows you to run a job multiple times an hour in case you need it:

Conclusion
In some cases, it might be needed to run an Azure Automation runbook multiple times an hour, but that’s (currently) not supported for runbooks. By using the webhook and job scheduler as a work-around you’re able to make the runbooks just a bit more useful. Before using the job scheduler, we were required to duplicate runbooks and configure the schedule to run on specific times, but this only made things more unclear. This works out for us, and hopefully for you too!

For those of you who use Azure today, the security discussion must have come up on some occasion: explaining to managers (and possibly colleagues) that Azure is a lot more secure than an (on-premise) data center, and that Azure is easier to maintain and scale. Trust me, we’ve all been there!

But besides the physical security, there’s also the digital security. In the world of today it’s easier to find a data breach in the news than it is to find an item about a bank robbery. So how can you secure your data in Azure in an easy but solid way, without the hassle of changing your applications?

Encryption could be one of your tools to achieve a secure infrastructure and/or applications. But encryption is a challenge for pretty much everyone. Almost every day we hear about companies not doing it right, or not doing it at all. But luckily, Azure helps us with setting this up with just the click of a button.

So how do you enable it? There are 2 ways to do so, but I’ll only show you the route via the Azure portal. Information on how to do this via T-SQL can be found here.

First, log in to the Azure portal, and navigate to the database you want to encrypt. Click on “Transparent Data Encryption”, and just with a click of a button you can encrypt your data:

This will start the encryption process and, depending on the size of the database, after a while you’ll see that the data is encrypted:
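For reference, the T-SQL route mentioned above comes down to a single statement, and you can follow the progress through a DMV (the database name is a placeholder):

```sql
-- Enable Transparent Data Encryption on an Azure SQL database (placeholder name)
ALTER DATABASE [MyDatabase] SET ENCRYPTION ON;

-- Check the encryption state per database (3 = encrypted)
SELECT DB_NAME(database_id) AS database_name,
       encryption_state
FROM sys.dm_database_encryption_keys;
```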

This feature will allow you to encrypt your database without any application changes. This is because the encryption and decryption is being handled in an “intermediate layer” by Azure. The data will be decrypted before returning it to the client, and the other way around it will be encrypted before it’s stored. So your applications will continue to work without any changes in the application code or connection string(s) to the database(s).

Storage Accounts encryption
There is also an option to encrypt your Storage Accounts in the same way as TDE works for Azure SQL databases (without any application changes). When you enable this on your Storage Account, please remember that only the new data will be encrypted, and that the existing data won’t be encrypted until it changes. For more information on this, please read this article, and this MSDN thread.

When you’re creating a new Storage Account, you can choose to encrypt it right away:

But when you want to encrypt an existing Storage Account with data in it, you need to do it on 2 different levels (it’s a separate setting for BLOB and files):

This will encrypt your data with the same algorithm as TDE for SQL Server: “All data is encrypted using 256-bit AES encryption, one of the strongest block ciphers available.” (source).

Conclusion
For us as a company, enabling these features means that all of our data is encrypted. We’re only sending and receiving data from within Azure, so the communication is also secure. And even though the majority of our data is public data (publicly available, such as flight information, etc.), it’s a safe feeling to know that all our data is encrypted when stored.

Running all of your databases in Azure SQL DB (the PaaS solution, also known as Azure SQL database) is a wonderful thing, except for one thing: you still need to run database maintenance. And because the SQL Server Agent is not available (which is a shame in my opinion), you have to find a way around that.

Azure Automation
Microsoft gives you a tool to automate your processes with something called “Azure Automation”. With this tool you can schedule “Runbooks” that contain PowerShell. This allows you to execute the stored procedure that is executing the actual maintenance on your database.

Creating a runbook
In order to create a runbook, you need to login to the Azure portal, navigate to “Automation Accounts”, and create a new account:

When that is created (this usually only takes a few seconds), open the account, and click “Runbooks”. Click “Add a runbook” to create a new runbook, choose “Create a runbook”, and give your new runbook a name:

It is important that you choose “PowerShell Workflow” as Runbook type.

Adding the script
In this example I use the scripts I blogged about earlier. This allows me to just execute a single stored procedure to kick off the maintenance process.

By adding this to the Runbook a connection to the specific database is opened, and the stored procedure is executed:
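As a sketch, the body of such a runbook could look like this (the connection string and stored procedure name are placeholders; in practice you would store the credentials as Automation assets instead of hard-coding them):

```powershell
workflow Execute-DatabaseMaintenance
{
    # Placeholder connection string; store credentials as Automation assets in practice
    $connectionString = "Server=myserver.database.windows.net;Database=MyDatabase;User ID=...;Password=...;"

    inlinescript
    {
        # Open a connection to the database
        $connection = New-Object System.Data.SqlClient.SqlConnection($Using:connectionString)
        $connection.Open()

        # Execute the stored procedure that runs the actual maintenance
        $command = $connection.CreateCommand()
        $command.CommandText = "EXEC dbo.usp_DatabaseMaintenance" # placeholder name
        $command.CommandTimeout = 3600 # maintenance can run long
        $command.ExecuteNonQuery() | Out-Null

        $connection.Close()
    }
}
```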

To start using this Runbook, you don’t only need to save it, but also publish it:

Schedule
Now that you have a Runbook, you can schedule it to run at the time you want it to. You can do that by clicking on “Schedule”, and either create a new schedule or select an existing one (if you want to use a schedule you created earlier):

Conclusion
Azure Automation is easy to use, and allows you to do almost anything on Azure by using PowerShell. The only downside I’ve found until now is that the job output and outcome are kind of tucked away in Azure. There’s no way to send out an email when a job fails, for example (which was the setup we used on-premise).

There must be a way around that, but I haven’t found that until now. But if I do, that sounds like a good follow-up blog post! 🙂

In one of my previous posts, I described how to setup replication to an Azure SQL database. This works like a charm, and I still highly recommend using this when you want to migrate data from an on-premise server (or Azure VM) to an Azure SQL db (PaaS).

But in our environment, we use SQL Server 2016 and contained databases for some of our datasets. Unfortunately (but totally understandable), you can’t set up replication from a contained database. So how do you deal with this? For our use-case, I’ve written a script to automatically change the database from contained to non-contained. And because I’m probably not the only one who needs to do this, I’d like to share it with you.

Steps to take
It might sound a bit difficult, but when you think about it, it’s actually quite easy. To go from a contained database with a user to a non-contained database with a login, you need to take the following steps:

The script
Before you start using this, I want to warn you. Using this script on your server(s) or in production is at your own risk. This worked for me, but maybe it doesn’t on your server(s)!

In order for this script to work, you must be able to stop applications and queries from executing against your database. In our case, we can just stop the service(s) that use a specific database (that’s the advantage of micro services). The reason for this is that you must be able to obtain an exclusive lock on the database, in order to switch from contained to non-contained.
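In outline, the conversion comes down to something like the T-SQL below (all names and the password are placeholders, and the exact order of steps may differ in your environment, so test this thoroughly first):

```sql
USE [master];
GO

-- 1. Create a server-level login for the (former) contained user
--    (placeholder name and password)
CREATE LOGIN [AppUser] WITH PASSWORD = N'<StrongPasswordHere>';
GO

-- 2. Remap the contained user to the new login
USE [MyDatabase];
GO
ALTER USER [AppUser] WITH LOGIN = [AppUser];
GO

-- 3. Switch the database from contained to non-contained
--    (this needs exclusive access, hence WITH ROLLBACK IMMEDIATE)
USE [master];
GO
ALTER DATABASE [MyDatabase] SET CONTAINMENT = NONE WITH ROLLBACK IMMEDIATE;
GO
```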

Conclusion
Even though I thought that using a contained database could be a big blocking factor for us in the process of migrating data to Azure, it really wasn’t that big of a deal. Especially now that we automated the process, it doesn’t add more than 5 minutes to the process of replicating data to Azure.