SQL Server 2016 is making lots of news and promises to be a revolutionary SQL Server version. On the AlwaysOn Availability Groups front, a good number of improvements have already been announced, Basic Availability Groups among them. By the way, Hemantgiri, my friend and former Data Platform MVP, has already talked about this on his post.

One of the improvements that got my attention was the ability to have AlwaysOn Availability Groups sit on a multi-domain cluster, or even on a domain-detached cluster. I tested both, and this indeed works very well, but the maintenance is slightly different (read: more difficult).

We need to consider the following points:

The Failover Cluster Manager does not support this; everything must be done using PowerShell.

To perform the setup, you need to be connected as a local Administrator.

The “Administrator” user password should be the same on all involved nodes.

For some tasks, like SQL Server upgrade/update, you will need to run the setup using the local administrator account.

You don’t need to use the Administrator account to log in. Log in with your regular account, hold the SHIFT key while you right-click the setup file, and select the Run as different user menu item.

You must be using Windows Server 2016 and SQL Server 2016.

The new capability is a joint effort between the SQL Server and Windows development teams and is, in my opinion, a very good option. I see the multi-domain capability as a better option than a domain-detached cluster, and I already have customers who can benefit very well from this option.

The approach that follows is the same whether you are doing a multi-domain setup or a domain-detached cluster; however, there are some particular details:

For Domain-detached cluster

In this particular case, the failover cluster is created without any associated computer objects, so each involved node needs to have a primary DNS suffix defined.

For Multi-domain cluster

Make sure that you can reach all the nodes, using IP, server name, and FQDN. Take care with firewall rules as this may influence the cluster creation. Make sure that the trust relationship between the domains is well defined.

Creating the cluster

In my case, I created a multi-domain cluster, which was also a multi-subnet one. The following command is the base to create the cluster:
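A minimal sketch of such a command, with hypothetical node names and IP addresses, would be:

```powershell
# Hypothetical node names and IPs; adjust to your environment.
# -AdministrativeAccessPoint DNS creates a DNS-only access point,
# with no computer object in Active Directory.
New-Cluster -Name CLUSTER01 `
    -Node SQLNODE1.domainA.com, SQLNODE2.domainB.com `
    -StaticAddress 10.0.1.10, 10.0.2.10 `
    -NoStorage `
    -AdministrativeAccessPoint DNS
```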

The catch here is the -AdministrativeAccessPoint DNS parameter. Check the PowerShell New-Cluster command documentation for more options.

To perform this command, open the PowerShell console as Administrator (either logged in as the local Administrator or using the method that I explained earlier in this article) and execute the command. That’s it!

After the cluster is created, use PowerShell to perform all the administrative tasks. The following commands are useful:

Get-Cluster

Get-ClusterNode

Get-ClusterGroup

Get-ClusterGroup <Group Name> | Get-ClusterResource

Creating an Availability Group

The process of creating the availability group is the same as mentioned previously. Actually, we are lucky, because SQL Server Management Studio has all we need to manage the AG, with no need to touch PowerShell. For basic management, SSMS is enough.

After the cluster is created, you can go ahead and enable the AlwaysOn Availability Groups feature. To perform this, just go to the SQL Server Configuration Manager and access the SQL Server engine properties. In the AlwaysOn High Availability tab, select the Enable AlwaysOn Availability Groups check box. Click OK and restart the SQL Server engine.

After enabling the feature, just follow the usual procedure and use the wizard in the SSMS, T-SQL, or PowerShell, to create the Availability Group.
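If you prefer to stay in PowerShell, the feature can also be enabled from there, as sketched below (the instance name is hypothetical):

```powershell
# Enables the AlwaysOn Availability Groups feature and restarts the service.
# Requires the SqlServer (or SQLPS) module; instance name is hypothetical.
Enable-SqlAlwaysOn -ServerInstance "SQLNODE1\INST1" -Force
```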

More Details

As this is a complex configuration, careful DNS and quorum configuration is recommended to ensure the highest service availability; as a result, DNS replication might be needed. For the quorum, the new Cloud Witness (supported from Windows Server 2016) is recommended.
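Configuring the Cloud Witness is itself a one-line PowerShell task; a sketch, with placeholder storage account name and key:

```powershell
# Points the cluster quorum at an Azure Storage account (Cloud Witness).
# The account name and access key below are placeholders.
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-access-key>"
```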

Summary

When installed on Windows Server 2016, SQL Server 2016 gains more flexibility, including DTC support for Availability Groups and the Cloud Witness. Multi-domain and domain-detached clusters increase the architecture options when using SQL Server. This is just one of the improvements coming with SQL Server 2016. Stay tuned for more news!

Azure SQL Database is improving its capabilities day by day. The “cloud-first” strategy used by Microsoft is also an incentive to start using Azure’s Database-as-a-Service (DBaaS) offer.

In this article I’ll explain all the steps to move your database from on-premises to Azure, using three different approaches. You will need to choose the right one based on your migration strategy and on the database that you are migrating. Don’t forget that not all the features supported on-premises are supported on Azure, so some additional work may be needed prior to the migration.

I’ll show how to migrate a database to Azure SQL Database by using two general methods:

Using the SQL Server Management Studio – Recommended when there are no code compatibility issues blocking the cloud migration.

Using the SQL Server Data Tools – This approach is highly recommended when there are migration barriers, as the process of detecting and fixing the issues is simpler and more direct.

If you are in doubt about which one to use, the recommendation is to start by using the SQL Server Management Studio approach and, in case of failures, proceed with the SQL Server Data Tools.

Migrate Using SQL Server Management Studio

SQL Server Management Studio (SSMS) offers two direct ways to transfer a database to an Azure SQL Database. To proceed, connect to the SQL Server instance and run either the “Deploy Database to Microsoft Azure SQL Database” wizard or the “Export Data-tier Application” option.

If you cannot find the preferred option, you will need to update your SQL Server Management Studio (SSMS), which is now a free standalone product. You can do this by downloading the latest version.

The primary difference between the two options is that the “Deploy” option requires an existing database server in Azure and deploys the on-premises database directly to it, while the “Export” option creates a file to be imported later from the Azure portal. The exported file can be written straight to an Azure Blob Storage account, which avoids an extra step to copy the file (recommended).

NOTE: For both options, an Azure Blob Storage account with a container and an Azure SQL server are needed.
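The same deploy/export work can also be scripted with the SqlPackage utility that ships with SSMS/SSDT; a rough sketch, with hypothetical server and database names, would be:

```bat
REM Export the on-premises database to a BACPAC file (names are hypothetical)
SqlPackage.exe /Action:Export ^
    /SourceServerName:ONPREMSRV /SourceDatabaseName:AdventureWorks ^
    /TargetFile:C:\Temp\AdventureWorks.bacpac

REM Import the BACPAC directly into an Azure SQL server
SqlPackage.exe /Action:Import ^
    /SourceFile:C:\Temp\AdventureWorks.bacpac ^
    /TargetServerName:myserver.database.windows.net ^
    /TargetDatabaseName:AdventureWorks ^
    /TargetUser:serveradmin /TargetPassword:<password>
```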

Migration Steps Using the Deployment Wizard

Right-click the database and select the Deploy Database to Microsoft Azure SQL Database

Fill in the required fields.
The server information is for the target (Azure SQL Database server). The settings that define the price tier are also configured at this stage. The BACPAC file will be created locally and then applied to the Azure SQL server; because of this, we will need to store the BACPAC file in a temporary place on the server.

Click Next.

Review the settings and click Finish.

Wait for the process to complete.
At this stage the wizard will validate the database, create the BACPAC file, and apply it to the Azure SQL server to create the database.

The database is now ready. Use the server admin account to access it on the Azure SQL server.

Migration Steps using the Export Data-Tier Application Process

Right-click the database and select the Export Data-tier Application.

Save the file in an Azure Blob Storage Account. You will need the account name and access key.

Select the container and click Next.

Click Finish, and wait for the processing to complete.

Once the process completes, a “Success” message is shown, as in the screen below. Otherwise, there are items that need to be resolved before the database can be converted into an Azure SQL Database.

Connect to the Azure portal and choose the SQL Servers.

Select the SQL Server location where the database should be created, and then click the Import Database icon as shown below.

Complete the required settings, including the BACPAC file location, price tier, and server administrator’s password, and then click Create.

Once the process completes, the database will be seen in the list.

Migrate Using SQL Server Data Tools

When migrating the database with SSMS and a BACPAC, we don’t have the flexibility needed to properly detect and fix the issues found. For this purpose, SQL Server Data Tools – Business Intelligence is a better option to analyze the database objects. To proceed with this option, follow the steps below.

Creating the Main Project

Using the SQL Server Data Tools BI, click the SQL Server Object Explorer tab and connect to the on-premises instance:

Right-click the database to be migrated to Azure, and then click Create New Project.

Add a name to the project and select a path to save the project files.

Click next and wait for the processing to complete.

After the project is created, right-click the project root, go to properties and change the Target Platform to Azure SQL Database. Save and close.

Right-click the project and click Rebuild. If problems are detected, all the errors will be shown in the Error List.

Go to File->New->Project, give a project name (I will name it AWAzure) and in the Solution option, click Add to solution:

Creating the New Schema

In order to filter out the non-supported features and find the code to be corrected, the next step is to create a Schema Comparison. Follow the steps shown:

Now, select the options. Click the icon shown.

In the Schema Compare Options window, click to clear the following known non-supported items:

Aggregates

Application Roles

Assemblies

Asymmetric Keys

Broker Providers

Certificates

Contracts

Defaults

Extended Properties

Filegroups

FileTables

Full-Text Stoplists

Full-Text Catalogs

Full-Text Indexes

Message Types

Partition Functions

Partition Schemes

Queues

Remote Service Bindings

Rules

Sequences

Services

Symmetric Keys

User-Defined Types (CLR)

XML Indexes

XML Schemas Collections

Click Ok and save the Schema Comparison, as it can be useful later.

Select the source: The On-premises database.

Select the target: the empty SQL Server project just created.

We will have the following:

Now, click Compare. Wait for the process to complete and then click Update (click YES in the confirmation pop-up), to update the selected target.

Next, go to the AWAzure (the target) project, right-click on the root, go to properties, and change the Target Platform to Azure SQL Database.

Click Save and Close the screen.

Resolving Problems

Now it’s time to resolve the problems. Check the Error List tab and double-click each item found to open the code. Resolve the issue and save the file.

Use the filter to ensure you are dealing with the right project.

Deploying the Schema

After the schema revision, we can publish the database.

To publish the database, right click the AWAzure project, and click Publish.

Edit the target instance and connect to the Azure SQL Server:

Fill in the database name and click Publish.

Moving the Data

The schema is deployed. Now it is time to move the data. To do this, use the Import and Export Wizard, from the SQL Server Management Studio.

Connect to the on-premises instance, right click the database used as the data source and follow the steps shown:

In the wizard, confirm the Server name and the source database, and then click Next.

Now, do the same for the Azure SQL Database.

In the Destination field, select SQL Server Native Client 11.0, fill in the server name, and select the target database.

Click Next.

For this step, keep the first option selected, and then click Next.

Select all the tables and views from the source. Notice that SQL Server will automatically map the target tables on Azure.

About data hierarchy: if foreign key constraints are used in the database, the data migration should be done in phases to avoid failures. This needs to be analyzed prior to the final migration.
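One way to sidestep the load-order problem, sketched below with a hypothetical table name, is to disable constraint checking during the load and re-validate afterwards:

```sql
-- Hypothetical table; disable FK checks during the bulk load...
ALTER TABLE dbo.OrderDetails NOCHECK CONSTRAINT ALL;

-- ...load the data, then re-enable and re-validate the constraints.
ALTER TABLE dbo.OrderDetails WITH CHECK CHECK CONSTRAINT ALL;
```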

Make sure that all the tables are highlighted and click Edit Mappings.

Select Enable Identity Insert and then click Ok.

Then, in the main Wizard window click Next.

Make sure the Run immediately check box is selected and click Next.

In the following screen, review the options, and then click Finish.

Monitor the data transfer and close the wizard.

That’s it. I hope the steps were clear and this article was useful. If you have questions, do not hesitate to post a comment or contact me on Twitter (@murilocmiranda). “See” you in another article.

This Saturday, October 11, I will be speaking at SQL Saturday Bulgaria 2014 in Sofia. It’s my first time in the country and I’m really excited to be part of another SQL Saturday :)

I will be speaking about Buffer Pool Extension, a new feature on SQL Server 2014. If you want to learn a little more about the new SQL Server version, don’t hesitate to attend the event. Looking forward to seeing you there!

CONFIGURE THE VPN CLIENT
1. In the Management Portal, navigate to the virtual network page; in the “quick glance” section you have the links to download the VPN package.

Choose the one appropriate to your architecture (x86 or x64).

2. After a successful download, copy the file to your servers and execute the setup.

3. Click Yes when it asks whether you want to install the VPN, and let it run.

4. After successful installation, it will be visible in your network connections.

5. In Windows 2012 you can click the network icon in the notification area (close to the clock), and the right-side bar will show all the network connections. You can connect from there.
The other option is to right-click the connection in the “Network Connections” window (previous step) and click “Connect / Disconnect”.

6. A window will be shown, click Connect.

7. Now check the box next to “Do not show this message again for this Connection” and click “Continue”.

If everything is ok, the connection will succeed.

8. To confirm that you are connected, execute the command “ipconfig /all” at the command line; you should see an entry for the VPN with an IP assigned.

9. After a while, you will also be able to see the connection in your vNet dashboard. As you can see in the image, you have data in/out in the vNet.

After this last part, you are done with the point-to-site VPN configuration. You can test the connectivity by executing the “ping” command and also using the “telnet” client to test if some specific port is opened and reachable.

The point-to-site VPN is recommended if you want to connect users/devices to your Azure infrastructure, for a few different reasons. If you need to connect your entire on-premises infrastructure, or part of it, the way to go is to configure a site-to-site VPN. Stay tuned for a blog post on how it works.

This blog post is the second in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. Today’s post will be about creating certificates.

CREATING CERTIFICATES

In this step, we will create and upload a certificate. This certificate will be used to authenticate the VPN clients, and the work is performed in a few steps:

Generate the certificate

Upload the root certificate to the Azure Management Portal

Generate a client certificate

Export and install the client certificate

Let’s start …

We will need to use the MakeCert tool. MakeCert is part of “Microsoft Visual Studio Express”.

After successfully downloading the tool, start the setup and follow the installation steps. Note that you can generate this certificate in any computer, not only in the computer where you are configuring the VPN.
After the installation, you can find MakeCert at:

C:\Program Files (x86)\Windows Kits\8.1\bin\x64

C:\Program Files (x86)\Windows Kits\8.1\bin\x86

Launch the command prompt as Administrator, change to one of the folders referred to in the previous step, and execute the following command (note: keep the command line open):

makecert -sky exchange -r -n "CN=RootCertificateMurilo" -pe -a sha1 -len 2048 -ss My "RootCertificateMurilo.cer"

(where "RootCertificateMurilo" is the certificate name).

This command will create and install a root certificate in the Personal certificate store, and create the defined RootCertificateMurilo.cer file in the same directory where you execute the command. Note: store this certificate in a safe location.

Depending on the time zone of the server where you created the certificate, you might receive an error message: “The certificate is not valid yet, effective date is [date and time].” To work around this, delete the created certificate and create another one, adding the following parameter (change the date): -b "07/30/2014". The certificate will be valid from 00:00:00 of the day you set.

Now we need to create a Client Certificate. We will use the Root Certificate to do this.
In the same command line window, opened before, execute the following command:

makecert.exe -n "CN=ClientCertificateMurilo" -pe -sky exchange -m 96 -ss My -in "RootCertificateMurilo" -is my -a sha1

This certificate will be stored in your personal certificate store.

Now we need to export this certificate, as it should be installed on each computer that needs to connect to the virtual network. To achieve this, enter the command “mmc”, still in the open command line. The following window will be shown:

Go to File->Add/Remove Snap-in.

Select “Certificates” and click on “Add >”.

Select My user account and click Finish.

Click OK in the remaining window.

Now you will be able to see your certificates under the “Personal\Certificates” folder:

To export the certificate, right click the Client certificate and click on “All Tasks->Export…”, as shown:

A wizard will be presented. Choose Yes, export the private key, and click Next.

Leave this as default, and click Next.

Choose a strong password (try to remember this) and click Next.

Now you need to set the path where your .pfx file will be stored.

Click Next, then Finish.

To finalize the “certificates part”, we will need to install the certificate on all the servers where we want to set up the VPN. To accomplish this, copy the .pfx file to each server, double-click it, and follow the Certificate Import Wizard, entering the password defined during the export.

This blog post is the first in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. Today’s post will teach you how to configure a virtual network and a dynamic routing gateway, and the following blog posts will demonstrate how to create the certificates, and how to configure the VPN client.

Nowadays we are opting to move parts of, or even entire, systems to the cloud. In order to build a hybrid environment, we need to find a way to connect our enterprise/local network, also known as on-premises, with the cloud.

Currently, we have two options to connect Azure and On-Premises:

Using a Point-to-Site VPN

Using a Site-to-Site VPN

The first option, using a Point-to-Site VPN is the option I’ll be demonstrating. It is recommended when you need to connect only some servers of your network to Azure. On the other hand, the Site-to-Site VPN connects your entire on-premises network to Azure.

Today I’ll be discussing Integrity Check on SQL Server – we have some very good info in the community, but I’d like to summarize my thoughts here. When a DBA talks about Integrity Check, the conversation is usually redirected to one subject: CheckDB. And yes, we are going to talk about it in today’s post. :)

The DBCC CheckDB is a medicine of sorts that can prevent headaches, and remedy possible damages. But it should be used with caution: Without a well-planned backup strategy you can lose data. Even with all of the warnings about the importance of Integrity Check, it’s not uncommon to see instances without it.

“I’m a Manager just trying to understand this… Could you explain this at a high level?”

Let’s cut the blah, blah, blah and go ahead with this… I’ll introduce CheckDB for those who have never heard of it – probably less technical individuals: the CheckDB command is a Database Console Command, or simply DBCC, categorized as a validation DBCC. This is a necessary step in the whole instance maintenance, where we can detect, and in some cases repair, a possible problem with the database integrity. Running DBCC CheckDB regularly will help to anticipate problems, and even avoid an unexpected shutdown of the instance. We’ll go into more detail a bit later…

“What’s the objective of the CheckDB?”

CheckDB has two roles: the main objective is the integrity check itself, and the second is the correction of the findings. Please note that CheckDB verifies the database structure, not the data consistency.

Integrity Check: The command CheckDB follows a few steps on its execution, passing from the following phases:

Verification of the allocated structures, i.e. GAM, SGAM, PFS, IAM…

Verification of table consistency, including all of its indexes.

Verification of the system catalogs.

Correction: After the CheckDB “standard execution” as described above, we will have information about possible damaged pages. The CheckDB itself can fix these damages.

CheckDB has an option that allows the correction of the database structure, but the data allocated into the repaired page will be lost. This option is “REPAIR_ALLOW_DATA_LOSS”, and this option changes the damaged page for a new one. This way, we save the integrity of the database. Note that we are talking about the physical structure – not about data integrity, which is why we lose data. After the execution of this command, a verification based on backups should be made in order to identify and recover the lost data.
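As a sketch (the database name is hypothetical), a repair run looks like this; note that the repair options require the database to be in single-user mode:

```sql
-- Database name is hypothetical; repairs require SINGLE_USER mode.
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('MyDatabase', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE MyDatabase SET MULTI_USER;
```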

Another option is “REPAIR_REBUILD”, used to correct nonclustered indexes. With this option we don’t lose data. Even though we have options to correct pages, there are a few limitations.

Not many people are curious enough to look deeper and discover more about this, but CheckDB doesn’t run directly over the database. A hidden database snapshot is created and the process runs against this snapshot. Many people only realize this when the CheckDB execution fails, and once they begin digging into it, they find out about “some snapshot” that failed to be created.

While we’re on the topic, do you know what to do if the snapshot creation fails? I can see three options:

Fix the permission to create the snapshot.

Execute the CheckDB with the TABLOCK option. This can bring problems, as the DB will be a target of locks.

Create a process (job) doing the following steps:

Create a snapshot of the database.

Run DBCC CheckDB on this snapshot. Yes, DBCC CheckDB can run over database snapshots, and obviously a snapshot of a snapshot won’t be created.
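The job above can be sketched in T-SQL like this (database name, logical file name, and snapshot path are hypothetical):

```sql
-- Create a snapshot of the database. The logical NAME must match the
-- data file of the source database; names and paths here are hypothetical.
CREATE DATABASE MyDatabase_Check ON
    (NAME = MyDatabase_Data,
     FILENAME = 'D:\Snapshots\MyDatabase_Check.ss')
AS SNAPSHOT OF MyDatabase;

-- Run the integrity check against the snapshot we control.
DBCC CHECKDB ('MyDatabase_Check') WITH NO_INFOMSGS;

-- Clean up the snapshot.
DROP DATABASE MyDatabase_Check;
```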

“When and where should we execute an Integrity Check?”

It’s recommended to check the integrity of all the databases, without exception! The best approach is to execute CheckDB every day, but we understand that on busy systems (instances with hundreds of databases, VLDBs, etc.) this can be nearly impossible.

Some interesting facts:

The SQL Server maintenance plan ignores TempDB, but if TempDB becomes corrupt the instance will shut down. Use caution with this.

Executing the CheckDB on the master database will cause the execution on the mssqlsystemresource database as well.

Execution Options: There are few execution options for CheckDB:

NO_INFOMSGS: Avoids the output of informational messages.

ALL_ERRORMSGS: Allows the output of all error messages.

Default since SQL Server 2008 SP1.

NOINDEX: Skips nonclustered indexes verification.

DATA_PURITY: Validates the data based on the column characteristics.

Default since SQL Server 2005.

ESTIMATEONLY: Estimates the space needed on TempDB.

As per Paul Randal, this option is broken on newer versions of SQL Server.

TABLOCK: Uses locks instead of database snapshots.

Useful when the snapshot creation is failing.

EXTENDED_LOGICAL_CHECKS: Allows the validation of XML columns and spatial indexes.

This can be costly.

PHYSICAL_ONLY: Skips most of the logical verifications.

Needs fewer resources to run, but a “full” execution should be done periodically.
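A few of the options above, combined in typical invocations (the database name is hypothetical):

```sql
-- Typical daily check: suppress informational messages, show all errors.
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Lighter-weight check for tight maintenance windows.
DBCC CHECKDB ('MyDatabase') WITH PHYSICAL_ONLY;

-- Estimate the TempDB space a full check would need.
DBCC CHECKDB ('MyDatabase') WITH ESTIMATEONLY;
```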

Other approaches to execute: As CheckDB is a resource-intensive process, in some cases the execution may take a long time to complete. Depending on the environment, the integrity check of all the databases (or even of one very large database) may not finish on time; namely, the integrity check will exceed the maintenance window limit. To work around this, we can reproduce a full CheckDB by executing its subset commands, as follows:

CHECKALLOC

CHECKTABLE

CHECKCATALOG

CHECKFILEGROUP

By executing those commands, one per day, we can cover a full CheckDB execution in phases. Another possible approach is to execute the backups with the CHECKSUM option; this way we will be able to restore using RESTORE VERIFYONLY and get an extra level of verification. Either way, we will still need CheckDB.
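A sketch of the phased approach, plus the CHECKSUM backup idea (database, table, and path names are hypothetical):

```sql
-- Day 1: allocation checks only.
DBCC CHECKALLOC ('MyDatabase') WITH NO_INFOMSGS;

-- Day 2: system catalog checks.
DBCC CHECKCATALOG ('MyDatabase');

-- Following days: table checks, spread across the largest tables.
DBCC CHECKTABLE ('dbo.BigTable') WITH NO_INFOMSGS;

-- Complementary: back up with page checksums and verify the backup.
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backup\MyDatabase.bak' WITH CHECKSUM;
RESTORE VERIFYONLY FROM DISK = 'D:\Backup\MyDatabase.bak' WITH CHECKSUM;
```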

So, that’s all about CheckDB. There is more information out there, but the best place to get info on CheckDB is the SQLskills blog, where the CheckDB guru has written a few articles about it. See you in another post ;)

We have reached the last article of this series. To close this series out, we will talk about Distributed Transaction Coordinator, or simply DTC. I’ll try to do some simple explanation. After that, I’ll demonstrate how to prepare the DTC for a clustered instance.

What is DTC (MS DTC)?
MS DTC is an OS-level service, which comes automatically installed and running under the Network Service account. Its role is to ensure that a distributed transaction is consistent, even with failures.
Those transactions might be initiated when a transaction is dealing with data on multiple computers via network or when the transaction is dealing with multiple processes in a single computer.

All the participants of a distributed transaction work in sync with the other participants (computers) involved, looking for the right moment to commit or abort their work. For this reason, we need to make sure that the computers can reach each other.

“Do I need to configure MS DTC on my environment?”

The answer to this question is the standard for almost everything involving SQL Server: it depends. You need to understand whether or not you will perform distributed transactions. If you have more than one instance on the same computer (without additional components installed), you won’t need the DTC. On the other hand, if you have a two-node cluster with two clustered instances communicating with each other, you will need the DTC, as the instances could be on different nodes. Another possible scenario is when you have the database engine and SSIS installed; in this case you will also need to configure the DTC.

Since Windows 2008, we are allowed to have more than one instance of MS DTC on a server/cluster. So, for clustered SQL Server installations, it is a best practice to have a role exclusively for the DTC and a dedicated DTC for each SQL Server role.
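A dedicated DTC role can be sketched in PowerShell as follows; the role name, IP, and group names are hypothetical, and this is only one way of doing it (the Failover Cluster Manager UI achieves the same):

```powershell
# Hypothetical names: create an empty role, then add a DTC resource to it.
Add-ClusterServerRole -Name "SQLDTC01" -StaticAddress 10.0.1.30
Add-ClusterResource -Name "MSDTC-SQLDTC01" `
    -ResourceType "Distributed Transaction Coordinator" -Group "SQLDTC01"
# The DTC resource still needs dependencies on the role's network name
# and a clustered disk before it can be brought online.
```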

As documented by Microsoft, SQL Server follows this path to choose the MS DTC instance to use:

This way we finish this series about how to install a clustered instance. We still have many details to cover, and I will try to create separate articles with best practices, configuration alternatives, etc.

I hope you enjoyed this series and as always, if you have any doubts, contact me! Thank you for reading! If you want to check the other parts of this series, here are the links:

In continuation of our series on how to install a SQL Server 2012 clustered instance, let’s discuss how to add a node into an existing SQL Server clustered instance. The following steps are performed either to add one more node to some already installed clustered instance, or to continue the installation of a brand new clustered instance — It all boils down to the same thing. To perform this phase, you will need to have at least one node installed. In this case, we installed a new SQL Server failover instance in the Part 2 of this series.

So connect to the next node, in this case W2012SRV04, and perform the following steps:

1. Make sure that you have the same SQL Server 2012 media used to install in the other node available and execute the “Setup” binary.

4. A check will run in order to verify the setup support rules. Click “OK”.

5. Now the setup will check and install the latest updates. Keep the “Include SQL Server product updates” checked and click “Next”.

6. Another check will run in order to identify problems within the installation process. Click “Next”.

7. Insert the product key and click “Next”.

8. Accept the terms and click “Next”.

9. In this step you need to pick the instance to which this installation will be related. Notice that you have a list of installed instances, as well as the nodes where the instances are already installed.
In our case, the instance name “DB” is installed on only one node, and we need to choose this instance in the list box at the top to proceed with the node addition.

10. Now, confirm the IP settings as you did in the first node installation. Click “Next”.

11. Fill in the passwords for the Engine and Agent service accounts, and click “Next”.

12. Like in the other (first) node, you have the option to send error reports to Microsoft. Click on “Next”.

13. Now the setup will verify whether anything can block the installation process. At the end, click “Next”.

14. Review the options and click on “Install”.

15. Now you can watch the installation progress. Click “Next” when it is done.

16. Now the node addition is complete! Just verify that all of the features succeeded. Click “Close” and you are done!

At the end of this installation, you will have one more node available to run our instance. This means that we can now perform a failover from W2012SRV03 to W2012SRV04, so we now have a high availability (HA) solution. In case of a hardware failure on the active node, a failover will occur.

For some reason, such as a test or maintenance purposes, you may want to fail over manually. To perform this, open the Failover Cluster Manager tool (the same used in step 2), right-click the role name (in our case “SQL Server (DB)”) and then select the “Move” option. Two options will be shown: “Best Possible Node” and “Select Node…”. Both are valid, but the second gives you the chance to choose the node to move the role to, which is useful when you have more than two nodes.

That’s it for today. I will be back soon with the last part of this series, talking about MS DTC on clustered instances.

In continuation of our series on how to install a SQL Server 2012 clustered instance, let’s begin the actual installation, starting from the first node.
The objective of this second blog post in my series of three is to demonstrate how to install the first node of a clustered SQL Server 2012 instance and how to do basic management of it from the Failover Cluster Manager tool, on Windows Server 2012 R2.

Just to refresh our memory, this is our infrastructure:

For more details about the prerequisites and a general explanation, check out Part 1 of this series.

The installation: Starting from the very first node
So let’s start the installation. If you remember, I mentioned that we have two steps to complete the installation on both nodes. For now we will start doing the new instance installation in one of the nodes. This node will be W2012SRV03.

Here we have a few options to install in the “Instance Features” and “Shared Features” sections.
For “Instance Features”, pick the “Database Engine Services” item.
For “Shared Features”, pick “Management Studio (complete)”.
Another important point here is the directory where the shared tools will be installed. A good practice is to have a dedicated local disk for the SQL Server related files. In the image I’m using drive C, the same where I have my OS; this is my lab, so be advised that this is not a good practice for a production system. Click “Next”.

At this step, another check will run, this time to identify problems that might block the setup based on the choices made in the previous step. Just wait for it to complete and click “Next”.

This is an important step: Here we will define the instance network name (one of our prerequisites), whether we will use a default or named instance, the Instance ID and the instance root directory. A few things to consider here:
– In the “Detected SQL Server instances and features on this computer” section, we already have one instance installed. So I had to use a named instance and change the Instance ID so it does not conflict with the existing one.
– Based on the information above, to connect to our instance we will need to use: SQL04\DB.
– Another important point is the “Instance root directory”: It’s recommended to use a dedicated local disk to install the SQL Server binaries. Avoid using the system drive, a.k.a. “drive C”. Fill in and verify all the fields and click “Next”:
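Once setup completes, the connection can be tested using the named-instance path defined here. A quick sketch using sqlcmd (assuming it is on the PATH and you are logged in with a Windows account that has access):

```powershell
# Connect through the instance network name + instance name
# and confirm which server answered
sqlcmd -S "SQL04\DB" -E -Q "SELECT @@SERVERNAME;"
```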

This step confirms that the disk space requirements are met. Click “Next”:

This step relates to the Cluster Resource Group name to be used. The installation suggests a name, but you can change it.
This window also shows the reserved and already-used Resource Group names. Change the Resource Group name if you are not satisfied with the suggestion and click “Next”:

Now the disks! Another of our prerequisites appears here. At this step, we see all the available storage that can be used by our new clustered instance.
As mentioned before, I have only one available disk for this guide, so let’s use it. Just select the desired disks and click “Next”:

Do you remember the IP address prerequisite? It’s time to use it! Just pick the network that you will use, clear the DHCP checkbox and fill in the Address column with the IP. And… “Next”:

Here is our last prerequisite: Service Accounts. Fill in the information about the SQL Server Engine and Agent service accounts (login and password).
Note: For a clustered instance, the “Startup Type” for the services should be “Manual”. The Cluster Service will manage this for us.
Click “Next”.
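You can double-check the startup type after setup finishes. A quick sketch using WMI; the LIKE filter assumes the instance name “DB” from this lab:

```powershell
# List the SQL Server services of the "DB" instance and their start mode;
# on a clustered instance both Engine and Agent should report "Manual"
Get-WmiObject Win32_Service -Filter "Name LIKE '%DB%'" |
    Select-Object Name, StartMode
```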

In the same step, there is another tab where we can define the collation to be used by our database engine. By default it is “Latin1_General_CI_AS”.
For more information about collations, click here. This can be a very important choice! If you have a special collation requirement, don’t forget to select the right one during the installation; otherwise you will have hard work ahead of you to change it.
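Once the instance is up, you can confirm which collation was actually applied. A minimal sketch via sqlcmd, assuming the SQL04\DB name chosen earlier:

```powershell
# Returns the server-level collation, e.g. Latin1_General_CI_AS
sqlcmd -S "SQL04\DB" -E -Q "SELECT SERVERPROPERTY('Collation');"
```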

Now we need to choose the authentication mode of our instance. The options are either “Windows Authentication”, which takes advantage of domain and local server logins, or “Mixed Mode”, which accepts domain/Windows logins as well as logins created and managed by SQL Server. If you pick “Mixed Mode”, a login called “sa”, a member of the “sysadmin” role, will be enabled. For this reason we need to specify a password for this login. In the box below, we need to add all the domain/Windows users that will have access to the instance as members of the “sysadmin” fixed server role. Use the three buttons to add/remove logins from this list:
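To verify the authentication mode later, SERVERPROPERTY can help. A sketch, again assuming the SQL04\DB name from this lab:

```powershell
# 1 = Windows Authentication only, 0 = Mixed Mode
sqlcmd -S "SQL04\DB" -E -Q "SELECT SERVERPROPERTY('IsIntegratedSecurityOnly');"
```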

Now it’s time to define the disk strategy. This step is worth an entire article. I’m using only one disk for demonstration purposes, but the recommendation is to use one isolated disk for each of the items below.
The general rule here is: the more spread out, the better!

We could use a layout like the following:

The only thing that we cannot specify directly is a place to store the non-clustered indexes. Separating non-clustered and clustered indexes is a subject of its own.

Points to pay attention to here:

Isolate TempDB on a fast disk. Remember: starting with SQL Server 2012, we can store TempDB on a local disk in clustered installations!

Place your data files on one disk and log files on another.

Request the disks with an appropriate RAID level.

Pay attention to the partition offset and block size before the installation, even if you are using a Windows 2008+ OS.

After considering all those points, carefully assign the disks for each item and click “Next”.
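After the installation, you can audit where every file actually landed and compare it against the layout above. A sketch, assuming the SQL04\DB instance name from this lab:

```powershell
# Lists every database file (data, log, TempDB) and its physical location
sqlcmd -S "SQL04\DB" -E -Q "SELECT name, physical_name FROM sys.master_files;"
```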

Here you have the option of whether or not to send error reports. Click “Next”:

Another check will run to verify whether anything will block the failover cluster installation. After the check, click “Next”:

Here we can review all the chosen options. Review them and click “Install”:

The installation will begin — wait for completion.

In the end, you will have a confirmation of whether each feature was installed successfully.
You are done here! Click “Close”:

In this second part, we went through all the steps to install the first node of a clustered instance. Of course, some points will be slightly different in your environment, but these are pretty much all the steps to follow. After completing this installation, you will need to add the other nodes to the clustered installation, and you will be able to see the SQL Server “Role” created in the Failover Cluster Manager.

Dealing with Failover Cluster Manager

As we already have one node of our clustered instance installed, we will need to manage its resources using the Failover Cluster Manager tool.

Press “Windows Key”+R.

Type “cluadmin.msc” and click “OK”.

The Failover Cluster Manager will open.

In the image you can see two “Roles”, representing two clustered SQL Server instances. The one labeled “SQL Server (DB)” is the one we installed (remember the choice made in step 14?).
Selecting this role will show all the resources that are part of this clustered installation, such as IPs, disks, etc.
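The same information is available from PowerShell, which is handy when scripting checks. A sketch using the FailoverClusters module; the role name matches this lab:

```powershell
Import-Module FailoverClusters

# List all roles (cluster groups), their owner node and state
Get-ClusterGroup

# List the resources (IP, disks, network name, services) of our SQL Server role
Get-ClusterResource | Where-Object { $_.OwnerGroup -eq "SQL Server (DB)" }
```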

At this point, we have the clustered instance installed on one node only, so we cannot perform a failover yet.

To stop SQL Server, which will also stop the SQL Server Agent, right-click the SQL Server Engine resource and do the following:

On the other hand, to start it, do the following:

Note: When you start SQL Server, the Agent service also needs to be started. One way to reduce the number of steps is to start the Agent first; this way, the SQL Server Engine will start automatically. The reason is that the SQL Server Agent depends on the SQL Server Engine, so WSFC will start the Engine resource before starting the Agent.
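The same stop/start can be scripted. A sketch; the resource names “SQL Server (DB)” and “SQL Server Agent (DB)” are what this setup would typically produce, so verify yours with Get-ClusterResource first:

```powershell
# Taking the Engine resource offline also takes the dependent Agent offline
Stop-ClusterResource -Name "SQL Server (DB)"

# Bringing the Agent online starts the Engine first (WSFC honors the dependency)
Start-ClusterResource -Name "SQL Server Agent (DB)"
```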
Another very important item is the set of dependencies of our SQL Server resource.
To check them:

Right-click the SQL Server Engine resource.

Click on Properties.

Select the “Dependencies” tab.

Here you can see all the resources that the SQL Server Engine depends on. Looking at our picture, if either “Cluster Disk 1” or “SQL Server network name (SQL04)” fails, the SQL Server Engine will shut down/fail over!!
Any clustered resource can depend on another resource. For example, “SQL Server network name (SQL04)” depends on the IP address. This way, if the IP fails, the “SQL Server network name (SQL04)” resource goes offline, and this will trigger a SQL Server failover/shutdown.
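The dependency tree can also be inspected without opening the GUI. A sketch with the FailoverClusters module; the resource name follows this lab:

```powershell
# Shows the dependency expression of the Engine resource,
# e.g. ([Cluster Disk 1] and [SQL Server network name (SQL04)])
Get-ClusterResourceDependency -Resource "SQL Server (DB)"
```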

What to take from here?

Check the dependencies and conditions on this tab; this can be useful to increase your availability rate!
As a practical example:
All the disks are important, but if the backup disk fails, we can keep the service online and fix the problem in the background. However, if SQL Server depends on this disk, we will have a failover/shutdown.

So, pay attention to this!
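If a review shows that SQL Server depends on a disk it can live without (such as a backup disk), the dependency expression can be rewritten. This is only a hedged sketch: the resource names follow this lab, and you must keep every disk that actually holds data or log files in the expression.

```powershell
# Make the Engine depend only on the data disk and the network name,
# so a backup-disk failure no longer triggers a failover/shutdown
Set-ClusterResourceDependency -Resource "SQL Server (DB)" `
    -Dependency "[Cluster Disk 1] and [SQL Server network name (SQL04)]"
```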

What’s next?

After completing this second step, we already have our instance working in a clustered environment, but we have only one available node. We will complete this series with two more parts: