Andrew Sullivan

Protecting data is arguably the most important job that your storage is entrusted with. Losing data is simply not an option, so it’s critical to protect data through the use of backups and replication.

There are three ways you can replicate data in your clustered Data ONTAP system. First, you can replicate to a separate volume in the same SVM. Second, you can replicate to a volume that belongs to a different SVM in the same cluster. Finally, you can replicate to a volume in another cluster entirely.
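Each of these options is implemented as a SnapMirror relationship; only the destination changes. Here is a minimal sketch using the Toolkit's SnapMirror cmdlet (the SVM and volume names are hypothetical, and you should verify the parameter names with Get-Help New-NcSnapmirror for your Toolkit version):

```powershell
# intra-SVM: destination volume belongs to the same SVM as the source
New-NcSnapmirror -SourceVserver svm1 -SourceVolume vol1 `
    -DestinationVserver svm1 -DestinationVolume vol1_dr -Type dp

# inter-SVM: destination volume belongs to a different SVM in the same
# cluster (requires an SVM peer relationship)
New-NcSnapmirror -SourceVserver svm1 -SourceVolume vol1 `
    -DestinationVserver svm2 -DestinationVolume vol1_dr -Type dp

# inter-cluster: source SVM is on a peered cluster; run this while
# connected to the destination cluster
New-NcSnapmirror -SourceVserver svm1 -SourceVolume vol1 `
    -DestinationVserver svm_dr -DestinationVolume vol1_dr -Type dp
```

Once created, a relationship still needs a baseline transfer, which Invoke-NcSnapmirrorInitialize kicks off.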

In this post we will cover:

Peering Relationships

Cluster Peers

SVM Peers

SnapMirror Policies

SnapMirror

Version Flexible SnapMirror

SnapVault

Load Sharing Mirrors

If you are interested in additional detail about SnapMirror and SnapVault in clustered Data ONTAP 8.3, please see the post I did over at DatacenterDude.com.

Over the last several posts we have reviewed how to create and manage aggregates, SVMs, and volumes. All of that is great, but at this point you still can’t access that capacity to begin storing things. In this post we will discuss the various ways to access the volumes and the data inside them.

Volumes are the containers of data in a NetApp storage system. They are “stored” on aggregates, accessed via Storage Virtual Machines, and are the point-of-application for many of the features of Data ONTAP. Let’s look at what we can do with volumes leveraging the PowerShell Toolkit:
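As a taste of that lifecycle, here is a hedged sketch (the volume and aggregate names are examples; check Get-Help New-NcVol for the exact parameters in your Toolkit version):

```powershell
# create a 10 GB volume on aggr1, mounted in the namespace at /projects
New-NcVol -Name projects -Aggregate aggr1 -Size 10g -JunctionPath /projects

# grow the volume to 20 GB
Set-NcVolSize -Name projects -NewSize 20g

# unmount, offline, and destroy the volume (destructive!)
Dismount-NcVol -Name projects
Set-NcVol -Name projects -Offline
Remove-NcVol -Name projects
```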

Storage Virtual Machines (SVMs) are the entities in clustered Data ONTAP that the storage consumer actually interacts with. As the name implies, they are virtual entities; however, they are not virtual machines in the way you might expect. There are no CPU, RAM, or cache assignments to be made. Instead, we assign storage resources to the SVM, such as aggregates and data LIF(s), which the SVM then uses to provision FlexVols and make them available via the desired protocol.

In this post we will look at how to configure an SVM using PowerShell.
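To set expectations, the core of that configuration is only a few cmdlets. A sketch, with names and addresses as placeholders (verify the parameters against Get-Help New-NcVserver for your Toolkit version):

```powershell
# create the SVM and its root volume
New-NcVserver -Name svm1 -RootVolume svm1_root -RootVolumeAggregate aggr1 `
    -RootVolumeSecurityStyle unix -NameServerSwitch file

# restrict which aggregates the SVM can provision FlexVols from
Set-NcVserver -Name svm1 -AggrList aggr1,aggr2

# add a data LIF so clients can reach the SVM
New-NcNetInterface -Name svm1_data1 -Vserver svm1 -Role data `
    -Node cluster01-01 -Port e0c -DataProtocols nfs `
    -Address 192.168.0.50 -Netmask 255.255.255.0
```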

Using the NetApp PowerShell Toolkit (NPTK) can sometimes be a daunting task. Fortunately, it is pretty intuitive for configuring most aspects of your storage system. Let's start by looking at some of the cluster-level configuration items that can be managed using the NPTK.

Getting Help

The NetApp Communities: The communities are a great place to get help quickly with any question you might have. I recommend the Microsoft Cloud and Virtualization Discussions board, though the SDK and API board occasionally has relevant questions as well. You can also send me a message using the NetApp Communities. My username is asulliva, and I'm happy to respond to questions directly through the Communities messaging system.

From the NPTK itself: One of the lesser known features of the Toolkit is that it has help built in. Yes, you can use the standard Get-Help cmdlet, but there's a hidden treasure: Show-NcHelp. This cmdlet will generate an HTML version of the cmdlet help and open your default browser to display it.

From here you can dig through the cmdlets and view all of the information you want to know about them quickly and easily.

A Few Basics To Get Started

Now that you have downloaded and installed the toolkit, it's time to use it. Let's look at a couple of basic tasks.

Note: I will be using the cDOT cmdlets, however nearly all of the commands have an equivalent available for 7-mode.

Connecting to a controller
Connecting to your cluster is extremely easy. You can specify the cluster management IP address or any of the node management IPs. If you do not provide credentials as part of the command invocation, it will prompt for them.

# connect to the cluster management LIF
Connect-NcController $controllerNameOrIp -Credential (Get-Credential)

Getting Information
Now that we’re connected to the cluster, let’s take a look at some of the information that can be gathered:

# show cluster information
Get-NcCluster

# show node information
Get-NcNode

# show the number of disks assigned to each controller
Get-NcDisk | %{ $_.DiskOwnershipInfo.HomeNodeName } | Group-Object

# show a summary of disk status
Get-NcDisk | %{ $_.DiskRaidInfo.ContainerType } | Group-Object

# show failed disks
Get-NcDisk | ?{ $_.DiskRaidInfo.ContainerType -eq "broken" }

# show root aggregates
Get-NcAggr | ?{ $_.AggrRaidAttributes.HasLocalRoot -eq $true }

# show volumes which are not SVM root volumes
Get-NcVol | ?{ $_.VolumeStateAttributes.IsVserverRoot -eq $false }

Onward to Automation

There are a number of "PowerShell Toolkit 101" posts that introduce some of the possibilities.

This doesn’t even begin to scratch the surface of the NetApp PowerShell Toolkit. Anything that can be done from the command line can be done using the toolkit. If you’re interested in seeing specific examples, need help, or just have questions, please let me know in the comments!

In the last post we covered how to create Filters and Finders in WFA so that we could access WFA data through a RESTful interface. This creates a nice separation between the two systems and decouples the dependency on the WFA database for dynamically populating data in vRealize Orchestrator workflows.

Let’s look at how to take the result of the last post, query the data from vRO, and incorporate it into vRO workflows.

I’m going to be using the same workflow as before, “Create a Clustered Data ONTAP Volume”, so we will once again need four inputs:

Cluster Name – A string with valid values being clustered Data ONTAP systems configured in WFA

Storage Virtual Machine Name – A string with valid values being SVMs belonging to the cluster selected above.

Volume Name – A string provided by the user.

Volume Size (in GB) – A number provided by the user.

To get started, we are going to create vRO actions which execute REST operations against the WFA filters/finders to return the same data that we previously retrieved with direct SQL queries. These actions will then be invoked from the workflow presentation.

Using the database to get information from Workflow Automation (WFA) and create dynamic vCenter Orchestrator (vCO) workflows is one way to add dynamic data fields to those workflows. However, it just feels dirty. It's a "backdoor", if you will, and not very scalable or supportable. Imagine if the WFA database schema changes: you would be responsible for changing all of the SQL queries in the vCO workflows, which may break in non-obvious ways.

A much more robust method is to abstract those queries (and keep them in WFA) then use REST to retrieve the data. WFA provides two mechanisms, filters and finders, for selecting and returning data from the database internally. We can access these through the REST interface, which we can then parse from XML into a more vCO friendly format.
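In vRO this boils down to a small scriptable action: call the WFA REST interface, parse the XML, and return plain values for the presentation layer. A sketch, assuming a WFA REST host has already been registered with the HTTP-REST plugin, and with illustrative element and attribute names (check the actual XML your WFA instance returns):

```javascript
// wfaHost is a RESTHost object from vRO's HTTP-REST plugin inventory
var request = wfaHost.createRequest("GET", "/rest/filters", null);
var response = request.execute();

// WFA responds with XML; vRO's JavaScript engine supports E4X parsing
var doc = new XML(response.contentAsString);

// flatten the result into a simple array for the workflow presentation
var names = [];
for each (var filter in doc.filter) {
    names.push(filter.@name.toString());
}
return names;
```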

What is a filter?

A filter is simply a SQL select statement that has been validated to return certain fields (the natural keys at a minimum).

What is a finder?

A finder is one or more filters.

Putting them to work

Both of these constructs use SQL to query the WFA cache database (which is periodically updated from the data sources such as OnCommand Unified Manager), however a finder does not have SQL directly in it, only the filter does.
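For example, a filter that returns the volumes belonging to a given SVM might look like the following. This is a hedged sketch: the table names come from WFA's cm_storage dictionary, but the exact columns and the ${...} input parameter syntax should be verified against the filters that ship with your WFA version:

```sql
SELECT
    vol.name,
    vserver.name AS 'vserver.name',
    cluster.primary_address AS 'vserver.cluster.primary_address'
FROM
    cm_storage.volume vol
    JOIN cm_storage.vserver vserver ON vol.vserver_id = vserver.id
    JOIN cm_storage.cluster cluster ON vserver.cluster_id = cluster.id
WHERE
    vserver.name = '${vserver_name}'
```

A finder wrapping this filter adds no SQL of its own; it simply references the filter (optionally alongside others) and returns the rows common to all of them.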

NetApp’s Workflow Automation (WFA) supports two languages out-of-the-box: PowerShell and Perl. Adding modules to the Perl installation is done in a non-obvious way because the install does not include ActiveState’s PPM package manager.

However, the PPM command line utility is included. Here is how to use it to manage the packages on your system.

First, you will need to open a command prompt with elevated privileges. Click the start button, find “cmd”, right click and select “Run as Administrator”.

# browse to the correct location
cd %programfiles%\NetApp\WFA\Perl64\bin

# use the ppm command line utility to view the repos

ppm repo list

# if you don't see the ActiveState repo, add it using the following command:

ppm repo add activestate

# once that's done, install the modules using the ppm install command

ppm install DBD::mysql

# if you want to know all of the modules on your system, use this command: