The release of ONTAP 9.1 earlier this year brought with it, among many other things, NetApp Volume Encryption (NVE). Although offered at no cost, this feature requires an additional license which needs to be generated by NetApp. On top of the NVE license, NetApp also added a new license which is needed to enable the newly integrated trusted platform modules (TPM). What is a TPM, you may ask? By definition a TPM is “a dedicated microcontroller designed to secure hardware by integrating cryptographic keys into devices”. When first reading about this I wondered which platforms include the TPM module. Does every platform that supports NVE have a TPM module? After a bit of digging I found that not every platform that supports NVE includes a TPM. The list below shows all platforms that currently have TPM modules integrated:

*2018-03-18: Updated for version 1.4.1 of my container image which includes bug fixes, Grafana 4.5.2 and NetApp Harvest 1.4

This post is based on the original “How To Setup NetApp Harvest Using Docker” blog post; however, it has been tweaked to reflect the use of Kubernetes and the NetApp Trident plugin. It is assumed that both Kubernetes and NetApp Trident are already deployed, so if you have questions on deploying these technologies see here.

Image download and distribution

Create the appropriate directories which will house the docker container build files

Shell

# mkdir -pv /root/docker/harvest

Download the custom NetApp Harvest Docker image (dburkland/harvest)

Download the following files to your workstation and upload them to the “/root/docker/harvest/“ directory on the Kubernetes master node

As folks adopt DevOps principles they are using common applications to help them get there. One of those is Docker, and usually Kubernetes is mentioned in the same sentence. To review, Docker is essentially a wrapper for Linux containers (LXC), which, similar to FreeBSD jails or Solaris Zones, provide a method for applications (and their dependencies) to be isolated in separate namespaces, all while sharing the host system’s kernel. Docker containers are extremely portable as they just need the host server to have an LXC-compatible kernel and the Docker application installed. Kubernetes takes this concept to the next level by automating the deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery. For a more detailed intro to what Kubernetes is, check out the “Sources” section below.

Now to the meat of the post: what is NetApp Trident and where does it fit into the Docker/Kubernetes equation? Well, according to NetApp Trident’s GitHub page, “Trident provides storage orchestration for Kubernetes, integrating with its Persistent Volume framework to act as an external provisioner for NetApp ONTAP, SolidFire, and E-Series systems. Additionally, through its REST interface, Trident can also provide storage orchestration for non-Kubernetes deployments.” In other words, Trident allows one to attach persistent storage from NetApp FAS, E-Series, or SolidFire system(s) to containers, allowing applications such as databases to easily operate in a containerized environment. Below are the steps I compiled to not only stand up a small 3-node Kubernetes cluster but also deploy the NetApp Trident plugin:
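To make the end result concrete, here is a minimal sketch of how an application would request persistent storage once Trident is running. The StorageClass name “basic” and the manifest path are placeholders I chose for illustration, not something defined by this post:

```shell
# Write a PVC manifest that requests storage from a (hypothetical) Trident-backed
# StorageClass named "basic"; Trident then provisions the backing NetApp volume.
cat > /tmp/trident-pvc.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: basic
EOF
# kubectl create -f /tmp/trident-pvc.yaml   # run this on the Kubernetes master
```

In a real cluster the claim would bind to a volume carved out of the NetApp backend configured in Trident.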

*2018-03-18: Updated for version 1.4.1 of my container image which includes bug fixes, Grafana 4.5.2 and NetApp Harvest 1.4
*2017-07-13: Updated for version 1.3 of my container image which includes some updated dashboards
*2017-05-21: Updated for version 1.2 of my container image which includes NetApp SDK 5.7
*2016-11-15: Updated for version 1.1 of my container image which includes Grafana 3.1 and NetApp Harvest 1.3

This post is based on the original “How To Setup NetApp Harvest Using Docker” blog post; however, it has been tweaked to reflect a simpler deployment method which relies on a pre-built container image (vs. building one from source).

Set up a Docker host server (physical or virtual) with at least the following recommended specs:

Linux Distribution: CentOS or Red Hat Enterprise Linux (RHEL) 6.7+ or 7.0+ is preferred; otherwise see the link under the “Other Distributions” section below for more details.

Access: Root (direct or via sudo)

For secure environments where the default umask value is adjusted, verify that the setting does not apply to system services (daemons) by checking the “/etc/init.d/functions” file (CentOS/RHEL 6). If the umask value in that file has been adjusted, temporarily set it to 022. Once all the deployment steps discussed in this post are completed, you can safely revert the umask setting back to its original (secure) value.
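The check described above can be sketched as follows. The snippet deliberately works on a throwaway copy rather than the real “/etc/init.d/functions”, and the 027 value is just an assumed example of a hardened setting:

```shell
# Work on a copy so the real /etc/init.d/functions is untouched in this sketch
cp_file=/tmp/functions.copy
printf 'umask 027\n' > "$cp_file"             # simulate a hardened daemon umask
grep '^umask' "$cp_file"                      # inspect the current value
sed -i 's/^umask 027$/umask 022/' "$cp_file"  # temporarily relax to 022 for the install
grep '^umask' "$cp_file"                      # confirm the change before proceeding
```

On a real host you would edit the actual file, complete the deployment, then restore the original value.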

Click “Add” to bring up the “Add User” window and create the “harvest” user as seen in the screenshot

For certificate authentication see the “NetApp Harvest Installation and Administration Guide” located here for more information

Add all applicable NetApp 7mode or cDOT systems to “/opt/data/opt/netapp-harvest-conf/netapp-harvest.conf” on the Docker host (See examples below). Also, do not forget to change “username” to reflect the local user account you created on the NetApp system(s) in the previous step.

7mode controller or cDOT cluster

OCUM 6.x Server (Required for capacity dashboards)
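The stanzas below are illustrative only: the section names, IP addresses, and site values are placeholders, and the key names follow the conventions described in the NetApp Harvest Installation and Administration Guide, so double-check them against the guide for your Harvest version:

```
[cluster1]
hostname      = 10.0.1.20
site          = dc1
host_type     = FILER
username      = username
password      = password

[ocum1]
hostname      = 10.0.1.30
site          = dc1
host_type     = OCUM
username      = username
password      = password
```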

Login to the container and restart the Harvest service to apply the changes

Shell

docker exec -t -i harvest /bin/bash

/etc/init.d/netapp-harvest restart

You should now be able to login to Grafana by pointing your web browser at http://DockerHostIP

Default Username: admin

Default Password: admin

Within a half hour the dashboards should be correctly displaying realtime statistics as seen in the following example screenshot:

*2017-07-13: Updated for version 1.3 of my container image which includes some updated dashboards
*2017-05-21: Updated for version 1.2 of my container image which includes NetApp SDK 5.7
*2016-11-15: Updated for version 1.1 of my container image which includes Grafana 3.1 and support for NetApp Harvest 1.3

After receiving positive comments regarding my previous Graphite & Grafana blog posts, I wanted to further streamline the deployment process for this solution while also incorporating a few new applications. If you read my previous blog posts on Grafana or Graphite you would know that Grafana is a graphing application which elegantly displays data stored by Graphite, a time-series database. Previously we relied on existing NetApp OnCommand Performance Manager (OPM) installations to collect and relay NetApp performance data to our Graphite server installation. Through my experience in the field I have noticed that the data collected by OPM lacks detail, especially when trying to perform a performance deep dive of a given system. With that being said, there is now an alternative to OPM known as NetApp Harvest. Harvest is an excellent Perl-based application created by NetApp Enterprise Infrastructure Architect Christopher Madden which collects data from NetApp 7mode/cDOT/OnCommand Unified Manager systems and relays it to a defined Graphite server. Harvest is very flexible configuration-wise and comes bundled with a collection of pre-canned yet verbose Grafana dashboards. Since its public release, I have deployed the updated Grafana/Graphite/Harvest solution for many customers over the last year with great success. I have received mostly positive feedback; however, the one complaint I get time and time again concerns the complexity of the deployment (specifically the software dependencies). After thinking of ways to simplify the deployment process and seeing what methods fellow NetAppers have applied to similar challenges (Perfstat), I decided to “Dockerize” the solution.
For those that are new to Docker, Docker is essentially a wrapper for Linux containers (LXC), which, similar to FreeBSD jails or Solaris Zones, provide a method for applications (and their dependencies) to be isolated in separate namespaces, all while sharing the host system’s kernel. Docker containers are extremely portable as they just need the host server to have an LXC-compatible kernel and the Docker application installed. Due to these simple requirements, an individual can deploy a given container on various Linux platforms, possibly located in contrasting environments (such as public and private clouds), with great ease. The topic of containers is fascinating, and if you are interested in learning more about this technology and NetApp integration I encourage you to see the “Sources” section below. Now that you have a basic understanding of what Docker is, I would like to mention that I have created a cookbook-style tutorial on how to quickly build and deploy a Grafana/Graphite/NetApp Harvest container in your environment. Finally, while this solution is extremely powerful, it is merely a complement to the feature sets offered by other NetApp applications such as OnCommand Unified Manager (OCUM), OnCommand Performance Manager (OPM), and OnCommand Insight (OCI). If you have any questions or run into any issues with this tutorial please do not hesitate to contact me via the comment section or via Twitter (@dburkland).

Set up a Docker host server (physical or virtual) with at least the following recommended specs:

Linux Distribution: CentOS or Red Hat Enterprise Linux (RHEL) 6.7+ or 7.0+ is preferred; otherwise see the link under the “Other Distributions” section below for more details.

Access: Root (direct or via sudo)

For secure environments where the default umask value is adjusted, verify that the setting does not apply to system services (daemons) by checking the “/etc/init.d/functions” file (CentOS/RHEL 6). If the umask value in that file has been adjusted, temporarily set it to 022. Once all the deployment steps discussed in this post are completed, you can safely revert the umask setting back to its original (secure) value.

NOTE: If a proxy server is required for outbound internet access you will need to define it in order for the image build process to complete properly. Depending on whether authentication is required, execute the “proxy_enable.sh” script with the appropriate arguments based on the examples below.
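Since the exact arguments depend on the “proxy_enable.sh” script itself, here is only a hedged illustration of the environment variables such a proxy setup ultimately exports; the proxy host, port, and credentials are placeholders:

```shell
# Without authentication (placeholder proxy address)
export http_proxy="http://proxy.example.com:3128"
export https_proxy="$http_proxy"
# With authentication the URL would embed the credentials instead, e.g.:
# export http_proxy="http://user:password@proxy.example.com:3128"
```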

One new feature in 8.2.2+ that hasn’t been given much press (thanks to Curtis @ NetApp U) is the ability to boot directly into the boot menu or maintenance mode from the loader prompt. This feature will mainly be helpful for field personnel who frequently set up and build out NetApp Clustered Data ONTAP systems. See below for a list of commands that are required to boot a cDOT system into each respective area.

Boot directly to the boot menu by entering the following from the controller’s loader prompt

Shell

boot_ontap menu

Boot directly to maintenance mode by entering the following from the controller’s loader prompt

Shell

boot_ontap maint

UPDATE #1 2015-05-21: A fellow NetApper mentioned that this functionality only exists if you are running a version of the BIOS and loader firmware which supports it.
UPDATE #2 2017-04-06: It looks like the “prompt” argument no longer works and has been replaced by “menu” so I have updated the post to reflect that change.

Before I dive into Grafana I wanted to make a quick note that last week marked my 1 year anniversary at NetApp. This year has honestly flown by, and it is hard to believe as it feels like just yesterday I started this new adventure with various unknowns. It has been a year full of change along with many new experiences which I feel have made me stronger, both as an individual and a professional. I am thankful for the opportunity at NetApp and for the ability to work with so many great customers that have turned to us to solve their most critical of IT issues. I look forward to the challenges that lie ahead and to many more years with this great team at NetApp.

With that being said, I think it is now time to discuss Grafana. Grafana is an open source graphing application that integrates with applications such as Graphite. If you read my previous blog post on OPM’s new external data relay capabilities, you would know that one of the applications you can send data to is Graphite. While you can create custom dashboards within Graphite, the interface leaves much to be desired from both an aesthetic and a functional point of view – this is where Grafana comes in. Grafana has a much more refined interface that provides the ability to create advanced dashboards from several different data sources. Grafana also contains a powerful query editor that allows you to filter data using patterns to ensure only the proper data is illustrated. Below I have created a step-by-step tutorial on how to stand up a Grafana/InfluxDB instance on an existing Graphite server running CentOS 6.6. If you run into any issues or have any feedback at all regarding the content please feel free to leave a comment below.

Prepare Apache for the Grafana installation and configuration

Edit the “/etc/httpd/conf/httpd.conf” file and add the following line of text below the “Listen 80” line to ensure Apache listens on port 8080:

Shell

Listen 8080

Also add the following lines right after the “Include conf.d/*.conf” line to the aforementioned file to prepare Apache for Grafana (replace ‘10.26.69.144’ with the IP address of your Grafana server)

Shell

# Added for Grafana

<VirtualHost *:80>

ServerName 10.26.69.144

ServerAlias 10.26.69.144

Redirect permanent / http://10.26.69.144:3000/

</VirtualHost>

Edit the “/etc/httpd/conf.d/graphite-web.conf” file and make the appropriate changes to the “<VirtualHost *” line

Shell

<VirtualHost *:8080>

Restart Apache to apply the recent configuration changes

Shell

/etc/init.d/httpd restart

Install and configure InfluxDB which will serve as a backend for Grafana

Download the latest version of the InfluxDB RPM

Shell

wget https://s3.amazonaws.com/influxdb/influxdb-0.8.8-1.x86_64.rpm

Install the InfluxDB RPM package

Shell

rpm -ivh influxdb-0.8.8-1.x86_64.rpm

Enable InfluxDB to start at boot

Shell

ln -s /opt/influxdb/current/scripts/init.sh /etc/init.d/influxdb

chkconfig influxdb on

Ensure that InfluxDB is started (if not start it)

Shell

/etc/init.d/influxdb status

/etc/init.d/influxdb start

Login to the InfluxDB management URL which can be accessed by browsing to http://<server_ip_address>:8083

Once you are logged in using the default username & password of “root”, create a database named “dashboards”

Next you will want to click the database name and then create a new admin database user named “grafana”. Please ensure that you are recording the passwords specified in the recent steps as you will need these in the coming steps.

Reset the “root” user password by browsing to “Cluster Admins” -> root, entering a new password twice, and then clicking “Change Password”.

Once you are done customizing this graph you can add more (or quit) by pressing the “Back to dashboard” button at the top of the page

You should now see the recently created graph on your dashboard; however, you will need to save your changes. You can do this by clicking the floppy disk icon (at the top of the page) -> specifying a dashboard name in the text field -> clicking the floppy disk icon to the right of the text field.

If you want to set the aforementioned dashboard as the default you can click the floppy disk icon (at the top of the page) -> “Save as Home”

As you may well know OnCommand Performance Manager 1.1RC1 was recently released which added the ability to send data to an external system such as Graphite. I have created the following tutorial which explains how to setup a CentOS server and install the Graphite application on it. If you run into any issues with the tutorial please let me know in the comment section below.

# Aggregation methods for whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds

#

# [name]

# pattern = <regex>

# xFilesFactor = <float between 0 and 1>

# aggregationMethod = <average|sum|last|max|min>

#

# name: Arbitrary unique name for the rule

# pattern: Regex pattern to match against the metric name

# xFilesFactor: Ratio of valid data points required for aggregation to the next retention to occur

# aggregationMethod: function to apply to data points for aggregation

#

[min]

pattern = \.min$

xFilesFactor = 0.1

aggregationMethod = min

[max]

pattern = \.max$

xFilesFactor = 0.1

aggregationMethod = max

[sum]

pattern = \.count$

xFilesFactor = 0

aggregationMethod = sum

[default_average]

pattern = .*

xFilesFactor = 0.5

aggregationMethod = average
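Since the rules are evaluated top-down with the first match winning, the selection logic can be sketched in shell (the metric names below are made-up examples, not real Harvest metrics):

```shell
# First-match-wins rule selection, mirroring the storage-aggregation.conf entries
match_rule() {
  case "$1" in
    *.min)   echo "min" ;;              # pattern = \.min$
    *.max)   echo "max" ;;              # pattern = \.max$
    *.count) echo "sum" ;;              # pattern = \.count$
    *)       echo "default_average" ;;  # pattern = .*
  esac
}
match_rule "netapp.perf.cluster1.vol0.read_latency.max"   # -> max
match_rule "netapp.perf.cluster1.vol0.total_ops"          # -> default_average
```

Picking min/max/sum for the matching metric suffixes keeps downsampled data meaningful; averaging a counter or a .max series would distort it.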

Add the following line to the “/etc/httpd/conf.d/graphite-web.conf” configuration file underneath “Alias /media/”

Shell

Alias /content/ "/usr/share/graphite/webapp/content/"

Fix the following filesystem permission issues

chmod 775 /var/log/graphite-web/

chmod 775 /var/lib/graphite-web/

Start the “carbon-cache”, “carbon-aggregator” & “httpd” services and enable them to start at boot

/etc/init.d/carbon-cache start

chkconfig carbon-cache on

/etc/init.d/carbon-aggregator start

chkconfig carbon-aggregator on

/etc/init.d/httpd start

chkconfig httpd on

Complete the “Configuring a connection from a Performance Manager server to an external data provider” steps outlined in the “OnCommand Performance Manager 1.1 Installation and Administration Guide For VMware Virtual Appliances” document found here. This will enable performance data to be sent from the OPM server to the new graphite server

After waiting 5 or so minutes you should now see all of the appropriate OPM data points within Graphite under the “netapp-performance” folder

Today is hump day and with that comes some added motivation to update this blog with some new material! The following post discusses another common topic: the termination of CIFS sessions in cDOT. This task could be performed in 7-mode; however, the commands have since changed in cDOT. Refer to the tutorial below to kill any unwanted CIFS sessions for a specific Windows user:

Display the current CIFS session(s) for the user and record the value(s) in the “Connection ID” column
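As a hedged sketch of what those two steps look like from the clustershell — the SVM name, domain user, and connection ID are placeholders, and you should verify the exact command syntax against your ONTAP release:

```
cluster1::> vserver cifs session show -vserver svm1 -windows-user CORP\juser
cluster1::> vserver cifs session close -vserver svm1 -connection-id 2012
```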

After spending the last few weeks moving into my new place and having “fun” furnishing it, I am back on the road armed with more content! Within a few hours of being onsite today I got asked if it was possible to set NTFS permissions on files and/or folders within Clustered Data ONTAP (cDOT). This is another commonly asked question and the answer is yes, you can apply NTFS permissions to filesystem objects from within cDOT. Below I have included a summarized step-by-step tutorial on how to apply NTFS permissions to a given path (which can be the root of a volume, or a file or folder which resides within a cDOT volume):

Add access control entries to the recently created security descriptor. NOTE: Any access control entries NOT added to the security descriptor will be removed from the specified parent & child filesystem objects when the policy is applied!
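The overall flow can be sketched with the “vserver security file-directory” command family; all names below (SVM, security descriptor, policy, path, accounts) are placeholders, so treat this as an outline to check against the command reference for your ONTAP version rather than a copy-paste recipe:

```
cluster1::> vserver security file-directory ntfs create -vserver svm1 -ntfs-sd sd1 -owner CORP\Administrator
cluster1::> vserver security file-directory ntfs dacl add -vserver svm1 -ntfs-sd sd1 -access-type allow -account CORP\juser -rights full-control -apply-to this-folder,sub-folders,files
cluster1::> vserver security file-directory policy create -vserver svm1 -policy-name pol1
cluster1::> vserver security file-directory policy task add -vserver svm1 -policy-name pol1 -path /vol1 -ntfs-sd sd1
cluster1::> vserver security file-directory apply -vserver svm1 -policy-name pol1
```

Note how the final “apply” replaces the existing DACL on the path with exactly the entries added to the descriptor, which is why the warning above matters.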