Now click the + sign to add your Backup Exec host. Enter the hostname and click OK.

Next we need to create a DD Boost storage unit. Navigate to Data Management – DD Boost – Storage Units and click Create. Enter a descriptive name, configure any quotas if desired, and click OK.

Now we need to enable an interface for DD Boost. Navigate to Data Management – DD Boost – IP Network. Highlight and edit an interface group, tick the “Enabled” check box in the resulting dialog box and click OK.
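If you prefer the console, the same DD Boost setup can be scripted from the Data Domain CLI. A rough sketch, with BackupExec as the example storage-unit name (exact syntax varies between DD OS versions, and newer releases also require a user argument on storage-unit create):

```
# Enable DD Boost and create the storage unit
ddboost enable
ddboost storage-unit create BackupExec

# Verify the result
ddboost status
ddboost storage-unit show
```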

That takes care of the configuration on the DataDomain side of things. Let’s move over to Backup Exec

Backup Exec 2012 Configuration

Download and install the latest version of the EMC DataDomain Boost for Symantec OpenStorage plugin (version 2.5.0.3-314845 at the time of writing). The file is available from the DataDomain support site (Powerlink login required)

Open up the Backup Exec console, select the Storage tab and click Configure Storage

The type of storage is OpenStorage

Enter a name and description for the DataDomain

Select the DataDomain provider (the DataDomain only shows up once you have completed the OST plugin installation as per Step 1)

Enter the connection details for your DataDomain device

Enter the Storage Location configured on the DataDomain (BackupExec in our case)

Select the number of concurrent operations allowed to the DataDomain

Click Finish on the Summary screen and confirm that you would like to restart the Backup Exec services

With that completed you’ll be able to select the DataDomain as a deduplication target (as opposed to a B2D device).

Friday, December 14, 2012

IT peeps supporting Microsoft Exchange can be divided into two groups – those that have experienced problems with the OAB and those that will. If you’re in the first camp – I feel your pain. Those of you lucky enough to be in the second camp – file this away, it might come in handy…no I’m not being facetious.
The process below will blow away your OAB and create a new one, so be mindful. FWIW I’ve never had any issues with this process. This is especially effective if you’ve screwed up on the public folder replication during Exchange Migrations (don’t ask).
We rebuild the Exchange 2010 OAB like so:

Create a new Offline Address Book object

Open the Exchange Management Console (EMC) and navigate to Organisation Configuration – Mailbox

Click the Offline Address Book tab. Right click in the blank area and click New Offline Address Book

Give your OAB a different name than the existing one

Select your Exchange 2010 MBX server as the OAB generation server

Check the Include the default Global Address List option

Check the Enable Web-based distribution option as well as the Enable public folder distribution option

Finish the wizard

Restart Exchange Services

Restart the Microsoft Exchange System Attendant service

Restart the Microsoft Exchange File Distribution service

Update and set the OAB as default Offline Address Book

Right-click your newly created OAB and click Update. This can take a couple of minutes; confirm successful completion via your Application log. Once the update has completed successfully, right-click the OAB again and click Set as Default

Wednesday, December 12, 2012

I received a mail from NetApp this morning, pointing my attention to KB ID 7010014. In a nutshell, there is a drive firmware upgrade available which lowers the drive failure rate. AutoSupport has also been nagging me about out-of-date DS4243 shelf firmware, so I thought this would be a perfect opportunity to upgrade it all in one go. It goes without saying that the upgrades must have zero impact on client access. The process below was run on Data ONTAP Release 8.1 7-Mode.

Update the Disk Qualification Package

Download the latest DQP from the NetApp support site

Extract the files and copy them to the /etc folder on your filer, overwriting the existing files

Update the Disk Shelf Software

Download the appropriate disk shelf software upgrade from the NetApp support site

Extract and copy it to the /etc/shelf_fw folder on your filer

Run the options shelf.fw.ndu.enable command and verify it is set to on

If not, enable it with the options shelf.fw.ndu.enable on command

Execute the storage download shelf command to update the shelf firmware and enter yes when prompted

Update the Disk Firmware

Download the latest disk firmware from the NetApp support site

Verify the following, otherwise you will not be able to do a non-disruptive upgrade

Aggregates need to be RAID-DP or mirrored RAID4

You need to have functioning spares

Run the options raid.background_disk_fw_update.enable command and verify it is set to on

If not, enable it with the options raid.background_disk_fw_update.enable on command

Extract and copy the disk firmware to the /etc/disk_fw folder on your filer

The upgrade should start automatically in a couple of minutes

Repeat for both controllers
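Putting the pieces together, the whole console session looks roughly like this (a sketch for Data ONTAP 8.1 7-Mode; run it on each controller in turn):

```
# Verify non-disruptive shelf firmware updates are enabled
options shelf.fw.ndu.enable
options shelf.fw.ndu.enable on                      # only if it was off

# Push the new shelf firmware from /etc/shelf_fw
storage download shelf

# Verify background disk firmware updates are enabled
options raid.background_disk_fw_update.enable
options raid.background_disk_fw_update.enable on    # only if it was off

# After copying firmware into /etc/disk_fw the disk update starts by
# itself within a few minutes; verify afterwards
sysconfig -v
```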

Verifying the upgrade

Execute the sysconfig -v command to verify successful installation
And there we go, we have non-disruptively upgraded the shelf and disk firmware in our filer!

Saturday, December 8, 2012

What better way to kick off the festive season than with a storage migration (only being slightly ironic!). A customer uses their existing NetApp kit to provide block storage to vSphere hosts and CIFS shares to Windows clients, and they wanted me to do a swap-out upgrade. Migrating the vSphere data is a cinch nowadays, what with Storage vMotion and all, so I’ll just document the CIFS stuff.

First you’ll need to set up a SnapMirror relationship for the CIFS volume between the source and destination filers (no faffing around with robocopy and the like)

Make a backup copy of the /etc/cifsconfig_shares.cfg file

Execute cifs terminate on the source filer (downtime starts here)

Update (quiesce if necessary) and break the SnapMirror relationship

Take the source filer offline

Assign the source filer’s IP to the new filer

Reset the source filer’s account in Active Directory (if applicable)

Execute cifs setup on the new filer

It goes without saying that you will assign the source filer’s hostname to the destination filer, as well as join it to the AD (assuming the source filer was joined)

Execute cifs terminate on the destination filer and replace the cifsconfig_shares.cfg with the backup copy you made earlier
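The cutover itself boils down to a handful of commands. A rough sketch, with src, dst and vol_cifs standing in for your own filer and volume names:

```
# On the destination filer: final SnapMirror update, then break the mirror
snapmirror update dst:vol_cifs
snapmirror quiesce dst:vol_cifs
snapmirror break dst:vol_cifs

# On the source filer: stop serving CIFS (downtime starts here)
cifs terminate

# On the destination filer: take over the identity and re-join AD
cifs setup

# Restore the share definitions from your backup copy
cifs terminate
# ...copy the saved cifsconfig_shares.cfg into /etc, then:
cifs restart
```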

Wednesday, October 10, 2012

I’m busy with a project which involves getting two ESXi hosts hooked up to a VNX5300 configured in block mode. The order we placed with Dell specified Emulex 12000 HBAs, but Dell got creative and shipped Brocade 815s instead. The only problem was that they didn’t work when directly connected to the front-end ports on the VNX. I’m documenting the symptoms here as well, so that the next person does not have to battle for two days.

The Symptoms

When directly connecting the HBA’s to the VNX fiber ports the following events pop up in the SP event logs

This tells us that things are fine on a physical layer, but not much else is happening higher up the stack.

The Fix

First we need to upgrade the HBA firmware to version 3.1. There are various OS-specific ways to do it; the easiest is probably to download the live CD from Brocade. Since this HBA is not on the ESXi 5.1 HCL we also need to install the driver (you need at least v3.1). I include the steps for the sake of completeness

Enable SSH on your ESXi host

Use an scp client on Windows or the following command from a Linux / Mac host: scp brocade_driver_esx50_v3-1-0-0.tar root@<ip address>:/tmp

SSH into your ESXi host and navigate to the /tmp folder with cd /tmp

Execute tar xf brocade_driver_esx50_v3-1-0-0.tar

Execute ./brocade_install_esxi.sh

Wait for the installation to finish (takes about 1 – 2 mins) and reboot host once done

Now we need to configure the HBA for direct connection, or more technically, FC-AL mode

SSH into your ESXi host and navigate to /opt/brocade/bin/ by entering cd /opt/brocade/bin/

./bcu port --topology 1/0 loop

./bcu port --disable 1/0

./bcu port --enable 1/0

./bcu port --topology 2/0 loop

./bcu port --disable 2/0

./bcu port --enable 2/0

Your ESXi host should now show up as a host on the VNX where you can add it to a storage group and assign LUNs.
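For clarity, here is the full bcu sequence for both ports in one place. Note that the disable/enable bounce must be run against the same port whose topology you just changed:

```
cd /opt/brocade/bin
./bcu port --topology 1/0 loop
./bcu port --disable 1/0
./bcu port --enable 1/0
./bcu port --topology 2/0 loop
./bcu port --disable 2/0
./bcu port --enable 2/0

# Verify the topology took effect
./bcu port --list
```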

Sunday, July 29, 2012

I recently had the opportunity to architect a solution consisting of 3 vSphere 5 boxes connecting to a NetApp FAS2040. Storage connectivity would be via iSCSI. The storage network would be running off of 2 Cisco 2960G switches, soon to be replaced by stacked Cisco 3750s.

The requirements were stock standard, as high a throughput as possible, with as much redundancy as possible. This meant going active active on the iSCSI links. Here is how I did it.

NetApp FAS2040 Configuration

This little SAN has eight 1Gb Ethernet ports. Because the Cisco 2960G switches do not support multi-switch link aggregation (this is where the 3750s will come in) I had to come up with a simpler design, what NetApp terms a Single-Mode design. My design allows for:

Two active connections to each controller, thus a total of four active sessions

This image, courtesy of NetApp, explains it infinitely better than my wall of text:-)

I also configured partner takeover for all VIFs. In case of controller failure this allows the remaining controller to take over its partner’s VIFs.

Ethernet Storage Network Configuration

On the storage network I had to configure 2 critical settings:

Spanning Tree Portfast

Jumbo Frames

When connecting ESX and NetApp storage arrays to Ethernet storage networks, NetApp highly recommends configuring the Ethernet ports to which these systems connect as RSTP edge ports. This is done like so:
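As a sketch, the switch-side configuration amounts to something like the following on the 2960G (interface range and jumbo MTU value are examples; check what your platform supports, and note that changing the system MTU requires a reload):

```
! Enable jumbo frames switch-wide
system mtu jumbo 9000

! Configure the storage-facing ports as spanning-tree edge ports
interface range GigabitEthernet0/1 - 12
 spanning-tree portfast
```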

vSphere Configuration

I am in love with vSphere 5, and one of the biggest reasons for that is the fact that a lot of the configuration parameters that used to be command-line only have been moved into the GUI. Another reason is Multiple TCP Session Support for iSCSI. This feature enables round-robin load balancing using VMware native multipathing and requires a VMkernel port to be defined for each physical adapter port assigned to iSCSI traffic. That said, let’s get configuring:

Open your vCenter Server

Select an ESXi host

In the right pane, click the Configuration tab

In the Hardware box, select Networking

In the upper-right corner, click Add Networking to open the Add Network wizard

Select the VMkernel radio button and click Next

Configure the VMkernel by providing the required network information. NetApp requires separate subnets for active/active iSCSI connections, therefore we will create two VMkernels, on the 192.168.1.x and 192.168.2.x subnets respectively.

Configure each VMkernel to use a single active adapter that is not used by any other iSCSI VMkernel. Also, each VMkernel must not have any standby adapters. If using a single vSwitch, it is necessary to override the switch failover order for each VMkernel port used for iSCSI. There must be only one active vmnic, and all others should be assigned to Unused

The VMkernels created in the previous steps must be bound to the software iSCSI storage adapter. In the Hardware box for the selected ESXi server, select Storage Adapters.

In the top window, the VMkernel ports that are currently bound to the iSCSI software interface are listed

To bind a new VMkernel port, click the Add button. A list of eligible VMkernel ports is displayed. If no eligible ports are displayed, make sure that the VMkernel ports have a 1:1 mapping to active vmnics as described earlier

Select the desired VMkernel port and click OK.

Click Close to close the dialog box

At this point, the vSphere Client will recommend rescanning the iSCSI adapters. After doing this, go back into the Network Configuration tab to verify that the new VMkernel ports are shown as active, as per the image below.
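The same binding can also be done from the ESXi 5 command line, which is handy for scripting across hosts. A sketch, with vmhba33, vmk1 and vmk2 as example names (check your own adapter and VMkernel names first):

```
# List VMkernel interfaces and find the software iSCSI adapter
esxcli network ip interface list
esxcli iscsi adapter list

# Bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Verify the bindings
esxcli iscsi networkportal list
```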

Saturday, July 14, 2012

I've been working in IT for the best part of a decade, but only got into blogging and the whole social media thing in the last year or so. I really love doing what I do and sharing it with others, but in putting yourself out there you begin to realise how important ethics are. There is absolutely no difference between me and the next blogger, apart from the quality of the content one puts up and one's credibility.

Then something dawned on me, credibility is not just something that should shine through in what you put out there for the public to consume, it is even more important to apply those principles in your day to day dealings. It was about at that time when I realised that true credibility is something that is exceedingly rare in IT, in my experience.

In my universe, a very quick way to lose credibility is to shoot down and bad-mouth a product, vendor or technology you know nothing about. An example - I am in the somewhat unique situation where my job involves presales and architecting products from the two biggest storage vendors out there, namely EMC and NetApp. As if that's not enough, I also do the HP EVA portfolio. The storage field is hugely competitive, and this shows. I take my job seriously, so I make it my business to know the products I work with as well as possible.

For me it really is all about analysing the customer's technical and business needs and consequently applying the best technology for those needs. And believe me, there are enough key differentiators between the various vendors that, when combined with the customer's budgetary requirements, you will be able to determine a best-fit solution, and not the one-size-fits-all that most vendors fixate on. Unfortunately the amount of FUD and misinformation I've heard from people who really should know better is absolutely astounding. It gets to the point where the vendors are *actively* just advancing their own best interests, with the client and their interests a distant second (or maybe I'm just naive, and that is how it's supposed to work?).

As if that's not bad enough, I also do the entire lifecycle of both vSphere and Hyper-V, from pre-sales through to implementing and supporting. The amount of garbage I hear spouted is enough to fill a landfill. Admittedly most of it comes from the vSphere-supporting side of the fence, but the Microsoft partners are quickly catching up. A couple of examples I've heard: "Hyper-V does not do the equivalent of vMotion" or "the ESXi hypervisor is 50% faster than Hyper-V". Complete and utter bollocks in other words. As I said, the MS camp is quickly catching up, and with the confidence and maturity that Hyper-V 3 will bring we'll see the MS guys giving as good as they get.

That being said, there will always be a bit of bias inherent in everyone. You will develop bias through your career, naturally leaning towards the solutions that you sell and implement. That is normal and there is nothing wrong with it. By all means do challenge the opposition's claims, ask them to back up their statements, ask for facts, see through the normal sales BS and question their value propositions.

What is not right is the stuff I was talking about earlier. At the risk of repeating myself, we should all try and avoid spreading FUD intentionally. Spreading it unintentionally is only slightly better, because one should always verify claims before repeating them as gospel yourself. If NetApp, for example, tells me they scored eleventy billion marks on some benchmark whilst EMC flunked out, I will investigate. EMC does know a thing or two about storage - so there is bound to be a story behind the story. Conversely, if I hear an EMC partner starting with "No one can touch our Avamar / DataDomain dedupe / our ease of management / etc" my BS detector goes into overdrive.

The ultimate loser here is the customer who gets bombarded with noise and misinformation from all sides, whose job hinges on making the correct decision, who ultimately needs to put his trust in a vendor who is more interested in pushing a brand or technology which might or might not solve a problem and who needs to explain when a solution does not deliver.

We need to start putting the customer first in everything we do. In the short term it might not seem the easy / profitable thing to do, but in the long term you will be rewarded. Credibility is truly priceless, and once you give it up it is very, very difficult to regain.

Sunday, June 3, 2012

I recently had a perplexing problem on one of my lab servers, which took a lot of head-scratching to solve. Fortunately I had some time to burn so I managed to get to the bottom of it.

Symptom

If I moved a disk or a CSV to a specific node in my Hyper-V failover cluster it would put the CSV in redirected mode and log the following to the System log

Log Name: System
Source: Microsoft-Windows-FailoverClustering
Event ID: 5125
Task Category: Cluster Shared Volume
Level: Warning
User: SYSTEM
Description: Cluster Shared Volume '\\?\Volume{0bf0b229-9b0e-11e1-8a3a-e4115ba98410}\' ('') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared Volumes. Active filter drivers found: aksdf (Encryption)

Cause

After a fair bit of head-scratching, rolling back actions and research with Sysinternals Process Monitor, I pinpointed the problem to NetApp Single Mailbox Restore for Exchange. During installation it installs the aksdf.sys device driver. A quick Google search showed it to be a driver used for USB dongle licensing. Weird, since SMBR does not require a dongle. Anyhow, this device driver conflicts with the CSV and forces it to run in redirected mode.

Solution

The solution is simple – navigate to the HKLM\SYSTEM\CurrentControlSet\Services\aksdf registry key and set the Start value to four (4, i.e. disabled), as per the below screenshot.
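From an elevated command prompt the same change can be scripted. Note the service key is aksdf (matching the driver name), and a reboot is needed for the change to take effect:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\aksdf" /v Start /t REG_DWORD /d 4 /f
```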

This is not documented anywhere on the NetApp support site, so I will file a bug report. In mitigation, I cannot see why one would actually run SMBR on one of their production cluster nodes. Still, it should be trivial for NetApp to patch their installation routine to not install the aksdf.sys device driver.

Wednesday, May 23, 2012

NetApp Single Mailbox Restore for Exchange 2010 is, well, a snap to use when your Exchange server is running in the “NetApp way”. What is the NetApp way you ask? Well, in a nutshell, it is when you have your physical Exchange box hooked up to your SAN via iSCSI or FCP. If you are virtualised then you’ll need to present your disks via RDM (vSphere) or pass-through if you live in MS land.

What I address here is the case where you have an Exchange server virtualised with Hyper-V, with your hard drives attached as VHDs. Even though this example uses Hyper-V, the principles are also applicable to a vSphere environment.

Mounting the NetApp Snapshot

Open NetApp SnapDrive on a host connected to your Filer via either FCP or iSCSI

Navigate to the Disks node and expand the LUN containing the VHD which in turn contains your Exchange DBs.

Under Snapshot Copies, right-click the point-in-time snapshot that you wish to restore and select Connect Disk

The Connect Disk Wizard will start. Click Next

Select the appropriate snapshot and click next.

Click Next on the “Important Properties…” screen (don’t change anything here)

Set the LUN type as Dedicated and click Next

Assign a Drive Letter and click Next

Select your initiators and click Next

Select Manual on the Initiator Group Management Screen and click Next

Select the appropriate iGroup and click Next

Click Finish to complete the SnapDrive Connect Disk Wizard

Your NetApp snapshot should now be mounted as a drive accessible through Windows Explorer. If you browse to it, it should contain the VHD hosting your Exchange DBs. The next step is to mount the VHD so that it is accessible to SMBR.
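One quick way to mount the VHD is diskpart on Windows Server 2008 R2 or later. A sketch, run inside an elevated diskpart session (the path is an example; attach read-only since this is a restore copy):

```
select vdisk file="S:\ExchangeDB\exchdb01.vhd"
attach vdisk readonly
```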

Thursday, May 10, 2012

Part one of our little tutorial dealt with correctly setting up and sizing the Snapinfo LUN. Part deux will show you how to add and configure your cluster for SnapManager for Hyper-V. Let’s dive in.

Configuring a Hyper-V Failover Cluster

Open up SnapManager for Hyper-V, click the Protection node – Hosts tab and click Add Host. Enter your host name. NB! Only enter the NetBIOS name, not the FQDN***

Click Next. Answer Yes to the dialog box asking you to start the configuration wizard.

The configuration wizard will pop up

Click Next. Enter the report path location (or choose the default).

Enter the correct notification settings for your environment

Click Next. Select your Snapinfo path.

Click Next. Admire the exquisitely formatted summary.

Click Finish. The configuration wizard will now do the necessary to configure your Hyper-V failover cluster.

Once you click close you can start configuring your Hyper-V protection.

***If the Fully Qualified Domain Name (FQDN) is used, SMHV will not be able to recognize the name as a cluster. This is due to the way the Windows Failover Cluster (WFC) returns the cluster name through WMI calls. Consequently, the host will not be recognized by SMHV as a cluster and will fail to use a clustered LUN as the SnapInfo Directory Location.

Simple as this sounds I found that the process is not as simple and as well documented as it could be, especially with regards to creating the clustered SnapInfo LUN and folders. Consequently I decided to document it with (a first for this blog) screenshots.

I am going to assume that you have already hooked up your hosts to your NetApp system, and that you’ve installed SnapDrive and SnapManager for Hyper-V.

The steps, in a nutshell, are:

Create the Snapinfo LUN

Make the Snapinfo LUN a highly available clustered resource

Configure SnapManager for Hyper-V

Creating the SnapInfo LUN

Create a volume to host your Hyper-V SnapInfo LUN

Open up SnapDrive on one of your Hyper-V cluster nodes, go to the Disks node, and click Create Disk. This launches the Create Disk Wizard.

Click Next. Now highlight the volume you created in step 1, enter a LUN name and description:

Click Next. Select whether you want to manually select the igroups (collection of initiators) or whether you want the filer to do it automatically.

Click Next. Choose the option to create a new Cluster Group to host the LUN

Click Next and click Finish to exit the wizard.

To recap, the above will:

Create a LUN on the volume of your choosing

Format the LUN with the NTFS filesystem

Add the disk to your Failover Cluster as part of a Cluster group

Assign a drive letter to the disk.

***SnapInfo LUN Size Provisioning: The NetApp filer will store about 50KB of metadata per VM per snapshot. Due to the way Hyper-V snapshots work it will store two copies per snapshot, therefore if we back up 20 VMs once per day our sizing will be as follows: 20 * 50KB = 1MB * 2 = 2MB per day. NetApp allows us to store 255 snapshots per volume, so we should cater for 510 MB in total. I give it 10GB just because I can. And because thin provisioning works.
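As a quick sanity check, the sizing rule of thumb above can be worked out in a couple of lines of shell (numbers per the NetApp guidance quoted; adjust the VM count for your own environment):

```shell
vms=20                 # VMs backed up per day
kb_per_vm=50           # metadata per VM per snapshot copy
copies=2               # Hyper-V stores two copies per snapshot
max_snapshots=255      # snapshots retained per volume

mb_per_day=$(( vms * kb_per_vm * copies / 1000 ))   # 2 MB per day
total_mb=$(( mb_per_day * max_snapshots ))          # 510 MB for full retention
echo "${mb_per_day} MB/day, ${total_mb} MB total"
```

This prints "2 MB/day, 510 MB total", matching the figures above, and shows why even a generous 10GB thin-provisioned LUN is overkill.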

About Me

About This Blog

This blog serves 2 purposes. Firstly, I want to share information with other IT pros about the technologies we work with and how to solve problems we often face. I work with technologies from the desktop to the data center, Active Directory, System Center, Exchange, Hyper-V, VMware, Networking and Storage.

Less altruistically, I use my blog as a reference. There's so much to learn and remember in our field that it's impossible to keep up. By blogging, I have a notebook that I can access from anywhere. It has made me look much smarter than I probably am on many occasions.