Tag: NetApp

To protect the Storage Virtual Machine (SVM) namespace root volume, you can create a load-sharing mirror volume on every node in the cluster, including the node on which the root volume is located. Then you create a mirror relationship to each load-sharing mirror volume and initialize the set of load-sharing mirror volumes. As you add new nodes to the cluster, you can add a new load-sharing mirror to the existing set.

Create the destination load-sharing mirror volume by using the New-NcVol cmdlet with the -Type parameter set to DP (data-protection volume). The destination volume that you create must be the same size as or greater than the SVM root volume.
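For example, a minimal sketch using the DataONTAP PowerShell Toolkit; the SVM, aggregate, and volume names are placeholders, and parameter names may vary slightly by Toolkit version:

```powershell
# Assumes the DataONTAP PowerShell Toolkit is loaded and a cluster
# connection was opened with Connect-NcController.
# svm1, aggr1_node1, and svm1_root_m1 are placeholder names.
New-NcVol -Name svm1_root_m1 -Aggregate aggr1_node1 -Type dp `
    -Size 1g -VserverContext svm1
```

Repeat this on an aggregate local to each node so that every node ends up with its own LS mirror destination.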

Use the Invoke-NcSnapmirrorInitialize cmdlet, specifying the DestinationVolume and DestinationVserver parameters, to perform the initial update of the SnapMirror relationship.
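Assuming the relationship itself is created with New-NcSnapmirror (all endpoint names below are placeholders), the two steps might look like this:

```powershell
# Create the load-sharing relationship, then perform the initial transfer.
# svm1 and the volume names are placeholders.
New-NcSnapmirror -SourceVserver svm1 -SourceVolume svm1_root `
    -DestinationVserver svm1 -DestinationVolume svm1_root_m1 -Type ls

Invoke-NcSnapmirrorInitialize -DestinationVserver svm1 `
    -DestinationVolume svm1_root_m1
```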

Note: Do not use the Invoke-NcSnapmirrorLsInitialize cmdlet. The Invoke-NcSnapmirrorLsInitialize cmdlet is for initializing volumes for an entire set of load-sharing mirrors, not for initializing an individual volume.

Upon job completion, update the LS set by using the Invoke-NcSnapmirrorLsUpdate cmdlet specifying the source endpoint to update destination volumes of the set of load-sharing mirrors. The cmdlet makes destination volumes in the group of load-sharing mirrors up-to-date mirrors of the source volume. Separate SnapMirror transfers are performed from the source volume to each of the up-to-date destination volumes in the set of load-sharing mirrors.
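A minimal sketch with placeholder names; only the source endpoint is specified because the cmdlet fans out a transfer to every destination in the set:

```powershell
# Update every LS mirror destination from the root volume in one call.
Invoke-NcSnapmirrorLsUpdate -SourceVserver svm1 -SourceVolume svm1_root
```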

Use the Get-NcSnapmirror cmdlet once more to confirm the health of the relationship.
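Something along these lines; the property names come from the Toolkit's SnapMirror objects and may differ slightly between versions:

```powershell
# Show only load-sharing relationships and their health/state.
Get-NcSnapmirror | Where-Object { $_.RelationshipType -eq 'load_sharing' } |
    Select-Object SourceLocation, DestinationLocation, MirrorState, IsHealthy
```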

Caught this after upgrading from 8.3.2P2 Cluster-Mode to 8.3.2P12 Cluster-Mode. We created a new CIFS share and found we could not apply NTFS ACL permissions to the share because it was missing the security tab.

Old shares looked and operated fine.

It turned out the culprit was quiesced LS mirrors. Here’s how to fix it:

snapmirror show
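Filtering the clustershell output to load-sharing relationships makes the quiesced state easy to spot:

```shell
snapmirror show -type LS -fields state,status
```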

From here we can confirm that our LS mirrors are in a quiesced state. As per NetApp doc “FA266”:

To a cluster, a volume is a folder. When you create and mount a volume to /, it appears as a folder to the cluster and clients.

When a read or write request comes through that path into the N-blade of a node, the N-blade first determines whether there are any LS mirrors of the volume it needs to access. If there are no LS mirrors of that volume, the request is routed to the read/write (R/W) volume. If there are LS mirrors of the volume, preference is given to an LS mirror on the same node as the N-blade that fielded the request. If there is no LS mirror on that node, an up-to-date LS mirror on another node is chosen. This is why the newly created volumes are invisible: until the LS mirror set is updated, all requests go to an LS mirror destination volume, which is read-only.

Additionally, if we browse the admin share (c$), we do not see our new share.

snapmirror resume
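Resuming takes the destination endpoint of each quiesced mirror; the path below is a placeholder:

```shell
snapmirror resume -destination-path svm1:rootvol_m1
```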

Because there is an LS mirror set in place that was quiesced, metadata for the new share was not propagated to the root volume. Once the mirror set is resumed and updated, the new share's metadata replicates and access is restored.

After resuming the mirror, you can either wait for it to update and sync on its set schedule, or you can update the LS set manually by using “snapmirror update-ls-set”.

snapmirror update-ls-set
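The full form takes only the source endpoint (placeholder path shown); one transfer is then run to each destination in the set:

```shell
snapmirror update-ls-set -source-path svm1:rootvol
```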

We can confirm that the LS mirror set is now in sync because the security tab appears on the new share.

And it now appears when we browse the admin share.

One last thing to confirm is that the LS set is being updated on a schedule.
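One way to check, and to attach a schedule if none is set; the hourly schedule and path below are placeholders:

```shell
snapmirror show -type LS -fields schedule
snapmirror modify -destination-path svm1:rootvol_m1 -schedule hourly
```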

To enable the Varonis Metadata Framework to connect to a NetApp file server operating in cluster mode, you must configure an FPolicy for it.

This PowerShell script, which I based on Technical Report TR-4429 (referenced below for further reading), automates:

Creating the FPolicy Event Object

Creating the FPolicy External Engine

Creating the FPolicy Object

Creating the FPolicy Scope Object

Configuring the Login Method for DatAdvantage

Configuring the Varonis service account as CIFS superuser (To enable the Management Console to correctly detect NetApp cluster shares, the Varonis service account must be a member of the Domain Administrators group, or added as a CIFS superuser.)
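As a rough outline of the FPolicy portion (not the script itself), the objects can be created with Toolkit cmdlets along these lines. All names, addresses, and parameter values below are placeholders, and the exact parameter sets should be checked against Get-Help and TR-4429 before use:

```powershell
$svm = 'svm1'   # placeholder SVM name

# 1. FPolicy event: which CIFS operations to report on.
New-NcFpolicyEvent -Name varonis_cifs_event -Protocol cifs `
    -FileOperation create,delete,rename,write -VserverContext $svm

# 2. External engine: where the Varonis collector listens (placeholder IP/port).
New-NcFpolicyExternalEngine -Name varonis_engine -PrimaryServer 10.0.0.50 `
    -PortNumber 2002 -ExternalEngineType asynchronous -VserverContext $svm

# 3. Policy tying the event to the engine.
New-NcFpolicyPolicy -Name varonis_policy -Event varonis_cifs_event `
    -EngineName varonis_engine -VserverContext $svm

# 4. Scope: which volumes/shares the policy covers.
New-NcFpolicyScope -PolicyName varonis_policy -VolumesToInclude '*' `
    -VserverContext $svm

# 5. Enable the policy with a sequence number.
Enable-NcFpolicyPolicy -Name varonis_policy -SequenceNumber 1 `
    -VserverContext $svm
```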

Network File System (NFS) users have access problems when they belong to more than 16 groups.

Users have access problems if they are in 17 or more groups.

As noted in the KB:

Although the filer currently supports up to 32 UNIX/NFS groups, some NFS clients only support 16 groups, which means an NFS user can only belong to 16 groups while using NFS… While there are hacks for allowing a Unix user to be a part of more than 16 netgroups, per RFC 5531 this is a set limit and cannot be modified. So it is likely that a client vendor would not support changes to the client allowing more than 16 netgroups. ONTAP limits to 16 as well, following the RFC 5531 standard.

Configure the number of group IDs allowed for NFS users

By default, Data ONTAP supports up to 32 group IDs when handling NFS user credentials using Kerberos (RPCSEC_GSS) authentication. When using AUTH_SYS authentication, the default maximum number of group IDs is 16, as defined in RFC 5531. You can increase the maximum up to 1,024 if you have users who are members of more than the default number of groups.
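On clustered Data ONTAP, the AUTH_SYS limit can be raised by having the cluster resolve extended groups itself; the SVM name and limit below are placeholders:

```shell
vserver nfs modify -vserver svm1 -auth-sys-extended-groups enabled -extended-groups-limit 1024
```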