Unlike EMC’s CLI, there’s no executable to install – it’s all on the controllers. If you’re using Windows, PuTTY is still a good choice as an ssh client. Otherwise the macOS ssh client does a reasonable job too. When you first set up your FlashArray, a virtual IP (VIP) was configured. It’s easiest to connect to the VIP, and Purity then directs your session to whichever controller is the current primary controller. Note that you can also connect via a controller’s physical IP address if that’s how you want to do things.

The first step is to log in to the array as pureuser, with the password that you’ve definitely changed from the default one.

If you want to get some additional help with a command, you can run “command -h” (or “command --help”).

pureuser@purearray> purevol -h
usage: purevol [-h]
               {add,connect,copy,create,destroy,disconnect,eradicate,list,listobj,monitor,recover,remove,rename,setattr,snap,truncate}
               ...

positional arguments:
  {add,connect,copy,create,destroy,disconnect,eradicate,list,listobj,monitor,recover,remove,rename,setattr,snap,truncate}
    add                 add volumes to protection groups
    connect             connect one or more volumes to a host
    copy                copy a volume or snapshot to one or more volumes
    create              create one or more volumes
    destroy             destroy one or more volumes or snapshots
    disconnect          disconnect one or more volumes from a host
    eradicate           eradicate one or more volumes or snapshots
    list                display information about volumes or snapshots
    listobj             list objects associated with one or more volumes
    monitor             display I/O performance information
    recover             recover one or more destroyed volumes or snapshots
    remove              remove volumes from protection groups
    rename              rename a volume or snapshot
    setattr             set volume attributes (increase size)
    snap                take snapshots of one or more volumes
    truncate            truncate one or more volumes (reduce size)

optional arguments:
  -h, --help            show this help message and exit

There’s also a facility to access the man page for commands. Just run “pureman command” to access it.
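For example, to read the manual page for the purevol command:

pureuser@purearray> pureman purevol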

Want to see how much capacity there is on the array? Run “purearray list --space”.
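It looks like this (I’ve omitted the output here, and the exact columns vary a little between Purity releases):

pureuser@purearray> purearray list --space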

Note that a snapshot is available for 24 hours to roll back if required. This is good if you’ve shrunk a volume to be smaller than the data on it and have consequently munted the filesystem.
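As a sketch, shrinking a volume looks like the following (the volume name and size here are made up, and I’d confirm the exact flag with “purevol truncate -h” on your Purity release before running it):

pureuser@purearray> purevol truncate --size 500G vol01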

When you destroy a volume it immediately becomes unavailable to hosts, but remains on the array for 24 hours. Note that you’ll need to disconnect the volume from any hosts connected to it first.

purevol disconnect [volume] --host [hostname]
purevol destroy [volume]

If you’re running short of capacity, or are just curious about when a deleted volume will disappear, use the following command.

purevol list --pending

If you need the capacity back immediately, the deleted volume can be eradicated with the following command.

purevol eradicate [volume]

Further Reading

The Pure CLI is obviously not a new thing, and plenty of bright folks have already done a few articles about how you can use it as part of a provisioning workflow. This one from Chadd Kenney is a little old now but still demonstrates how you can bring it all together to do something pretty useful. You can obviously extend that to do some pretty interesting stuff, and there’s solid parity between the GUI and CLI in the Purity environment.

It seems like a small thing, but the fact that there’s no need to install an executable is a big thing in my book. Array vendors (and infrastructure vendors in general) insisting on installing some shell extension or command environment is a pain in the arse, and should be seen as an act of hostility akin to requiring Java to complete simple administration tasks. The sooner we get everyone working with either HTML5 or simple ssh access the better. In any case, I hope this was a useful introduction to the Purity CLI. Check out the Administration Guide for more information.

For some reason, I keep persisting with my QNAP TS-639 II, despite the fact that every time something goes wrong with it I spend hours trying to revive it. In any case, I recently had an issue with a disk showing SMART warnings. I figured it would be a good idea to replace it before it became a big problem. I had some disks on the shelf from the last upgrade. When I popped one in, however, it sent me this e-mail.

Server Name: qnap639
IP Address: 192.168.0.110
Date/Time: 28/05/2015 06:27:00
Level: Warning
The firmware versions of the system built-in flash (4.1.3 Build 20150408) and the hard drive (4.1.2 Build 20150126) are not consistent. It is recommended to update the firmware again for higher system stability.

Not such a great result. I ignored the warning and manually rebuilt the /dev/md0 device. When I rebooted, however, I still had the warning. And a missing disk from the md0 device (but that’s a story for later). To get around this problem, it is recommended that you reinstall the array firmware via the shell. I took my instructions from here. In short, you copy the image file to a share, copy that to an update directory, run a script, and reboot. It fixed my problem as it relates to that warning, but I’m still having issues getting a drive to join the RAID device. I’m currently clearing the array again and will put in a new drive next week. Here’s what it looks like when you upgrade the firmware this way.
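In rough terms, the process looks like this (the paths and image filename below are from memory and will differ for your model and firmware version, so verify them against the instructions linked above before running anything):

[~] # cp TS-639_20150408-4.1.3.img /share/Public/
[~] # cp /share/Public/TS-639_20150408-4.1.3.img /mnt/HDA_ROOT/update/
[~] # /etc/init.d/update.sh /mnt/HDA_ROOT/update/TS-639_20150408-4.1.3.img
[~] # reboot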

CompCU.jar is the Compellent Command Utility. You can download it from Compellent’s support site (registration required). This is a basic article that demonstrates how to get started.

The first thing you’ll want to do is create an authentication file that you can re-use, similar to what you do with EMC’s naviseccli tool. The file I specify is saved in the directory I’m working from, and the Storage Center IP is the cluster IP, not the IP address of the controllers.
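Creating the saved settings file looks something like this (the Storage Center IP, username and filename are examples from my environment; run “java -jar CompCU.jar -h” to confirm the exact switches for your version):

c:\>java -jar CompCU.jar -host 192.168.0.50 -user Admin -password MyPassword -save default.cli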

Now you can run commands without having to input credentials each time. I like to output to a text file, although you’ll notice that CompCU also dumps output on the console at the same time. The “system show” command provides a brief summary of the system configuration.
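With the saved settings file in place, the invocation looks something like the following (again, the switch names here are as I remember them, so check the built-in help if your version complains):

c:\>java -jar CompCU.jar -defaultname default.cli -c "system show" -file system_show.txt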

Note that I get Java errors every time I run this command. I think that’s related to an expired certificate, but I need to research that further. Another useful command is “storagetype show”. Here’s one I prepared earlier.

There’s a bunch of useful things you can do with CompCU, particularly when it comes to creating volumes and allocating them to hosts, for example. I’ll cover these in the next little while. In the meantime, I hope this was a useful introduction to CompCU.

I was doing an Exchange 2010 storage health check recently and needed some information about some volumes presented to the environment from our SVC. My colleague gave me some commands to get the information I needed. I also found a useful website with pretty much identical commands listed. Check out the “SAN Admin Newbie — My notes on Useful Commands” blog, the post I looked at was “Commands to look around the SVC -> svcinfo”, located here. This is basic stuff for the seasoned SVC admin, but I’m really new to it, so I’m putting it up here.

The first order of business was to identify the vdisks that were mapped to one of the hosts I was looking at. To do this I used lshostvdiskmap. The lshostvdiskmap command displays a list of volumes that are mapped to a given host. These are the volumes that are recognized by the specified host. More info can be found here.
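It looks like this (the host name is an example from my environment):

IBM_2145:svc_cluster:admin> svcinfo lshostvdiskmap exchange01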

So now I know the vdisks, but what if I want to check the capacity or find out the IO Group or MDisk name? I can use lsvdisk to get the job done. The lsvdisk command displays a concise list or a detailed view of volumes that are recognized by the clustered system. More information on this command can be found here.
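For the detailed view of a single volume, pass the vdisk name (this one’s made up):

IBM_2145:svc_cluster:admin> svcinfo lsvdisk EXCH_DB_01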

Great, so what about the MDisk group that that vdisk sits on? Let’s use lsmdiskgrp for that one. The lsmdiskgrp command returns a concise list or a detailed view of MDisk groups visible to the cluster. More information can be found here.
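Same pattern again, with an example group name:

IBM_2145:svc_cluster:admin> svcinfo lsmdiskgrp MDG_01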

Now let’s find out all the vdisks residing on a given MDisk group. In this example I’ve filtered by mdisk_grp_name as well as adding the -delim , option so that I can dump the output into a csv file and work with it in a spreadsheet application.
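Here’s the shape of it (MDisk group name is an example):

IBM_2145:svc_cluster:admin> svcinfo lsvdisk -filtervalue mdisk_grp_name=MDG_01 -delim ,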

I did a post a little while ago (you can see it here) that covered using mdadm to repair a munted RAID config on a QNAP NAS. So I popped another disk recently, and took the opportunity to get some proper output. Ideally you’ll want to use the web interface on the QNAP to do this type of thing but sometimes it no worky. So here you go.
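For reference, the general shape of the repair is below (the device names are examples only — check dmesg and /proc/mdstat to work out which member actually dropped out of your array before adding anything):

[~] # mdadm --detail /dev/md0
[~] # mdadm /dev/md0 --add /dev/sdf3
[~] # cat /proc/mdstat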

There are any number of reasons why you mightn’t want to store your CLARiiON credentials in an encrypted file in your home directory. I can’t think of any. This post will cover the basics of setting yourself up with a security file that means you won’t have to keep entering your username, scope and password every time you want to use naviseccli.

-AddUserSecurity
This is the command to add user security information to the security file on this host. You need to use the -scope switch to add scope information to the security file. You can also use the -password switch, or enter your password into the password prompt, to supply the required password information to the security file. If you don’t specify the -user switch, naviseccli assumes that the currently logged in user is the username you wish to use. The -secfilepath switch is also optional with this command. Note that the -secfilepath switch lets you specify an alternative location (instead of your default home directory) for the security file on this host. Keep in mind that you will then need to use the -secfilepath switch in each subsequent command you issue. You might find this tiresome.

-RemoveUserSecurity
This blats any user security information about the current user from the security file on this host.

-scope 0|1|2
Specifies whether the user account on the storage system you want to log in to is global (0), local (1), or LDAP (2). A global account is, as the name implies, global for the Navisphere / Unisphere domain you’re working in. A local account is effective on only the storage systems for which the administrator creates the account. LDAP maps the username/password entries to an external LDAP or active directory server for authentication.

-secfilepath filepath
Stores the security file in a specified location. This is useful if for some reason you don’t want the security file stored in your default home directory.

Enough talk. Here’s an example of how to setup the security file.

c:\>naviseccli -AddUserSecurity -Scope 0 -user san_admin

Enter password:

Assuming that the user san_admin is valid for the domain, and assuming that I’ve entered the password correctly, I can now run commands against any array in the domain without entering the username, password or scope. When you have a long password this can lead to some real time savings :)
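With the security file in place, commands just need the address of a storage processor. For example, getagent is a handy low-impact way to test it (substitute your own SP’s address):

c:\>naviseccli -h 192.168.0.100 getagent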

