NetApp and virtualization stuff

When it comes to editing configuration files on a remote server, the fastest way is usually to ssh into that server and use “vi” or another terminal-based text editor.

Even though I use “vi” quite often and know it well enough for what I do, I appreciate the convenience and comfort of a nice text editor on my workstation. To bridge the gap, I came up with a solution that works very well for me, and I figured it might help you too.

I use a Mac as a workstation and my editor of choice has been Visual Studio Code from Microsoft, but I think the principles are generic and you can probably tweak the following to work in your own environment.

Remote VS Code

The first piece is a plugin for VS Code called “Remote VSCode”. You will find it in the extensions section of VS Code, and it installs quickly. I just changed my configuration file to start remote editing at launch; it is completely transparent and doesn’t change the way you use VS Code.

This line goes in the JSON configuration file that you can edit from the Preferences menu under “Settings”. Be careful with the syntax: add this line in the right context, with the right commas, to keep the JSON valid.

"remote.onstartup": true

Just to show you in context, this is my whole VS Code config file, very simple:
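(The screenshot of the file didn’t survive here, but a minimal settings.json with that option enabled would look something like this; the other entries are just hypothetical examples of common settings, not necessarily what I use.)

```json
{
    "editor.fontSize": 13,
    "files.autoSave": "off",
    "remote.onstartup": true
}
```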

SSH Tunneling

You need to establish some kind of communication channel between the remote host and the local computer you are using. This is pretty easy with SSH tunnels, but nobody wants to type the tunnel syntax every time to establish the right tunnel with the right port.

I added an entry in my user’s SSH configuration file to always create a tunnel for port 52698 (the default port used by Remote VSCode). Here is the entry in my ~/.ssh/config file:

host *
RemoteForward 52698 127.0.0.1:52698

It works quite well if you’re the only one working this way, but you might want to use a different port if multiple people do the same thing on the same server; otherwise, the second person connecting will be denied the tunnel on port 52698.
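For instance, a hypothetical second user could forward a different remote port back to the same local port 52698 that Remote VSCode listens on:

```
host myserver
    RemoteForward 52699 127.0.0.1:52698
```

Then, on the server, that user would call “rmate -p 52699” (or put “port 52699” in their ~/.rmate file, as shown further down).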

rmate

The last part is a small program you need to call on the server instead of “vi”, “pico” or whatever you use. That’s the only thing that needs to be placed on your remote servers. Don’t mind the fact that it was made for the TextMate editor; it actually works with Remote VSCode.
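For reference, rmate is a single script you can drop anywhere in the PATH on the server. A sketch, assuming the script still lives at its usual location in the textmate/rmate GitHub repository:

```shell
# Download the rmate script and make it executable (path and URL are
# the commonly used ones; adjust if your distribution packages it).
sudo curl -fsSL -o /usr/local/bin/rmate \
  https://raw.githubusercontent.com/textmate/rmate/master/bin/rmate
sudo chmod +x /usr/local/bin/rmate
```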

You can now edit a file using “rmate -p 52698 somefile.txt”, but we can avoid typing the port number every time we edit a file. For this, just create a configuration file in your home directory on the remote server:

echo "port 52698" > ~/.rmate

And voilà! You can now connect to your server and type “rmate somefile.txt”: the file will open in VS Code if it is already running, and saving it will send it back to the remote server.

Up until now, pretty much all of my lab stuff was running off of my laptop. I’ve been very happy with it, but these days the products are more complex and more dependent on each other, and it becomes very hard to maintain a consistent environment that is available anytime and scalable. Plus, the MacBook Pro I’m using gets quite hot under pressure, and I no longer feel comfortable running a big workload in VMware Fusion on a laptop. I guess I reached the limit of a backpack lab.

Part I. Identify components

I did quite a bit of research on home lab equipment that wouldn’t break the bank. I was somewhat disappointed, and realized I didn’t have reasonable expectations and would have to make compromises.

In my ideal world, a home lab has a small footprint, holds a load of RAM, and is absolutely silent. Well, that’s not going to happen; the most painful part for me was that today there is no reasonably priced fanless PC that can run an i5 and host at least 32GB of RAM.

When I realized that was not happening and lowered my expectations, it became a little easier, and after some research I settled on the following configuration from Newegg:

Intel NUC BOXNUC7I5BNH ($376.99)

SAMSUNG 960 EVO M.2 250GB NVMe ($127.99)

2 x Kingston 16GB DDR4 2133MHz modules ($353.98)

For a total of $858.96. It’s more than I wanted to pay, but I’m cheap; I eventually realized you have to break the bank to get some serious gear. Of course, you can always find cheaper equipment second hand, or if the small form factor isn’t really a requirement. But if you go for 1U rackmount servers, for example, you need to think about power consumption, and in the long run it might not be worth it.

Intel NUC

I considered multiple options, and there is actually a pretty good article on the subject over here. The Supermicro seems really appealing, mostly because of the embedded management card and the high RAM configurations supported, but it’s hard to beat the NUC’s cheap $377. Go for an E200-8D or E300-8D if you think it’s worth spending the bucks, but be aware they might make much more noise because of the three tiny fans used to cool them down.

On the NUC side, the Skull Canyon NUC6i7KYK gives you another M.2 slot and some decent graphics performance if you need it.

Shuttle seems to have interesting options as well; for a few more bucks you can get a DH110, which provides 2 x 1GbE ports and lets you pick the CPU you want.

SAMSUNG 960 EVO M.2 250GB NVMe

Samsung has a solid reputation in flash media, and I wanted very fast NVMe storage; considering the relatively low price, I think it’s totally worth it. The NUC also has a regular SATA III expansion bay if I need to add more capacity with a spinning drive or a regular SSD.

I settled on 250GB to start. The idea is that if I reach the storage capacity before I reach the RAM capacity, I’ll upgrade or put another SATA III SSD in the NUC. My goal for this lab is to rely on containers as much as I can, so the storage footprint will be smaller than with full VMs.

But if your intent is to host regular VMs, you might want to go for the 500GB option.

Also, consider that there is a sweet spot to find between one powerful lab server and multiple smaller ones. Workloads are very mobile today, and ideally you probably want multiple physical servers for some resiliency and flexibility. For budget reasons I didn’t do it, but I would have preferred two NUCs with 16GB RAM and 250GB storage each instead of one big 32GB box.

2 x Kingston 16GB DDR4 2133MHz modules

RAM, RAM and RAM… What else is it all about anyway? Thankfully, since the flash era, storage is no longer the contention point in home lab equipment. In my opinion CPU doesn’t matter much either; it all depends on the kind of applications you will host, of course, but RAM is what we need.

Right off the bat I will be running the hypervisor itself, a few containers, and Data ONTAP simulators; even 32GB can go fast, and I think 16GB would have been limiting. From what I saw, you can start with one 16GB module and buy another one later, but that probably depends on the compute platform, motherboard and CPU.

With 32GB you should be able to run a decent environment without having to shut down VMs all the time.

SanDisk 32GB Ultra Fit Flash Drive

Not the most critical component, but I decided to go with a thumb drive installation of ESXi for this lab. That way I can dedicate the NVMe drive to the datastore and avoid multiple partitions; for $12 it’s likely worth it. Chances are I won’t need those USB ports for anything else anyway.

Conclusion

Stay tuned! I’m waiting for the components and I will document my progress with the installation and setup!

It is a common request from customers who want to know which clients are most active on a system. Though not hard to obtain, there is no obvious menu that lets you do that, so here it is:

First of all, you need to be in advanced mode and start statistics collection for the “client” object:

netapptest2::*> set advanced
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
netapptest2::*> statistics start -object client
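Once the collection has run for a little while, you can display the counters gathered for that object. A sketch of the follow-up command; the exact counter names and output layout vary between Data ONTAP versions, so check what your system exposes:

```
netapptest2::*> statistics show -object client
```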

NetApp virtual appliances like OnCommand Unified Manager 6 or OnCommand Performance Manager 1 are normally deployed on VMware ESX hypervisors. They can work on other hypervisors as well, but that requires additional steps to work around an error that occurs when you try to set the IP address to something other than DHCP. The purpose of this article is to explain how to make IP configuration available when you set up the virtual appliance in a lab-on-laptop environment, or anywhere else that is not an ESX server.

Hanging around on the NetApp communities, I came across a post from a freshly NetApp-certified person wondering where to go next and how to gain experience with NetApp storage. Long story short (since this was not exactly what Fabian was looking for), I started to think about good learning tricks and how my personal experience might be worth sharing.

Part of my job is to analyze various data from my customers’ systems, and a dozen times a day I have to parse and transform regular output from Data ONTAP systems into a more organized format, suitable for an Excel spreadsheet for example.
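As an illustration, here is a minimal sketch in Python of that kind of transformation: turning a columnar CLI table into CSV that Excel can open. The sample text and column layout are made up for the example; real command output varies by command and version.

```python
import csv
import io

# Hypothetical sample of columnar Data ONTAP CLI output.
raw = """\
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
vs1       vol1         aggr1        online     RW         20GB    18.5GB    7%
vs1       vol2         aggr2        online     RW         10GB     9.1GB    9%
"""

def ontap_table_to_rows(text):
    """Split a columnar CLI table into a header list and data rows,
    skipping the dashed separator line. Assumes fields contain no spaces."""
    lines = [l for l in text.splitlines() if l.strip()]
    header = lines[0].split()
    rows = [l.split() for l in lines[2:]]
    return header, rows

header, rows = ontap_table_to_rows(raw)

# Write the parsed table out as CSV, ready for a spreadsheet.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)
writer.writerows(rows)
print(buf.getvalue())
```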

Of course, this is an absolutely unsupported version of System Manager. The reason I decided to make it public is that it does not use any private NetApp material, and is only based on the Linux version of System Manager, which is available on the NetApp Support Site.

Note that if you run it without installing the JDK, Mac OS X will automatically propose to install Java. Don’t do this, as you won’t get a JDK that way; you must download it from the Oracle web site and install it manually.

CentOS is my favorite Linux distribution because it is the closest to RedHat you can find (for a reason). And I like RedHat for the simple reason that it is the most widely supported Linux distribution for enterprise applications. I know both RedHat and SuSE are usually supported, but SuSE and I were never a love story, and I will spare you the history of that little drama this time.

So, the idea is not to run CentOS in a production environment (unless you really have the heart of a warrior and do not care about being supported by a rock-solid vendor), but rather to run your OnCommand Core lab on a free OS.

[EDIT] Please use caution when setting up your environment according to this article. In some cases, especially when using a BIND DNS server with non-default parameters, you may end up with requests going to data LIFs even if they are on the storage network and you set “-listen-for-dns-query false” on them.

I got an interesting question from a customer about the way DNS load balancing is served in a cluster.

For network topology reasons, they needed to serve DNS requests on a network different from the data network used by the clients to access storage.

The problem is that Data ONTAP only listens to DNS requests on data LIFs that have been configured for DNS load balancing. The obvious issue is that if you configure an additional LIF on the management network, the load balancer will start serving this IP to the clients, which might not be the optimal path, or might not even be routed at all.
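For reference, the “-listen-for-dns-query” knob mentioned in the edit above is set per LIF. A sketch, with hypothetical cluster, vserver and LIF names:

```
cluster1::> network interface modify -vserver vs1 -lif mgmt_lif1 -listen-for-dns-query false
```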