Thoughts from your friendly neighborhood technologist.

Author: Matt Lestock

Just a quick note to everyone out there: even though the ixgbe 4.5.1 driver appears in the compatibility list, the link takes you to the “Tools and Drivers” page, where there is no download. Thanks VMware…

So, through some URL hacking, I was able to find the download page and am posting it here so I can save you some precious moments of rabbit-hole traversing…

What’s all this about?

Having a home lab is great, as I’m sure most of you are aware. But when you want to start running things that are more or less “production”, you often need to start tracking historical data. My quest to graph statistics from my homelab began with a look at some of the more popular options for infrastructure visualization, like the 100-monitor license for PRTG, and LibreNMS.

While these would work in a majority of lab scenarios, I operate a few things that required a custom graphing and data collection solution.

You see, I run a few community game servers and a Mumble server, share a Plex system with friends and family, and also run all of the automation components that go along with it (CouchPotato, Sonarr, NZBGet, qBittorrent). While PRTG and LibreNMS are great for SNMP polling and logging, the systems I’m looking to monitor don’t have SNMP on them, so it was time to explore other options.

For reference, here’s a look at what I’m running for my always-on services.

Enough Hardware, what about the Software!?

Now that we’ve got the hardware out of the way, what about the software? Originally I had planned on using Graphite, which includes Carbon, Whisper, and the web frontend. However, it’s a fairly long pain-in-the-ass process to set up, so I opted for an alternative: Grafana and InfluxDB.

Unlike a traditional relational database, InfluxDB is a TSDB, or Time Series Database. In our example, we’re using InfluxDB to store three things: the measurement name, the measurement value, and the time at which that measurement happened. Let’s have a look at an example.

homelab.servers.nanoserver.cpu.0.processortime, 8, 1454794445

Simple, right? The first value is our measurement name, which can be anything we want. There’s no need to prepopulate the database with these measurement names; anything fed into InfluxDB with the correct structure will create the measurement name and associate a measurement with it. Bitchin’, I know! The second value is the measurement itself, in this case 8% utilization of CPU core 0. The third and final value is the time this measurement was taken, in UNIX time.
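As a hedged illustration (the commas above are just for readability; on the wire, Graphite’s plaintext format separates the three fields with spaces), here’s how such a datapoint could be built in a shell. The hostname is a placeholder:

```shell
# Build one Graphite-plaintext datapoint: <metric.path> <value> <unix-time>
# (space-separated, one line per datapoint)
METRIC="homelab.servers.nanoserver.cpu.0.processortime"
VALUE=8
TS=$(date +%s)                 # current UNIX time
LINE="$METRIC $VALUE $TS"
echo "$LINE"
# With InfluxDB's Graphite listener enabled (see the config step later in
# this post), the line can be shipped over TCP port 2003, for example:
#   echo "$LINE" | nc -q 1 INFLUXDB_IP 2003
```

Each new line sent this way becomes another point in the same measurement, which is how the historical data accumulates.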

Once we’ve got data in InfluxDB, we need a way to easily digest it. This is where Grafana comes in. Luckily, Grafana (as of 2.1.0) supports InfluxDB 0.8 and InfluxDB 0.9 out of the box. Note: there are two profiles because the query language changed between InfluxDB 0.8 and 0.9.

Now that we’ve gotten the what out of the way, it’s time to dive into the how. Or put more simply, how are we going to actually measure datapoints and get them into InfluxDB for Grafana to ya know, graph?

In my case, my “Nanoserver” runs Windows 7 on the bare metal, with a few virtual machines under VMware Workstation 12. One of those machines is my general-purpose Ubuntu 14 box, where I run utilities that don’t have native Windows support. This is where we’ll be installing InfluxDB, Grafana, and an NMP stack (Nginx, MariaDB, PHP). Now, I mentioned previously that the Nanoserver runs Windows, and I certainly want to retrieve and report performance statistics from it, but I couldn’t find a script that would easily get performance data out of Windows and into InfluxDB. I did, however, find Matt Hodge’s excellent PowerShell script for getting performance data from Windows and inserting it into Graphite. “BUT MATT! I thought you said we were working with InfluxDB?!” And you’re right. It turns out one of the great things about InfluxDB is its Carbon protocol listener: while disabled by default, it’s easily activated with a config change, and it lets InfluxDB receive data originally meant for a Graphite/Carbon/Whisper stack!

Alright I get it, InfluxDB and Grafana are awesome, but how do we set them up?

Ok ok. I promised you that this would be a how to and if you’ve made it this far, it’s time to actually show you how we’re going to accomplish this stuff. Ready? Here we go.

First things first: this tutorial assumes we’re installing these components on Ubuntu 14, so that’s our first prerequisite.

Head to a web browser and enter the IP of your InfluxDB server followed by port 8083 (http://INFLUXDB_IP:8083), then enter the following command in the query textbox.

CREATE USER graphite WITH PASSWORD 'graphite' WITH ALL PRIVILEGES
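If you’d rather skip the web admin UI, the same statement can be submitted through InfluxDB’s HTTP API. This is only a sketch under the defaults assumed in this post (localhost, HTTP API on port 8086); the curl line is commented out so you can point it at your own server:

```shell
# The user-creation statement, ready to submit to the /query endpoint
CREATE_Q="CREATE USER graphite WITH PASSWORD 'graphite' WITH ALL PRIVILEGES"
echo "$CREATE_Q"
# On a host that can reach your InfluxDB server:
#   curl -sG 'http://localhost:8086/query' --data-urlencode "q=$CREATE_Q"
```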

Now we need to edit the InfluxDB config file to enable user authentication.

sudo nano /etc/influxdb/influxdb.conf

First, we’ll enable authentication for InfluxDB by setting “auth-enabled = true” under [http], from its default of false.
Then we’ll enable the Graphite protocol listener by setting “enabled = true” under the [[graphite]] config section, from its default of false.
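For reference, here’s roughly what those two sections of /etc/influxdb/influxdb.conf look like after the edits. This is a trimmed sketch, not the full file; the bind-address and database values shown are the shipped defaults, so confirm them against your own config:

```
[http]
  auth-enabled = true       # was false

[[graphite]]
  enabled = true            # was false
  bind-address = ":2003"    # default Carbon/Graphite listener port
  database = "graphite"     # datapoints received here land in this database
```

Remember to restart the InfluxDB service after saving so the changes take effect.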

Note: InfluxDB by default does not require user authentication and will happily let anyone write measurement data to it. However, the Grafana data source interface requires user credentials, which is why we enabled basic user authentication for the graphite database.

Time to add the InfluxDB Data Source

Click on the Data Sources link on the left menu, then select add new from the top menu.
Give your Data Source a name; in this case we’re going with InfluxDB Graphite.
Select InfluxDB 0.9.x (even if version 0.10.x was installed) as your Data Source Type, and optionally select the Default checkbox to have this Data Source selected each time you add a new panel.
In the Http settings section, enter your InfluxDB IP address followed by port 8086 (in this case, because InfluxDB and Grafana are on the same server, go ahead and enter http://localhost:8086). Your access mode should be proxy.
Now, in the InfluxDB Details section, enter the database (graphite) followed by our username and password (graphite / graphite).
Click Add, then click Test Connection. You should get a green box with “Success: Data source is working”.
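If Test Connection complains, it’s worth sanity-checking the same credentials outside Grafana. Here’s a hedged sketch using the defaults from this post (user/pass graphite/graphite, HTTP API on port 8086); the curl line is commented out so it only runs when you have a live server to aim it at:

```shell
# Settings mirroring the Grafana data source form
INFLUX_URL="http://localhost:8086/query"
INFLUX_USER="graphite"
INFLUX_PASS="graphite"
echo "checking $INFLUX_URL as $INFLUX_USER"
# With InfluxDB running, this returns JSON listing your databases,
# which should include "graphite":
#   curl -sG -u "$INFLUX_USER:$INFLUX_PASS" "$INFLUX_URL" \
#     --data-urlencode 'q=SHOW DATABASES'
```

If the curl works but Grafana still fails, recheck the access mode (proxy) and that port 8086, not the 8083 admin UI port, is in the Http settings.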

That’s it! Simple right?

You deserve a medal if you’ve made it this far, but in the next post we’ll actually show you how to get data into InfluxDB from various different sources, and then the all important step of actually graphing it! So stay tuned!

So while I’ve been writing a blog post on getting Grafana and InfluxDB setup to monitor your homelab, VMware went ahead and announced the latest innovations for their End User Computing solution.

Instead of boring you with a repeat of the announcement, I’m going to briefly highlight what I think are some of the biggest announcements.

Instant Clones WUT?!

Here’s a quick intro into what Project Meteor (Just in Time Clones) is all about.

At a basic level, it’s an in-memory clone of a powered-on reference machine (quiesced and cloned as required): the virtual machine you would otherwise use as your desktop pool’s golden image. When a new client desktop instance is requested, the newly created desktop shares the parent’s memory and disk for reads, and from that point on it lives on its own. At logoff, the client desktop is destroyed. Not refreshed, just gone.

Desktop management

Desktop management and maintenance windows are the hardest and most time-consuming parts of any virtual desktop environment: the virtual desktop needs updating, and software needs to be installed.

One of the biggest pains in the ass in dealing with “Gold Reference Images” is virus scanners and Windows updates. We all know the drill: they never stop, and you have to weigh the man-hours required to keep these things current against your business risk requirements. Not to mention that there’s never an ideal time to perform a pool recompose. With JIT desktop clones, this problem is eliminated entirely.

The “Gold Reference Image” can be updated at any time, because the desktop your user has just requested is created from a living, breathing virtual machine, not some copy that has been lying dormant for a week. So during the day you can let your virus scanner server and WSUS server update the software on your reference image, and when users log on to their machines, they get an always up-to-date desktop.

High performance

Before JIT desktops, our Composer-deployed pools provisioned desktops ahead of the user’s request, often leading to boot storms and requiring architects to design storage subsystems around these brief but critical performance windows. With JIT desktops, an elastic pool consists of zero or a minuscule number of pre-provisioned desktops, meaning our days of deploying hundreds of desktops upfront are gone, and likewise for our boot storms. Additionally, because reads come from a single parent disk, our login storms are markedly reduced as well!

Enhanced end user experience

As mentioned before, the end-user experience is enhanced because we no longer have to interrupt sessions for recomposes or application updates. When IT or the business decides it’s time to deploy an update company-wide, we don’t have to worry about updating “the VDI users”; everyone is on equal footing now!

JIT desktops are now a reality with the instant cloning functionality of Horizon 7, App Volumes and VMware UEM.

v1.0 Caveats

There are a few limitations with the v1.0 release. Only floating desktops are supported; no dedicated desktops as of now, but v2 should have them. There’s also no RDSH or published-application support. Scale is up to 2,000 desktops with a single vCenter and a single VLAN only.

No Nvidia GRID, and there are limited vSGA options.

Storage options are limited to VSAN or VMFS datastores.

Blast Extreme Protocol

One of the other big announcements, next to instant clones, was Blast Extreme. When VMware first announced Blast, it was a nice way to access a desktop (or, with recent releases, even applications) through a browser in a pinch, but due to feature disparity it was never widely deployed at any of my customer installs.

Recent rumors hinted at VMware giving Blast a bigger role in a new version of Horizon. Looks like that day is today.

Blast Extreme brings a lot of features that were missing in the previous revisions.

Grid optimized

Better battery life

Built for the Cloud

Feature parity

Probably the biggest single thing about the Blast Extreme protocol is that it supposedly has complete feature parity with PCoIP.

Port sharing will ensure that Blast is ready on day 1 for existing installs by being the preferred protocol and failing back to PCoIP if required.

Honestly, it wouldn’t shock me to see VMware deprecate PCoIP entirely in the not-too-distant future. PCoIP has always needed some love when it comes to tuning performance under bandwidth constraints, and with Blast based on H.264, this is likely the final nail in the coffin. Also remember that PCoIP isn’t owned by VMware; it was jointly developed with Teradici, and they are the sole supplier of PCoIP chips for zero clients. This move opens up the hardware ecosystem to an entirely new set of manufacturers who can offer zero clients with basic H.264 decoding chips instead of licensing Teradici intellectual property.

Other goodies…

AMD Graphics support for vSGA

Enables a multiuser GPU solution for Horizon via AMD graphics hardware

AMD SR-IOV support (single root I/O virtualization)

Native AMD driver support for OpenGL, DirectX and OpenCL acceleration

Solidworks, PTC and Siemens ISV certification planned

Intel vDGA Graphics support

With Intel Xeon E3: support for CPUs with an integrated Iris Pro GPU, compatible with Intel Graphics Virtualization Technology (Intel GVT-d), with support for up to 3 monitors per user.

Flash Redirection

This is in tech preview (it supports only server-side fetch of the Flash content). It redirects Flash content from the server to the client, where it is decoded and rendered locally.

This allows Flash streaming content to play more smoothly, with lower bandwidth and CPU usage on the server side.

Improved Printing Experience

Local and network printing is up to 4x faster.

Windows 10 Improvements

Scanner and serial port redirection are finally supported: scanner redirection supports the TWAIN and WIA standards on Windows clients, and serial port redirection passes serial ports from the client through to the server.

URL Content Redirection

Allows Horizon to redirect a destination URL from the virtual desktop to the local browser. Admins can configure policies to control whether users access the content with an application on the server or on the client. Supports HTTP and HTTPS. This can be useful for customers who need to separate internal browsing from external browsing domains, and it lets admins secure the environment by having potentially dangerous content execute on the client computer instead of on the VDI desktop.

Admins can configure GPOs that restrict which content opens in a browser inside the virtual desktop versus the browser on the client’s PC.

I’m going to admit to being a huge fan of the VMware App Volumes product since its acquisition in August of 2014. The team has done a great job of responding to customer requests for new and increased functionality, and they have just announced their biggest update yet with 3.0.

So what’s new in VMware App Volumes 3.0?

AppToggle – A new patent pending capability that enables per user entitlement and installation of applications within a single AppStack for maximum flexibility. This helps IT reduce the number of AppStacks that need to be managed, lowers storage capacity and management costs even further, improves performance, and allows applications to share or have different dependencies in a single AppStack. The AppToggle architectural approach of only installing entitled applications also offers greater security as opposed to simply hiding installed applications, which can easily be exploited.

AppCapture with AppIsolation – A new capability that easily captures and updates applications to simplify application packaging, delivery and isolation with a command line interface that enables IT to distribute AppStack creation to different teams and merge AppStacks for simplified delivery and management. With support for AppIsolation, AppCapture also integrates with VMware ThinApp to enable IT to deliver native applications and VMware ThinApp applications in one consistent format through AppStacks.

AppScaling with Multizones – Allows integrated application availability across datacenters so customers no longer need additional software to replicate AppStacks across sites. IT admins can add multiple file shares to host AppStacks and pair them to VMware vCenter™ instances. An import service will then scan the file shares and populate the AppStacks into the data stores of the vCenter instances. This removes the requirement of having a shared data store between vCenter instances to replicate AppStacks.

Integrated Application, User Management and Monitoring Architecture – A new modern architecture for the VMware App Volumes manager component offers the industry’s only solution that combines application and user environment management with monitoring. With an architecture streamlined for faster provisioning and context-aware user policy, this offers a flexible and reliable application and lifecycle management solution for the digital workspace.

Unified Administration Console – A single pane of glass across application management, user environment management and monitoring. This next-generation admin view recognizes patterns to create simple, yet powerful workflows for application delivery, user environment management (beta for this release), and desktop and published application environment monitoring. This removes the complexity of managing multiple consoles but still enables customers to use legacy consoles if desired. Out of the box functionality also enables IT admins to address end-user needs quickly and efficiently.

Additionally, it seems they’re breaking the functionality of the new release out into multiple editions… It will be interesting to see how these are priced, and also how they’re integrated into existing offerings like Horizon Enterprise.

Standard – A new edition starting at only $60 per user that includes AppStacks, Writeable Volumes and integrated UEM

Advanced – Includes scalable enterprise management capabilities such as AppToggle, AppCapture with AppIsolation, and AppScaling for organizations with 1,000 or more seats.

Enterprise – Includes application monitoring in addition to the capabilities available in the Advanced edition

I recently consulted for a customer who was in the process of deploying a new Oracle instance for their production ERP system.

When they brought up the fact that they had an organizational directive to only deploy new solutions on their VMware Infrastructure the Oracle Sales Rep and Sales Engineer promptly said “YOU CAN’T DO THAT, Licensing will burn through your entire budget! You have to license ALL THE THINGS! (Things in this case being every core in the cluster)”

Yea, that’s not only false, it’s a downright lie. Any Oracle employee, and even most resellers you come across, will tell you that in order to run Oracle DB on a virtual infrastructure other than Oracle’s own, you’ll need to license every core in the cluster, because Oracle cannot guarantee where the processing for a virtual machine running their software happens.

In actuality, there is NOTHING in an Oracle License and Service Agreement (OLSA) that indicates you have to license their software that way. The next time Oracle pushes back, put these questions to them in writing:

“Is the contract that we have signed for Oracle Software Legally Binding on both of us?”
“Does the contract supersede all prior agreements both verbal and written?”
“Can the contract be modified or altered in any way, other than in writing, and being agreed to and signed by both of our authorized representatives?”
“Does the contract form the complete agreement?”
“Where in the contract does it state we must license processors that are not running Oracle software?”
“Where in the contract does it state anything with regards to Hypervisor, VMware, vCenter, Cluster, Live Migration or vMotion?”

After you get your answers to these questions in writing, and Oracle admits that you don’t owe them a dime more in licensing for any servers that do not and have not run Oracle software, go on about your daily business. You are now assured that there is no way Oracle can charge you for anything more than what you’re actually using. They can’t charge you a penny more for licenses beyond where Oracle software is installed and/or running. There is no mention of hypervisor, VMware, vCenter, cluster (except in reference to Oracle RAC), live migration, or vMotion anywhere in your OLSA contract, so none of that can apply to you; the contract, and the documents it explicitly refers to, are all that matter.

So, the next time you or your customer is interested in virtualizing Oracle on non-Oracle hardware or software, remember: you have every right to!