PernixData FVP

FVP is an acceleration product that sits as close to the application as possible, without the need to rip and replace existing storage hardware. It accelerates both reads and writes, meaning it can deliver performance gains on top of any existing block-based storage, whether DAS or SAN, spinning disk or all-flash.

The first version of the product used server-side flash for acceleration. Version 2.0 introduced RAM to increase performance further. However, due to the volatile nature of DRAM, any storage acceleration product that uses it opens itself up to risk: what if you lose power? Luckily PernixData thought of this and created Distributed Fault Tolerant Memory (DFTM), which uses software-based mirroring to protect data.
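To illustrate the idea only (the function and variable names here are mine, not PernixData's, and dicts stand in for each host's RAM), a write protected in this way is acknowledged only once one or more peer hosts also hold a copy:

```python
# Illustrative sketch of DFTM-style write protection, not PernixData's
# actual implementation.

def mirrored_write(block, data, local_ram, peers, replicas=1):
    """Write to local RAM, mirror to `replicas` peer hosts, then acknowledge."""
    local_ram[block] = data
    for peer_ram in peers[:replicas]:
        peer_ram[block] = data   # copy sent over the network to a peer host
    # Only now is the write acknowledged: losing this host no longer loses
    # the data, because at least one peer still holds a copy.
    return True
```

The point is simply that the acknowledgement waits for the peer copies, which is what makes volatile RAM safe to use as a write acceleration tier.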

Version 2.5 took this feature one step further and introduced memory compression to accommodate larger application working sets.

As Churchill said [1], “with great power comes great responsibility”, and FVP 3.0 brought with it a new HTML5 UI and a number of other improvements, such as richer reporting and auditing.

FVP 3.5

FVP 3.5 was released on Monday 13 June 2016 and builds on many of the successful features of older versions. However, one of the biggest bugbears previously was the need to provision a Windows management machine just for FVP. As mentioned in my previous article, for non-Windows environments this was just another headache.

Alongside this, customers needed to deploy (or make use of an existing) Microsoft SQL Server instance. As I’m sure many are aware, SQL Server is not cheap. FVP can use SQL Server Express instead, but that edition is limited to a 10GB database, which is not exactly ideal in enterprise environments.

Thankfully those days are gone. FVP 3.5 offers the option of a Linux-based appliance with an embedded PostgreSQL database. The Windows and SQL Server option is still available for those who want it.

image courtesy of PernixData

FVP Policies

A lot has already been written about FVP, and there are others far better equipped than I am to explain how it works. If you’d like to know more, Frank Denneman has written a number of informative pieces. Check out http://frankdenneman.nl/?s=FVP.

In short, FVP uses two policies: write-through and write-back. With the former, data is written directly to the backend storage system, and a copy is kept on a local flash resource. Where FVP determines that subsequent reads can be served from the flash resource, they are served from there instead. Under this policy, therefore, only reads are accelerated.

image courtesy of Frank Denneman

With write-back, writes are first made to the flash resource. Once acknowledged, FVP then de-stages the data to the storage backend. By de-staging writes in this way, the application sees minimal latency and excellent performance, and FVP is left to do the rest. If that application is something like an Oracle or SQL Server database, the benefits are obvious.
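As an illustration only (the class, method and attribute names below are mine, not FVP's; dicts stand in for the flash resource and the backend array), the two policies can be sketched in a few lines of Python:

```python
# Sketch of write-through vs write-back acceleration policies.

class AcceleratedDatastore:
    def __init__(self, policy="write-through"):
        self.policy = policy
        self.flash = {}          # server-side acceleration resource (flash/RAM)
        self.backend = {}        # SAN/DAS capacity tier
        self.destage_queue = []  # writes awaiting copy to backend (write-back)

    def write(self, block, data):
        if self.policy == "write-through":
            # Data hits the backend synchronously; flash keeps a copy so
            # that subsequent reads can be accelerated.
            self.backend[block] = data
            self.flash[block] = data
        else:  # write-back
            # Acknowledge as soon as flash has the data; destage later.
            self.flash[block] = data
            self.destage_queue.append(block)

    def destage(self):
        # In write-back mode, acknowledged writes are copied to the
        # storage backend in the background.
        while self.destage_queue:
            block = self.destage_queue.pop(0)
            self.backend[block] = self.flash[block]

    def read(self, block):
        # Reads are served from flash on a hit, else from the backend
        # (populating flash for next time).
        if block in self.flash:
            return self.flash[block]
        data = self.backend[block]
        self.flash[block] = data
        return data
```

Note how in the write-back case the application's write returns before the backend sees the data at all, which is exactly why the peer mirroring described earlier matters.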

Installing FVP 3.5

Now that our capacity tier has been installed, it is time to accelerate our workloads.

Download the FVP 3.5 host extension relevant for your version of vSphere and upload it to your ESXi hosts. You can either use the datastore browser to place it on a ScaleIO datastore, or use your SFTP/SCP client of choice to upload it to each host.

SSH to each host and place it into maintenance mode using the following command:
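The exact commands were omitted above; a typical sequence looks something like the following (the datastore path and bundle filename are examples only and will differ depending on where you uploaded the extension and which version you downloaded):

```shell
# Put the host into maintenance mode (ensure no running VMs remain)
esxcli system maintenanceMode set --enable true

# Install the FVP host extension offline bundle
# (example path/filename; adjust to wherever you uploaded it)
esxcli software vib install -d /vmfs/volumes/datastore1/pernixdata-fvp-offline-bundle.zip
```

After the reboot described below, take the host back out of maintenance mode with `esxcli system maintenanceMode set --enable false`.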

Repeat the above on each node and reboot. The installer will report that a reboot is not necessary; however, I’ve had unpredictable results in the past when I’ve skipped it.

Once the hosts have been rebooted, open the vSphere Web Client, select your cluster and then right-click and select Deploy OVF Template:

Click Browse to select the FVP 3.5 OVA file, and then click Next

Click Next

Select the location you wish to save the virtual machine to and click Next

Select the size of the deployment. The options are:

Tiny – 1-5 hosts or 1-50 VMs

Small – 5-100 hosts or 50-1000 VMs

Medium – 100-400 hosts or 1000-4000 VMs

Large – more than 400 hosts or more than 4000 VMs (wow!)

Click Next

Select the datastore you wish to store the management appliance on, then click Next

Select the network and then choose how the management network will be allocated. In my example, the appliance will live in a server management subnet, so I chose Static – Manual IP allocation. When done, click Next

If you chose to assign a static IP manually, enter the required details and click Next, followed by Finish.

After the deployment has finished, power the appliance on. Using your web browser, browse to the IP address you specified (if you chose DHCP, the IP address will appear in the Web Client):

Click I Accept, followed by OK.

Log in using the default credentials of pernixdata and pernixdataappliance, then click Login:

Type the address of your vCenter, and supply the details of a user account that has access. In my example I had previously created a service account and granted it the necessary permissions on the vCenter, so I specified that account here. When done, click Next.

On the Network Settings page, verify the network details are correct and supply a hostname. When done, click Next

Choose the correct time zone and click Next

Finally, specify a new password for the configuration console, then click Finish.

Configuring FVP

From the drop-down box at the top, change from PernixData Hub to FVP. Click FVP Clusters, followed by Create FVP Cluster…

Give it a name, select your vSphere cluster, and then click OK.

Under the new cluster, click the Configuration tab. Under Acceleration Resources click Add:

In my example, I will be using RAM to accelerate my storage workloads, and will specify 20GB per host to enable both DFTM and DFTM-Z:

Select the resources you wish to use, and click OK.

Click on the Datastores tab, and then click Add:

Select the datastore(s) you wish to accelerate, and for each one choose the Write Policy. If choosing Write Back, select the number of peer hosts to replicate writes to:

Finally, click OK.

Lastly, click on the Advanced tab, then select Network Configuration.

By default, FVP uses the vMotion network for acceleration traffic, but this can be changed. In my example I have created a separate dvSwitch port group solely for FVP and selected this instead. This may be more desirable for those with 10GbE who wish to use NIOC to control bandwidth with greater granularity.

Finishing off

Once acceleration has been configured, the benefits will be seen almost immediately. In my lab I have moved a number of management VMs onto the ScaleIO cluster and will be monitoring their performance closely over the coming days.

Coming up

So far we have our capacity (EMC ScaleIO) and acceleration (PernixData FVP) tiers in place. Now we need some insight into how best this technology can serve us. For that I plan to use PernixData Architect 1.1, which will be the subject of the third and final instalment.

Notes

[1] Who said that line is open to debate. I did some brief research, and I’m going with Sir Winston Churchill. This is a semi-professional blog and I’m not going to get laughed at by quoting Peter Parker’s uncle 🙂