FVP serves read requests from its pool of flash storage. It can operate in Write-Through mode, in which writes from applications hosted on virtual machines are acknowledged only once both FVP's flash and the back-end SAN have committed them. Even so, this accelerates writes compared with using a SAN alone, because the SAN, relieved of read requests, can service writes faster.

Writes can be accelerated further by having FVP operate in Write-Back mode, in which writes are acknowledged as soon as they hit the FVP flash, with replication between individual servers acting as the protection mechanism.
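The difference between the two modes comes down to when the write is acknowledged. A minimal sketch of that acknowledgement logic, with made-up latency figures purely for illustration (not PernixData's actual numbers), might look like this:

```python
import time

# Hypothetical latencies, for illustration only - not measured figures.
FLASH_WRITE_S = 0.0001   # server-side flash commit
SAN_WRITE_S = 0.005      # back-end SAN commit over the network

def write_through(block: bytes) -> float:
    """Acknowledge only after BOTH the local flash and the SAN commit."""
    start = time.perf_counter()
    time.sleep(FLASH_WRITE_S)  # commit to local flash cache
    time.sleep(SAN_WRITE_S)    # commit to back-end SAN before ack
    return time.perf_counter() - start

def write_back(block: bytes) -> float:
    """Acknowledge once the flash commit (and peer replication)
    completes; the SAN write is destaged asynchronously later."""
    start = time.perf_counter()
    time.sleep(FLASH_WRITE_S)  # commit to local flash plus replica
    elapsed = time.perf_counter() - start
    # SAN destage would happen in the background after the ack
    return elapsed

wt = write_through(b"data")
wb = write_back(b"data")
assert wb < wt  # Write-Back acknowledges sooner than Write-Through
```

The sketch shows why Write-Back is faster: the slow SAN round-trip is taken off the acknowledgement path, which is also why it needs replication to protect the unwritten data.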

El Reg's storage desk wonders what happens when the back-end SAN is itself a shared flash array. It seems realistic that Write-Through writes would be accelerated further, although network latency would still take its toll.

We also wonder what happens when a server's flash capacity grows to 10TB and beyond. At that point FVP can hold more of an application's dataset in its cache. But if that flash is used as storage memory - a logical extension of the server's DRAM address space - then, conceivably, there would be no need for FVP at all, since you would be using flash to cache flash, which seems pointless.

Continuing our flight of fancy, we also pondered what would happen if the flash used by FVP were the faster-access DIMM-type MCS flash being introduced by Diablo Technologies. As MCS has only just been announced and no product is yet available, that experiment will have to wait a while.

Suddenly there are several options for “flashifying” servers, all promising to radically increase application performance. Being constrained by disk I/O looks set to become a thing of the past in the performance-centric server data access world.

FVP is generally available now through PernixData's channel partners, with pricing options for small and medium businesses as well as larger organisations - less than $8,000 per server, we think.

A no-charge 60-day trial offer can be investigated here, and you can read a white paper about FVP here (PDF) ®