All posts tagged Caching

High performance flash-based storage has dramatically improved the storage infrastructure’s ability to respond to the demands of servers and the applications that depend on it. Nowhere does this improvement have more potential than in the virtualized server environment. The performance benefits of flash are so great that even indiscriminate deployment will show gains. But that approach may not take full advantage of flash performance, and it can be a much more expensive deployment model that also puts data at risk. Modern data centers need to understand which forms of flash, and which deployment models, will deliver the greatest return on investment without risking any data.

Flash memory is a type of non-volatile storage that can be electrically erased and reprogrammed. What was the event that precipitated the introduction of this new storage medium? Well, it started in the mid-1980s, when Toshiba was working on a project to create a replacement for the EPROM, a low-cost type of non-volatile memory that could be erased and reprogrammed. The problem with the EPROM was its cumbersome erasure process: the chip had to be exposed to an ultraviolet light source to perform a complete erasure. To overcome this challenge, the electrically erasable E2PROM (EEPROM) was created. The E2PROM could be erased in-circuit, but it was eight times the cost of the EPROM. The high cost of the E2PROM led to rejection from consumers, who wanted the low cost of the EPROM coupled with the electrical erasability of the E2PROM, a combination flash memory ultimately delivered by erasing in blocks.

In an upcoming webinar, Storage Switzerland will make the case for using snapshots as a primary component of data protection. For this strategy to work, several things are needed from the storage infrastructure. First, it must be able to keep an almost unlimited number of snapshots; second, it needs a replication process that can transfer snapshot deltas (the changed blocks of data) to a safe place; and third, the entire storage infrastructure has to be very cost effective. In this column we will look at the first requirement: the ability to create and store a large number of snapshots without impacting performance.
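The delta replication described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the names (`take_snapshot`, `changed_blocks`, `replicate`) are hypothetical, and real systems track changed blocks incrementally rather than diffing full snapshots.

```python
# Illustrative sketch of snapshot delta replication: after an initial full
# copy, only the blocks that changed between snapshots cross the wire.

def take_snapshot(volume):
    """Capture a point-in-time copy of a volume's blocks (block_id -> data)."""
    return dict(volume)

def changed_blocks(old_snap, new_snap):
    """Compute the delta: blocks that are new or differ since old_snap."""
    return {blk: data for blk, data in new_snap.items()
            if old_snap.get(blk) != data}

def replicate(delta, remote):
    """Send only the changed blocks to the remote copy."""
    remote.update(delta)
    return len(delta)

volume = {0: b"boot", 1: b"data-v1", 2: b"logs"}
snap1 = take_snapshot(volume)
remote = dict(snap1)               # initial full replication

volume[1] = b"data-v2"             # application writes between snapshots
volume[3] = b"new-file"
snap2 = take_snapshot(volume)

sent = replicate(changed_blocks(snap1, snap2), remote)
print(sent)                        # 2 blocks sent, not the whole volume
```

After the transfer, the remote copy matches the new snapshot even though only two of the four blocks were sent, which is why snapshot-based replication stays cheap as the number of snapshots grows.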

Almost a year ago I wrote a piece about server-side caching, wondering if it was a mere feature or something more. After this VMworld it is pretty clear that server-side caching is quickly maturing and is a really interesting area to watch, not just for what it does today but for what to expect tomorrow, especially from the virtualization perspective.

Deploying server-side cache is an effective way to accelerate application workloads, whether they are hosted on bare metal servers or on virtualized infrastructure. But with so many flash and SSD options to choose from (server-side cards, all-flash arrays, hybrid storage systems, etc.), IT decision makers may be unsure which solution is best for their business. Furthermore, many organizations already have various forms of flash distributed throughout the data center. How can all of these resources be managed efficiently and effectively without resorting to multiple “panes of glass”?

Caching is a popular first step when data centers want to leverage high performance flash storage. It eases the transition from traditional disk storage by automatically moving frequently accessed data to the high performance pool. Most cache technologies that have come to market have essentially ignored writes and focused instead on read caching. But an increasing number of vendors have started delivering, or have announced, write caching solutions as well. While caching both reads and writes sounds like a good idea, there are considerations that IT planners need to be aware of when implementing write-based caching.
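The core consideration with write caching can be shown with a toy model. This is a hedged sketch, not any vendor's product: the `CachedStore` class and its tiers are invented for illustration. It contrasts write-through (every write lands on the durable tier before being acknowledged) with write-back (writes are acknowledged from the cache and flushed later), which is where the data-risk concern comes from.

```python
# Illustrative model of read caching plus two write policies.
# A write-back cache acknowledges writes before they are durable, so a
# cache-device failure before flush() would lose the dirty blocks.

class CachedStore:
    def __init__(self, write_back=False):
        self.cache = {}        # fast tier (e.g. server-side flash)
        self.backend = {}      # slower, durable tier (e.g. disk array)
        self.dirty = set()     # blocks not yet written to the backend
        self.write_back = write_back

    def write(self, key, value):
        self.cache[key] = value
        if self.write_back:
            self.dirty.add(key)          # acknowledged before it is durable
        else:
            self.backend[key] = value    # write-through: durable immediately

    def read(self, key):
        if key in self.cache:            # cache hit: served from flash
            return self.cache[key]
        value = self.backend[key]        # miss: fetch and populate the cache
        self.cache[key] = value
        return value

    def flush(self):
        """Destage dirty blocks to the durable tier."""
        for key in self.dirty:
            self.backend[key] = self.cache[key]
        self.dirty.clear()

wt = CachedStore(write_back=False)
wt.write("a", 1)                 # durable the moment it is acknowledged

wb = CachedStore(write_back=True)
wb.write("a", 1)                 # fast, but lost if the cache fails now
wb.flush()                       # only now is it safe on the backend
```

Write-back gives the latency benefit the excerpt alludes to, but it is exactly why planners add mirrored or battery-protected cache devices before enabling it.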

It seems like every other startup I talk to is pitching VMware-integrated caching, and a spate of acquisitions and announcements from flash companies is legitimizing the idea. Even VMware has gotten in on the game with vFlash Read Cache (vFRC). But integrating caching with VMware vSphere isn’t nearly as easy and effective as everyone is making it sound!

Solid state drive (SSD) solutions using flash are becoming the ‘go to’ option for addressing storage performance bottlenecks. And within that technology category, PCIe-based SSDs arguably represent the state of the art. Locating the flash inside the server eliminates the latency of the storage network, but it can also re-introduce the storage silos that storage networking was designed to eliminate. This raises the question: “Is there a way to leverage server-side PCIe SSDs without breaking the storage network?”