San Carlos, CA-based CacheBox, whose engineers include the original architects of the Veritas clustered file system, has emerged from stealth with CacheAdvance, a new type of cache management solution. The company says it is the first to enable application-centric acceleration, making possible major, non-disruptive improvements in application throughput and processing.

“With the deluge of data that needs to be processed in modern data centres, this is the most cost-effective solution for application acceleration,” said John Groff, CacheBox’s COO. “We are using technology that’s relatively well known – server-side caching – and advancing it to the next level.”

Groff said that CacheBox represents the third stage in the evolution of server-side caching.

“FusionIO’s ioTurbine got on this quickly, and they focused on the storage tier. As virtualization became more mainstream, others like Pernix Data advanced it into a new tier to deal with that, with software for virtualizing server-side flash memory,” Groff said. “Now the final frontier for this technology is advancing it into the application tier – predictably managing the data that flows between the application and the infrastructure to allow the storage to keep up. That’s application-centric acceleration.”

CacheBox’s software removes bottlenecks by monitoring application I/O requests and intelligently determining which data to accelerate for optimal performance. It intercepts application I/O requests bound for relatively slow hard disk devices and reroutes them to faster flash devices that hold a cached copy of the data the application requires, raising performance. It also reduces write amplification, which improves flash endurance.
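The intercept-and-reroute pattern described above can be sketched in miniature. This is a hypothetical illustration only – CacheBox’s product operates at the block layer, not in Python, and every name here is invented for the example:

```python
# Toy read-through/write-through block cache illustrating the pattern:
# reads are intercepted and served from "flash" when a cached copy exists,
# otherwise they fall through to the slower "HDD" backing store.

class BlockCache:
    def __init__(self, backing_store, flash_capacity_blocks):
        self.backing = backing_store      # dict: block number -> data (stands in for the HDD)
        self.flash = {}                   # cached blocks (stands in for the flash device)
        self.capacity = flash_capacity_blocks
        self.hits = 0
        self.misses = 0

    def read(self, block_no):
        # Intercept the read: serve from flash when a cached copy exists.
        if block_no in self.flash:
            self.hits += 1
            return self.flash[block_no]
        # Miss: fall through to the slower backing store, then cache the block.
        self.misses += 1
        data = self.backing[block_no]
        self._admit(block_no, data)
        return data

    def write(self, block_no, data):
        # Write-through: update both tiers so the cache never serves stale data.
        self.backing[block_no] = data
        if block_no in self.flash:
            self.flash[block_no] = data

    def _admit(self, block_no, data):
        # Simple FIFO eviction keeps flash usage bounded; a real product would
        # apply heuristics about which blocks are worth accelerating at all.
        if len(self.flash) >= self.capacity:
            self.flash.pop(next(iter(self.flash)))
        self.flash[block_no] = data
```

The interesting engineering, per the article, is in the admission decision – choosing which data deserves the limited flash capacity – which the toy FIFO above deliberately glosses over.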

“We use flash efficiently to read and write, and we begin to differentiate in a very meaningful way with Application Specific Module (ASM) technology,” said Murali Nagaraj, VP of Engineering and Product Development.

The ASMs target a range of backend applications and drive heuristic analysis of application behavior. Each ASM identifies the application’s signature, configuration and other components such as tables and indexes, providing fine-grained intelligence to proactively manage caching policies at the block level, as precisely as the application’s I/O requires.
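To make the ASM idea concrete, here is a minimal sketch of what component-aware caching policy might look like. The component names, file-extension heuristics and priority values are all assumptions for illustration; the article does not describe CacheBox’s actual ASM interface:

```python
# Hypothetical application-specific policy: recognize which database
# component a file belongs to and assign a caching priority accordingly.
# Small, randomly-read structures (indexes) benefit most from flash;
# sequentially-written logs benefit least.

COMPONENT_PRIORITY = {
    "index": 3,   # random reads, small footprint -> cache aggressively
    "table": 2,   # cache hot regions
    "log":   0,   # sequential writes -> bypass the cache
}

def classify_component(file_path):
    """Map a file path to an application component (toy heuristic)."""
    if "redo" in file_path or file_path.endswith(".log"):
        return "log"
    if file_path.endswith(".idx"):
        return "index"
    return "table"

def caching_priority(file_path):
    """Priority the cache admission logic would consult per I/O request."""
    return COMPONENT_PRIORITY[classify_component(file_path)]
```

The point of the sketch is the division of labor: the module encodes knowledge of one application’s on-disk layout, so the block-level cache can treat an index block differently from a log block without the application being modified.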

“By understanding this information, they can accelerate the key elements of the applications that limit performance, while utilizing few flash resources,” Groff said. “It really advances the technology to the next paradigm.”

CacheBox’s internal testing of CacheAdvance has produced impressive results, with performance gains ranging from 10X to 100X over HDD. It also comes within 5% to 15% of flash performance – while utilizing a small fraction of flash capacity.

“In all use cases, like analytics and transaction processing, we are seeing fantastic results,” Nagaraj said.

The results, and the low level of flash utilization, are helping overcome the customer and channel concerns that typically greet new technology from a startup, Groff said.

“While caching has been around forever, the application-centric point of view is new to customers,” he said. “However, we fit in that interesting place where most of the applications are business critical. We deliver business value to these organizations and speak the language of the application developer, so we haven’t had the usual startup concerns. It’s so impressive in terms of the results we can deliver and the low level of utilization we require.”

Because the company is so new, its channel strategy is still evolving, but Groff said CacheBox wants to develop a strong channel to handle the SME part of the market.

“The channel is still on the drawing board, as our proof-of-concept deployments have been primarily direct,” he said. “We would like the channel to be SME, with a small number of strong VARs and SIs.” A Linux version of CacheAdvance is currently available directly from CacheBox for customers ranging from SMEs to hyperscale organizations.