
The Key To Performance Isn’t CPU Or Memory

‘It’s the storage speed, dummy.’

Applications are driving the enterprise, whether it is a relatively simple application used by millions of customers or a complex, scalable database that drives an organization's back end. These applications, and the users that count on them, expect rapid response times. In a world that demands “instant gratification,” forcing a customer, prospect, or employee to wait for a response is the kiss of death.

For most data centers, Crump suggests, “the number one cause of these ‘waits’ is the data storage infrastructure, and improving storage performance is a top priority for many CIOs.”

Sound familiar?

It may be challenging for executives who live well outside the IT glass house to think in milliseconds, or to recognize how much speed — which translates directly to application performance — matters. It’s tough enough to wrap our heads around the fractional advantages that accrue to Olympians like Usain Bolt and Michael Phelps, much less grasp the arcane benefits of sub-millisecond flash storage to everyday business applications.

But matter it does. Here’s a look at why, how we got here, and what organizations need to know as they tune their IT infrastructures for maximum performance — and optimum profitability. (Hint: it’s not speed for its own sake.)

I’ve got a flash for you. If you’re the owner of a small to midsize business, then (to paraphrase James Carville) it’s the storage performance, dummy.

For years, the IT establishment has been telling businesses that faster performance can be achieved through more memory and greater CPU horsepower. The problem with that is simply this: at some point, you have enough processing power and memory capacity, and you’re still not satisfied with how your applications perform.

Even within the storage business, the mantra has been “large enough” storage. Only the very savvy have been recommending faster storage for the office network. Now, with the advent of inexpensive SSD/flash disks for the office, everyone can get very fast storage within a network environment. How does this translate to the data center and the cloud? Not all that well, as it turns out. Even as you add more CPU and memory to the cloud, apps are still sluggish.

The reason is that most cloud providers have opted for larger capacity — but slightly slower and less expensive — disks. High speed flash storage has become feasible in the cloud environment only recently, but even SSD drives have their limits in the cloud. There is, after all, a reason they call it “off the shelf.” This “commodity storage” offers only incremental bumps, when the smart money should be on technologies that really do deliver palpable, strategic improvements in storage performance — and therefore application speed.

The industry hasn’t done an adequate job equipping business users with the vocabulary they need to understand storage speed. The key is latency: as market researcher StorageSwiss puts it, how long it takes for a single data request to be received and the right data found and accessed from the storage media. The norm used to be storage latency of 5 milliseconds (ms) — now, it’s 1 to 2 ms latency across the board. And that’s not even fast enough. Sub-millisecond performance is just about here — and users need to start asking their providers for that kind of capability. Before long, if providers aren’t hitting sub-ms latency, they won’t even be in the ballpark.
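Latency in this sense is simple to measure in principle: time how long a single small read takes from issue to completion. The sketch below is a rough, hypothetical illustration in Python (the file name, block size, and sample count are arbitrary choices, not anything from the article). Note that the operating system's page cache will make these numbers look far better than the underlying device; a serious benchmark would use a purpose-built tool such as fio with direct I/O.

```python
import os
import statistics
import tempfile
import time

def measure_read_latency(path: str, block_size: int = 4096, samples: int = 100) -> float:
    """Return the median latency, in milliseconds, of single small reads at varying offsets."""
    file_size = os.path.getsize(path)
    latencies = []
    # buffering=0 bypasses Python's own buffering, though the OS page cache still applies.
    with open(path, "rb", buffering=0) as f:
        for i in range(samples):
            # Spread offsets across the file so we aren't rereading one block.
            offset = (i * block_size * 7) % max(file_size - block_size, 1)
            start = time.perf_counter()
            f.seek(offset)
            f.read(block_size)
            latencies.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(latencies)

# Create a 4 MiB scratch file of random data and sample its read latency.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * 1024 * 1024))
    scratch = tmp.name

median_ms = measure_read_latency(scratch)
print(f"median read latency: {median_ms:.3f} ms")
os.remove(scratch)
```

Even this crude timing makes the article's point concrete: whether that median lands near 5 ms, 1 to 2 ms, or below 1 ms is the difference the author is asking users to start demanding from their providers.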

This quantum boost in storage speed would seem to be one of those technology breakthroughs of interest mostly to those who actually make or sell the hardware, but that would be a misreading of the trend.

An industry analyst has identified high-speed flash storage as among the three biggest emerging trends in cloud computing, noting that flash storage offers greater efficiency compared to the traditional HDD storage device. “Currently, the cost of flash-based NAS storage is several times larger than that of HDD-based NAS storage,” the firm reports. “With the increased user access needs, flash-based NAS storage arrays offer 40 to 45 times better performance than hard disk input/output (I/O) performance.” New York-based 451 Research concurs, “With the inclusion of solid-state drives (SSDs) in arrays, performance is no longer a differentiator in its own right, but a scalability enabler that improves operational and financial efficiency by facilitating storage consolidation.”

The “need for speed,” then, is neither an extravagance nor a distraction; it’s an opportunity for organizations to take stock of what they have, what they’re using, and how their IT infrastructure — whether in-house or administered in the cloud, through a third party — is supporting the business.

“The problem is that the storage industry often misleads IT professionals as to where they should direct their attention when trying to eliminate wait time,” says StorageSwiss’s Crump, intimating that IT’s confusion isn’t doing business users any favors. So not only is it “the storage performance, dummy,” it’s also “the latency, dummy.”

When it comes to recognizing the importance and impact of sub-millisecond storage performance, then, there really is no time to wait.

Adam Stern, founder and CEO of Infinitely Virtual, is an entrepreneur who saw the value of virtualization and cloud computing some six years ago. Stern’s company helps businesses move from obsolete hardware investments to an IaaS [Infrastructure as a Service] cloud platform, providing them the flexibility and scalability to transition select data operations from in-house to the cloud.

Stern founded Infinitely Virtual in 2007 to provide virtual dedicated server solutions to growing enterprises, offering what was essentially a cloud computing platform before the term existed. Infinitely Virtual is a subsidiary of Santa Monica-based Altay Corporation, which Stern founded in 2003 to provide Windows, VMware and other service solutions to small and medium-size businesses.

Since 2007, Infinitely Virtual has grown exponentially by offering affordable, customized cloud-based solutions built on the most sophisticated technology available. Host Review named the company to its list of “Top Ten Fastest Growing” enterprises in 2011, and it has made the list regularly ever since. Stern is a firm believer in corporate responsibility. The company’s products and services feature low power consumption and fit squarely within the green IT movement. As both a provider and a consumer of cloud-based services, Infinitely Virtual is committed to sustainability.

Prior to founding Altay and Infinitely Virtual, Stern worked for CallOne, Inc., a value-added reseller of computer equipment and professional services. From 1997 to 2000, Stern led CallOne as the Vice President of Operations. He then worked for 3Com in 2000 as a network consultant.

Stern holds a B.S. degree in business administration and management from Cal State, Northridge.
