
You can't buy a hybrid cloud as a product or as a service, and even if you could, you would need to customise it for your unique requirements and constraints. The reality today is that you need to buy the ingredients from a supplier and then roll your own hybrid cloud; to manage this, you need to put in place a Hybrid Cloud Manifesto.

The SPC-2 benchmark is a useful benchmark for bandwidth-intensive sequential workloads, such as backup, ETL (extract, transform, load) and large-scale analytics. Wikibon does a deep comparative analysis of the SPC-2 results, time-adjusting the pricing information to correct for different publication dates. Wikibon then analyses performance and price-performance together, and develops a guide to enable practitioners to understand the business options and best strategic fit. Wikibon concludes that the Oracle ZS4-4 storage appliance dominates this high-bandwidth processing market, offering the best combination of good performance and great price-performance at the high end and mid-range.
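Wikibon's exact adjustment method is not spelled out here, but the idea of time-adjusting benchmark prices can be sketched as follows. The 25% annual price-decline rate and the dollar and MBPS figures are illustrative assumptions, not actual SPC-2 submissions:

```python
def time_adjust_price(list_price, years_since_publication, annual_decline=0.25):
    """Deflate a benchmark result's list price to a common reference date,
    assuming a constant annual price decline (the 25% rate is hypothetical)."""
    return list_price * (1.0 - annual_decline) ** years_since_publication

def price_performance(adjusted_price, mbps):
    """SPC-2-style price-performance: dollars per megabyte per second."""
    return adjusted_price / mbps

# Illustrative only: a $400K configuration published two years
# before the reference date, delivering 15,000 MBPS.
adjusted = time_adjust_price(400_000, 2.0)
print(adjusted)                              # 225000.0
print(price_performance(adjusted, 15_000))   # 15.0 $/MBPS
```

Normalising all results to one reference date in this way is what makes price-performance numbers from different publication years comparable.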

The thesis of the overall Wikibon research in this area is that within two years, the majority of IT installations will move to combining workloads to share data, using NAND flash as the only active storage media. This will save on IT budget and improve IT productivity, especially in the IT development function. Our research shows these changes have the potential to reduce the typical IT budget by 34% over a five-year period while delivering the same functionality to the business. The projected IT savings of moving to a shared-data all-flash datacenter for an organization with a $40M IT budget are $38M over 5 years, with an IRR of 246%, an annual ROI of 542%, and a breakeven of 13 months. Future research will look at the potential to maximize the contribution of IT to the business, and will conclude that IT budgets should increase to deliver historic improvements in internal productivity and increased business potential.
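As a rough illustration of how figures like IRR and breakeven are derived, here is a minimal sketch; the cash flows below are hypothetical placeholders, not Wikibon's model:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change
    in NPV between lo and hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def breakeven_month(monthly_flows):
    """First month at which cumulative net cash flow turns non-negative."""
    total = 0.0
    for month, cf in enumerate(monthly_flows, start=1):
        total += cf
        if total >= 0:
            return month
    return None

# Hypothetical: $5M up-front migration cost, $10M/year savings for 5 years.
annual_flows = [-5.0, 10.0, 10.0, 10.0, 10.0, 10.0]
print(irr(annual_flows))   # an IRR of roughly 2.0, i.e. ~200%
```

The point of the sketch is only that a modest up-front investment followed by large recurring savings is what produces triple-digit IRRs and a breakeven measured in months.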

The Public Cloud market is still forming, but it seems poised to enter the Early Majority stage of its development, where user behavior, preferences, and strategies become more stable. Large enterprises are more discerning of Public Cloud IaaS offerings; test and development appears to be a key entry point for them, since scale, operational complexity, and security/compliance/regulatory demands require a more nuanced approach to Public Cloud IaaS. Small and medium enterprises have the greatest need for Public Cloud. To navigate an increasingly complex and costly IT infrastructure environment, they should consider well-established, lower-risk entry points such as SaaS, email, and web applications before venturing into mission-critical and IaaS workloads.

Flash Is All the Rage. But Where Does It Belong in the Data Center?

Storage technology has once again become the industry’s hot trend with the advent and adoption of flash storage. The changing dynamics of cloud-based architectures and our insatiable appetite for big data have driven companies to think about how they can turn infrastructure spend – and particularly their storage – into a profit engine. As a former professor at one of the top engineering schools in the country, I’m proud to be a part of this innovation.

However, there’s been much discussion lately about cloud moving the data and storage problem from one place to another. In fact, The New York Times recently published an interesting and detailed story on the cost, energy and footprint challenges large data centers face. Data centers continue to grow unabated as we busily fill hard drives with work data plus downloaded movies, music, family photos, tax files and every email we’ve sent or received in the past five years.

The concept and promise of cloud is great – don’t get me wrong. But there is a much bigger problem in the data center than the idle file servers waiting for computations that the article mentions. There are also racks and racks of storage servers under heavy load that contain hard drives filled to just a fraction of their capacity. This is called short stroking, whereby lots of disk drives are used in parallel to get performance. Because these disks are utilized at or near 100 percent of their performance, the remaining capacity cannot be used: there is no performance left to get additional data on and off the drive. When short stroked, these added high-speed hard drives consume enormous amounts of energy and take up vast amounts of space within the data center. And the sad fact of the matter is that 90 percent of the available capacity can go unused.
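To see why short stroking strands capacity, consider a back-of-the-envelope sizing sketch; the per-drive IOPS and capacity figures below are hypothetical, chosen only to make the roughly-90-percent figure concrete:

```python
import math

def drives_needed(iops_target, capacity_target_gb, drive_iops=300, drive_gb=900):
    """Drives required when an array is sized for performance vs. capacity.
    Per-drive figures are hypothetical high-RPM HDD values."""
    for_performance = math.ceil(iops_target / drive_iops)
    for_capacity = math.ceil(capacity_target_gb / drive_gb)
    drives = max(for_performance, for_capacity)          # performance usually wins
    utilization = capacity_target_gb / (drives * drive_gb)
    return drives, utilization

# A workload needing 30,000 IOPS but only 10 TB of data:
drives, util = drives_needed(30_000, 10_000)
print(drives)                   # 100 drives to meet the IOPS target
print(round(util * 100, 1))     # only ~11.1% of the raw capacity is usable
```

Twelve drives would hold the data, but a hundred are needed to deliver the IOPS, so nearly nine-tenths of the purchased capacity sits idle while still drawing power and floor space.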

Now, many argue that adding flash storage is the solution. In most cases, I would agree at a conceptual level. With flash, you get significant performance improvements that allow your business to run more efficiently. Ultimately, you can do a whole lot more in a lot less time.

But with innovation comes hurdles, both from a physical and a mental perspective. It’s great that we have new technologies like flash storage, but today’s storage architecture, rooted in 25 years of disk-based technology, will not allow customers to reap the true benefits of this game-changing technology. While we know it’s economically and operationally tough to rip and replace a data center based on a brand new architecture, we need to take small but important steps to begin this paradigm shift.

One quick way to begin this change is to answer the following question: where is the optimal place for flash storage in your data center?

There are many differing opinions on where flash is most appropriately deployed. Some vendors advocate putting it in the array. This can be expensive and suboptimal, creating islands of flash and using only a fraction of the technology – even arrays that are under-utilized for a specific workload would bear the cost of the flash. And it still doesn’t address the bottleneck of moving significant amounts of data over the network to the server. Others want you to keep flash in the server. With this deployment, the benefits are greatly limited by the lack of sharing (each server only has access to its local flash); it is still expensive and requires significant integration.

As an industry, we are putting flash in the wrong locations. Why continue to grow the size of our data centers as if they’re landfills, rather than capitalize on the intellectual capital within the technology industry and leverage it to develop more cost-effective, ecologically beneficial alternatives? We challenge the broader storage and data management vendors of the world. We challenge ourselves to do the same in our own technology solutions. But most importantly, we challenge the consumer to be in the know as well.

So, the answer is not in the server and not in the storage – but in the network. By placing flash on the network, you can extend and amplify its benefits and create globally shared pools of performance and efficiency. Network-attached flash will become the way more and more data centers turn their expenditures into a profit engine, allowing organizations to achieve unlimited application performance scaling, free applications from the confines of the data center by eliminating latency, and cut storage costs by more than half.

In the end, we need to invert the equation of spending most of our IT data center budget on “just keeping the lights on.” While flash is going to help us change that equation, it’s paramount to consider where this technology is placed throughout your infrastructure.

I’d love to hear your comments, and to know how you’re using flash and the pros and cons you’ve experienced so far.

About the Author

Ronald Bianchini, Jr., President and CEO

As President and Chief Executive Officer of Avere Systems, Co-Founder Ron Bianchini has a long record of accomplishment in building and leading successful companies that deliver breakthrough technologies. Prior to Avere, Ron was a Senior Vice President at NetApp, where he served as the leader of the NetApp Pittsburgh Technology Center. Before NetApp, he was CEO and Co-Founder of Spinnaker Networks, which developed the Storage Grid architecture acquired by NetApp. Ron also served as Vice President of Product Architecture at FORE Systems, where he was responsible for ATM products. Previously, he co-founded Scalable Networks (acquired by FORE), which designed and implemented a large-scale Gigabit Ethernet switch, and earlier in his career, he was a professor at Carnegie Mellon University.

Ron received an S.B. degree in Electrical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Electrical and Computer Engineering from Carnegie Mellon University. He also holds numerous patents in fault-tolerant distributed systems and high-speed network design and has published extensively in technical journals.