DataGravity (@DataGravityInc) has revealed what the previously announced secret gravity sauce is all about. Be sure to check out the press release, “DataGravity Unveils Industry’s First Data-Aware Storage Platform”:

“NASHUA, N.H., August 19, 2014 – DataGravity today announced the launch of the DataGravity Discovery Series, the first ever data-aware storage platform that tracks data access and analyzes data as it is stored to provide greater visibility, insight and value from a company’s information assets. The DataGravity Discovery Series delivers storage, protection, data governance, search and discovery powered by an enterprise-grade hardware platform and patent-pending software architecture, enabling midmarket companies to glean new insights and make better business decisions.”

We often see new storage arrays with an impressive list of new features. Let’s see how this one stacks up.

Looks fairly standard: a nice, sharp-looking box. But what makes this unique?

Why the fuss? Why do we care what is on our storage? With the increasing cost of data, many are looking at what is taking up the space and using the disk.

“The unstructured data dilemma is growing, and IDC has been predicting technology would catch up to provide an answer to the market demand,” said Laura Dubois, program vice president of storage at IDC. “The DataGravity approach is transformational in an industry where innovation has been mostly incremental. DataGravity data-aware storage can tell you about the data it’s holding, making the embedded value of stored information accessible to customers who cannot otherwise support the cost and complexity of solutions available today.”

“In ‘The Digital Universe in 2020’ report, IDC estimates that the overall volume of digital bits created, replicated, and consumed across the United States will hit 6.6 zettabytes by 2020. That represents a doubling of volume about every three years. For those not up on their Greek numerical prefixes, a zettabyte is 1,000 exabytes, or just over 25 billion 4-TB drives.” Source: http://www.networkcomputing.com/storage/2014-state-of-storage-cost-worries-grow/a/d-id/1113476
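To put those prefixes in perspective, here is a rough scale check of the 6.6-zettabyte figure from the quote above (a back-of-the-envelope sketch assuming decimal units throughout):

```python
# Rough scale check of the IDC figure quoted above (decimal units assumed).
ZETTABYTE = 10**21          # bytes; 1 ZB = 1,000 EB
DRIVE = 4 * 10**12          # one 4-TB drive, in bytes

us_2020_bytes = 6.6 * ZETTABYTE
drives_needed = us_2020_bytes / DRIVE
print(f"6.6 ZB ~= {drives_needed:,.0f} four-TB drives")

# "Doubling about every three years": 2014 to 2020 is two doublings.
vol_2014 = us_2020_bytes / 2**2
print(f"Implied 2014 volume: {vol_2014 / ZETTABYTE:.2f} ZB")
```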

Today companies often leverage tools like Hadoop, EMC ESA, or new offerings like CloudPhysics to learn what people are using storage for in their environment, or to get data out of the system.

“Perhaps you’ve heard of DataGravity (@DataGravityInc), or perhaps you haven’t. They’ve been staying pretty quiet about what they’ve been working on. Today, however, they’re dropping the veil. Read on for a look into what you can expect from this exciting announcement!”

“DataGravity just released their embargo and my little techie corner of the Internet is on fire. There’s a very good reason for that, but it might not be obvious at a glance. Read on to learn why DataGravity is a Big Deal even though it might not work out.”

I’m a fan of Veeam and use it in production today, so I wanted to share the write-up.

“The goal of the joint whitepaper between Veeam and Nutanix is to help customers deploy Veeam Backup & Replication v7 on Nutanix, when used with VMware vSphere 5.x. This post will highlight some of the major points and how customers can head off some potential issues. The whitepaper covers all the applicable technologies such as VMware’s VADP, CBT, and Microsoft VSS. It also includes an easy-to-follow checklist of all the recommendations.”

“Veeam is modern data protection for virtual environments, and is also a great sponsor of my blog. The web-scale Nutanix solution and its data locality technology are complemented by the distributed and scale-out architecture of Veeam Backup & Replication v7. The combined Veeam and Nutanix solution leverages the strengths of both products to provide network-efficient backups, enabling customers to meet recovery point objective (RPO) and recovery time objective (RTO) requirements.
The architecture is flexible enough to enable the use of either 100% virtualized Veeam components or a combination of virtual and physical components, depending on customer requirements and available hardware. You could also use existing dedicated backup appliances. In short, our joint solution is flexible enough to meet your requirements and efficiently use your physical assets. For example, if you have requirements for tape-out, then you will need at least one physical server in the mix to connect your library to, since tape Fibre Channel/SAS pass-thru is not available in ESXi 5.x.”
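As a quick illustration of the RPO planning mentioned above, here is a back-of-the-envelope feasibility check. All figures (change rate, throughput, window length) are hypothetical examples, not numbers from the whitepaper:

```python
# Hypothetical sizing inputs: 2 TB of changed data per day and 500 MB/s of
# sustained effective throughput to the backup repository. Can a daily
# incremental finish inside an 8-hour backup window?
TB = 10**12
MB = 10**6

changed_bytes = 2 * TB        # daily incremental (example figure)
throughput = 500 * MB         # bytes/sec to the repository (example figure)
window_hours = 8

backup_hours = changed_bytes / throughput / 3600
status = "met" if backup_hours <= window_hours else "missed"
print(f"Incremental takes ~{backup_hours:.1f} h; "
      f"the {window_hours}-hour window is {status}")
```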

“When virtualizing a solution, the last thing you want is your backup data stored in the same location as the data you are trying to protect. So the first best practice for a 100% virtualized solution is to use a secondary Nutanix cluster. The cluster would be comprised of at least three Nutanix nodes. This is where the virtualized Veeam Backup & Replication server (along with the data repository) would reside. Should you have a problem with the production Nutanix cluster, your secondary cluster is unaffected. Depending on the amount of data you are backing up and your retention policies, you may or may not want the same Nutanix hardware models as your production cluster. For example, you may want to consider the 6000 series hardware, which is ‘storage heavy’, for your secondary cluster. The following figure depicts a virtualized Veeam backup solution.”