
The storage of tomorrow: how to think like a cloud provider

By John Martin, principal technologist, NetApp ANZ

Cloud and the changing role of IT

It’s no secret the cloud is having a drastic impact on the IT landscape. For IT managers and CIOs, it means changing the entire way they think about IT. Rather than being custodians of infrastructure, they increasingly have to act as brokers of IT services.

When it comes to infrastructure services, this transition has largely been made possible by advances in server and network virtualization. At the beginning of my career, we gave our servers names, and if they got sick, we carefully nursed them back to health. Now, virtual servers within the same class of service are, by definition, pretty much identical. We give them numbers, and if they get sick, we shoot them and build a new one from a template.

However, when it comes to data storage, the shifting role of IT has been more complex. The only thing that differentiates otherwise identical servers is the data they hold, and that’s where the problem comes in: nobody wants to shoot their data. If all your virtual machines go down, you have a nasty but solvable problem on your hands. But if you lose all your data, you’re highly likely to lose your entire business.

The cloud has made it easy to move a virtual server to different hardware, or to completely different datacenters and providers. However, this has given rise to two significant challenges. First, the TCP/IP protocol was never designed for machines that instantly teleport from one place to another. This is the primary problem that software-defined networking is now helping to address. The second challenge is that moving large amounts of data quickly between datacenters using traditional approaches is simply unworkable. Data, unlike virtual machines, has gravity and is therefore extremely difficult to move around.
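To see why data has gravity, a back-of-envelope calculation helps. The sketch below estimates wall-clock transfer time for a bulk data move; the data volume, link speed and 70% link-efficiency figure are illustrative assumptions, not measurements from any real migration:

```python
def transfer_days(data_tb, link_gbps, efficiency=0.7):
    """Estimate days to move data_tb terabytes over a link_gbps link.

    efficiency is an assumed factor for protocol overhead and
    contention; real links rarely sustain their nominal rate.
    """
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400                         # seconds -> days

# Moving 500 TB over a dedicated 10 Gbps link takes the better part
# of a week, before retries, throttling or verification passes.
print(f"{transfer_days(500, 10):.1f} days")        # ≈ 6.6 days
```

A virtual machine image can be rebuilt from a template in minutes; the half a petabyte behind it cannot, which is exactly the asymmetry the article describes.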

NetApp has been working to help our customers solve these and other problems better than any other storage vendor. This is why many of the world’s best cloud vendors, such as Digital Sense and Peak Colo, have built their clouds on NetApp storage. But these features aren’t just for cloud service providers. When it comes to buying storage, businesses are beginning to think like cloud providers in the way that they approach data management challenges.

Upping the intelligence factor

To respond to the challenges posed by cloud infrastructures, storage needs to be intelligent. In particular, this means features like service automation and analytics, which orchestrate storage resources according to business needs, resulting in faster response times, reduced risk of human error and simplified storage administration. With automation, service levels can also be standardised and monitored, enabling smarter provisioning of resources and greater efficiency.
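One way to picture standardised service levels is as a policy table that provisioning automation consults, so every volume in a class gets identical settings. A minimal sketch; the class names and settings are hypothetical, not drawn from any vendor's catalogue:

```python
# Hypothetical service catalogue: each class fixes its settings up
# front, so provisioning is repeatable and auditable.
SERVICE_CLASSES = {
    "gold":   {"replicas": 3, "snapshots_per_day": 24, "tier": "ssd"},
    "silver": {"replicas": 2, "snapshots_per_day": 4,  "tier": "hybrid"},
    "bronze": {"replicas": 1, "snapshots_per_day": 1,  "tier": "capacity"},
}

def provision(volume_name, service_class):
    """Return a volume spec derived entirely from the service class."""
    spec = SERVICE_CLASSES[service_class]
    return {"volume": volume_name, **spec}

print(provision("crm-db", "gold"))
```

Because a human never types the individual settings, two "gold" volumes can never drift apart by accident, which is where the reduced risk of human error comes from.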

Storage efficiency is also important. Developments such as replication, point-in-time snapshots, deduplication, compression, and thin provisioning help businesses make the most of their IT infrastructure. Incorporating such space-saving technology in your infrastructure will help reduce storage requirements and costs.
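Deduplication, for instance, typically works by fingerprinting blocks and storing each unique block only once. A toy illustration of the idea, not a real storage engine:

```python
import hashlib

def dedupe(blocks):
    """Store each unique block once.

    Returns (store, recipe): store maps fingerprint -> block data,
    recipe lists the fingerprint of each logical block in order, so
    the original stream can be reconstructed.
    """
    store, recipe = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only the first copy
        recipe.append(digest)
    return store, recipe

# Three VMs built from the same OS image share one stored copy.
blocks = [b"os-image"] * 3 + [b"user-data"]
store, recipe = dedupe(blocks)
print(len(blocks), "logical blocks ->", len(store), "stored")  # 4 -> 2
```

The win is largest exactly where the article started: fleets of near-identical virtual servers stamped out from templates.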

Finally, an intelligent infrastructure should offer virtual storage tiering, which combats the inevitable uncertainties of data growth. Businesses should look for a self-managing, data-driven solution that automatically moves data according to real-time assessment of workload-based priorities.
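The idea behind workload-driven tiering can be sketched as a placement rule keyed on observed access frequency; the thresholds and tier names below are invented for illustration, and a real system would reassess them continuously rather than once:

```python
def assign_tier(reads_per_day):
    """Map a workload's access rate to a storage tier (toy policy)."""
    if reads_per_day >= 100:
        return "ssd"        # hot: low-latency flash
    if reads_per_day >= 5:
        return "sas"        # warm: performance disk
    return "capacity"       # cold: cheap, dense disk

# Hypothetical workloads with measured daily read counts.
workloads = {"oltp-db": 5000, "file-share": 20, "archive": 0}
placement = {name: assign_tier(rate) for name, rate in workloads.items()}
print(placement)
```

Because placement is derived from measured behaviour rather than a one-off guess, the policy adapts as data cools or heats up, which is what makes tiering self-managing.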

Becoming immortal

Now more than ever, data has to stick around. You want to keep it until you decide that it is OK to delete it. With the ability of advanced analytics to extract value from masses of unstructured data, information that may seem irrelevant today could become the basis of tomorrow’s sustained profitability. This is why you need an infrastructure that allows data to be kept safely and efficiently for a long time. We call this an ‘immortal’ data infrastructure.

Integrated data protection contributes to data ‘immortality’, delivering availability, backup, compliance, disaster recovery and virus-scanning services directly from storage. This helps IT managers ensure that data is online and accessible to support smooth business operations at all times. On top of this, non-disruptive operations are critical. To achieve continuous data availability, look for solutions that offer workload migration support and load balancing. Such solutions will also permit the upgrade, repair, and replacement of nodes without service interruption.
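Point-in-time snapshots, a building block of this kind of protection, are commonly implemented with copy-on-write style block sharing: taking a snapshot is nearly free, and space is consumed only as the live data diverges from it. A toy model of that behaviour, not how any particular array implements it:

```python
class Volume:
    """Toy copy-on-write volume: snapshots share blocks with the live
    volume and only diverge when a block is overwritten."""

    def __init__(self):
        self.blocks = {}      # addr -> data (live view)
        self.snapshots = {}   # name -> frozen addr -> data map

    def write(self, addr, data):
        self.blocks[addr] = data

    def snapshot(self, name):
        # A snapshot is just a frozen view of the current block map;
        # shared blocks cost no extra space until they change.
        self.snapshots[name] = dict(self.blocks)

    def read(self, addr, snapshot=None):
        source = self.snapshots[snapshot] if snapshot else self.blocks
        return source.get(addr)

v = Volume()
v.write(0, b"v1")
v.snapshot("before-upgrade")
v.write(0, b"v2")                          # live data diverges
print(v.read(0))                           # current state
print(v.read(0, "before-upgrade"))         # preserved state
```

This is why frequent snapshots are practical at all: the old version of a block is retained only once, however many snapshots reference it.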

Embedded data security is also crucial for any datacentre procurement. A solution with functions such as role-based administration, encryption and antivirus software gives datacentre managers the flexibility and control to manage the security of data assets.
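Role-based administration boils down to mapping each role to the actions it may perform and checking every operation against that map. A minimal sketch with made-up role and action names:

```python
# Hypothetical roles: each maps to the set of actions it permits.
ROLE_PERMS = {
    "storage-admin":   {"provision", "snapshot", "delete"},
    "backup-operator": {"snapshot"},
    "auditor":         set(),   # read-only reporting, no actions
}

def allowed(role, action):
    """Return True only if the role explicitly grants the action."""
    return action in ROLE_PERMS.get(role, set())

print(allowed("backup-operator", "snapshot"))  # permitted
print(allowed("backup-operator", "delete"))    # denied
```

The deny-by-default lookup (`get(role, set())`) is the important design choice: an unknown role or action is refused rather than quietly permitted.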

Reaching infinity (and beyond!)

Finally, if you’re thinking like a cloud provider, you’re thinking about how to scale up and out. Even a modest initial implementation should enable you to attain the top-line benefits of a cloud-enabled Big Data infrastructure. This is an infinite infrastructure that won’t limit your ability to store or retrieve data, no matter how much of it you decide to keep.

Having a storage system that can readily scale up or down provides the agility needed to map IT closely to business operations. This enables seamless, on-demand capacity for end users, regardless of network protocol. Look for a unified scale-out solution that provides an adaptable, always-on storage infrastructure to accommodate your virtualized environment.

As well as scalability, it is important to be in control of all systems. Having a single unified platform where servers, storage and networks are consolidated eases the management of workload requirements. This home base will allow managers to flexibly meet changing business requirements, regardless of protocol, file type and source, thereby simplifying the work of administrators and reducing response times. On top of this, secure multi-tenancy means that authorisation levels can be customised so that the datacentre can serve multiple customers without compromising security or data privacy. Look for solutions that offer secure logical separation of data and administration, without compromising storage efficiency and responsiveness.
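Secure logical separation can be pictured as scoping every object key by tenant, so one tenant's requests simply cannot address another tenant's data even though both share the same physical store. A toy sketch, not a real multi-tenancy implementation:

```python
class TenantStore:
    """Toy multi-tenant store: one shared backend, per-tenant namespaces."""

    def __init__(self):
        self._data = {}   # (tenant, key) -> value

    def put(self, tenant, key, value):
        self._data[(tenant, key)] = value

    def get(self, tenant, key):
        # Every lookup is scoped by tenant, so there is no way to
        # even express a cross-tenant read in this interface.
        return self._data.get((tenant, key))

s = TenantStore()
s.put("acme", "report", b"q1-results")
print(s.get("acme", "report"))     # acme sees its own object
print(s.get("globex", "report"))   # another tenant sees nothing
```

Separation enforced by the interface itself, rather than by per-request checks bolted on afterwards, is what lets shared infrastructure keep its efficiency without leaking data between customers.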

If you’re going to think like a cloud provider, you’ll need enough agility to meet unpredictable data growth. Whether the company is building up its datacentre from scratch or upgrading an existing legacy system, the stability of business operations should not be compromised. An agile IT foundation helps maintain an enduring infrastructure that can grow and evolve to deliver business efficiency, optimising data management at scale and enabling innovations that help organisations stay ahead of the curve.