Top 7 Persistent Storage Capabilities for Running Docker Containers

Docker Containers and DevOps make apps portable but also strain infrastructure in new ways. Are you ready for these new IT challenges? Learn the top 7 persistent storage capabilities needed to containerize enterprise apps.

Enterprises are racing to differentiate themselves with faster, higher quality software releases. Containers can help you develop better quality software while also lowering maintenance and IT costs for your applications. Some companies are folding whole application stacks into Continuous Integration and Continuous Deployment (CI/CD) tool chains or containerizing traditional applications one component at a time. And they’re not limiting themselves to stateless applications. They’re containerizing enterprise applications too.

This week, our engineers visited two IT/Operations teams who are evaluating containers: one at a large cable company and another at a large bank. These are the latest in a stream of interested infrastructure teams. It’s clear that CI/CD brings new challenges to infrastructure and IT teams. Here are some of the concerns all of our customers have raised:

How do you satisfy the requirements of diverse, changing users and applications?

When stateful application containers move between cloud and on-premises, should the data move with them?

How do you protect and secure data when you move Docker Containers to a new environment, for example from dev to staging or from on-premises to cloud?

Deploying shared, consolidated IT resources is great for cost savings, but how do you ensure you meet increasingly stringent Service Level Agreements (SLAs) for each group of stakeholders?

Can your system deliver the reliability and availability mission critical production operations require?

When running containers on-premises, how do you give developers the automation and convenience they’re used to getting from the cloud?

1. Meeting diverse user and application requirements

Changing application requirements can be complex for IT.

Though storage requirements for containers may initially be limited, user needs and application requirements will grow. So it’s crucial to find a system that can meet diverse application and performance requirements and be customized and tailored to the needs of different users. Today, the best flash storage solutions can optimize the performance, availability and cost characteristics of all-flash, hybrid flash and multi-cloud infrastructure to strike the right balance for different workloads.

A strong persistent storage solution should do the same. It should deliver predictable performance for unpredictable environments and handle mixed workloads with ease. For example, it should be able to present all-flash for transactional workloads and hybrid-flash for unstructured data from the same pane of glass. Similarly, DevOps teams should be able to provision these diverse storage resources into the same container using the container and DevOps tools they know and like.

In addition, it’s difficult to predict every corner use case DevOps teams will come up with. Therefore, the underlying storage should support standardized APIs for end-users to build their own interfaces and bridge any gaps. A perfect example here: most container storage management interfaces today don’t do full Create, Read, Update, Delete (CRUD). On most persistent storage solutions, the Update function is missing. This feature is important for volume resizing, snapshotting or any time you need to change underlying storage characteristics, such as when adjusting a performance policy. Ensuring the storage system supports basic CRUD functions via REST or a similar API will enable customization to fully support your applications.
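As a concrete illustration, the missing Update operation might look like the sketch below: a tiny client that builds full CRUD requests for a hypothetical storage REST API, including the PUT needed to resize a volume or change its performance policy. The endpoint paths, payload fields and `perf_policy` name are illustrative assumptions, not any specific vendor's API.

```python
# Sketch of full CRUD against a hypothetical storage REST API.
# Endpoint paths and payload fields are illustrative assumptions,
# not a real vendor's interface.

import json

BASE = "/api/v1/volumes"

def create_volume(name, size_gb):
    # Create: POST a new volume definition.
    return ("POST", BASE, {"name": name, "size_gb": size_gb})

def read_volume(name):
    # Read: GET the current state of a volume.
    return ("GET", f"{BASE}/{name}", None)

def update_volume(name, size_gb=None, perf_policy=None):
    # Update: the operation many container storage interfaces omit.
    # Needed for online resizing or changing a performance policy.
    body = {}
    if size_gb is not None:
        body["size_gb"] = size_gb
    if perf_policy is not None:
        body["perf_policy"] = perf_policy
    return ("PUT", f"{BASE}/{name}", body)

def delete_volume(name):
    # Delete: remove the volume once no container references it.
    return ("DELETE", f"{BASE}/{name}", None)

# Grow a CI data volume from 100 GB to 250 GB without recreating it.
method, path, body = update_volume("ci-data", size_gb=250)
print(method, path, json.dumps(body))
```

Without that Update path, the only workaround is delete-and-recreate, which is exactly what you cannot do to a stateful volume.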

Since modern enterprises are multi-protocol and will continue to be, storage should support multiple protocols. Greenfield applications will likely use iSCSI over commodity Ethernet for simplicity, while dedicated storage fabrics such as Fibre Channel, which separate application connectivity and storage domains, dominate transaction-heavy workloads such as traditional databases.

2. Any application, any platform, anywhere

Containerization makes bold and true claims of running apps on any platform. The story for data is more complex. An agnostic storage platform that caters to multiple operating systems, environments and orchestration platforms is a key requirement.

Introducing the new application migration. With the right persistent storage, data movement can be this easy too.

Most enterprises rely on stateful Windows and Linux applications and will likely continue to do so when using containers. The Docker interface is standard on both operating systems, orchestration platforms will run hybrid clusters with both Linux and Windows nodes, and Microsoft platforms are becoming increasingly relevant deployment targets. It’s important to select a storage platform that can confidently deliver storage and data services to all of these operating systems and orchestration platforms.

While exploring containers it’s natural to look at cloud projects, as well as related SaaS, PaaS and IaaS operating models. Containers are cloud-native. Your data is not. Moving traditional transactional applications to the cloud can be challenging, especially when you need to meet traditional SLAs or have concerns about data security, reliability and availability. Since storage provides data services to your container environments, ensure it is aligned with your preferred cloud vendors, service and payment options, including multi-cloud, hybrid cloud, cloud native environments, and pay-as-you-go. In addition, your storage should make data portability as easy to achieve as application portability.

3. Access control, security and conformance

Data is abstracted to container users as a named resource with parameters and instructions for where the stateful mount point should reside inside the container. In large organizations, developers are divided into separate teams serving different lines of business. To reduce clutter, misunderstandings and human errors, you’ll need a clean separation of named resources, including data resources, between teams.

To be effective, security policies and controls should be easy for IT to implement and easy for users to adopt.

Control and data plane security are fundamental in any storage environment. Encapsulating the control plane with SSL and enforcing authentication is mandatory. Running iSCSI over dedicated, secure VLANs or properly zoned Fibre Channel switches is good storage security hygiene. Encryption of data at rest eases operational hassles and helps meet the regulations associated with sensitive record keeping. It’s essential that these capabilities extend to container environments so industry security standards are upheld.

Provisioning storage resources for a container should be easy for users. Without being storage experts, DevOps teams should be able to set up defaults that ensure conformance with the organization’s requirements. For example, such defaults should enable them to meet backup and recovery objectives and performance policies. When users provision a resource, it should simply inherit all the attributes necessary to conform to those objectives.
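One way to picture this inheritance is a small helper that merges organization-wide defaults into every volume request, so a provisioned volume conforms unless a team explicitly overrides an attribute. The attribute names (`backup_schedule`, `perf_policy`, `encrypted`) and their values are hypothetical, chosen only to make the sketch concrete.

```python
# Sketch: every provisioned volume inherits organization defaults
# unless the request explicitly overrides them. Attribute names
# (backup_schedule, perf_policy, encrypted) are hypothetical.

ORG_DEFAULTS = {
    "backup_schedule": "hourly",   # meets the recovery-point objective
    "perf_policy": "general",      # safe baseline performance policy
    "encrypted": True,             # data-at-rest encryption on by default
}

def provision_volume(name, size_gb, **overrides):
    spec = dict(ORG_DEFAULTS)      # start from conforming defaults
    spec.update(overrides)         # override only what was asked for
    spec.update({"name": name, "size_gb": size_gb})
    return spec

# A developer asks only for a name and a size...
vol = provision_volume("build-cache", 50)
# ...and still gets backups, a performance policy and encryption.
```

The point of the pattern: the person provisioning never has to know the backup or security policy exists, yet every volume they create conforms to it.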

4. Ability to handle mission-critical data and operations

Data serving your traditional business applications today will remain important once those applications are containerized. Deployments per day is a popular success metric for DevOps teams, and building images and deploying containers at high velocity will push infrastructure to its limits. Just as you scrutinize the Reliability, Availability and Serviceability (RAS) capabilities of any storage platform for production, it’s important to assess all what-if scenarios when you diverge from traditional storage deployment models.

Enterprise-class persistent storage is key to running containers in production without overburdening IT.

Container orchestration platforms are powerful tools that require careful design and implementation. Without it, human factors cross boundaries they shouldn’t. Examples include architectural flaws that surface when network partitions occur, or simply not understanding the implications of container scaling. It’s essential that the storage system is aware of the layers above it, and that its integration with the container engine can arbitrate and properly fence access from improperly configured systems to maintain data integrity and prevent unnecessary outages.

5. Multi-tenancy and QoS

Isolating dev/test, staging and production is highly recommended, yet buying isolated islands of resources to support each one can be costly and complex. Instead of trying to manage disparate storage systems, consider a single storage system that can be securely partitioned to safely serve multiple container environments and hosts. Such a system should prevent naming conflicts between multiple container hosts and prevent access to data without permission.

Sharing a pool of resources is a simple and cost-effective approach.

Storage performance can sometimes be a scarce resource. Having basic capabilities to automatically limit noisy neighbors can work well in most single tenant environments.

Multitenant environments, or infrastructure pools that will be divided to serve many purposes, require stronger controls. Performance governors that restrict the amount of IOPS, throughput and capacity are essential for service providers who monetize infrastructure. Your system should allow you to provision performance limits for a given environment, container or volume. Such granular controls can also help a single infrastructure serve different purposes. For example, a test environment can be throttled so it does not impact production. In addition, clones of Tier-1 application data can be safely shared and co-mingled with DevOps environments without disrupting production IO.
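A per-tenant governor of this kind can be sketched as follows: each environment carries a purchased IOPS and throughput ceiling, and every requested QoS is clamped to it. The tenant names and limit values here are invented for illustration, not defaults of any product.

```python
# Sketch of a per-tenant performance governor: requested QoS is
# clamped to the limits assigned to each environment. Tenant names
# and numbers are invented for illustration.

TENANT_LIMITS = {
    "prod":    {"max_iops": 100_000, "max_mbps": 2_000},
    "staging": {"max_iops": 20_000,  "max_mbps": 500},
    "test":    {"max_iops": 5_000,   "max_mbps": 100},
}

def apply_qos(tenant, requested_iops, requested_mbps):
    limits = TENANT_LIMITS[tenant]
    # A tenant can never exceed its purchased ceiling, so a noisy
    # test environment cannot starve production of IO.
    return {
        "iops": min(requested_iops, limits["max_iops"]),
        "mbps": min(requested_mbps, limits["max_mbps"]),
    }

# A test pipeline asks for far more than its share...
qos = apply_qos("test", requested_iops=50_000, requested_mbps=1_000)
# ...and is throttled to its 5,000 IOPS / 100 MB/s allocation.
```

The same clamp applied at environment, container or volume granularity is what lets one physical pool safely serve prod, staging and test side by side.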

6. Advanced data services and automation for DevOps

Just as Docker gives developers application automation, the right persistent storage provides an easy button for DIY automated data management across the DevOps lifecycle.

To make it easy for DevOps teams to take full advantage of everything an external array offers, all features should be integrated directly with volume provisioning frameworks. Core features such as compression, deduplication and thin provisioning reduce the storage footprint by several factors. Data services such as snapshots, clones and replication, as well as the ability to present and import storage resources used elsewhere, are key differentiators that enable advanced use cases.

Integration testing with real data in CI/CD pipelines (on either primary or offloaded replicated data), lifting and shifting legacy data along with application containers, and cloning full data and application stacks are just a few of the advanced use cases that high-quality external storage, integrated into volume provisioning frameworks, enables.
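One of those use cases, presenting production data to a CI pipeline without copying it, could be driven by a naming convention as simple as the sketch below: snapshot the production volume, then expose a writable zero-copy clone to the pipeline. The request shapes are a hypothetical API, not a specific product's.

```python
# Sketch: derive a snapshot-plus-clone request so each CI pipeline
# run tests against a zero-copy clone of production data. Request
# shapes are a hypothetical API, not a specific product's.

def clone_for_pipeline(source_volume, pipeline_id):
    snapshot = f"{source_volume}-snap-{pipeline_id}"
    clone = f"{source_volume}-ci-{pipeline_id}"
    return [
        # 1. Point-in-time snapshot of the production volume.
        {"op": "snapshot", "volume": source_volume, "name": snapshot},
        # 2. Writable zero-copy clone for the pipeline to mutate freely.
        {"op": "clone", "source": snapshot, "name": clone},
    ]

# Each build gets its own disposable copy of the orders database.
steps = clone_for_pipeline("orders-db", "build-1742")
```

Because the clone is copy-on-write on the array, the pipeline can destroy its copy after every run at near-zero capacity cost, and production IO is never touched.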

7. Supportability

Adopting DevOps and leading transformation projects can be strenuous and difficult. A complete container strategy requires software, hardware, support and services, and often combines best-of-breed open source and proprietary technologies. Though most enterprises want the option to deploy containers into production, many lack the skills and expertise required to achieve their goals.

Consider your support strategy up front and choose a vendor that can be a strategic partner.

For the best results with data-centric workloads, consider which infrastructure components need traditional support and which can be self-supporting. Then choose a vendor who offers the combination of technologies, and support and services you need up front. The right vendor will give you freedom and flexibility to combine open source together with enterprise-class hardware, software, and service. Such vendors not only serve as strategic technology partners that support your container infrastructure across its entire lifecycle, but also accelerate your container projects.

Good advice when it comes to containers

Whether your container projects are in the discovery phase or already in production, always take storage and data into account. Here are some examples of the positive impact you can have: reduce build times, increase builds per day, and increase developer productivity.

Set up smoke tests that use real data to improve defect discovery and enable faster resolution than ever before. Reducing your storage footprint can help you monetize your infrastructure and enable new revenue streams. Being smart about your storage and data enables your organization to thrive and reap the benefits of modernizing your applications.