The future is now. Memory prices are dropping drastically, and companies are investing heavily in memory-based storage. That doesn’t mean spinning magnetic disks will disappear anytime soon: their densities continue to rise, and their price per gigabyte remains significantly lower than memory’s. This presentation dives into the methods one can employ to increase the performance, and in turn the value, of this slower but still essential data storage technology.

Any organization that takes a moment to study the data on its primary storage system will quickly realize that the majority (as much as 90 percent) of the data stored there has not been accessed for months, if not years. Moving this data to a secondary tier of storage could free up a massive amount of capacity, deferring a storage upgrade for years. Performing this analysis regularly is called data management, and proper management of data can not only reduce costs but also improve data protection, retention, and preservation.

Now that you have become acquainted with basic container technologies and the storage challenges of supporting containerized applications in production, let’s take a deeper dive into what differentiates this technology from the virtual machines you are used to. Containers can complement virtual machines or replace them outright, since they promise the ability to scale far higher. They can easily be ported from one physical server to another, or from one platform, such as on-premises infrastructure, to another, such as a public cloud provider like Amazon AWS. In this Webcast, we’ll explore container best practices that address the various challenges around networking, security, and logging. We’ll also look at which types of applications lend themselves readily to a microservice architecture, and which may require additional investment to refactor or re-architect to take advantage of microservices.

When the SNIA Ethernet Storage Forum (ESF) last looked at the Ethernet Roadmap for Networked Storage in 2015, we anticipated a world of rapid change. The list of advances in 2016 is nothing short of amazing:

•New adapters, switches, and cables have been launched supporting 25, 50, and 100 Gb Ethernet speeds, with support from major server vendors and storage startups
•Multiple vendors have added or updated support for RDMA over Ethernet
•The growth of NVMe flash and the release of the NVMe over Fabrics standard are driving demand for both faster speeds and lower latency in networking
•The growth of cloud, virtualization, hyper-converged infrastructure, object storage, and containers is increasing the popularity of Ethernet as a storage fabric

The world of Ethernet in 2017 promises more of the same, so we’re revisiting the topic with a look ahead at what’s in store. With so many advances to keep track of, SNIA ESF is here to help you keep up. Here are some of the things to watch in the upcoming year:
•Learn what is driving the adoption of faster Ethernet speeds and new Ethernet storage models
•Understand the different copper and optical cabling choices available at different speeds and distances
•Debate how other connectivity options will compete against Ethernet for the new cloud and software-defined storage networks
•And finally look ahead with us at what Ethernet is planning for new connectivity options and faster speeds such as 200 and 400 Gigabit Ethernet

The momentum is strong with Ethernet, and we’re here to help you keep on top of the lightning-fast changes. Join us for a look at the future of Ethernet for storage in the SNIA ESF webcast on December 1st; register here.

Computer users aren’t the top data producers anymore. Machines are. Raw data from sensors, labs, forensics, and exploration is surging into data centers and overwhelming traditional storage. There is a solution: high-performance, massively scale-out NAS with data-aware intelligence. Join us as Jeff Cobb, VP of Product Management at Qumulo, and Taneja Group Senior Analyst Jeff Kato explain Qumulo’s data-aware scale-out NAS and the seismic shift it represents in storing and processing machine data. We will review how customers are using Qumulo Core, and Nick Rathke of the University of Utah’s Scientific Computing and Imaging (SCI) Institute will join us to share how SCI uses Qumulo to cut raw image processing from months to days.

You have been told, or have determined, that you need (or want) to use cloud storage. OK, now what? What type of cloud storage do you need or want, or do you simply want cloud storage? What are your options and your application requirements, including Performance, Availability, Capacity, and Economics (PACE), along with access methods and interfaces? Where are your applications now, and where will they be located? What are your objectives for using cloud storage, or have you simply heard, or been told, that it’s cheaper? Join us in this discussion exploring your options and the considerations for cloud storage decision making.

In this new Age of Data, data is growing at an exponential rate and is being spread across on-premises applications, cloud applications, SaaS applications, and BYOD. Protecting this data, regardless of its location, has never been more important. With all of this increased flexibility, data protection can become challenging. See how you can protect your data with a single catalogue, regardless of its location, allowing you to focus on delivering the RTO and RPO demanded by the business.

The first wave of adoption of container technology was focused on microservices and ephemeral workloads. The next wave of adoption won’t be possible without persistent, shared storage. This webcast will provide an overview of Docker containers and the inherent challenge of persistence when containerizing traditional enterprise applications. We will then examine the different storage solutions available for solving these challenges and provide the pros and cons of each.

In this webcast we will cover:
•Overview of Containers
◦Quick history, where we are now
◦Virtual machines vs. Containers
◦How Docker containers work
◦Why containers are compelling for customers
◦Challenges
◦Storage
•Storage Options for Containers
◦NAS vs. SAN
◦Persistent and non-persistent
•Future Considerations
◦Opportunities for future work

This webcast should appeal to those interested in understanding the basics of containers and how they relate to the storage used with them.
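As one concrete illustration of the “NAS vs. SAN” and persistence options in the outline above, here is a minimal sketch, not from the webcast itself, of attaching NAS-backed (NFS) storage to a container as a named Docker volume using the Python Docker SDK (docker-py); the NFS server address and export path are hypothetical placeholders.

    # A minimal sketch: an NFS-backed named volume for persistent container storage.
    # Assumes the docker-py package (pip install docker) and a reachable NFS server;
    # the address 10.0.0.5 and export /exports/appdata are hypothetical.
    import docker

    client = docker.from_env()

    # Create a named volume whose data lives on the NFS server, not the host.
    volume = client.volumes.create(
        name="appdata",
        driver="local",
        driver_opts={
            "type": "nfs",
            "o": "addr=10.0.0.5,rw",
            "device": ":/exports/appdata",
        },
    )

    # Any container mounting this volume sees the same persistent data,
    # even after the container itself is removed.
    client.containers.run(
        "alpine",
        'sh -c "echo hello > /data/hello.txt"',
        volumes={"appdata": {"bind": "/data", "mode": "rw"}},
        remove=True,
    )

Because the data lives on the NFS share rather than on the host, the container can be rescheduled to another server and re-attach the same volume, which is the essence of the persistent option in the outline.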

Organizations have more options than ever when it comes to deciding how and where to store their data. In an ideal world, low-cost high-speed storage would be nearly infinite. Practicality, however, demands that IT groups determine how best to leverage their own storage (including local, NAS and SAN options), and how cloud storage can fit into the overall architecture.

This presentation will start with recommendations for classifying storage requirements based on various needs, ranging from lower-cost, long-term data archival to highly available, fault-tolerant, geo-replicated architectures, along with the vast sea of data that sits between these extremes. The focus will be on the many different ways organizations can leverage existing and new features in the Windows Server platform and the many available storage-related services in the Microsoft Azure cloud.

Also covered will be information about building a private cloud architecture in your own datacenter, using the Microsoft Azure Stack, System Center, and related OS and cloud options.

Past infrastructures provided compute, storage, and network for static enterprise deployments that changed every few years. This talk will analyze the consequences of a world where production SAP and Spark clusters, including their data, can be provisioned in minutes at the push of a button.

What does this mean for the IT architecture of an enterprise? How do you stay in control in a super-agile world?

Businesses are extracting value from more data, from more sources, and at increasingly real-time rates. Spark and HANA are just the beginning. This webcast details existing and emerging solutions for in-memory computing that address this market trend, and the disruptions that happen when combining big data (petabytes) with in-memory, real-time requirements. It provides an overview of key solutions and their trade-offs (Hadoop/Spark, Tachyon, HANA, in-memory NoSQL, etc.) and the related infrastructure (DRAM, NAND, 3D XPoint, NVDIMMs, high-speed networking), and discusses the disruption to infrastructure design and operations when “tiered memory” replaces “tiered storage”.
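To make the “tiered memory” idea concrete, here is a minimal PySpark sketch, not from the webcast itself, that pins a working set in the DRAM tier and lets Spark spill to a slower tier only when memory runs out; a local Spark installation is assumed, and the dataset path and column name are hypothetical placeholders.

    # A minimal sketch of tiered memory in Spark: hot data is cached in DRAM,
    # with automatic spill to a slower tier (local disk) when memory is exhausted.
    # Assumes a local Spark install; the input path is a hypothetical placeholder.
    from pyspark.sql import SparkSession
    from pyspark import StorageLevel

    spark = SparkSession.builder.appName("tiered-memory-sketch").getOrCreate()

    events = spark.read.parquet("/data/events")  # hypothetical dataset

    # MEMORY_AND_DISK keeps partitions in RAM and spills the overflow to disk:
    # a two-tier memory/storage hierarchy managed by the framework itself.
    events.persist(StorageLevel.MEMORY_AND_DISK)

    # Repeated queries over the cached data now run at memory speed.
    events.groupBy("sensor_id").count().show()

    spark.stop()

The same persist-with-spill pattern extends to DRAM backed by NVDIMM or flash tiers, which is the design shift from tiered storage to tiered memory that the webcast examines.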

In this part of the series, “Everything You Wanted To Know about Storage But Were Too Proud To Ask,” we’re going to be focusing on the network aspect of storage systems.

As with any technical field, it’s all too easy to dive into jargon and expect people to know exactly what you mean. Unfortunately, some of the terms may have alternative meanings in other areas of technology. In this Webcast, we look at some of those terms specifically and discuss them as they relate to storage networking systems.

For people who are familiar with data center technology, whether it be compute, programming, or even storage itself, some of these concepts may seem intuitive and obvious… until you start talking to people who are really into this stuff. This series of Webcasts will be your Secret Decoder Ring, unlocking the mysteries of what is going on when you hear these conversations.

Today’s storage world would appear to be divided into three major and mutually exclusive categories: block, file, and object storage. Much of the marketing that shapes user demand suggests that these are three quite distinct animals, and many systems are sold exclusively as SAN for block, NAS for file, or object stores. And object is often conflated with cloud, a consumption model that can in reality be block, file, or object.

But a fixed taxonomy that divides the storage world this way is very limiting, and can be confusing; for instance, when we talk about cloud. How should providers and users buy and consume their storage? Are there other classifications that might help in providing storage solutions to meet specific or more general application needs?

This webcast will explore clustered storage solutions that not only provide multiple end users with access to shared storage over a network, but also allow the storage itself to be distributed and managed across multiple discrete storage systems. We’ll discuss:
•General principles, specific clustered and distributed systems, and the facilities they provide on top of the underlying storage
•Better-known file systems like NFS, GPFS, and Lustre, along with a few of the less well known
•How object-based systems like S3 have blurred the lines between themselves and traditional file-based solutions

This webcast should appeal to those interested in exploring some of the different ways of accessing and managing storage, and how that might affect how storage systems are provisioned and consumed. POSIX and other acronyms may be mentioned, but no rocket science beyond a general understanding of the principles of storage will be assumed. Contains no nuts and is suitable for vegans!
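As a taste of how those lines blur, here is a minimal sketch, not from the webcast itself, contrasting file-style (POSIX) access with object-style (S3) access to the same piece of data, written in Python with the boto3 library; the bucket name and file path are hypothetical placeholders.

    # A minimal sketch contrasting POSIX file access with S3-style object access.
    # Assumes the boto3 package with credentials configured; the bucket name and
    # the mounted file path are hypothetical placeholders.
    import boto3

    # File access: navigate a directory hierarchy and read bytes through the OS.
    with open("/mnt/shared/reports/q3.csv", "rb") as f:
        file_bytes = f.read()

    # Object access: a flat namespace of keys, whole-object put/get over HTTP.
    s3 = boto3.client("s3")
    s3.put_object(Bucket="example-bucket", Key="reports/q3.csv", Body=file_bytes)
    obj = s3.get_object(Bucket="example-bucket", Key="reports/q3.csv")
    object_bytes = obj["Body"].read()

    # The "directory" in the key is just a naming convention; gateways and
    # multi-protocol systems exploit this to expose one data set both ways.
    assert file_bytes == object_bytes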

The storage performance benchmarking dynamic duo, Mark Rogov and Ken Cantrell, are back. Having covered storage performance benchmarking fundamentals, the system under test, and most recently block components, they will focus this fourth installment of the Webcast series on file components.

Register now to learn why the file world is different from the block world. Mark and Ken will walk through everything from basic filesystem theory to how filesystem data layout affects performance; a simple illustration of the layout effect follows below.
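Here is a minimal Python sketch, not part of the webcast material, that times sequential versus random reads over the same file; a Unix-like system is assumed, the test file path is a hypothetical placeholder, and the file must be pre-created with at least 16 MiB of data.

    # A minimal sketch of why data layout matters: time sequential vs. random
    # 4 KiB reads over the same file. Assumes a Unix-like OS (os.pread) and an
    # existing test file of at least 16 MiB; the path is a hypothetical placeholder.
    import os
    import random
    import time

    PATH = "/tmp/testfile"   # hypothetical: pre-create with >= 16 MiB of data
    BLOCK = 4096             # bytes per read
    COUNT = 4096             # number of reads (4096 * 4 KiB = 16 MiB)

    fd = os.open(PATH, os.O_RDONLY)
    size = os.path.getsize(PATH)

    def timed_reads(offsets):
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, BLOCK, off)
        return time.perf_counter() - start

    # Sequential: monotonically increasing offsets, friendly to prefetching.
    seq = timed_reads(range(0, COUNT * BLOCK, BLOCK))

    # Random: offsets scattered across the file, defeating prefetch and forcing seeks.
    rnd = timed_reads(i * BLOCK for i in random.sample(range(size // BLOCK), COUNT))

    os.close(fd)
    print(f"sequential: {seq:.3f}s   random: {rnd:.3f}s")

On a cold cache (or a file much larger than RAM), the gap between the two numbers is exactly the data layout effect Mark and Ken will dig into.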

At the core of next-generation data centers are software-defined data infrastructures that enable, protect, preserve, and serve applications and data, along with their resulting information services. The core components of a software-defined data infrastructure are hardware, software, servers, and storage, configured (defined) to provide services that deliver application Performance, Availability, Capacity, and Economics (PACE). Just as there are different types of environments, applications, and workloads, various options, technologies, and techniques exist for virtual servers and storage. Join us in this session to discuss trends, technologies, tools, techniques, and services around storage and virtualization for today, tomorrow, and the years to come.

Topics include:
- Data Infrastructures exist to support applications and their underlying resource needs
- Software Defined Data Infrastructures (SDDI) are what enable Software Defined Data Centers
- Server and storage virtualization are better together, with and without CI/HCI
- Many different facets (types) of Server virtualization and virtual storage
- When, where, why and how to use storage virtualization and virtual storage

Docker is steadily gaining popularity within the industry, partly due to the rapid upgrade and redeployment properties containers can provide. While the ideal world consists of entirely stateless applications, the real world is a bit more complicated, and storage requirements are inevitable. In this session we'll explore the primary means of creating, attaching, and managing storage with Docker containers.
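As a preview of that create/attach/manage lifecycle, here is a minimal sketch, not from the session itself, using the Python Docker SDK (docker-py); the volume name, image, and commands are hypothetical placeholders.

    # A minimal sketch of the storage lifecycle with Docker containers:
    # create a named volume, attach it to a container, then manage the data.
    # Assumes the docker-py package (pip install docker) and a local Docker daemon.
    import docker

    client = docker.from_env()

    # Create: a named volume managed by the Docker daemon.
    vol = client.volumes.create(name="demo-data")

    # Attach: mount the volume into a container at /data and write to it.
    client.containers.run(
        "alpine",
        'sh -c "echo persisted > /data/state.txt"',
        volumes={"demo-data": {"bind": "/data", "mode": "rw"}},
        remove=True,
    )

    # Manage: the data outlives the container; a second container sees it.
    out = client.containers.run(
        "alpine",
        "cat /data/state.txt",
        volumes={"demo-data": {"bind": "/data", "mode": "ro"}},
        remove=True,
    )
    print(out.decode())   # -> "persisted"

    vol.remove()          # explicit cleanup when the data is no longer needed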

Containers are the latest in a line of new and innovative ways of packaging, managing, and deploying distributed applications. In this webcast, we’ll introduce the concept of containers: what they are and the advantages they bring, illustrated by use cases; why you might want to consider them as an app deployment model; and how they differ from VMs or bare-metal deployments.

We’ll follow up with a look at what is required from a storage perspective when using Docker, one of the leading systems that provides a lightweight, open, and secure environment for the deployment of containers. Finally, we’ll round out our Docker introduction by presenting the takeaways from DockerCon, an industry event for makers and operators of distributed applications built on Docker, which took place in Seattle in June of this year.

Attendees will learn what software defined storage (SDS) is and is not, as well as the different SDS variations. There will be an in-depth discussion of the strengths and weaknesses of each SDS variation.

Meeting storage-related requirements has been a long-standing challenge for IT organizations, and added workload requirements from cloud and software-defined architectures can quickly add to the burden. Common goals are to implement solutions that provide high availability and high performance with low capital and operational costs. The Windows Server 2016 platform includes a tremendous list of new and improved features that are available out of the box, which makes the biggest barrier understanding how, when, and why to implement them.

This presentation will cover a wide array of different features in the Windows Server platform, including Storage Spaces and Storage Spaces Direct; SMB 3.x improvements; storage tiering; Storage QoS; Storage Replica; data de-duplication; and many others. When compared to the costs and administrative complexity of traditional SANs, these tools can provide ready solutions for environments of all sizes and types. The focus will be on technical details about the features and capabilities of the Windows Server platform, and how organizations can make best use of them.

Join Anil Desai, an independent consultant with over 20 years of experience in architecting, implementing, and managing IT software and datacenter solutions. He has worked extensively with IT management, development, and database technology. Anil holds many technical certifications and is a twelve-time Microsoft MVP Award recipient (currently Cloud/Datacenter Management).

Anil is the author of over 20 technical books focusing on the Windows Server platform, virtualization, databases, and IT management best practices. He is also a frequent contributor to IT publications and conferences.

While Software Defined Storage (SDS) solutions promise us the world, they have recently begun to show their shortcomings, almost all of which center on the hardware. Not all commodity hardware is created equal, and not all SDS solutions are equipped to handle these variations. This gap becomes problematic and in many cases will impact everything from overall functionality to performance.

Join us to discuss these shortcomings and how to not only resolve them but also prevent them, from both a hardware and a software standpoint.

The Enterprise Storage channel has the most up-to-date, relevant content for storage and infrastructure professionals. As data centers evolve with big data, cloud computing and virtualization, organizations are going to need to know how to make their storage more efficient. Join this channel to find out how you can use the most current technology to satisfy your business and storage needs.