The Art of Storage Management

Any organization that takes a moment to study the data on its primary storage system will quickly realize that the majority (as much as 90 percent) of the data stored there has not been accessed for months, if not years. Moving this data to a secondary tier of storage could free up a massive amount of capacity, deferring a storage upgrade for years. Performing this analysis regularly is called data management, and proper management of data can not only reduce costs but also improve data protection, retention and preservation.
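
As a rough illustration of that kind of analysis, here is a minimal Python sketch that walks a directory tree and flags files whose last-access time is older than a cutoff. The scan root and threshold are hypothetical placeholders, and real tiering tools work from filesystem or array metadata at far larger scale (note too that volumes mounted with noatime will not record access times):

```python
import os
import time

SCAN_ROOT = "/mnt/primary"        # hypothetical primary-storage mount point
COLD_AFTER_SECONDS = 180 * 86400  # treat ~6 months idle as "cold"

def find_cold_files(root, cold_after):
    """Yield (path, days_idle, size) for files not accessed within the cutoff."""
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable or vanished entries
            idle = now - st.st_atime  # seconds since last access
            if idle > cold_after:
                yield path, idle / 86400, st.st_size

if __name__ == "__main__":
    cold = list(find_cold_files(SCAN_ROOT, COLD_AFTER_SECONDS))
    total_bytes = sum(size for _, _, size in cold)
    print(f"{len(cold)} cold files, {total_bytes / 1e9:.1f} GB eligible for tier 2")
```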

The FCIA FICON 101 webcast (on-demand at http://bit.ly/FICON101) described some of the key characteristics of the mainframe and how FICON satisfies the demands placed on mainframes for reliable and efficient access to data. FCIA experts gave a brief introduction to the layers of architecture (system/device and link) that the FICON protocol bridges. Using the FICON 101 session as a springboard, our experts return for FICON 201, where they will delve deeper into the architectural flow of FICON and how it leverages Fibre Channel to be an optimal mainframe transport.

Join this live FCIA webcast where you’ll learn:

- How FICON (FC-SB-x) maps onto the Fibre Channel FC-2 layer
- The evolution of the FICON protocol optimizations
- How FICON adapts to new technologies

Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is not as predictable, nor as static, as it has been in traditional data centers. These cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change. To accomplish this, many in the industry espouse composable infrastructure capabilities, which rely on heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is a finer-grained set of resources that are independently scalable and can be brought together as required. In this webcast, SNIA experts will discuss:

• What prompted the development of composable infrastructure?
• What are the solutions?
• What is composable infrastructure?
• Enabling technologies (not just what’s here, but what’s needed…)
• Status of composable infrastructure standards/products
• What’s on the horizon – 2 years? 5 years?
• What it all means

IT Transformation projects are usually driven by the need to reduce complexity, improve
agility, simplify systems, contain costs, manage ever-growing data and provide more efficient
operational management. Arguably, for seasoned IT professionals, there is nothing new
about the drivers for transformational change; it’s the velocity and scale of transformation
today that’s the big challenge.

Today, to effectively accelerate business innovation, successful IT leaders are building
infrastructure that focuses on automation and flexibility, supporting agile application
development and helping deliver world-class customer experience. Of course, IT teams are
still under pressure to deliver legacy, mission-critical applications but they also need to
support a seemingly constant flow of emerging business opportunities. They’re also tasked
with lowering costs and reducing capex while helping to drive revenue growth. That’s a lot of drivers
and this complex juggling act often requires modernising infrastructure. An almost inevitable
result of this is that the mix of platforms they adopt will include public cloud.

So, does that signal the end of the corporate data centre as we know it? Well, as is so
often the case – yes and no. ‘Yes’ because there is no doubt that the complexity and cost of
building and managing on-premises infrastructures is becoming increasingly unsustainable for
many businesses. And ‘no’ because business continuity and stability of legacy applications
are still, quite rightly, primary drivers today.

Management and control of any distributed IT infrastructure is becoming more difficult as the variety of options for hosting computing resources grows.

The benefits of on-premises, colocation, cloud and managed services continue to evolve, though they all still have to deliver reliable and secure computing services. Governance and control requirements continue to grow, and the processes and systems that IT teams use are coming under increasing scrutiny.

C-level executives don’t want to keep hearing that their organizations (or outsource partners) struggle to know how many servers they have, what those servers do, and the risks they currently live with in the new reality of data breaches, insider attacks and increasing systems complexity.

Edge computing has the potential to be a huge area of growth for datacenter, cloud and other
vendors. There are many flashy scenarios for the use of edge computing, including autonomous
transportation and smart cities. But there are other opportunities to target that have a
better near-term payoff. Successful services in the market will need to address these
opportunities as part of an ecosystem that solves the needs of application developers.

Attendees will gain insight into:

- Use cases for edge computing based on what application developers need – now
- The geography of the edge computing opportunity
- Challenges for adoption of edge computing services
- How the competitive landscape is evolving, and how an ecosystem approach to market
development is key to deriving value from edge computing services

When it comes to your SDDC, there are many moving parts, new technologies, and vendors to take into consideration. From software-defined networks and storage to compute, colocation, data center infrastructure, on-prem and cloud, the data center landscape has changed forever.

Tune into this live panel discussion with IT experts as they discuss what the future holds for compute, storage and network services in a software-defined data center, and what that means for vendors, data center managers, and colocation providers alike.

In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference, whether vendor-specific or vendor-neutral, and you’ll see NVMe touted as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you need to understand NVMe over Fabrics (NVMe-oF).

TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express organization. This can mean really good things for storage and storage networking – but what are the tradeoffs?

In this webinar, the lead author of the NVMe/TCP specification, Sagi Grimberg, and J Metz, member of the SNIA and NVMe Boards of Directors, will discuss:
• What is NVMe/TCP?
• How NVMe/TCP works
• What are the trade-offs?
• What should network administrators know?
• What kind of expectations are realistic?
• What technologies can make NVMe/TCP work better?
• And more…

Discover what's trending in the Enterprise Architecture community on BrightTALK and how you can leverage these insights to drive growth for your company. Learn which topics and technologies are currently top of mind for Data Privacy and Information Management professionals and decision makers.

Tune in with Jill Reber, CEO of Primitive Logic, and Kelly Harris, Senior Content Manager for EA at BrightTALK, to discover the latest trends in data privacy, the reasons behind them and what to look out for in Q1 2019 and beyond.

- Top trending topics in Q4 2018 and why, including new GDPR and data privacy regulations
- Key events in the community
- Content that data privacy and information management professionals care about
- What's coming up in Q1 2019

The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting. Join this session to receive an overview of Swordfish including the new functionality added in version 1.0.6, released in March 2018.
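
As a hedged sketch of what that RESTful, JSON-based style looks like in practice, the Python fragment below GETs the service root and a storage collection. The management address, credentials and the exact resource path are assumptions for illustration, not text from the specification:

```python
import requests

BASE = "https://mgmt.example.com"   # hypothetical management endpoint
AUTH = ("admin", "password")        # placeholder credentials

# Redfish (and Swordfish, which extends it) exposes a versioned service
# root as a plain JSON document over HTTPS. verify=False is lab-only.
root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
print(root.get("Name"))

# Swordfish adds storage resources under the same tree; this path follows
# the spec's general layout but is an assumption for this sketch.
svc = requests.get(f"{BASE}/redfish/v1/StorageServices",
                   auth=AUTH, verify=False).json()
for member in svc.get("Members", []):
    print(member.get("@odata.id"))  # JSON reference to each service resource
```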

For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, referred to here as PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.
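
The gap between "write completed" and "write persistent" can be illustrated locally. The Python sketch below is an analogy only, using a memory-mapped file as a stand-in for PM rather than any RDMA API: the store into the mapping returns immediately, and only the explicit flush guarantees the data has reached durable media, which is the kind of guarantee the proposed RDMA extensions add for remote writes:

```python
import mmap
import os

PATH = "/tmp/pm_demo.bin"  # placeholder file standing in for persistent memory

with open(PATH, "wb") as f:
    f.truncate(4096)       # back the mapping with one page

fd = os.open(PATH, os.O_RDWR)
buf = mmap.mmap(fd, 4096)

# Analogous to an RDMA Write completing at the sender: the bytes are
# placed, but may still sit in volatile caches or buffers.
buf[0:5] = b"hello"

# Analogous to a "flush to persistence" operation: only after these
# calls is durability actually guaranteed.
buf.flush()
os.fsync(fd)

buf.close()
os.close(fd)
```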

The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

After you watch, check out our Q&A blog from the webcast: http://bit.ly/2DFE7SL

In the history of enterprise storage there has been a trend to move from local storage to centralized, networked storage. Customers found that networked storage provided higher utilization, centralized and hence cheaper management, easier failover, and simplified data protection, which has driven the move to FC-SAN, iSCSI, NAS and object storage.

Recently, distributed storage has become more popular, where storage lives in multiple locations but can still be shared. Advantages of distributed storage include the ability to scale up performance and capacity simultaneously and, in the hyperconverged use case, to use each node (server) for both compute and storage. Attend this webcast to learn about:
• Pros and cons of centralized vs. distributed storage
• Typical use cases for centralized and distributed storage
• How distributed works for SAN, NAS, parallel file systems, and object storage
• How hyperconverged has introduced a new way of consuming storage

After the webcast, please check out our Q&A blog: http://bit.ly/2xSajxJ

Interoperability is a primary basis for the predictable behavior of a Fibre Channel (FC) SAN. FC interoperability implies standards conformance by definition. Interoperability also implies exchanges between a range of products, or similar products from one or more different suppliers, or even between past and future revisions of the same products. Interoperability may be developed as a special measure between two products, while excluding the rest, and still be standards conformant. When a supplier is forced to adapt its system to a system that is not based on standards, it is not interoperability but rather only compatibility.

Every FC hardware and software supplier publishes an interoperability matrix and per-product conformance based on having validated conformance, compatibility, and interoperability. There are many dimensions to interoperability, from the physical layer, optics, and cables; to port type and protocol; to server, storage, and switch fabric operating system versions; standards and feature implementation compatibility; and to use case topologies based on the connectivity protocol (F-port, N-port, NP-port, E-port, TE-port, D-port).

In this session we will delve into these many dimensions of FC interoperability.

The “Great Storage Debates” webcast series continues, this time on FCoE vs. iSCSI vs. iSER. Like past “Great Storage Debates,” the goal of this presentation is not to have a winner emerge, but rather provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.

One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.

Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet), which carries FC protocols over Ethernet, and Internet Small Computer System Interface (iSCSI), which transports SCSI commands over TCP/IP Ethernet networks. There are also newer Ethernet technologies that reduce CPU overhead when transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.

That leads to several questions about FCoE, iSCSI and iSER:

• If we can run various network storage protocols over Ethernet, what differentiates them?
• What are the advantages and disadvantages of FCoE, iSCSI and iSER?
• How are they structured?
• What software and hardware do they require?
• How are they implemented, configured and managed?
• Do they perform differently?
• What do you need to do to take advantage of them in the data center?
• What are the best use cases for each?

Join our SNIA experts as they answer all these questions and more on the next Great Storage Debate.

After you watch the webcast, check out the Q&A blog from our presenters http://bit.ly/2NyJKUM

FICON (Fibre Channel Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilize Fibre Channel as the underlying transport. Mainframes are built to provide a robust and resilient IT infrastructure, and FICON is a key element of their ability to meet the increasing demands placed on reliable and efficient access to data. What are some of the key objectives and benefits of the FICON protocol? And what are the characteristics that make FICON relevant in today’s data centers for mission-critical workloads?

Are you a control freak? Have you ever wondered what the difference is between a storage controller, a RAID controller, a PCIe controller, or a metadata controller? What about an NVMe controller? Aren’t they all the same thing?

In part Aqua of the “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series, we’re taking the unusual step of focusing on a term that is used constantly but often has different meanings. A controller that manages hardware has very different requirements from a controller that manages an entire system-wide control plane. From the outside looking in, it may be easy to get confused. You can even have controllers managing other controllers!
Here we’ll be revisiting some of the pieces we talked about in Part Chartreuse [https://www.brighttalk.com/webcast/663/215131], but with a bit more focus on the variety we have to play with:
• What do we mean when we say “controller”?
• How are the systems being managed different?
• How are controllers used in various storage entities: drives, SSDs, storage networks, software-defined storage?
• How do controller systems work, and what are the trade-offs?
• How do storage controllers protect against Spectre and Meltdown?
Join us to learn more about the workhorse behind your favorite storage systems.

After you watch the webcast, check out the Q&A blog at http://bit.ly/2JgcHlM

Looking for more cost-effective ways to implement Fibre Channel cabling? Learn why proper cabling is important and how it fits into data center designs. Join this webcast to hear FCIA experts discuss:
- Cable and connector types, cassettes, patch panels and other cabling products
- Variables in fiber optic and copper cables: reflectance, insertion loss, crosstalk, speed/length limitations and more
- Different variations of Structured Cabling in an environment with FC
- Helpful tips when planning and implementing a cabling infrastructure within a SAN

After you watch the webcast, check out the Q&A blog: http://bit.ly/2KdtEx0

When it comes to storage, a byte is a byte is a byte, isn’t it? One of the truths about simplicity is that scale makes everything hard, and with that comes complexity. And when we’re not processing the data, how do we store it and access it?

The only way to manage large quantities of data is to make it addressable in larger pieces, above the byte level. For that, we’ve designed sets of data management protocols that help us do several things: address large lumps of data by some kind of name or handle, organize it for storage on external storage devices with different characteristics, and provide protocols that allow us to programmatically write and read it.

In this webcast, we'll compare three types of data access: file, block and object storage, and the access methods that support them. Each has its own use cases, advantages and disadvantages; each provides simple to sophisticated data management; and each makes different demands on storage devices and programming technologies. (A short code sketch contrasting the three appears after the outline below.)

Join us as we discuss and debate:

Storage devices
- How different types of storage drive different management & access solutions
Block
- Where everything is in fixed-size chunks
- SCSI and SCSI-based protocols, and how FC and iSCSI fit in
File
- When everything is a stream of bytes
- NFS and SMB
Object
- When everything is a blob
- HTTP, key value and RESTful interfaces
- When files, blocks and objects collide
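
To make the contrast concrete, here is a hedged Python sketch of the three access styles. The device path, file path, object URL and metadata header are placeholders, and the object call assumes a generic HTTP/RESTful endpoint rather than any particular product's API:

```python
import requests

# Block: fixed-size chunks addressed by offset on a raw device (requires
# privileges; /dev/sda is a placeholder). The application supplies all
# structure itself, as SCSI/FC/iSCSI initiators do.
with open("/dev/sda", "rb") as dev:
    dev.seek(2048 * 512)      # jump to logical block 2048 (512 B blocks)
    block = dev.read(512)

# File: a named stream of bytes in a hierarchical namespace; the filesystem
# (local, or remote via NFS/SMB) manages layout and metadata.
with open("/srv/data/report.csv", "rb") as f:
    payload = f.read()

# Object: a blob addressed by key over HTTP, with metadata carried along.
resp = requests.put(
    "https://objects.example.com/mybucket/report.csv",  # hypothetical URL
    data=payload,
    headers={"x-meta-owner": "analytics"},              # illustrative metadata
)
resp.raise_for_status()
```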

After you watch the webcast, check out the Q&A blog: https://wp.me/p1kTSa-bh

Organizations from SMB to enterprise level are increasingly using or considering outsourced IT infrastructure providers to remain competitive and productive in a cost-effective way. But while the rewards may be attractive, there are inherent risks that need to be understood.

If you are considering moving your infrastructure to an IT provider, or changing your current provider, join us during this 30-minute webinar to hear more about:

• Outsourced IT Infrastructure Drivers
• Common Risks and Rewards
• Frustrations that Customers Have Had with Service Providers
• Case study of a Customer that Moved Their Infrastructure to Steadfast and Why
• How Steadfast Approaches IT Infrastructure Management Differently
• Live Q&A

Getting help to manage your IT infrastructure can help you regain focus on your core business. But before you make that move, make sure you understand all the nuances of your decision.

This webinar will cover the challenges of scaling up liquid cooling technologies in the data centre space.

About the presenter:
The subject of sustainable and green data centres is very close to my heart. For many years I have been increasingly concerned about the ever-growing power requirements of the information age, and we built our business on explaining to organisations that they were wasting energy and overcooling server rooms, as well as consulting on green IT issues from device to disposal.

The EURECA project will deliver a tool that public sector IT managers can use to benchmark their current actions, see what the current standards and latest concepts are, and learn how to implement easy solutions that make a real difference. I have been lucky enough to see state-of-the-art public sector data centres, such as the University of St Andrews, that have adopted the EU Code of Conduct for Data Centres and been awarded a Gold CEEDA, and to see how the management used the project to promote wider sustainability concepts to the rest of the university and beyond. At the other end of the scale, there remains much work to be done in data centres and server rooms across the public sector to capitalise on the fast-moving and sometimes daunting array of technologies and methodologies on offer.

With today’s pressures on lowering our carbon footprint and cost constraints within organizations, IT departments are increasingly in the front line to formulate and enact an IT strategy that greatly improves energy efficiency and the overall performance of data centers.

This channel will cover the strategic issues of ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management
- Data center design, facilities management and convergence
- Cooling technologies and thermal management
- And much more