Server and Storage I/O Networking Performance Management

How can you make informed decisions without timely insight and awareness into how resources are being used or performing? Without insight and awareness into your server and storage I/O environment, you are flying blind. This webinar looks at the various issues as well as the associated tools, techniques, technologies and metrics for gaining insight into your server and storage environment, including storage I/O. As part of this discussion, we will also look at metrics that matter, baselines for comparing what is normal vs. abnormal, and data for planning and forecasting.

Key themes:
•Metrics that matter are those that are relevant and have context
•Where to get metrics, measurements, insight and awareness
•Where to look for server and storage I/O bottlenecks, problems and opportunities
•Design for management and resiliency as well as metrics to measure
•Gaining insight and awareness to monitor how your information factory and its networks are performing

Kubernetes (k8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. Kubernetes promises simplified management of cloud workloads at scale, whether on-premises, hybrid, or in a public cloud infrastructure, allowing effortless movement of workloads from cloud to cloud. By some reckonings, it is being deployed at a rate several times faster than virtualization.

In this presentation, we’ll introduce Kubernetes and present use cases that make clear where and why you would want to use it in your IT environment. We’ll also focus on the enterprise requirements of orchestration and containerization, and specifically on the storage aspects and best practices.

•What is Kubernetes? Why would you want to use it?
•How does Kubernetes help in a multi-cloud/private cloud environment?
•How does Kubernetes orchestrate & manage storage? Can Kubernetes use Docker?
•How do we provide persistence and data protection?
•Example use cases

Traditionally, much of the IT infrastructure that we’ve built over the years can be divided fairly simply into storage (the place we save our persistent data), network (how we get access to the storage and get at our data) and compute (memory and CPU that crunches on the data). In fact, so successful has this model been that a trip to any cloud services provider allows you to order (and be billed for) exactly these three components.

We build effective systems in a cost-optimal way by using appropriate quantities of expensive, fast memory (DRAM, for instance) to cache our cheaper, slower storage. But currently, fast memory has no persistence at all; only storage provides the application with the guarantee that storing, modifying or deleting data does exactly that.
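The caching idea above can be sketched as a minimal read-through cache (an illustrative Python sketch; the hypothetical `slow_read` stands in for the slower, cheaper storage tier):

```python
from collections import OrderedDict

def slow_read(block: int) -> bytes:
    # Stand-in for the slower, cheaper storage tier (e.g. disk or SSD).
    return b"data-%d" % block

class ReadThroughCache:
    """A tiny LRU read-through cache: fast memory in front of slow storage."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # block -> bytes, in LRU order

    def read(self, block: int) -> bytes:
        if block in self.entries:              # hit: served from fast memory
            self.entries.move_to_end(block)
            return self.entries[block]
        data = slow_read(block)                # miss: fetch from slow storage
        self.entries[block] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict the least-recently used
        return data

cache = ReadThroughCache(capacity=2)
cache.read(1); cache.read(2); cache.read(1)
cache.read(3)                # evicts block 2, the least-recently used
print(list(cache.entries))   # [1, 3]
```

Note that the cache itself is volatile: if the process dies, only what made it to `slow_read`'s tier survives, which is exactly the persistence gap the paragraph describes.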

Memory and storage differ in other ways. For example, we load from memory into registers on the CPU, perform operations there, and then store the results back to memory, using byte addresses. This load/store model is different from storage, where we tend to move data back and forth between memory and storage in large blocks, using an API (application programming interface).
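The contrast can be illustrated in Python (a rough sketch: `mmap` plays the part of byte-addressable load/store access, while `os.read` on a file plays the part of a block-based storage API):

```python
import mmap, os, tempfile

# Create a small file to play the role of a storage device.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)

# Block-style access: move a whole block between storage and memory
# through an API call.
os.lseek(fd, 0, os.SEEK_SET)
block = os.read(fd, 512)      # read one 512-byte "block"

# Load/store-style access: address individual bytes, as with memory.
m = mmap.mmap(fd, 4096)
m[10] = 0x41                  # store a single byte at byte address 10
value = m[10]                 # load it back
m.close()
os.close(fd)
os.remove(path)
print(len(block), value)      # 512 65
```

The mapped case touches exactly one byte; the block case cannot move less than a block, no matter how little of it the application actually wanted.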

New memory technologies are challenging these assumptions. They look like storage in that they’re persistent, though a lot faster than traditional disks or even Flash-based SSDs, but we address them in bytes, as we do memory like DRAM, though more slowly. These PMs (persistent memories) lie between storage and memory in latency, bandwidth and cost, while providing memory semantics and storage persistence. In this webcast, SNIA experts will discuss:

•Traditional uses of storage and memory as a cache
•How can we build and use systems based on PM?
•What would a system with storage, persistent memory and DRAM look like?
•Do we need a new programming model to take advantage of PM?
•Interesting use cases for systems equipped with PM
•How we might take better advantage of this new technology

In the FCIA webcast “Protocol Analysis for High-Speed Fibre Channel Fabrics” experts covered the basics on protocol analysis tools and how to incorporate them into the “best practices” application of SAN problem solving.
Our experts return for this 201 course, which provides a deeper dive into how to interpret the output and results from protocol analyzers. We will also share insight into signal jammers and how to use them to correlate error conditions and formulate real-time solutions.

Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that usurp the traditional network stack (i.e. FCoE, iWARP, NVMe over Fabrics, etc.) complicate analysis, so a well-constructed “collage” of best practices and effective and efficient analysis tools must be developed. In addition, in-depth knowledge of how to decipher the analytical results and then determine potential solutions is critical.

Join SNIA Solid State Storage Initiative Education Chair and leading analyst Tom Coughlin for a journey into the requirements and trends in worldwide data storage for entertainment content acquisition, editing, archiving, and digital preservation. This webcast will cover capacity and performance trends and media projections for direct attached storage, cloud, and near-line network storage. It will also include results from a long-running digital storage survey of media and entertainment professionals. Learn what is needed for digital cinema, broadcast, cable, and internet applications and more.

You need to rethink your WAN to survive the next 5 years. We can help show you how.

Think about it: half of your IT services come from the cloud, from folks such as Amazon Web Services, Google Cloud, IBM Cloud, Microsoft Azure and Office365, and Oracle Cloud. Mixing cloud and internal sources, you serve an increasingly scattered and mobile staff. IoT is turning the physical environment into both a provider and a consumer of IT services.

Is the WAN you built for Client/Server really going to serve?

No. IT needs to rethink its WAN and re-engineer the economics of wide-area networking.

Join Nemertes as we bring our WAN technology research study and freshly updated, one-of-a-kind cost and performance benchmarks to bear on the challenges of remaking your WAN to drive success in the cloud age. We'll discuss:
• SD-WAN and the real benefits it can deliver for performance and cost
• Other cloud-friendly network technologies such as direct-connect and WAN-Cloud Exchanges
• Up-to-date cost and provider performance data for MPLS and Internet services.

Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is not as predictable, nor as static, as it has been in traditional data centers. Cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change.

To accomplish this, many in the industry espouse composable infrastructure capabilities, which rely on heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is a finer-grained set of resources that are independently scalable and can be brought together as required. In this webcast, SNIA experts will discuss:

•What prompted the development of composable infrastructure?
•What are the solutions?
•What is composable infrastructure?
•Enabling technologies (not just what’s here, but what’s needed…)
•Status of composable infrastructure standards/products
•What’s on the horizon – 2 years? 5 years?
•What it all means

One of the great advantages of hyperconverged infrastructure (also known as “HCI”) is that, relatively speaking, it is extremely easy to set up and manage. In many ways, it’s the “Happy Meal” of infrastructure, because you have compute and storage in the same box. All you need to do is add networking.

In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition.

In this webinar, we’re going to take a look behind the scenes, peek behind the GUI, so to speak. We’ll be talking about what goes on back there, and shine the light behind the bezels to see:

•The impact of metadata on the network
•What happens as we add additional nodes
•How to right-size the network for growth
•Tricks of the trade from the networking perspective to make your HCI work better
•And more…

Now, not all HCI environments are created equal, so we’ll say in advance that your mileage will necessarily vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place.

After you watch the webcast, check out the Q&A blog at http://bit.ly/2Va4wwH

New advancements in high-speed distributed solid-state storage, coupled with remote direct memory access (RDMA) and new networking technologies to better manage congestion, are allowing these parallel environments to run atop more generalized next generation Cloud infrastructure. Generalized cloud infrastructure is also being deployed in the telecommunication operator’s central office.

The key to advancing cloud infrastructure to the next level is the elimination of loss in the network; not just packet loss, but throughput loss and latency loss.

There simply should be no loss in the data center network. Congestion is the primary source of loss, and in the network, congestion leads to dramatic performance degradation. This presentation summarizes work from the IEEE 802 Network Enhancements for the Next Decade Industry Connections Activity (Nendica).

The Nendica report describes the need for new technologies to combat loss in the data center network and introduces promising potential solutions.

2019 will be the year of application explosion and extension of the datacenter’s edge as we know it. This massively distributed application environment is being fueled by digital transformation projects in hybrid networking, multi-cloud and IIoT.

Do you have an architecture in place to support and future-proof your network in this new application-centric world?

Join this webinar to learn how to develop a strategy that is agile, reliable and secure by design.

Industry experts will discuss:

- How this new edge-to-cloud landscape will change your network’s architecture

With all the different storage arrays and connectivity protocols available today, knowing the best practices can help improve operational efficiency and ensure resilient operations. VMware’s storage global service has reported many of the common service calls they receive. In this webcast, we will share those insights and lessons learned by discussing:
- Common mistakes when setting up storage arrays
- Why iSCSI is the number one storage configuration problem
- Configuring adapters for iSCSI or iSER
- How to verify your PSP matches your array requirements
- NFS best practices
- How to maximize the value of your array and virtualization
- Troubleshooting recommendations

After you watch the webcast, check out the Q&A blog at http://bit.ly/2WjmFJW

Join our panel of experienced Vivit Local User Group (LUG) leaders as they discuss real-world use cases of how to bring Micro Focus customers, partners and field marketing teams together to network, work through software issues, share best practices, and advance career opportunities.

Why you should attend:

• Hear from four seasoned LUG leaders about their Vivit experience
• Learn how to gain access to Micro Focus software, business leaders, product engineers and developers - all while leading valuable presentations to help your LUG members exchange information about new products and services
• Tips and tricks on how to expand your visibility into Micro Focus and grow your network to further advance your career

Discover what's trending in the Enterprise Architecture community on BrightTALK and how you can leverage these insights to drive growth for your company. Learn which topics and technologies are currently top of mind for Data Privacy and Information Management professionals and decision makers.

Tune in with Jill Reber, CEO of Primitive Logic, and Kelly Harris, Senior Content Manager for EA at BrightTALK, to discover the latest trends in data privacy, the reasons behind them, and what to look out for in Q1 2019 and beyond.

- Top trending topics in Q4 2018 and why, including new GDPR and data privacy regulations
- Key events in the community
- Content that data privacy and information management professionals care about
- What's coming up in Q1 2019

Fibre Channel’s speed roadmap defines a well-understood technological trend: the need to double the bit rate in the channel without doubling the required bandwidth.

In order to do this, PAM4 (pulse-amplitude modulation with four levels) enters the Fibre Channel physical layer picture. With the use of four signal levels instead of two, and with each signal level corresponding to a two-bit symbol, the standards define 64GFC operation while maintaining backward compatibility with 32GFC and 16GFC. This webcast covers:

•New physical layer and specification challenges for PAM4, which includes eye openings, crosstalk sensitivity, and new test methodologies and parameters
•Transceivers, their form factors, and how 64GFC maintains backward compatibility with multi-mode fibre cable deployments in the data center, including distance specifications
•Discussion of protocol changes, and an overview of backward-compatible link speed and forward error correction (FEC) negotiation
•The FCIA’s Fibre Channel speed roadmap and evolution, and new technologies under consideration
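The two-bits-per-symbol idea can be sketched in Python (an illustrative model only, not the actual 64GFC line coding, which also involves scrambling and forward error correction):

```python
# Map each two-bit pair to one of four PAM4 amplitude levels.
# Gray coding (00 -> -3, 01 -> -1, 11 -> +1, 10 -> +3) is typical, so
# that a one-level slip in the receiver corrupts only a single bit.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit sequence into PAM4 symbols."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_encode(bits)
print(symbols)                    # [3, -1, 1, -3]
# Half as many symbols as bits: the symbol (baud) rate, and hence the
# required analog bandwidth, stays roughly the same while the bit rate
# doubles relative to two-level NRZ signaling.
print(len(bits), len(symbols))    # 8 4
```

This is the trend the roadmap paragraph describes: doubling the bit rate without doubling the channel bandwidth, at the cost of tighter eye openings per level.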

After you watch the webcast, check out the FCIA Q&A blog: https://fibrechannel.org/64gfc-faq/

When it comes to your infrastructure, the buzzwords and technologies are abundant: IaaS, software-defined, composable, cloud, and more. What does the future hold for the cloud infrastructure market and for IT Ops and DevOps teams? How will digital transformation and security continue to play a key role?

Join this live panel discussion to answer these questions and more, and learn what should be top of mind for IT teams going into 2019 and beyond.

Topics include:
- Containers, Kubernetes and serverless - the next wave of IaaS?
- What is composable infrastructure and what does it mean for your data center, on-prem and in the cloud?
- Should software-defined infrastructures and SDDC's still be top of mind for your tech teams? Why or why not?
- Best practices for securing your network infrastructure

In the face of DevOps and Agile development methodologies, many enterprises have backed off entirely from the concept of an enterprise architecture. That's a mistake. Enterprise Architecture is needed more urgently than ever before, but not the old, siloed, inflexible architecture.

Next-generation Enterprise Architecture needs to be fast, flexible, and as adaptive as next-generation development methodologies. It needs to encompass the radical changes in infrastructure, from virtualization to cloud- and mobile-first.

And it's absolutely essential for enterprises that want to align their technology investments with their fast-moving business goals.

This webinar reviews the fundamentals of enterprise architecture and provides a blueprint for a next-generation EA that encompasses DevOps, cloud, mobility, virtualization, microservices, and more!

Scale-out storage is increasingly popular for cloud, high-performance computing, machine learning, and certain enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines.

But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other while also communicating with clients. Due to these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. Join this webinar to learn:

•Scale-out storage solutions and what workloads they can address
•How your network may need to evolve to support scale-out storage
•Network considerations to ensure performance for demanding workloads
•Key considerations for all flash
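The way clients fan I/O out across storage nodes can be sketched with a simple hash-based placement function (illustrative only; the node names and replica count are hypothetical, and real scale-out systems use richer schemes such as consistent hashing or CRUSH):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]   # hypothetical storage nodes
REPLICAS = 2                             # keep two copies of each object

def place(key: str, nodes=NODES, replicas=REPLICAS):
    """Pick `replicas` distinct nodes for an object by hashing its key."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    start = h % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

# A client writing many objects talks to several nodes at once, and each
# object lands on more than one node -- this is why scale-out traffic
# (client fan-out plus node-to-node replication) stresses the network.
for key in ("obj-1", "obj-2", "obj-3"):
    print(key, "->", place(key))
```

Every write here generates both client-to-node and implied node-to-node replication traffic, which is the east-west load the paragraph warns traditional network designs may not handle well.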

After you watch the webcast, check out the Q&A blog: http://bit.ly/scale-out-q-a

Join Kelly Harris, Senior Content Manager at BrightTALK and Bob Crews, Co-Founder and CEO of Checkpoint Technologies, as they discuss the ins and outs of founding a tech company.

Topics include:

- Juggling the challenges of being a practitioner, sales rep, marketer and CEO simultaneously
- What to look for in a great vendor partnership
- Application security validation, its trends, and what to look out for

For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.
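The persistence gap can be illustrated with a toy model (pure Python, not a real RDMA API: the class and method names are hypothetical; `rdma_write` completing only means data reached the target's volatile buffers, and the separate `flush` stands in for the role the proposed protocol extensions play):

```python
class RemotePM:
    """Toy model of a target node with volatile buffers in front of PM."""
    def __init__(self):
        self.volatile_buffer = {}    # e.g. NIC/PCIe/CPU caches on the target
        self.persistent_memory = {}  # the actual persistent memory

    def rdma_write(self, addr, data):
        # Completion here only means the data reached the target;
        # it may still be sitting in volatile buffers.
        self.volatile_buffer[addr] = data
        return "completion"

    def flush(self):
        # The role of the proposed extension: force buffered writes into
        # persistence, and only then acknowledge the initiator.
        self.persistent_memory.update(self.volatile_buffer)
        self.volatile_buffer.clear()
        return "persistence-ack"

target = RemotePM()
target.rdma_write(0x1000, b"record")
# A power failure at this point would lose the write, despite the
# completion the initiator already saw.
assert 0x1000 not in target.persistent_memory
target.flush()
assert target.persistent_memory[0x1000] == b"record"
```

The ordering guarantee mentioned above extends this further: successive flushed writes to different memories on the target become visible in a defined order, not just eventually.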

The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

After you watch, check out our Q&A blog from the webcast: http://bit.ly/2DFE7SL

The recent data explosion is a huge challenge for storage and IT system designers. How do you crunch all that data at a reasonable cost? Fortunately, your familiar SAS comes to the rescue with its new 24G speed. Its flexible connection scheme already allows designers to scale huge external storage systems with low latency. Now the new high operating speed offers the throughput you need to bring big data to its knobby knees! Our panel of storage experts will present practical solutions to today’s petabyte problems and beyond.

The driving force behind adopting new tools and processes in test and measurement practices is the desire to understand, predict, and mitigate the impact of Sick but not Dead (SBND) conditions in datacenter fabrics. The growth and centralization of mission critical datacenter SAN environments has exposed the fact that many small yet seemingly insignificant problems have the potential of becoming large scale and impactful events, unless properly contained or controlled.

Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that usurp the traditional network stack (i.e. FCoE, iWARP, NVMe over Fabrics, etc.) for purposes of expedited data delivery place additional analytical demands on the datacenter manager.
To be sure, all tools have limitations in their effectiveness and areas of coverage, so a well-constructed “collage” of best practices and effective and efficient analysis tools must be developed. To that end, recognizing and reducing the effect of those limitations is essential.

This webinar will introduce participants to Protocol Analysis tools and how they may be incorporated into the “best practices” application of SAN problem solving. We will review:
•The protocol of the Phy
•Use of “in-line” capture tools
•Benefits of purposeful error injection for developing and supporting today’s high-speed Fibre Channel storage fabrics

With virtualization and cloud computing revolutionizing the data center, it's time the network had its own revolution. Join the Network Infrastructure channel for all the hottest topics for network and storage professionals, such as software-defined networking, WAN optimization and more, to maintain performance and service in your infrastructure.