Building Better Clouds Using SDN and OpenFlow

SDN and OpenFlow significantly extend and enhance the set of tools available for building flexible, scalable, and reliable cloud infrastructure. They also enable new ways to expose both cloud capabilities and network functions as on-demand services, with more options for monetization and faster service velocity than was previously possible. If you attend this webinar you will:

- Get a basic understanding of how SDN and OpenFlow can be used in a network
- See some strategies for leveraging the capabilities of SDN and OpenFlow in the cloud
- See some examples of their usage in WANs, data centers, and hybrid clouds

Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is neither as predictable nor as static as it has been in traditional data centers. Cloud data centers need to rapidly construct virtual pools of compute, network, and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change. To accomplish this, many in the industry espouse composable infrastructure, which relies on heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is a finer-grained set of resources that are independently scalable and can be brought together as required. In this webcast, SNIA experts will discuss:

•What prompted the development of composable infrastructure?
•What are the solutions?
•What is composable infrastructure?
•Enabling technologies (not just what’s here, but what’s needed…)
•Status of composable infrastructure standards/products
•What’s on the horizon – 2 years? 5 years?
•What it all means

One of the great advantages of hyperconverged infrastructure (also known as “HCI”) is that, relatively speaking, it is extremely easy to set up and manage. In many ways, HCI is the “Happy Meal” of infrastructure, because you have compute and storage in the same box. All you need to do is add networking.

In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition.

In this webinar, we’re going to take a look behind the scenes and peek behind the GUI, so to speak. We’ll talk about what goes on back there and shine a light behind the bezels to see:

•The impact of metadata on the network
•What happens as we add additional nodes
•How to right-size the network for growth
•Tricks of the trade from the networking perspective to make your HCI work better
•And more…

Now, not all HCI environments are created equal, so we’ll say in advance that your mileage will necessarily vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place.

New advancements in high-speed distributed solid-state storage, coupled with remote direct memory access (RDMA) and new networking technologies to better manage congestion, are allowing parallel environments to run atop more generalized, next-generation cloud infrastructure. Generalized cloud infrastructure is also being deployed in telecommunications operators’ central offices.

The key to advancing cloud infrastructure to the next level is the elimination of loss in the network: not just packet loss, but throughput loss and latency loss.

There simply should be no loss in the data center network. Congestion is the primary source of loss, and in the network, congestion leads to dramatic performance degradation. This presentation summarizes work from the IEEE 802 Network Enhancements for the Next Decade Industry Connections Activity (Nendica).

The Nendica report describes the need for new technologies to combat loss in the data center network and introduces promising potential solutions.

2019 will be the year of application explosion and extension of the datacenter’s edge as we know it. This massively distributed application environment is being fueled by digital transformation projects in hybrid networking, multi-cloud and IIoT.

Do you have an architecture in place to support and future-proof your network in this new application-centric world?

Join this webinar to learn how to develop a strategy that is agile, reliable, and secure by design.

Industry experts will discuss:

- How this new edge-to-cloud landscape will change your network’s architecture

With all the different storage arrays and connectivity protocols available today, knowing the best practices can help improve operational efficiency and ensure resilient operations. VMware’s global storage services team has reported on the most common service calls they receive. In this webcast, we will share those insights and lessons learned by discussing:
- Common mistakes when setting up storage arrays
- Why iSCSI is the number one storage configuration problem
- Configuring adapters for iSCSI or iSER
- How to verify your PSP matches your array requirements
- NFS best practices
- How to maximize the value of your array and virtualization
- Troubleshooting recommendations

Discover what's trending in the Enterprise Architecture community on BrightTALK and how you can leverage these insights to drive growth for your company. Learn which topics and technologies are currently top of mind for Data Privacy and Information Management professionals and decision makers.

Tune in with Jill Reber, CEO of Primitive Logic, and Kelly Harris, Senior Content Manager for EA at BrightTALK, to discover the latest trends in data privacy, the reasons behind them, and what to look out for in Q1 2019 and beyond.

- Top trending topics in Q4 2018 and why, including new GDPR and data privacy regulations
- Key events in the community
- Content that data privacy and information management professionals care about
- What's coming up in Q1 2019

Fibre Channel’s speed roadmap defines a well-understood technological trend: the need to double the bit rate in the channel without doubling the required bandwidth.

To do this, PAM4 (pulse-amplitude modulation with four levels) enters the Fibre Channel physical layer picture. With four signal levels instead of two, and with each signal level corresponding to a two-bit symbol, the standards define 64GFC operation while maintaining backward compatibility with 32GFC and 16GFC.
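
As a rough illustration of the idea above (not material from the webcast), here is a minimal Python sketch of how two-bit symbols map onto four PAM4 amplitude levels; the specific level assignment and helper function are illustrative assumptions, not the 64GFC specification.

# Minimal sketch: four PAM4 levels, two bits per symbol, so a given bit rate
# needs only half the symbol (baud) rate of two-level NRZ signalling.
PAM4_LEVELS = {
    (0, 0): -3,
    (0, 1): -1,
    (1, 1): +1,
    (1, 0): +3,  # Gray-style mapping; the exact assignment here is illustrative
}

def pam4_encode(bits):
    """Group a bit sequence into 2-bit symbols and map each to a PAM4 level."""
    if len(bits) % 2:
        raise ValueError("PAM4 carries two bits per symbol; pad to an even length")
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([1, 0, 0, 1, 1, 1, 0, 0]))  # 8 bits -> 4 symbols: [3, -1, 1, -3]
# NRZ would need 8 symbol periods for the same 8 bits, so PAM4 doubles the
# bit rate without doubling the signalling rate.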

Join this live webcast to learn about:

•New physical layer and specification challenges for PAM4, including eye openings, crosstalk sensitivity, and new test methodologies and parameters
•Transceivers, their form factors, and how 64GFC maintains backward compatibility with multi-mode fibre cable deployments in the data center, including distance specifications
•Discussion of protocol changes, and an overview of backward-compatible link speed and forward error correction (FEC) negotiation
•The FCIA’s Fibre Channel speed roadmap and evolution, and new technologies under consideration

After you watch the webcast, check out the FCIA Q&A blog: https://fibrechannel.org/64gfc-faq/

Join our panel of experienced Vivit Local User Group (LUG) leaders as they discuss real-world examples of how to bring Micro Focus customers, partners, and field marketing teams together to network, work through software issues, share best practices, and advance career opportunities.

Why you should attend:

• Hear from four seasoned LUG leaders about their Vivit experience
• Learn how to gain access to Micro Focus software, business leaders, product engineers and developers - all while leading valuable presentations to help your LUG members exchange information about new products and services
• Get tips and tricks on how to expand your visibility into Micro Focus and your network base to further advance your career

When it comes to your infrastructure, the buzzwords and technologies are abundant: IaaS, software-defined, composable, cloud, and more. What does the future hold for the cloud infrastructure market and for IT Ops and DevOps teams? How will digital transformation and security continue to play a key role?

Join this live panel discussion to answer these questions and more, and learn what should be top of mind for IT teams going into 2019 and beyond.

Topics include:
- Containers, Kubernetes and serverless - the next wave of IaaS?
- What is composable infrastructure and what does it mean for your data center, on-prem and in the cloud?
- Should software-defined infrastructures and SDDCs still be top of mind for your tech teams? Why or why not?
- Best practices for securing your network infrastructure

In the face of DevOps and Agile development methodologies, many enterprises have backed off entirely from the concept of an enterprise architecture. That's a mistake. Enterprise Architecture is needed more urgently than ever before--but not the old, siloed, inflexible architecture.

Next-generation Enterprise Architecture needs to be fast, flexible, and as adaptive as next-generation development methodologies. It needs to encompass the radical changes in infrastructure, from virtualization to cloud- and mobile-first.

And it's absolutely essential for enterprises that want to align their technology investments with their fast-moving business goals.

This webinar reviews the fundamentals of enterprise architecture and provides a blueprint for a next-generation EA that encompasses DevOps, cloud, mobility, virtualization, microservices, and more!

Scale-out storage is increasingly popular for cloud, high-performance computing, machine learning, and certain enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines.

But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other as well as communicate with clients. Due to these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. Join this webinar to learn:

•Scale-out storage solutions and what workloads they can address
•How your network may need to evolve to support scale-out storage
•Network considerations to ensure performance for demanding workloads
•Key considerations for all flash

After you watch the webcast, check out the Q&A blog: http://bit.ly/scale-out-q-a

Join Kelly Harris, Senior Content Manager at BrightTALK, and Bob Crews, Co-Founder and CEO of Checkpoint Technologies, as they discuss the ins and outs of founding a tech company.

Topics include:

- Juggling the challenges of being a practitioner, sales rep, marketer and CEO simultaneously
- What to look for in a great vendor partnership
- Application security validation, its trends, and what to look out for

For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, referred to here as PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Writes of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that the data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and that can additionally order successive writes to different memories within the target system.
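
To make the persistence gap concrete, here is a minimal, purely illustrative Python sketch; rdma_write, rdma_flush, and wait_for_completion are hypothetical placeholders simulated in-process, not a real RDMA library API, and the flush step merely stands in for the kind of protocol extension the webcast describes.

import itertools

_wr_ids = itertools.count()  # simple work-request id generator for this sketch

def rdma_write(qp, local_buf, remote_addr, rkey):
    # Hypothetical placeholder: post an RDMA Write work request.
    return next(_wr_ids)

def rdma_flush(qp, remote_addr, length, rkey):
    # Hypothetical placeholder: ask the target to push the written range
    # into its persistence domain and acknowledge when it is durable.
    return next(_wr_ids)

def wait_for_completion(qp, wr_id):
    # Hypothetical placeholder: block until the given work request completes.
    pass

def persist_remote(qp, local_buf, remote_addr, rkey):
    # Step 1: the RDMA Write lands in the target's memory subsystem, but when
    # the initiator sees this completion the data may still sit in a volatile
    # buffer on the target NIC or memory controller.
    wait_for_completion(qp, rdma_write(qp, local_buf, remote_addr, rkey))

    # Step 2: an explicit flush/commit, standing in for the RDMA protocol
    # extensions discussed in the webcast, is needed before the application
    # can treat the remote write as durable or order later writes against it.
    wait_for_completion(qp, rdma_flush(qp, remote_addr, len(local_buf), rkey))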

The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

After you watch, check out our Q&A blog from the webcast: http://bit.ly/2DFE7SL

The recent data explosion is a huge challenge for storage and IT system designers. How do you crunch all that data at a reasonable cost? Fortunately, your familiar SAS comes to the rescue with its new 24G speed. Its flexible connection scheme already allows designers to scale huge external storage systems with low latency. Now the new high operating speed offers the throughput you need to bring big data to its knobby knees! Our panel of storage experts will present practical solutions to today’s petabyte problems and beyond.

The driving force behind adopting new tools and processes in test and measurement practices is the desire to understand, predict, and mitigate the impact of Sick but not Dead (SBND) conditions in datacenter fabrics. The growth and centralization of mission-critical datacenter SAN environments has exposed the fact that many small, seemingly insignificant problems have the potential to become large-scale, impactful events unless properly contained or controlled.

Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that bypass the traditional network stack (e.g., FCoE, iWARP, NVMe over Fabrics) for purposes of expedited data delivery place additional analytical demands on the datacenter manager.
To be sure, all tools have limitations in their effectiveness and areas of coverage, so a well-constructed “collage” of best practices and effective, efficient analysis tools must be developed. To that end, recognizing and reducing the effect of those limitations is essential.

This webinar will introduce participants to Protocol Analysis tools and how they may be incorporated into the “best practices” application of SAN problem solving. We will review:
•The protocol of the PHY
•Use of “in-line” capture tools
•Benefits of purposeful error injection for developing and supporting today’s high-speed Fibre Channel storage fabrics

About the speaker:
Jay is a Cloud Solution Architect, serving the North East Region of Microsoft US. As a solution architect, Jay functions as a trusted advisor to enterprise customers. In this role, he provides guidance on digital transformation, application modernization, cloud migration and IT operations to his clients.

Prior to joining Microsoft, Jay worked as an R&D Engineer, Software Designer and Developer, Enterprise Architect, Agile Product Owner, and Program Manager with various organizations. During his career, Jay has consulted for domestic and international clients and worked in India, Germany, Switzerland, and the US.

Interoperability is a primary basis for the predictable behavior of a Fibre Channel (FC) SAN. FC interoperability implies standards conformance by definition. Interoperability also implies exchanges between a range of products, or similar products from one or more different suppliers, or even between past and future revisions of the same products. Interoperability may be developed as a special measure between two products, while excluding the rest, and still be standards conformant. When a supplier is forced to adapt its system to a system that is not based on standards, that is not interoperability but rather only compatibility.

Every FC hardware and software supplier publishes an interoperability matrix and per-product conformance information based on having validated conformance, compatibility, and interoperability. There are many dimensions to interoperability: from the physical layer, optics, and cables; to port type and protocol; to server, storage, and switch fabric operating system versions; to standards and feature implementation compatibility; and to use-case topologies based on the connectivity protocol (F-port, N-port, NP-port, E-port, TE-port, D-port).

In this session we will delve into the many dimensions of FC interoperability, discussing:

Network-intensive applications, like networked storage or clustered computing, require a network infrastructure with high bandwidth and low latency. Remote Direct Memory Access (RDMA) supports zero-copy data transfers by enabling movement of data directly to or from application memory. This results in high bandwidth, low latency networking with little involvement from the CPU.

In the next webcast in the SNIA ESF “Great Storage Debates” series, we’ll examine two commonly known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and IETF-standard iWARP. Both are Ethernet-based RDMA technologies that reduce the amount of CPU overhead in transferring data among servers and storage systems.

The goal of this presentation is to provide a solid foundation on both RDMA technologies in a vendor-neutral setting that discusses the capabilities and use cases for each so that attendees can become more informed and make educated decisions.

Join to hear the following questions addressed:

•Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
•What are the use cases for RoCE and iWARP, and what differentiates them?
•UDP/IP and TCP/IP: which uses which and what are the advantages and disadvantages?
•What are the software and hardware requirements for each?
•What are the performance/latency differences of each?

Join our SNIA experts as they answer all these questions and more in this next Great Storage Debate.

After you watch the webcast, check out the Q&A blog http://bit.ly/2OH6su8

Telemetry: The essential ingredient to success with Agile, DevOps and SRE

Measurements, metrics, and telemetry enable teams and organizations to deliver successful results with Agile, DevOps, and SRE, helping them achieve speed, quality, and automation targets with built-in performance, security, and resiliency.

With virtualization and cloud computing revolutionizing the data center, it's time the network had its own revolution. Join the Network Infrastructure channel for all the hottest topics for network and storage professionals, such as software-defined networking, WAN optimization, and more, to maintain performance and service in your infrastructure.