Licensing Databases on vSphere with Converged and Hyperconverged Platforms

Discover the complexities of licensing database technologies such as Oracle, SQL Server and PostgreSQL on VMware, with particular emphasis on modern converged and hyper-converged platforms. It's vital to ensure your virtual machines stay compliant with your database vendor’s license requirements. Join us to learn about the business and financial risks involved if you don't have a solid plan in place for compliance, as well as explore strategies for controlling and/or reducing costs and limiting organizational risk.

Join us for a fast-paced and informative 60-minute roundtable as we discuss the newest disruptors to traditional storage architectures since flash: NVMe-oF and Storage Class Memory.

It was just five years ago that flash technology transformed the traditional storage market forever. Modern flash-first arrays are now the new normal for traditional storage. But will a new shared storage access protocol, NVMe over Fabrics, combined with the advent of storage class memory, prove just as disruptive to traditional storage over the next five years?

As people learn more about Fibre Channel and are curious about NVMe over Fibre Channel (FC-NVMe), more questions arise that relate to how storage solutions are going to be affected by the new technology. Perhaps the least understood aspect of storage networks involves distance solutions, and so FCIA has decided to tackle this issue next.

In this webinar you will learn:

• How does Fibre Channel work over distances?
• What are some of the limitations of distance?
• What happens when you are using FC-NVMe?
• What special equipment do you need?

Come spend time with FCIA and speaker Mark Allen, Technical Marketing Engineer Manager for Cisco, to learn how to plan for distance solutions using Fibre Channel.
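The distance questions above largely come down to buffer-to-buffer (BB) credit flow control: a Fibre Channel sender may only transmit while it holds credits, and a credit is returned only after a frame crosses the link and the receiver's R_RDY makes it back. As a rough sketch (the constants and the `bb_credits_needed` helper below are illustrative assumptions, not figures from the webinar), the credit count needed to keep a long link busy can be estimated in a few lines:

```python
import math

# Illustrative constants; real links vary. These are assumptions for the
# sketch, not values from the webinar.
FIBER_DELAY_US_PER_KM = 5.0   # light in fiber travels roughly 5 us per km
MAX_FRAME_BYTES = 2148        # a full Fibre Channel frame, approximately

def bb_credits_needed(distance_km: float, line_rate_gbps: float) -> int:
    """Estimate BB credits needed to keep a long link fully utilized.

    A credit comes back only after the frame crosses the link and the
    receiver's R_RDY returns, so the sender needs enough credits to cover
    the round-trip time divided by one frame's serialization time.
    """
    round_trip_us = 2 * distance_km * FIBER_DELAY_US_PER_KM
    frame_time_us = (MAX_FRAME_BYTES * 8) / (line_rate_gbps * 1000.0)
    return math.ceil(round_trip_us / frame_time_us) + 1

# Faster links serialize each frame more quickly, so the same distance
# demands proportionally more credits to avoid starving the wire.
for rate in (8, 16, 32):
    print(f"100 km at {rate}GFC: ~{bb_credits_needed(100, rate)} credits")
```

This is why "special equipment" for distance typically means switches and optics with deep credit buffers: doubling either the distance or the line rate roughly doubles the credits required.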

Agile works well for small, co-located teams that can hover around sticky notes on a Kanban board, and collaborate on-demand. But as initiatives become more complex, and teams become dispersed, Agile can be difficult to scale across the enterprise.

What’s missing in Agile practices today is the ability to see the whole picture. With user stories as the source of record for an application, it’s hard to connect all the dots to see the complete functionality of a feature or capability. Think about the new team that will be developing additional features on that capability: where can they learn everything it does today? How will they know the impact their changes will have on that capability, or potentially on other capabilities? How will they collaborate on those changes and get them reviewed in a timely manner? How will your business stakeholders understand the impact of those new changes on the overall business process? Storyteller provides a centralized, reusable, searchable repository of existing business and technical process models and requirements (business, non-functional and regulatory). We also offer rich visualization, collaboration and review functionality. These base capabilities are being used today to address the questions above, and we do it in a way that everyone can understand, from business stakeholders down to developers and testers.

Storyteller is a key player in Enterprise Agile. We bridge the business gaps in the current agile process. For companies struggling with Agile transformation, we provide guardrails around writing user stories and feature decomposition. Project teams can save time and effort by using our user story and test plan generation, which can then be automatically synced over to ALM tools. We are also a key player in the BizDevOps chain, providing that upfront business alignment so that what is produced through “Dev” and delivered and maintained by “Ops” is exactly what “Biz” wanted.

Originally restricted to making better use of physical servers, virtualization has grown to power anything and everything in the modern data center. Along with the technology came the luxury to simplify the provisioning of resources as they are needed.

In this webinar, we will explore how virtualization can be leveraged to quickly and cheaply create servers on-demand and avoid the lengthy delays associated with handling and cabling physical hardware.

Software is eating the world. It is the catalyst that is driving disruptive innovation and has led to marketing-ese such as software-defined data centers, software-defined storage, software-defined networks, and software-defined anything.

In this session, we will discuss how IT professionals can successfully add value to the software-defined ecosystem as they continuously integrate and continuously deliver application services that meet and exceed any customer’s expectation.

Join this webinar to discuss:
- What are the key tenets of software-defined constructs?
- How do I apply a foundational set of skills like monitoring as a discipline to guarantee success as a software-defined professional?

The ICT sector is moving us inexorably towards a software-enabled digital world, but many still fail to understand the power of this trend, how it is going to impact and benefit the ICT industries, and the infrastructures that deliver it.

Among topics discussed will be:
- Present and future impact of Software Defined technologies on IT (challenges and opportunities)
- How Software-Defined is enabling the digital transformation
- Best practices and recommendations on adopting software-defined technologies with the future in mind

This Webinar is aimed at IT professionals and CIOs/CDOs/CTOs seeking to understand more about how software-defined impacts the present and future of enterprises.

With the arrival of open source and software-defined components in the data centre, cost has shifted from licensing and appliances in traditional IT to operations. The challenge in this new era, where every component evolves faster than adoption, learning curves and skill development can keep up, lies in keeping operations efficient while adding innovation, with its inherent complexity.

During this webinar we will go through best practices to navigate that transition and set up your operations teams for the future, while keeping costs contained and services competitive.

In this, the seventh entry in the popular webcast series “Everything You Wanted To Know About Storage But Were Too Proud To Ask,” we look into the mysticism and magic of what happens when you send your data off into the wilderness. Once you click “save,” for example, where does it actually go?

When we start to dig deeper beyond the application layer, we often don’t understand what happens behind the scenes. It’s important to understand multiple aspects of the type of storage our data goes to along with their associated benefits and drawbacks as well as some of the protocols used to transport it.

Many people get nervous when they see so many acronyms, but all too often they come up in conversation, and you’re expected to know all of them. Worse, you’re expected to know the differences between them, and the consequences of using them. Even worse, you’re expected to know what happens when you use the wrong one.

We’re here to help.

It’s an ambitious project, but these terms and concepts are at the heart of where compute, networking and storage intersect. Having a good grasp of these concepts ties in with which type of storage networking to use, and how data is actually stored behind the scenes.

Please join us on August 1st for another edition of the “Too Proud To Ask” series, as we work towards making you feel more comfortable in the strange, mystical world of storage.

After the webcast, check out the Q&A blog http://sniaesfblog.org/?p=643

Cohesity is one of the rising stars in the world of data management. They have flipped the data protection market on its ear. In this CEO Series webcast, Arun Taneja, Founder and Consulting Analyst of Taneja Group, will interview Mohit Aron, CEO of Cohesity, to understand the concept of Hyperconverged Secondary Storage and why it matters to the industry. We will explore the advantages it provides for your data protection, test/dev, and data analytics workloads, and how Cohesity differs from other solutions on the market. It is time to say goodbye to the old, staid methods of protecting data. The traditional methods simply don’t make sense in the new world of big data, multi-cloud and hybrid cloud, and web-scale applications. Join the webcast for a whirlwind tour of new ideas and methods in this space.

Server Message Block (SMB) is the core file-transfer protocol of Windows, macOS and Samba, and has become widely deployed. It’s ubiquitous: a 30-year-old family of network code.

However, the latest iteration, SMB3, is almost unrecognizable when compared to versions only a few years old. Extensive reengineering has led to advanced capabilities that include multichannel, transparent failover, scale-out, and encryption. SMB Direct makes use of RDMA networking, creating a block transport system that provides reliable transport for zettabytes of unstructured data worldwide.

SMB3 forms the basis of hyper-converged and scale-out systems for virtualization and SQL Server. It is available for a variety of hardware devices, from printers and network-attached storage appliances to storage area networks (SANs). It is often the most prevalent protocol on a network, with high-performance data transfers as well as efficient end-user access over wide-area connections.

In this SNIA-ESF Webcast, Microsoft’s Ned Pyle, program manager of the SMB protocol, will discuss the current state of SMB, including:

Any organization that takes a moment to study the data on its primary storage system will quickly realize that the majority (as much as 90 percent) of the data stored there has not been accessed for months, if not years. Moving this data to a secondary tier of storage could free up a massive amount of capacity, eliminating the need for a storage upgrade for years. Performing this analysis regularly is called data management, and proper management of data can not only reduce costs, it can improve data protection, retention and preservation.

Now that you have become acquainted with basic container technologies and the associated storage challenges in supporting applications running within containers in production, let’s take a deeper dive into what differentiates this technology from the virtual machines you are used to. Containers can both complement virtual machines and replace them, as they promise the ability to scale exponentially higher. They can easily be ported from one physical server to another, or from one platform (such as on-premises) to another (such as a public cloud provider like Amazon AWS). In this Webcast, we’ll explore container best practices that address the various challenges around networking, security and logging. We’ll also look at which types of applications lend themselves more easily to a microservice architecture, and which may require additional investment to refactor or re-architect to take advantage of microservices.

After the webcast, check out the Q&A blog http://www.sniacloud.com/?p=233

When the SNIA Ethernet Storage Forum (ESF) last looked at the Ethernet Roadmap for Networked Storage in 2015, we anticipated a world of rapid change. The list of advances in 2016 is nothing short of amazing:

• New adapters, switches, and cables have been launched supporting 25, 50, and 100Gb Ethernet speeds, including support from major server vendors and storage startups
• Multiple vendors have added or updated support for RDMA over Ethernet
• The growth of NVMe flash and the release of the NVMe over Fabrics standard are driving demand for both faster speeds and lower latency in networking
• The growth of cloud, virtualization, hyper-converged infrastructure, object storage, and containers is increasing the popularity of Ethernet as a storage fabric

The world of Ethernet in 2017 promises more of the same. Now we revisit the topic with a look ahead at what’s in store for Ethernet in 2017. With all the incredible advances and learning vectors, SNIA ESF is here to help you keep up. Here are some of the things to keep track of in the upcoming year:
• Learn what is driving the adoption of faster Ethernet speeds and new Ethernet storage models
• Understand the different copper and optical cabling choices available at different speeds and distances
• Debate how other connectivity options will compete against Ethernet for the new cloud and software-defined storage networks
• And finally, look ahead with us at what Ethernet is planning for new connectivity options and faster speeds such as 200 and 400 Gigabit Ethernet

The momentum is strong with Ethernet, and we’re here to help you keep on top of the lightning-fast changes.

After you watch the webcast, check out the Q&A blog http://sniaesfblog.org/?p=586

Come learn about the new capabilities for running IBM Spectrum Virtualize as software defined storage (SDS). This webinar will show how end users, MSPs and CSPs can better optimize block storage infrastructures as well as provide new storage services, such as disaster recovery. We will cover new deployment models for the software and share exciting new use cases around this SDS offering, which will enable hybrid cloud and software defined architectures.

Join a group of experts for a panel discussion on all things Disaster Recovery as-a-Service, Business Continuity, Cloud and Virtualization in the finance world. Representatives from iLand, Behind Every Cloud, and more will deep dive into:

In this part of the series, “Everything You Wanted To Know about Storage But Were Too Proud To Ask,” we’re going to be focusing on the network aspect of storage systems.

As with any technical field, it’s too easy to dive into the jargon of the pieces and expect people to know exactly what you mean. Unfortunately, some of the terms may have alternative meanings in other areas of technology. In this Webcast, we look at some of those terms specifically and discuss them as they relate to storage networking systems.

For people who are familiar with data center technology, whether it be compute, programming, or even storage itself, some of these concepts may seem intuitive and obvious… until you start talking to people who are really into this stuff. This series of Webcasts will be your Secret Decoder Ring to unlock the mysteries of what is going on when you hear these conversations.

After you watch the webcast, check out the Q&A blog http://sniaesfblog.org/?p=588

Virtualization workloads generate many requirements and challenges for IT departments, including high performance, low latency, high availability and the ability to quickly move and reconfigure workloads based on changing demands. This presentation focuses on best practices for employing a wide array of storage features in the Windows Server platform, ranging from the SMB 3.x protocol to data deduplication, clustering, Hyper-V Replica, and many more related features. The presentation will begin with suggestions for determining requirements for different kinds of virtual disks and different business workloads. Based on these requirements, we'll drill down into practical advice on how, when, and why these features can help increase service delivery and reduce costs for virtualized environments of all sizes.

Join Anil Desai, independent consultant and author of over 20 technical books on the Windows Server platform, virtualization, databases and IT management best practices. He has over 20 years of experience in architecting, implementing, and managing IT software and datacenter solutions. He has worked extensively with IT management, development, and database technology. Anil holds many technical certifications and is a twelve-time Microsoft MVP Award recipient (currently Cloud/Datacenter Management).

Come prepared to look at disaster recovery planning with a 360-degree view for the enterprise and SMB space, and walk away with technical ideas you can begin to implement immediately. During this presentation, we will discuss disaster recovery planning considerations and partnerships. We will also walk through technical solutions that use virtualization and storage strategies to provide an approachable DR solution.

Different workloads demand different attributes from their storage. These differences lead some to believe flash storage is only good for certain point use cases, like accelerating databases. But the performance of flash systems leads others to claim a single flash system can support all workloads. The truth, as usual, is somewhere in the middle. Join Storage Switzerland and IBM for this live interactive webinar where we bust another flash myth and help you select the right flash for the right workload for the right reasons.

Virtualization is no longer a passing trend. Organizations deploying it in their servers, desktops, storage and networks are experiencing increased performance and decreased costs, when it is implemented and managed properly. Join this channel to hear leading experts discuss this maturing technology and how you can create your own software-defined data center.