Approaching Disaster Recovery

Disaster recovery is hard to address because it is both a risk-management exercise and a potential competitive advantage: it can be a problem, but also an opportunity. First, disaster recovery must be discussed not only within the IT department; the complete process requires collaboration at multiple levels and among different departments. This means you need to understand the cost of downtime for your business. Methods such as Business Impact Analysis (BIA) can help your organization define goals, purposes, and expected benefits. Disaster recovery is a continuous process that involves the entire environment.

The “Great Storage Debates” webcast series continues, this time on FCoE vs. iSCSI vs. iSER. Like past “Great Storage Debates,” the goal of this presentation is not to have a winner emerge, but rather provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.

One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.

Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet) which allows FC protocols over Ethernet and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.
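The encapsulation idea behind iSCSI can be sketched in a few lines: a SCSI command descriptor block (CDB) is wrapped in a header and sent over an ordinary TCP byte stream. The layout below is a deliberately simplified toy, not the real 48-byte Basic Header Segment defined by RFC 7143; the opcode value and all field choices are illustrative.

```python
import struct

def encapsulate_scsi_command(cdb: bytes, lun: int, task_tag: int) -> bytes:
    """Toy illustration of the idea behind iSCSI: wrap a SCSI CDB in a
    header so it can travel over a TCP byte stream. This is NOT the real
    iSCSI PDU format; the field layout here is simplified for illustration."""
    OPCODE_SCSI_CMD = 0x01  # hypothetical simplified opcode
    header = struct.pack(
        ">BBHQI",            # opcode, flags, CDB length, LUN, task tag
        OPCODE_SCSI_CMD, 0x80, len(cdb), lun, task_tag,
    )
    return header + cdb      # header followed by the command itself

# A 6-byte READ(6)-style CDB as the payload (values illustrative).
pdu = encapsulate_scsi_command(bytes([0x08, 0, 0, 0, 1, 0]), lun=0, task_tag=1)
print(len(pdu))  # 16-byte toy header + 6-byte CDB = 22
```

In a real initiator this framing (plus login, sequencing, and digests) is handled by the iSCSI stack; the point is only that block storage commands become ordinary network payloads.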

That leads to several questions about FCoE, iSCSI and iSER:

•If we can run various network storage protocols over Ethernet, what differentiates them?
•What are the advantages and disadvantages of FCoE, iSCSI and iSER?
•How are they structured?
•What software and hardware do they require?
•How are they implemented, configured and managed?
•Do they perform differently?
•What do you need to do to take advantage of them in the data center?
•What are the best use cases for each?

Join our SNIA experts as they answer all these questions and more on the next Great Storage Debate.

Are you a control freak? Have you ever wondered what was the difference between a storage controller, a RAID controller, a PCIe Controller, or a metadata controller? What about an NVMe controller? Aren’t they all the same thing?

In part Aqua of the “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series, we’re going to be taking an unusual step of focusing on a term that is used constantly, but often has different meanings. When you have a controller that manages hardware, there are very different requirements than a controller that manages an entire system-wide control plane. From the outside looking in, it may be easy to get confused. You can even have controllers managing other controllers!
Here we’ll be revisiting some of the pieces we talked about in Part Chartreuse [https://www.brighttalk.com/webcast/663/215131], but with a bit more focus on the variety we have to play with:
•What do we mean when we say “controller?”
•How are the systems being managed different?
•How are controllers used in various storage entities: drives, SSDs, storage networks, and software-defined storage?
•How do controller systems work, and what are the trade-offs?
•How do storage controllers protect against Spectre and Meltdown?
Join us to learn more about the workhorse behind your favorite storage systems.

You won’t want to miss the opportunity to hear leading data storage experts provide their insights on prominent technologies that are shaping the market. With the exponential rise in demand for high capacity and secured storage systems, it’s critical to understand the key factors influencing adoption and where the highest growth is expected. From SSDs and HDDs to storage interfaces and NAND devices, get the latest information you need to shape key strategic directions and remain competitive.

The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification, using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting. Join this session to receive an overview of Swordfish, including the new functionality added in version 1.0.6, released in March 2018.
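Because Swordfish resources are plain JSON over REST, a client can work with them using nothing but standard tooling. The sketch below parses a hand-written, Swordfish-style volume resource; the property names follow Redfish/Swordfish conventions, but the payload is a mock, not a capture from a real service.

```python
import json

# Illustrative (hand-written) Swordfish-style volume resource. Field names
# follow Redfish/Swordfish conventions, but this payload is a mock, not a
# response captured from a real management service.
sample = """{
  "@odata.id": "/redfish/v1/Storage/1/Volumes/Volume1",
  "Name": "Volume1",
  "CapacityBytes": 1099511627776
}"""

volume = json.loads(sample)
capacity_tib = volume["CapacityBytes"] / 2**40
print(f'{volume["Name"]}: {capacity_tib:.0f} TiB')  # Volume1: 1 TiB
```

The same pattern (HTTP GET, parse JSON, read well-known properties) applies to any resource in the Swordfish model.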

When it comes to storage, a byte is a byte is a byte, isn’t it? One of the truths about simplicity is that scale makes everything hard, and with that comes complexity. And when we’re not processing the data, how do we store it and access it?

The only way to manage large quantities of data is to make it addressable in larger pieces, above the byte level. For that, we’ve designed sets of data management protocols that help us do several things: address large lumps of data by some kind of name or handle, organize it for storage on external storage devices with different characteristics, and provide protocols that allow us to programmatically write and read it.

In this webcast, we'll compare three types of data access: file, block and object storage, and the access methods that support them. Each has its own use cases, and advantages and disadvantages; each provides simple to sophisticated data management; and each makes different demands on storage devices and programming technologies.

Join us as we discuss and debate:

Storage devices
- How different types of storage drive different management & access solutions
Block
- Where everything is in fixed-size chunks
- SCSI and SCSI-based protocols, and how FC and iSCSI fit in
File
- When everything is a stream of bytes
- NFS and SMB
Object
- When everything is a blob
- HTTP, key value and RESTful interfaces
- When files, blocks and objects collide
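The contrast between the three access models outlined above can be sketched as toy in-memory stores. All class and method names here are illustrative, not tied to any real protocol implementation; the point is how the unit of addressing differs.

```python
BLOCK_SIZE = 512  # fixed-size chunks, as in SCSI-style block storage

class BlockDevice:
    """Block: addressed by logical block number, in fixed-size chunks."""
    def __init__(self, nblocks):
        self.blocks = [bytes(BLOCK_SIZE)] * nblocks
    def write(self, lba, data):
        assert len(data) == BLOCK_SIZE  # partial blocks are not allowed
        self.blocks[lba] = data
    def read(self, lba):
        return self.blocks[lba]

class FileStore:
    """File: addressed by path; data is a byte stream of any length."""
    def __init__(self):
        self.files = {}
    def write(self, path, data):
        self.files[path] = self.files.get(path, b"") + data  # append to stream
    def read(self, path):
        return self.files[path]

class ObjectStore:
    """Object: addressed by key; whole blobs are put and got atomically."""
    def __init__(self):
        self.objects = {}
    def put(self, key, blob):
        self.objects[key] = blob
    def get(self, key):
        return self.objects[key]

dev = BlockDevice(8); dev.write(0, b"\xab" * BLOCK_SIZE)
fs = FileStore(); fs.write("/logs/app.log", b"hello "); fs.write("/logs/app.log", b"world")
objs = ObjectStore(); objs.put("photos/cat.jpg", b"<jpeg bytes>")
```

Everything else (SCSI vs. NFS/SMB vs. HTTP, caching, naming, metadata) builds on which of these units of access the storage exposes.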

After you watch the webcast, check out the Q&A blog: https://wp.me/p1kTSa-bh

Containers can make it easier for developers to know that their software will run, no matter where it is deployed. What do customers, storage developers, and the industry want to see to fully unlock the potential of persistent memory in a container environment? This presentation will discuss how persistent memory, a revolutionary technology, will boost the performance of next-generation applications and libraries packaged into containers.

You’ll learn:
•What SNIA is doing to advance persistent memory
•What the ecosystem enablement efforts are around persistent memory solutions
•How NVDIMMs are paving the way for plug-n-play adoption into container environments

About the presenter:
Arthur is Co-Chair of the SNIA Persistent Memory and NVDIMM Special Interest Group, which accelerates the awareness and adoption of Persistent Memories and NVDIMMs for computing architectures.

As a Director of Product Marketing at SMART Modular Technologies, Arthur has been driving new product launches and business development activities at SMART since 1998. Prior to SMART, Arthur worked as a product manager at Hitachi Semiconductor America. While there, his focus was on DRAM, SRAM, and Flash technologies.

Arthur holds an MBA from San Francisco State University and an MS from Arizona State University.

Watson is a computer system capable of answering questions posed in natural language. Watson was named after IBM's first CEO, Thomas J. Watson. The computer system was specifically developed to answer questions on the quiz show Jeopardy! (where it beat its human competitors) and was then used in commercial applications, the first of which was helping with lung cancer treatment.

NetApp is now using IBM Watson in Elio, a virtual support assistant that responds to queries in natural language. Elio is built using Watson’s cognitive computing capabilities. These enable Elio to analyze unstructured data by using natural language processing to understand grammar and context, understand complex questions, and evaluate all possible meanings to determine what is being asked. Elio then reasons and identifies the best answers to questions with help from experts who monitor the quality of answers and continue to train Elio on more subjects.

Elio and Watson represent an innovative and novel use of large quantities of unstructured data to help solve problems, on average, four times faster than traditional methods. Join us at this webcast, where we’ll discuss:

Benchmarking storage performance is both an art and a science. In this 5th installment of the SNIA Ethernet Storage Forum’s “Storage Performance Benchmarking” series, our experts take on optimizing performance for various workloads. Attendees will gain an understanding of workload profiles and their characteristics for common Independent Software Vendor (ISV) applications and learn how to identify application workloads based on I/O profiles to better understand the implications on storage architectures and design patterns. This webcast will cover:
•An introduction to benchmarking storage performance of workloads
•Workload characteristics
•Common Workloads (OLTP, OLAP, VMware, etc.)
•Graph fun!
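As a warm-up for the kind of workload characterization covered above, the sketch below generates a synthetic I/O trace and summarizes its profile. The parameters (a roughly 70/30 random read/write mix as an OLTP-like stand-in) are common rules of thumb, not measurements from any real capture; real traces come from tools such as fio or blktrace.

```python
import random

def synthetic_trace(n, read_pct, sequential, block=8192, seed=7):
    """Generate a toy I/O trace of (op, offset, size) tuples.
    Parameters are illustrative, not from a real workload capture."""
    rng = random.Random(seed)
    offset, trace = 0, []
    for _ in range(n):
        op = "read" if rng.random() < read_pct else "write"
        if sequential:
            offset += block                          # streaming access
        else:
            offset = rng.randrange(0, 1 << 30, block)  # random access
        trace.append((op, offset, block))
    return trace

def profile(trace):
    """Summarize a trace the way a workload profile does: read/write mix."""
    reads = sum(1 for op, _, _ in trace if op == "read")
    return {"read_pct": reads / len(trace), "io_count": len(trace)}

oltp_like = synthetic_trace(10_000, read_pct=0.7, sequential=False)
print(profile(oltp_like)["io_count"])  # 10000
```

Comparing profiles like this (read mix, randomness, block size) is what lets you map an application's I/O pattern onto a storage design.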

After you watch the webcast, check out the Q&A blog http://bit.ly/2GME6OR

In the enterprise, block storage typically handles the most critical applications such as database, ERP, product development, and tier-1 virtualization. The dominant connectivity option for this has long been Fibre Channel SAN (FC-SAN), but recently many customers and block storage vendors have turned to iSCSI instead. FC-SAN is known for its reliability, lossless nature, 2x FC speed bumps, and carefully tested interoperability between vendors. iSCSI is known for running on ubiquitous Ethernet networks, 10x Ethernet speed bumps, and supporting commodity networking hardware from many vendors.

As the storage world moves to more flash and other non-volatile memory, more cloud, and more virtualization (or more containers), it’s time to revisit one of the great IT debates: Should you deploy Fibre Channel or iSCSI? Attend this SNIA Ethernet Storage Forum webcast to learn:
•Will Fibre Channel or iSCSI deliver faster performance? Does it depend on the workload?
•How is the wire speed race going between FC and iSCSI? Does anyone actually run iSCSI on 100GbE? When will 128Gb Fibre Channel arrive?
•Do Linux, Windows, or hypervisors have a preference?
•Is one really easier [to install/manage] than the other, or are they just different?
•How does the new NVMe over Fabrics protocol affect this debate?

Join SNIA experts as they compare FC vs. iSCSI and argue, in an energetic yet friendly way, about the differences and merits of each.

After you watch the webcast check out the Q&A blog http://sniaesfblog.org/?p=680

This webcast will provide a short tutorial-style briefing on the EU General Data Protection Regulation (GDPR), and then delve into the roles and responsibilities of the Data Protection Officer (DPO). After the briefing, a panel discussion with audience Q&A will take place.

In a recent survey of enterprise hybrid cloud users, the Evaluator Group saw that nearly 60% of respondents indicated that lack of interoperability is a significant technology-related issue that they must overcome in order to move forward. In fact, lack of interoperability was chosen above public cloud security and network security as significant inhibitors. This webcast looks at enterprise hybrid cloud objectives and barriers with a focus on cloud interoperability within the storage domain and the SNIA’s Cloud Storage Initiative to promote interoperability and portability of data stored in the cloud.

ESG, in cooperation with SNIA Europe, will present the key findings from the 2017 European Storage Research Report. The content will focus on key areas of technology spending and forecasts, as well as highlighting customer reaction to the adoption of new storage technologies.

Public, private and hybrid cloud are nothing new, but protecting sensitive data stored on these servers is still of the utmost concern. The NSA is no exception.

It recently became publicized that the contents of a highly sensitive hard drive belonging to the NSA (National Security Agency) were compromised. The virtual disk containing the sensitive data came from an Army Intelligence project and was left on a public AWS (Amazon Web Services) storage server, not password-protected.

This is one of at least five leaks of NSA-related data in recent years, not to mention the significant number of breaches and hacks we’ve experienced lately, including Yahoo!, Equifax, WannaCry, Petya, and more.

The culprit in this case? Unprotected storage buckets. They have played a part in multiple other recent exposures, and concern is on the rise. When it comes to storing data on public cloud servers like AWS, Azure, Google Cloud, Rackspace and more, what are the key responsibilities of Storage Architects and Engineers, CIOs and CTOs to avoid these types of data leaks?

Tune in with Chris Vickery, Director of Cyber Risk Research at UpGuard and the one who discovered the leak, along with George Crump, Chief Steward, Storage Switzerland, David Linthicum, Cloud Computing Visionary, Author & Speaker, Charles Goldberg, Sr. Director of Product Marketing, Thales e-Security, and Mark Carlson, Co-Chair, SNIA Technical Council & Cloud Storage Initiative, for a live panel discussion on this ever-important topic.

SAS (Serial Attached SCSI) doubles in speed with the release of each new generation. The newest speed bump is 24G SAS, with end-user products anticipated in 2019. In addition to effectively doubling the speed of the current 12Gb/s SAS, 24G SAS has optimizations for both SSD and HDD. The end result is a highly scalable, highly flexible technology that optimizes use of the storage devices released today. This presentation provides an overview of why 24G SAS will be the protocol of choice for all-flash deployments and data centers, as well as for tiered or cached systems with both HDD and SSD components.

The ecosystem of Persistent Memory is here. Are you on board? Don’t miss this “fundamentals” webcast on what you need to know about perhaps the most significant change to computer architecture since the transistor. Learn how system memory and storage are uniting into a single entity, how SNIA is contributing to addressing persistent memory via a programming model, and how persistent memory is being implemented today via NVDIMMs.

Come hear what Real World Storage Workloads (RWSWs) are and why they are so important to Datacenter server performance. See SNIA SSSI Reference IO Captures on Testmyworkload.com that are being used to develop new SNIA Technical Specifications. After an overview of RWSWs and their key metrics, see an analysis of a 2,000 Outlet Retail Web Portal 24-Hr SQL Server workload. The same SQL Server workload is then used to test four Datacenter SSDs and a SAS HDD.

We’re all accustomed to transferring money from one bank account to another; a credit to the payer becomes a debit to the payee. But that model uses a specific set of sophisticated techniques to accomplish what appears to be a simple transaction. We’re also aware of how today we can order goods online, or reserve an airline seat over the Internet. Or even simpler, we can update a photograph on Facebook. Can these applications use the same models, or are new techniques required?

One of the more important concepts in storage is the notion of transactions, which are used in databases, financials, and other mission critical workloads. However, in the age of cloud and distributed systems, we need to update our thinking about what constitutes a transaction. We need to understand how new theories and techniques allow us to undertake transactional work in the face of unreliable and physically dispersed systems. It’s a topic full of interesting concepts (and lots of acronyms!). In this webcast, we’ll provide a brief tour of traditional transactional systems and their use of storage, we’ll explain new application techniques and transaction models, and we’ll discuss what storage systems need to look like to support these new advances.

And yes, we’ll explain all the acronyms and nomenclature too.

You will learn:

•A brief history of transactional systems from banking to Facebook
•How the Internet and distributed systems have changed how we view transactions
•An explanation of the terminology, from ACID to CAP and beyond
•How applications, networks & particularly storage have changed to meet these demands
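The bank-transfer example from the opening paragraph is easy to demonstrate with any ACID-compliant store. The sketch below uses Python's built-in sqlite3: the debit and the credit either both commit or both roll back, so the books always balance.

```python
import sqlite3

# Minimal sketch of the classic bank-transfer transaction. Table and
# account names are illustrative. SQLite's ACID guarantees ensure the
# debit and credit are applied together or not at all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("payer", 100), ("payee", 0)])

def transfer(conn, src, dst, amount):
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
            (bal,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()
            if bal < 0:
                raise ValueError("insufficient funds")  # forces rollback
    except ValueError:
        pass  # transaction rolled back; balances unchanged

transfer(conn, "payer", "payee", 30)   # succeeds
transfer(conn, "payer", "payee", 500)  # rolls back
print(dict(conn.execute("SELECT * FROM accounts")))  # {'payer': 70, 'payee': 30}
```

Distributed systems complicate exactly this picture: once payer and payee live on different machines, a single local transaction no longer suffices, which is where the newer models and trade-offs discussed in the webcast come in.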

Ten years ago, the SNIA 100-Year Archive Task Force developed a survey to determine the requirements for long-term digital information retention in the data center - requirements needed to frame the definition of best practices and solutions to the retention and preservation problems unique to large, scalable data centers.

Now in 2017, SNIA presents a new survey developed to assess the following details:

1. Who needs to retain long term information
2. What information needs to be retained and for how long
3. If organizations are able to meet their retention needs
4. How long term information is stored, secured and preserved

Join us as we see where we were and where we need to be in the preservation and retention of data.

Storage can be something of a “black box,” a monolithic entity that is at once mysterious and scary. That’s why we created “The Everything You Wanted To Know About Storage But Were Too Proud to Ask” webcast series. So far, we’ve explored various and sundry aspects of storage, focusing on “the naming of the parts.” Our goal has been to break down some of the components of storage and explain how they fit into the greater whole.

This time, however, we’re going to open up Pandora’s Box and peer inside the world of storage management, uncovering some of the key technologies that are used to manage devices, storage traffic, and storage architectures. In particular, we’ll be discussing:

There’s so much to say on each of these subjects we could do a full webcast on any one of them, but for a quick overview of many of the technologies that affect storage in one place, we think you will find your time has been well spent.

Check out the Q&A blog from this webcast http://sniaesfblog.org/?p=658

The use of cloud object storage is ramping up sharply, especially in the public cloud, to reduce capital budgets and operating expenses. However, enterprises are challenged with legacy applications that do not support standard protocols to move data to and from the cloud.

Enterprises have developed strategies specific to the public cloud for Data Protection, Archive, Application development, DevOps, Big Data Analytics and Cognitive Artificial Intelligence. However, these same organizations have legacy applications and infrastructure that are not cloud friendly.

Object storage is a secure, simple, scalable, and cost-effective means of managing the explosive growth of unstructured data enterprises generate every day. Gateways enable SMB and NFS data transfers to be converted to Amazon’s S3 protocol while optimizing data with deduplication and providing QoS efficiency on the data path to the cloud.
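The deduplication step mentioned above can be illustrated with a toy fixed-size-chunk scheme: each unique chunk is stored once under its content hash, and the original data is represented as a list of chunk keys. Chunk size and all names here are illustrative; real gateways typically use more sophisticated (often variable-size) chunking.

```python
import hashlib

def chunk_and_dedupe(data: bytes, chunk_size: int = 4096):
    """Toy fixed-size-chunk deduplication, the kind of optimization a
    gateway can apply before sending data to object storage. Each unique
    chunk is stored once, keyed by its SHA-256 digest; the file becomes a
    list of chunk keys (a "recipe")."""
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)  # store each unique chunk only once
        recipe.append(key)
    return store, recipe

# Eight identical 4 KiB chunks deduplicate down to one stored chunk.
store, recipe = chunk_and_dedupe(b"\x00" * 4096 * 8)
print(len(recipe), len(store))  # 8 1
```

Only the unique chunks need to cross the wire to the cloud, which is where the bandwidth and capacity savings come from.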

This webcast will highlight the market trends toward the adoption of object storage and the use of gateways to execute a cloud strategy, the benefits of object storage when gateways are deployed, and the use cases that are best suited to leverage this solution.

Join this webcast to learn:
•The benefits of object storage when gateways are deployed
•Primary use cases for using object storage and gateways in private, public or hybrid cloud
•How gateways can help achieve the goals of your cloud strategy without retooling your on-premise infrastructure and applications

The Storage Networking Industry Association (SNIA) is a non-profit organization made up of member companies spanning information technology. A globally recognized and trusted authority, SNIA’s mission is to lead the storage industry in developing and promoting vendor-neutral architectures, standards and educational services that facilitate the efficient management, movement and security of information.