Monitor with Discipline – The Application Stack

It’s apropos that the application sits at the center of disruptive innovation. It is the momentum changer, providing both critical mass and atmosphere-shattering velocity. Any company can disrupt its industry, or an adjacent one, by creating the right application. The stack abides. Does this focus on the application change, given the multitude of shifting possibilities provided by software-defined infrastructure?

In this session, we will discuss how to monitor your application stack with discipline and keep it running optimally.

·How do I manage applications efficiently and effectively in the ever-changing Software Defined Data Center?

·How do I put into practice skills and techniques to ensure acceptable Quality-of-Service (QoS) in a continuous service integration and continuous service delivery environment?

Resources are finite, so deploying them wisely is what differentiates successful cybersecurity organizations from less successful ones. Find out how these successful cybersecurity organizations are structured, what policies they have in place, and what strategies they do, and don’t, follow to protect their enterprise organizations.

Enterprises are developing and buying applications to run everywhere: across multiple clouds, multiple data centers, desktops, mobile devices, and IoT devices. In a multicloud environment, IT needs to take a multipronged approach to securing applications.

We'll explore how organizations approach securing their applications for the multicloud, ranging from changes in the development process to the embrace of security technologies including IAMaaS, microservice authentication, and enterprise secure cloud access and policy enforcement (ESCAPE).

This webinar presents data from Nemertes' in-depth research study of 335 organizations in 11 countries across a range of vertical industries.

What does it mean to be protected and safe? You need the right people and the right technology. This presentation provides a broad introduction to security principles, covering the main aspects of security and defining the terms you must know to have a good grasp of what makes something secure. In this live webcast, we’ll discuss the scope of security, including threats, vulnerabilities, and attacks, and what that means in real storage terms.

The bad news? Threats evolve. Bad actors continue to improve their games.
The good news? Cybersecurity technology is also evolving and improving. This webinar drills down into the emerging technologies that successful cybersecurity organizations are deploying to protect their firms. Find out what works, what's a waste of resources, and how to deploy the technologies that work.

With most IT work being done in the cloud, what does it mean to be successful and what are the characteristics of highly successful cloud enterprises?

We'll dig into what it means to be successful in the cloud and what successful organizations do more of (and less of) than their less successful peers. We'll look across technologies adopted, organizational and operational practices, and vendors embraced.

This webinar presents the highlights of Nemertes' in-depth research study of 335 organizations in 11 countries across a range of vertical industries. Later episodes will discuss security topics as well as focusing in on application development and security.

Learn how to seamlessly move your VMware-based workloads to the cloud. Achieve scale, automation, and choice for your VMware SDDC, available in 50+ data centres across the globe.

Keep your existing tooling investments, such as licenses, skills, and home-grown tools: IBM Cloud for VMware Solutions supports Bring Your Own License (BYOL) and provides you with full root access.

VMware Solutions on IBM Cloud enables the agility and scale of the cloud while allowing you to maintain your existing VMware investment and skills.

To facilitate a cloud-first development process built on the continuous delivery of quality code in small, frequent batches, infrastructure resources must be available on demand, without the need for manual intervention or a team of scriptwriters.

Network isolation in Skytap enables development teams to provision multiple replicas of full applications in parallel, removing configuration drift and environmental issues while streamlining development, test and release.

Build your environment, save as a template, provision on-demand, and adopt new tools and processes. Moving to DevOps for traditional applications begins with Skytap. No other Cloud does it this way, and no other Cloud is as effective in changing the way we view infrastructure.

Join us as we demonstrate the power of one of IBM's most innovative and successful Cloud products in the App Dev and Test space.

As enterprises build practices and operations around Kubernetes, the cloud is becoming a clear choice for quick access to standardized and scalable infrastructure that can set the foundation of a digital transformation strategy. For a CISO or compliance risk officer, maintaining security posture and governance policies during a digital transformation can prove difficult and become a serious blocker for an organization.

Join us as we discuss how the IBM Cloud for VMware Solutions can be the optimal platform for enterprises looking to modernize applications and operations around Kubernetes without sacrificing security policies and full-stack control through our lift, shift and transform methodology.

In this session, we will be discussing how combining VMware and Red Hat OpenShift can pave the way to a cloud-native architecture.

In this session, you will learn how to simplify a disaster recovery implementation by using the advanced features in Veeam that include automation based on schedules, automated tests, and automated reports.

To further accelerate your disaster recovery project, Veeam also features an automated DR plan plus readiness and execution documents. These rich features are offered as pre-built templates to which you can add your customer-specific disaster recovery steps.

In 2019 the balance tipped, and for the first time the majority of enterprise IT workloads are running in the cloud, not in a data center.

Enterprise IT staff need to stop thinking of cloud solutions as islands of function and special cases, and begin pulling it all together into a cohesive multicloud. We'll lay out the major categories of tools and systems and how they fit together, and look at the organizational structures and operational practices needed to support multicloud operations.

This webinar presents the highlights of Nemertes' in-depth research study of 335 organizations in 11 countries across a range of vertical industries. Later episodes will discuss cloud organizations and operational practices, and success metrics and best practices for cloud organizations.

What does it take for enterprise cybersecurity teams to "up their games" to the next level of cybersecurity? What does it mean to be a "successful" cybersecurity organization, and what technologies and practices does it take to become one?

This webinar presents the highlights of Nemertes' in-depth research study of 335 organizations in 11 countries across a range of vertical industries.

We separated the best from the rest, and took an in-depth look into what made the most successful organizations that way. Participants will come away with best practices, tools, technologies, and organizational structures that contribute to success. Most importantly, they'll learn how to measure cybersecurity success and their progress towards it.

Kubernetes is great for running stateless workloads, like web servers. It’ll run health checks, restart containers when they crash, and do all sorts of other wonderful things. So, what about stateful workloads?

This webcast will take a look at when it’s appropriate to run a stateful workload inside the cluster or outside it. We’ll discuss the best options for running a workload like a database in the cloud or in the cluster, and what’s needed to set that up.

We’ll cover:
•Secrets management
•Running a database on a VM and connecting it to Kubernetes as a service
•Running a database in Kubernetes using a `StatefulSet`
•Running a database in Kubernetes using an Operator
•Running a database on a cloud managed service
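To make the "database outside the cluster" pattern concrete, here is a minimal Python sketch of the secrets-management piece: an application reads database credentials from files mounted by a Kubernetes Secret volume, falling back to environment variables, and builds a connection string pointing at a Service name. The mount path, key names, and Service hostname below are illustrative assumptions, not prescribed by Kubernetes.

```python
import os

# Conventional mount point for a Kubernetes Secret volume (path is hypothetical).
SECRET_DIR = "/etc/db-credentials"

def read_credential(key, default=None):
    """Prefer a file mounted from a Secret volume; fall back to an env var."""
    path = os.path.join(SECRET_DIR, key)
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        # Env-var fallback: "username" -> "USERNAME"
        return os.environ.get(key.upper().replace("-", "_"), default)

def build_dsn(host, port=5432, dbname="app"):
    """Assemble a Postgres-style DSN. 'host' would typically be the
    Kubernetes Service fronting the database, whether the database runs
    in-cluster (StatefulSet) or outside (VM or managed service)."""
    user = read_credential("username", "app")
    password = read_credential("password", "")
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"
```

Because the Service name stays stable, the same application code works whether the database lives in the cluster, on a VM, or in a cloud managed service; only the Service definition changes.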

After you watch the webcast, check out our Kubernetes Links & Resources blog at http://bit.ly/KubeLinks

The SNIA EMEA Storage Developer Conference (SDC) will return to Tel Aviv in early February 2020.

SDC EMEA is organised by SNIA, the non-profit industry association responsible for data storage standards and education, and the conference is designed to provide an open and independent platform for technical education and knowledge sharing amongst the local storage development community.

SDC is built by developers – for developers.

This session will offer a preview of what is planned for the 2020 agenda ahead of the call for presentations and will also give potential sponsors the information they need to be able to budget for their participation in the event. If you have attended previously as a delegate, this is a great opportunity to learn more about how you can raise your profile as a speaker or get your company involved as a sponsor. There will be time allocated during the webcast to ask questions about the options available. Companies who have significant storage development teams will learn why this conference is valuable to the local technical community and why they should be directly engaged.

Real-world digital workloads often behave very differently from what might be expected. The equipment used in a computing system may function differently than anticipated. Unknown quirks in complicated software, and operations running alongside the workload, may be doing more or less than the user initially supposed. To truly understand what is happening, the right approach is to test and monitor the systems’ behaviors as real code is executed. By using measured data, designers, vendors, and service personnel can pinpoint the actual limits and bottlenecks that a particular workload is experiencing. Join the SNIA Solid State Storage Special Interest Group to learn how to be a part of the real-world workload revolution.

Swordfish School: Introduction to SNIA Swordfish™ Features and Profiles
Ready to ride the wave to what’s next in storage management? As a part of an ongoing series of educational materials to help speed your SNIA Swordfish™ implementation in this Swordfish School webcast, Storage standards expert Richelle Ahlvers (Broadcom Inc.) will provide an introduction to the Features and Profiles concepts, describe how they work together, and talk about how to use both Features and Profiles when implementing Swordfish.
Features are used by implementations to advertise to clients what functionality they support. Profiles are detailed descriptions, down to the individual property level, of the functionality required for implementations to advertise Features. Profiles are used for in-depth analysis during development, making it easy for clients to determine which Features to require for different configurations; they are also used to determine certification/conformance requirements.
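The Feature/Profile split lends itself to a simple client-side check: fetch what the service advertises, then compare it against what the client requires. The sketch below illustrates that pattern only; the payload shape and feature names are simplified assumptions for demonstration, not the normative Swordfish schema.

```python
import json

# Simplified, illustrative service payload. A real Swordfish service would be
# queried over its Redfish-style REST API; the property names here are
# assumptions for demonstration only.
SERVICE_ROOT = json.loads("""
{
  "Id": "RootService",
  "Features": ["BlockStorage", "FileStorage"]
}
""")

def missing_features(required, advertised):
    """Return the required features the service does not advertise."""
    return sorted(set(required) - set(advertised))

# A client that needs block storage and replication checks before proceeding.
missing = missing_features(["BlockStorage", "Replication"],
                           SERVICE_ROOT["Features"])
```

In practice the Profile documents tell an implementer exactly which properties must be populated before a Feature may be advertised, so a check like this is meaningful rather than aspirational.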

About SNIA Swordfish™
Designed with IT administrators and DevOps engineers in mind to provide simplified and scalable storage management for data center environments, SNIA Swordfish™ is a standard that defines the management of data storage and services as an extension to the Distributed Management Task Force’s (DMTF) Redfish application programming interface specification. Unlike proprietary interfaces, Swordfish is open and easy-to-adopt with broad industry support.
Your one-stop shop for everything SNIA Swordfish is https://www.snia.org/swordfish.

- Challenges businesses are facing today with regards to security and compliance in the cloud
- Improvements that can be made today to ransomware prevention, detection, and recovery
- Long-term security and compliance strategies
- Quantifiable outcomes businesses can expect to see with a unified system of records

Moderated by Paige Bidgood, EMEA Community Lead - IT Security & GRC, BrightTALK

It's time to rethink data loss prevention. Today's progressive, employee-focused, idea-rich organizations are looking for new, less restrictive ways to protect their data.

Watch this interactive 1-2-1 discussion where Richard Agnew, VP EMEA, will share insights from the field, including:

- How Code42 differs from legacy DLP vendors and why it is beneficial for Code42 customers
- How Code42 is addressing insider threats in cybersecurity
- Why organisations should consider adding Code42 to their security technology stack
- Why visibility is key in addressing the new threats organisations are facing in 2019

Code42 Next-Gen DLP collects, indexes and analyzes all files and file activity, giving our customers full visibility to everywhere their data lives and moves — from endpoints to the cloud. With that kind of oversight, security teams can quickly and easily monitor, investigate, preserve and recover data without the complex classification rules and policies that ultimately block employee collaboration and productivity. Native to the cloud, Code42 Next-Gen DLP works without expensive hardware requirements and deploys in a matter of days. Today, more than 50,000 organizations worldwide rely on Code42 to protect their data from loss.

Traditionally, much of the IT infrastructure that we’ve built over the years can be divided fairly simply into storage (the place we save our persistent data), network (how we get access to the storage and get at our data) and compute (memory and CPU that crunches on the data). In fact, so successful has this model been that a trip to any cloud services provider allows you to order (and be billed for) exactly these three components.

We build effective systems in a cost-optimal way by using appropriate quantities of expensive and fast memory (DRAM for instance) to cache our cheaper and slower storage. But currently fast memory has no persistence at all; it’s only storage that provides the application the guarantee that storing, modifying or deleting data does exactly that.

Memory and storage differ in other ways. For example, we load from memory to registers on the CPU, perform operations there, and then store the results back to memory using byte addresses. This load/store model is different from storage, where we tend to move data back and forth between memory and storage in large blocks, using an API (application programming interface).
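The contrast between byte-addressed load/store and block-oriented I/O can be sketched in a few lines. The snippet below uses a memory-mapped file to mimic byte-addressable access alongside conventional whole-block reads and writes; it is an analogy for the programming-model difference, not real persistent-memory hardware access.

```python
import mmap
import os
import tempfile

BLOCK = 4096  # typical storage block size

# A one-block backing file stands in for a storage device.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * BLOCK)

# Block-style access: read the whole block, modify one byte, write it all back.
os.lseek(fd, 0, os.SEEK_SET)
buf = bytearray(os.read(fd, BLOCK))
buf[10] = 0x41
os.lseek(fd, 0, os.SEEK_SET)
os.write(fd, buf)

# Load/store-style access: map the file and touch single bytes in place,
# the way byte-addressable (persistent) memory is used.
with mmap.mmap(fd, BLOCK) as m:
    m[11] = 0x42       # a one-byte "store"
    first = m[10]      # a one-byte "load"
    second = m[11]

os.close(fd)
os.unlink(path)
```

Note the asymmetry: the block path moves 4 KiB to change one byte, while the mapped path changes one byte directly. With true persistent memory, that byte-granular store would itself be durable, which is exactly what makes PM sit between storage and memory.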

New memory technologies are challenging these assumptions. They look like storage in that they’re persistent, if a lot faster than traditional disks or even Flash based SSDs, but we address them in bytes, as we do memory like DRAM, if more slowly. Persistent memory (PM) lies between storage and memory in latency, bandwidth and cost, while providing memory semantics and storage persistence. In this webcast, SNIA experts will discuss:

•Traditional uses of storage and memory as a cache
•How can we build and use systems based on PM?
•What would a system with storage, persistent memory and DRAM look like?
•Do we need a new programming model to take advantage of PM?
•How we might take better advantage of this new technology

After you watch the webcast, check out the Q&A blog at http://bit.ly/32F2l98.

In the FCIA webcast “Protocol Analysis for High-Speed Fibre Channel Fabrics” experts covered the basics on protocol analysis tools and how to incorporate them into the “best practices” application of SAN problem solving.
Our experts return for this 201 course which will provide a deeper dive into how to interpret the output and results from the protocol analyzers. We will also share insight into using signal jammers and how to use them to correlate error conditions to be able to formulate real time solutions.

Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that bypass the traditional network stack (e.g. FCoE, iWARP, NVMe over Fabrics) complicate analysis, so a well-constructed “collage” of best practices and effective, efficient analysis tools must be developed. In addition, in-depth knowledge of how to decipher the analytical results and then determine potential solutions is critical.

With today’s pressures on lowering our carbon footprint and cost constraints within organizations, IT departments are increasingly in the front line to formulate and enact an IT strategy that greatly improves energy efficiency and the overall performance of data centers.

This channel will cover the strategic issues of ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management
- Data center design, facilities management and convergence
- Cooling technologies and thermal management
- And much more