App teams are excited about using microservices architectures and containers. But running a few containers in a development environment with a handful of open source tools is very different from running large clusters with thousands of containers supporting production applications. In this three-part tutorial, Avi Networks CTO Ranga Rajagopalan explains, step by step and with examples, the services needed and the practical considerations for deploying production-ready, container-based applications.

Episode 1:
In this episode, Ranga will cover the concept of a service mesh for an OpenShift/Kubernetes cluster, along with the following:
- The need for an ingress controller and an intra-cluster traffic manager
- How to deploy a service mesh in a Kubernetes cluster
- Real-life examples of a service mesh deployment
- Best practices and lessons learned from production deployments
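To make the intra-cluster traffic manager item above concrete, here is a minimal sketch of the core job such a component performs: picking a backend endpoint for a named service, round-robin. The service names and pod addresses are hypothetical; a real service mesh or ingress controller layers health checks, TLS, and policy on top of this and learns endpoints from the cluster API rather than a static table.

```python
from itertools import cycle

# Hypothetical routing table: service name -> pod endpoints.
# In a real cluster these would be discovered from the Kubernetes API.
ROUTES = {
    "cart":    ["10.0.1.4:8080", "10.0.1.5:8080"],
    "catalog": ["10.0.2.7:8080"],
}

# One round-robin iterator per service.
_rr = {svc: cycle(endpoints) for svc, endpoints in ROUTES.items()}

def pick_backend(service: str) -> str:
    """Return the next endpoint for a service, round-robin."""
    if service not in _rr:
        raise KeyError(f"unknown service: {service}")
    return next(_rr[service])
```

Every request to the "cart" service alternates between its two endpoints, which is exactly the load-spreading behavior the webinar's traffic-manager discussion builds on.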

Register for the upcoming episodes in this tutorial series for on-demand access to the videos.
Episode 2: https://attendee.gotowebinar.com/register/7441307064554961155
Episode 3: https://attendee.gotowebinar.com/register/6587591261168317699

About Ranga Rajagopalan:
Over the 15 years prior to co-founding Avi Networks, Ranga was an architect and developer of several high-performance distributed operating systems as well as networking and storage data center products. Before his current role as CTO, he was Senior Director at Cisco's Data Center business unit, responsible for platform software on the Nexus 7000 product line. He joined Cisco through the acquisition of Andiamo, where he was one of the lead architects for the SAN-OS operating system. Ranga began his career at SGI as an IRIX kernel engineer for the Origin series of ccNUMA servers.

Enterprises deploying cloud-native applications are ditching traditional infrastructure approaches. They realize that the need to order, connect, rack, and stack dedicated hardware appliances for various network functions slows down application deployments and updates.

Cisco Systems and Avi Networks have teamed up to solve one of the most common automation challenges in the application delivery cycle. Cisco CSP 2100 is a robust network function virtualization (NFV) platform and is part of the Cisco Secure Agile Exchange solution. The Avi Vantage Platform delivers multi-cloud load balancing and adaptive per-app services using a scalable, software-defined architecture.

Virtual appliances for load balancing have existed since the advent of virtualization and were long thought of as the answer for software-driven infrastructure. Yet they inherit most of the architectural challenges of legacy solutions, including limited scalability, lack of central management and orchestration, and performance limitations. What is needed instead is an application delivery architecture based on software-defined principles that logically separates the control plane from the data plane that delivers the application services.

Join us for this 30-minute webinar with Q&A to learn how to overcome the challenges of using virtual load balancers.

Production-ready applications implemented on OpenShift need networking services such as load balancing within and across clusters, service discovery, application performance monitoring, security, and visibility into service dependencies. Hardware or virtual appliance-based load balancers are not built to handle the dynamic and distributed nature of microservices applications, while lightweight proxies lack the analytics, scale, security, and enterprise-class load balancing features that enterprises expect for production deployments.

Join Avi Networks and Red Hat for this informative session on important OpenShift and container networking concepts and how you can create robust microservices applications.

You will learn:

- How to set up ready-to-use local and global load balancing services
- How to perform service discovery to map service domain names to virtual IPs
- How to visualize microservices with dependency graphs
- How to deliver monitoring, analytics, and security services
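The service-discovery bullet above reduces to maintaining a mapping from service domain names to virtual IPs (VIPs). The sketch below is a minimal in-memory version of that idea; the names and addresses are invented for illustration, and a production controller would publish these records to DNS rather than hold them in a dictionary.

```python
# Minimal service-discovery sketch: service domain name -> virtual IP (VIP).
# Names and addresses are hypothetical; a real controller syncs these to DNS.

registry: dict = {}

def register(fqdn: str, vip: str) -> None:
    """Publish (or update) the VIP for a service domain name."""
    registry[fqdn] = vip

def resolve(fqdn: str) -> str:
    """Look up the VIP for a service, as a DNS query would."""
    return registry[fqdn]

# A new service comes up and registers its VIP.
register("cart.shop.example.com", "10.10.0.20")
```

The point of the abstraction: clients address services by stable name, while the VIP behind the name can move or scale without any client change.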

Many enterprises embarking on OpenStack projects or data center automation initiatives are not aware of potential pitfalls during their implementation. Networking and application services represent one such area that can trip up even the most experienced network and cloud teams.

In this webinar, Nate Baechtold, Enterprise Architect at EBSCO Information Services - a pioneer and successful early adopter of OpenStack - will share his experience and information that he wishes he had before he started his OpenStack journey.

You will learn:
- Best practices and lessons learned from EBSCO’s successful Red Hat OpenStack deployment
- Load balancing (LBaaS) and self-service considerations within the OpenStack environment and how to meet performance and availability requirements
- Building a CI/CD delivery model with blue-green deployments on top of OpenStack and software-defined load balancers
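The blue-green deployment bullet above can be sketched as a weight flip on a software-defined load balancer: all traffic shifts from the "blue" pool to the "green" pool by changing pool weights, and rollback is the same flip in reverse. The pool names and weighting scheme here are illustrative, not a specific product API.

```python
import random

# Two pools of backends; weights control each pool's share of traffic.
pools = {
    "blue":  {"weight": 100, "backends": ["blue-1", "blue-2"]},
    "green": {"weight": 0,   "backends": ["green-1", "green-2"]},
}

def cut_over(to: str) -> None:
    """Shift all traffic to one pool (blue-green switch); rollback is the same call."""
    for name, pool in pools.items():
        pool["weight"] = 100 if name == to else 0

def route() -> str:
    """Pick a backend, weighted by pool. With one pool at 100%, it gets all traffic."""
    names = list(pools)
    weights = [pools[n]["weight"] for n in names]
    chosen = random.choices(names, weights=weights, k=1)[0]
    return random.choice(pools[chosen]["backends"])

# Deploy the new version to green, verify it, then flip traffic over.
cut_over("green")
```

In a CI/CD pipeline, the `cut_over` step would be an API call to the load balancer after the green pool passes its health checks.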

The market for ADCs is very competitive, with nearly every vendor claiming a performance advantage. Join Avi Networks CTO Ranga Rajagopalan for an extraordinary demo on how you can scale from zero to 1 million SSL transactions per second (TPS) at a fraction of the cost of traditional, appliance-based load balancers.

In this webinar, Ranga will debunk the performance myth commonly perpetuated by vendors of hardware load balancers. He will perform a live test during the webinar using the Google Cloud Platform (GCP). Don't miss this opportunity to learn how to achieve “ludicrous scale without incurring ludicrous costs” for high-volume SSL transactions.

Does the following IT conversation look familiar?
Application Owner: “It takes the network guys too long to provision load balancing services”
Network Team: “Yes, but it is because our hardware load balancers are so inflexible”
CIO: “I am tired of spending hundreds of thousands every few years and it doesn’t even help our future plans”

If that sounds familiar, this webinar is for you. Enterprises seeking elastic application services are choosing an elastic software-defined approach to load balancing. Learn how large multinational enterprises have solved their application delivery challenges by migrating from F5 BIG-IP and Viprion hardware appliances to Elastic Application Service Fabric.

Nathan McMahon from the Avi Networks solution architecture team will present the best practices for migrating from F5 appliances to the Avi Vantage Platform. You will learn how to convert your current configurations and iRules to a simple API-driven software platform.

Traditional ADCs force organizations into proprietary and expensive hardware-based appliances, even as application needs evolve and SLAs to internal customers must improve. This not only increases TCO but also fails to provide a uniform architecture for on-premises and cloud deployments. Before you commit to an expensive multi-year license and maintenance contract, it is important to consider key questions and what the answers will mean for the future of application services in your enterprise.

• Can I provision VIPs in minutes in any data center, private, or public cloud?
• Can I scale my load balancers and my backend application servers on-demand?
• How can I get real-time insights and visibility into application traffic?

Register for this webinar to see other critical considerations and how next-gen ADC architectures can address these challenges head-on. In just 30 minutes, you will learn how a software-defined load balancing architecture can play an expanded role in delivering and scaling your applications across clouds, providing performance insights, and automating application services - all at significant cost savings.

Enterprises are increasingly adopting hybrid and multi-cloud computing strategies to speed up application rollouts, gain operational and cost benefits, and improve business agility. According to RightScale’s “Cloud Computing Trends: 2016 State of the Cloud” survey and report, hybrid cloud adoption increased from 58% to 77% year-over-year. 17% of enterprises now have more than 1000 VMs in the public cloud, up from 13% in 2015. This of course means that businesses need to consider application services that work across data centers and multiple cloud environments.

In just 40 minutes, you will learn how next-gen load balancing architectures can play an expanded role in delivering and scaling your applications across clouds, providing performance insights, and automating application services.

The annual holiday shopping season starting with Black Friday is the litmus test for application availability and performance for online businesses. You want to ensure that end users have the best online experience and that your business does not suffer from unexpected application outages or performance issues. These disruptions are costly, both in lost revenue and in reduced customer satisfaction. With the spotlight on shopping experiences, and with customers and critics using social media to share updates, high-profile outages can also damage the reputation of the business. For network administrators and architects, this means preparing their networks and load balancers to handle demand spikes and to react quickly to unexpected issues.

Many IT administrators and network architects find that despite their best-laid plans, things can go wrong and performance issues can bog down applications.

Learn about the top ways to make sure that your application services are ready to serve your business for the busiest time of the year.

Load balancing and what load balancers can do are undergoing significant changes as enterprises seek to deploy cloud-native applications in data centers and public clouds. According to Gartner, “Application-centric personnel are driving a return to lightweight, disaggregated load balancers, creating challenges and opportunities for I&O leaders.”

Simultaneously, Network Function Virtualization (NFV) is now reaching mainstream enterprises with the Cisco Cloud Services Platform (CSP) 2100. Avi Networks and Cisco CSP integrate to provide a turn-key solution for the rapid deployment of application services such as load balancing, analytics, and autoscaling on an elastic NFV platform, without requiring any additional expertise. The joint solution ensures that administrators can efficiently roll out elastic, high-performance, load balancing and application monitoring capabilities.

Join Avi Networks and Cisco guest speaker Gunnar Anderson to learn how enterprises can now take advantage of software-defined load balancing and NFV quickly and easily.

Load balancers occupy an important position (in the path of application traffic) on the enterprise network. Yet, traditional application delivery controllers (ADCs) are unable to provide meaningful application insights to drive business decisions. Avi Networks’ software-defined architecture for load balancing separates the control plane (management layer) from the data plane (load balancers) to generate continuous insights about applications.

In this presentation from Avi Networks, learn how you can get rich analytics and actionable insights into end-user experience, application performance, resource utilization, security, and anomalous behavior. See how you can benefit from:

• A “Network DVR” to record and replay traffic events to pinpoint app issues
• At-a-glance view of all virtual services throughout the system
• Real-time visibility into DDoS attacks, SSL versions and ciphers used in transactions
• Health scores that give you immediate feedback on application performance

Container-based applications are driving changes in the IT tool chain up and down the stack. In addition to a container orchestration platform, these applications need a full stack of services including load balancing, service proxies, visibility/monitoring, service discovery, and security.

This webinar, jointly presented by Avi Networks and Mesosphere, will uncover strategies for delivering application services and best practices for building, deploying, and managing large Mesos clusters with thousands of nodes.

Computing models in today’s dynamic data centers and clouds are changing dramatically. Application-centric enterprises are finding that they need to develop nimble operational models for infrastructure, networking, and application services. Application delivery controllers (ADCs) are an important part of the networking and application services considerations for software-defined data centers.

New software-defined load balancers are significantly improving the way that application services are delivered and scaled, while freeing IT from repetitive tasks through intelligent automation. Application developers and lines of business are benefiting from better APIs that align with their goals for continuous integration and delivery (CI/CD) and cloud-native applications.

In this webinar, you’ll learn:

– How to eliminate the overprovisioning and overspending typical of traditional hardware-based load balancing solutions
– How to scale not just load balancers but also applications, elastically and predictively, based on real-time traffic patterns
– How to take advantage of x86 servers, VMs, or containers to deliver application services close to individual applications
– The best ways to support multi-cloud deployments and cloud-native applications
– How to troubleshoot applications in minutes with the ability to record and replay traffic events, security data, and client data
– How to accelerate application services for SDN environments such as Cisco ACI, private clouds such as OpenStack, and container-based microservices applications

The move to agile infrastructure and operations is already happening, and it is now reaching critical networking components in the stack such as ADCs.
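As a back-of-the-envelope illustration of elastic scaling driven by real-time traffic, the sketch below computes how many load-balancer instances are needed for an observed request rate, given an assumed per-instance capacity and a headroom margin. All numbers are hypothetical; a real autoscaler would also smooth the traffic signal and rate-limit scale-down.

```python
import math

def instances_needed(observed_rps: float, capacity_per_instance: float,
                     headroom: float = 0.2, min_instances: int = 1) -> int:
    """Instances required to serve observed traffic with spare headroom.

    headroom=0.2 keeps each instance at most 80% utilized, leaving
    room to absorb a spike while new instances spin up.
    """
    usable = capacity_per_instance * (1.0 - headroom)
    return max(min_instances, math.ceil(observed_rps / usable))
```

For example, 10,000 requests/second against instances rated at 5,000 requests/second each yields three instances, not two, because of the 20% headroom reserve.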

Application delivery controllers (ADCs) and legacy load balancers were not designed for the era of the software-defined data center, virtualization, and cloud-based applications. Join industry expert Ashish Shah in this two-part series as we look at the top 7 reasons why enterprises are choosing Avi Networks in VMware and OpenStack environments.