
The EMC Federation of companies has submitted many talks for this summit. Take a look and vote for the ones you believe will be useful. I am especially interested in getting feedback on DefCore and the Product Team.

Organization | Speaker(s) | Session Type (Talk, Demo, Panel) | Track | Title | Description | URL for Voting

1 | EMC | Shamail Tahir & John Griffith | Talk | Cloud Storage | Cinder: Efforts in Cinder to provide quality as well as compatibility

This session will provide an overview of the quality control and assurance procedures being taken in Cinder, specifically through third-party CI. We will explain how the CI systems work and what they test. We'll also talk about why this benefits end users and operators, as well as discuss some of the challenges and lessons learned along the way. This presentation will not be focused on a single vendor or driver, but is meant to discuss various drivers, including the reference LVM driver.

Key take-aways for attendees:
- Understand the development workflow for Cinder (with an emphasis on quality)
- Understand how Cinder tests and ensures interoperability with a large number of storage providers
- Understand the emphasis on compatibility requirements for heterogeneous storage environments in OpenStack/Cinder

This session will explain the multitude of storage options provided by OpenStack. We'll talk about the differences between Object, Block, and Shares, as well as the options available to provide each of them. We'll also explain the differences between persistent and ephemeral storage and how that relates to building and using instances.

Key take-aways for attendees:
- Understand the various storage use cases available in OpenStack
- Highlight the purpose and high-level functionality of the OpenStack storage projects
- Discuss the difference in storage consumption model from traditional IT
- Gain an overview of how the existing OpenStack storage projects can serve your cloud needs
- Walk through a sample application architecture that leverages all storage services as part of its stack

We all know there are two sides to every story, and in this session the theory and reality of highly available OpenStack clouds will be discussed. We will cover concepts from the OpenStack HA Guide that provide active/active and active/passive configuration guidelines (the "theoretical" configuration of OpenStack HA) and walk through HA considerations that Deutsche Telekom had to design and implement from a practical, production-grade OpenStack deployment perspective (the reality). We will also discuss the state (and implications) of a multiple availability zone and region setup.

Key take-aways:
- Learn the tools and components you can leverage to make your cloud highly available
- Practical advice on implementation of HA OpenStack services
- Identify resources available to help with the planning and design elements of your cloud HA strategy
- Recommendations on selecting active/active or active/passive as an HA strategy

The original SOTS was the first end-to-end view of OpenStack as a project. It has over 90,000 views on SlideShare, and that number grows every day. This is the fourth iteration of SOTS, and we will cover the good, bad, and ugly of all integrated and core OpenStack projects. This session is ideal for anyone who is new to OpenStack, who desires to understand "the big picture", or who is trying to get an honest "self-evaluation" of OpenStack as a whole.

Struggling with block storage for your OpenStack deployment? Over a third of OpenStack deployments use an open source distributed block storage solution that is difficult to deploy and maintain, has problematic performance characteristics, and can't deliver on the promise of scale-out storage. EMC, the world's leader in storage, has a number of scale-out storage solutions, including ScaleIO, the product of over a decade of development by experienced storage experts. ScaleIO is a 100% software solution that runs on commodity hardware. It is easy to deploy, maintain, and operate. It is highly performant and proven in the field at scale at over 100PB. ScaleIO's built-in "protection domains" provide smaller fault domains while ensuring performance across the entire cluster.

In this session we will give a deep dive on the ScaleIO technology, show it running on a cluster of 100+ servers doing massive IOPS, and show how easy it is to manage faults by inducing several failures. We will also show how ScaleIO can be easily downloaded directly from the EMC website and evaluated for free. ScaleIO is plug-and-play with OpenStack and may solve some of the most egregious block storage problems you have had to date.

This session will demonstrate how to use EMC ViPR as an option to build a multi-cloud SDS platform capable of handling heterogeneous storage environments. ViPR, when used in conjunction with Cinder, allows your storage platforms to be abstracted in a simplified manner while allowing storage operators to optimize resources through policy-based management.

As OpenStack grows incredibly quickly in popularity, there is an increasing and consistent need for a means to bridge the gap between app developers, operators, and the code itself. In a mainstream business this function is filled by Product Management, whose role it is to understand the needs of customers, work with engineering to build the right product, and then communicate product information back out to product marketers and/or customers using terms they understand.

The issue OpenStack faces today is that virtually any level of exposure to either the technology or the community almost immediately entails deciphering unfamiliar terms and concepts. This means that product managers must also serve as educators and create products and value that customers are willing to pay for, while bridging the gap between the community, engineers, customers, and users.

In this panel, join OpenStack veterans as they share perspectives on OpenStack Product Management. They'll discuss:
- Strategies for turning trunk into productized solutions and offers
- Pros and cons of keeping product releases close to trunk
- Creating an effective feedback loop between developers, operators, and users
- Monetization strategies in the open source era
- What IT organizations need to know and understand to get the most out of OpenStack

This is a must-attend session for product managers and contributors. Contributors and users responsible for the creation and success of OpenStack-focused products should also find benefit.
Panelists:
Niki Acosta – Cisco (Chair)
Andre Beafield – Blue Box Group
Aaron Delp – Solidfire
Jim Haselmaier – Product Management – EMC
Shamail Tahir – Office of the CTO – EMC

One of the most important aspects for many cloud service providers is the ability to measure and predict cloud resource health and consumption. With the adoption of OpenStack on the rise in the Enterprise, it's becoming more and more important to be able to seamlessly integrate with existing tools and products in the marketplace.

As an operator, how will you be able to predict scale-out requirements? How can you ensure availability and reliability? How can you demonstrate chargeback or showback to the various groups who make use of your infrastructure? How do you budget and plan for additional capacity? How does all of this integrate with your existing tools and skill sets?

This talk will focus on the various integration points for monitoring your OpenStack infrastructure. We will be presenting a reference design which covers a layered approach to monitoring, allowing for operation at scale. We will also discuss opportunities where the community can contribute to the advancement of monitoring in OpenStack. You should leave this session with a better understanding of how to extract useful monitoring and telemetry data from your OpenStack deployment in order to operate at scale both efficiently and reliably.

Things you need to know before you containerize your OpenStack deployment

For those starting out, there is a tendency to treat containers like glorified lightweight VMs. When architecting your cloud around Docker, you need to ensure that you adhere to some best practices to really leverage the benefits. This talk will go over some of those best practices and touch upon some of the challenges that you might encounter when deploying OpenStack services in Docker containers.

OpenStack is a very active community. Bursts of change happen quite often, and it can be difficult to keep up if you are not immersed. We need to manage the flow of critical information and decision making just like any other engineering organization. The people that represent the Product Management of OpenStack are a critical group within the community.

The Product Management working group has met a few times, starting in Paris. This group has organized itself around three initial activities: gathering the current state of the OpenStack projects, defining what the roadmap could look like, and working with the Cross Project team. The user stories from the Win the Enterprise working group will be used alongside the project needs.

Join us to discuss what we have so far, and let's debate where the OpenStack community should be going.

This all-star panel will discuss and debate whether containers are a threat to OpenStack. The discussion will be recorded as a "Speaking in Tech" podcast, which is distributed by Europe's largest tech publication, The Register.

Topics that will be discussed include:
- Can containers replace OpenStack in the enterprise?
- Where do Kubernetes and Mesos compete, and where are they complementary?
- What role does Docker have in OpenStack?
- Is it practical to use a combination of Kubernetes and Docker to completely replace OpenStack and KVM?

OpenStack is complicated, and explaining it to app developers and other consumers of the services it delivers can be a tough, thankless and often counterproductive task. But if you don't cover the basics, small misunderstandings can bloom into major headaches as you move to production.

Operators of OpenStack clouds have learned a lot about what app developers and other consumers of OpenStack services need to know about the project. In this session, we'll discuss the five major tripwires that can put the best-laid deployments flat on their faces. We'll look at the biggest landmines, including what to explain — and what to avoid — regarding governance and the integrated release cycle. We'll also examine how to talk with new users about the best way to become engaged in the community, without scaring them away from open source altogether. We'll also talk about how to make an honest assessment of their engineering chops and their appetite for getting into the weeds with OpenStack.

These lessons from the trenches will be presented by five people who design, deploy, operate and consult on OpenStack with demanding customers. Come with an appetite for a reality check, and leave with a critical assessment of how to bring your users into OpenStack with their eyes open.

With the almost limitless storage configuration options in OpenStack, architecting, operating, and troubleshooting can be daunting. In this session, we'll cover configuration best practices, operational tips, and troubleshooting techniques with real-world examples. We'll also discuss the various storage projects in OpenStack (Cinder, Swift, and Manila), how EMC is contributing to them, and how we are integrating our storage products.

While there is a lot of information available about the various OpenStack deployment options, there's surprisingly little about what to do once you have your OpenStack environment up and running. In this session we'll talk about all of the hot-button OpenStack operational issues: high availability, upgrades, monitoring, troubleshooting, and more!

Monitoring and alerting: two things that everyone, operator to CTO, can agree are critical parts of any production deployment. In this presentation we’ll discuss the different generations of monitoring technologies – where we’ve been and where we’re going – and give a high-level overview of the current efforts and difficulties within the OpenStack ecosystem. We’ll talk about the importance of the shift away from polled service checks towards ‘push metrics’ and active telemetry, and present some concept designs for some seriously cool operator / administrator features that will be made possible in the near future.

HeliosBurn is an out-of-the-box REST fault injection platform that captures and modifies HTTP/S traffic. It implements man-in-the-middle interception using self-signed certificates to be able to intercept and interpret HTTPS traffic.

The purpose of HeliosBurn is to provide developers with a tool that injects failures into REST APIs so that developers can verify the stability and resilience of their applications and identify and prevent failures before deploying them into a production environment.

HeliosBurn lets users create custom rules to match the target REST traffic, or they can benefit from the preset rules for common cloud services, including OpenStack Swift and Nova. Upon a match, users are able to apply actions such as modifying any HTTP information (i.e., headers, URL, status code, payload), responding on behalf of the server, delaying the request or response, or dropping the connection.

It is designed with an extendable modular architecture that enables third-party developers to add new modules with custom functionality. HeliosBurn is managed through a friendly web dashboard that allows users to tweak any aspect of the platform and observe the HTTP traffic going back and forth. In addition, HeliosBurn provides a full-featured API for developers to create custom clients and libraries.

HeliosBurn is shipped both as a VM and as a Docker microservice, making it really easy to deploy. Depending on the need, it can be placed on a standalone server, in a virtual machine, or co-located with the web server or client application. As an open-source project, HeliosBurn welcomes and encourages any kind of collaboration from the community.

The presentation will provide a beginner overview related to Application Modernization that will focus on 4 lifecycle processes:
- Planning Phase
- Design Phase
- Build Phase
- Run Phase

Within the lifecycle processes, 3 architectural concerns will be outlined:
- Application Alignment: the process of identifying applications that are most critical to the business, along with their technical and business value.
- Suitability and Selection: identifying whether or not OpenStack is the right fit for supporting certain types of applications, based on common workload characteristics and sizing deployments.
- Modernize and Migrate: for Modernize, an explanation will be provided on how to optimize applications to run on OpenStack. For Migrate, discussion points will include what can be moved, what can't be moved, and supporting use cases.

The discussion will conclude with insights on how leveraging the 4 lifecycle processes can help plan application development for OpenStack.

This session will provide an overview of the Reference Architecture developed to enable high-performance Cinder block storage using EMC block storage systems, such as VNX, XtremIO and ScaleIO. We will explain how the Cinder integration with EMC storage systems works, and what additional storage capabilities they offer. We'll also talk about how this reference architecture benefits admins and operators, as well as discuss some of the challenges and lessons learned along the way. This presentation will be focused on several distributions and drivers, with the emphasis on the reference architecture and best practices.

Key take-aways for attendees:
- Understand how the Cinder integration workflow with EMC storage systems works (with an emphasis on functional testing)
- Understand the specific challenges of using different storage protocols and ensuring interoperability with EMC storage systems
- Understand the benefits of creating heterogeneous storage environments in OpenStack/Cinder

So you’re going to use Neutron plus an SDN overlay? Or perhaps just simple VLANs? Regardless of which way you go, it turns out there is already a set of well understood best practices for building scalable networks. In this session folks who have built scalable networking for large OpenStack deployments will walk you through the dos and don’ts of networking. Why layer-3 networking is your friend, how OSPF and BGP work together, and why everyone loves a spine/leaf networking architecture. We’ll give real world examples of networks we have built, including one that handled the load for a major retailer during Black Friday 2014.

Whether you are a newbie to OpenStack looking at building your first cloud or an experienced operator with years of OpenStack success behind you, you've probably spent some time wondering what to expect from the OpenStack project over the next several releases. Will it finally support that new capability you've been waiting for? Should you plan for an upgrade in the next 6 months?

While the development community is always working on and planning new features, it takes a lot of time on IRC to get a complete view across the different projects. The OpenStack Product WG spent time this cycle working with the project teams and PTLs to understand their priorities for the next several OpenStack releases. In this session, we'll present our findings across the different projects in an effort to give users a glimpse into the OpenStack roadmap.

Join our panel talk about the community status report and meet the OpenStack Ambassadors. They connect the user groups to the Foundation, and help initialize the groups and guide them to grow.

Review of some actions the Ambassadors launched during the last release cycle:
- OpenStack community report
  - What is the size of the community?
  - Global and regional trends
  - Introduce new groups and leaders
- Official group process
  - Official groups
  - Process
  - Examples of user group help
- Groups portal
  - Overview
  - Results
- Welcome pack and OpenStack shop
- Q&A

As OpenStack continues to grow, Enterprises are beginning to explore and to implement OpenStack as their Cloud platform of choice. Often, these companies have existing investments and expertise with VMware technologies. In order to prepare for this new world, these people who are familiar with VMware concepts and terminology will need to understand the parallel concepts and terminology in OpenStack. This session will be valuable for anyone who needs a better grasp of how to talk about both VMware and OpenStack in an enterprise context.

Most enterprise customers are transitioning or augmenting their IT strategies with OpenStack. In this session, we'll discuss how to repurpose, or leverage, your existing IT investments in your OpenStack project. We will discuss how workloads may influence which assets to leverage, how to "pilot" OpenStack, and how to start using OpenStack with the minimal amount of net new investment.

In this session, we will cover how Congress can be leveraged by your organization to ensure compliance and policy adherence in your OpenStack cloud. The example governance scenario will show how to set and monitor policies for compute, network, and storage.

DefCore is a recently formed set of criteria that identifies which products, providers, and solutions meet the requirements to use the OpenStack mark. In this session, we will explore the considerations and implications from one vendor's perspective as they begin to assess their own readiness under this new program. We will also discuss why OpenStack cloud operators can benefit from validating their implementation against DefCore using a tool called RefStack, and how this initiative will help the compatibility of OpenStack clouds in the long term.

This committee was formed during the OpenStack Icehouse Summit in Hong Kong by Board resolution on 11/4. DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled "OpenStack." Our mission is to define "OpenStack Core" as chartered by the by-laws and guided by Governance/CoreDefinition.

What has DefCore done so far? Who is involved with DefCore? What changes are planned around OpenStack branding in 2015? How will DefCore change OpenStack in general?

We want to energize an OpenStack user group near you! To do that, we will be using the community training project, which is already happening at the Tokyo and San Francisco user groups. Coming off the Paris summit, the community training guides project is focusing on the user groups as their main audience. While there are paid OpenStack training programs available, the OpenStack Training Guides project aims to teach the basics of OpenStack through the user groups.

In this talk, we will describe the Training Guides project objectives: an OpenStack training cluster, training content delivered mostly as HTML slides, example scenarios and use cases, and quizzes. We will walk through our successes to date delivering training through the user groups.

In this session you will learn about the Neutron plugin that VMware has developed and released to the community. This plugin supports basic and advanced Neutron workflows and leverages the NSX vSphere solution for added flexibility and scalability in your OpenStack cloud. Logical switching, logical routing, and distributed firewalling are all NSX services that can be consumed by Neutron and exposed to your cloud tenants.

Do you want to learn and use OpenStack APIs? Do you just want to get hands-on experience using Heat templates or Neutron networking? Want to learn how OpenStack integrates with and runs on VMware technologies such as vSphere and NSX? Is there an architecture you can check out to see what components are needed to run OpenStack in production (message queues, Memcache, DBs, load balancer, etc.)? Curious how you would monitor and troubleshoot your OpenStack deployment? Whether you are curious to learn about OpenStack in general or how it works on VMware, this hosted lab gives you the perfect opportunity to learn all aspects of OpenStack.

As OpenStack continues to grow, Enterprises are beginning to explore and to implement OpenStack as their Cloud platform of choice. Often, these companies have existing investments and expertise with VMware technologies. In order to prepare for this new world, these people who are familiar with VMware concepts and terminology will need to understand the parallel concepts and terminology in OpenStack. This session will be valuable for anyone who needs a better grasp of how to talk about both VMware and OpenStack in an enterprise context.

33 | VMware | Eric Lopez, Aaron Rosen, Janet Yu | HOL | Hands On Lab | OpenStack Networking Introduction Hands-on Lab

This session is an introduction to OpenStack Networking for new users. Users will be provided access to a live OpenStack environment with Neutron set up. We will walk through the key Neutron deployment use cases with members of the Neutron core development team available to provide guidance and answer questions.

Demonstrated features will include:
- Creation of tenant networks using overlay tunnels
- Configuration of external connectivity
- Advanced Neutron features, including support for overlapping IPs, L3 + NAT usage via logical routers, Firewall as a Service, Load Balancer as a Service, VPN as a Service, IPv6, and more!

We will incorporate lessons learned from presenting this session at previous OpenStack Summits and also include new Neutron capabilities introduced in the Kilo release.
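For readers who want a feel for what the lab exercises drive under the hood, here is a minimal sketch of the JSON request bodies that Neutron's v2.0 REST API expects when creating a tenant network and attaching a subnet to it. The helper names and values below are illustrative, not lab material; a real client such as python-neutronclient would POST these bodies to /v2.0/networks and /v2.0/subnets.

```python
# Illustrative sketch: the request bodies Neutron's v2.0 API expects
# for the basic "create a tenant network" workflow.

def network_body(name, admin_state_up=True):
    # Body for POST /v2.0/networks
    return {"network": {"name": name, "admin_state_up": admin_state_up}}

def subnet_body(network_id, cidr, ip_version=4):
    # Body for POST /v2.0/subnets; network_id comes from the
    # network-create response and is a placeholder here.
    return {"subnet": {"network_id": network_id,
                       "cidr": cidr,
                       "ip_version": ip_version}}

net = network_body("demo-net")
sub = subnet_body("<network-uuid-from-create-response>", "10.0.0.0/24")
print(net)
print(sub)
```

In the lab itself these calls happen behind the Horizon dashboard or the CLI, but seeing the raw payloads makes it clearer what "creating a tenant network" actually asks Neutron to do.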

This session is an introduction to OpenStack Networking for operators. Users will be provided access to a live OpenStack environment to install and configure OpenStack Networking (Neutron). We will walk through configuration of Neutron with the ML2 plugin via Open vSwitch (OVS) and L3 services with Open Virtual Network (OVN).

Demonstrated features will include:
- Interaction with other OpenStack components (Compute & Storage)
- Configuration of metadata services and DHCP services
- Designing Neutron for HA
- Troubleshooting Neutron

This session highlights how the environment is configured for the OpenStack Networking Hands-on Lab at previous OpenStack Summits.

Congress is an OpenStack project that provides policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures. In this lab, users will get access to a live OpenStack setup with Congress already installed, walk through several key Congress deployment use cases, and get hands-on experience working with Congress. Users will write policies that interface with several OpenStack projects (Neutron, Glance, Nova, Keystone, Cinder, Murano), understand how the policy language works, and see how one can tame their cloud with Congress.
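To give a flavor of the policy language ahead of the lab, here is a hypothetical Congress Datalog rule; the table and column layout is illustrative only (the real data-source tables and their columns are defined by the Congress drivers installed in the lab environment). It flags an error for any server attached to a network the operator has classified as insecure:

```
error(vm) :-
    nova:servers(vm, name, status),
    neutron:ports(port, vm, net),
    insecure_network(net)
```

Here `insecure_network` would be a table the operator populates, and `error` is the table Congress continuously recomputes as the cloud changes, so that every matching row can be reported on or acted upon.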

Organizations are using OpenStack to increase the delivery speed of Infrastructure as a Service (IaaS) capability for a number of reasons, including self-service and programmatic consumption of infrastructure services, reducing delivery of these services from weeks to seconds.

VMware's internal OpenStack platform is one of the largest multi-hypervisor environments in production and is currently utilized by engineering, support, and sales teams to provide demonstration, training, product development, customer support, and partner integration. Originally built to serve moderate demands, it has grown to serve more than 300 hypervisors, 6,000 VMs, 20,000 logical switch ports, and 2,000 logical routers.

In this session, we will discuss the lessons learned in running a multiple-hypervisor environment based on KVM and vSphere, empowering the business by increasing their efficiency and ability to innovate whilst retaining flexibility in the infrastructure used to provide these services.

Deploying applications is hard to get right. It requires gathering information from many different resources (e.g. the application itself, the infrastructure, the other applications already deployed), and making technical and business decisions about where and how to deploy the app while satisfying the multitude of business/infrastructure/application policies that govern the deployment process.

In this talk, we describe an integration of Murano and Congress that eases the burden of policy-governed application deployment. This integration ensures that application deployment done through Murano complies with the policy expressed in Congress, from initialization all the way through to final deployment. In this session we demonstrate how to define policy with Congress and how policy is enforced within Murano during application fulfillment, culminating in a live demo.

Currently OpenStack does little to help telcos optimize their workloads for energy consumption, cost, and speed. Today, operators must manually (or via scripts) provision, migrate, and decommission workloads to achieve the desired balance of energy/cost/speed, and they must do so repeatedly.

In this talk, we describe an open architecture for automating resource optimization, where operators provide a policy describing how workloads ought to be optimized, and OpenStack continually monitors and migrates workloads to satisfy that policy. Under this architecture, operators give their policy to Congress (the not-yet-incubated OpenStack project for Policy as a Service), and Congress continually enforces that policy by migrating workloads as appropriate. In addition to discussing the architecture, we demo a proof-of-concept implementation where Congress migrates real VMs via Nova in response to changes in datacenter readings reported by Ceilometer.

Policy has quickly become a hot topic in cloud management and orchestration. As OpenStack clouds expand, penetrate the enterprise, and evolve with technologies such as containers, policy-based solutions for capturing user intent, automating management and security, and ensuring governance and compliance for applications have emerged as a critical area for development.

This panel will explore emerging trends and projects in policy developing in the OpenStack and OpenDaylight communities. It will discuss a number of topics, including:
- What is meant by "policy" in the context of OpenStack? Is there a "right" approach?
- Why is policy important? What are the key use cases?
- What projects and capabilities are present in OpenStack today?
- How will policy fit with existing OpenStack components?

VMware is now serving a plentiful menu of OpenStack delicacies, bound to satisfy the appetite of a wide range of customers. Whether you fancy a small-to-mid-sized prescriptive OpenStack deployment drizzled atop your existing VMware-based technologies in a matter of minutes, or you have an intense craving for a highly customized large OpenStack deployment, VMware's got you covered.

In this meal we'll sample VMware's OpenStack menu, which ranges from a click-and-go, out-of-the-box integrated OpenStack distribution to a highly customized, made-to-order OpenStack masterpiece. We'll dive into the ingredients of these recipes to better understand VMware's common OpenStack reference architecture, how the solution is deployed and operated, and how VMware offers an OpenStack-based Software Defined Data Center (SDDC) solution for cloud appetites of all sizes. For dessert, we'll indulge in some details on the custom integration between OpenStack and VMware technologies, making your OpenStack meal service a pleasant and affordable experience.

By the end of this feast you should walk out fully satisfied, with an understanding of how VMware cooks a delicious dish of OpenStack to suit any occasion you may have.

It is well known that Nova works with the VMware hypervisor. Yet, there is quite some confusion around how Nova integrates with VMware ESXi. Does Nova interact directly with ESXi or with the vCenter Server? Is Nova capability X supported when using the VMware hypervisor? Can I take advantage of ESXi/vCenter Server's feature Y from Nova? When should an admin use Horizon, and when should one use the vCenter client? This talk will mitigate such confusion by digging into the nuts and bolts of the integration between Nova and the VMware hypervisor, with the help of a demo that will also showcase how some of the advanced ESXi/vCenter Server features can be leveraged from Nova.

Congress is an OpenStack project that provides policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures. In this lab, users will get access to a live OpenStack setup with Congress already installed, walk through several key Congress deployment use cases, and get hands-on experience working with Congress. Users will write policies that interface with several OpenStack projects (Neutron, Glance, Nova, Keystone, Cinder, Murano), understand how the policy language works, and see how one can tame their cloud with Congress.

Cloud Foundry is an open source cloud computing platform as a service (PaaS) that allows users to deploy and scale their applications easily. The platform provides several as-a-service features, such as Redis, MySQL, and MongoDB, that application developers can leverage in their applications. Congress integrates with Cloud Foundry and allows security teams to audit and define policies over specific applications. For example, a security team could define a policy saying applications deployed in production require HTTPS or a specific autoscaling configuration. In this talk we'll give an overview of Congress and demo this integration.

Cloud Foundry on OpenStack Hands-on: It's what's on the Stack that matters!

The goal of both the OpenStack and Cloud Foundry foundations is to build open source software and communities. The software is intended for public, private, and managed clouds. Cloud Foundry runs on a variety of IaaS platforms, including OpenStack. This hands-on session will cover the basics of Cloud Foundry with OpenStack. We will discuss the motivation for a PaaS and how it complements IaaS in general and OpenStack in particular. Attend this session for a quick technical overview of Cloud Foundry and to learn how to deploy a variety of apps on the platform that handle requirements such as HA, scaling, logging, monitoring, and debugging. First, we deploy Cloud Foundry on OpenStack and look at how to leverage BOSH to manage OpenStack VM instances. Then we’ll dive into a hands-on lab showing the use of the PaaS to deploy a wide variety of applications and microservices to an OpenStack cloud. This session will be of value to developers, devops engineers, systems administrators, and IT decision makers. After attending this session you should walk away with a good understanding of Cloud Foundry and how it complements an IaaS like OpenStack. We will cover the following topics as short exercises:
– Installing Cloud Foundry and OpenStack running on a public cloud
– Managing a Cloud Foundry install and system administration
– Cloud Foundry internals and concepts
– cf push sample application
– High availability
– Multiple exercises involving the following:
– Scaling, including autoscaling
– Hooking up MySQL and other services
– User-provided services and logging
– Security groups
– Tying things together
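The cf push and scaling exercises above can be sketched as a short CLI session. This is a hedged example: the app name, API endpoint, and service name are placeholders, and it assumes the `cf` command-line client is installed and the platform exposes a MySQL service:

```
cf login -a https://api.cf.example.com    # log in to the Cloud Foundry API endpoint
cf push myapp -m 512M -i 2                # deploy with 512 MB memory and 2 instances
cf scale myapp -i 4                       # scale out to 4 instances
cf bind-service myapp my-mysql            # bind a MySQL service instance
cf restage myapp                          # restage so the binding takes effect
cf logs myapp --recent                    # inspect recent application logs
```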

Find the EMC Federation summit talks, as submitted and with current information, below. Respond with comments and/or reach out to the speaker directly. There will be a new post once summit talk voting starts.

Speaker(s)

Session Type (Talk, Demo, Panel)

Track

Title

Description (once submitted link to right, treat below as read only)

URL for Submission

1

Shamail Tahir & John Griffith

Talk

Cloud Storage

Cinder: Efforts in Cinder to provide quality as well as compatibility
Proposed Track: Storage

This session will provide an overview of the quality control and assurance procedures being taken in Cinder, specifically through third party CI. We will explain how the CI systems work, and what they test. We’ll also talk about why this benefits the end users and operators as well as discuss some of the challenges and learnings encountered along the way. This presentation will not be focused on a single Vendor or Driver, but is meant to discuss various drivers including the reference LVM driver.
Key Take-aways for Attendees:
Understand the development workflow for Cinder (with an emphasis on quality)
Understand how Cinder tests and ensures interoperability with a large number of storage providers
Understand the emphasis on compatibility requirements for heterogeneous storage environments in OpenStack/Cinder

This session will explain the multitude of storage options provided by OpenStack. We’ll talk about the differences between Object, Block and Shares as well as the options available to provide each of them. We’ll also explain the differences between persistent and ephemeral storage and how that relates to building and using Instances.
Key Take-aways for Attendees:
Understand the various storage use-cases available in OpenStack
Highlight the purpose and high-level functionality of the OpenStack Storage Projects
Discuss the difference in storage consumption model from traditional IT
Gain an overview of how the existing OpenStack storage projects can serve your cloud needs
Walk through a sample application architecture that leverages all storage services as part of its stack.

We all know there are two sides to every story, and in this session the theory and reality of highly available OpenStack clouds will be discussed. We will cover concepts from the OpenStack HA Guide that provide active/active and active/passive configuration guidelines (the “theoretical” configuration of OpenStack HA) and walk through HA considerations that Deutsche Telekom had to design and implement from a practical, production-grade OpenStack deployment perspective (the reality). We will also discuss the state (and implications) of a multiple availability zone and region setup.
Key Take-Aways:
Learn the tools and components you can leverage to make your cloud highly-available
Practical advice on implementation of HA OpenStack services
Identify resources available to help with the planning and design elements of your cloud HA strategy
Recommendations on selecting active/active or active/passive as a HA strategy

The original SOTS was the first end-to-end view of OpenStack as a project. It has over 90,000 views on SlideShare, and that number is growing every day. This is the fourth iteration of SOTS, and we will cover the good, bad, and ugly of all integrated and core OpenStack projects. This session is ideal for anyone who is new to OpenStack, who desires to understand “the big picture”, or who is trying to get an honest “self-evaluation” of OpenStack as a whole.

Struggling with block storage for your OpenStack deployment? Over a third of OpenStack deployments use an open source distributed block storage solution that is difficult to deploy and maintain, has problematic performance characteristics, and can’t deliver on the promise of scale-out storage. EMC, the world’s leader in storage, has a number of scale-out storage solutions, including ScaleIO, the product of over a decade of development by experienced storage experts. ScaleIO is a 100% software solution that runs on commodity hardware. It is easy to deploy, maintain, and operate. It is highly performant and proven in the field at scale at over 100PB. ScaleIO’s built-in “protection domains” provide smaller fault domains while ensuring performance across the entire cluster. In this session we will give a deep dive on the ScaleIO technology, show it running on a cluster of 100+ servers doing massive IOPS, and show how easy it is to manage faults by inducing several failures. We will also show how ScaleIO can be easily downloaded directly from the EMC website and evaluated for free. ScaleIO is plug-and-play with OpenStack and may solve some of the most egregious block storage problems you have had to date.

This session will demonstrate how to use EMC ViPR as an option to build a multi-cloud SDS platform capable of handling heterogeneous storage environments. ViPR, when used in conjunction with Cinder, allows your storage platforms to be abstracted in a simplified manner while allowing storage operators to optimize resources through policy-based management.

As OpenStack grows incredibly quickly in popularity, there is an increasing and consistent need for a means to bridge the gap between the populations using and consuming OpenStack and the myriad of technologies and details that must be navigated when developing, producing, and delivering OpenStack. In a mainstream business this function is filled by Product Management: understand the needs of customers, work with Engineering to build the right product, and then communicate product information back to customers using terms they understand. The issue OpenStack faces today is that virtually any level of exposure to either the technology or the community almost immediately entails deciphering terms and concepts such as GitHub, repositories, trunk, open source, forks, CI testing, etc. These terms are second nature to virtually everyone currently involved in the community, but are new and potentially intimidating to others. This situation is producing unique challenges for three distinct groups:
The OpenStack Community: How do the people writing software for OpenStack best learn the needs of those that will be using it – when many of these prospective users do not wish to write OpenStack software themselves?
OpenStack Vendors: How do they manage the technology coming out of the OpenStack community and turn it into products that meet their customers’ needs?
Organizations Using OpenStack: With OpenStack’s new technology and cloud model (which often requires enterprises to behave in new ways), how do IT organizations change how they relate to and serve their user base so they can receive the full benefit of this new environment?
This panel discussion will explore these dynamics and provide practical perspectives on how these user/technology gaps can be addressed.

One of the most important aspects for many cloud service providers is the ability to measure and predict cloud resource health and consumption. With the adoption of OpenStack on the rise in the enterprise, it’s becoming more and more important to be able to seamlessly integrate with existing tools and products that exist in the marketplace. As an operator, how will you be able to predict scale-out requirements? How can you ensure availability and reliability? How can you demonstrate chargeback or showback to the various groups who make use of your infrastructure? How do you budget and plan for additional capacity? How does all of this integrate with your existing tools and skill sets? This talk will focus on the various integration points for monitoring your OpenStack infrastructure. We will present a reference design which covers a layered approach to monitoring, allowing for operation at scale. We will also discuss opportunities where the community can contribute to the advancement of monitoring in OpenStack. You should leave this session with a better understanding of how to extract useful monitoring and telemetry data from your OpenStack deployment in order to operate at scale both efficiently and reliably.

Whether you are a newbie to OpenStack looking at building your first cloud or an experienced operator with years of OpenStack success behind you, you’ve probably spent some time wondering what to expect from the OpenStack project over the next several releases. Will it finally support that new capability you’ve been waiting for? Should you plan for an upgrade in the next 6 months? While the development community is always working on and planning new features, it takes a lot of time on IRC to get a complete view across the different projects. The OpenStack Product WG spent time this cycle working with the project teams and PTLs to understand their priorities for the next several OpenStack releases. In this session, we’ll present our findings across the different projects in an effort to give users a glimpse into the OpenStack roadmap.

Things you need to know before you containerize your OpenStack deployment

For those starting out, there is a tendency to treat containers like glorified lightweight VMs. When architecting your cloud around Docker, you need to ensure that you adhere to some best practices to really leverage the benefits. This talk will go over some of those best practices and touch upon some of the challenges that you might encounter when deploying OpenStack services in Docker containers.

11

Sean Roberts, Allison Randal, Rob Hirschfeld, Stefano Maffulli

Talk

State of OpenStack Product Management

OpenStack is a very active community. Bursts of change happen quite often, and it can be difficult to keep up if you are not immersed. We need to manage the flow of critical information and decision making just like any other engineering organization. The people that represent the product management of OpenStack are a critical group within the community. The Product Management working group has met a few times, starting in Paris, and has organized itself around three initial activities: gathering the current state of the OpenStack projects, defining what the roadmap could look like, and working with the Cross Project team. The user stories from the Win the Enterprise working group will be used alongside the project needs. Join us to discuss what we have so far and to debate where the OpenStack community should be going.

12

Sean Roberts, Rob Hirschfeld, Egle Sigler, Alan Clark

Panel

DefCore, Tempest

DefCore 2015

This committee was formed during the OpenStack Icehouse Summit in Hong Kong by Board resolution on 11/4. DefCore sets base requirements by defining 1) capabilities, 2) code, and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled “OpenStack.” Our mission is to define “OpenStack Core” as chartered by the by-laws and guided by Governance/CoreDefinition. What has DefCore done so far? Who is involved with DefCore? What changes are planned around OpenStack branding in 2015? How will DefCore change OpenStack in general?

Join our panel for a community status report and to meet the OpenStack Ambassadors. They connect the user groups to the Foundation, help initialize new groups, and guide them as they grow.
Review of some actions the Ambassadors launched during the last release cycle:
– OpenStack community report
– What is the size of the community?
– Global and regional trends
– Introduce new groups, leaders
– Official group process
– Official groups
– Process
– Examples of User group help
– Groups portal
– Overview
– Results
– Welcome pack and OpenStack shop
– Q&A

We want to energize an OpenStack user group near you! To do that, we will be using the community training project, which is already happening at the Tokyo and San Francisco user groups. Coming off the Paris summit, the community training guides project is focusing on the user groups as its main audience. While there are paid OpenStack training programs available, the OpenStack Training Guides project aims to teach the basics of OpenStack through the user groups. In this talk, we will describe the Training Guides project objectives: an OpenStack training cluster, training content delivered mostly as HTML slides, example scenarios and use cases, and quizzes. We will walk through our successes to date delivering training through the user groups.

With the almost limitless storage configuration options in OpenStack, architecting, operating, and troubleshooting can be daunting. In this session, we’ll cover configuration best practices, operational tips, and troubleshooting techniques with real-world examples. We’ll also discuss the various storage projects in OpenStack (Cinder, Swift, and Manila) and how EMC is contributing to them, as well as how we are integrating our storage products.

While there is a lot of information available about the various OpenStack deployment options, there’s surprisingly little about what to do once you have your OpenStack environment up and running. In this session we’ll talk about all of the hot-button OpenStack operational issues: high availability, upgrades, monitoring, troubleshooting, and more!

Monitoring and alerting: two things that everyone, operator to CTO, can agree are critical parts of any production deployment. In this presentation we’ll discuss the different generations of monitoring technologies – where we’ve been and where we’re going – and give a high-level overview of the current efforts and difficulties within the OpenStack ecosystem. We’ll talk about the importance of the shift away from polled service checks towards ‘push metrics’ and active telemetry, and present some concept designs for some seriously cool operator / administrator features that will be made possible in the near future.

Helios Burn is an out-of-the-box REST fault-injection platform that captures and modifies HTTP/S traffic. It implements man-in-the-middle interception using self-signed certificates to be able to intercept and interpret HTTPS traffic. The purpose of Helios Burn is to provide developers with a tool that injects failures into REST APIs so that developers can verify the stability and resilience of their applications and identify and prevent failures before deploying them into a production environment. HeliosBurn lets users create custom rules to match the target REST traffic, or they can benefit from preset rules for common cloud services, including OpenStack Swift and Nova. Upon a match, users are able to apply actions such as modifying any HTTP information (i.e., headers, URL, status code, payload), responding on behalf of the server, delaying the request or response, or dropping the connection. It is designed with an extensible modular architecture that enables third-party developers to add new modules with custom functionality. HeliosBurn is managed through a friendly web dashboard that allows users to tweak any aspect of the platform and observe the HTTP traffic going back and forth. In addition, HeliosBurn provides a full-featured API for developers to create custom clients and libraries. HeliosBurn is shipped both as a VM and as a Docker microservice, making it really easy to deploy. Depending on the need, it can be placed on a standalone server, in a virtual machine, or co-located with the web server or client application. As an open-source project, HeliosBurn welcomes and encourages any kind of collaboration from the community.

Most enterprise customers are transitioning or augmenting their IT strategies with OpenStack. In this session, we’ll discuss how to repurpose, or leverage, your existing IT investments in your OpenStack project. We will discuss how workloads may influence which assets to leverage, how to “pilot” OpenStack, and how to start using OpenStack with the minimal amount of net new investment.

In this session you will learn about the Neutron plugin that VMware has developed and released to the community. This plugin supports basic and advanced Neutron workflows and leverages the NSX vSphere solution for added flexibility and scalability in your OpenStack cloud. Logical switching, logical routing, and distributed firewalling are all NSX services that can be consumed by Neutron and exposed to your cloud tenants.

21

Marcos

HOL

Hands on Lab

Guided Lab for Learning All Aspects of OpenStack

Do you want to learn and use OpenStack APIs? Do you just want to get hands-on experience using Heat templates or Neutron networking? Want to learn how OpenStack integrates with and runs on VMware technologies such as vSphere and NSX? Looking for a reference architecture that shows all the components needed to run OpenStack in production (message queues, Memcached, databases, load balancers, etc.)? Curious how you would monitor and troubleshoot your OpenStack deployment? Whether you are curious to learn about OpenStack itself or how it works on VMware, this hosted lab gives you the perfect opportunity to learn all aspects of OpenStack. (Note: merge this with the hands-on lab for broader OpenStack + VMware.)

22

Dan W

Talk

IT strategies

Unicorn Stack

23

Dan W

Talk

IT strategies

OpenStack for VMware Admins

As OpenStack continues to grow, enterprises are beginning to explore and implement OpenStack as their cloud platform of choice. Often, these companies have existing investments and expertise with VMware technologies. To prepare for this new world, people who are familiar with VMware concepts and terminology will need to understand the parallel concepts and terminology in OpenStack. This session will be valuable for anyone who needs a better grasp of how to talk about both VMware and OpenStack in an enterprise context.

24

Eric Lopez, Aaron Rosen, Janet Yu

HOL

Hands On Lab

OpenStack Networking Introduction Hands-on Lab

This session is an introduction to OpenStack Networking for new users. Users will be provided access to a live OpenStack environment with Neutron set up. We will walk through the key Neutron deployment use cases with members of the Neutron core development team available to provide guidance and answer questions. Demonstrated features will include:
– Creation of tenant networks using overlay tunnels
– Configuration of external connectivity
– Advanced Neutron features, including support for overlapping IPs, L3 + NAT usage via logical routers, Firewall as a Service, Load Balancer as a Service, VPN as a Service, IPv6, and more!
We will incorporate lessons learned from presenting this session at previous OpenStack Summits and include new Neutron capabilities introduced in the Kilo release.
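The tenant-network and external-connectivity steps above map onto a handful of Kilo-era Neutron CLI calls. The following is a rough sketch: the network and router names are placeholders, and `ext-net` is assumed to already exist as the external network.

```
neutron net-create demo-net                                    # tenant network (overlay)
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet  # attach a subnet
neutron router-create demo-router                              # logical router
neutron router-interface-add demo-router demo-subnet           # plug the subnet in
neutron router-gateway-set demo-router ext-net                 # external connectivity via NAT
```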

25

Eric Lopez, Aaron Rosen, Janet Yu

HOL

Hands On Lab

OpenStack Networking Advanced Hands-on Lab

This session is an introduction to OpenStack Networking for operators. Users will be provided access to a live OpenStack environment in which to install and configure OpenStack Networking (Neutron). We will walk through configuration of Neutron with the ML2 plugin via Open vSwitch (OVS) and L3 services with Open vSwitch Virtual Networking (OVN). Demonstrated features will include:
– Interaction with other OpenStack components (compute and storage)
– Configuration of metadata services and DHCP services
– Designing Neutron for HA
– Troubleshooting Neutron
This session highlights how the environment is configured for the OpenStack Networking hands-on lab at previous OpenStack Summits.

26

Eric Lopez, Aaron Rosen

HOL

Hands On Lab

Congress

Congress is an OpenStack project that provides policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures. In this lab, users will get access to a live OpenStack setup with Congress already installed and will walk through several key Congress deployment use cases, getting hands-on experience working with Congress. Users will write policies that interface with several OpenStack projects (Neutron, Glance, Nova, Keystone, Cinder, Murano), learn how the policy language works, and see how Congress can tame their cloud.

27

Talk

Community

Lessons from the San Francisco OpenStack User Group

28

Tim Hinrichs

Talk

Related OSS Projects

State of Congress

29

Jay Jahns

Talk

Operations

Practical Lessons from real world Multi-Hypervisor deployments

30

Ryan Hsu

Talk

How to Contribute

How to run and maintain a third-party OpenStack community CI

31

Tim Hinrichs, Serg Melikyan

Talk

Operations

Governing (Murano) Application Deployment with (Congress) Policy

Deploying applications is hard to get right. It requires gathering information from many different resources (e.g., the application itself, the infrastructure, the other applications already deployed) and making technical and business decisions about where and how to deploy the app while satisfying the multitude of business, infrastructure, and application policies that govern the deployment process. In this talk, we describe an integration of Murano and Congress that eases the burden of policy-governed application deployment. This integration ensures that application deployment done through Murano complies with the policy expressed in Congress, from initialization all the way through to final deployment. In this session we demonstrate how to define policy with Congress and how policy is enforced within Murano during application fulfillment, culminating in a live demo.

32

Tim Hinrichs, Ramki Krishnan

Talk

Telco Strategies

Helping Telcos go Green and save OpEx via Policy

Currently OpenStack does little to help telcos optimize their workloads for energy consumption, cost, and speed. Today, operators must manually (or via scripts) provision, migrate, and decommission workloads to achieve the desired balance of energy/cost/speed, and they must do so repeatedly. In this talk, we describe an open architecture for automating resource optimization, where operators provide a policy describing how workloads ought to be optimized, and OpenStack continually monitors and migrates workloads to satisfy that policy. Under this architecture, operators give their policy to Congress [1] (the not-yet-incubated OpenStack project for Policy-as-a-Service), and Congress continually enforces that policy by migrating workloads as appropriate. In addition to discussing the architecture, we demo a proof-of-concept implementation where Congress migrates real VMs via Nova in response to changes in datacenter readings reported by Ceilometer.

Policy has quickly become a hot topic in cloud management and orchestration. As OpenStack clouds expand, penetrate the enterprise, and evolve with technologies such as containers, policy-based solutions for capturing user intent, automating management and security, and ensuring governance and compliance for applications have emerged as a critical area for development.
This panel will explore emerging trends and projects in policy developing in the OpenStack and OpenDaylight communities. It will discuss a number of topics, including:
– What is meant by “policy” in the context of OpenStack? Is there a “right” approach?
– Why is policy important? What are the key use cases?
– What projects and capabilities are present in OpenStack today?
– How will policy fit with existing OpenStack components?

34

Somik Behera

Panel

Networking

User Panel: Neutron Considerations in Production environments

35

Somik Behera, Gurucharan Shetty

Talk

Networking

Container Networking models with OpenStack Neutron

36

Somik Behera

Talk

Networking

Neutron – Past, Present & Future of Cloud Networking

37

Boden

Talk

Products Tools Services

Choices of deploying OpenStack on VMware

VMware is now serving a plentiful menu of OpenStack delicacies, bound to satisfy the appetite of a wide range of customers. Whether you fancy a small-to-mid-sized prescriptive OpenStack deployment drizzled atop your existing VMware-based technologies in a matter of minutes, or you have an intense craving for a highly customized large OpenStack deployment, VMware’s got you covered. In this meal we’ll sample VMware’s OpenStack menu, which ranges from a click-and-go, out-of-the-box integrated OpenStack distribution to a highly customized, made-to-order OpenStack masterpiece. We’ll dive into the ingredients of these recipes to better understand VMware’s common OpenStack reference architecture, how the solution is deployed and operated, and how VMware offers an OpenStack-based Software Defined Data Center (SDDC) solution for cloud appetites of all sizes. For dessert we’ll indulge in some details on the custom integration between OpenStack and VMware technologies, making your OpenStack meal service a pleasant and affordable experience. By the end of this feast you should walk out fully satisfied, with an understanding of how VMware cooks a delicious dish of OpenStack to suit any occasion you may have.

It is well known that Nova works with the VMware hypervisor. Yet, there is quite some confusion around how Nova integrates with VMware ESXi. Does Nova interact directly with ESXi or with the vCenter Server? Is Nova capability X supported when using the VMware hypervisor? Can I take advantage of ESXi/vCenter Server’s feature Y from Nova? When should an admin use Horizon and when should one use the vCenter client? This talk will mitigate such confusion by digging into the nuts and bolts of the integration between Nova and the VMware hypervisor with the help of a demo that will also showcase how some of the advanced ESXi/vCenter Server features can be leveraged from Nova.

46

Dan F

Talk

Storage

vSAN (Nexenta?): VSAN for Cinder & Nexenta for Manila/object storage

47

Dan F

Talk

Storage

VIO + SwiftStack

48

Dan W

Talk

User Stories

Adobe (Frans plans to submit)

49

Sean Roberts, Sharmail Tahir, Tim Hinrichs

Talk

Planning your OpenStack Project

Leveraging Congress for Policy Management

In this session, we will cover how Congress can be leveraged by your organization to ensure compliance and policy adherence in your OpenStack cloud. The example governance scenario will show how to set and monitor policies for compute, network, and storage.

So you’re going to use Neutron plus an SDN overlay? Or perhaps just simple VLANs? Regardless of which way you go, it turns out there is already a set of well understood best practices for building scalable networks. In this session folks who have built scalable networking for large OpenStack deployments will walk you through the dos and don’ts of networking. Why layer-3 networking is your friend, how OSPF and BGP work together, and why everyone loves a spine/leaf networking architecture. We’ll give real world examples of networks we have built, including one that handled the load for a major retailer during Black Friday 2014.

Congress is an OpenStack project that provides policy as a service across any collection of cloud services in order to offer governance and compliance for dynamic infrastructures. In this lab, users will get access to a live OpenStack setup with Congress already installed and will walk through several key Congress deployment use cases, getting hands-on experience working with Congress. Users will write policies that interface with several OpenStack projects (Neutron, Glance, Nova, Keystone, Cinder, Murano), learn how the policy language works, and see how Congress can tame their cloud.

Cloud Foundry is an open source cloud computing platform as a service (PaaS) that lets users easily deploy and scale their applications. The platform provides several as-a-service features, such as Redis, MySQL, and MongoDB, that application developers can leverage with their applications. Congress integrates with Cloud Foundry and allows security teams to audit and define policies over specific applications. For example, a security team could define a policy saying applications deployed in production require HTTPS or a specific autoscaling configuration. In this talk we’ll give an overview of Congress and demo this integration.

53

Randy Bias, Sean Roberts

Talk

Community Building

DefCore and Me

DefCore is a recently formed set of criteria that identifies which products, providers, and solutions meet the requirements to use the OpenStack mark. In this session, we will explore the considerations and implications from one vendor’s perspective as they begin to assess their own readiness under this new program. We will also discuss why OpenStack cloud operators can benefit from validating their implementations against DefCore using a tool called RefStack, and how this initiative will help the compatibility of OpenStack clouds in the long term.
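For operators who want to try the validation step described above, a rough sketch of running RefStack’s client against a cloud follows; the paths and flags reflect the refstack-client README of the time and may differ by version, and the test-list filename is a placeholder:

```
git clone https://github.com/openstack/refstack-client
cd refstack-client
./setup_env                          # build a virtualenv with Tempest
./refstack-client test -c ~/tempest.conf -v \
    --test-list defcore.txt          # run the DefCore capability tests
```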

Neutron is a critical OpenStack project. Any major change can make or break a business relying on OpenStack networking. To start off day two, Gary Kotton ran down the details of a major change, the Neutron service split. Due to lengthy patch review/merge cycles along with inconsistent vendor contributions, the Neutron vendor plugins were moved from the GitHub OpenStack Neutron repository to repositories in the GitHub Stackforge organization. This will make the OpenStack Neutron project tighter around code contributions. The change also has the side effect of making CI testing more complicated and lengthening bug tracing. We included this talk not to debate the right or wrong of the change, but rather to inform the broader community. If your OpenStack company is not aware of the Neutron service split and its impact on your company, you had better find out quickly.

We had no shortage of ideas and possible directions for this group to take. Allison Price, Stefano Maffulli, and I (Rob Hirschfeld was close by, but unavailable for this part of the day) ran through different ways of going forward.

Simple OpenStack Organization Layout

We settled on the basics: identifying the gap this group seeks to fill. It’s the space between the customers and the contributors. So what can we do that is actionable in this space?

Leaving out some of the sausage making, we decided that a multi-release roadmap is what customers and contributors need here; it fills the gap. So, to fill this gap with the least amount of OpenStack project disruption while still providing value, we divided ourselves into three groups.

The first group will listen to OpenStack project pain points and priorities from each project’s cores and leads. The focus at this point is only to listen and gather what the projects need. We are calling this the Socialization Effort. The results will be presented at the Vancouver summit. This group has an etherpad here: https://etherpad.openstack.org/p/kilo-product-management-socialization.

The second group will define a roadmap process. Much like the DefCore work, this is a very early effort. The roadmap process will require a lot of feedback and socialization before it is considered complete.

Early Version of the Release Roadmap

The Win the Enterprise group has been very successful in gathering user stories. Each of the user stories has one or more features. Many verticals, such as Enterprise, Telecommunications, or Cloud Service Providers, need features that are missing from OpenStack. Through the community, a feature or two is selected to serve the needs of one or more verticals, like scalability in the image above. Blueprints would then be created to implement the features, and the implementation would be tracked over many OpenStack releases. Coordination with the many OpenStack partner companies on priority user stories and their features will be critical for getting developer support. We want to incorporate personas and some other ideas into this process, so expect refinements by the Vancouver Summit.

Not to jump the gun, BUT Shamail Tahir came up with an excellent idea for the first features. The OpenStack Mission is "to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable." Would it not be a good idea to make "simple to implement" and "massively scalable" happen?

Monday

9:00 – 10:30 Talk one (James Haselmaier will lead the discussion): establish a process by which a longer-term vision and product direction can emerge from within the community (http://lists.openstack.org/pipermail/product-wg/2014-December/000051.html), along with a 2-3 architecture plan


The group called the OpenStack Ambassadors was created to recognize active user group leaders worldwide and to promote their leadership to other user groups. More details on the group can be found here. We met during the OpenStack Kilo Summit in Paris and developed a plan for 2015. Find that plan in detail here. We decided that our highest purpose is simply to mentor OpenStack user groups.

That’s me, second from the end on the right.

As part of the Foundation's support, Martin Kiss is creating a new site dedicated to supporting the user groups. It should be ready by the Vancouver Summit. We also plan to offer user group resources such as starter packs. Going forward, we are meeting regularly to work on building and improving the OpenStack user groups; communicating this way should be the easiest way for the Ambassadors to help other user groups. We meet on alternating weeks: the first, third, and fifth Tuesdays at 08:00 GMT, and the second and fourth Fridays at 18:00 GMT, on the chat.freenode.net IRC channel #openstack-meeting-alt. The meeting details can be found here.

If you are part of the OpenStack community, jump on the train and join us!