If you’ve checked out an ebook or trawled through an online newspaper archive at a school or public library, it was probably thanks to EBSCO Information Services.

Used by about five million people worldwide every day, this division of EBSCO Industries is one of the leading providers of resources for libraries, including discovery, resource management, databases and ebooks. That probably also makes it one of the largest tech companies you use but don’t know by name.

Powering these Information Services is a private cloud in transition. “We built a private cloud and we’re also embracing the public cloud as well. So we’re in a transition right now between a very well established and productive application framework that we built on our own managed data centers. To make that environment more productive, we’ve invested in private cloud technologies like OpenStack. We’re using Platform9, we’re using AVI software, load balancers, we have a lot of technology going there,” says CIO Doug Jenkins.

They’re currently one of Platform9’s largest deployments: more than 1,000 sockets and a workload that involves deploying Heat stacks hundreds of times a day. “They’ve stressed OpenStack and Neutron and Heat to levels we’d not seen before,” Bich Le, Platform9 chief architect, tells Superuser in an interview at KubeCon + CloudNativeCon North America, where the San Francisco-based startup announced five new managed Kubernetes customers.

A client of Platform9’s for about a year and a half, EBSCO is now expanding to Kubernetes, making a “big push” as part of their modernization and digital transformation initiative, Le says. As part of the push, they looked into Amazon EKS, but because of their multitude of data centers, mix of private data center and cloud, and mix of workloads, they ultimately gravitated to Platform9. Le says they are just getting started with Kubernetes. In the short term, they’re redesigning many of their applications. Some of the databases are still kept external, so they’re running stateless apps against those, but Le says that with time they’re considering migrating the stateful ones as well. They’re currently in the process of architecting a new stack for their next-generation applications. They have, however, already chosen some components, including Istio for the service mesh.

One thing will remain constant: They plan to continue running everything on OpenStack infrastructure-as-a-service “not bare metal, but Kubernetes on OpenStack,” Le explains. “It’s a very natural choice to make…if you have a mature IAAS, it’s just so much more attractive. The IAAS gives you a lot of flexibility, control and great utilization over your hardware.”

Especially if you have a mix of VMs and containers it makes sense to run them side-by-side, he adds. “If something goes wrong with a Kubernetes node, you can just kill it. It’s much more complicated when you’re dealing with bare metal.”

Tips for the Open Infrastructure Summit
http://superuser.openstack.org/articles/tips-for-the-open-infrastructure-summit/
Fri, 18 Jan 2019 17:06:28 +0000

Open Infrastructure Summit Programming Committee members are sharing recommended topics and content ideas for the April Summit.

There are only three months until the first Open Infrastructure Summit—April 29-May 1 in Denver, Colorado—and only three days to get your sessions submitted. Typically, just 25 percent of submissions are chosen for the conference, but don’t worry, the Summit Programming Committees are here to help.

Superuser is talking to Programming Committee members of the Tracks for the Denver Summit to help shed light on what topics, takeaways and projects they’re hoping to see in sessions submitted to their Track.

So far, we’ve covered the Open Development and Public Cloud tracks. In this article, you can find content tips for the Artificial Intelligence (AI) /Machine Learning / High-Performance Computing (HPC), Hands-On Workshops, Private and Hybrid Cloud, Security and Telecom and NFV Tracks.

AI / machine learning / HPC

I look forward to seeing current use cases that solve real-world problems using AI and machine learning, with best practices applied to various industrial and academic settings. These might include, but aren’t limited to, deep learning (e.g., self-driving cars), natural language processing (NLP), defect prediction in software engineering and development, release engineering (DevOps) and automation, CI/CD, robotics, health care and financial models for predictive analytics. I’d also like to see work on performance analysis, scientific research, data mining and visualization techniques, IoT and computation.

Attendees can expect to take home applications of the most recent work in the industry, including cutting-edge research in AI, HPC, scientific research, data mining and visualization and IoT. I’d like attendees of this track to go home satisfied that they’re in step with current advances in AI, with a sense of where we are today.

Hands-on workshops

A Hands-on Workshop needs to allow attendees to work on something they are not experts in: something they recognize as valuable for their cloud but don’t feel comfortable implementing on their own. Attendees will be presented with step-by-step instructions to guide them to the expected result and an environment ready to operate. The presenter has to guide attendees using slides and, ideally with the help of other subject-matter experts, help every single attendee. Issues and solutions have to be shared with the whole audience.

Summit attendees should expect to take away practical knowledge they can apply to their day job immediately after the Summit.

Private and hybrid cloud

Topics for this Track include private and hybrid cloud deployments, success stories and use cases from different industry segments, including financial services, retail and other non-IT industries; hybrid cloud pain points and potential solutions; and best practices, including what type of workload is best proven for private versus hybrid deployment. Summit attendees should walk away understanding the private and hybrid cloud landscape by learning from other cloud practitioners.

Security

Many security issues plague modern infrastructure, and it’s always the job of those involved in security to help triage and fix them. The sessions will hopefully shed light on existing issues, explain how they were overcome and what knowledge was gleaned from the process. Anything covering the current security issues present in today’s environment would also be most welcome. Presentations or workshops that describe best practices based on real-world experiences and currently available mechanisms in open infrastructure are interesting as well.

Attendees will become more aware of existing security-enhancing mechanisms as well as the security issues facing the community: tackling current problems, improving existing areas and how to contribute if they wish to do so.

Telecom and NFV

I’d love to see talks describing the implementation challenges for NFV solutions and proposals on what’s next in the evolution of NFV, or ways to decrease time-to-market for NFV. For some companies understanding usage and controlling spending on clouds is difficult; how can utilization metrics be integrated into financial models to create holistic showback/chargeback capability? How can a telco with a traditional “zoned” model for network security integrate hybrid cloud into the model, or extend that model onto the cloud(s)? How are micro-services and the “observability” paradigm driving the evolution of how service-level agreements are handled?

Telcos are driving some of the major changes in networking because for them, the network is the product; making it able to serve more applications and uses helps them differentiate from their competitors and offer unique value to their customers. But one way or another, networks are a part of every service and application these days! So I hope attendees gain knowledge and inspiration from the evolution in the technology and practices of operating telco networks and clouds, so they can get more value out of their own networks and clouds.

Typically, just 25 percent of submissions are chosen for the conference. In light of that fierce competition, Superuser is talking to Programming Committee members of the Tracks for the Denver Summit to help shed light on what topics, takeaways and projects they’re hoping to see in sessions submitted to their Track.

Here we’re featuring the Public Cloud Track with tips from Tobias Rydberg, chair of the OpenStack Public Cloud Working Group. He talked to Superuser about some of the content that should be submitted to this track as well as what attendees can expect. Want more help on your submission? Rydberg offered to help over Twitter direct message or in IRC (#open-infra-summit-cfp) before next week’s deadline.

What content you would like to see submitted to this Track?

We’re looking for a broad variety of presentations in the Public Cloud Track, everything from the business perspective to technical talks. Summit attendees would love to hear more from operators who have delivered OpenStack as a public cloud: how you as a public cloud operator handle your daily business, what challenges you have and how you solve them. It’s also helpful to share what provisioning tools you’re using and how you manage upgrades.

What will Summit attendees take away from these sessions?

Attending the Public Cloud track at the OpenInfra Summit in Denver will give attendees a better understanding of the benefits and challenges of using open source in the public cloud sector, both as an operator and as an end user. We hope that attendees will leave with more knowledge and ideas on how to evolve and improve their current operations and business.

Who’s the target audience for this track?

The potential audience for this track is pretty broad – it could be a developer wanting to get a better understanding of the challenges with OpenStack and open source in a public cloud environment, operators looking for ideas and solutions for their businesses, or potential end users interested in seeing the benefits of using open-source solutions.

Inside open infrastructure: The latest from the OpenStack Foundation
http://superuser.openstack.org/articles/inside-open-infrastructure-1-16/
Wed, 16 Jan 2019 20:03:46 +0000

Call for presentations for the Open Infrastructure Summit, a diversity survey and updates from Zuul, Airship, Kata Containers and StarlingX.

Welcome to the latest edition of the OpenStack Foundation Open Infrastructure Newsletter, a digest of the latest developments and activities across open infrastructure projects, events and users. Sign up to receive the newsletter and email community@openstack.org to contribute.

Spotlight on… Zuul

Zuul, a pilot project supported by the OpenStack Foundation (OSF), is a suite of free/libre open source software that drives continuous integration, delivery and deployment (CI/CD) with a focus on project gating and coordinating changes across interrelated projects.

Zuul tests cross-project changes in parallel so users can easily validate changes to multiple systems together before landing a single patch.

Since 2012, Zuul has been proven at scale as a critical part of the OpenStack development process. In early 2018, version 3 was released and Zuul became an independently-governed effort, distinct from the OpenStack project. The third major release also marked a significant rewrite to improve general re-usability of Zuul outside of the OpenStack project and has seen adoption in organizations like BMW, leboncoin, GoDaddy and OpenLab. Many of Zuul’s users are also contributors, with development coming from the likes of Red Hat, BMW, GoDaddy, Huawei, GitHub and the OSF.

Zuul now supports code management through connection drivers for Gerrit, GitHub and generic Git remote repositories, with work underway to add Pagure. Since Zuul relies on Ansible for its job definition language, it can run builds on any operating system that Ansible can manage. Zuul’s resource pool manager, Nodepool, can manage workloads on resources dynamically provisioned through APIs for OpenStack and Kubernetes, or on separately maintained “static” servers, and work is underway to add Red Hat OpenShift, Amazon Elastic Compute Cloud (AWS EC2), Microsoft Azure and VMware vSphere support to that list. Major Zuul design discussions currently underway include support for running jobs with multiple concurrent Ansible versions from its executor, and distributing the resource scheduler to eliminate it as a single point of failure.

If you’re interested in trying out Zuul for yourself, check out these resources:

If you want to find members of the Zuul community to ask questions in real time or even just tell them how you’re using the software, reach out on the #zuul channel on IRC.

OpenStack Foundation news

The Call for Presentations is currently open for the Open Infrastructure Summit that is being held April 29-May 1 in Denver, Colorado. Check out the updated Summit tracks and submit your session by next week’s deadline: Wednesday, January 23 at 11:59 p.m. PT.

All OpenStack Foundation members received a link to vote in the Board of Directors Individual election and bylaws amendments this week. Check your email and cast your vote by this Friday, January 18, 2019 at 11:00 a.m. CST/1700 UTC.

The Diversity & Inclusion Working Group is conducting an anonymous survey to better understand the diversity and makeup of the community. Participation is appreciated so we can better understand and serve the community. Share any questions with working group chair, Amy Marrich (spotz on IRC).

OpenStack Foundation project news

OpenStack

The development of the upcoming OpenStack release reached the Stein-2 milestone last week. We now know what deliverables to expect in the final Stein release, planned for April 10.

It’s been a month since the OpenStack community switched back to using a single list for discussion, forming a single community of contributors. Please read Jeremy Stanley’s report to learn more.

Airship

The Airship team continues to work toward the 1.0 release and invites comments and feedback. A developer and user feedback session to help new users become more engaged with the 1.0 release is in the works. Details to come on the Airship mailing list.

A specification for leveraging Ironic as a bare metal driver has merged. Catch up on the full discussion by watching the recording of the January 10 Airship design meeting. Want to get involved with Airship design and learn more? The team meets every Thursday at 11:00 a.m. CT for an open design meeting.

Kata Containers

Over the past several weeks the Kata team has been busy working on the 1.5 release, scheduled to land January 23. It will offer support for the containerd shim v2 API, Firecracker and live upgrades. The 1.5 release candidate is available now for preview.

The Kata community has formed a new Marketing Content special interest group that will begin monthly meetings on January 16. Details are available in the #kata-marketing channel in the Slack group.

StarlingX

The community set up an email report of StarlingX builds from CENGN to make sure any issue is corrected immediately.

Questions / feedback / contribute

This newsletter is edited by the OpenStack Foundation staff to highlight open infrastructure communities. We want to hear from you!
If you have feedback, news or stories that you want to share, reach us at community@openstack.org. To receive the newsletter, sign up here.

Just about a year ago, the security community got a nasty wake-up call: Spectre and Meltdown.

Considered “pretty catastrophic” by experts, they were a series of vulnerabilities, discovered by various security researchers, in performance optimization techniques built into modern CPUs. Those optimizations (involving superscalar capabilities, out-of-order execution and speculative branch prediction) essentially created a side channel that could be exploited to deduce the contents of computer memory that should normally not be accessible.

For e-commerce giant eBay, keeping the nightmares away was a particularly complex project. The eBay Classifieds Group has a private cloud distributed in two geographical regions (with future plans in the works for a third), around 1,000 hypervisors and a capacity of 80,000 cores. The team needed to patch hypervisors on four availability zones for each region with the latest kernel, KVM version and BIOS updates. During these updates the zones were unavailable and all the instances restarted automatically.

Bruno Bompastor and Adrian Joian, from eBay’s cloud reliability team, shared how shoring up their systems against these vulnerabilities stretched from January until July. One of the takeaways? Ansible is a great tool for infrastructure automation. “We decided to use Ansible as our main tool and heavily relied on Ansible roles as a way to organize tasks,” Bompastor says. As an example, the team has OpenStack roles, hardware roles, update roles and — the most important one for this project — the checker role, to scan for these vulnerabilities. They ran a checker on each host: an open-source script that tests the variants they wanted to check. Available on GitHub, “it’s a very nice script that covers everything like this…”
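The checker the team relied on is a full-featured shell script, but the core idea can be illustrated much more compactly. As a sketch only (assuming the per-vulnerability status files that modern Linux kernels expose under sysfs, not the script eBay actually ran), a Python version might look like:

```python
from pathlib import Path

# Modern Linux kernels expose one status file per known CPU vulnerability,
# e.g. /sys/devices/system/cpu/vulnerabilities/meltdown, containing either
# "Mitigation: ..." or "Vulnerable: ..." (or "Not affected").
SYSFS_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def read_statuses(directory=SYSFS_DIR):
    """Read raw status strings, e.g. {"meltdown": "Mitigation: PTI"}."""
    return {p.name: p.read_text().strip() for p in directory.iterdir()}

def parse_statuses(statuses):
    """Map each vulnerability name to True if the host is still exposed."""
    return {name: text.startswith("Vulnerable")
            for name, text in statuses.items()}

# Usage on a Linux host:
#   for name, exposed in sorted(parse_statuses(read_statuses()).items()):
#       print(name, "VULNERABLE" if exposed else "ok")
```

A check like this is easy to wrap in an Ansible role, which is essentially what the team’s “checker role” does at much greater depth.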

They also gave an inside look at all the work the team had to do to shut down, update and successfully boot a fully patched infrastructure without data loss, and discussed how the team managed the SDN (Juniper Contrail) and LBaaS (Avi Networks) when restarting this massive number of cores.

TripleO (OpenStack On OpenStack) is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack’s own cloud facilities as the foundations – building on Nova, Neutron and Heat to automate fleet management at datacenter scale.

If you read the TripleO setup for network isolation, it lists eight distinct networks. Why does TripleO need so many networks? Let’s take it from the ground up.

WiFi to the workstation

I run Red Hat OpenStack Platform (OSP) Director, which is the productized version of TripleO. Everything I say here should apply equally well to the upstream and downstream variants.

My setup has OSP Director running in a virtual machine (VM). To get that virtual machine set up requires network connectivity. I perform this via wireless, as I move around the house with my laptop. The workstation has a built-in wireless card.

Let’s start here: Director runs inside a virtual machine on the workstation. It has complete access to the network interface card (NIC) via macvtap. This NIC is attached to a Cisco Catalyst switch. A wired cable to my laptop is also attached to the switch. This allows me to set up and test the first stage of network connectivity: SSH access to the virtual machine running in the workstation.

Provisioning network

The blue network here is the provisioning network. This reflects two of the networks from the TripleO document:

IPMI* (IPMI System controller, iLO, DRAC)

Provisioning* (Undercloud control plane for deployment and management)

These two distinct roles can be served by the same network in my setup and, in fact, they must be. Why? Because my Dell servers have a NIC that acts as both the IPMI endpoint and the only NIC that supports PXE. Thus, unless I wanted to do some serious VLAN wizardry and get the NIC to switch both (tough to debug during the setup stage), I’m better off with both using untagged VLAN traffic. This way, each server is allocated two static IPv4 addresses: one used for IPMI and one that will be assigned during hardware provisioning.

Another way to think about the set of networks that you need is via DHCP traffic. Since the IPMI cards are statically assigned their IP addresses, they don’t need a DHCP server. But the hardware’s operating system will get its IP address from DHCP. So it’s OK if these two functions share a network.

This doesn’t scale very well. IPMI and iDRAC can both support DHCP, and that would be the better way to go in the future, but it’s beyond the scope of what I’m willing to mess with in my lab.

Deploying the overcloud

In order to deploy the overcloud, the director machine needs to perform two classes of network calls:

SSH calls to the bare metal OS to launch the services, almost all of which are containers. This is on the blue network above.

HTTPS calls to the services running in those containers. These services also need to be able to talk to each other. This is on the yellow internal API network above. (I didn’t color-code “yellow” as you can’t read it.)

Internal (not) versus external

You might notice that my diagram has an additional network; the external API network is shown in red.

Provisioning and calling services are two very different use cases. The most common API call in OpenStack is POST https://identity/v3/auth/token. This call is made prior to any other call. The second most common is the call to validate a token. The create token call needs to be accessible from everywhere that OpenStack is used. The validate token call does not. But if the API server only listens on the same network that’s used for provisioning, that means the network is wide open; people who should only be able to access the OpenStack APIs have the capability to send network attacks against the IPMI cards.
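For reference, that most common call is a small JSON POST. The sketch below builds the Keystone v3 password-auth request body in Python; the endpoint URL and credentials in the usage comment are placeholders for illustration, not part of the deployment described here:

```python
def password_auth_body(user, password, project, domain="Default"):
    """Build the JSON body for POST /v3/auth/token (Keystone v3)."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": user,
                    "domain": {"name": domain},
                    "password": password,
                }},
            },
            "scope": {"project": {
                "name": project,
                "domain": {"name": domain},
            }},
        }
    }

# Issuing the request (placeholder endpoint and credentials):
#   import requests
#   r = requests.post("https://identity.example.com/v3/auth/token",
#                     json=password_auth_body("demo", "s3cret", "demo"))
#   token = r.headers["X-Subject-Token"]  # Keystone returns the token here
```

Every subsequent API call carries that token in an X-Auth-Token header, which is why the create-token endpoint has to be reachable from everywhere OpenStack is used.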

To split this traffic, either the network APIs need to listen on both networks or the provisioning needs to happen on the external API network. Either way, both networks are going to be set up when the overcloud is deployed.

Thus, the red server represents the API servers that are running on the controller and the yellow server represents the internal agents that are running on the compute node.

Some Keystone history

When a user performs an action in the OpenStack system, they make an API call. This request is processed by the web server running on the appropriate controller host. There’s no real difference between a Nova server requesting a token and a project member requesting a token. Yet these were seen as separate use cases and were put on separate network ports: the internal traffic was on port 35357 and the project member traffic was on port 5000.

It turns out that running on two different ports of the same IP address doesn’t solve the problem people were trying to fix. They wanted to limit API access via network, not by port. Therefore, there really was no need for two different ports, but rather two different IP addresses.

This distinction still shows up in the Keystone service catalog, where endpoints are classified as external or internal.
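The catalog returned with a token tags every endpoint with an interface, and a client picks the one appropriate for the network it sits on. A minimal selector over that documented structure might look like this (the URLs in the test and comments are invented examples):

```python
def pick_endpoint(catalog, service_type, interface="public", region=None):
    """Select an endpoint URL from a Keystone v3 service catalog.

    `catalog` is the list of services included in a token response; each
    service carries endpoints tagged with an "interface" of "public",
    "internal" or "admin".
    """
    for service in catalog:
        if service["type"] != service_type:
            continue
        for endpoint in service["endpoints"]:
            if endpoint["interface"] != interface:
                continue
            if region and endpoint.get("region") != region:
                continue
            return endpoint["url"]
    raise LookupError(f"no {interface} endpoint for {service_type}")
```

The official clients expose the same choice directly: keystoneauth and the openstack CLI accept an interface option (e.g. --os-interface internal), so agents on the internal API network never need to touch the external addresses.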

Deploying and using a virtual machine

Now our diagram has become a little more complicated. Let’s start with the newly added red laptop, attached to the external API network. This system is used by our project member to create the new virtual machine via the compute create_server API call.

Here’s the order of how it works:

The API call comes from the outside, travels over the red external API network to the Nova server (shown in red)

The Nova server posts messages to the queue, which are eventually picked up and processed by the compute agent (shown in yellow).

The compute agent talks back to the other API servers (also shown in red) to fetch images, create network ports and connect to storage volumes.

The new VM (shown in green) is created and connects via an internal, non-routable IP address to the metadata server to fetch configuration data.

The new VM is connected to the provider network (also shown in green.)
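Step 1 of the sequence above is a single compute API request. As a sketch (the IDs are placeholders; real values come from the image, flavor and network services), the body the red laptop’s client sends looks like this:

```python
def create_server_body(name, image_id, flavor_id, network_ids):
    """Build the JSON body for POST /v2.1/servers (the Nova compute API)."""
    return {"server": {
        "name": name,
        "imageRef": image_id,
        "flavorRef": flavor_id,
        "networks": [{"uuid": net_id} for net_id in network_ids],
    }}

# Placeholder usage; everything after this POST happens asynchronously,
# driven by the message queue, as described in steps 2 through 5:
#   requests.post(compute_url + "/servers",
#                 json=create_server_body("demo-vm", image_id,
#                                         flavor_id, [net_id]),
#                 headers={"X-Auth-Token": token})
```

The API response comes back immediately with the server in a building state; the caller polls for status while the queue-driven steps complete.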

At this point, the VM is up and running. If an end user wants to connect to it they can do so. Obviously, the provider network doesn’t run all the way through the router to the end user’s system, but this path is the open-for-business network pathway.

Tenant networks

Let’s say you’re not using a provider network. How does that change the set up? First, let’s re-label the green network as the external network. Notice that the virtual machines don’t connect to it now. Instead, they connect via the new purple networks.

Note that the purple networks connect to the external network in the network controller node, shown in purple on the bottom server. This service plays the role of a router, converting the internal traffic on the tenant network to external traffic. This is where the floating IPs terminate and are mapped to an address on the internal network.

Wrap up

The TripleO network story has evolved to support a robust configuration that splits traffic into component segments. The diagrams above attempt to pass along my understanding of how they work and why.

I’ve left off some of the story, as I don’t show the separate networks that can be used for storage. I’ve collapsed the controllers and agents into a simple block to avoid confusing detail: my goal is accuracy, but here it sacrifices precision. It also only shows a simple rack configuration, much like the one here in my office. The concepts presented should allow you to understand how it would scale up to a larger deployment. I expect to talk about that in the future as well.

I’ll be sure to update this article with feedback. Please let me know what I got wrong and what I can state more clearly.

About the author

Adam Young is a cloud solutions architect at Red Hat, responsible for helping people develop their cloud strategies. He has been a long time core developer on Keystone, the authentication and authorization service for OpenStack. He’s also worked on various systems management tools, including the identity management component of Red Hat Enterprise Linux based on the FreeIPA technology. A 20-year industry veteran, Young has contributed to multiple projects, products and solutions from Java based eCommerce web sites to Kernel modifications for Beowulf clustering. This post first appeared on his blog.

It’s January. The time of year when the mind turns to calls for papers. Lifelong learning. And binge-watching home organization shows rather than actually decluttering. (Maybe just me on that last one?)

Here are top picks for free or low-cost upcoming learning opportunities. If you’ve recently done a webinar and want to share your takeaways about it for Superuser, remember that contributed posts earn you Active User Contributor status.

Intent-based network load balancer automation and Ansible

In this session and live demo from Red Hat, you’ll learn from actual customer use cases of Ansible automation modules, from configuring network functions and automating load balancer clusters to creating L4-L7 configurations. Details here.

Storyboard and Launchpad

The Vietnam Open Infrastructure Community’s first meetup (and webinar) of the year will feature the OpenStack Foundation’s Kendall Nelson talking about StoryBoard and Launchpad. StoryBoard is a web application for task tracking across inter-related projects. Launchpad is a web application and website that allows users to develop and maintain software. Details here.

Persistent volumes for Kubernetes with NetApp Trident

The webinar will show you how to provision persistent volumes for Kubernetes using NetApp’s Trident and Cloud Volumes ONTAP. NetApp Trident is a dynamic storage provisioner that leverages Cloud Volumes ONTAP for Kubernetes persistent volume claims. Trident is a fully supported open-source project maintained by NetApp. Details here.

Kubernetes and Istio Security

Gadi Naor, CTO and co-founder at startup Alcide, will cover basic Istio security features, managing and restricting traffic to external services with Kubernetes and Istio network policies, and spotting security anomalies, as well as some interesting use cases. Details here.

Modernize your data center through open source

Cumulus’ co-founder and current CTO, JR Rivers, will discuss the growth of the open source community and how Cumulus is helping bring open source principles into modern data center networks. He’ll dive into some of the company’s contributions to the open source community such as Open Network Install Environment (ONIE), ifupdown2, VRF for Linux and FRRouting. Details here.

SUSE Expert Days 2019

Offered in more than 80 cities worldwide, the SUSE Expert Days tour offers a free day of technical discussions, presentations and demos. The theme? Open. Redefined. Participants will learn how to:

Transform IT infrastructure

Create a more agile business

Make room for innovation

Events kick off in January for Europe and in February for Latin America and North America. Full list of events here.

Tips for talks on the Open Development Track for the Denver Summit
http://superuser.openstack.org/articles/tips-open-development-track-denver-summit/
Thu, 10 Jan 2019 15:03:59 +0000

Open Infrastructure Summit Programming Committee members share recommended topics and content ideas for the April Summit.

Time to harness those brainstorms: The call for presentations for the first Open Infrastructure Summit is open until January 23. OpenStack Summit veterans will notice some changes in the event beyond the name. In order to reflect the diversity of projects, use cases and software development in the ecosystem, several conference tracks have been added or renamed.

Historically, just 25 percent of submissions are chosen for the conference. In light of that fierce competition, Superuser is talking to Programming Committee members of the Tracks for the Denver Summit to help shed light on what topics, takeaways and projects they’re hoping to see covered in sessions submitted to their Track.

Here we’re featuring the Open Development Track, formerly the Open Source Community Track. Thierry Carrez, VP of engineering for the OpenStack Foundation, and Allison Randal, board member of the OpenStack Foundation, are tasked with leading selections for this Track. They shared these insights ahead of the submission deadline.

Open Development Track topics

The 4 Opens, the future of free and open source software, challenges of open collaboration, open development best practices and tools, open source governance models, diversity and inclusion, mentoring and outreach, community management.

Describe the content you’d like to see submitted to this Track.

Today, open source licensing is not enough; we need to define standards for how open source software should be built. Models of open development come with their benefits and challenges, and with their best practices and tools. I’d like this track to be where those open development models and standards are discussed. Beyond that, this track will cover open source governance models, the challenges of diversity and inclusion, the need for mentoring and outreach, and community management topics.

What should Summit attendees take away from sessions in this Track?

Too much of open source software development today is closed, one way or another. Its development may be done behind closed doors, or its governance may be locked down to ensure lasting control by its main sponsor. I hope that this track will expose the benefits of open collaboration and help users tell the difference between different degrees of openness. I also hope this track will explain why diversity is critical to the success of open source projects and inspire attendees to participate in mentoring and outreach.

Who’s the target audience for this track?

This Track is broadly applicable to anyone who participates in an open source project, as a designer, operator, developer, community member or sponsor. No specialist background knowledge is required; you’ll gain value from the sessions even if you’re completely new to open collaboration. Experienced community leaders will benefit from exchanging ideas and best practices across communities, while new community leaders, or anyone curious about getting involved in community leadership, will benefit from the experiences of those who have gone before them.

Edge computing is at the top of tech bingo terms for yet another year. Gartner Inc. recently announced that it expects that in the next five years, “empowered edge” will be moving everything from the internet of things to 5G: “Cloud computing and edge computing will evolve as complementary models with cloud services being managed as a centralized service executing, not only on centralized servers, but in distributed servers on-premises and on the edge devices themselves.”

Where to connect with OpenStack in 2019
http://superuser.openstack.org/articles/openstack-events-2019/
Tue, 08 Jan 2019 15:09:48 +0000

Connect with the open infrastructure community near you.

Right out of the gate in January, user groups from New Delhi to Munich are putting their heads together. You’ll also find community members answering questions from a booth at the fun-packed FOSDEM in Brussels, February 2-3. (For what to expect, check out our coverage here.) There are a host of spring events – consider checking out OpenInfraDays in London on April 1. It will cover all things cloud computing, from the latest developments in bare metal hardware infrastructure through to scaling scientific computing workloads.

The next milestone: the first-ever Open Infrastructure Summit, taking place April 29-May 1 in Denver. Remember to plan your trip with an eye on the Project Teams Gathering (PTG), May 2-4, 2019, also in Denver. It’s an event for anyone who self-identifies as a member of a given project team, as well as operators who are specialists on a given project and willing to spend their time giving feedback on their use case and contributing their usage experience to the project team. Wondering what it’s like when the “zoo” of OpenStack projects gets together? Check out this write-up.

Then there’s an entire summer of global get-togethers focusing on open infrastructure. (The calendar, much like the technology, is constantly evolving, so keep an eye on that events page as it fills up for the year.)

Once you’ve saved the date, remember that Superuser is always interested in community content – get in touch at editorATopenstack.org and send us your write-ups (and photos!) of events. Articles posted are eligible for Active User Contributor (AUC) status.