Building a Developer Community from the Ground Up

Posted by Pascal Joly August 24, 2018

In the software world, developer communities have been the de facto standard since the rise of the open source movement. What started as a counter-culture alternative to the commercial dominance of Microsoft spread rapidly, well beyond its initial roots. Nowadays, few question the motivation to offer an open source option as a valid go-to-market strategy. Many software vendors have used this approach in the last few years to acquire customers through the freemium model and eventually generate significant business (Red Hat, among others). From a marketing standpoint, a community is a great vehicle to increase brand visibility and reach end users.

The Journey to get a community off the ground can be long and arduous

If in theory it all sounds great and fun, in practice our journey from concept to reality was long and arduous.

It all starts with a cultural change. While openness now seems straightforward to most software engineers (just as smartphones and ubiquitous wifi are to millennials), changing the mindset from a culture of privacy and secrecy to one of openness is a significant shift, especially for more mature companies. With roots in the conservative Air Force, this shift did not happen overnight at Quali. In fact, it took us about 3 years to get all the pieces off the ground and the whole company aligned behind this new paradigm. Eventually, what started as a bottom-up, developer-driven initiative bubbled up to the top and became both a business opportunity and a way to establish a competitive edge.

A startup like Quali can only put so many resources behind the development of custom integrations. Our orchestration solution depends on a stream of up-to-date content, and the team was unable to keep up with the constant stream of customer demand. The only way to scale was to open up our platform to external contributors and standardize through an open source model (TOSCA). Additionally, automation development was shifting to Python-based scripting, away from proprietary, visual-based languages. Picking up on that trend early on, we added a new class of objects (called "Shells") to our product that supported Python natively and became the building blocks of all our content.

Putting together the building blocks

We started by exploring existing communities that we could learn from. There is thankfully no shortage of successful software communities in the cloud and DevOps domain: AWS, Ansible, Puppet, Chef, and Docker, to name a few. One thing came across pretty clearly: a developer community isn't just a marketplace where users can download the latest plugins for our platform. Even if it all started with that requirement, we soon realized this would not be nearly enough.

What we really needed was to build a comprehensive "one-stop shopping" experience: a technical forum, training, documentation, an idea box, and an SDK that would help developers create and publish new integrations. We had bits and pieces of these components, mostly available only to internal authorized users, and this was an opportunity to open up this knowledge and improve access and searchability. It also allowed us to consolidate disjointed experiences and provide a consistent look and feel across all these services. Finally, it was a chance to revisit some existing processes that were not working effectively for us, like our product enhancement requests.

Once we had agreed on the various functions we expected our portal to host, it was time to select the right platform(s). While no vendor covered 100% of our needs, we ended up picking AnswerHub for most of the components, such as the knowledge base, forum, idea box, and integrations, and using a more specialized backend for our Quali University training platform. For the code repository, GitHub, already the ubiquitous standard among developers, was a no-brainer.

We also worked on making the community content easier to consume for our target developer audience. That included "ShellFoundry", a command line utility that makes it simple to create a new integration. Who said developing automation has to be a complicated and tedious process? With a few commands, this CLI tool can get you started in minutes. Behind the scenes: a set of TOSCA-based templates covering 90% of the needs, while the developer customizes the remaining 10% to build the desired automation workflow. It also involved product enhancements to make sure this newly developed content could be easily uploaded to and managed by our platform.

Driving Adoption

Once we had all the pieces in place, it was time to grow the community beyond the early adopters. It started with educating our sales engineers and customer success teams on the new capabilities, then communicating them to our existing customer base. Customers embraced the new experience eagerly, since searching for and asking for technical information was so much faster. They also now had visibility, through our idea box, into all current enhancement requests, and could endorse other customers' suggestions to raise the priority of a given idea. 586 ideas have been submitted so far, all nurtured diligently by our product team.

The first signs of success with our community integrations came when technology partners signed up to develop their own integrations with our product, using our SDK and publishing them as publicly downloadable content. We now have 49 community plugins, and growing. This is an ongoing effort that raises interesting questions, such as how to vet the quality of content submitted by external contributors and what support process stands behind it.

It's clear we've come a long way over the last 3 years. Where do we go from here? To motivate new participants, our platform offers a badge program that highlights the most active contributors in any given area. For example, you can get the "Bright Idea" badge if you submit an idea that is voted up 5 times. We also created a Champion program to reward active participants in different categories (community builder, rocket scientist...). We invite our customers to nominate their top contributors, and once a quarter we select and reward winners, who are also featured in a spotlight article.

Additional links

Learn more about Quali

Watch our solutions overview video

5 Tips for Implementing Environment-as-a-Service

Posted by admin June 29, 2018

Looking back at years of automation and setting up Environment-as-a-Service with our clients and partners, we’ve made and witnessed quite a few mistakes. I have long wanted to collect some of the lessons we have learned and share them. Blood, sweat, and tears were poured into these, and it’s easy to see the traces of these experiences in how we have shaped our products. Here are my top 5; I would love to hear your comments and thoughts!

Keep the end users in the loop

Environment-as-a-Service is all about the painful tension between horribly technical environment orchestration and end users who want it to be dead simple. Infrastructure environments are the base for pretty much every task in a technology company. And today, when EVERY company is a technology company, a large number of end users couldn’t care less about how complex it is. They want it Netflix-style, and rightfully so. When building a service, it’s important to identify the end users and understand their needs, making sure they know how to contact the service admins and who can help them (e.g. a “contact us” option). It’s also important to continuously get their feedback. From my experience, when a service was launched without involving the end users, no matter how much amazing magic was done in orchestration, it was often rejected and failed (did I mention tears?).

Don’t automate your manual sequence as-is

It’s tempting to approach automation as a series of tasks that are done manually and need to be automated one by one, resulting in a magnificent script that replaces our tedious manual effort. But try to understand that script 6 months later, or apply some variation, and reality hits you in the face.

Automation opens new possibilities. It often requires changing the mindset to achieve maintainability and scalability. Much like test automation, if we simply automate what we did manually, the results are often sub-optimal. Good automation usually requires some reconstruction of the process: identifying reusable building blocks and finding the right way to model the environment. We’ve been investing in evolving our model for years, and still do (this topic probably deserves its own post!).

Start simple

Automation is such a powerful thing that it is only natural to target the most complex environment, thinking it would be the most valuable to automate, whereas simple ones are not worth it (e.g. “providing developers with a single virtual device is something we do all the time. But come on, it’s one virtual machine; that’s not worth automating”). But the return on investment in automation is highest on things that are easy to automate and maintain AND can be reused very frequently. Some of the most successful implementations I’ve seen started with very simple environments that created an appetite for more.

Invest in adoption and visibility

It’s easy to get lost in the joy of technical details and endless automation tasks, but if we spend a year populating inventories and creating building blocks and complex blueprints that nobody uses, it will be hard to convince anyone it’s worth it. It’s important to make sure value is demonstrated in every milestone, that the development of the service is iterative, and that high-level vision is not lost.

A few best practices that would help -

Start with 1-3 simple blueprints that you think will be used frequently

Invest in aesthetics – blueprint images, names, meaningful descriptions – make it easy and convenient to consume. Yes, it’s as important as the orchestration part (think of shopping on Amazon with the same picture for all products)

Invest in elements that ease use and make self-service easy – clear user instructions, videos if possible, easy remote access to environment elements

Make sure you expose your end users to the new service – it’s best if they are involved and contribute. Announce the availability of the service, and actively get feedback

Track and report your KPIs – present to your management and get feedback

Don’t think automation is magic

Well, automation IS magic for the end users. But behind the scenes, someone needs to make the magic happen, and this is often not a walk in the park. Automation is becoming easier to create, but it’s important to also remember maintenance and scale.

Some best practices on this front -

Start by offering a few predefined environment blueprints as your service. You’ll get the best results if you maximize reuse. When you let many users create environment blueprints, you increase complexity. This could be a next step, but it’s not recommended to begin with

Don’t automate things that are not worth automating. I sometimes hear people describe how they worked for months to automate a very complex environment, only to realize it is used once a year and nobody appreciates the effort. Sometimes people are frustrated by automating very dynamic environments where everything changes on a daily basis – perhaps this is not the right target to start with

Whatever tool you use to automate, try to use built-in capabilities as much as you can. There are always additional capabilities that may be uniquely required by you and seem easy to build yourself, but every custom extension adds a maintenance burden that built-in capabilities don’t.


My Journey into Microservices with Kubernetes

Posted by admin February 25, 2018

In this post I will explain how I came to believe that microservices are the future through a short demonstration of deploying a simple microservices application using Kubernetes.

From Skyscrapers to Cottages

I’ve been what we call today a full-stack developer since asp.net 2.0. Back then we used to have ‘code behind’ files, which typically contained *a lot* of code. Then the three-tier architecture kicked in, which made a clear separation between data access and business logic. This reduced the amount of code in each class and made the code more readable.

The success of online businesses created the need for web applications with more complex requirements, driving the need for new architectures. The rise of design patterns, n-tier architectures, and domain-driven design (DDD) addressed this need, but also made web applications more complex and harder to manage. So we started covering our applications with onion layers and DTOs for each layer, hoping to please the DDD gods.

[Image: the .NET onion architecture]

As applications grew over time, the need to break them down became evident, and this is when SOA became popular. By breaking monoliths down into smaller parts and services, it became easier to manage and write code. The disadvantage was that the deployment process often remained monolithic. Each service was not independent from the rest of the system, so testing a single service was a nightmare: you needed to deploy a bunch of other services without which the service under test couldn’t act on its own. In the image below you can see that our ‘Shop Application’ requires deployment with its service dependencies, but each such dependency cannot be deployed or manually tested on its own.

[Image: SOA in .NET]

Then came microservices.

Microservices came into this world as a concept in late 2013, ‘a new way of thinking about structuring applications,’ as Martin Fowler, a well-known microservices guru, describes them.

Unlike SOA, microservices are independently deployed and act as small parts doing a specific job, while inheriting all the good that came from SOA. They can be written in any language, which allows faster changes in the technology stack; they can be built by independent teams with no previous knowledge of a particular programming language or of the whole product; and they can scale horizontally with ease. For example, in the image below you can see the ‘Payment’ microservice, which in theory can support payment processing from any source, not just our application, because the main goal of a microservice is to decouple a unit from the system.

[Image: a microservices architecture]

It all seemed like heaven, but deployment remained complicated. It often required expertise no one previously had, tools that had only just come into this world, and a different mindset: focusing on a small component that does its ‘stuff’ well, rather than on the whole product.

Let’s talk about the costs of running a microservice that needs to scale in a complex system deployed on AWS or another cloud. With each microservice instance you spin up, you pay for 100% of the instance even if you only use 15% of it. This represents a lot of waste, and an enormous amount of money can be saved just by moving from an instance way of thinking to a process way of thinking. By process I mean, of course, Docker containers, which are the most efficient way to deploy microservices, or serverless solutions like AWS Lambda.

Orchestration with Docker alone is full of complexities: you need to write your own scripts to scale up and to handle networking between new machines running Docker. This is where container orchestrators kick in.

In the following demonstration I will use the most common container orchestrator, Kubernetes, to deploy a simple application consisting of a few microservices.

Taking it Live with Kubernetes

Let’s start with a brief look at the code. There are four parts to this application: Api-Gateway, Order-Service, Payment-Service, and Db. Don’t get too familiar with the code, since it does not matter much; we will concentrate mainly on deployment with Kubernetes. You can always clone or download the full repository.

Microservices

We have a REST API gateway, made with the help of node.js and express.js, through which we can order a product; this creates an order entry in order-service and a transaction entry in payment-service. Both reach the db at the end. Again, this demonstration is about Kubernetes capabilities, not microservices.

Api-Gateway.js can get all successful orders and create a new order.

Order-service.js can get all successful orders and create a new order.

Payment-service.js can get all transactions and create new transactions.

DB.js is just a basic in-memory collection of orders and transactions.
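Since the repository's code is not reproduced here, the sketch below shows what an in-memory store like DB.js might look like. All function and field names are my own illustration, not necessarily the repo's exact API.

```javascript
// Sketch of an in-memory store for orders and transactions.
// Names and shapes here are illustrative, not the repository's exact API.
const orders = [];
const transactions = [];

function addOrder(order) {
  // Assign a sequential id and mark the order successful by default.
  const entry = { id: orders.length + 1, status: 'success', ...order };
  orders.push(entry);
  return entry;
}

function addTransaction(tx) {
  const entry = { id: transactions.length + 1, ...tx };
  transactions.push(entry);
  return entry;
}

// Mirrors "get all orders which are successful" from the descriptions above.
function successfulOrders() {
  return orders.filter((o) => o.status === 'success');
}

module.exports = { addOrder, addTransaction, successfulOrders };
```

Because everything lives in process memory, this store disappears when the container dies, which is exactly why the DB cannot be scaled out later in the post.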

Kubernetes

First we start with a Namespace; think of this entity as a ‘folder’ in which the rest of our resources will reside. You can read more about Namespaces here.
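A minimal Namespace manifest along these lines might look as follows (the name ‘shop’ matches the namespace used in the kube-dns example later; this is a sketch, not the repository's exact file):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop
```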

Next, let’s talk about Services. Services are Kubernetes entities that define networking from containers to other containers in the cluster, or to public networks via the cloud provider’s native load balancing. Find more information about Services here.

Each service routes to the ports that its containers expose.

We will use two service types: LoadBalancer for our Api-Gateway, because it will be exposed to the public network, and ClusterIP for internal communication between the other microservices inside our namespace.
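The two service types might be declared roughly like this (a sketch: the selectors and the gateway's port numbers are illustrative, while the order-service port matches the kube-dns example below):

```yaml
# Public-facing gateway: the cloud provider provisions a load balancer.
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
  namespace: shop
spec:
  type: LoadBalancer
  selector:
    app: api-gateway
  ports:
    - port: 80          # port exposed to the public network
      targetPort: 3000  # port the container listens on
---
# Internal service: reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: order-service
  namespace: shop
spec:
  type: ClusterIP
  selector:
    app: order-service
  ports:
    - port: 3002
      targetPort: 3002
```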

Kubernetes comes with the kube-dns add-on, which lets us know the hostname of any service running in our namespace before it is even deployed, by following the convention [service name].[namespace]:[port exposed by service]. In other words, if the name of the service is ‘order-service’, we can access it from any other container inside the namespace as order-service.shop:3002. This is a crucial part of creating configuration management scripts like the ones you will see below.
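The convention is simple enough to capture in a one-line helper (a sketch; the service name, namespace, and port are just the post's own example):

```javascript
// Builds an in-cluster address per the kube-dns convention:
// [service name].[namespace]:[port exposed by service]
function serviceAddress(name, namespace, port) {
  return `${name}.${namespace}:${port}`;
}

console.log(serviceAddress('order-service', 'shop', 3002)); // order-service.shop:3002
```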

Last but not least are Deployments. Think of a deployment as a watchdog controller: if you ask for 5 instances (containers) of your microservice, it will make sure there are always 5 online, even if some containers die due to a software crash. Moreover, deployments are useful for upgrading and downgrading the application running in the containers: with a short command against kubectl, the CLI tool for interacting with a Kubernetes cluster, you can upgrade or downgrade a microservice.

As you could guess, ‘replicas’ is the number of containers that will be deployed for a specific microservice; feel free to change it to whatever makes you happy for any service besides our DB, since the data it holds is in memory and thus it cannot be scaled.

The most important part of a deployment is usually the container definition. As you can see, we are using the official Node.js image running Alpine Linux, known for its small footprint, and then we execute our configuration management in bash; here some of the Kubernetes magic happens.

First we download both app.js and package.json from GitHub, install the packages defined in package.json, and run our app.js, passing it as arguments the hostnames we already know, with a little help from kube-dns of course. This way each of our microservices knows how to access the other microservices in our application.
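Putting the pieces together, a deployment for one of the services might look roughly like this. It is a sketch: the image tag, labels, and argument are illustrative, and `<repo-raw-url>` is a placeholder for the repository's raw-file URL, which is not reproduced here.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  namespace: shop
spec:
  replicas: 5                  # Kubernetes keeps 5 containers online at all times
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service     # must match the Service selector for routing
    spec:
      containers:
        - name: order-service
          image: node:alpine   # official Node.js image on small-footprint Alpine
          command: ["sh", "-c"]
          args:
            # Download the code, install dependencies, then start the app,
            # passing the peer hostname that kube-dns will resolve.
            - wget -q <repo-raw-url>/app.js <repo-raw-url>/package.json &&
              npm install &&
              node app.js db.shop:3001
```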

I hope you enjoyed my journey into microservices and Kubernetes. Again, feel free to clone or download the repository, and check back for updates.