Streamline your CI/CD pipeline with TeamCity and CloudShell Colony’s Environment as a Service

Posted by German Lopez April 30, 2019

This article originally appeared on JetBrains on 4/25/19

Posted on April 25, 2019 by Yegor Naumov
This guest post is brought to you by Meni Besso, Product Manager at Quali, the creators of CloudShell Colony

CloudShell Colony is an Environment as a Service platform. It connects to your cloud providers including AWS, Azure, and Kubernetes and automates environment provisioning and deployment throughout your release pipeline in a scalable, maintainable, and smart way.

Creating a scalable CI/CD pipeline is harder than it looks.

Too many times have we witnessed DevOps engineers dive straight into configuring builds and writing scripts, only to find out later that they need to redo a significant part of their work. Any professional engineer will tell you that avoiding this unsavory outcome is easy. You need to treat your pipeline as a product: gather your requirements, design your architecture, develop, test, and improve your pipeline as your product grows.

In order to come up with a good design, you have to take three important aspects into account:

Release workflow: Working out the steps that need to happen from the moment a developer commits code until the code goes live in production.

Test automation: Planning the tests that need to run to validate the quality of the release and the automation tools that will be used to run them.

Environments: Deciding which environments the release workflow and the tests will run on, and how they will be provisioned, maintained, and cleaned up.

Much has been said about the first two items, but we at Quali believe that environments are the missing piece on the way to a reliable CI/CD pipeline, because continuous integration only works if you can trust your tests, and running tests on faulty environments is a serious barrier.

So, how do you design environments?

First, let’s distinguish between the two types of environment:

Sandbox environments: Environments that you use temporarily to complete a specific task. For example, to run sanity tests or performance tests from your pipeline.

Production environments: Environments that are always-on serving users. For example, your live production environment or a pre-production environment that you share with your customers.

A typical release pipeline uses both sandbox environments and production environments. You run various tests on sandboxes to validate your code and then deploy to production.

Sandbox environments can be tricky. Once you have set up your automated pipeline, you may notice the complexities. You need to support different variations of these environments, often in parallel. Some environments need to provide quick feedback while others need to replicate the full production environment. How do you deploy the artifacts, load test data, and set up your test tools? How do you debug failed tests? Are you using cloud infrastructure? How much does it cost?

At some point, you realize that your challenge is also one of scale: all of these issues grow from bad to worse as your automation gradually consumes more and more environments.

Production environments can be tricky as well, but for different reasons. With production, you need to think about ongoing upgrades, handling downtime, backward compatibility, rollbacks, monitoring, and security.

Integrating your TeamCity CI/CD pipeline with an Environment as a Service platform like CloudShell Colony will ease many of these complexities and provide a more stable, reliable, and scalable CI/CD pipeline.

Let’s see how it works.

Integrating CloudShell Colony with your DevOps Toolchain

You start by creating a CloudShell Colony account and connecting it to your AWS or Azure accounts. This is done through a simple process that gives CloudShell Colony permission to create infrastructure in your public cloud accounts.

Once you have your TeamCity server connected to your CloudShell Colony account, you can connect your artifact storage provider and start using CloudShell Colony in your release pipeline: run automated tests on its isolated sandbox environments and update production using blue-green deployment.

Diagram: a TeamCity pipeline integrated with CloudShell Colony

Running Automated Tests on Sandbox Environments

To use sandbox environments in your TeamCity builds, you need to update your build configuration flow.

The first build step should start a sandbox. Use the ‘Colony’ runner, which comes out of the box with CloudShell Colony’s plugin, and fill in the required parameters. For example, enter the name of the environment blueprint you want to use, the location of your artifacts in the storage provider, and any test data you want to load into the sandbox environment.
When the environment is ready (provisioned and deployed), CloudShell Colony saves its details to a TeamCity variable, so you can extract any information you need (such as IP addresses or application links) and feed it to your tests.

Finally, after the tests complete and the sandbox environment is no longer needed, you can terminate it by adding another build step that ends the sandbox and reclaims its cloud resources.
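Conceptually, those build steps drive a simple start/use/end lifecycle. The sketch below illustrates that flow in Python against a hypothetical REST endpoint, blueprint name, and response fields; the actual Colony runner performs these calls for you, and the real API paths and parameters may differ.

```python
import requests

COLONY_API = "https://example.cloudshell-colony.com/api"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}       # hypothetical credential

def start_sandbox(blueprint, artifacts, inputs):
    """Step 1: launch a sandbox from a blueprint (illustrative request shape)."""
    resp = requests.post(
        f"{COLONY_API}/sandboxes",
        json={"blueprint": blueprint, "artifacts": artifacts, "inputs": inputs},
        headers=HEADERS, timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

def get_sandbox_details(sandbox_id):
    """Step 2: read the details the plugin would expose as a TeamCity variable."""
    resp = requests.get(f"{COLONY_API}/sandboxes/{sandbox_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. IP addresses and application links to feed the tests

def end_sandbox(sandbox_id):
    """Step 3: terminate the sandbox and reclaim its cloud resources."""
    requests.delete(f"{COLONY_API}/sandboxes/{sandbox_id}", headers=HEADERS, timeout=30)

if __name__ == "__main__":
    sandbox_id = start_sandbox(
        "web-app-blueprint",
        artifacts={"web": "builds/web-1.2.3.zip"},
        inputs={"TEST_DATA": "smoke"},
    )
    try:
        details = get_sandbox_details(sandbox_id)
        print("Run tests against:", details.get("applications", []))
    finally:
        end_sandbox(sandbox_id)  # always reclaim cloud resources, even if tests fail
```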

Creating Environments from Blueprints

CloudShell Colony’s environments are created from blueprints.

Blueprints define application requirements in a simple YAML format that you store in your source control provider. They include the definitions of artifacts, networking and computing requirements, deployment scripts, variables, and test data. They can also integrate with traditional IaC (Infrastructure-as-Code) tools, such as Terraform.

Once you create your blueprints and store them in your source control repository, they will be available in CloudShell Colony’s self-service catalog, allowing you to launch environments from CloudShell Colony’s user interface or from your TeamCity builds.
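To make the idea concrete, the snippet below sketches the kind of information a blueprint captures (artifacts, compute and networking requirements, deployment scripts, variables) and writes it out as YAML. The field names here are purely illustrative; they are not the actual Colony blueprint schema.

```python
import yaml  # PyYAML

# Illustrative structure only -- not the real Colony blueprint schema.
blueprint = {
    "name": "web-app",
    "applications": [
        {
            "name": "web-server",
            "artifact": "builds/web-1.2.3.zip",           # pulled from the artifact storage provider
            "compute": {"instance_type": "t3.medium"},
            "deployment_script": "scripts/deploy_web.sh",
        }
    ],
    "networking": {"expose_ports": [80, 443]},
    "variables": {"DB_SEED": "test-data/seed.sql"},        # test data loaded into the sandbox
}

# Stored in source control alongside the application code, e.g. web-app.blueprint.yaml
with open("web-app.blueprint.yaml", "w") as f:
    yaml.safe_dump(blueprint, f, sort_keys=False)
```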

Environment Lifecycle

Every environment goes through several stages including a verification step that ensures that the environment is 100% ready for your automated tests.

Since your environment is composed of applications, CloudShell Colony manages the application dependencies and initialization, and provides quick debugging and troubleshooting options.

If errors occur, you will know exactly which steps failed, and you can browse the deployment logs and securely connect to your compute instances.

When the environment is no longer needed, CloudShell Colony cleans up the environment’s infrastructure from your cloud account, ensuring that you won’t have to pay for unused infrastructure.
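A CI step waiting for that verification stage might look like the sketch below, which polls a hypothetical status field until the environment is ready or has errored; the plugin does this automatically, and the real field names and values may differ.

```python
import time
import requests

COLONY_API = "https://example.cloudshell-colony.com/api"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}       # hypothetical credential

def wait_until_ready(sandbox_id, timeout_s=900, poll_s=15):
    """Poll an assumed status field until the sandbox passes verification."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = requests.get(f"{COLONY_API}/sandboxes/{sandbox_id}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        sandbox = resp.json()
        if sandbox.get("status") == "Active":   # verification passed, ready for automated tests
            return sandbox
        if sandbox.get("status") == "Error":    # surface the failed step and point at the logs
            raise RuntimeError(f"Sandbox {sandbox_id} failed deployment; check the deployment logs")
        time.sleep(poll_s)
    raise TimeoutError(f"Sandbox {sandbox_id} was not ready within {timeout_s} seconds")
```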

Environments play a crucial part in building a stable application pipeline, and in DevOps processes in general. Using platforms like CloudShell Colony enables teams to get fully provisioned and deployed environments quickly while keeping cloud expenses under control.

Additional links

Learn more about Quali

Watch our solutions overview video

Augmented Intelligent Environments

Posted by German Lopez November 19, 2018

You have the data, analytic algorithms and the cloud platform to conduct the computations necessary to garner augmented insights. These insights provide the information necessary to make business, cybersecurity and technology decisions. Your organization seems poised to enable strategies that harness your proprietary data with external data.

So, what’s the problem, you ask? Well, my answer is that things don’t always go according to plan:

Data streams from IoT devices get disconnected, resulting in partial data aggregation.

Daunting would be an understatement if you did not have the appropriate capabilities in place to address these challenges. Let’s take a look at how augmented intelligent environments can help. This blog highlights an approach, in a few steps, that can get you started.

Identify the Boundary Functional Blocks

Identifying the boundaries will help you focus on the specific components that you want to address. In the following example, the functional blocks are simplified into foundational infrastructure and data analytics functions. The analytics sub-components can entail a combination of cloud-provided intelligence and your own enterprise proprietary software. Data sources can be any combination of IoT devices, and the output can be viewed on any supported interface.

Establish your Environments

Environments can be established to segment the functionality required within each functional block. A variety of test tools, custom scripts, and AI components can be introduced without impacting other functional blocks. The following example segments the underlying cloud Platform Service environment from the Intelligent Analytics environment. The benefit is that these environments can be self-service and automated for authorized personnel.

Introduce an Augmented Intelligent Environment

The opportunity to introduce augmented intelligence into the end-to-end workflow can have significant implications for an organization. Disconnected workflows, security gaps, and inefficient processes can be identified and remediated before they hinder business transactions and customer experience. Blueprints can be orchestrated to model the required functional blocks. Quali CloudShell shells can be introduced to integrate with augmented intelligence plug-ins. Organizations would introduce their own AI software elements to enable augmented intelligence workflows.

The following is an example environment concept illustration. It depicts an architecture that combines multiple analytics and platform components.

Summary

The opportunity to orchestrate augmented intelligence environments has now become a reality. Organizations are now able to leverage insights from these environments, which results in better decisions regarding business, security, and technology investments. The insights derived from these environments augment traditional knowledge bases within the organization. Coupled with the advancement of artificial intelligence software, augmented intelligence environments can be applied to any number of use cases across all markets. Additional information and resources can be found at Quali.com.

Learn more about Quali

Watch our solutions overview video

DevSecOps Environments Deployed Secure and Fast

Posted by German Lopez August 25, 2018

You've just implemented security tools that lower your organization's risk profile for your applications deployed on the Microsoft Azure Public Cloud. End-user experience is compromised and you're trying to figure out why...sound familiar? Responsibility rests squarely upon the DevOps, SecOps or DevSecOps teams who modified the application workflow behavior.

So where to start? You contact the DevOps team to provide you with a test environment so that you can start your troubleshooting efforts. The DevOps team is busy rolling out the latest software updates and doesn't have cycles to spare due to production deployment deadlines. At the same time, you are notified that the Azure Load Balancer is being replaced with Nginx, and you have no idea what the ramifications will be on your security posture or end-user experience.

The initial troubleshooting activity occurs in the SecOps environment but you still require the involvement of the DevOps team. DevOps will provide the latest updated software releases. DevOps teams, responsible for cloud architectural components, re-platform the Azure environment to reflect network modifications. These tasks are daunting without an ability to access self-service Azure test environments. In order to address these challenges, test environments are required to isolate troubleshooting activities. The following example outlines a microservice application deployed in a hybrid Azure cloud utilizing Quali CloudShell for orchestration.

Functionality: The first step in any application and infrastructure deployment is to ensure that the baseline functions of the application are responsive per the requirements. CloudShell provides the capability to introduce objects that represent the physical and virtual elements required within the solution architecture. These objects are modeled and deployed in Azure Public Cloud and within the Azure Stack at the organization's datacenter or remote edge network.

Cybersecurity: Once the functionality of the solution has been validated, the security software components are assessed to determine if they are the cause of the traffic bottlenecks. In this example, a security scan utilizing Cavirin's Automated Risk Analysis Platform (ARAP) determines the risk posture. If the risk score violates a regulation or compliance standard, a Polymorphic Binary Scrambling solution from Polyverse is installed to enable a Moving Target Defense. The DevSecOps team utilizes the blueprint design to update the application software, determine risk posture and remediate as required.

Performance: So we're feeling good, functionality is in place, security protection is enabled, but end-user experience is terrible! Visibility is required at the data, application and network layers within the Azure Cloud environment to determine where the bottlenecks exist. In this example, Accedian PVX captures, analyzes and reports on the traffic workflows with test traffic from Blazemeter. Environments are quickly stood up by the DevSecOps team and a root cause is identified for the traffic bottlenecks.

Automation: To bring it all together requires a platform that allows you to mix and match different objects to build your solution architecture. In addition, self-service is a key workflow component that allows each team to conduct each operation. This saves time, resources, and costs: solution validation can be achieved in minutes and hours rather than days, weeks, and months.

In summary, CloudShell automates environment orchestration, modeling, and deployments. Any combination of public, private, and hybrid cloud architectures is supported. The DevOps team can ensure functionality and collaborate with SecOps to validate the security risk and posture. This collaboration enables a DevSecOps workflow that ensures performance bottlenecks are addressed with visibility into cloud workloads. Together, this allows the DevSecOps team to deploy environments secure and fast.

To learn more about DevSecOps environments, please visit the Quali resource center to access documents and videos.

Additional links

Learn more about Quali

Watch our solutions overview video

New DevOps plugin for CloudShell: TeamCity by JetBrains

Posted by Pascal Joly January 27, 2018

Electric cars may be stealing the limelight these days, but in this blog, we'll discuss a different kind of newsworthy plugin: Quali just released the TeamCity plugin to help DevOps teams integrate the CloudShell automation platform with the JetBrains TeamCity pipeline tool.

JetBrains is well known for its large selection of powerful IDEs; their popular PyCharm for Python developers comes to mind. They've also built a solid DevOps offering over the years, including TeamCity, a popular CI/CD tool for automating application releases.

So what does this new integration bring to the TeamCity user? Let's step back and consider the challenges most software organizations are trying to solve with application release.

The Onus is on the DevOps team to meet Application Developer Needs and IT budget constraints.

Application developers and testers have a mandate to release as fast as possible. However, they struggle to get, in a timely manner, an environment that accurately represents the desired state of the application once deployed in production. On the other hand, IT departments have budget constraints on any resource deployed during or before production, so the onus is on the DevOps team to meet these business needs.

The CloudShell solution provides environment modeling that can closely match the end state of production using standard blueprints. Each blueprint can be deployed with standard out-of-the-box orchestration that can provision complex application infrastructure in a multi-cloud environment. As illustrated in the diagram above, the ARA tool (TeamCity) triggers the deployment of a Cloud Sandbox at each stage of the pipeline.

The built-in orchestration also takes care of the termination of the virtual infrastructure once the test is complete. The governance and control CloudShell provides around these Sandboxes guarantee the IT department will not have to worry about budget overruns.

Integration process made simple

As we've discussed earlier, when it comes to selecting a DevOps tool for Application Release Automation, there is no lack of options. The market is still quite fragmented, and we've observed, from the results of our DevOps/Cloud survey as well as our own customer base, that there is no clear winner at this time.

No matter what choice our customers and prospects make, we make sure integrating with Quali's CloudShell Sandbox solution is simple: a few configuration steps that should be completed in a few minutes.

Since we have developed a large number of similar plugins over the last 2 years, there are a few principles we learned along the way and strive to follow:

Ease of use: the whole point of our plugins is to be easy to configure so you can get up and running quickly. Typically, we provide wrapper scripts around our REST API and detailed installation and usage instructions (a minimal sketch of such a wrapper appears after this list). We also provide a way to test the connection between the ARA tool (e.g. TeamCity) and CloudShell once the plugin is installed.

Security: encrypt all passwords (API credentials) in both storage and communication channels.

Scalability: the plugin architecture should support a large number of parallel executions and scale accordingly to prevent any performance bottleneck.
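In that spirit, a wrapper script usually boils down to a thin HTTP client. The sketch below shows the general shape in Python; the endpoint paths, payloads, and field names are placeholders rather than the actual CloudShell API, so refer to the plugin's installation instructions for the real calls.

```python
import os
import requests

class SandboxClient:
    """Minimal wrapper sketch around a sandbox REST API (placeholder endpoints)."""

    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        # Security: the token comes from a secured variable, never hard-coded in build configs.
        self.session = requests.Session()
        self.session.headers.update({"Authorization": f"Bearer {token}"})

    def test_connection(self):
        """Ease of use: lets the plugin verify connectivity right after installation."""
        return self.session.get(f"{self.base_url}/api/health", timeout=10).ok

    def start_sandbox(self, blueprint_name, duration_minutes=120):
        resp = self.session.post(
            f"{self.base_url}/api/sandboxes",
            json={"blueprint": blueprint_name, "duration": duration_minutes},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["id"]

    def stop_sandbox(self, sandbox_id):
        self.session.delete(f"{self.base_url}/api/sandboxes/{sandbox_id}", timeout=30)

# Scalability: each parallel build holds its own client/session, with no shared state.
client = SandboxClient("https://cloudshell.example.com", os.environ.get("CLOUDSHELL_TOKEN", ""))
```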

Additional links

Learn more about Quali

Watch our solutions overview video

Deep Dive with Quali at the DevOps Enterprise Summit!

Posted by Pascal Joly November 8, 2017

Ready to "Dive into DevOps"? Quali will be in San Francisco next week, November 13-15, at the DevOps Enterprise Summit. We will showcase our latest DevOps integrations with Atlassian's Jira, Jenkins, CA Blazemeter, Microsoft VSTS, AWS CodePipeline, and many others.

Since I've covered details on our Jira, Jenkins and Blazemeter integrations in previous blogs, I wanted to introduce the two new kids on the block:

Microsoft VSTS and Visual Studio plugin: Microsoft VSTS is the hosted version of Team Foundation Server, and together with the Visual Studio plugin it offers a way for developers using this popular IDE to create and terminate CloudShell Sandboxes from a Continuous Integration workflow, as well as trigger a test suite tied to dynamic environments.

AWS CodePipeline plugin: the AWS CodePipeline service is available to any AWS user and provides a simple way to create DevOps pipelines; the plugin integrates, as part of the workflow, actions to create CloudShell Sandboxes, run CloudShell commands, and terminate the Sandboxes.

If you're going to be around, make sure you visit us at booth #15, to learn how you can accelerate your application release and scale your DevOps automation with CloudShell Dynamic Environments. You'll also have a chance to win one of these handy $100 Amazon gift cards (right before Christmas as it turns out) and bring home tons of colorful giveaways.

I am also looking forward to meeting many of our technology partners who will be attending the event, such as JFrog, CA, and Atlassian.

Additional links

Learn more about Quali

Watch our solutions overview video

JenkinsWorld 2017: It’s all about the Community

Posted by Pascal Joly September 8, 2017

Who said trade shows have to be boring? That was certainly not the case at the 2017 Jenkins World conference held in San Francisco last week and organized by CloudBees. Quali's booth was in a groovy mood and so was the crowd around us (not to mention the excellent wine served at happy hour and the '70s band playing on the stage right next to us).

The colorful layout of the booth certainly didn't detract from very interesting conversations with show attendees about how to make DevOps real and solve the real business challenges their companies are facing.

This was the third Jenkins conference we sponsored this summer (after Paris and Tel Aviv), and we saw many familiar faces from other DevOps leaders, such as Atlassian, JFrog, and CA Blazemeter, that have partnered with us to build end-to-end integrations providing comprehensive CI/CD solutions for application release automation.

This really felt like a true community, collaborating effectively to benefit a wide range of software developers, release managers, and DevOps engineers and to empower them with choices that meet their business needs.

To illustrate these integrations, we showed a number of short demos around some of the main use cases that we support (feel free to browse these videos at your own pace):

CI/CD with Jenkins, JFrog, Ansible: Significantly increase speed and quality by allowing a developer and tester to automatically check the status of an application build, retrieve it from a repository, and install it on the target host.

Performance automation with CA Blazemeter: Provide a single pane of glass to the application tester by dynamically configuring and automatically running performance load tests against the load-balanced web application defined in the sandbox.

As you would expect at a tech conference, there was the typical swag, such as our popular T-shirts (although we can't quite claim the fame of the legendary Splunk outfit). In case you did not get your preferred size at the show, we apologize and invite you to sign up for a 14-day free trial of the newly released CloudShell VE.

Additional links

Learn more about Quali

Watch our solutions overview video

Check out Quali’s latest DevOps integrations at JenkinsWorld 2017

Posted by Pascal Joly August 28, 2017

Wrapping up the "Summer of Love" 50th anniversary, Quali will be at the 3 day JenkinsWorld conference this week from 8/29-8/31 in San Francisco. If you are planning to be in the area, make sure to stop by and visit us at booth #606!

Additional links

Learn more about Quali

Watch our solutions overview video

Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, $$, and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation over the years has taken various names and acronyms. Back in the days (the early 2000s - seems like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was to offer infrastructure as a service (IaaS) on top of the existing hypervisor technology (or public cloud) and provide VM deployment in a technology- and cloud-agnostic fashion. The primary intent of these platforms (such as CloudBolt) was initially to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension create a "Platform as a Service" (PaaS) offering.

Then, in the early 2010s, came DevOps, popularized by Gene Kim's The Phoenix Project. For orchestration platforms, it meant putting application release front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center to these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such framework(s) a sustainable solution that can be used for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDx Central covering the plethora of tools available. Deciding what is the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this would shape up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell Platform to define and deploy the environment (also available in a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates to the next layer down the task of deploying the application: environment orchestration sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes. The sketch below illustrates this layering.
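As a rough illustration of that layering, the sketch below separates the three concerns into plain Python functions. The names and calls are purely illustrative and do not correspond to any particular product's API.

```python
# Purely illustrative layering -- function names and calls are not a real product API.

def provision_infrastructure(blueprint):
    """Bottom layer: bring up VMs/containers on the target platform (e.g. OpenStack, Kubernetes)."""
    print(f"Provisioning infrastructure for {blueprint} ...")
    return {"hosts": ["10.0.0.11", "10.0.0.12"]}

def deploy_environment(blueprint, artifacts):
    """Middle layer: environment orchestration -- infrastructure plus application configuration."""
    infra = provision_infrastructure(blueprint)
    print(f"Deploying {artifacts} onto {infra['hosts']} and configuring the application ...")
    return {"endpoint": f"http://{infra['hosts'][0]}"}

def pipeline_stage(stage_name, blueprint, artifacts, run_tests):
    """Top layer: the workflow tool (e.g. a Jenkins pipeline stage) triggers deployment and tests."""
    env = deploy_environment(blueprint, artifacts)
    return run_tests(stage_name, env["endpoint"])

if __name__ == "__main__":
    passed = pipeline_stage(
        "integration", "three-tier-app", ["app-1.4.2.war"],
        run_tests=lambda stage, url: print(f"[{stage}] running tests against {url}") or True,
    )
    print("Stage passed:", passed)
```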

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are 2 fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an open industry standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow access by multiple teams concurrently. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, another "teardown" orchestration kicks in and all the resources in the sandbox are reset to their initial state and cleaned up (deleted).

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, to not depend on professional services help from the vendor. The other aspect is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins - for instance, using the Jenkins pipeline to trigger a Sandbox.

Another important consideration is the openness of the content. At Quali, we've open sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!

Additional links

Learn more about Quali

Watch our solutions overview video

Now Available: Video of CloudShell integration with CA Service Virtualization, Automic, and Blazemeter

Posted by Pascal Joly May 15, 2017

3 in 1! We recently integrated CloudShell with 3 products from CA into one end-to-end demo, showcasing our ability to deploy applications in cloud sandboxes triggered by an Automic workflow and to dynamically configure CA Service Virtualization and Blazemeter endpoints to test the performance of the application.

Using Service Virtualization to simulate backend SaaS transactions

Financial and HR SaaS services, such as Salesforce and Workday, have become de facto standards in the last few years. Many enterprise ERP applications, while still hosted on premises (Oracle, SAP…), are now highly dependent on connectivity to these external back-end services. For any software update, they need to consider the impact on back-end application services that they have no control over. Developer accounts may be available, but they are limited in functionality and may not accurately reflect the actual transaction. One alternative is to use a simulated endpoint, known as service virtualization, that records all the API calls and responds as if it were the real SaaS service.

Modeling a blueprint with CA Service Virtualization and Blazemeter

Earlier this year, we discussed in a webinar the benefits of using Cloud Sandboxes to automate your application infrastructure deployment with DevOps processes. The complete application stack is modeled into a blueprint and published to a self-service catalog. Once ready for deployment, the end user or API provides any required input to the blueprint and deploys it to create a sandbox. This sandbox can then be used to run tests against its components. A service virtualization component is simply a new type of resource that you can model into a blueprint, connected to the application server template, to make this process even faster. The Blazemeter virtual traffic generator, also a SaaS application, is likewise represented in the blueprint and connected to the target resource (the web server load balancer).

As an example, let's consider a web ERP application that uses Salesforce as one of its endpoints. We'll use the CA Service Virtualization product to mimic the registration of new leads into Salesforce. The scenario is to stress test the scalability of this application with a virtualized Salesforce in the back end, simulating a large number of users creating leads through the application. For the stress test, we used the Blazemeter SaaS application to run simultaneous user transactions originating from various endpoints at the desired scale.

Running an End to End workflow with CA Automic

We used the Automic ARA (Application Release Automation) tool to create a continuous integration workflow that automatically validates and releases, end to end, a new application built on dynamic infrastructure, from the QA stage all the way to production. CloudShell components are pulled into the workflow project as action packs and include create sandbox, delete sandbox, and execute test capabilities.

Connecting everything end to end with Sandbox Orchestration

Everything gets connected by using CloudShell setup orchestration to configure the links between application components and service endpoints within the sandbox, based on the blueprint diagram. On the front end, the Blazemeter test is updated with the web IP address of our application's load balancer. On the back end, the application server is updated with the service virtualization IP address. A rough sketch of this wiring follows below.
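Conceptually, the setup hook reads the addresses of the deployed sandbox components and pushes them into the Blazemeter test and the application server configuration. The Python sketch below illustrates the idea; the function names, endpoints, and payloads are placeholders, not the actual CloudShell orchestration or Blazemeter APIs.

```python
import requests

def configure_sandbox_links(sandbox):
    """Illustrative setup hook: wire the test tool and app components using sandbox addresses.

    `sandbox` is assumed to be a dict of component name -> IP address produced by the
    orchestration that deployed the blueprint (a placeholder, not a real API object).
    """
    lb_ip = sandbox["web-load-balancer"]
    sv_ip = sandbox["service-virtualization"]

    # Front end: point the Blazemeter load test at the application's load balancer.
    requests.patch(
        "https://blazemeter.example/api/tests/erp-stress-test",  # placeholder URL
        json={"target_url": f"http://{lb_ip}"},
        timeout=30,
    )

    # Back end: point the application server at the virtualized Salesforce endpoint.
    requests.patch(
        f"http://{sandbox['app-server']}/admin/config",           # placeholder URL
        json={"salesforce_endpoint": f"http://{sv_ip}:8080"},
        timeout=30,
    )
```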

Once the test is complete, the orchestration teardown process cleans up all the elements: resources are reset to their initial state and the application is deleted.

AWS re:Invent 2016 Recap

Posted by admin December 11, 2016

November went out with a bang, culminating in the always exciting AWS re:Invent conference. Amazon continued to drive the power of public cloud forward with a host of new announcements, including Lex, which offers conversational interfaces using deep learning as a service, Lightsail, a super cheap server provisioning service aimed at developers, and a service to move exabytes of data to the cloud in weeks rather than years - using trucks (literally!). It's always a blast to see what Amazon will do next!
The Quali team had a great time showcasing the value of Cloud Sandboxes at AWS. While most vendors focused on security or big data, Quali's unique Hybrid Cloud Sandboxing offering gathered huge crowds. In fact, Network World featured Quali Sandboxes in their AWS re:Invent 2016 Cool Tech article!

The Buzz

Here's what people were so excited to hear about:

Our native integration with AWS EC2 and vCenter for creating powerful hybrid cloud sandboxes, which allow multi-cloud deployments and push-button deployment of sandbox resources from private to public cloud. Check out the hybrid cloud demo!

CloudShell's YAML-based modeling language and visual modeling canvas for creating massively complex, heterogeneous application and infrastructure blueprints. Our flexible and easy-to-use modeling is extremely powerful for AWS CloudFormation users who need more than the AWS tool can offer.

Integration of Quali cloud sandboxes with Delphix data virtualization engine showed how data can be easily and rapidly brought in to sandboxes running in AWS or on-prem, helping businesses "Fill the Data Gap".

Support for sandbox data in AWS CloudWatch, Reports, and Budget tools to give you visibility into how infrastructure and application blueprints are being used, better control over your cloud compute consumption (and who's consuming it!), and more granular and organized tracking of sandbox resources.

Lastly - thanks to the HUNDREDS who filled out our 2016 DevOps Survey. Stay tuned - we'll be publishing the insights gleaned from this survey in early 2017. Good luck to all who entered to win an Apple Watch!