The Grand Challenge: Simplifying IT to Unleash Innovation

I don't listen to music before I buy it anymore. I don't need to - as long as I'm purchasing it from Amazon. Based on algorithms that crawl my purchasing history, the online retailer knows what I like as well as I do and, dare I say, better than my wife.

A company develops this intimate knowledge by using Big Data to improve the customer experience in an economical and adaptable way. Amazon has integrated this agility into every aspect of its business, from running its warehouses to its revolutionary ability to publish on demand.

Game-changing innovation happens when you simplify everyday processes. But managing a large, traditional application portfolio - however crucial it is to your business - disrupts innovation.

The cloud virtualizes your software, insulating you from the inefficiencies of traditional IT. As a result, you spend less time and money introducing and maintaining applications, and more resources innovating and advancing your business. Whether you implement a software-defined data center, a hybrid cloud approach or an end-user computing strategy, you're shifting the burden from your own team into the virtual world.

Conventional application integration is exhausting. Each application has to be written, architected and incorporated into the existing structure. Then it must be tested, and tested again. Then it needs to be installed, monitored and archived, and the archives managed and restored to make sure they work. The whole thing gets complicated fast, so you need specialists who understand the management processes.

Multiply these processes by the thousands - on the high end, healthcare companies typically use 10,000 applications - and it's no wonder IT has a reputation for slowing down business.

More than 80 percent of traditional IT costs are driven by the people and processes involved in managing it. The cloud flips the equation: 80 to 85 percent of costs instead go toward infrastructure, because the cloud automates the management. You're untethering applications from the computers on which they run.

It's comparable to the impact standardized shipping containers had on modern trade. Every port could work with the same containers, streamlining the process as freighters received and delivered goods across the globe.

The cloud - the virtualized data center - does the same thing with applications. It puts collections of apps in standardized containers, complete with the machines and storage necessary to run them, and automates the management so you're not worrying about things breaking or delivering an unexpected result. Instead of automating thousands of apps individually, you handle entire containers all at once. You can seamlessly move the containers from a damaged machine to a working one. And business goes on.
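The shipping-container analogy maps directly onto container technology such as Docker. Here is a minimal sketch of that workflow, using standard Docker CLI commands; the image name and application are hypothetical:

```shell
# Build a standardized, portable image that packages the application
# together with its dependencies
docker build -t shop/orders:1.0 .

# Run it on any host with a container runtime -- no per-machine
# installation, configuration or integration work
docker run -d --name orders shop/orders:1.0

# If that host fails, start the identical image on a healthy host;
# the application and everything it needs move as one unit
docker run -d --name orders shop/orders:1.0
```

Because the image bundles the application with its dependencies, it behaves identically wherever it runs, which is what turns moving workloads between machines from an emergency into a routine operation.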


The ramifications are fundamentally transformative. Innovation speeds up when businesses function unencumbered by the disruptions of traditional IT. Acquiring new technology is no longer a barrier because you're not devoting time to the cycle of writing, architecting, integrating, testing, installing, monitoring, archiving and restoring. You're just creating.

Today you can have a good idea, register as a corporation, get applications, manufacture products and launch an enterprise without leaving your home. The cloud provides global access to the IT necessary to build a business. It's faster, cheaper and lower risk - the three ingredients that propel innovation.

Consider the rise of Netflix. Blockbuster dominated the movie rental business for two decades, making it simple for customers to watch films in their home almost any time they wanted - as long as they were willing to trek to the store.

Netflix took the DVD rental business online and slashed the bottom out of Blockbuster's model. Movies came directly to customers' homes with a reasonably priced monthly subscription and no threat of late fees. As a result, Blockbuster announced last fall it would be shuttering all but 50 franchised stores in the U.S.

But Netflix didn't rest on its laurels. It recognized the threat of iTunes' pioneering streaming business. Netflix bifurcated its model, maintaining its mail-order DVD option while venturing into streaming. It used data gleaned from customers' viewing habits to create original content that aligns with not just what they watch but how they watch. Data revealed viewers liked streaming multiple episodes of TV shows in marathon sessions, prompting Netflix to release in one day entire seasons of dramas such as House of Cards.

This is what the cloud enables. Companies can use technology as a weapon of mass disruption. If you don't take advantage of it, you're at risk of not being able to grow and stay competitive. Rather than let technology happen to you, use it to become the agent of change.

How do you do that?

Software-defined data centers let customers quickly, safely and inexpensively deliver the right applications. A hybrid cloud strategy offers a choice between running applications in a software-defined data center, within the public cloud or another proprietary cloud. And an end-user computing strategy creates a safe, secure and compliant environment for application consumption.

