Legacy storage architectures perform inefficiently in virtual computing environments. The highly random, write-intensive I/O patterns generated by virtual hosts drive storage costs up as enterprises either add spindles or turn to newer storage technologies like solid state disk (SSD) to address the IOPS shortfall.

SSD costs are coming down, but they are still significantly higher than spinning disk costs. When enterprises do consider SSD, how it is used and where it is placed in the virtual infrastructure can make a big difference in how much they have to spend to meet their performance requirements. Placement can also impose operational limitations that may or may not be issues in specific environments.

Some of the key considerations that need to be taken into account are SSD placement (in the host or in the SAN), high availability/failover requirements, caching vs logging architectures, and the value of preserving existing investments vs ripping and replacing with storage hardware designed specifically for virtual environments.

SSD Placement

There are two basic locations to place SSD, each of which offers its own pros and cons. Host-based SSD will generally offer the lowest storage latencies, particularly if the SSD is located on PCIe cards. In non-clustered environments where it is clear that IOPS and storage latencies are the key performance problems, these types of devices can be very valuable. In most cases, they will remove storage as the performance problem.

But don't necessarily expect these devices to deliver their rated IOPS directly to your applications in your environment. Once storage is removed as the bottleneck, system performance is determined by whatever the next bottleneck is: CPU, memory, the operating system, or any number of other potential constraints. This effect is described by Amdahl's Law.
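
To make this concrete, here is a back-of-the-envelope calculation; the 60% storage-wait fraction and the 20x device speedup below are hypothetical numbers for illustration, not measurements:

```python
def amdahl_speedup(fraction_improved: float, component_speedup: float) -> float:
    """Overall system speedup when only one component gets faster (Amdahl's Law)."""
    return 1.0 / ((1.0 - fraction_improved) + fraction_improved / component_speedup)

# Hypothetical workload: the application spends 60% of its time waiting on
# storage, and SSD makes storage 20x faster. The application as a whole only
# speeds up ~2.3x, because the untouched 40% (CPU, memory, OS) now dominates.
print(amdahl_speedup(0.60, 20.0))  # ~2.33
```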

What you probably care about are application IOPS. Test the devices you're considering in your environment before purchase, so you know exactly what level of performance gain they will provide. Then you can make a more informed decision about whether you can cost-justify them for your workloads. Paying for performance you can't use is like buying a Ferrari for use on America's interstate system - you may never get out of second gear.

Raw SSD technology generally provides blazingly fast read performance. Write performance, however, varies depending on whether you are writing randomly or sequentially. The raw technical specs on many SSD devices indicate that sequential write performance may be half that of read performance, and random write performance may be half that again. Write latencies may also not be deterministic because of how SSD devices manage the space they are writing to. Many SSD vendors are combining software and other infrastructure with their SSD devices to address some of these issues. If you're looking at SSD, look at the software it's packaged with to make sure the SSD capacity you're buying can be used most efficiently.
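
As a rough illustration of those spec ratios (the 100,000 IOPS rating below is a hypothetical device figure, not any particular product's):

```python
rated_read_iops = 100_000                  # hypothetical rated read IOPS
seq_write_iops = rated_read_iops // 2      # sequential writes: ~half of reads
rand_write_iops = seq_write_iops // 2      # random writes: half that again

print(seq_write_iops, rand_write_iops)     # 50000 25000
```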

Host-based SSD introduces failover limitations. If you have implemented a product like VMware HA in your environment to automatically recover failed nodes, any data sitting in a host-based SSD device that has not been written through to shared storage will not be available on recovery. This can lead to data loss on recovery - something that may or may not be an issue in your environment. Even though SSD is non-volatile storage, if the node it sits in is down, you can't get to it until that node is recovered; the issue is whether you can fail over automatically and still have access to the data.

Because of this issue, most host-based SSD products implement what is called a "write-through" cache: they don't acknowledge writes at SSD latencies, but instead write them through to shared disk and send the write acknowledgment back from there. Anything on shared disk can be recovered by any other node in the cluster, ensuring that no committed data is unavailable on failover. But this means you won't get any write performance improvement from SSD, just better read performance.
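
A minimal sketch of the two acknowledgment paths (the ssd and shared_disk objects and their read/write methods are invented for illustration; real caching products are far more sophisticated):

```python
class WriteThroughCache:
    """Ack only after the write lands on shared disk: failover-safe,
    but writes complete at shared-disk latency, not SSD latency."""
    def __init__(self, ssd, shared_disk):
        self.ssd, self.shared_disk = ssd, shared_disk

    def write(self, block, data):
        self.ssd.write(block, data)          # warm the cache for later reads
        self.shared_disk.write(block, data)  # must complete before we ack
        return "ack"                         # ack at shared-disk latency

class WriteBackCache:
    """Ack as soon as the write hits SSD, de-stage later: fast acks,
    but un-destaged data is stranded if the host goes down."""
    def __init__(self, ssd, shared_disk):
        self.ssd, self.shared_disk = ssd, shared_disk
        self.dirty = set()                   # blocks not yet on shared disk

    def write(self, block, data):
        self.ssd.write(block, data)
        self.dirty.add(block)
        return "ack"                         # ack at SSD latency

    def destage(self):
        for block in list(self.dirty):       # background flush to shared disk
            self.shared_disk.write(block, self.ssd.read(block))
            self.dirty.discard(block)
```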

What does your workload look like in terms of read vs write percentages? Most virtual environments are very write-intensive, much more so than they ever were in physical environments, and virtual desktop infrastructure (VDI) environments can be as much as 90% writes in steady state. If write performance is your problem, host-based SSD with a write-through cache may not help very much in the big picture.

SAN-based SSD, on the other hand, can support failover without data loss, and if implemented with a write-back cache can provide write performance speedups as well. But many implementations available for use with SAN arrays are really only designed to speed up reads. Check carefully as you consider SSD to understand how it is implemented, and how well that maps to the actual performance requirements in your environment.

Caching vs Logging Architectures

Most SSD, wherever it is implemented, is used as a cache. Sizing guidelines for caches start with the cache as a percentage of the back-end storage it fronts. Generally the cache needs to be somewhere between 3% and 6% of the back-end storage, so larger data store capacities require larger caches. For example, 20TB of back-end data might require 1TB of SSD cache (5%).
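
That rule of thumb translates directly into a sizing calculation; the 3%-6% band is the guideline quoted above, and the helper function is my own naming:

```python
def ssd_cache_gb(backend_tb: float, cache_fraction: float = 0.05) -> float:
    """SSD cache capacity (GB) for a given back-end data store, using the
    3%-6% rule of thumb; 5% is the midpoint used in the example above."""
    return backend_tb * 1024 * cache_fraction

print(ssd_cache_gb(20))        # 1024.0 GB (~1 TB) for 20 TB at 5%
print(ssd_cache_gb(20, 0.03))  # 614.4 GB at the low end
print(ssd_cache_gb(20, 0.06))  # 1228.8 GB at the high end
```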

Caches generally speed up only reads, but if you are working with a write-back cache, the cache capacity will have to be split between SSD used to speed up reads and SSD used to speed up writes. Everything else being equal in terms of performance requirements, write-back caches have to be larger than write-through caches, but they provide more balanced performance gains (across both reads and writes).

Logging architectures, by definition, speed up writes, making them a good fit for write-intensive workloads like those found in virtual computing environments. Logs provide write performance gains by taking a very random workload and essentially removing its randomness: writes land sequentially in a log, are acknowledged from there, and are then asynchronously de-staged to a shared storage pool. This means the same SSD device will be faster when used as a log than as a cache, assuming some randomness in the workload. The write performance the guest VMs see is the performance of the log device operating in sequential write mode almost all the time, which can yield write performance improvements of up to 10x (relative to the same device operating in the random mode it would otherwise be in). And a log provides write performance improvements for all writes from all VMs all the time. (What's also interesting is that if you are getting 10x the IOPS from your current spinning disk, given Amdahl's Law, you may not even need to purchase SSD to remove storage as the performance bottleneck.)
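
A minimal sketch of the logging idea (log_device and shared_pool and their methods are invented for illustration; actual implementations are far more involved):

```python
from collections import deque

class WriteLog:
    """Random guest writes are appended sequentially to a small log device,
    acknowledged immediately, then de-staged to the shared pool in the
    background."""
    def __init__(self, log_device, shared_pool):
        self.log_device = log_device
        self.shared_pool = shared_pool
        self.pending = deque()   # (block, log_offset) entries awaiting de-stage
        self.offset = 0          # next sequential position in the log

    def write(self, block, data):
        self.log_device.append(self.offset, data)  # sequential append: fast path
        self.pending.append((block, self.offset))
        self.offset += len(data)
        return "ack"                               # ack at sequential-write speed

    def destage(self):
        # Background task: drain the log into the random-access shared pool.
        while self.pending:
            block, off = self.pending.popleft()
            self.shared_pool.write(block, self.log_device.read(off))
```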

Logs are very small (10GB or so) and are dedicated to a host, while the shared storage pool is accessible to all nodes in a cluster and primarily handles read requests. In a 20-node cluster with 20TB of shared data, you would need 200GB for the logs (10GB x 20 hosts) vs the 1TB you would need if SSD were used as a cache. Logs are much more efficient than caches for write performance improvements, resulting in lower costs.
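
Checking those numbers (20 hosts, 10GB logs, 20TB back end, 5% cache, all from the figures above):

```python
hosts, log_gb_per_host = 20, 10
backend_tb = 20

log_total_gb = hosts * log_gb_per_host   # 200 GB of SSD for per-host logs
cache_gb = backend_tb * 1024 * 0.05      # ~1024 GB of SSD for a 5% cache

print(log_total_gb, cache_gb)            # 200 vs 1024.0: roughly 80% less SSD
```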

If logs are located on SAN-based SSD, you not only get the write performance improvements; the design also fully supports node failover without data loss - a very nice differentiator from write-through cache implementations.

But what about read performance? This is where caches excel, and a write log doesn't seem to address it. That's true, which is why it's important to combine a logging architecture with storage tiering. Any SSD capacity not used by the logs can be configured into a fast tier 0, which provides read performance improvements for any data residing in that tier. The bottom line is that you can get better overall storage performance from a "log + tiering" design than from a cache design while using 50% - 90% less high-performance device (in this case, SSD) capacity. In the example above, if you buy a 256GB SAN-based SSD device and use it in a 20-node cluster, you'll get SSD sequential write performance for every write all the time, and have 56GB left over to put into a tier 0. Compare that to buying 1TB+ of cache capacity at SSD prices.
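
And the leftover-capacity arithmetic from that example:

```python
ssd_device_gb = 256
log_total_gb = 20 * 10                   # 20 hosts x 10 GB logs
tier0_gb = ssd_device_gb - log_total_gb

print(tier0_gb)                          # 56 GB left over for a fast tier 0
```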

With single image management technology like linked clones or other similar implementations, you can lock your VM templates into this tier, and very efficiently gain read performance improvements against the shared blocks in those templates for all child VMs all the time. Single image management technology can help make the use of SSD capacity more efficient in either a cache or a log architecture, so don't overlook it as long as it is implemented in a way that does not impinge upon your storage performance.

Purpose-Built Storage Hardware

There are some interesting new array designs that leverage SSD, sometimes in combination with some of the other technologies mentioned above (log architectures, storage tiering, single image management, spinning disk). Designed specifically with the storage performance issues in virtual environments in mind, there is no doubt that these arrays can outperform legacy arrays. But for most enterprises, that may not be the operative question.

It's rare that an enterprise doesn't already have a sizable investment in storage. Many of these existing arrays support SSD, which can be deployed in a SAN-based cache or fast tier. It's much easier, and potentially much less disruptive and expensive, if existing storage investments can be leveraged to address the storage performance issues in virtual environments. It's also less risky, since most of the hot new "virtual computing-aware" arrays and appliances are built by startups, not proven vendors. Pure software-based options that support heterogeneous storage hardware, and that let you take advantage of SSD capacity that fits into your current arrays, could be a simpler, more cost-effective, and less risky alternative to buying from a storage startup. But only, of course, if they adequately resolve your performance problem.

The Take-Away

If there's one point you should take away from this article, it's that blindly throwing SSD at a storage performance problem in a virtual computing environment is not an efficient or cost-effective way to address your particular issues. Consider how much more performance you need; whether you need it on reads, writes, or both; whether you need to fail over without data loss; and whether preserving existing storage hardware investments is important to you. SSD is a great technology, but you will get the best value from it when you deploy it most efficiently.

Eric Burgener is vice president product management at Virsto Software. He has worked on emerging technologies for almost his entire career, with early stints at pioneering companies such as Tandem, Pyramid, Sun, Veritas, ConvergeNet, Mendocino, and Topio, among others, on fault tolerance and high availability, replication, backup, continuous data protection, and server virtualization technologies.

Over the last 25 years Eric has worked across a variety of functional areas, including sales, product management, marketing, business development, and technical support, and also spent time as an Executive in Residence with Mayfield and a storage industry analyst at Taneja Group. Before joining Virsto, he was VP of Marketing at InMage.
