Think you have a handle on application performance? So did Amazon, which suffered a 49-minute e-commerce outage last month, and Nasdaq, which halted trading for three hours in late August due to a tech failure that led to wild speculation in USA Today about a shady Iranian hacking collective taking down the U.S. stock market. They're still investigating the root causes, but with these organizations' advanced infrastructures, you can bet it's more complicated than someone switching off a router.

Managing applications has never been simple. Between network and server virtualization, cloud applications and infrastructure, big data, and the thorny work of root cause analysis and diagnosis, if you're not careful, you too could lose control.

Some IT pros have written off application performance management systems as not up to the challenge of monitoring today's distributed applications, where resources and data could be, literally, anywhere. APM systems that were at the top of their game a couple of years ago struggle to cope, and our InformationWeek 2013 Application Performance Management Survey reveals IT's frustration. The percentage of respondents using APM tools or systems slipped slightly since our 2010 poll, and when we asked nonusers to select among seven reasons for passing, the percentage saying that too much staff time is required to implement correctly jumped a stunning 19 points, from 32% to 51%. Lack of expertise is the No. 2 answer, cited by 40%.

Our take: The volume and variety of data and architectural complexity have increased greatly since 2010, and some APM vendors have not kept up. That shortcoming has forced CIOs to throw people at the problem.

However, don't write off APM just yet. Big providers including BMC, CA, Hewlett-Packard and IBM have capabilities from acquisitions that many end users haven't tapped. If companies using these APM suites for basics such as threshold, latency and response time monitoring dig a bit deeper, they may just find goodies like graphs that show trends unfolding in real time. It will take some experimentation, and maybe even an engagement with the vendor's consulting team, but you could be pleasantly surprised.
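To make the basics concrete, here is a minimal sketch of the kind of threshold and response-time monitoring described above, plus the rolling statistics that feed a real-time trend graph. The class name, threshold value, and window size are illustrative assumptions, not features of any particular vendor's APM suite.

```python
import statistics
from collections import deque

class LatencyMonitor:
    """Toy response-time monitor: threshold alerting plus rolling-window trending.
    The 500 ms threshold and 100-sample window are arbitrary example values."""

    def __init__(self, threshold_ms=500.0, window=100):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)  # rolling window for trend analysis

    def record(self, latency_ms):
        """Record one response-time sample; return True if it breaches the threshold."""
        self.samples.append(latency_ms)
        return latency_ms > self.threshold_ms

    def trend(self):
        """Mean and p95 over the rolling window -- the raw material for a live trend graph."""
        ordered = sorted(self.samples)
        p95 = ordered[int(0.95 * (len(ordered) - 1))]
        return statistics.mean(self.samples), p95

monitor = LatencyMonitor(threshold_ms=500.0)
for ms in [120, 340, 610, 95, 880, 150]:  # simulated response times in milliseconds
    if monitor.record(ms):
        print(f"ALERT: {ms} ms exceeds {monitor.threshold_ms} ms threshold")
mean_ms, p95_ms = monitor.trend()
```

A real APM agent adds the hard parts this sketch omits: instrumenting distributed calls, correlating samples across tiers, and storing history at scale, which is where the staff-time complaints in the survey come from.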

The third-place inhibitor to APM is cost, cited by 33%, down from 41% in 2010. It's true that many implementations require outside help, sometimes after a big investment of in-house time. We saw this on a recent client project, where the customer had no intention of using professional services but was forced to reallocate funds to meet the APM vendor's requirements. Essentially, if we wanted official support, we had to hire the vendor to validate the environment. Such factors surely contribute to just 10% of APM users saying their systems exceed expectations, down from 18% in 2010. What we're seeing is both APM users and vendors struggling to make sense of the large and complex operational data that today's virtualized and cloud-connected systems generate.


Most IT teams have their conventional databases covered in terms of security and business continuity. But as we enter the era of big data, Hadoop, and NoSQL, protection schemes need to evolve. In fact, big data could drive the next big security strategy shift.

Why should big data be more difficult to secure? In a word, variety. But the business won't wait: it expects to use that data to predict customer behavior, find correlations across disparate data sources, flag fraud and financial risk, and more.