I had the unique experience of working for a private equity-backed software company for several years. A few things set that experience apart from any other I've had in my 20+ years working in the high-tech sector, and they have had a lasting impact on my career ever since.

Private Equity Pump and Dump?

First of all, the news that Compuware has been taken private by Thoma Bravo is interesting but not surprising. In my opinion, PE firms don't buy companies in the midst of a growth story; they buy companies that are floundering and try to inflate the value for another suitor…that is their history, it's what they do. Often, these companies are carrying some amount of debt which the PE firm will cover, and to gain a return on their investment, they will do whatever it takes to even out the books ahead of a liquidity event.

In my experience, once a PE company takes control, the innovation stops. There's no reason to risk any time, resources, or money on R&D, which at the end of the day means customers won't be getting cutting-edge technology or new features. Furthermore, there is often an exodus of top talent. Again, this strains development and, more importantly, customer success.

What About the Customers?

If you are a prospective or existing customer of Compuware, you should carefully consider your position after this news. My experience working at a major PE portfolio company suggests that new customers will be forgotten as a result of this acquisition, and existing customers…well, good luck there too. Innovation and development will take an enormous hit in the next 6-12 months as the tendrils of the new management philosophy take hold. More often than not, that philosophy is centered on EBITDA.

What is an EBITDA-centric philosophy? It's simple: company employees are incentivized to improve EBITDA instead of other, customer-centric measures of success. EBITDA is focused solely on corporate profitability!

I watched my company, which had been agile, nimble, and aggressive with updates, severely lose its development velocity. That's not conducive to innovation, and it's not even tolerable when it comes to bug fixes.

During my tenure after the buyout, I watched as several large strategic customers decided to move on to other, more innovative technologies. The lack of innovation in the market segment and the collapse of customer care were the defining factors in their decisions to leave. The bottom line is that if you stop innovating and forget about your customers in the high-tech world, you are dead.

Based upon my experience, I've personally vowed never to work for another PE-backed company again. I wish all the current Compuware customers the very best in their journey with their newly privatized vendor, but unfortunately I know there is pain on the horizon.

Have you ever worked for a company taken private by a PE company? Is my experience unique or do you have a similar story to share? I’d love to hear about all experiences in the comments below to try and keep this dialogue as accurate as possible.

The views and opinions expressed herein are those of the author and may not reflect the views of AppDynamics, Inc. or its affiliates.

We get a lot of questions about our analytics-driven Application Performance Management (APM) collection and analysis technology. Specifically, people want to know how we capture so much detailed information while maintaining such low overhead levels. The short answer is that our agents are intelligent and know when to capture every gory detail (the full call stack) and when to collect only the basics for each transaction. Using an analytics-driven approach, AppDynamics is able to provide the highest level of detail to solve performance issues during peak application traffic times.

AppDynamics, An Efficient Doctor

AppDynamics’ APM solution monitors, baselines and reports on the performance of every single transaction flowing through your application. However, unlike other APM solutions that got their start in development environments, ours was built for production, which requires a more agile approach to capturing transaction details.

I’d like to share with you a story which illustrates AppDynamics’ analytics-based methodology and compares it with many of our competitors’ “capture as much detail as possible whether there are problems or not” (aka, our agents are too old to have intelligence built in) approach.

You visit Dr. AppDynamics for your regular health checkups. She takes your vital signs, records weight, measures reflexes and compares every metric taken against known good baselines. When your statistics are close to the baselines the doctor sends you home and sees the next patient without delay. When your health vitals deviate too far from the pre-established baselines the smart doctor orders more relevant tests to diagnose your problem. This methodology minimizes the burden on the available resources and efficiently and effectively diagnoses any issues you have.

In contrast, you visit Dr. Legacy for your regular health checkups. She takes your vital signs, records weight, measures reflexes and immediately orders a battery of diagnostic tests even though you are perfectly healthy. She does this for every single patient she sees. The medical system is now overburdened with extra work that was not required in the first place. This burden slows down the entire system, so in order to keep things moving, Dr. Legacy decides to reduce the number of diagnostic tests being run on every single patient (even the ones with actual problems). Now the patients who have legitimate problems go undiagnosed in the waiting room at the very time they need the most attention. In addition, due to the large amount of diagnostic testing and data being generated, the cost of care is driven up needlessly and excessively.

Does Dr. Legacy’s methodology make any sense to you when better methods exist?

AppDynamics’ intelligent approach to collecting data and triggering diagnostics makes it easier to spot outliers and, because deep diagnostic data is provided for only the transactions that require this level of detail, there is less impact on system resources and very little monitoring overhead.

Monitoring 100% of Your Business Transactions All the Time

AppDynamics monitors every single business transaction (BT) that flows through your applications. There is no exception to this rule. We automatically learn and develop a dynamic baseline for end-to-end response time as well as the response time of every component along the transaction flow, and also for all critical business metrics within your application.

We score each transaction by comparing the actual response time to the self-learned baseline. When we determine that a BT has deviated too far from normal behavior (using a tunable algorithm), our agent knows to automatically collect full call stack details for your troubleshooting pleasure. This analytics-based methodology allows AppDynamics to detect and alert on problems right from the start so they can be fixed before they cause a major impact.
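As a rough illustration of how this kind of self-learned baselining can work, here is a sketch in Python. The actual AppDynamics algorithm is tunable and proprietary, so the class name, the Welford-style running statistics, the warm-up period, and the three-sigma threshold below are all my own assumptions for illustration, not the real implementation:

```python
class BaselineScorer:
    """Sketch of per-transaction baselining: keep a running mean and
    variance of response times (Welford's algorithm) and flag a
    transaction for deep capture when it deviates too far from normal.
    Names and thresholds are illustrative assumptions only."""

    def __init__(self, threshold_sigmas=3.0, warmup=5):
        self.threshold = threshold_sigmas  # the tunable "too far from normal" knob
        self.warmup = warmup               # samples needed before scoring starts
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                      # running sum of squared deviations

    def observe(self, response_ms):
        """Return True when full call-stack capture should be triggered."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = abs(response_ms - self.mean) > self.threshold * std
        # fold the new sample into the baseline (Welford update)
        self.n += 1
        delta = response_ms - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (response_ms - self.mean)
        return anomalous

scorer = BaselineScorer()
flags = [scorer.observe(t) for t in [100, 102, 98, 101, 99, 103, 100, 97, 450]]
print(flags[-1])   # only the 450 ms outlier trips deep collection
```

Normal jitter around the 100 ms baseline never triggers a capture; the single 450 ms outlier does, which is the whole point of only collecting call stacks when a transaction misbehaves.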

Of course, there are times when deep data capture of every transaction is advantageous—such as during development—and the AppDynamics APM solution has another intelligent feature to address this need. We’ve built a simple, one-click button to enable full data recording system-wide. Developer mode is ideal for pre-production environments when engineers are profiling and load testing the application. Developer mode will capture a transaction snapshot for every single request. In production this would be overkill and wasteful. It’s even smart enough to know when you’re done using it and will automatically shut off when it is unintentionally left on, so your system won’t get bogged down if transaction volume increases.

Who Looks at Production Call Stacks When There are No Problems?

One of the worst qualities about legacy APM solutions is the fact that they collect as much data as they can, all the time. Usually this originates from the APM tool starting as a profiling tool for developers that has been molded to work in production. While this methodology is fine for development environments (we support this with dev-mode as described above), it fails miserably in any high volume scenario like load testing and production. Why does it fail? I’m glad you asked 😉

Any halfway decent APM tool has built-in overhead limiters to keep itself from causing harm by introducing too much overhead into a running application. When you collect as much deep-dive data as possible, with no intelligent way of focusing your collection, you induce the maximum allowed overhead basically all the time (assuming reasonable load). The problem is that high application load is exactly when your problems are most likely to surface, and it is also when legacy APM overhead skyrockets (due to massive amounts of code execution and deep collection being “always on”), so the overhead limiters kick in and reduce the amount of data being collected, or kill off data collection altogether. In plain English, this means legacy APM tools can’t tell good transactions from bad and will provide you with the least amount of data at the time you need the most. Isn’t it funny how marketing and sales teams try to turn this methodology into the best thing ever?
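A toy simulation makes this failure mode concrete. Assume each agent is allowed a fixed budget of deep-dive snapshots per interval; the function, rates, and numbers below are invented for illustration and not taken from any real product:

```python
def captured_snapshots(load, anomaly_rate, budget, selective):
    """Toy model of an overhead limiter: an agent may take at most
    `budget` deep-dive snapshots per interval. An always-on agent
    spends them on every transaction; a selective agent spends them
    only on anomalous ones. Returns how many *anomalous* transactions
    actually received a deep-dive snapshot. Illustrative numbers only."""
    anomalies = int(load * anomaly_rate)
    if selective:
        # budget spent only where it is needed
        return min(anomalies, budget)
    # always-on: snapshots are taken indiscriminately across all traffic,
    # so the limiter throttles collection to a shrinking fraction
    fraction_kept = min(1.0, budget / load)
    return int(anomalies * fraction_kept)

budget = 1_000                                # snapshots the limiter allows per interval
for load in (1_000, 10_000, 100_000):         # transactions per interval
    legacy = captured_snapshots(load, 0.01, budget, selective=False)
    smart = captured_snapshots(load, 0.01, budget, selective=True)
    print(f"load={load:>7}: legacy captured {legacy}, selective captured {smart}")
```

In this toy model the always-on agent captures the same tiny handful of problem transactions no matter how high load climbs, because the limiter throttles everything equally, while the selective agent's capture count grows with the number of actual problems.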

I have personally used many different APM tools in production and I never needed to look at a full call stack when there was no problem. I was too busy getting my job accomplished to poke around in mostly meaningless data just for the fun of it.

Distributed Intelligence for Massive Scalability

All of the intelligent data collection mentioned above requires a very small amount of extra processing to determine when to go deep and what to save. This is a place where the implementation details really make a difference.

At AppDynamics, we put the smarts where they are best suited to be – at the agent level. It’s a simple paradigm shift that distributes the workload across your install base (where it’s not even noticed) rather than concentrating it at a single point. This important architectural design means that as the load on the application goes up, the load on the management server remains low.

Contrast this with legacy APM solutions: restricting whatever intelligence you have to the central monitoring server(s) drives up resource requirements, and therefore results in a monitoring infrastructure that needs more servers and greater levels of care and feeding.

Collecting, transmitting, storing, and analyzing large amounts of unneeded data comes with a high total cost of ownership (TCO). It takes a lot of people, servers, and storage to properly manage those legacy APM tools in an enterprise environment. Most APM vendors even want to sell you their expensive full-time consulting services just to manage their complex solutions. Intelligent APM tools ease your burden instead of increasing it like the legacy APM tools do.

All software tools go through transition periods where improvements are made and generational gaps are recognized. What was once cutting edge becomes hopelessly outdated unless you invest heavily in modernization. Hopefully this detailed look at APM methodologies helps you cut through the giant pile of sales and marketing propaganda that developers and IT ops folks are constantly exposed to. It’s important to understand what software vendors really do, but even more important to understand how they do it, as that will have a major impact on real-life usage.

In this blog post I’m going to share with you my personal experience dealing with the aftermath of a software vendor being bought by a private equity firm. All names are being withheld to protect the identity of the guilty parties.

2 Years of Good Value

The story begins several years ago when I was working as a Monitoring Architect for a major investment bank. We had purchased a two-year, two-million-dollar, “all you can eat” site license from a major software vendor. This entitled us to deploy as many licenses of their product as we wanted within a two-year period. At the end of those two years our licenses would turn perpetual; we would true up the total number and pay 15% maintenance on the value of those licenses.

Towards the end of this two-year period the major software vendor spun off their monitoring business by selling it to a private equity firm. This is when things started to go very wrong. We were near the end of our contract, so we were beginning the negotiation process on a new one. We were happy with the technology, support, and relationship with our major vendor and intended to continue using their software.

You want us to pay how much?

Under new leadership, we were told by our account manager that we owed four million dollars in licensing fees since we had deployed too many licenses under our “all you can eat” contract. The contract had no wording to support this, and an argument ensued. After a very long negotiation period we eventually agreed to pay about four hundred thousand dollars in maintenance fees, and we immediately started looking for a replacement vendor.

Get Out and Don’t Come Back

At this point the relationship was completely broken. There was no possible way this vendor could stop what they had set in motion. It was my job to find suitable replacements for their products being used by our company. Within two years all of their software was ripped out and replaced by competing products. Every time I think about this situation I am amazed by the greed and the gall of this company, all under the direction of a private equity firm.

The guilty party is still in business today somehow, but looking at their product portfolio I don’t see much progress in the past 5 years. It seems the private equity company is just riding out the technology they bought and trying to squeeze as much profit as possible from it until it eventually dies off. It’s sad to see good technology left to grow antiquated and rot.

So that is my cautionary story. Be very wary when there is talk of a buyout by private equity. Be proactive and seek options in case things go down the way they did for me at the investment bank. Have you ever been in a similar situation? Do you have a happy or sad tale to tell? Let me know in the comments section.

McLaren this year will launch their P1 supercar, which will turn the average driver into a track-day hero. What’s significant about this particular car is that it relies on modern-day technology and innovation to transform a driver’s ability to accelerate, corner and stop faster than any other car on the planet, because it has:

903bhp on tap derived from a combined V8 Twin Turbo and KERS setup, meaning it has a better power/weight ratio than a Bugatti Veyron

Active aerodynamics & DRS to control the airflow so it remains stable under acceleration and braking without incurring drag

Traction control and brake steer to minimize slip and increase traction in and out of corners

600kg of downforce at 150mph so it can corner on rails at up to 2G

Lightness–everything exists for a purpose so there is less weight to transfer under braking and acceleration

You don’t have to be Lewis Hamilton or Michael Schumacher to drive it fast. The P1 creates enormous amounts of mechanical grip, traction, acceleration and feedback so the driver feels “confident” in their ability to accelerate, corner and stop, without losing control and killing themselves. I’ve been lucky enough to sit in the driver’s seat of a McLaren MP4-12C and it’s a special experience – you have a steering wheel, some dials and some pedals – that’s really it, with none of the bells and whistles you normally get in a Mercedes or Porsche. It’s “Focused” and “Pure” so the driver has complete visibility to drive as fast as possible, which is ultimately the whole purpose of the car.

How does this relate to Application Performance Monitoring (APM)?

Well, how many APM solutions today allow a novice user to solve complex application performance problems? Erm, not many. You need to be an uber geek with most because they’ve been written for developers by developers. Death by drill-down is a common symptom because novice APM users have no idea how to interpret metrics or what to look for. It would be like McLaren putting their F1 wheel with a thousand buttons in the new P1 road car for us novice drivers to play with.

It’s actually a lot worse than that though, because many APM vendors sell these things called “suites” that are enormously complex to install, configure and use. Imagine if you paid $1.4m and McLaren delivered you a P1 in 5 pieces and you had to assemble the engine, gearbox, chassis, suspension and brakes yourself? You’d have no choice but to pay McLaren for engineers to assemble it for you in your own configuration. This is pretty much how most vendors have sold APM over the past decade–hence why they have hundreds of consultants. The majority of customers have spent more time and effort maintaining APM than using it to solve performance issues in their business. It’s kinda like buying a supercar and not driving it.

Fortunately, a few vendors like AppDynamics have succeeded in delivering APM through a single product that combines End User Monitoring, Application Discovery and Mapping, Transaction Profiling, Deep Diagnostics and Analytics. You download it, install it and you solve your performance issues in minutes–it just works out-of-the-box. What’s even better is that you can lease the APM solution through annual subscriptions instead of buying it outright with expensive perpetual licenses and annual maintenance.

If you want an APM solution that lets you manage application performance, then make sure it does just that for you. If you don’t get value from an APM solution in the first 20 minutes, then put it in the trash can because that’s 20 minutes of your time you’ve wasted not managing application performance. Sign up for a free trial of AppDynamics and find out how easy APM can be. If these vendors built their solutions like car manufacturers build supercars, then the world would be a faster place (no pun intended).

In this week’s episode, Donald Trump enlists Team ROI and Team Overhead to solve a Severity 1 incident on the “Trump Towers Website”. Team Overhead used “Dynoscope” and took 3 weeks to solve the incident, while Team ROI took 15 minutes by using AppDynamics.

At AppDynamics we’re laser focused on Application Performance Management. Our growth over the past three years has been fuelled by customers worldwide who selected AppDynamics as their preferred APM solution over legacy vendors like BMC Software and Compuware. This is one of the reasons why AppDynamics was positioned as a Leader in its debut year for the Gartner APM Magic Quadrant.

According to Reuters on March 21st, buyout firms are teaming up to take BMC private via auction. In addition, Reuters also reported today that Sandell Asset Management is urging Compuware management to sell the company: “We believe that the only viable path to maximize stockholder value, rather than destroy it, is to execute a sale of the company to the highest bidder as promptly as possible.”

On the day that AppDynamics announced their APMaaS (yep, APM as a Service) offering for MSPs (Managed Service Providers), the Compuware marketing team took to the Twitterverse to let the world know how ridiculous it is to describe your offering in such plain terms. Here are a few of the exceptionally creative tweets attacking the complete lack of marketing lies and immorality displayed by the goody-two-shoes at AppDynamics…

Ben Grubin was so pleased with his initial tweet that he decided to tweet almost the same thing twice more…

It’s been almost two years since I joined AppDynamics and it’s been one of the best career moves I’ve ever made. I used to work at a competitor, and quickly realized I was working for the wrong company. Sometimes you just have to trust your gut feeling when it comes to technology–you’ve either got a product that’s special or you don’t, and I know what it’s like to experience both feelings.

At AppDynamics the technology is definitely special, but I also joined a group of like-minded people who shared the same passion as I did for application monitoring. The no-compromise approach to figuring out new ways of doing things that couldn’t be done previously, along with a laser-focus on solving real world problems for customers, is pretty inspiring. Things are never perfect at any company but the passion to make our customers successful, and the will to win business professionally, is unique at AppDynamics. We really believe that enterprise software doesn’t have to suck, it should never be shelfware, and it should be affordable by everyone–which is one of the reasons why we created a free product AppDynamics Lite that now has over 100,000 users and our commercial product AppDynamics Pro is reasonably priced.

In just two years we’ve disrupted an application monitoring market that was previously dominated by expensive complex solutions that quite frankly sucked. This disruption was one of the reasons why Gartner recognized AppDynamics as a Leader in their 2012 APM Magic Quadrant, and we’ve only been selling our product for two years! This speaks volumes for what we’ve achieved in such a short period of time. What’s also great is that our customers are very vocal about their success; our case study page is packed with customer success stories, with several customers willing to publish actual ROI results from their AppDynamics deployments. How many real customer ROI stories have you read recently from any vendor? My guess is not many.

One online community that provides an accurate inside look at companies is Glassdoor.com. It basically lets employees rate different aspects of the company they work for, from compensation all the way through to culture and leadership. If you search for all the APM companies on Glassdoor.com that are currently recognized in Gartner’s APM Magic Quadrant, here is what the top 10 looks like:

*Glassdoor ratings correct as of 1/10/2013

I’m pretty proud to work for a company where employees are very satisfied and give their CEO 100% approval. That says a lot about the success and leadership of the company–happy employees also make for a happy place to work, and trust me, this is pretty important when you spend most of your life at work!

One company that didn’t score well was Compuware. Only 38% of employees would recommend a friend and only 68% approve of their CEO. Not particularly encouraging when you need your employees to innovate, run through walls, and beat the competition. A hedge fund recently put an offer on the table to take Compuware private–let’s hope those guys can get the employees jazzed.

If you’re looking for the next challenge, cool technology and a great place to work, you should consider joining AppDynamics. We’ve got 21 positions currently open and we need great people to help scale the great company we’re building!

With customers like Netflix, Orbitz, Fox News, Vodafone and Yahoo you’ll experience the ins and outs of monitoring some of the largest applications in the world.

Last week I flew into Las Vegas for #Interop fully suited and booted in my big blue costume (no joke). I’d been invited to speak in a vendor debate on User eXperience (UX): Monitor the Application or the Network? NetScout represented the Network, AppDynamics (and me) represented the Application, and “Compuware dynaTrace Gomez” sat on the fence representing both. Moderating was Jim Frey from EMA, who did a great job introducing the subject, asking the questions and keeping the debate flowing.

At the start each vendor gave their usual intro and company pitch, followed by their own definition of what User Experience is.

Defining User Experience

So at this point you’d probably expect me to blabber on about how application code and agents are critical for monitoring the UX? Wrong. For me, users experience “Business Transactions”–they don’t experience applications, infrastructure, or networks. When a user complains, they normally say something like “I can’t Login” or “My checkout timed out.” I can honestly say I’ve never heard them say – “The CPU utilization on your machine is too high” or “I don’t think you have enough memory allocated.”

Now think about that from a monitoring perspective. Do most organizations today monitor business transactions? Or do they monitor application infrastructure and networks? The truth is the latter, normally with several toolsets. So the question “Monitor the Application or the Network?” is really the wrong question for me. Unless you monitor business transactions, you are never going to understand what your end users actually experience.

Monitoring Business Transactions

So how do you monitor business transactions? The reality is that both application and network monitoring tools are capable of it, but most solutions have been designed not to, instead providing a more technical view for application developers and network engineers. This is wrong, very wrong, and a primary reason why IT never sees what the end user sees or complains about. Today, SOA means applications are more complex and distributed, so a single business transaction could traverse multiple applications that potentially share services and infrastructure. If your monitoring solution doesn’t have business transaction context, you’re basically blind to how application infrastructure is impacting your UX.

The debate then switched to how monitoring the UX differs from an application and network perspective. Simply put, application monitoring relies on agents, while network monitoring relies on sniffing network traffic passively. My point here was that you can either monitor user experience with the network or you can manage it with the application. For example, with network monitoring you only see business transactions and the application infrastructure, because you’re monitoring at the network layer. In contrast, with application monitoring you see business transactions, application infrastructure, and the application logic (hence why it’s called application monitoring).

Monitor or Manage the UX?

Both application and network monitoring can identify and isolate UX degradation, because they see how a business transaction executes across the application infrastructure. However, you can only manage UX if you understand what’s causing the degradation. To do this you need deep visibility into the application run-time and logic (code). Operations telling a Development team that their JVM is responsible for a user experience issue is a bit like FedEx telling a customer their package is lost somewhere in Alaska. Identifying and isolating pain is useful, but one could argue it’s pointless without being able to manage and resolve the pain (by finding the root cause).

NetScout made the point that with network monitoring you can identify common bottlenecks in the network that are responsible for degrading the UX. I have no doubt you could, but if you look at the most common reason for UX issues, it’s related to change–and if you look at what changes the most, it’s application logic. Why? Because Development and Operations teams want to be agile, so their applications and business remain competitive in the marketplace. Agile release cycles mean application logic (code) constantly changes. It’s therefore not unusual for an application to change several times a week, and that’s before you count hotfixes and patches. So if applications change more than the network, one could argue application monitoring is more effective for monitoring and managing the end user experience.

UX and Web Applications

We then debated which monitoring concept was better for web-based applications. Obviously, network monitoring is able to monitor the UX by sniffing HTTP packets passively, so it’s possible to get granular visibility into QoS in the network and application. However, the recent adoption of Web 2.0 technologies (Ajax, GWT, Dojo) means application logic is now moving from the application server to the user’s browser. This means browser processing time becomes a critical part of the UX. Unfortunately, network monitoring solutions can’t monitor browser processing latency (because they monitor the network), unlike application monitoring solutions, which can use techniques like client-side instrumentation or web-page injection to obtain browser latency for the UX.
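A back-of-the-envelope decomposition shows why this matters. The segment names and timings below are made-up illustrative numbers, not measurements from any real application or tool:

```python
# Toy decomposition of end-user response time. A passive network sniffer
# observes only what crosses the wire; browser processing happens after
# the last byte arrives, so it is invisible at the network layer.
segments_ms = {
    "server_processing": 180,
    "network_transfer": 70,
    "browser_processing": 650,   # heavy Ajax/DOM work in a Web 2.0 app
}

# what a network monitoring appliance can observe
network_view = segments_ms["server_processing"] + segments_ms["network_transfer"]
# what the end user actually waits for
user_view = sum(segments_ms.values())

print(f"network monitor sees: {network_view} ms")   # looks fine
print(f"user experiences:     {user_view} ms")      # feels slow
```

In this hypothetical, the network view reports 250 ms and everything looks healthy, while the user actually waits 900 ms; the missing 650 ms of browser processing is exactly the blind spot client-side instrumentation is meant to close.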

The C Word

We then got to the Cloud and which made more sense for monitoring UX. Well, network monitoring solutions are normally hardware appliances which plug directly into a network tap or SPAN port. I’ve never asked, but I’d imagine the guys in Seattle (Amazon) and Redmond (Windows Azure) probably wouldn’t let you wheel a network monitoring appliance into their data-centre. More importantly, why would you need to if you’re already paying someone else to manage your infrastructure and network for you? Moving to the Cloud is about agility, and letting someone else deal with the hardware and pipes so you can focus on making your application and business competitive. It’s actually very easy for application monitoring solutions to monitor UX in the cloud. Agents can piggyback with application code libraries when they’re deployed to the cloud, or cloud providers can embed and provision vendor agents as part of their server builds and provisioning process.

What’s also interesting is that the Cloud is highlighting a trend towards DevOps (or NoOps for a few organizations), where Operations becomes more focused on applications vs. infrastructure. As the network and infrastructure become abstracted in the public cloud, the focus naturally shifts to the application and the deployment of code. For private clouds you’ll still have network Ops and Engineering teams that build and support the Cloud platform, but they won’t be the people who care about user experience. Those people will be the Line of Business or application owners whom the UX impacts.

In reality most organizations today already monitor the application infrastructure and network. However, if you want to start monitoring the true UX, you should monitor what your users experience, and that is business transactions. If you can’t see your users’ business transactions, you can’t manage their experience.

The most enjoyable part of my job at AppDynamics is to witness and evangelize customer success. What’s slightly strange is that for this to happen, an application has to slow down or crash.

It’s a bittersweet feeling when End Users, Operations, Developers and many businesses suffer application performance pain. Outages cost the business money, but sometimes they cost people their jobs–which is truly unfortunate. However, when people solve performance issues, they become overnight heroes with a great sense of achievement, pride, and obviously relief.

To explain the complexity of managing application performance, imagine your application is 100 haystacks that represent tiers, and somewhere a needle is hurting your end user experience. It’s your job to find the needle as quickly as possible! The problem is, each haystack has over half a million pieces of hay, and they each represent lines of code in your application. It’s therefore no surprise that organizations can take days or weeks to find the root cause of performance issues in large, complex, distributed production environments.

End User Experience Monitoring, Application Mapping and Transaction Profiling will help you identify unhappy users, slow business transactions, and problematic haystacks (tiers) in your application, but they won’t find needles. To do this, you’ll need x-ray visibility inside haystacks to see which pieces of hay (lines of code) are holding the needle (root cause) that is hurting your end users. This x-ray visibility is known as “Deep Diagnostics” in application monitoring terms, and it represents the difference between isolating performance issues and resolving them.

Why Deep Diagnostics for Production Monitoring Matters

A key reason why AppDynamics has become very successful in just a few years is because our Deep Diagnostics, behavioral learning, and analytics technology is 18 months ahead of the nearest vendor. A bold claim? Perhaps, but it’s backed up by bold customer case studies such as Edmunds.com and Karavel, who compared us against some of the top vendors in the application performance management (APM) market in 2011. Yes, End User Monitoring, Application Mapping and Transaction Profiling are important–but these capabilities will only help you isolate performance pain, not resolve it.

AppDynamics has the ability to instantly show the complete code execution and timing of slow user requests or business transactions for any Java or .NET application, in production, with incredibly small overhead and no configuration. We basically give customers a metal detector and X-Ray vision to help them find needles in haystacks. Locating the exact line of code responsible for a performance issue means Operations and Developers solve business pain faster, and this is a key reason why AppDynamics technology is disrupting the market.

Below is a small collection of needles that customers found using AppDynamics in production. The simple fact is that complete code visibility allows customers to troubleshoot in minutes as opposed to days and weeks. Monitoring with blind spots and configuring instrumentation are a thing of the past with AppDynamics.

Needle #13 – Excessive Cache Usage

If you want to manage and troubleshoot application performance in production, you should seriously consider AppDynamics. We’re the fastest growing on-premise and SaaS based APM vendor in the market right now. You can download our free product AppDynamics Lite or take a free 30-day trial of AppDynamics Pro – our commercial product.