Blog

The old adage “If You Don't Monitor It, You Can't Manage It”
holds just as true today as it ever did. People are pushing compute load to the
cloud, both public and private, without implementing the robust monitoring
solutions they apply to their existing infrastructure.

We hear the
incumbent solution providers say they support the cloud, while telling
themselves and anyone who will listen that no one will truly use the
public cloud, so there is no need to bother. Yet I am seeing both public and
private cloud projects being kicked off by everyone from banks to governments.
Private clouds will certainly come first, but public cloud is everyone’s goal.

Security is usually touted as the main reason large companies are reluctant to
use something like Amazon’s EC2, and that is true under the current mind-set
around security. But rest assured, very soon we will need a new mind-set, one
which recognises that nothing can be deemed safe or trusted. The military
are wrestling with this new paradigm, unsure whether another
country or state has already hacked into and currently owns their data. The days
of thinking a section of a company’s infrastructure, much less the entire
company’s infrastructure, can be trusted are gone.
We need to embrace the ethos of not trusting any data source, not even
your own!

Amazon’s EC2 is, of course, the 800-pound gorilla of the current
market, and with good reason. Amazon created the current incarnation of cloud
computing (remember earlier versions like grid computing?) and does an
exceptional job of pushing out new features and keeping the system up and
running (although no one is perfect; they have had downtime lately).
OpenStack, CloudStack, Eucalyptus and others are desperately trying to gain
market share; some have been more successful than others.

To enable better management of the cloud, we have built a
HyperGlance collector that pulls in data from the Amazon EC2 API and visualises it.

HyperGlance can pull in any structured data and create a
topology, as long as we have relationship data or something from which we can
postulate connections. Amazon EC2 is no different. It has a
well-known, well-designed API that reports a Region and Availability Zone for
each VM instance, so we can build a topology that maps to those attributes. Once we
have that data, we can query the API for any metrics and attributes of the
instances and overlay them onto the model.
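As a rough sketch of that topology step (the record shape mirrors what the EC2 DescribeInstances API returns, but the function and field selection here are illustrative, not HyperGlance's actual collector code):

```python
# Sketch: build a Region -> Availability Zone -> instance topology from
# EC2-style instance records. A live collector would fetch these records
# from the DescribeInstances API; here we use hard-coded samples.

def build_topology(instances):
    """Group instance records by region and availability zone."""
    topology = {}
    for inst in instances:
        az = inst["Placement"]["AvailabilityZone"]  # e.g. "us-east-1a"
        region = az[:-1]                            # strip the zone letter
        topology.setdefault(region, {}).setdefault(az, []).append(
            {"id": inst["InstanceId"], "state": inst["State"]["Name"]}
        )
    return topology

# Example records, shaped like the API response:
sample = [
    {"InstanceId": "i-0001", "State": {"Name": "running"},
     "Placement": {"AvailabilityZone": "us-east-1a"}},
    {"InstanceId": "i-0002", "State": {"Name": "stopped"},
     "Placement": {"AvailabilityZone": "us-east-1b"}},
]
topo = build_topology(sample)
```

Any further metrics the API exposes can then be attached to these nodes in the same pass.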

We can also connect the topology to a physical device such as a firewall, which
gives you a hybrid cloud view. I see people using
Nagios or similar tools to monitor the internal state of an instance,
as Amazon can’t (and shouldn't) see inside it. We can pull in that Nagios data
too and overlay it onto the Amazon topology.
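A minimal sketch of that overlay, assuming the topology built earlier and a hypothetical mapping of Nagios host status keyed by instance ID (a real collector would map Nagios host names to instance IDs via tags or DNS):

```python
# Sketch: overlay per-host Nagios status onto an EC2-style topology.
# The "health" field name and the status dict are illustrative only.

def overlay_nagios(topology, nagios_status):
    """Attach a 'health' field to each instance node; default to 'unknown'."""
    for zones in topology.values():
        for instances in zones.values():
            for node in instances:
                node["health"] = nagios_status.get(node["id"], "unknown")
    return topology

topo = {"us-east-1": {"us-east-1a": [{"id": "i-0001"}, {"id": "i-0002"}]}}
status = {"i-0001": "OK"}  # Nagios only reports hosts it actually monitors
overlay_nagios(topo, status)
```

Defaulting unmonitored nodes to "unknown" keeps the cloud topology and the Nagios view consistent even when they cover different sets of hosts.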

Next up is OpenStack with Quantum. Networking is taking its place in the core of
the stack, as it should. We are working on a collector to pull in OpenStack
data via the API; after that we will add SDN data to the mix. We are working
towards a true end-to-end view of IT, from the applications down to the
hardware.