Monthly Archives: June 2013

(Excerpt from original post on the Taneja Group News Blog)

VMware announced vSphere Big Data Extensions this week, which might at first seem to be just a productization of some open source Hadoop deployment software. But dig in a bit and you can see that the big future of Hadoop might just be virtual hosting, a big shift from its intentionally commodity-server roots. And this puts VMware on top of data center workload trends towards scale-out computing apps and offering everything "as a service".

(Excerpt from original post on the Taneja Group News Blog)

How many eyeballs does it take to really see into and across your whole network, internal and external? Apparently a "thousand", according to Thousand Eyes, which is coming out of stealth mode today with a very promising new approach to network performance management. We recently got to talk with Mohit Lad, CEO of Thousand Eyes, who gave us a fascinating preview. Thousand Eyes already has a very strong list of logos signed up, and we think they will simply explode across the market as one of the few strong solutions that really helps unify management across clouds.

One of the issues with network-side performance management these days is that much of the network that enterprise IT relies on is external. Over the years there have been a few attempts at quantifying and monitoring external network issues (who hasn't scripted some traceroute commands to check on a questionable service?). You might have many network tools: maybe you run Netscout in-house but rely on some APM tools to monitor a SaaS provider. Still, you are likely missing all the network traversal in between, and it's almost impossible to analyze for actual contention and hotspots in a deep topology (there is an echo here of what NetApp Balance did for layers of server/storage virtualization).
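That kind of quick-and-dirty traceroute scripting might look something like the sketch below: parse per-hop round-trip times out of raw traceroute output and flag any hop whose average latency jumps past a threshold. The sample output, hostnames, and addresses here are illustrative, not from a real trace, and real traceroute output varies by platform and can include timeouts (`*`) this minimal regex doesn't handle.

```python
import re

# Illustrative traceroute-style output (not a real trace); a real script
# would capture this from `subprocess.run(["traceroute", host], ...)`.
SAMPLE = """\
 1  gw.example.net (10.0.0.1)  1.102 ms  0.984 ms  1.047 ms
 2  isp-edge.example.com (203.0.113.5)  8.512 ms  8.770 ms  8.431 ms
 3  core1.example.com (198.51.100.9)  142.318 ms  140.905 ms  141.226 ms
"""

# hop number, hostname, IP in parens, then one or more "<ms> ms" probes
HOP_RE = re.compile(r"^\s*(\d+)\s+(\S+)\s+\(([\d.]+)\)((?:\s+[\d.]+ ms)+)")

def slow_hops(trace_text, threshold_ms=100.0):
    """Return (hop_number, host, avg_ms) for hops averaging above threshold."""
    flagged = []
    for line in trace_text.splitlines():
        m = HOP_RE.match(line)
        if not m:
            continue  # skip lines that aren't hop records
        times = [float(t) for t in re.findall(r"([\d.]+) ms", m.group(4))]
        avg = sum(times) / len(times)
        if avg > threshold_ms:
            flagged.append((int(m.group(1)), m.group(2), round(avg, 1)))
    return flagged

print(slow_hops(SAMPLE))  # flags hop 3, the slow core router
```

Of course, this only sees the path from wherever the script happens to run, which is exactly the visibility gap described above.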

Companies like Gomez/Compuware, BMC, Keynote, and others have created vast endpoint armies (the good botnets, I suppose) to test public-facing web apps from many, many places at once. But if there is a problem, then what? There is a growing challenge in finding and isolating devious performance issues hidden in complex networks hosting all kinds of internal and external apps and services that span private and public clouds, CDNs, and even DDoS protection vendors. Network topologies are just darned complex, and often your visibility is extremely limited.

This is where Thousand Eyes comes in. They do offer HTTP page-component loading analysis similar to Compuware's and Keynote's, but the exciting part is how they link that application performance to network performance constraints on end-to-end network topology mappings, regardless of where those networks traverse. Imagine being able to tell your service provider that they are dropping packets at IP xxx in their network, and that it's affecting your apps x, y, and z in locations a and b!

It's a bit hard to describe in text (a picture here would really be worth a thousand words), but when there is a network issue, Thousand Eyes' uniquely focused topology visualization (supported by "deep path analysis") nails down the bottleneck fast, regardless of whether the problem is in-house, at your ISP, service provider, CDN, etc. And since it's SaaS-based, you can easily "share" your dynamic view of the problem with the support teams at those providers for quick collaborative resolution (and, I expect, providing motivation for those folks to also subscribe to Thousand Eyes).

This may be one of those things you need to see to appreciate, but I was quickly impressed by the thoughtful visual analysis of issues that would otherwise be pretty opaque and hairy to figure out. If you manage application performance across wide areas and multiple providers, or are responsible for network performance on complex topologies, you'll want to check these guys out. Network pros will probably get more out of it immediately than others, but even a server or storage guy can tell that a red circle on a service provider's IP address is something to call them about.
