Category Archives: cloud

Upon perusing the Intel Cloud Builders site for interesting new cloudy vendors and reference architectures, I came across an interesting new company called Oxygen Cloud. Although Storage as a Service is a reasonably well-formed concept, much of the attention has been around public provider services such as Livedrive and Dropbox, or backup products such as EMC Mozy. This is all well and good, but a number of companies have concerns over how these “public cloud” type products align to corporate policy. Take Dropbox for example: the ease with which data is shared or migrated across to other devices may not align to how they want to control one of an organisation’s most valuable commodities.. data.

So how does an organisation offer device-agnostic storage, free from the constraints of conventional file systems, in such a fashion that it maintains control? Ultimately there are 101 ways to skin a cat… but as far as skinning cats goes, I quite like this one.
The Back End

You take a product like EMC Atmos. EMC Atmos is what we call cloud-optimised storage. In real terms this means that the way data is stored, how available it is, how it’s tiered across storage of different costs and where it is stored geographically are all handled by repeatable policy. Not only this, but metadata is leveraged to the nth degree (well beyond traditional metadata uses in a traditional file system). I won’t re-invent the explanation, as EMC has done a good job of explaining this concept with pretty pictures (video below).

Atmos itself has a fair amount to it, but my point is that this use of metadata means that not only can the way data is handled be derived from it, but the infrastructure can now have some awareness of the context of data, context which is relevant to a front end such as Oxygen Cloud. Yes, Atmos can deliver storage with NFS or CIFS; this is fine, but not overly exciting. The cool part is giving a front end direct access to the context of a file or a set of files using REST, rather than just last modified date and all the usual stuff. The metatags can be used to define the segregation of data in a multi-tenant environment, or application-specific elements such as how a file can be shared and with whom.
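To make that a bit more concrete, here’s a minimal sketch of how a front end might attach its own context to an object over REST. The header names follow the Atmos REST convention for user metadata (regular vs “listable” tags); treat the exact names, values and auth details here as illustrative assumptions rather than a working client.

```python
# Illustrative sketch: building the HTTP headers for an Atmos-style REST
# request which attaches application metadata to an object. The tag names
# and values below are made up for illustration.

def build_metadata_headers(meta, listable_meta=None):
    """Return HTTP headers carrying user metadata for an object write."""
    headers = {
        # Regular name/value metadata, comma separated
        "x-emc-meta": ", ".join(f"{k}={v}" for k, v in meta.items()),
    }
    if listable_meta:
        # "Listable" tags can later be queried to enumerate matching objects
        headers["x-emc-listable-meta"] = ", ".join(
            f"{k}={v}" for k, v in listable_meta.items()
        )
    return headers

# Example: tag a file with its tenant and sharing policy, so a front end
# such as Oxygen Cloud can drive behaviour from the storage layer itself.
headers = build_metadata_headers(
    {"tenant": "acme", "share-policy": "internal-only"},
    listable_meta={"project": "q3-launch"},
)
```

The point being: the sharing policy and tenancy travel with the object, rather than living solely in the front-end application’s database.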

Also, with Atmos being scale-out storage, the upper limits of scalability are, for all practical purposes, endless (or as near as makes no difference). The beauty of the storage being content-addressable rather than based around hierarchical file systems means that as the system grows, you are not constrained and challenged by overly complex file system structures which need to be maintained.

Clearly availability is important, but hey.. this is expected. Needless to say, the system handles it very well.

The Front End

I’m not going to spend a great deal of time upping my word count on this section, as Oxygen Cloud have some very descriptive videos (further down), but the key thing here is that the company controls the data in its own way. We have LDAP/AD integration and full access controls; we can set expiration on a link if we do share a file publicly; there is encryption at all points of a file’s transit; and files can be presented via a normal Explorer/Finder plugin (the same way we view normal CIFS shares) or accessed via devices such as an iPhone/iPad. One nice feature for me is that if a phone is stolen or an employee leaves, the organisation can sever access to data/directories on a per-user or per-device basis.

Anyway, worth spending a bit of time watching the videos below:

I shall be building this solution out in the lab over the next month or so (as much as the day job allows), so watch this space for more info and a revised review.

VMware have a history of innovation and of creating disruptive technology. Disruption may sound like a bad thing, although as we know with things like the VMware hypervisor, disruption makes people money. It may be disruptive, but if the benefits are clear then people standardise on the technology, and IT resellers, vendors and professionals benefit from the plethora of technology requirements which spill out the sides to accommodate these new marvels of modern tech.

VMware first set the trend when they abstracted the OS’s dependency on directly seeing physical hardware by introducing a hypervisor; now they have taken away the application’s dependency on seeing the operating system.. lovely jubbly! This sounds good, but why? How? What?

I’m a little light on the nuts and bolts right now, but needless to say, if you can deliver a Windows/Linux/Mac application to any device with a browser supporting HTML5, the benefit is clear! Visio on my iPad.. yes please. Safari on my Windows PC.. why not?!

I shall await the finer details with bated breath, but leave you with a pretty cool demo as shown below.. geeky soul food! Enjoy!!

Back in 2009, VMware, Cisco and EMC joined forces to create a new approach to selling pre-configured, full-datacentre solution stacks. Rather than simply a gentlemen’s agreement and a cross-pollination of development from the three companies, it was decided they would create a new start-up business as the delivery mechanism to drive this new concept to market. This new start-up, known as VCE (Virtual Computing Environment), would take to market a new range of pre-validated, pre-configured and singularly supported solution stacks called VBlock.

The purpose of a VBlock is to simplify infrastructure down to, effectively, units of IT, and to define that a workload can be supported by “a number of floor tiles” in the data centre. This approach is enabled by the fact that everything within a VBlock is pre-validated from an interoperability perspective, and customisable components are reduced down to packs of blades (compute), disks and the network components required to connect into the upstream customer environment. This means that solution design is massively simplified and can be focused on supporting the identified workload.

Pre-Validated

VCE extensively soak test the workloads and configurations available within the VBlock, reducing pre-sales time spent researching interoperability between the network/compute/storage layers of the data centre. This means that defining how a workload is supported becomes the focus, and planning phases are significantly reduced. It also means that power and cooling requirements are easily determined in preparation for site deployment.

Pre-Built and Pre-Configured

As part of the VBlock proposition, the physical and logical build processes are carried out in VCE facilities, so that time on customer site is restricted to that of integrating into the customer environment and application-layer services. This reduces deployment time massively.

Single Support Presence

Rather than dealing with the parent companies of VCE (VMware, Cisco, EMC) on a per-vendor basis, customers have VCE act as a single support presence which will own any VBlock-related issue end to end. This is partly enabled by the pre-validated aspect of VBlock: VCE have a number of VBlocks in house, so provided a VBlock is constructed as per the approved architectures, VCE can simulate the environment which caused the error to decrease time to resolution.

The Technology

The technology at the core of the VBlock consists of VMware vSphere, Cisco UCS (Cisco’s unified compute solution), Cisco Nexus (Cisco’s unified fabric offering) and EMC’s VNX unified storage platform. Cisco simplify management of their blade computing platform down to a single point of management (UCS Manager), which resides on the 6100 Fabric Interconnects and allows for “stateless” computing: it is possible to abstract the server “personality” (MAC addresses, worldwide names, firmware, etc.) away from the server hardware, then create and apply these personalities on demand to any blade within the UCS system. This management system handles all aspects of the UCS system (blade/chassis management, connectivity and firmware). Cisco’s unified fabric commonly refers to their Nexus range (though elements of unified fabric apply to UCS too). Cisco Nexus allows both IP network traffic and Fibre Channel traffic to be delivered over common 10 Gigabit switches using FCoE (Fibre Channel over Ethernet). In addition, the Cisco Nexus 1000V enables deployment of a virtual switch within the VMware environment, allowing network services to be deployed within virtual infrastructure where previously this was only possible in the physical world.
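To illustrate the stateless computing idea, here’s a toy model of it in Python: the server personality lives in a profile object, not on the blade, so it can be applied to or moved between any blade in the system. The class and field names here are my own illustration, not Cisco’s actual object model, and the identity values are made up.

```python
# Toy model of UCS-style "stateless computing": the personality (MACs,
# worldwide names, firmware policy) is held in a service profile which can
# be associated with any blade on demand.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceProfile:
    name: str
    mac_address: str
    wwn: str            # worldwide name, for SAN access
    firmware: str

@dataclass
class Blade:
    slot: int
    profile: Optional[ServiceProfile] = None  # blade holds no identity itself

def associate(blade, profile):
    """Apply a personality to a blade."""
    blade.profile = profile

def migrate(src, dst):
    """Move the personality to another blade, e.g. after a hardware failure."""
    dst.profile, src.profile = src.profile, None

web01 = ServiceProfile("web01", "00:25:b5:00:00:01",
                       "20:00:00:25:b5:00:00:01", "2.2(8g)")
blade1, blade2 = Blade(1), Blade(2)
associate(blade1, web01)
migrate(blade1, blade2)   # blade2 now presents itself as "web01"
```

The practical upshot, as the paragraph above describes, is that replacing or re-purposing hardware becomes a matter of re-applying a profile rather than reconfiguring a physical server.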

EMC VNX is a multi-protocol storage array allowing for storage connectivity via block storage technologies (iSCSI/Fibre Channel) or NAS connectivity (CIFS/NFS/pNFS), giving the end user free choice as to how storage is provided to the UCS server estate. EMC also drive efficiencies in how capacity and performance are handled by leveraging technologies such as deduplication and thin provisioning to achieve a lower cost per gigabyte. EMC are also able to leverage solid state disk technologies to extend storage cache, or to enable sub-LUN tiering of data between solid state disk and traditional mechanical disk technologies based on data access patterns.

VMware vSphere has provided many companies with cost savings in the past, but in the VBlock it is leveraged to maximum effect to provide operational efficiencies, with features such as dynamic, automated mobility of virtual machines between physical servers based on load, high availability, and the native integration between VMware and EMC through the VAAI APIs. This integration enables much lower SAN fabric utilisation for what were very intensive storage network operations, such as storage migration. EMC PowerPath/VE is also included in the VBlock, enabling true intelligent load balancing of storage traffic across the SAN fabric.

Management

VCE utilise Ionix Unified Infrastructure Manager (UIM) as a management overlay which integrates with the storage, compute, network and virtualisation technologies within the VBlock, and allows high-level automation of, and operational simplicity in, how resources are provisioned. UIM will discover resources within the VBlock, and the administrator then classifies those resources. As an example, high-performance blades may be deemed “Gold” blades versus lower-specification blades, which may be classified as “Silver” blades. This classification is also applied to other resources within the VBlock, such as storage. Once resources have been classified, they can be applied on a per-tenancy/application/department basis, with each being allowed access to differing levels of Gold/Silver/Bronze resources. UIM now also includes operational aspects which give end-to-end visibility of exactly which hardware within a VBlock a particular VM is utilising (blades, disks, etc.). Native vendor management tools can still be used, although with the exception of vCenter, UIM would be the point of management for 90% of VBlock tasks after initial deployment.
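The classify-then-provision flow described above can be sketched in a few lines of Python. To be clear, this is my own illustration of the concept, not UIM’s actual logic; the grading thresholds, tier names and tenant names are all made up.

```python
# Illustrative sketch of UIM-style resource classification: discovered
# blades are graded into tiers, then handed out to tenants by tier.

def classify_blade(blade):
    """Grade a blade on specification; thresholds are invented."""
    if blade["cores"] >= 16 and blade["ram_gb"] >= 192:
        return "gold"
    if blade["cores"] >= 8:
        return "silver"
    return "bronze"

def provision(pool, tenant, tier, count):
    """Allocate up to `count` free blades of the requested tier to a tenant."""
    matches = [b for b in pool
               if classify_blade(b) == tier and b.get("tenant") is None]
    granted = matches[:count]
    for blade in granted:
        blade["tenant"] = tenant
    return granted

# A "discovered" pool of blades
pool = [
    {"id": 1, "cores": 16, "ram_gb": 256},
    {"id": 2, "cores": 8,  "ram_gb": 96},
    {"id": 3, "cores": 16, "ram_gb": 256},
]
gold = provision(pool, "finance-dept", "gold", 1)
```

The useful property is the indirection: the tenant asks for “a Gold blade”, not “blade 1”, which is what lets the tooling automate placement.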

In Summary

The VCE approach to IT infrastructure with VBlock simplifies procurement and IT infrastructure planning, as VCE are able to reduce their infrastructure offerings to, essentially, units of IT which are sized to support a defined workload within a number of “floor tiles” in the data centre. These predetermined units of IT have deterministic power and cooling requirements, and scale in such a way that all VBlock instances (be they few or many) can be managed from a single point of management and supported under a single instance of support. By leveraging technologies which drive efficiencies across virtualisation, networking, storage and compute, we see benefits such as higher performance in smaller physical footprints for storage and compute, minimised cable management and complexity with 10GbE-enabling technologies such as Fibre Channel over Ethernet, and operational simplicity with the native VBlock unified infrastructure management tool, UIM.

So, in the last three weeks I’ve spent time in Cork with a number of tier 1 VMware, Cisco and EMC partners, as well as subject matter experts from the three vendors themselves; I’ve also just come back from Arizona after a course covering Cisco’s UCS B-Series offering and the Nexus piece.

The infrastructure offerings from VMware, Cisco and EMC are all very impressive; there are integration points between the three vendors which go beyond just marketing FUD. Cisco have their Nexus 1000V, which extends the network access layer into the virtual server environment rather than stopping at the hypervisor OS itself. EMC offer direct integration and management capability of their systems from VMware’s management suite by making optimal use of the various vStorage APIs. EMC/VMware’s Ionix portfolio both integrates with management of the three vendors’ offerings and gives application discovery capability visible from vCenter, along with granular trending and reporting capabilities; it even covers change control for those lucky folks who must be ITIL compliant.

So that’s the whole package.. job done.. nay!! In my humble opinion, the businesses that really excel are those organisations that can offer all of this, but can also wear a development and integration hat, dealing with the presentation layer: how all of this is managed, provisioned and tweaked to meet business needs, not just IT infrastructure needs. IT is moving more and more towards a self-service model where, within the constraints of what a business or provider allows, a user/customer/business can spin up instances of applications/servers/resource/storage on the fly and the underlying infrastructure simply goes and does.

From a service provider perspective this might be a virtual machine or computing resource that’s spun up; from an internal business perspective it may be a complete virtual environment spun up for dev or demonstration purposes. It may simply be using something like XML to extend the management capabilities of the native vendor tools (much like BMC BladeLogic have with Cisco UCS), or simply making the management tools more personal and relevant to an organisation.
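That XML angle is worth a quick illustration. UCS Manager exposes an XML API, which is exactly what makes this kind of third-party extension possible: a tool builds an XML document, POSTs it to the manager, and gets XML back. Below is a minimal sketch of constructing a login request; the `aaaLogin` element follows Cisco’s published XML API, but treat this as an illustrative fragment (with a made-up credential) rather than a working client.

```python
# Building a UCS XML API login request. A real tool would POST this to the
# UCS Manager endpoint, keep the returned session cookie, and then issue
# further config/query documents the same way.
import xml.etree.ElementTree as ET

def build_login_request(username, password):
    """Serialise an aaaLogin request document as a string."""
    login = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(login, encoding="unicode")

request_body = build_login_request("admin", "s3cret")
```

Once you can speak the XML API, wrapping vendor operations in your own business-specific workflow (which is essentially what BladeLogic does) is just plumbing.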

Kaavo is one company which is working on management of public and private cloud deployments

The video below is a very good example of someone who has taken the open XML framework and tuned an IT deployment specifically to an organisation’s business needs.

So, in summary: selling tin and selling licences will make you money, BUT!! Consultancy, development and services demonstrate more value, show a deeper fundamental understanding of how business needs map to IT requirements, and are more margin-rich.

Firstly.. how much does it cost? Here’s the good news.. it’s free!! You use it in conjunction with standard disk licences or CDSO licences. Typically people are using this option for auxiliary (secondary or tertiary) copies of data. If you have existing backup-to-disk licences, you simply need to upgrade to Service Pack 4 of Simpana 8 and download the March 2010 update pack; the connector is in there. The only difference in setting up the maglib for cloud storage is that you have to input the username and password provided by your cloud storage provider. Currently this only supports Amazon S3 and Microsoft Azure, although Commvault have a big sales event coming up soon, so we’ll see if they announce any other supported providers there.

Remember that if you are interested in backing up or archiving to the cloud, you will still need standard disk or CDSO licences, and if you want to do your primary backups to the cloud, you will still need a disk staging area locally if you want to use dedupe (Advanced Disk/CDSO), as deduplication will need to occur locally. Also, if you want to run auxiliary copies of deduped backup jobs, you will need to implement silo storage to facilitate this.
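The reason deduplication has to happen locally is easier to see with a toy example: chunks are hashed on site, and only chunks you haven’t seen before consume space (and, for a cloud copy, bandwidth). This is a deliberately simplified fixed-size-chunk sketch, not Commvault’s actual implementation, with a tiny chunk size for readability.

```python
# Toy local deduplication: split data into chunks, keep each unique chunk
# once, and record a "recipe" of hashes from which the data can be rebuilt.
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems use e.g. 128 KB blocks

def dedupe(data, store):
    """Store unique chunks of `data` in `store`; return the rebuild recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only new chunks consume space
        recipe.append(digest)
    return recipe

store = {}
recipe = dedupe(b"ABCDABCDABCDXYZ!", store)
# Four chunks are referenced, but only two unique chunks would ever need
# shipping offsite - hence the local staging area before any cloud copy.
```

Hashing and comparing chunks is cheap locally but hopeless if every chunk has to round-trip to a cloud provider first, which is why the staging disk is still required.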

Seeing as lots of people are asking lots of questions around EMC, VMware and Cisco’s VBlock, I thought I’d best dig something out. Attached is a very concise, granular document which outlines the different elements of a VBlock, how the disks are configured, supported VBlock applications and… some pretty pictures for your delectation.

I wanted to do a post simply based on some of the technologies which have facilitated this vision of… the cloud and to look at some of those things in isolation with a view to understanding the bigger picture.

IT is at an interesting crossroads at the moment; there is a whisper in the wind accumulating clarity and momentum by the day. A whisper which tells us that the way people think about IT is changing. The concept of the cloud is not a new one, but its shape and purpose have differed quite dramatically depending on who you talk to. For the moment at least, and for the various vendor channels, it’s been very much business as usual. People are still buying tin, assessing the viability of virtualisation, putting out to tender for the traditional server/SAN type solutions, and vendors will continue to cater for those traditional needs. However, vendors have also been doing something else… better defining this cloud thing, and how they can commoditise it, slap a price tag on it, stick it in a box and sell it.

Let’s look at some of the technologies which have been developed to facilitate this transition.

Virtualisation, on the whole, gives us the ability to better utilise tin and deploy new virtual servers with speed and ease. VMware’s vMotion, Distributed Power Management and Distributed Resource Scheduling give us the ability to move virtual servers between physical servers without disruption, for any number of reasons (DPM allows us to reduce our power requirements by consolidating virtual servers onto fewer physical machines and powering down those left unused, as and when the business deems it suitable; DRS distributes virtual servers dynamically between physical servers based on the resource requirements of each virtual server). This mobility allows the business to be flexible and adaptive. The advent of virtualisation also allows us, in effect, to commoditise resource, be it memory, CPU or storage, and distribute it in the most effective manner possible.

Storage has become something intelligent. Virtualisation and automation technologies in the storage world have given storage platforms the ability to adapt. Things like thin provisioning and online archive give us the ability to make better use of storage. Also, players like Compellent, and EMC with their FAST technology, give storage the edge by digging down into the bare blocks of storage and moving individual blocks of data between fast/expensive and cheaper/high-density storage based on how often those blocks of data are being accessed and their IOPS requirements. Deduplication is another technology that remains transparent to the user while storing data efficiently.
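The block-level tiering idea is simple enough to sketch. Below is a toy illustration in the spirit of FAST-style sub-LUN tiering: track per-block access counts, then periodically let the hottest blocks win the limited SSD slots while everything else sits on cheaper mechanical disk. The class, thresholds and block numbers are all invented for illustration; real arrays make this decision on far richer statistics.

```python
# Toy sub-LUN tiering: hot blocks earn SSD placement on each policy pass.
from collections import Counter

class TieredLun:
    def __init__(self, ssd_slots):
        self.ssd_slots = ssd_slots  # limited, expensive capacity
        self.access = Counter()     # per-block access counts
        self.tier = {}              # block -> "ssd" or "sata"

    def read(self, block):
        self.access[block] += 1

    def rebalance(self):
        """Periodic policy pass: the hottest blocks win the SSD slots."""
        hottest = {b for b, _ in self.access.most_common(self.ssd_slots)}
        for block in self.access:
            self.tier[block] = "ssd" if block in hottest else "sata"

lun = TieredLun(ssd_slots=2)
for block in [7, 7, 7, 42, 42, 42, 42, 1, 9]:
    lun.read(block)
lun.rebalance()   # blocks 42 and 7 earn SSD; blocks 1 and 9 stay on SATA
```

The key point, as above, is that the granularity is the block, not the LUN, so a mostly cold volume with a small hot working set still gets its hot blocks onto flash.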

Mobility. VMware again, with virtual desktops being delivered on demand to wherever the user needs them, maintaining access to all their bits and pieces. IP telephony and VPN give the external user the ability to access all the resources of the internal user and be as mobile as they need to be. With networking capabilities becoming ever more efficient and robust, and with things like 10GbE and FCoE coming to the market, the datacentre is able to consolidate its network infrastructure and provide resources to the user in ever more efficient and increasingly intelligent ways.

Here are a couple of videos showing some deployments of IP telephony, virtual desktops and the like which I found interesting:

Here’s a very cool way in which Subway have deployed IP telephony in their setup

and a video showing VMware virtual desktop offering

Management. We’re seeing integration between the network, the server side and the storage in a big way. You can now manage EMC storage from within VMware; VMware have pulled Ionix into their portfolio, meaning they can manage physical and virtual infrastructure. Again, Ionix have released the Unified Infrastructure Manager, which can manage Cisco Nexus networking tin, VMware and EMC storage. This means that not only can you have all these separate and different technologies working as one, but you can manage them as one.

EMC/VMware/Cisco have their offering with the VBlock; NetApp are hot on the tails of EMC; and Microsoft and HP/LeftHand are all working to a common goal (in competition with each other, of course): to be right where it’s happening when service providers take the next step from providing telecoms, disaster recovery and software as a service, and start providing, effectively, resource as a service, infrastructure as a service.

When organisations are comfortable trusting a 3rd party to host their applications, their user data and even their desktops, any vendor worth their salt wants to be there. Before long, we won’t be asking customers what switches, servers and storage they want. We’ll be asking broader questions… How many IOPS do you want? How much memory? How much computing power do you need? How much bandwidth do you want? How many people do you want to be able to make phone calls? This adaptable, mobile architecture we’re seeing now will be doing the math…. Service providers will be selling virtual commodities.

Below is a video by Gartner, with some of their analysts discussing some of the points of cloud computing :

Of course, we’re a little while away from seeing that happen in the mainstream, a little way away from seeing the masses flock to these service providers. People like to have control over their data; they know that if it’s in a rack they can walk up to and touch.. they have control. The market needs to have confidence in this concept that is the cloud… and again, there are businesses who understand and are comfortable with this concept and have adopted it for aspects of their business. But when people start entrusting their critical core business applications, which are bound to OLAs and SLAs… this is when it will get really exciting.

Who am I ?

I work as a Technical Architect for a European data centre distributor. The goal of this blog is simply to be a resource for people who want to learn about some of the technologies which make up today's datacentre, be it large or small. I've been working with EMC storage, virtualisation, backup and Cisco UCS technologies for some time now and have a drive to learn and share on various topics related to the above. This blog will provide information on how specific technologies work, what questions need to be asked in order to qualify certain technology requirements, and my two pence on some of these technologies. Ultimately I enjoy technology and count upon other people sharing their knowledge.. so this blog is to try and give back to the IT community where I can.

Please feel free to provide feedback as to the content on this blog and some bits you'd like to see.

Get my posts by email

Legal Disclaimer

Information on this blog may contain inaccuracies or typographical errors and may be changed or updated without notice. This blog doesn't constitute an offer or contract.

Links:

This blog may provide links to other blogs or websites I feel may have relevant content. However I make no representations whatsoever about other websites that you may access through this one.

Liability:

IN NO EVENT WILL “INTERESTING EVAN’S BLOG” (Evan Unrue) BE LIABLE TO ANY PARTY FOR ANY DIRECT, INDIRECT, SPECIAL OR OTHER CONSEQUENTIAL DAMAGES FOR ANY USE OF THIS WEBSITE/BLOG, OR ANY OTHER HYPERLINKED WEBSITE/BLOG, INCLUDING WITHOUT LIMITATION ANY LOST PROFITS, BUSINESS INTERRUPTION, OR LOSS OF PROGRAMS OR OTHER DATA.