Friday, September 17, 2010

At one time IT products were invented for business use and later redeveloped for consumers. Now products are being designed for consumers first. One only has to look at the incredible growth of Apple to recognize this trend. Apple's market cap now exceeds Microsoft's, so it is easy to forget that, relatively speaking, they do not own a large share of the market: just 7% of the PC market and 18% of the smartphone market, according to industry analysts. Yet they are now one of the biggest software companies around.

The economics of IT are shifting, with Gartner predicting that by 2012 20% of businesses will own no IT assets. In addition, users are becoming increasingly demanding and oblivious to traditional IT concerns like where data is stored or how it is secured. It is also interesting that services like Hotmail, which have no SLAs, tend to provide better uptime than some internal IT shops because they are designed to serve demand from a vast number of users.

Personal computing has also arrived at work and is unlikely to be stopped. IT must now come to grips with how to integrate these new devices, how to deal with user applications and, more importantly, how to impose some level of control.

There is evidence that these trends are not all bad for an organization. For example, Bring Your Own Computer programs generally let businesses increase employee satisfaction while reducing support costs. Employees tend to be more productive because they have a single device with potentially more mobility, leading to a better work-life balance.

Why bother looking at consumerization and its impact? A generation of employees is entering the workforce that has grown up with consumerization and expects IT to work in a similar fashion. This will become an increasingly competitive factor as the economy heats up and companies work to attract and retain the best, brightest and most technically savvy employees.

This information is based on a presentation given by Rowena Samuel, the Canadian Technical Partner Manager at NetApp, at our VMworld Revisited event.

We are all aware that virtualization creates storage challenges. Centralized storage is inherently more expensive than DASD. What is often overlooked is the cost of floor space for storage related to server or desktop virtualization. Storage is also impacted by the increased consumption rate in virtual environments and by new response time expectations.

NetApp addresses cost through deduplication and storage efficiency. When NetApp talks about storage efficiency, they are talking about server offload: using Wintel servers as servers, and storage devices for storage and backup.

To NetApp, mobility is about being able to mirror data between data centers; in addition, integration with VMware virtualization products like SRM is very important to NetApp's strategy.

NetApp believes unified storage is much more than multi-protocol support. It is also about unified storage controllers that enable customers to move from low-end to high-end storage. NetApp also supports other vendors' storage solutions by adding NetApp controllers in front of existing arrays. Unification also means one management tool and one storage OS to support.

NetApp integrates "tier-less" storage: high-speed caching hardware called Flashcache. NetApp provides many degrees of efficiency, such as putting Flashcache in front of SATA to accelerate cheaper disk solutions.

NetApp Snapshot technology is highly robust and plugs into VMware management tools. For example, you can snapshot entire datastores but recover individual VMs using NetApp technology. NetApp believes strongly in the value of deduplication in virtualization environments. They offer a guarantee of a 50% recovery of storage when NetApp dedupe is integrated into virtual infrastructure (conditions apply, of course).

NetApp offers storage for $50 per VDI instance; however, you will have to check their white papers to determine what combination of product and configuration is required to hit this price point.

Flashcache prevents boot storms (performance issues on storage caused by multiple VMs starting at the same time) by reading from cache instead of the disk subsystem. It is dedupe-aware, so if 100 images are requested, only one loads and can be used to serve multiple requests. This can offer significant performance gains and cost savings in VDI environments.
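The dedupe-aware caching idea can be sketched in a few lines. This is a toy model, not NetApp's actual Flashcache implementation: blocks are cached by content hash, so 100 identical boot images are served from a single cached copy and only the first boot ever touches disk.

```python
import hashlib

class DedupeAwareCache:
    """Toy dedupe-aware read cache: blocks are keyed by content hash,
    so identical blocks from different VM images share one cache entry."""

    def __init__(self):
        self.cache = {}       # content hash -> block data
        self.disk_reads = 0   # reads that had to hit the disk subsystem

    def read(self, block):
        key = hashlib.sha256(block).hexdigest()
        if key not in self.cache:
            self.disk_reads += 1      # cache miss: one trip to disk
            self.cache[key] = block
        return self.cache[key]

cache = DedupeAwareCache()
boot_image = b"identical Windows boot blocks"
# 100 VMs booting simultaneously request the same deduplicated image
for _ in range(100):
    cache.read(boot_image)
print(cache.disk_reads)  # 1 -- only the first boot touched disk
```

Swap the toy dict for flash-backed hardware and the same principle explains why a boot storm collapses into a single disk read.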

NetApp is very excited about vCloud Director. They were heavily involved in testing prior to the product launch, and they see vCloud Director as the first offering in the ITaaS (IT as a Service) space.

The key features of vCloud are support for virtual data centers, or true multi-tenant environments. In addition, infrastructure service catalogs can be created to allow users to browse and select VMs, or a collection of VMs representing an application or service. vCloud Director fully integrates with VMware Orchestrator and provides an extra level of abstraction on virtual infrastructure to merge public and private clouds.

NetApp believes they can enhance multi-tenant environments because they can create virtual storage devices on top of a single physical device. While customers may not require these features now, these technologies will be increasingly important as we migrate to the cloud. The full value message can be found on YouTube at http://tinyurl.com/36wx9cw

NetApp sees the journey towards the cloud as a roadmap for all customers. To prepare, customers will need to integrate their silos of virtualization and standardize to ensure they are "cloud ready".

Friday, September 10, 2010

You know, I always find it interesting when a new concept breathes new life into one that has been around for a while. I am delivering the VDI message as the interest level is extremely high and it is a good solution in many cases. As someone who has been in the industry for a while, I was delivering presentation virtualization when it was just called server-based computing. Over the last few months a few customers have mentioned that they consider Terminal Services (TS) a legacy delivery method. The great criticism has always been that it changes the user experience by not delivering a full desktop (I do recognize you can deliver a shared server desktop, but in general it is not recommended).

What is interesting is that on an iPad this is its greatest strength. There is a lot of development being done by all the VDI vendors to provide an iPad client that delivers the desktop anywhere. Some of them have delivered, and a few are in alpha or beta. But from using sysadmin-type tools on the iPad, I am wondering whether going straight to the app provides a better experience than going first to the desktop and then to the app.

As any TS administrator can tell you, giving up the desktop in favor of thin clients in a traditional Citrix or TS environment was a bit of an uphill battle. But does the iPad close this last mile of hard-fought-over end user space? Clearly Citrix has hedged its bets by providing an Apple-esque method of consuming applications called Dazzle. VMware mentioned a similar on-demand application model, but details were light, and Microsoft significantly improved TS in the Server 2008 release. What may not serve the vendors as well is the rebranding around desktop-centric virtualization. As we look to transition from IT shops to service-driven organizations, the server-based computing/tablet model has become compelling, thanks in large part to the iPad. And unlike in past IT end-user turf wars, the demand for the new device is coming from the users themselves.

Thursday, September 9, 2010

I was attending an Intel presentation last week on their cloud strategy, and what struck me was the amount of internal alignment that had to be completed in order for them to take advantage of cloud-based services. Most organizations are not the size of Intel, which runs 100,000 servers across 95 datacenters, but there are lessons to be learned from their cloud readiness preparation that apply to organizations of all sizes. The first area of focus was compute consistency, which, when we look at consuming Infrastructure as a Service (IaaS), means ensuring standard sizing of a virtual machine to an application workload. In our virtualization practice we recommend that customers standardize on a set of configurations for high, medium and low workloads if they do not include performance measuring as an internal process. This ensures consistency when transferring on-premise workloads to IaaS providers. For companies running large virtualization shops this can be daunting if no consistent standards existed in the first place. Fittingly, Amazon just announced micro instances on EC2 for smaller application workloads, which highlights the need for this type of categorization and standard: http://aws.amazon.com/ec2/
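The standardization idea above is easy to sketch. The tier names and sizes below are illustrative assumptions, not Intel's or Amazon's actual catalog: measured peak demand is mapped to the smallest standard "T-shirt size" that covers it, so every workload lands on one of a few known configurations.

```python
# Hypothetical standard VM configurations for low/medium/high workloads.
# Real sizes would come from your own performance measurements.
TIERS = {
    "low":    {"vcpu": 1, "ram_gb": 2},
    "medium": {"vcpu": 2, "ram_gb": 4},
    "high":   {"vcpu": 4, "ram_gb": 8},
}

def fit_tier(peak_vcpu, peak_ram_gb):
    """Return the smallest standard tier that covers the measured peak."""
    for name, spec in TIERS.items():   # dicts preserve insertion order
        if peak_vcpu <= spec["vcpu"] and peak_ram_gb <= spec["ram_gb"]:
            return name
    return "high"  # oversize workloads land in the largest tier

print(fit_tier(2, 3))  # medium
```

With every workload expressed as one of three tiers, matching an on-premise VM to an IaaS instance type becomes a table lookup rather than a negotiation.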

The second area of focus was consistent service standards. This is somewhat intuitive but may not be at the top of the list for customers considering consuming resources from the cloud. It stands to reason that if you are going to move compute resources to the cloud, your services need to be well defined and consistent so they can either be matched by the provider or seamlessly maintained by your internal IT organization, irrespective of geography. The message is simple but important: ensure your IT house is in order so consistency can be maintained when you take advantage of cloud-based services.

Wednesday, September 8, 2010

One interesting phenomenon impacting IT environments is the DIY ("Do It Yourself") applications designed to work over standard ports, allowing remote access to information. Increasingly these DIY applications are showing up inside our environments. Applications like LogMeIn or Dropbox, to name a few, allow a lightweight applet to load on a desktop, in some cases without privileged access. These "applets" enable users to stand up their own remote access services without IT. While they have been around for a while, their use is on the increase as users get acclimatized to installing "applets" in the App Store or Facebook model. In addition, a new generation of online storage and synchronization tools designed for end users is readily available. Demand is also increasing as people transfer data from their desktops to devices like the iPad to enable mobility.

The age of "DIY IT" is upon us, and we need to move quickly to respond to the trend. How, then, do we balance the requirement for IT self-service, the Bring Your Own Computer (BYOC) trend and complete mobility against compliance and regulation requirements? Luckily, solutions such as VDI now offer flexible delivery of virtualized applications and desktop access from anywhere. In addition, security standards are being introduced to cloud providers, along with vendor security certification programs, to ensure data is protected and the hosting facility can be trusted. However, while these aim to meet users' changing consumption requirements, they do not form a comprehensive enough solution to prevent an underground exodus of corporate data. Additional layers of security, such as digital rights management and a strong policy governing end users' responsibilities, will be required to secure corporate information.

It will be a fine balance between enabling users with technology and protecting company information at the same time. All the pieces, however, have reached a level of maturity sufficient to address most of these issues. The onus will be on the IT team to work with their partners to bring these components together to provide security and reduce risk without restricting mobility or end-user flexibility. As this trend is likely already impacting our environments, the time to start planning is now.

Tuesday, September 7, 2010

During the keynote Stephen Herrod made a few announcements regarding acquisitions; however, there was little detail. The E-Commerce Times published a conference report with a little more information. You can read more here: "VMware Buys Parts for Its 'Virtual Giant'".

Friday, September 3, 2010

So the virtualization event of the year has come and gone, and with it several new product announcements specifically designed to address scale, security and management of cloud-based infrastructure.

As always, the event was an awesome opportunity to interact with people from all areas of the industry. After much confusion regarding some of the acquisitions, VMware is putting forth a vision of the future. What was interesting was the shift in focus to applications. The comment "it's all about the applications" was reinforced at the keynotes and in some of the forward-looking breakout sessions. This sounds decidedly familiar to Citrix's mantra, but is it really? VMware went to great pains to distinguish between legacy and future application development platforms. Clearly they see Citrix as an enabler of legacy applications, but not as a platform or application framework. This is what SpringSource is all about: it is a hosted development platform much like Microsoft Azure.

The new generation of applications will live in the cloud with a small client-side plug-in that provides the user interface, the model Apple has pioneered with the App Store. Citrix has moved quickly to emulate this through Dazzle, but they are focused on delivery rather than on delivering a development platform for customers.

There is evidence to strengthen this view: Gartner recently released statistics indicating that 50% of the applications businesses consume are coming from the cloud (think salesforce.com). This will create new challenges for IT teams as they seek to ensure standards and compliance are maintained on third-party infrastructure into which they have no visibility. In addition, they will be forced to move quickly, as users accustomed to the App Store model pose a real threat of private internal information bleeding out to the Internet (if IT does not provide similar services, why not use something like Dropbox on my desktop or smartphone?).

The hardware vendors are also moving quickly to take advantage of the rush to the cloud. Several announced turnkey unified server, storage and networking hardware that can be purchased as a single unit or block. This will have an impact on integrators as they transition from providing component-based services, like VMware product deployment, to becoming integrated service providers. Service providers will have to become a one-stop shop for the design, deployment and delivery of the entire infrastructure stack. While those of us who have been doing virtualization for a while have largely made these adjustments, because virtualization already tightly integrates the infrastructure components, there are still a few things that are not yet clear. In addition, the demand for buying large blocks of infrastructure will inevitably lead to partnering and acquisitions between software, hardware and service providers to shore up gaps in their capabilities or their ability to compete in certain markets.

It will be interesting to see whether traditional infrastructure services will need to expand to incorporate development as an additional capability. VMware has already encouraged partners to start engaging customers in conversations around development. With VMware's stronger focus on a new generation of development platform, I wonder how long it will be before application development and virtual infrastructure become the same conversation.

I was speaking with one of my colleagues, and he could not get over the amount of product being introduced to optimize all the different forms of virtualization (end user, storage, etc.). The irony was not lost on us that a technology introduced to simplify management has brought such a vast array of complexity into the marketplace. Even as problems are addressed, the level of expertise required has increased significantly. This is apparent in the vShield and Nexus products, which target many of the deficiencies in virtual infrastructure but require a different level of understanding of the network and security stack.

VMware decided to eat their own dog food a little and offered all labs from the cloud through several cloud providers. The number of labs they were able to deliver using this strategy, and the number of virtual machines they deployed and then refreshed over the four-day conference, was staggering. Clearly they have developed their own case study for the power of cloud computing, even if the lifespan of the VMs was typically under 90 minutes.

Wednesday, September 1, 2010

View currently supports AD, Novell, RSA SecurID and smart cards as authentication methods. In addition, third parties support further methods of authentication for View.

Two methods can be used to establish a connection: direct to desktop, or tunneled through the View server over HTTPS. The tunnel can also be moved to a View Security Server to offload the overhead from the View server to the proxy (Security Server). Typically a Security Server is deployed in a DMZ and is appropriate for remote access scenarios.

PCoIP is not supported through the Security Server proxy in View 4 or 4.5; only RDP is. VMware is working with Teradici to get this working, but recommends using a VPN if you are serving PCoIP externally.

With 4.5 you get delegated role-based access control. Certificate management and revocation have also been added.

Administrators can now be assigned roles with associated permissions and then scoped to folders within the View hierarchy of resources. Built-in roles include inventory management and global administration, and custom roles can be added with specific permissions. For example, you can divide your View architecture into folders that represent geographic regions and then add regional administrator roles.
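The folder-scoped role model described above can be sketched as follows. This is an illustrative data-structure sketch, not the actual View 4.5 API: role names, permission sets and the path-walking check are all assumptions made for the example.

```python
# Hypothetical role definitions: role name -> set of permissions.
from dataclasses import dataclass, field

ROLES = {
    "global_admin":    {"view", "modify", "delete"},
    "inventory_admin": {"view", "modify"},
    "regional_admin":  {"view", "modify"},
}

@dataclass
class Folder:
    name: str
    grants: dict = field(default_factory=dict)  # admin name -> role name

def allowed(admin, action, path):
    """An action is allowed if any folder on the path from the root
    grants the admin a role that includes that permission."""
    for folder in path:
        role = folder.grants.get(admin)
        if role and action in ROLES[role]:
            return True
    return False

root = Folder("root", grants={"bob": "global_admin"})
emea = Folder("EMEA", grants={"alice": "regional_admin"})
print(allowed("alice", "modify", [root, emea]))  # True
print(allowed("alice", "delete", [root, emea]))  # False
print(allowed("bob", "delete", [root, emea]))    # True
```

The point of the structure is that Alice's authority stops at the EMEA folder, while Bob's global role cascades down from the root.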

VMware recommends integrating the vShield products into your VDI environments. One interesting thing about vShield Edge is that it can provide load balancing for the View servers. vShield App can be used for zoning desktops, and vShield Endpoint can offload antivirus protection.

The Internet is only 45 years old, and there are now 1.8 billion users accessing it; by 2015 this number is expected to reach 4 billion. This presentation focused on what the Internet means to Intel. Each smart device typically has 4 separate radios accessing the Internet, and Intel is working to move that number to 5 or 6 using chip technology.

So how is the cloud taking shape? There are two layers: cloud computing, or consumption, and cloud architecture, which is typically shared, dynamic and virtual in nature. Intel will build microprocessors that expose more and more instrumentation for partners like VMware to take advantage of.

When we look at private vs. public cloud there are a number of concerns, specifically interoperability, security and standards, and providers will need to assure their customers these have been addressed. Intel is making acquisitions to deliver on their vision of the cloud: Intel believes the cloud should be simplified, efficient and secure.

Intel runs 100,000 servers across 95 datacenters, broken down into 4 verticals (design, office, manufacturing and enterprise) and growing at 45% a year. Storage utilization is 18 PB. Virtualization is key to meeting this internal demand, and a proactive server refresh introduces new CPU processing capacity: four servers can be replaced with one Westmere-based server. In addition, the network is being optimized so that the path to storage takes the lion's share of the bandwidth.
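The 4:1 refresh ratio and 45% growth figure combine into a useful back-of-the-envelope calculation. The fleet size and growth rate are the figures from the talk; treating growth as uniform across the fleet is my simplifying assumption.

```python
# Back-of-the-envelope: how many new servers cover next year's demand?
servers = 100_000    # current fleet, in old-server units of capacity
growth = 1.45        # 45% annual demand growth (assumed uniform)
consolidation = 4    # each Westmere-based server replaces four old ones

demand_next_year = servers * growth             # capacity needed, old-server units
new_servers_needed = demand_next_year / consolidation
print(f"{new_servers_needed:,.0f} new servers cover next year's demand")
# -> 36,250 new servers deliver the capacity of 145,000 old ones
```

Even with 45% growth, a 4:1 consolidation on refresh means the physical footprint can shrink rather than grow, which is the point the presentation was making.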

A key component is global datacenter metrics and monitoring to guide how money should be spent to increase efficiency and reduce compute costs. In addition, rationalizing the number of applications that Intel supports is part of this strategy.

Intel is piloting cubicle cluster computing: building a virtual rack from desktops distributed across office cubicles. Intel studies have shown that cost is lower because natural air flow in office locations is more efficient at cooling than pushing air through a dense rack-based datacenter. They put the desktops in a virtual cluster and serve demand from the collection of desktops to create a local datacenter experience for a branch location.

Intel has worked on pushing standards across all datacenters to ensure compute power can be pushed to any datacenter irrespective of physical location. This has led to 82% utilization for the design environment. Now that they have consistent standards they can look to the public cloud to ensure information can be shared. The public cloud is currently being used for sales and marketing, but is expected to play a bigger role in the future.

In addition, service delivery standards must be consistent across the entire organization to enable private and public delivery from cloud infrastructure. Intel has a new point of view, "device-independent mobility and client-aware computing", to focus their service delivery standards.

IT's challenge: how can I take a large environment, run it with different SLAs and scale it? How can it be run with zero-touch management? Two solutions have traditionally been available: scale-out (separate storage tiers) or server-and-storage combinations (i.e. virtual storage appliances).

In the future, storage needs to become VMDK-aware and to understand VM-based policies. It also needs to support key encryption to deal with multi-tenancy and security.

Enablers for these transitions are multi-site global identities for VMs and long-distance vMotion for site-to-site load balancing.

Some considerations for desktop and storage architectures include new problems, like the anti-virus storms brought about by VDI. Challenges include cost and scale (deploying the 100,000th VM should be as easy as deploying the 10th), and a stateful experience must be delivered to users.

Storage is moving to cloud-based, application-centered management. Traditional storage solutions prevented applications from scaling. Scale-out or clustered applications helped, but required coding with the architecture in mind and restricted application designs.

VMware believes the storage of the future will be a blob store (hmmm, not too sure about this concept). Essentially it will lack structure, the complete opposite of the RDBMS model, and instead will be a series of datastore services, likely made up of a collection of infrastructures both internal and external to an organization. The benefit of this architecture is low OpEx, as there is no structure to manage.

Cisco has shipped over 1 million virtual Ethernet ports to date. The switch is built on Cisco NX-OS and is compatible with all switching platforms. The infrastructure, based on the Nexus 7000 framework, is made up of a virtual supervisor module (VSM) and virtual Ethernet modules. Cisco is looking to extend its framework using a Virtual Service Node. The VSM has a virtual appliance form factor.

Customers have been asking for networking services at the kernel level rather than the guest OS level to improve performance. Cisco has started down this path by introducing virtual service domains, which define a logical group of VMs protected by a virtual appliance.

This year Cisco is introducing a new architecture: vPath. Network packets are redirected to virtual service nodes to enforce policies, for example to push communications through a firewall. Virtual service nodes can serve multiple ESX hosts, eliminating the appliance-per-host model, and the redirect policies are cached on the Nexus 1000V to reduce network overhead. Cisco's first implementation of this architecture is the Virtual Security Gateway, a firewall that can be deployed in an active/standby configuration. To manage a combined Virtual Security Gateway and Nexus 1000V architecture, a single administration point is now available: the Virtual Network Management Center. You can now manage both network and security zones so that policy can be enforced across both, and you can set up different SLAs for network bandwidth consumption on a group of VMs. In addition, port mirroring is supported to enable traffic analyzers, which allows troubleshooting in a multi-tenant environment without exposing everyone's traffic and lets you get very granular when you pipe out network traffic in a cloud environment.

Cisco has been working with several partners to extend vMotion across long distances. Cisco refers to this development as Overlay Transport Virtualization (OTV).

Nexus 1000V Myths

- Nexus switching is based on proprietary Cisco standards. No, it is based on open standards.

- It only works with Nexus switching. No, it works with any Ethernet switch.

The goal of this session was to review the complete VMware stack for provisioning a business service. The session introduced the vBlock: a turnkey infrastructure that can be purchased as a single unit, in low-end to high-end configurations. It bundles vSphere, vCloud Director, Cisco blades and networking with CLARiiON storage and a SAN switch. The idea is to provide predictable facilities, performance and fault tolerance; additional vBlocks can be purchased to add scale and capacity. The basis of the vBlock model is to buy a service platform rather than concentrate on the components.

Provisioning a vBlock is done with EMC Ionix Unified Infrastructure Manager, which provides a series of templates for configuring network, compute and storage resources. For example, the ESX build can be pushed along with the storage and networking configuration. The state information for the blades is stored in a service profile, so the blades themselves are stateless; once associated, a blade takes on the appropriate identity.

Using vCloud Director with vCenter Chargeback and vShield for security, you can provision virtual data centers (vDCs). To provision, you start with an organization to which you associate vDCs. The vDCs can then be subdivided into service catalogs (collections of VMs, templates and ISOs that deliver applications). In addition, you have organizational networks, which tie the vDC to the physical network layer.
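The provisioning hierarchy above can be modeled as a simple set of nested objects. The class and field names below are illustrative, not the vCloud Director API: an organization owns vDCs, a vDC holds service catalogs of items, and organizational networks bind the vDC to the physical layer.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogItem:            # a VM template, vApp, or ISO
    name: str

@dataclass
class ServiceCatalog:         # a collection of items delivering an application
    name: str
    items: list = field(default_factory=list)

@dataclass
class VirtualDataCenter:      # a vDC carved out of physical resources
    name: str
    catalogs: list = field(default_factory=list)

@dataclass
class Organization:           # the tenant that owns everything below it
    name: str
    vdcs: list = field(default_factory=list)
    org_networks: list = field(default_factory=list)  # tie vDCs to physical nets

org = Organization("Finance")
vdc = VirtualDataCenter("finance-gold")
vdc.catalogs.append(ServiceCatalog(
    "3-tier-app", items=[CatalogItem("web"), CatalogItem("db")]))
org.vdcs.append(vdc)
org.org_networks.append("finance-ext-net")
print(org.vdcs[0].catalogs[0].items[0].name)  # web
```

Walking the tree from organization down to catalog item mirrors exactly the order in which an administrator provisions the real thing.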


About Me

I am a Principal Cloud Architect at Long View Systems and have spent 16 years designing, implementing, and managing IT infrastructures in highly available computing environments. My primary areas of focus are the deployment of virtualization (server, storage, desktop, and application) and WAN optimization.