Current IT

Some of the top trends for 2015 in enterprise IT are focussing on cloud, security and mobility. Since Microsoft’s ‘mobile first, cloud first’ strategy announcement, followed later by VMware’s recent ‘one cloud, any application’ theme, you will be hard pressed not to hear these topics being discussed. However, before we look at 2015, let’s take a look at what organisations used to do.

Back in the day

In the pre-cloud era I used to use the phrase “well managed IT”. Shocking to hear, but before cloud, orchestration and everything-as-a-service, in some organisations we operated the following models:

Self-service request fulfilment

A combination of automated and manual tasks depending upon the service

Using role- and policy-based configuration combined with roaming profiles, login scripts and IntelliMirror technologies, we designed user experiences that followed you. If you logged into our remote desktop solution the experience also followed you (customised to cater for lower bandwidth by providing a slightly reduced feature set)

Applications that followed you. Using Microsoft Systems Management Server we could target applications at groups, advertising the application to ensure you could use it wherever you went. Fancy stuff!

Self-service recovery. If your machine had a software failure that was not catastrophic we could re-image your device over the wire, or, as was our standard solution, utilise a local source. We would even run tools to try and keep your documents safe during this process. The only downside was that you had to be on the corporate network.

Orchestration and automation. Before fancy orchestration engines and workflow tools built on open standards were invented, we had to settle for our trusty Notepad. We did, however, build scripts using standard languages, protocols and data structures (XML), draw our workflows out (in Visio, or on paper), and build task-based engines, using databases, files, the registry and XML datasets as reference and tracking tools (we even updated configuration and user data in other systems)
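A minimal sketch of what one of those task-based engines looked like in spirit: read a task list from an XML dataset, run each pending task, and track progress back in the same structure. The task names and XML layout here are hypothetical, purely for illustration.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML task list of the kind we used to drive our engines with.
TASKS_XML = """
<tasks>
  <task id="1" name="map-drives" status="pending"/>
  <task id="2" name="install-office" status="pending"/>
  <task id="3" name="set-wallpaper" status="done"/>
</tasks>
"""

def run_pending_tasks(xml_text, runner):
    """Parse the task list, run each pending task, and record its completion
    back into the XML tree so the dataset doubles as the tracking tool."""
    root = ET.fromstring(xml_text)
    completed = []
    for task in root.iter("task"):
        if task.get("status") == "pending":
            runner(task.get("name"))    # hand off to the actual worker script
            task.set("status", "done")  # track progress in the dataset itself
            completed.append(task.get("name"))
    return root, completed

root, done = run_pending_tasks(TASKS_XML, runner=lambda name: None)
print(done)  # ['map-drives', 'install-office']
```

The real engines were considerably messier, of course, but the pattern was the same: the data structure was both the work queue and the audit trail.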

Automation. Xtravirt’s co-founder and CIO Paul Davey and I used to try to automate everything, and this is also where Xtravirt’s SONAR cloud-based analytics service emerged from. To this day I still believe in automation, so much so that I wrote a quick script to read my firewall log out to me (useful? maybe less so than previous work, but it kept the brain working). As well as architecting and designing the solutions, we also built the management systems, images and so on. To this end we worked closely with the operational teams to provide automated tools and processes that made the support team’s life easier.
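In the same spirit as that firewall-log script, here is a small sketch that parses log lines and boils them down to a single sentence a text-to-speech engine could read out. The log format and addresses are made up for the example; a real firewall log would need its own parsing rules.

```python
import re
from collections import Counter

# A few made-up lines in a simplified, hypothetical firewall log format.
LOG = """\
2015-02-01 10:01:02 BLOCK tcp 203.0.113.9:4444 -> 192.168.1.10:22
2015-02-01 10:01:05 ALLOW tcp 192.168.1.10:51000 -> 198.51.100.7:443
2015-02-01 10:02:41 BLOCK udp 203.0.113.9:53 -> 192.168.1.10:53
"""

LINE = re.compile(r"(?P<action>BLOCK|ALLOW)\s+(?P<proto>\w+)\s+(?P<src>[\d.]+):\d+")

def summarise(log_text):
    """Reduce raw log lines to one short sentence worth reading aloud."""
    actions = Counter()
    blockers = Counter()
    for line in log_text.splitlines():
        m = LINE.search(line)
        if not m:
            continue  # skip lines that don't match the expected format
        actions[m.group("action")] += 1
        if m.group("action") == "BLOCK":
            blockers[m.group("src")] += 1
    top = blockers.most_common(1)[0][0] if blockers else "nobody"
    return f"{actions['BLOCK']} blocked, {actions['ALLOW']} allowed; noisiest source {top}"

print(summarise(LOG))  # "2 blocked, 1 allowed; noisiest source 203.0.113.9"
```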

Configuration management

We used a number of methods and systems, including Systems Management Server and many bespoke scripts, to maintain asset and configuration information. Again, we built release management tools and processes to ensure we were in control (as much as possible) of the activities and were able to accurately report on the configuration and asset baselines of the estate.
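The baseline reporting boiled down to one comparison, sketched below: diff the recorded configuration baseline against what is actually observed on the estate and report the drift. The host names and attributes here are hypothetical examples, not real inventory data.

```python
# Hypothetical recorded baseline vs. observed estate configuration.
BASELINE = {"pc-001": {"os": "XP SP3", "office": "2003"},
            "pc-002": {"os": "XP SP3", "office": "2003"}}

OBSERVED = {"pc-001": {"os": "XP SP3", "office": "2007"},
            "pc-002": {"os": "XP SP3", "office": "2003"}}

def drift(baseline, observed):
    """Report each machine whose observed config differs from its baseline,
    as {host: {attribute: (expected, actual)}}."""
    report = {}
    for host, expected in baseline.items():
        actual = observed.get(host, {})
        diffs = {k: (v, actual.get(k)) for k, v in expected.items()
                 if actual.get(k) != v}
        if diffs:
            report[host] = diffs
    return report

print(drift(BASELINE, OBSERVED))  # {'pc-001': {'office': ('2003', '2007')}}
```

Everything else in the release process existed to keep that report empty.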

I could probably go on forever about the different areas we used to work on; however, the main theme here is that we’ve been doing this for the past 10 years.

Moving into the mobile and cloud era

So fast forward to the present. We are talking cloud, enterprise mobility, software-defined everything. No longer are we making bespoke solutions in Notepad; we now have a host of tools to orchestrate, automate and provide self-service everything, all out of the box.
While it’s true our technology capabilities have improved, what used to require some special magic now ships as a standard capability with the products. The reality is, to achieve a well-managed mobile and cloud-based model there is still a ton of effort that is required.
So can we provide access to our systems and data on any device, anytime, anywhere and in a secure manner? Well, the answer is yes; we could before and we still can today. The main advance I see is that we can spend more time providing business solutions and less time writing bespoke engines cobbled together from a set of scripts.
Remember, the IT landscape is still incredibly complicated, with billions of transactions occurring; weaving this web together into a well-managed, efficient, cost-effective and business-valued service still requires more than just opening the box.
If you need help along your virtualisation journey and moving into the mobile and cloud era, Xtravirt can deliver the right strategy and architecture for your business, so contact us today.

Looking into the crystal ball

For starters, I don’t have a crystal ball (if I did I would probably have won the lottery and be on an island somewhere hot), so predicting the future isn’t that easy. We can, however, at least give it a go.

Evolution of Computing

Over the past 40 years or so we have moved from centralised mainframe computing onto client/server applications and there began the stacking of beige servers in every server room. We then realised we could consolidate, and swapped out the numerous beige servers for fewer but larger shiny silver rack mount servers running virtual machines. Once we had virtualised as much as possible the next logical step was to then consume these offerings as a service. This is what we currently describe as Cloud computing.
Whilst evolution has provided the ability to consume serviced offerings today, the stark reality is that we are currently somewhere between the adoption curves of virtualisation and cloud. The question on many people’s minds is: what does life look like for IT post-cloud? For this prediction I’m going to assume that cloud has been adopted by the masses, as opposed to the world moving into an era of cyber warfare where secrecy is paramount and the idea of using multi-tenant services is off the cards. In this scenario, the IT department takes on the role of a cloud broker.

Internal IT organisation gap analysis

The following matrix outlines typical existing IT departmental capabilities, with a view on whether, in the post-cloud era, the requirement for each capability will increase, reduce or remain the same:
The gap analysis produced is very high level and incredibly speculative; I have, however, begun to consider the likelihood and impact of changes to the technology landscape. Who knows, maybe at some point we will have answers to questions such as:

Will my level 1 headcount need to increase and level 2/3 be offset to vendors and cloud providers?

Will we rely on a far greater maturity of supplier management?

Will IT security internally become outsourced to cloud providers?

Will the increase in 3rd party services and solutions increase the requirement for strong central governance?

How will cyber warfare affect the corporate IT landscape?

Will cloud be overtaken by a far greater disruptive force?

What’s your view on the post cloud era? Will it go full circle and bring IT back in house? Will we exist in a hybrid world, or will we become consumers of service?
Anyway, this is just a glimpse of the types of conversation the team at Xtravirt have when they aren’t out solving customer issues.
We are always here to help you with your virtualisation challenges so if you have a requirement, please contact us and we’ll be happy to assist.

I was looking forward to the London VMUG meeting a great deal as aside from the interesting and thought provoking sessions I hadn’t been able to get along to a London VMUG since May 2014. VMUG meetings are also a great opportunity to catch up with friends and peers who share a passion for virtualization.
As ever, Alaric Davies kicked off the meeting in his own unique and amusing style, outlining the agenda for the day and also presenting the five community contributor/speaker awards. It was great to see a few of my Xtravirt colleagues in the list of community speakers from 2014.
The first session of the day was PernixData’s “FVP Software in a real-world environment”, presented by the ever eloquent Frank Denneman (PernixData) and James Leavers (Cloudhelix). This was a great session outlining how PernixData was currently being used in a large environment and what benefits, cost savings and performance gains it had provided. The key quotes of the presentation for me were “mountains of greatness” and “molehills of mediocrity” when displaying performance data in the slide deck.
Next up were the “vFactor” lightning talks, where five community members (who had volunteered) were asked to give a strict 10-minute presentation on any relevant technical subject/project, after which we the audience got the opportunity to vote for our favourites. This was an excellent session and a great way to see what other folk are doing in their unique environments. All five of the presenters did a fantastic job and I am looking forward to seeing this happen again at future meetings.
After the break I sat in on the very first Xtravirt Lab which was a technical preview and demonstration of SONAR (Reporting-as-a-Service) presented by Peter Grant. It was good to observe the Q&A and feedback after the demonstration and there was very apparent interest in the product and rolling beta programme!
After the lunch break, I headed into the SimpliVity session titled “Making sense of converged infrastructure”, presented by Stuart Gilks. This was an enjoyable session where the case for converged infrastructure was made using a great analogy of motorsport. It was also good to learn more about the SimpliVity product and its capabilities.
Xtravirt’s very own Michael Poore presented one of the next sessions which I missed, (sorry Michael!) but I heard a lot of great feedback from those that attended.
The final session I attended was by Valentin Bondzio from VMware Global Support Services, titled “RDY, NUMA and LLC Locality”. Anyone who attended this session will likely agree this was an excellent deep dive communicated in an easy to understand and often humorous fashion.
To cap off the day, everyone gathered in the main meeting room where the winners of the vFactor were announced (all five of the guys were given prizes, a well-deserved pat on the back and a round of applause).
The winners of the various vendor prize draws were also announced, so there were lots of smiling faces to finish what was an excellent day.
If you have never been to a VMUG meeting I would strongly recommend it as the content is always pertinent and engaging. There are VMUG groups all around the UK so if you need more information on which one is local to you, visit the VMUG website.
Xtravirt is always here to help you with your virtualization challenges so if you have a requirement, please contact us and we’ll be happy to assist.

‘Twas the last week before Christmas and Santa’s head of IT, Eric the Elf, was quietly satisfied. He’d had a busy year carrying out a long overdue refresh of ‘End Elf Computing’ at the North Pole. Of course, the users (including the Boss) may not recognise the amount of work Eric put in to make the solution scalable, reliable and able to deliver – but nor should they need to – a quiet user is a happy user! Such is the lot in life for a busy IT elf.

So what did Eric have to do?

Eric had to upgrade a considerable number of users from old Windows XP machines to something new and had spoken to Xtravirt about how to not just ‘rip and replace’ the desktops, but to move to a more flexible working environment. He had to think about the following:

VIP Users: Obviously Santa, being the boss, is THE VIP user. He’s also pretty mobile and network connections can vary while he’s on his travels.

Toy Factory Users: These need a reliable platform to handle desktop style applications. Downtime is a problem, particularly in the last quarter of the year.

Christmas Admin Users: These are also largely desktop based, but require a secure client – after all, these elves look after the list of who’s been naughty and nice. They’ve recently moved this application to a browser based system, but have other smaller packages too.

What was the solution?

Eric decided VMware Horizon would work best for his organisation, and worked with Xtravirt to ensure the design and transition went smoothly.
It was decided that a virtual desktop approach would suit the Toy Factory and Christmas Admin users, so a Horizon View solution was deployed. The Toy Factory users accessed non-persistent virtual desktops via zero clients. If they hit a software problem, it was easy to fix by simply logging off and back on again, with the virtual desktops refreshing themselves. The zero clients, being relatively simple and quite rugged devices, proved more reliable too.
The Christmas Admin users went down a similar path, however they were given security tokens for two-factor authentication. They liked the security aspect, but what they liked more was that their desktop sessions weren’t set to log off immediately on disconnect, which allowed them to move between devices in different locations and still access their running session.
While some applications were installed straight into the Virtual Desktop image, Horizon Workspace was deployed to provide applications assigned as needed. Because Santa’s List application was SAML (Security Assertion Mark-up Language) compliant, the application was published via Horizon Workspace based on membership of the Christmas Admin group, leveraging Horizon Workspace Single Sign On capabilities. Christmas Admin users could log onto View using their token and access their key application in a secure manner without the need to keep entering passwords.
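The group-based publishing described above can be sketched as a simple entitlement lookup: a user sees the union of the applications published to every group they belong to. This is a minimal illustration of the idea, not Horizon Workspace’s actual API, and the group and application names are hypothetical.

```python
# Hypothetical group-to-application entitlements, in the spirit of publishing
# Santa's List to the Christmas Admin group via the workspace portal.
ENTITLEMENTS = {
    "christmas-admins": {"santas-list", "mail"},
    "toy-factory": {"mail", "inventory"},
}

def apps_for(user_groups):
    """Return the sorted union of applications published to the user's groups."""
    apps = set()
    for group in user_groups:
        apps |= ENTITLEMENTS.get(group, set())  # unknown groups grant nothing
    return sorted(apps)

print(apps_for(["christmas-admins"]))  # ['mail', 'santas-list']
```

The single sign-on piece then sits on top: once the portal knows who you are (via SAML), it only ever offers the applications this lookup returns.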
Older applications were re-packaged using VMware ThinApp for deployment through Horizon Workspace to the virtual desktops.

For Santa, and a few other VIP users, Eric deployed Horizon Mirage. Now Santa has a shiny new Windows 8.1 touch screen laptop, with his data protected over the internet via Horizon Mirage. If his new laptop falls off the sleigh, Eric can recover the data to either a replacement laptop to be picked up later, or to a virtual desktop, allowing Santa to quickly access it over Horizon View using View’s web browser access from a Web Café, or via the View client on his mobile phone.

More importantly, he can still access his locally installed applications and data regardless of connectivity – which is important when he’s checking his Excel spreadsheets full of addresses…
When Santa does have a connection back to the North Pole, he can also access the Horizon Workspace portal to gain access to his allocated applications. This means he doesn’t have to maintain favourite URLs for web based applications as they’re all available in the workspace.
And so Eric can sit back with some eggnog, content that he’s done his bit to help Santa in the smooth running of yet another Christmas.
If you’d like to learn more about the VMware Horizon Suite, or any of our virtualisation solutions, we have lots of experience to share, so please contact us.

The VMware User Group (VMUG) hosts global events designed to enable customers and end users to interact with the community through knowledge sharing, training and collaboration. I’ve attended a couple of the London based VMUGs over the last couple of years, but this was my first time attending the UK VMUG.
Due to a late change in work commitments, I was fortunate enough to be able to attend this year’s UK VMUG. From my perspective, these events provide an opportunity much closer to home than VMworld, to interact with other community members and continue to stay up-to-date with announcements, industry trends and technical content.
The first thing that struck me this year was the agenda and line-up. I’ve taken note of this in recent years (just out of curiosity), but I genuinely thought ‘wow’, that’s some line-up in terms of speakers, sessions and content, considering the event is sponsored with no registration fee. It took me around 20 minutes to decide which sessions I wanted to attend, which showed the variety, quality and quantity of the sessions were of the highest order.
After arriving on Monday afternoon, I headed to the vCurry evening (thanks to Jane of the VMUG committee), tucking into some food (yes, a curry!), catching up with colleagues and then awaiting the start of the vQuiz. The quiz was entertaining and fun: 30 questions with a mix of categories, although it took me back in time to my VCP 3 and 4 exams, with the expectation of memorising and knowing maximum supported configuration numbers. The table I was on finished 3rd, but following some technicalities and an overrule by VMware EMEA CTO Joe Baguley, we were promoted to 2nd place! As a group, we decided to feed the prize back into the community at the main show.

On to the Conference

Keynote Address

A cold morning started with a brief introduction from VMUG leader Alaric Davies welcoming attendees, followed by the keynote from Joe Baguley (CTO, VMware EMEA) titled ‘Rant as a Service’. The high-level summary: today, the goal for those of us in IT is to deliver applications to the business, through whatever means. We’re on an iterative IT business process circle of Data, App and Analysis, whether it takes 12 months or 2 years to complete projects. How can we reduce this? VMware is continuing the journey towards the software-defined enterprise, driven by policy management and automation, abstracting the entire physical layer into software, with the obvious advantages of the intelligence and flexibility of the code within. No longer is the focus specifically on hardware or infrastructure, but on the layer above in software, described as ‘Infrastructure as Code’, and on the innovation VMware is rapidly delivering to the market across the datacentre to achieve this.

Breakout Session #1

The first breakout session I attended was around a hybrid storage solution and the upcoming Virtual Volumes (VVOLs) integration, which held particular interest, as this is going to change how we manage storage capacity and provisioning. It’s clearly part of VMware’s overall strategy to define the datacentre by software, and bring policy management to admins, without worrying about the underlying characteristics of the storage hardware.
The interesting thing to note about this vendor is that their integration of VVOLs will come using the vSphere APIs for Storage Awareness (VASA) provider on the storage array, plus array firmware updates. In contrast, the presenter mentioned a few other vendors (not all) are going to be using virtual appliances for the integration, so how does this address manageability and availability concerns around the appliances?

Breakout Session #2

vRealize Operations 6.0, formerly known as vCenter Operations Manager (vC Ops), is due for release by the end of the year. Following the announcements at VMworld 2014, I attended this breakout session to gain further insight and clarity into the new offerings. The product has undergone a massive overhaul (for the better) in terms of architecture, scale, deployment and usability, to name a few. Almost 1 million lines of code have been changed; however, the core principles and concepts have been ported across into the new product. We still deal with the familiar Health, Risk and Efficiency major badges, for example. A simple migration path from existing deployments does exist for customers (dependent on current version). I’m looking forward to getting my hands on the product and taking it for a spin!

Breakout Session #3

After lunch (including nibbles, biscuits and coffee with a few colleagues), I headed to the Horizon Architecture and Design session, as this fits around my core skills and interest. The important message to take here, aside from the various technical input around host, storage and network design for example, was focusing on your specific use cases and behavioural working patterns of the end users (engaging with them) and analysing assessment data before beginning to consider a proof of concept or design of the solution. Depending on these outcomes, you may not require a full Windows 7 desktop for example; perhaps publishing applications or shared desktops are going to meet your requirements, thus drastically reducing infrastructure required and cost.

Breakout Session #4

The final breakout session I headed to was the vSphere Availability Update. This session focused on products such as vSphere Data Protection, vSphere Replication, vCenter Site Recovery Manager and Stretched Storage Clusters. Out of all these, I’ve worked more closely with Site Recovery Manager, and the deployment of the new v5.8 is now quicker and simplified with the optional ability to install SRM using an internal (vPostgreSQL) database, therefore eliminating the need to request Database Admins to setup a database, with the necessary privileges and roles. Also, there is now full integration with the vSphere Web Client, among many other enhancements.
Future versions, scheduled for next year, are being completely re-written from the ground up, and barriers within the code will be removed, which should allow SRM to use three sites instead of the current limitation of two (although a many-to-one topology does exist today, used more commonly by service providers). Further, there are solutions available from other vendors, combined with VMware, that could utilise three sites today if needed.

Closing Keynote

The theme of the closing keynote, presented by IT industry expert Chris Wahl, was ‘Stop being a Minesweeper’. I didn’t quite know what to expect from reading the title, but having used some of the training materials Chris has produced, I knew it would be presented in an entertaining fashion. Overall, the message delivered was that automation is the way forward, and to begin learning some scripting now, such as PowerShell, PowerCLI or Python, to ‘get the skills to pay the bills’. Finally, vCenter Orchestrator for automation is a ‘hidden gem’.

Final Thoughts

To summarise, I thoroughly enjoyed the event and the opportunity to meet folks I’ve only communicated with through social media before. The UK VMUG provides an optimal platform to collaborate with the community, partners and the VMware staff who have been asked to present. The VMware Global Support Services team were also on hand, to answer any pending questions or escalate existing support tickets, overall a fantastic idea. Also, the exhibit hall is worth visiting to speak directly with vendors and learn about new technology to help overcome current business challenges.
I would like to thank the VMUG committee for all the hard work that goes into the preparation and planning, to organise and finalise such a smooth and efficient event.
The UK VMUG presentations are available online and can be downloaded from here.
vFactor
If you are interested in presenting at a VMUG event for the first time, you can register here for the London VMUG in January 2015. This will be a lightning talk of 10 minutes, and you will be mentored, prepared and advised by a current community speaker, who will provide guidance and wisdom around your presenting skills before you deliver it at the London VMUG. There are also some fantastic prizes on offer as an incentive.
Xtravirt is always here to help you with your virtualization challenges so if you have a requirement, please do contact us and we’ll be happy to assist.

I have for quite some time been a regular attendee at the London VMUGs, but have only been to one UK meeting before - that was in fact the first one ever, so one of the first things I noticed when attending this year was how much the event has grown over the last three years.
The event was well attended by Xtravirt consultants, which was ideal as, having only been at Xtravirt for a week, it gave me the opportunity to meet and converse with a number of my new colleagues for the first time – all of whom are highly active within the community.
VMUGs are a great opportunity for us to keep up to date with product and technology developments, talk to customers and fellow IT professionals about their requirements and how they are using (or looking to use) VMware and partner’s products, and participate further in the virtualisation community. Xtravirt’s commitment to supporting the community by this means was demonstrated to me very clearly by the fact that five of the company’s twelve vExperts were present at the event, two of whom were presenting sessions.

The line-up

Joe Baguley, CTO of VMware EMEA, gave a very entertaining opening keynote titled ‘CTO Rant-as-a-Service’. It wasn’t so much of a rant, but it was a great view of what's going on in the industry from VMware’s point of view. One of the key themes was the significant decrease that will be seen in the time between the traditional IT refresh cycles going forward, and how the whole Software Defined Enterprise/SDDC concept supports that. He also talked about some of the exciting announcements that VMware made at VMworld, such as EVO:RAIL and EVO:RACK. I noted that Joe also commented that the VMUG is the best community event that he has involvement with, which reinforces my point above as to their significance and importance within the ecosystem.
The next session I attended was Julian Wood’s ‘The Unofficial Low Down on Everything Announced at VMworld’. I wasn’t able to attend VMworld this year, so I thought this would be a great opportunity for me to get an overview of pretty much all the new products and improvements that were announced at VMworld. I was right - Julian put together and presented an excellent, information-packed session, and to be honest I was struggling to make coherent notes without missing anything as the information was coming thick and fast, but fortunately the VMUG Committee have kindly uploaded his comprehensive slides (each of which includes supporting links) to the London VMUG workspace on Box.com.
The next session I selected was ‘What's Coming for vSphere in Future Releases’ presented by VMware’s Chief Technologist Duncan Epping. Duncan expanded on a number of the products that Joe had mentioned in his keynote and also detailed some exciting improvements to existing products and features we all know and love!
The first session I attended after lunch was presented by our very own Jonathan Medd, and was entitled ‘Designing Real-World vCO Workflows for vRealize Automation Center (vCAC)’. This session was one of the reasons I really wanted to attend the UK VMUG this year – Jonathan is an expert in his field whose sessions always draw a good crowd. I am lucky enough to have worked with him personally for a number of years on and off, and every time I speak to him I learn something new, so I was expecting a good session. I am going to be heavily involved in vCAC and vCO at Xtravirt because of my interest and skills in scripting and automation, so I was quite excited about hearing tips from someone who has most definitely ‘been there and done that’. And I wasn’t disappointed..!
The space available in the mezzanine section for this session was overcommitted by at least 100%, and people were crowding round the table two rows deep in places, which to my mind demonstrates the community interest in automation based around vCAC and vCO. Jonathan ran this as an interactive session and got everybody to think about important aspects of designing an automation process that need to be considered at an early stage, and we discussed within the group the pros and cons of many of the possible approaches. As someone who is just getting up to speed with these products, I found it (as I expected to) an incredibly interesting and informative session – thanks Jonathan!
Up next was ‘vSphere Availability Updates and Tech Preview’ by Lee Dilworth, Principal Systems Engineer at VMware. This was a great opportunity to brush up on the significant number of improvements that VMware have made, and continues to make, in this area. The slides for this session have also helpfully been uploaded to the Box workspace.
After this, I went along to a partner session, ‘Re-thinking Storage by Virtualizing Flash and RAM’ by Frank Denneman, Chief Evangelist at PernixData. PernixData are doing some exciting things with their ‘FVP Cluster’ technology, which allows any VM to remotely access flash and RAM on any other vSphere host, enabling fault-tolerant storage write acceleration with pretty impressive results. FVP supports all VM operations with no impact on performance, so features such as vMotion, DRS, HA, snapshots, VDP and SRM continue to operate transparently. Nice!
The final session of the day was a hugely entertaining closing keynote by Chris Wahl, a double VCDX, prolific blogger, author and vExpert from Chicago who describes himself as a ‘Virtualization Whisperer’! This session was entitled ‘Stop Being a Minesweeper’, and in it Chris talked us through his journey into automation and included a number of good resources to help people begin the learning process.
So all in all, a great day, and thanks go to the London & UK VMUG Committee who once again did a fantastic job of organising the event – primarily Jane Rimmer, Alaric Davies, Simon Gallagher and Stuart Thompson, and of course also the wider VMUG organisation.

Want to get involved?

The Committee are running a competition for new community speakers, known as ‘V-Factor’! Entrants will have the opportunity to give a 10-minute lightning talk at the London VMUG meeting in January 2015 and could win one of a number of great prizes. You can find more here if you are interested in entering.
Xtravirt is always here to help you with your virtualisation challenges so if you have a requirement, please do contact us and we’ll be happy to assist.

The UK VMUG is a yearly full day event under the banner of the VMware User Group organisation. Larger than the regional VMUGs held around the UK, the idea is to gather those from all parts of the UK interested in VMware virtualisation to a user conference with some of the best content available to you outside of VMworld.
Held at the National Motorcycle Museum near Birmingham the day includes breakout sessions, side sessions and discussion groups. There are also opportunities to ask questions with well-known VMware employees, including the EMEA CTO Joe Baguley, and meet with many of the most popular vendors in the marketplace and community contributors with real world experience.

The warm-up

The event is preceded the night before by the now traditional vCurry and vQuiz night. An excellent opportunity to relax with fellow attendees, enjoy some Birmingham curry and test your knowledge of obscure items from the vSphere Configuration Maximums Guide, supported operating systems and the History Channel.
During the evening my Xtravirt colleague Ather Beg and I were interviewed for a future episode of the popular vNews virtualisation podcast. We chatted about what we thought of the vCurry and vQuiz and what we were looking forward to for the following day’s event.

The main event

The main event kicked-off with the opening Keynote delivered by VMware EMEA CTO Joe Baguley with his take on the future trends of IT, particularly for us infrastructure folks.
Joe has a very relaxed presenting style for an executive and is also not afraid to tell it like he thinks it is. Telling a room full of hundreds of infrastructure people that they will need to change how they currently approach their career because changes in technology may significantly impact their existing role is quite a tough but compelling message. ‘Infrastructure as code’ was the key takeaway message.
After the keynote session the Solutions Exchange opened with the opportunity to tour the various vendors and the solutions they have to offer.
A significant part of the rest of the day gave everyone opportunities to take part in breakout sessions from vendors, VMware employees and community sessions on topics including VMware Horizon View, vCloud Air, Virtual SAN, NSX and vSphere Futures.
I was fortunate enough to be given the opportunity to contribute to the event by hosting one of the community discussion sessions. The title of my discussion was ‘Designing Real-World vCO Workflows for vRealize Automation Center’, with the idea of generating conversations around some of my recent experiences on a project utilising and delivering these technologies. The session was run twice, and both groups contributed to some excellent discussions around the questions to ask when identifying the requirements for automation projects, and what would be needed to develop vCO Workflows to implement those requirements.
The day was finished off with the closing keynote from well-known virtualisation expert Chris Wahl and again it was good to hear a lot of emphasis on the suggestion that infrastructure professionals will need to learn how to code.
Alaric Davies from the UKVMUG organising committee closed out the day with prize giving and a round of thanks to all contributors.
A big thank you to all of the organisers for putting on such a great event and for giving me the opportunity to contribute!

UK VMUG is one of the biggest national level VMware User Group conferences and is held annually in Birmingham. For those who don’t know, VMUG stands for VMware User Group and they are an independent customer led organisation and hold meetings and conferences for the benefit of users of VMware products. The annual conference is a day event with sessions/talks from prominent speakers and breakout sessions. Vendors also typically sponsor the event and are there to showcase their offerings and answer any questions.
At Xtravirt, we aim to provide our clients with the best solution that fits their requirements, which makes this conference an ideal one for us to participate in. It not only gives us an opportunity to meet people from different industries and hear about their challenges but also allows us to speak to vendors, to see if their latest offerings can help fulfil the needs of our clients.
For that reason, Xtravirt usually has a strong presence at such conferences and this one was no different. We were there as both attendees and presenters. Being spoilt for choice, we spread out to attend the sessions that interested us the most before it was time for some of us to present. As the day is jam-packed with interesting sessions and great solutions, one has to pick what to attend very carefully.
Our first hosted session was from Sam McGeown, who discussed VMware NSX. He is a VCP-NV and spoke about the architecture of the solution, things to keep in mind while designing such environments and how to prepare for the VCP-NV exam, which is decidedly much harder than the regular VCP exams.
Another member of the Xtravirt team, Jonathan Medd, hosted a session on “Designing Real-World vCO Workflows for vRealize Automation Center”. His experience, and the fact that he’s currently working on such a project, made him ideally placed to talk about it. He took his audience through the common issues surrounding such projects, the complexities faced and the things that one might easily forget when embarking on such a project.
As always, it was great to be present at UKVMUG and meet so many like-minded people. If you are a user of VMware products, I would highly recommend you attend this yearly event (in addition to your local VMUG). It’s free, fun and a day well spent: along with attending key relevant sessions, you may also find people who are facing the same challenges as you and get the chance to find out how they’re planning to resolve them.
A number of people, including Alaric Davies, Jane Rimmer and Simon Gallagher, work very hard to make this great day possible.
VMUG is well worth being a part of; you can find out more and register for UKVMUG (or your regional one) at www.vmug.com.

I recently attended an IT Service Management event, and one of the speakers advised the audience that agentless software auditing is the preferred method. I do not fully support this viewpoint, and this article briefly discusses the relative merits of agent-based versus agentless management techniques in the auditing context.
Technical Note: The idea of agentless doesn’t exist in my mind - if you connect to WinRM, WMI, SSH etc. you’re already connecting to a service (agent) running on a system – however for simplicity we’ll stick to agent vs. agentless.
The choice between agent-based and agentless is normally not a technology decision but one of operational versus project requirements: is there an ongoing need for management after the data capture, or is this a one-off event? Additional contributing factors include change management, capex costs, and timeframes for data discovery.
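To make the technical note above concrete, here is a minimal Python sketch of agentless collection over SSH. The hostname, user and remote command are illustrative assumptions; the point is that even with “no agent”, the collector is simply a client of the sshd service already running on the target.

```python
import subprocess

def build_agentless_command(host, remote_cmd, user="audit"):
    # "Agentless" collection is really a connection to a service (sshd)
    # already running on the target - there is no true "no agent" case.
    return ["ssh", f"{user}@{host}", remote_cmd]

def collect(host, remote_cmd):
    # One connection per host; at scale, managing these serial or
    # multi-threaded connections is exactly the "thread control" drawback.
    result = subprocess.run(build_agentless_command(host, remote_cmd),
                            capture_output=True, text=True, timeout=30)
    return result.stdout

cmd = build_agentless_command("server01", "uname -a")
print(cmd)  # ['ssh', 'audit@server01', 'uname -a']
```

The same shape applies to WinRM or WMI: swap the transport, and the dependency on an enabled, secured remote management service remains.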
Below is a comparative view of both methods.

Agent based

Pros

Device can be monitored regardless of network connectivity

Data can be collected prior to service starts

Agents can run as a local system and communication can utilise certificates

Scanning can be scheduled to run without requiring serial or multi-threaded connections

Cons

Requires an agent install – however, a well-managed environment should cater for this, e.g. included in the gold image

Agent conflicts – some management tools can conflict, however this is usually mitigated by a suitable design

Access to systems management tools can come with political hurdles, however effective sponsorship and good communication should mitigate this

Agentless

Cons

Scans rely on remote management services which must be enabled and secured

Troubleshooting data collection can be time consuming

Catering for DMZ or multiple forest/domain environments can be problematic

Thread control can be problematic

In summary, choosing a single method of data collection is not ideal practice; a combination of technologies and methods will give you the most detail about an environment. In my experience, long-term endpoint management strategies without agent-based management result in a poorly managed environment.
The idea of a standalone Configuration Management Database is antiquated; a federated Configuration Management System is what is required for a well-managed environment. The path to achieve this, however, is not short or easy. Review your requirements continually, pick the right tools for the right outcomes, and consider both short- and long-term objectives; this should ensure you are able to use solutions that let you make the right business decisions.
If you would like to learn more about IT transformation strategy, virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

VMworld 2014 Europe in Barcelona has just finished and I had the pleasure of attending again this year. This is my favourite conference on the technical calendar as almost everyone from the VMware community is there and you get a chance to have great conversations about what is happening. It’s also a chance for VMware to show customers, partners and vendors the roadmap for existing and upcoming products. As always, Xtravirt showed its commitment to its team of consultants, and the event, by organising a big presence at VMworld.
Whilst I blogged every day about my experiences at this year’s event (see links at the end of this post), this post is about my views on why I think VMware is now in the prime position when it comes to providing solutions that can truly satisfy the definition of “Software-Defined Data Centre (SDDC)”.
Last year, VMware announced that their focus for the year and going forward will be to create and develop products that allow organisations to create “policy-driven” deployments. Solutions that will be completely automated and once defined, can be made available to their users “as a service” based on their entitlements.
We all know that VMware’s meteoric rise in the virtualisation world is due to their unparalleled solutions when it comes to compute virtualisation, but now the focus is on virtualising networking and storage layers. VMware has been working hard on this front for the past couple of years, not only on developing products like NSX and EVO:RAIL (RACK) but also on ensuring that these solutions are able to be driven completely from vRealize Automation (formerly vCenter Automation Center).
vRealize Automation has come a long way in the past year or so and holds the key to VMware’s strategy of policy-driven automation of solution deployment. At VMworld, VMware was keen to demonstrate the power of vRealize Automation with all VMware and third-party products and it’s quite clear to see that these products are mature and integrated enough to satisfy almost any requirement.
This overall integration is not limited to private cloud deployments only. There have been big investments in terms of availability and capabilities of vCloud Air. Stretching an environment to the public cloud has never been easier and with its elastic capabilities, there are great use cases where these capabilities can benefit all kinds of organisation. We all know that VMware already provides Infrastructure, Desktop and Disaster Recovery “as a Service” but more features are coming e.g. DBaaS (Database as a Service) and Automation etc.
One other fact that makes me think that VMware is the vendor that currently has the most complete solution is their work on integrating with other technologies e.g. OpenStack and Docker. There are a lot of organisations out there that have existing investments or are developing interest in these technologies. While some people might see VMware’s “Better Together” philosophy as clever marketing, from what I’ve seen, VMware is making real effort in ensuring that ecosystems containing OpenStack and Docker can integrate and work together with vSphere and vRealize Automation.
Considering all this, VMware has a mature, integrated and flexible offering when it comes to SDDC deployments and if you are thinking about starting on this journey the VMware suite is a good place to start.
Being an Enterprise Solution Partner for VMware, Xtravirt has the required skills and experience to help you along this path so if you are interested in deploying any of these technologies, please contact us and we’ll be happy to assist.
My experience of VMworld 2014 Europe is summarised in the Day 0 and 1, Day 2, Day 3 and Day 4 posts.

I recently attended the annual VMworld Europe event and, due to the current focus in my day job, decided to formulate a session schedule largely based on VMware’s NSX for vSphere (NSX-v). My goal was to build on the experience that I’ve gained from working with NSX for the best part of the last year and also learn about the future of the platform along with both VMware and third-party integrations.
The MGT1969 session with Ray Budavari and Zackary Kielich gave an update on the recently rebranded vRealize Automation (formerly vCAC) and its latest integrations with NSX-v. This included native NSX functions that previously relied on vCNS behind the scenes plus the powerful new vRealize Orchestrator plugin for NSX that now drives the REST API-based communications for automation. I also witnessed an impressive demo (NET1949) by Scott Lowe and Aaron Rosen on deploying elastic applications using Docker where NSX-MH (multi-hypervisor) provided the logical network provisioning agility required to scale to this demanding degree.
Attending Dimitri Desmidt and Max Ardica’s session (NET1586) on Advanced Network Services with NSX was a refresher for me due to the fact I had originally trained with them at VMware. It was a useful revision exercise with a comprehensive overview of NSX logical network functions including logical firewalling, load balancing and VPN. Some good questions came up that also forced me to reevaluate my knowledge on a couple of topics and provided me with some test cases to investigate upon returning to my lab environment.
The first day finished with Anirban Sengupta and Srinivas Nimmagadda’s session (SEC2238) on Micro-Segmentation Use Cases with the NSX Distributed Firewall (DFW). I’ve been working with this tool a fair amount and micro-segmentation is one of the most compelling reasons to deploy NSX for a lot of companies. The DFW allows granular vNIC-level firewalling on Virtual Machines, distributed at the Hypervisor layer. The typical model of trust zones, common to traditional data centre firewalling, only really cater for perimeter security and do not address the possibility of lateral attacks once the inside of the network is compromised. NSX facilitates an extremely powerful approach by inspecting traffic directly at source i.e. the vNIC. Integration with Tufin Orchestration Suite was also announced with features including change management and real-time compliance checking for the DFW.
The MGT1878 session by Vyenkatesh Deshpande and Jai Malkani was a highly interesting deep dive into the new vRealize Operations integration with NSX-v. This allows previously unheard of centralised visibility into the platform for monitoring purposes such as tracing both physical and logical topologies for VMs for troubleshooting purposes. Traditional networking opinion may have concerns that overlay technologies such as VXLAN are too opaque from the monitoring perspective but this session did wonders to dispel that perception.
Scott Lowe and Brad Hedlund’s session (NET1468) on IT Operations with VMware NSX covered how to approach delegating administrative access to NSX-v for both network and server admins and gave me some immediately usable material around Role Based Access Control. It was also a very entertaining and well-presented session! Possibly the session I gained the most from was Nimesh Desai’s talk on the NSX-v reference design for SDDC (NET1589). This was a relatively advanced session with good coverage of topics such as VTEP teaming recommendations, NSX Edge scale out with ECMP and physical data centre topologies and how to map NSX-v deployments to them.
Other sessions of note included Francois Tallet’s vSphere Distributed Switch Best Practices for NSX (NET1401) and Ray Budavari’s session on Multi-Site NSX (NET1974). The latter is a topic that is very much of note as currently NSX-v maintains a mapping to a single vCenter server and out of the box implies a single-site configuration. There are, however, multiple means by which a multi-site configuration for disaster avoidance or recovery can be architected when involving technologies such as vSphere Metro Storage Cluster, NSX’s L2 VPN and when considering optimising egress traffic using NSX Edge Service Gateways.
Overall it seemed that, despite the recently debuted technologies such as EVO:RAIL and VMware Integrated OpenStack there was a huge buzz around NSX at VMworld Europe 2014. The goal of rapidly deploying applications in the data centre cannot easily be achieved when network provisioning lags behind compute in its agility. NSX is rapidly developing a rich feature set building upon its core network hypervisor and network function virtualisation and is experiencing tighter integration with VMware’s core toolsets in the vRealize suite that facilitate automation and monitoring. This will surely see it become deployed in more and more data centres and I relish the opportunity to continue architecting these solutions for our customers.
If you would like to learn more about our cloud solutions, or wish to discuss your workspace challenges, we can help - please contact us today.

Introduction

If you’re thinking of implementing a private/hybrid infrastructure as a service (IaaS) platform, then one of the key considerations is how to operate the platform. I’ve been researching online to see if there are any industry standards in this area and have found detailed analysis to be lacking. VMware and IBM provide some guidance, which appears to be aimed at the policy and people perspective, but I’ve not found much that describes these activities at the process and procedural level.
In this article I’ll explore the creation of a virtual server on a private cloud tenant to see how this fits in with ITIL guidance when considering change management.
The word Cloud can be used to describe a number of different solutions. For the purpose of this article we are looking at Cloud from an infrastructure perspective. VMware’s definition of cloud computing (one that I and the industry seem to agree with), has the following characteristics:

ITIL Lifecycle Elements

A typical ITIL service management lifecycle would normally contain the following processes -

Cloud Platform solution components

One of the key differentiators between traditional IT and cloud-based computing is the concept of multi-tenancy. The following diagram shows the distinct layers that make up a cloud solution.
In this example we are going to talk about the customer-facing “tenant” and “platform” layers. Traditionally we would use a standard Change Management Process across the board; however, in a cloud environment one of the things we are looking for is agility and self-service, because we have further layers of abstraction to consider:

Tenant Change Management - changes that affect the environment provided through a tenant abstraction which may include tenant configuration and virtualised guest services

Example Requirement – Single Server Deployment

Take the following scenario:
Note: for the purposes of this article I’ve provided a simple view without refining other process interactions such as configuration management, service level management etc.
Jane is a member of the applications development team at a fictitious company called BlueStar. Jane is working on a project where a single virtual server is required. This project has a valid business case and has been approved by the programme governance board. In a traditional environment a service design package would be created, acceptance criteria fulfilled, a normal change raised/reviewed/approved, a server be procured, the server deployed, tested, handed over to support and change closed.
Now how would that work in a cloudy world?

Public Cloud

Jane would log into a self-service portal and request a single virtual machine from the Cloud Service Catalogue. In a public cloud system (e.g. Azure/AWS/vCHS) out of the box, after specifying a few details a server would be provisioned and Jane granted administrator rights to it. She would be billed on a pay-as-you-use basis, and the server would be accessible as a loosely-coupled service (VPN/internet access/APIs etc.).

Private Cloud

In this example I’m using VMware vCAC, vCO and a pseudo ITSM tool.
Jane has project/financial approval to proceed. She logs into vCAC and goes to the service catalogue (I’ll refer to this as the cloud service catalogue, as it doesn’t replace the business or technical service catalogue). Jane requests a single virtual machine and provides the relevant details. This initiates a workflow which registers a standard change in the ITSM suite or cloud management platform (CMP). Because the IaaS system operates as a utility, the act of deploying a virtual server is pre-authorised in the change management process (essentially this can now be managed through the Request Fulfilment process).
We have options: we could simply log the change, provision the server in an automated/routine manner and provide change feedback through to closure via a workflow. We could also use approval mechanisms to provide an additional level of governance and control. Whichever method is used, the provisioning of a new server is in line with ITIL good practice. Utilising technology we can also integrate with other processes in an automated manner; for example, as part of the automated deployment of the virtual server we may have included a software agent which provides integration to a configuration management system, and we would also be able to notify different process owners of the action either in real time or via reporting mechanisms.
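The pre-authorised standard-change flow described above can be sketched in a few lines. This is an illustrative Python stub, not the actual vCAC/vCO or ITSM tool APIs – every function and field name here is an assumption standing in for a real integration call:

```python
changes = []  # stand-in for the ITSM suite's change log

def register_standard_change(summary):
    # A standard change is pre-authorised, so no CAB approval step is needed
    change = {"id": len(changes) + 1, "type": "standard",
              "summary": summary, "status": "open"}
    changes.append(change)
    return change

def provision_vm(name):
    # Stand-in for the orchestration workflow that actually deploys the VM
    return {"name": name, "state": "deployed"}

def close_change(change, outcome):
    # Feed the result back so the change record is closed automatically
    change["status"] = "closed"
    change["outcome"] = outcome

def request_fulfilment(vm_name):
    # Request -> standard change -> automated provision -> change closure
    change = register_standard_change(f"Deploy virtual server {vm_name}")
    vm = provision_vm(vm_name)
    close_change(change, "success")
    return vm, change

vm, change = request_fulfilment("bluestar-app01")
print(change["status"])  # closed
```

An approval step, if required for extra governance, would simply sit between registering the change and provisioning the VM.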

Change Governance, Control & Lifecycle Management

In this example we have utilised project and programme management to provide a level of governance and control rather than utilise the change approval board. This does however highlight some potential areas for concern. Below are just some of the concerns that may exist in relation to cloud and service management:

Does the project/programme board ensure that the service provision aligns with the enterprise IT strategy?

Are existing services analysed to check if functionality already exists?

Is risk and security considered thoroughly prior to authorising the provision of a new server?

How will testing be conducted?

It is assumed that the service template (virtual machine) will be in a highly tested and verified state, so this shouldn’t be a problem; however, changes in configuration and application load may have far-reaching implications. This would suggest that post deployment the standard/normal/emergency change route would still be required.

What continual governance process will be used to assess system/platform usage?

How do we ensure financial approval is in place?

How do we conduct demand management in a fully autonomous environment?

Peak Demand vs. Average Demand etc.

How do we communicate with our customers to understand demand?

Does our chargeback model accommodate standby/overcapacity?

Does providing “room to grow” capacity negate the benefits?

Do we have a good supply chain and integration model for rapidly bolstering our IaaS platform?

Strategy, Design, Change, Release and Deployment

There are a number of policies, processes and procedures that play a part in the fully defined and managed change world. I have provided a subset of activities that need to be considered:

Service Portfolio Updated

Service Design Package (SDP)

Capacity Planning

Request for Change (RFC)

Release and Deployment

Change Closure

Conclusion

Achieving the agility and flexibility of cloud computing whilst providing a valued customer experience and driving business value presents a real challenge. For the internal IT division, becoming an IT service provider/broker is no easy feat.
Understanding how ITIL and cloud computing complement each other is one of the key aspects. A rigid and inflexible change management policy may heavily impact the benefit realisation of cloud computing. A Just-Do-It (JDI) approach may scare the business away from cloud or worse (for the internal IT provider) into the hands of a 3rd party.
Automation and agility bring many benefits. Harnessing them can give IT the edge: an IT function that is close to the customer, with robust people, process and technology skills, will be seen as a valued enabler, and the business will think twice before considering outsourcing.
If you would like to learn more about virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

When designing a virtual infrastructure to host either desktops (VDI) or servers it’s important to size it correctly. Time and time again I see people misunderstand the approach to sizing by reading metrics at face value without fully understanding them.
In this article I’m going to focus specifically on compute sizing, that is how much CPU and memory capacity will I need on my virtual hosts to run my virtual workloads. I’ll talk about Disk and Network IO in another post but the same principles apply.
To size correctly we need to do the following:

Understand what workloads are in scope

Monitor these workloads over a representative period to understand their performance requirements

Convert the metrics into hardware requirements

Let’s break this down…

1. Understand workloads in scope

This is a fairly simple concept. You need to know which workloads are going to run on the new environment so you know which ones to profile.

2. Monitor these workloads

To design a virtual infrastructure of any scale you’ll likely need to use a tool such as VMware Capacity Planner, PlateSpin Recon etc. Whatever tool you use you want to measure some core Metrics:

CPU utilisation (MHz)

Memory Utilisation (Active Memory)

It’s important… no, it’s critical that the above CPU and memory metrics don’t just show peak and average values, but the actual values for each workload as a function of time.
Let me try and explain. Say for simplicity we have 4 VMs in scope. We profile these over a 30 day period and then report on their performance over a typical 24 hour period. The results may look something like the chart below where each VM is peaking at 500 MHz utilisation but each of them peak at different times of the day.

Now what figures should we use to size?

Average?

If we use the average of each VM over the monitoring period then this would come out at only 200MHz total required! This would be only a quarter of the total compute power you need to buy so clearly sizing on average is often going to get you in trouble and give you performance issues.

Peak?

If we use the peak value of each VM and add these up to produce the total MHz required, this comes out at 2,000 MHz – over twice what is actually required. I have heard a number of people say “it’s better to be safe than sorry”, however remember that ordering more than twice the compute required could cost your company or customer hundreds of thousands of pounds extra – money that could be better spent on other areas of the project.

Cumulative Peak

The term I use for correct sizing is cumulative peak. Take a step back and think about what you’re trying to do here: you’re trying to size a virtual platform with enough compute power to run all the virtual machines during their observed peak period. If you have a large set of VMs (100s or 1000s) and you profile them for a representative time period (30 days or more), this will give you accurate enough data to size correctly.
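The three approaches can be compared with a short calculation. This Python sketch mirrors the four-VM example above; the 30 MHz baseline and the spike times are illustrative assumptions:

```python
# Four VMs sampled hourly over a day; each peaks at 500 MHz at a
# different hour, with an assumed 30 MHz baseline the rest of the time.
HOURS = 24
baseline, spike = 30, 500
vms = [[spike if h == vm * 6 else baseline for h in range(HOURS)]
       for vm in range(4)]

# Sum of per-VM averages: undersized, peaks are smoothed away
avg_total = sum(sum(series) / HOURS for series in vms)

# Sum of per-VM peaks: oversized, assumes every VM peaks simultaneously
peak_total = sum(max(series) for series in vms)

# Cumulative peak: the peak of the summed workload at each point in time
cumulative_peak = max(sum(series[h] for series in vms)
                      for h in range(HOURS))

print(round(avg_total))   # 198  MHz - roughly a quarter of what's needed
print(peak_total)         # 2000 MHz - over twice what's needed
print(cumulative_peak)    # 590  MHz - what the platform actually has to serve
```

The key line is the last calculation: sum the workloads first, then take the peak of that sum, rather than the other way round.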
The example I gave in the charts is there to illustrate the point and is deliberately simplistic. In practice you’ll also need to account for:

A margin of error (say 5-10%)

Growth

Your specific knowledge of the customer that might warrant additional considerations, e.g. you profiled during the quieter months and need to allow for the busy summer months.
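These adjustments layer on top of the cumulative peak figure. As a quick sketch (the margin and growth percentages here are illustrative assumptions, not recommendations):

```python
import math

cumulative_peak_mhz = 590   # from the profiling exercise, as in the example above
margin = 0.10               # margin of error, within the 5-10% band suggested
growth = 0.20               # assumed growth over the platform's planning horizon

# Apply headroom multiplicatively on top of the measured cumulative peak
required_mhz = cumulative_peak_mhz * (1 + margin) * (1 + growth)
print(math.ceil(required_mhz))  # 779
```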

The key point of this post is to be aware that sizing your virtual environment based on workload average or peak alone can have dramatic implications.

But peak values are still useful

Understanding peak workload values is useful when it comes to right-sizing individual VMs in order to give them the correct CPU and memory specification. Here the peak values should be used. This is particularly important when you’re planning on running these workloads on an environment such as vCHS where you cannot over-commit memory. If you over-allocate memory when you don’t need to, you reduce the number of VMs you can run on your vCHS cloud.
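A quick illustration of what over-allocation costs on a platform with no memory over-commit. The host and VM figures below are assumptions purely for the sake of the arithmetic:

```python
import math

host_ram_gb = 256   # assumed memory per unit of purchased capacity
vm_count = 100      # assumed number of VMs to place

results = []
for allocated_gb in (4, 8):  # right-sized vs. over-allocated per VM
    # With no over-commit, allocated memory is reserved in full,
    # so density is a straight division of host RAM by allocation
    vms_per_host = host_ram_gb // allocated_gb
    hosts_needed = math.ceil(vm_count / vms_per_host)
    results.append((allocated_gb, vms_per_host, hosts_needed))

print(results)  # [(4, 64, 2), (8, 32, 4)]
```

Doubling the per-VM allocation halves the density and doubles the capacity you have to buy, whether or not the VMs ever use the memory.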
If you would like to learn more about virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

VMware released Horizon 6 on the 9th April, only a day after the demise of Windows XP support. Horizon 6 has an array of fancy End User Computing (EUC) related features that make a really compelling case, however, writing a piece solely around this is not the plan with this article.
Instead let us consider the VMware portfolio for a moment. It has been a busy time recently, with a number of innovative technologies being released and starting to gain some traction. Let us consider a handful of these technologies as a bit of a thought exercise for a moment.

New and Improved vSphere and vSAN

VMware vSphere 5.5 Update 1 was released relatively recently. This accompanied the release of vSAN – VMware’s own storage virtualisation technology. This, in itself, is something of a game-changer. Taking a set of commodity servers with some SSDs and spinning rust, it is possible to configure a vSphere environment with performance and resilience without needing to buy an expensive SAN with all the paraphernalia that such a solution usually requires.

vCloud Automation – Presenting and Automating Service Provisioning

Next, let’s consider vCloud Automation Center. This provides a highly customisable self-service portal that allows an enterprise to present cloud provisioning services to customers, whether internal private-cloud customers or, in the case of cloud providers, external ones. A nice idea – a user can request a service from a catalogue and all the technical processes can be automated and hidden away.

Horizon 6 - End User Compute, Reloaded.

Now we look at the new boy on the block – Horizon 6. This is more than simply an extension of the Horizon View stack. With the release of Horizon 6, we start to see the integration between the somewhat disjointed elements of the previous Horizon Suite. We see a raft of changes:

An improved Horizon View, with enhanced performance and extra features.

A centralised Workspace interface for ease of use for the end user.

Local SSD storage

And let’s also consider that this release directly supports vCAC and vSAN, so we can do clever things with provisioning services to customers and the storage infrastructure without resorting to third party solutions.

Networking with NSX

The last item on the shopping list is VMware NSX. This is VMware’s new network virtualisation stack which allows the provisioning of a whole networking environment within a virtual infrastructure:

Provisioning of virtual VLANs – VXLANs – across a virtual estate. Pretty much as many as you will ever need, as well as several ways of bridging to physical VLANs upstream.

Firewalling – at several different levels, from the virtual NIC on a VM, to VXLAN wide and across network boundaries, NSX includes its own firewall solution, as well as providing integration mechanisms to support third party options.

Putting This All Together

So, taking this list of products, we can consider our thought exercise. What do we get if we combine all of these into an integrated EUC solution?
Firstly, we can look at provisioning. vCAC provides a console presenting a catalogue of end user services – remote desktops of different specifications, access to applications and services. The service catalogue can then automate the provisioning of these services, as well as the underlying infrastructure where applicable.
The infrastructure would, as you would expect, sit on a vSphere environment, augmented with vSAN and NSX. In the case of vSAN, considerable performance can be gained through the use of locally installed SSDs presented across hosts as a virtual SAN. Scaling is relatively straightforward to accomplish too: as hosts are added in a scale-out fashion, so too is storage, presenting a potentially linear model.
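The linear scale-out model can be illustrated with a tiny capacity sketch. This assumes the default vSAN protection of FTT=1 with mirroring (two copies of every object) and deliberately ignores slack space and other overheads; the per-host figure is made up.

```python
# Illustrative vSAN scale-out model: raw capacity grows linearly with hosts,
# and mirroring (FTT=1, two copies) halves what is usable.

def usable_capacity_tb(hosts, raw_tb_per_host, copies=2):
    raw = hosts * raw_tb_per_host   # raw capacity scales with host count
    return raw / copies             # mirroring stores every object twice

for n in (3, 4, 8):
    print(n, "hosts ->", usable_capacity_tb(n, raw_tb_per_host=10), "TB usable")
```

Doubling the host count doubles usable capacity, which is the "potentially linear model" described above.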
NSX as part of the environment is a subject for discussion in itself. Using commodity network hardware – a relatively cheap managed switch infrastructure – a dynamic, fully featured network infrastructure can be established by moving the network stack from the physical to the virtual world: Software Defined Networking.
An End User Compute solution such as this is likely to include a management infrastructure separate from the virtual desktop infrastructure. View brokers, Horizon Workspace and Horizon Mirage all require network load balancing in order to scale in a resilient fashion with adequate performance. NSX Edge appliances can be used to provide this ability. In addition, the use of routing and firewalling within the virtual infrastructure not only provides tighter security in a traditional single-tenant enterprise, but also opens up the ability to provide secure multi-tenancy on a shared architecture – with VXLANs supporting discrete customers in isolation. Of course, this becomes all the more important when internet connectivity for these services is required.
On the infrastructure supporting virtual desktops, NSX can provide similar segregation between tenants. Client security using NSX is potentially a massive benefit. The NSX Distributed Firewall applies to VMs at the individual VM network interface, subject to rules established within NSX. This is much more flexible than a hardware appliance working at a global level – discrete policies can be applied using parameters such as which VXLAN the VM is located on, or even VM properties such as the VM name.
One pretty intriguing feature of NSX is its integration with third party antivirus scanning solutions, for example Symantec Critical System Protection. Consider a default firewall rule applied to a VM. If the VM is detected as infected by the antivirus solution and tagged as such, NSX can automatically apply a different policy to isolate the VM until it is cleaned by the antivirus solution. All in an automated fashion.
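The tag-driven quarantine behaviour can be sketched as simple policy selection. This is an illustration of the idea only, not NSX API calls: the policy names, tag name and rule format are my own assumptions.

```python
# Illustrative tag-driven firewall policy selection: a VM tagged "infected"
# automatically receives a quarantine rule set that only allows traffic to
# the AV infrastructure; everything else falls under the default policy.

POLICIES = {
    "default":    [{"dst": "any", "port": "any", "action": "allow"}],
    "quarantine": [{"dst": "av-server", "port": "443", "action": "allow"},
                   {"dst": "any", "port": "any", "action": "deny"}],
}

def effective_policy(vm_tags):
    """Select the firewall rule set from the VM's security tags."""
    return POLICIES["quarantine" if "infected" in vm_tags else "default"]

print(effective_policy({"infected"}))   # only the AV server is reachable
print(effective_policy(set()))          # normal default policy
```

Once the AV solution removes the tag, the same lookup drops the VM straight back to the default policy – no manual intervention.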
So, all in all, potentially a slick, compelling solution, all provisioned using VMware’s product range.
If you would like to learn more about virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

Overview and requirement

Over the course of the last month I’ve been working with one of our customers on a VMware Horizon View proof of concept project. Their main business driver and use case was to provide a virtual desktop infrastructure capable of delivering desktops to their CAD users without impacting functionality or user experience. Virtual desktops with high-intensity graphics demands typically require more raw horsepower than would be necessary to deliver a traditional operating system and back office application. With this in mind, I realised I’d need to seek out what options were available to support this requirement.
Graphics card technology has advanced significantly in recent years and, with GPUs (Graphics Processing Units), graphics tasks can be offloaded, leaving the CPU to concentrate on serving application and operating system needs. Modern operating systems from Microsoft and Apple, as well as virtualisation hypervisors from VMware, Citrix and Microsoft, are now able to detect the presence of a GPU and natively pass graphics processing requests across to it, but how would this work in a VDI deployment? Do I need to dedicate a GPU per client (a 1-to-1 relationship) or share a single one (1-to-many)?
In this write-up I’ll be sharing items that I feel are often overlooked and sometimes assumed in these types of deployments. While I will be discussing the use of VMware Horizon View, it’s worth noting that Citrix and Microsoft offer the same functionality with graphics card hardware enablement and acceleration.

Desktop and application assessment

Before installing any software or creating a design document it’s extremely important to investigate the operating system and applications in scope, how they function (single or multi-threaded), the pre-requisites of the application and resource demands during peak conditions. For this reason a desktop assessment was completed on the existing physical CAD workstations to provide a greater insight into the workload metrics, such as CPU, RAM, network and disk.

Design considerations

There were a number of design decisions captured in the vSphere and View design documentation and I’ve pulled out some of the more prominent ones relating to delivering the higher graphics demands and the impacts / constraints they introduce.

When considering large VMs for CAD users and dedicating, say, 4 vCPUs, 8GB RAM and 512mb Video RAM, how will this affect ESXi CPU co-scheduling?

What impact would a group of large VMs, with a specification as mentioned above, have on VMware HA clusters and their design? vSphere’s DirectPath I/O cannot be used with HA, DRS or vMotion, which introduces challenges for business continuity

Should the Horizon View Pool type be automated or manual? Both options require their own subsequent design considerations

Network connectivity and latency must be defined up-front. Poor bandwidth and high latency will present a poor user experience

The PCoIP protocol can be tuned; parameters such as image quality, caching, frames per second and maximum session bandwidth should be reviewed to prevent saturation by noisy neighbour(s)

Is the client device capable of handling 3D workloads? Review the specifications but more importantly try and acquire loan devices to see exactly how they perform side by side
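The first consideration above, CPU co-scheduling with large VMs, is ultimately a consolidation-ratio question, and a back-of-the-envelope sketch helps frame it. The host size and the target vCPU-to-core ratio below are illustrative assumptions, not recommendations.

```python
# Rough consolidation sizing: how many large VMs fit per host for a chosen
# vCPU-to-physical-core ratio? Lower ratios reduce co-scheduling contention
# for multi-vCPU VMs at the cost of density.

def vms_per_host(physical_cores, vcpus_per_vm, target_ratio):
    return (physical_cores * target_ratio) // vcpus_per_vm

# e.g. a 16-core host at a conservative 2:1 ratio for 4-vCPU CAD desktops
print(vms_per_host(physical_cores=16, vcpus_per_vm=4, target_ratio=2))   # 8
# the same host at a general-desktop 4:1 ratio
print(vms_per_host(physical_cores=16, vcpus_per_vm=4, target_ratio=4))   # 16
```

The assessment data should drive the ratio chosen: graphics-heavy, multi-threaded CAD workloads generally justify a lower ratio than task-worker desktops.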

GPU, dedicated or shared?

The Virtual Dedicated Graphics Acceleration (vDGA) option within Horizon View presents a virtual machine with a dedicated GPU; this requires vSphere’s DirectPath I/O feature, and only that virtual machine can use the GPU. The alternative is Virtual Shared Graphics Acceleration (vSGA), which permits multiple virtual machines to share a GPU and would typically be used for lightweight 3D use cases. I ruled this option out, meaning dedicated graphics (vDGA) would be needed.

Future proof your deployment

In this POC the customer only had a requirement for vDGA, but to support the use of vSGA at a later date for another application or ‘light CAD’ testing, it was agreed to install the graphics drivers onto the ESXi hosts. It’s good practice to do this at the initial deployment stage, when the hosts aren’t yet being utilised, as they will require a restart.

Virtual machine checks & ESXi tweaks

A few things to bear in mind:

VMware virtual machine hardware v8 and below only supports 128MB of video RAM per VM; use v9 or higher if more is required

Install the latest graphics card drivers into the VM

Install the Horizon View Agent into the VM

Run the VMware OS Optimization Tool (being careful not to disable settings required for the 3D experience)

Remove the PCI device from the VM parent image before cloning; cloning will fail otherwise
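The virtual hardware check above can be expressed as a tiny validation sketch. The 128MB limit comes from the list; the function name and usage are my own illustration, not a VMware tool.

```python
# Validate a requested video RAM amount against the VM's hardware version:
# hardware v8 and below caps video RAM at 128MB, v9+ allows more.

def video_ram_ok(hw_version, video_ram_mb):
    if hw_version <= 8:
        return video_ram_mb <= 128
    return True

assert not video_ram_ok(8, 512)   # v8 cannot take 512MB of video RAM
assert video_ram_ok(9, 512)       # v9 or higher is fine
assert video_ram_ok(8, 128)       # within the v8 limit
```

A pre-flight check like this in a build script catches the mis-match before the pool is provisioned rather than after users complain.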

Following the graphics driver install on the ESXi host, configure the GPUs using:

ESXi > Advanced Settings > DirectPath I/O Configuration

Ensure Intel VT-d is enabled in the BIOS (enables the use of vDGA)

Power Management is set to OS Controlled

MS Windows OS registry update

Horizon View Pool Configuration

A number of configuration changes were applied to the virtual machines within the pool. If you do the same, remember that after making changes you must power the virtual machines off and back on again for the changes to take effect. Restarting or rebooting a virtual machine does not apply the new configuration.

Performance tips

Performance tuning the virtual machines can be achieved in more than one area. Items such as virtual hardware, the PCoIP protocol and 3D application itself will all contribute to boosting the experience.

Increased vCPU count for high rendering performance.

PCoIP FPS - the application required a high frame rate, so this was increased.

By default this value is 30; if lag or a fragmented display is observed during animation, change it to 0

Also consider using performance monitoring software tools and utilities provided by the graphics card manufacturer. Avoid monitoring the GPU on the ESXi host; instead, monitor within the guest operating system, especially when using vDGA. If the application has its own performance and/or benchmark facility, use this to provide a before and after comparison, especially when fine tuning.
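The before/after comparison can be as simple as capturing a baseline on the physical workstation and re-running the same benchmark on the virtual desktop. The metric names and figures below are made up purely to illustrate the shape of such a comparison.

```python
# Compare a VDI benchmark run against the physical-workstation baseline,
# reporting the percentage change per metric.

def compare(baseline, candidate):
    """Percentage change per metric relative to the baseline."""
    return {m: round(100 * (candidate[m] - baseline[m]) / baseline[m], 1)
            for m in baseline}

physical = {"fps": 60.0, "render_secs": 12.0}   # illustrative baseline
virtual  = {"fps": 54.0, "render_secs": 13.5}   # illustrative VDI result
print(compare(physical, virtual))   # {'fps': -10.0, 'render_secs': 12.5}
```

Whether a 10% frame-rate drop is acceptable is exactly the kind of question the customer, not the manufacturer's datasheet, should answer.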

Final thoughts

As you can see, there are a number of steps that must be undertaken and areas that shouldn’t be overlooked, but I cannot emphasise enough that the only way to achieve a successful deployment is to assess the original application(s), benchmark and document. Once the new environment is up and running, test it thoroughly and use the original benchmarks to validate outcomes. Never assume the facts, figures and performance claims from manufacturers will be sufficient; the real test is when a customer nods and agrees it’s acceptable to them.
For further reading, see the VMware whitepaper entitled ‘Graphics Acceleration in VMware Horizon View Virtual Desktops’: https://www.vmware.com/files/pdf/techpaper/vmware-horizon-view-graphics-acceleration-deployment.pdf
If you would like to learn more about virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

Citrix held its most important event of the year for partners and customers at the beginning of May. I have never had the chance to attend a Synergy event. Prior to joining Xtravirt I worked as a freelance consultant, meaning I would have had to finance the trip myself. To say I was excited would be an understatement. For someone who has been working with Citrix technology for over 10 years, this really was the trip of a lifetime.
Citrix Synergy is one of the largest virtualization conferences attended by everyone from IT professionals to C level execs. Synergy covers topics around end user computing, enterprise mobility, cloud computing and networking, in addition to core traditional topics. Those attending also hope to hear from Citrix about new products and features. At last year's show, Citrix unveiled XenMobile and the major changes to their XenApp and XenDesktop offerings.
There were a couple of key points I wanted to ensure I got out of the conference. First, I wanted to deepen my technical understanding in some areas I had not had much exposure to, such as XenMobile, ShareFile, and Worx mobile apps. Second, I wanted to get a better understanding of how Citrix is making it easier for customers to adopt its technologies.

Highlights

Opening Keynote

The opening keynote was very exciting and had me captivated. Firstly, we were serenaded by the entertaining “iBand” who played instruments exclusively on mobile devices! CEO Mark Templeton entered the stage dancing and very happy, to a standing ovation. As Mark started talking he was visibly emotional at what would be his last keynote. Brad Peterson was particularly impressive as he demoed a lot of new features. We learned his official job title was “Chief Demo Officer” and I immediately decided that was certainly going to be my next role.

Citrix Workspace Suite

Citrix launched the Workspace Suite. It combines a host of Citrix end user computing technologies, including application and desktop virtualization, mobile application and device management, file syncing and sharing with built-in enterprise security controls, WAN optimization, access gateway, and a number of additional features to assist partners and customers aiming to build and deliver their own mobile workspaces, as well as their own desktop-as-a-service (DaaS) offerings. What’s most impressive is that it integrates not just with Microsoft’s Azure offering, but also with public clouds, private clouds, and customers’ own data centres. Workspace Services will go into a technology preview in Q2 2014.

NetScaler and MobileStream technology

To help improve mobile network and app performance, Citrix is launching a new technology that will improve the user’s mobile device experience, but will also provide increased network visibility and enhanced security.
NetScaler MobileStream optimizes the amount of data a mobile device downloads when it opens an app, and it speeds up mobile page downloads and rendering. It also makes better use of wireless and cellular connectivity with network mode technology, meaning apps will run five times faster, according to Citrix. NetScaler also includes Citrix’s TriScale for deploying mobile services over the cloud. NetScaler MobileStream should be available in Q2 2014.

Social Media

Since this was my first visit to Synergy, I relied heavily on social media to figure out which sessions to attend, which experts to speak to, and which training courses were of value. The Citrix social media team at Synergy were very active, as was the community. They live-tweeted all the keynotes and kept everyone up to date on all of the conference goings-on.

In Conclusion

Synergy 2014 was a very well executed conference with some great speakers and excellent training opportunities. The Anaheim convention centre was a very nice venue with great facilities. I will definitely try to attend Synergy 2015.
And the highlight of this amazing experience was passing my final CCE “Designing Citrix XenDesktop 7 Solutions” exam.

In order to manage the VMware vCloud Hybrid Service with the vSphere client you first need to install vCloud Connector (vCC). vCC now comes in a single free version; there used to be two editions, but you now get all features for free. This post shows the steps to set this up:

Summary

Download the vCC Node and vCC Server appliance into your local vSphere environment

Configure the vCC Node and Server to talk to each other and the local vCenter

Register the vCC plugin with the vSphere client

Configure the vCC Server to connect to the vCHS cloud node and provide credentials

Configure vCC Node

Open a browser and point to https://&lt;vCC Node IP&gt;:5480/ (NB: I’ve had browser compatibility issues, so if you get an error when connecting, try using Chrome). If you use Firefox you may get an error saying “Failed to initialize”. This is a browser issue, not an appliance problem. Login using:

Username: admin

Password: vmware

Under the Node tab select Cloud, then select vSphere (or vCloud if you have an internal vCloud deployment) and type the URL of the vCenter Server (or internal vCloud). Click Update Configuration.

Configure the vCC Server

Open a browser and point to https://&lt;vCC Server IP&gt;:5480/ and login using:

Username: admin

Password: vmware

Click on the Nodes tab and select Register Node
Select Cloud Type as vSphere (or vCloud if you have an internal vCloud Director environment).
Under Cloud URL type the local vCenter FQDN or vCloud FQDN. Click Ignore SSL Cert (unless you have one registered).
Do not select Public at this stage as you’re connecting your internal vCC Server to your internal vCC Node.
Under Cloud Info enter vSphere and then the credentials to connect to your internal vSphere environment
Click Register

Connect to the vCHS Node

Now click on Register Node once more and enter details to connect to the vCHS Node.
Note: The URL can be found in the vCHS Dashboard under the vCloud Director API URL. This needs an ‘8’ added to the port to make it 8443 (not 443)
Select Public as it’s a public node.
Choose to ignore the cert unless you have one installed.
Select Cloud Type = vCloud Director
The vCD Org Name is the final path segment at the end of the URL above.
Username is the e-mail address you use to log in to vCHS.
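The two URL manipulations above (swapping the port to 8443 and pulling the org name from the end of the URL) can be sketched in Python. The example URL below is made up; only the port change and the final-segment extraction come from the steps described.

```python
# Derive the vCHS node registration details from the vCloud Director API URL:
# change port 443 to 8443, and take the org name from the last path segment.

def to_node_url(api_url):
    """Rewrite the API URL's port from 443 to 8443 for node registration."""
    return api_url.replace(":443/", ":8443/", 1)

def org_name(api_url):
    """The vCD Org Name is the final path segment of the URL."""
    return api_url.rstrip("/").rsplit("/", 1)[-1]

url = "https://p1v1-vcd.example.com:443/cloud/org/M123456789-1234"  # made-up example
print(to_node_url(url))   # https://p1v1-vcd.example.com:8443/cloud/org/M123456789-1234
print(org_name(url))      # M123456789-1234
```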

Registering the plugin with the vSphere Console

Connect to the vCC Server once again
Click on the Server tab and click vSphere Client
Type the vCC Server URL using format https://vCCServerIP Enter username / password for the vCenter server
Click Register

Register the vCloud Hybrid Service

Open the vSphere client (not the Web Client, as this isn’t currently supported)
Select the Clouds icon on the left hand side and then click on the green ‘+’ to add the vCHS cloud and also your internal vSphere vCenter
All Done!

The nature of the work we do in the Advantage software practice team here at Xtravirt means that when it comes to software design and development, we get exposure to tons of different technologies and lots of new techniques and design principles. Staying on top of all of these elements is not always easy.

Keeping up

As with all other areas in the IT industry, software design and development is constantly evolving. A side-effect of this massive progression means that it can be challenging for us practitioners to keep up with changing times. As developers, we really need to be on the cutting edge, keeping up to speed with new techniques, technologies, design principles and patterns. Doing this allows us to take advantage of the latest and greatest offerings out there, and means we are able to be as efficient as possible at what we do, resulting in software crafted to the highest of standards.
Keeping up with all of this means that we need to constantly study; learning and absorbing information from multiple sources, often with very tight time constraints. As most people know, dedicating time to study and learn about new or existing techniques and technologies whilst working on projects is challenging at the best of times, and finding the kinds of information that allow us to keep on top of trends all in one place rarely ever happens.

Gaining knowledge

Conferences are a great way of solving this predicament if you or members of your team can afford a bit of time away from the office. This year some of the Xtravirt development team attended SDD 2014 (Software Design and Development Conference). Attending the conference gave us the opportunity to keep abreast of current trends, refresh ourselves on the latest technologies, and also to network with fellow developers and designers.
The conference this year had over 100 sessions and workshops on offer, and the Xtravirt team was able to pick up some great knowledge and ideas over the week from the various presenters and workshops on hand. Topics ranged from technical hands-on coding sessions, to design and patterns, and even some UI/UX related content.

Relevance to Xtravirt

As many of you may be aware, software development at Xtravirt has underpinned our services since as far back as 2008, and this continues as we find new and unique ways of utilising its value for our customers’ benefit. New automation, analytics, and reporting software (with Advantage Engine as the workhorse) are just some of the ways we are augmenting our cloud, datacentre, and workspace services.
Many of the workshops at SDD 2014 focused on different programming methodologies, and one that Xtravirt has adopted is BDD (Behaviour Driven Development). Attending these sessions was a great opportunity to pick up on the latest trends. As an example, we picked up some interesting ways of writing user stories for BDD and having test stubs automatically generated from user stories written in “plain English”.
The conference also provided a good opportunity to review our own progress and standing against current industry trends. It was quite enlightening to see that the frameworks and techniques currently being used within our software development practice are closely aligned with current trends.
We were also exposed to plenty of new frameworks and methodologies at SDD 2014, the benefit being that we can now more easily identify opportunities to apply them when faced with certain challenges in our day-to-day operations.

Key takeaways

Some of the key points we took from this event were:

As CPU architectures evolve and become better at handling multiple tasks, programming for them gets more and more difficult. The way developers program for multiple cores/CPUs is a hotly debated topic at the moment, and a lot is being done to revolutionise this area of programming.

UI and UX are extremely important for any kind of application, and there is much more to these than just pretty interfaces. All kinds of considerations need to be thought of, from the way a user interacts with controls, to the way feedback and layout affects a user’s conscious and subconscious thoughts about their experience with your application.

Staying current with trends is important in any IT-related profession, and conferences provide an excellent way to discover, learn and network with like-minded practitioners and experts in your field. Putting aside the topic of software design and development, there is much to benefit from conferences related to any industry. If your company does not already have a training or conference budget set aside, perhaps it is worth bringing up the topic with your manager or boss at the next opportunity!

Internally within vCHS there are two types of networks you can create:

Isolated

Routed

Isolated Networks

Isolated networks allow traffic only between virtual machines within that network; they are therefore not able to communicate with the outside world. The use cases for these are more limited than for routed networks, but they could be useful when testing completely self-contained services or infrastructures such as test, dev or lab environments.
When configuring an isolated network you have the option of enabling basic DHCP services. Here you can define the IP range to allocate and the lease time. If you want more advanced DHCP options then you’ll need to deploy your own DHCP server within a VM.
You can also configure the range of static IP addresses and the primary/secondary DNS servers that vCHS will allocate and configure on new VMs. This saves you from having to configure them manually, but should not be confused with DHCP.

Routed Networks

Routed networks allow VMs to communicate with VMs in different networks. These could be other routed networks within the vCHS service, the internet directly, or an internal corporate network via a VPN connection (over the external vCHS internet link). (Direct connection is another option, mentioned later.)
In addition to DHCP, routed networks are connected to a gateway (vSphere Edge Gateway) which provides more options, including:

NAT

Firewall

Static Routes

VPN

Load Balancer

Each of these options provides the core functionality that one would expect without getting overly complex.

Connecting a routed network to the Internet

In order to connect a routed network to the external internet there are a few things that need to be done:

Open the firewall on the edge gateway

Configure NAT on the edge gateway

Configure DNS

Open the firewall

Opening the firewall is simple: click on the gateway within the Gateways tab in the vCHS dashboard and add the required rules (e.g. ports 80, 443, etc.)

Apply NAT rules

There are two different types of NAT rules: SNAT (Source NAT) and DNAT (Destination NAT).
SNAT is for traffic leaving vCHS going to the external internet (or another network). DNAT is for traffic originating outside the vCHS cloud coming in.
To access the internet from a VM within the vCHS network you must create an SNAT rule specifying the source IP or IP range of the internal vCHS VMs, then specify the external or public IP as the translated range. This public IP is shown on the gateway in the main dashboard; in this example, the IP address starting 213…. is the public IP address
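The SNAT/DNAT distinction can be illustrated with a toy packet-rewriting sketch: SNAT rewrites the source of outbound traffic to the public IP, DNAT rewrites the destination of inbound traffic to an internal VM. All addresses below are illustrative (the public IP uses a documentation range).

```python
# Toy illustration of SNAT vs DNAT on an edge gateway. Not real gateway
# configuration - just the direction and field each rule type rewrites.

PUBLIC_IP = "203.0.113.10"          # documentation-range stand-in for the 213. address

def snat(packet, internal_prefix="192.168."):
    """Outbound: translate internal source addresses to the public IP."""
    if packet["src"].startswith(internal_prefix):
        return {**packet, "src": PUBLIC_IP}
    return packet

def dnat(packet, internal_host="192.168.109.2"):
    """Inbound: translate the public destination IP to an internal VM."""
    if packet["dst"] == PUBLIC_IP:
        return {**packet, "dst": internal_host}
    return packet

out = snat({"src": "192.168.109.2", "dst": "8.8.8.8"})
print(out["src"])   # 203.0.113.10 - the internet sees only the public IP
```

Remember that the matching firewall rules must also permit the traffic; NAT alone does not open the path.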

Configure DNS

The last thing that needs to happen is for the VMs to point to a DNS server that can resolve internet addresses. The edge gateway internal IP should be used for the DNS server address if you want the connection to go directly out to the internet.
In my example the Edge gateway is 192.168.<x>.1

Connecting vCHS to your internal network

If you wish to connect to your corporate network there are 3 main options:

Create a Site-to-Site VPN using the Edge Gateway VPN

Setup a Direct Connection between your datacentre and the vCHS datacentre

Deploy an alternative VPN device on a VM within vCHS and connect (via the external network)

Creating a site-to-site VPN

This option uses the Edge Gateway VPN service to create a VPN connection from the Edge gateway to a VPN device within your internal network. The VPN connection will go via the external Internet connection so you should ensure that the firewall rules are configured appropriately.

Creating a Direct Connection

Customers serious about vCHS can create a direct connection from their existing datacentre to the datacentre hosting their vCHS service.

Deploy an alternative VPN device

There is nothing preventing you from deploying your own VPN service within vCHS running on a VM. A little while ago I had a customer who required a VPN connection from various laptops spread around the UK. A site-to-site VPN connection wasn’t suitable so I installed Routing and Remote Access on a Windows 2008 vCHS server and by opening the appropriate firewall and NAT rules the customer was able to connect using a standard VPN Windows client to a Windows VPN server.
Xtravirt are leaders in planning, designing and transforming organisations to gain the benefits of hybrid cloud. If you would like to talk to us about assisting your organisation, please contact us.

The VMware User Groups provide a fantastic opportunity to rub shoulders with VMware technology enthusiasts. Whether you're running the latest, the previous or perhaps even unsupported versions of their products, there's always someone to share a story with. You'll typically find breakout sessions available overviewing products, providing troubleshooting guidance or demonstrating new features. Occasionally there's an opportunity to put questions to a roundtable panel on almost any virtualisation topic, whether product or industry trend related. These events aren't just UK based either; have a look at www.vmug.com to see where your nearest meeting is.
Last week it was the turn of the London VMUG and, while based in the city, it pulls in attendees from all over the country. It's a slick operation run by Alaric Davies, Stuart Thompson, Simon Gallagher & Jane Rimmer. Xtravirt were there too, seven of us in fact, as attendees and presenters. I rattled off a 15 minute lightning talk entitled Surprising Replication Machine, focussing specifically on VMware’s vSphere Replication and how it can be used to migrate data centres as opposed to being treated just as a disaster recovery tool. Gregg Robertson co-presented with Craig Kilborn, overviewing the learning path and upfront work required for the VCDX Programme. Based on their recent experiences they fired out facts and figures on the number of hours, lab time and review cycles, and openly admitted how much of their personal time had been swallowed up – and the impact on home life. Many questions were asked and frantic notes taken by some.
Technical deep-dive sessions were provided by Frank Buecshel delving into SSL Certificates and their usage in vSphere and later in the day discussing SSO Architecture, Deployment & Common Issues. Having attended the first of his sessions it was apparent that his role within VMware as an Escalation Engineer had exposed him to many complex issues to resolve. The vCAC Real World deployment session presented by Simon Gallagher later in the day opened up good discussion from the floor as he ran through the deployment aspects and gotchas, pre-requisites and un-documented configuration requirements (of which there were many). The room was full, people were either standing or sitting on the floor – it was clearly a hot topic for the attendees.
There were many other sessions available throughout the day. The sponsors themselves are able to showcase their products and present their use-cases and market prowess. Unfortunately, I was unable to attend every session but hopefully you’ll understand there’s something for everyone, whether you’re deep into a deployment and want to learn more, or fascinated about a new product.
At the close of the day prize draws were made, hands were shaken and business cards exchanged, and it was time to head to vBeers, sponsored by PernixData. The majority of the attendees took to foot and made their way to a local pub to talk more tech and continue networking. There’s no doubt it’s a long day, but one with great rewards.
Our slides are available for download if you’re interested to see them.
Surprising Replication Machine | VCDX Application – What Does It Take?


A high level view of IT functional relationships

Introduction

When working in the area of IT strategy and planning it is important to understand the various roles and functional relationships within the business unit. In this article I provide a summary of the highly complex mix of architecture, delivery and project management functions.

The Functions

There are a number of different functions from strategy through to service delivery which are briefly outlined below.

Enterprise Architecture (EA)

Put simply, the function of enterprise architecture (EA) is to document the current state, work with the business to define a target state and plan transition architectures. The formulation of an Architectural Governance Board provides a forum to review transition and future state architectures, following the iterative theme of architectural practices.

Solution Architecture (SA)

The practice of solution architecture (SA) is to provide a broad and deep level of expertise to a project. Working alongside a project manager, a solution architect will work with the business to fully understand functional and non-functional requirements, design a solution, produce a detailed financial analysis of the solution, work with subject matter experts during planning, design and delivery, and provide technical governance during solution implementation.

Technical Architecture (TA)

A technical architect (TA) will own the design of a specific technology stack. This tends to be a deep product subject matter expert with a great deal of technology experience, often across a number of different streams.

Subject Matter Expert (SME)

A subject matter expert (SME) is an expert in one or more fields. An example of this may be a messaging specialist who is responsible for maintaining and administering a Microsoft Exchange Server Platform.

Programme Management (PGM)

The programme manager (PGM) is responsible for overall project delivery capability. They will have a number of project managers running a number of projects. The PGM is responsible and accountable for the success of the programme.

Project Management (PM)

A project manager (PM) is responsible for the management and delivery of specific projects. They will work with EA, SA, SME and service operations (SO) staff to ensure projects are well managed, documented, have a valid business case and manage risk.

Project Management Office (PMO)

The project management office (PMO) is responsible for managing and administering all project data. The function of a PMO is quite extensive; at a high level it provides governance, reporting and administrative support across the project portfolio.

Service Operations (SO)

Service operations (SO) staff are responsible for administering systems from a day-to-day point of view. Functions may include:

1st Line (service desk or contact centre departments)

2nd Line (server/networking/telephony or client departments)

3rd Line (SME level usually in an 80/20 support to project role)

Access Control (security administration)

Change Management

Problem Management

Service Management

The relationship between the different functions

The relationship between the different functions is that of a layered approach from Strategy through to Service Delivery as shown in the diagram below. This pyramid is not necessarily a hierarchy of authority, but more a structure outlining the layers of governance that each function provides from an enterprise viewpoint.

Supporting Technologies

Enabling this collaborative, governed and controlled working model requires many elements, one of which is tools. Without going into specific solutions, there are some common attributes that systems should have, in order to ensure efficient integration and adoption:

Security - Records should be able to be secured and shared, and where possible, audit logging should be possible

Version Control - Recording changes to systems/documents is vital when working in a collaborative environment

Tracking - From a simple manual field to automated data/metadata entry, it is key to be able to monitor changes and, where possible, measure the time between changes

Interoperability - Sharing data in either a push or pull model links business data, reduces human error, increases efficiency and allows a far greater level of analysis to occur

Multi-Device Support - A web front-end is often a very useful attribute, however even a simple spreadsheet can be used to great effect where required

Using a multitude of tools, such as ERP, CRM and ITSM solutions, programme/project management solutions, spreadsheets, databases and many more, is common within organisations. Whilst there is nothing wrong with this, the key is to use effective tools; they may not always be pretty, but they need to be able to manage and govern so that control is maintained.

High Level Project Process Flow

There are many activities involved in a project lifecycle. The following diagram, based on PRINCE2 principles, represents a high level, simple to understand view of a project lifecycle. It should be noted that the project lifecycle is a complex and iterative process with many sub-processes involved.

The Big Picture

Typical IT Org Structures

Each organisation is unique, with each using a different set of terms, roles and descriptions; however, there are usually some common attributes between organisations. The following diagram gives an idea of a typical IT organisational structure. As you can see, all the major functions are shown alongside each other.

Managing Across the Enterprise

The diagram below shows a high level view of how the various functions sit together.

Framework Positioning

There are a number of frameworks that can be used which all use iterative processes. Often these share common ground but are also aligned to different use cases.
The following diagram outlines some use cases and framework alignment. The frameworks do share elements, for example TOGAF and ITIL’s service strategy and service design processes link well. This is also true of TOGAF’s approach and the project initiation phase in PRINCE2.

Adapt and Adopt

Frameworks are exactly what they say they are: processes, methods and tools for guidance that can be utilised to suit your organisation’s needs. Following a framework to the letter would be very costly, time consuming and most likely to fail. It is best practice to adapt and adopt the relevant parts of any framework to suit an organisation’s requirements.

Summary

As frameworks and best practice advice globally agree, a mixture of people, process and technology is required to build a successful, business-aligned IT solution that is supportable, secure, efficient and cost effective.
While this article takes a very simplistic view on methodologies and their practical implementation it does demonstrate their relationships and how a mixture of governance, control and collaboration are required to achieve an agile, business-enabling IT division.
The common elements of all the frameworks are to document, govern, control, manage and most importantly use effective communication. By taking the right tools and techniques to your organisation you’ll be in a far better position to cope with both business change and keeping the lights on.


I’ve recently had the opportunity to deploy a VDI solution utilising Nutanix Virtual Compute Platform at one of our customer sites, and wanted to discuss some of the benefits it brings to virtualised solutions.
Nutanix is a converged infrastructure solution that consolidates the compute (your virtualisation hosts) and the storage tier into a single appliance.
I’m personally quite impressed with this technology and the capability it brings not only to VDI solutions, but virtualised solutions as a whole.

Background

You will often read about failed VDI projects; the two main reasons for failure come down to cost and performance.
In the main, these issues are closely related to storage. Lots of high performance storage is costly, so if you don’t provide enough performance to cater for peak usage during IO storms and high usage periods, the solution will underperform when under load.
To address the gap in the market, storage optimisation vendors have stepped in, providing very different solutions. These range from flash-only arrays to VM-aware storage appliances and in-memory storage; a handful of these solutions are also able to optimise local or older storage. One of my colleagues recently published this blog post with his experience of storage optimisation using Tintri. At the higher end of the scale, you also have larger consolidated platforms such as VCE’s Vblock and IBM’s PureFlex.
These are all great solutions and remove some of the cost and performance barriers detailed above. However, when you add a layer of optimisation or select a large consolidated platform you can also start to add complexity and ultimately lose flexibility, see my blog post 'Sorry we don't do average IOPS' for more information.

Back to Nutanix

The Nutanix solution offers combined compute and storage in a single box. Starting with just three nodes you can scale upwards to cater for many thousands of workloads with many Nutanix nodes and multiple Nutanix clusters if required.
Like many other solutions, Nutanix optimises the local storage attached to each node, but with BIG differences:

The management of the local storage is all “underwater”; the Nutanix Operating System (NOS) takes care of this for you

The storage is presented as shared storage (NFS) to your virtualisation hosts

As you add additional nodes, you automatically add more storage

The final point above is the clincher for me. When you scale out a platform, you have to consider increasing compute, then ensure the storage is matched and that the performance and bandwidth are suitable; you’ll probably have space, power and cooling requirements in the data centre to consider too.
Being able to add compute and storage together in one box in a linear fashion makes planning and installation simple, and delivers balanced storage and compute resources in one go.
Nutanix utilises 10G networking, each node has 2 x 10G uplinks, all you need to ensure is that all hosts within a Nutanix cluster are on the same layer 2 network. Nutanix has excellent reference architectures making the design and implementation process simple.

Deployment

We were working to very aggressive timescales and the Nutanix solution helped us to deliver on time. Networking is the only real infrastructure requirement (no fibre channel investment is required), so we were able to stand up the solution in a small number of days. Deployment is simple and very well documented, and technical support is extremely responsive.
During testing, we never ran out of IOPS. Utilising Login VSI (a load simulating and benchmarking tool) with a light workload we were hitting well over 100 desktops per host without hitting the VSIMAX figure. With the heavy workload we were hitting over 80 desktops per host with CPU being the constraining resource. These figures are greater than our planned production values and demonstrate that Nutanix delivers as promised on the IOPS front.
I’ve tested boot/logon storms with other optimisation technologies on VMware vSphere before, and noticed that hosts may become disconnected and unresponsive during the tests. Conducting the same tests with Nutanix produced better results, booting 600 desktops to VMTools starting in well under 20 minutes while all hosts remained manageable throughout. This test is a little subjective, as you’ll attempt to design out such storms and test methodically with a tool such as Login VSI, but it’s a great indication of the performance of the Nutanix platform.
Simulating a Nutanix controller failure is also impressive; VMs pause and are back online in around 15 seconds.


For the past few weeks I’ve been working on a customer engagement focusing on cloud automation using vCloud Automation Center (vCAC) and vCenter Orchestrator (vCO). Throughout the project there have been a number of occasions where the typical vCAC way of doing something hasn’t been exactly what we needed, so we’ve had to rely heavily on vCO to do these things for us. Thankfully this was made possible through vCAC 6’s Advanced Services menu.
Perhaps the most important example of where we’ve had to harness vCO’s power has been in the provisioning/deployment of new virtual machines. Typically we would define rigid machine blueprints that state the size, template to use and storage needs of a machine for vCAC to provision. This, however, didn’t suit our customer’s needs – they needed something more dynamic. We needed to find a way to support:

Changing the cloning template at the time of machine provisioning (between Ubuntu Server 12.04 and Windows Server 2012)

We decided that the best way we could meet all of these demands was by creating a Service Blueprint in the vCAC Advanced Services menu and then bypassing the default vCAC provisioning workflow and instead passing it to our own, custom vCO workflow. This would therefore allow us to fully define the whole machine deployment process including the layout and fields presented in the blueprint form.
Before we did any Advanced Services configuration in vCAC we decided it would be best to first create the vCO workflow so we knew the inputs and outputs that were needed. This ‘New machine’ workflow took string inputs including the OS, the machine Size and two additional hard drive sizes (AdditionalHDOne and AdditionalHDTwo).
The first thing that the workflow needed to do was parse these inputs, translating them into an appropriate data type for each parameter so we could pick out the template and machine size and do the custom clone. Our parse script therefore consisted of two switch statements to select the right template/sizing option based on the OS and Size inputs. This is more easily illustrated in the code below (getTemplateObjectByName() method omitted):
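The code sample itself was lost from this post. As vCO scripting is JavaScript, the parse logic can be sketched as a self-contained function; the template names and CPU/memory values below are illustrative assumptions rather than the customer's real values, and getTemplateObjectByName() remains omitted as in the original:

```javascript
// Two switch statements translating the string inputs into a template name
// and a sizing option (illustrative values only).
function parseInputs(os, size) {
    var templateName;
    switch (os) {
        case "Ubuntu Server 12.04":
            templateName = "ubuntu-server-1204-template";
            break;
        case "Windows Server 2012":
            templateName = "windows-server-2012-template";
            break;
        default:
            throw "Unsupported OS: " + os;
    }

    var numCpu, memoryMB;
    switch (size) {
        case "Small":
            numCpu = 1; memoryMB = 2048;
            break;
        case "Medium":
            numCpu = 2; memoryMB = 4096;
            break;
        case "Large":
            numCpu = 4; memoryMB = 8192;
            break;
        default:
            throw "Unsupported size: " + size;
    }

    return { templateName: templateName, numCpu: numCpu, memoryMB: memoryMB };
}
```

In the workflow itself, the VC:VirtualMachine template object would then be fetched from the selected name via getTemplateObjectByName().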

The script also parsed the string inputs AdditionalHDOne and AdditionalHDTwo, converting them into the number attributes HDOne and HDTwo. These hard drive sizes were then ready for us to use when adding the new virtual disks at a later stage. Next we needed to do the clone itself; this was relatively straightforward because we could work directly with the vCO vCenter plugin and use the cloneVM_Task() method, passing in a VM clone specification responsible for defining the customisation we needed. For this, we used this simple bit of code:
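The code did not survive in this post. Below is a sketch of the shape of the clone specification; it is modelled as a plain object so its structure is clear, with comments noting the corresponding vCO vCenter plugin types (VcVirtualMachineCloneSpec and friends). The field values are assumptions for illustration:

```javascript
// Build a clone specification. In vCO these would be VcVirtualMachineCloneSpec,
// VcVirtualMachineRelocateSpec and VcVirtualMachineConfigSpec objects; plain
// objects are used here to show the shape of the spec.
function buildCloneSpec(resourcePool, numCpu, memoryMB) {
    return {
        location: { pool: resourcePool },                // VcVirtualMachineRelocateSpec
        config: { numCPUs: numCpu, memoryMB: memoryMB }, // VcVirtualMachineConfigSpec
        powerOn: true,   // power the clone on once deployed
        template: false  // deploy as a VM, not a template
    };
}

// In the workflow the clone is then started against the chosen template, with
// the (hardcoded) destination folder and the new machine name:
//   var cloneTask = template.cloneVM_Task(folder, vmName, cloneSpec);
```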

It is important to note that in this case we were using a fixed Resource Pool and Folder, so we hardcoded these values – although making them dynamic wouldn’t be too much extra effort.
The cloneTask was handed over to an action that ensured it had finished executing successfully before continuing. The workflow was then responsible for adding the additional hard drives; I’m not going to run through a code example for this but effectively we retrieved the new VM as a VC:VirtualMachine object, called the vCenter createVirtualDiskFlatVer2ConfigSpec() method to create the virtual disks and then attached them to the machine using the reconfigVM_Task() method.
By this stage we could fully deploy and customise a vCenter machine through the workflow, our final task was to add this machine to vCAC as a managed machine and return its equivalent vCAC:VirtualMachine object for provisioning as a custom resource in the vCAC portal. Adding the machine to vCAC was achieved using the standard ‘Register a vCenter Virtual Machine’ workflow (available with the vCAC VCO Plugin) and the vCAC machine to return was retrieved using the following script:
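The retrieval script was also lost from this post. One approach (an assumption, not necessarily the original code) is to query the vCO inventory for vCAC:VirtualMachine objects with Server.findAllForType() and pick the entry whose name matches the machine we just cloned; the matching itself is plain JavaScript, and the virtualMachineName property name is an assumption:

```javascript
// Given the array returned by, for example:
//   var machines = Server.findAllForType("vCAC:VirtualMachine", vmName);
// return the machine whose name matches exactly, or null if none does.
function findMachineByName(machines, vmName) {
    for (var i = 0; i < machines.length; i++) {
        if (machines[i].virtualMachineName === vmName) {
            return machines[i];
        }
    }
    return null;
}
```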

After the vCO workflow had been created, we could tie it in to vCAC using the Advanced Services menu. As our workflow returns a vCAC:VirtualMachine object, we first had to create a custom resource of this type for vCAC to provision after execution. This custom resource would then provide us with a link to the machine from the portal and allow us to attach custom resource actions to it.
Once we had configured the custom resource we could create a new Service Blueprint that would provide the request form that would be filled out by the user. We followed the usual process to do this, linking to our ‘New machine’ workflow and giving it an appropriate name. The Blueprint Form however was customised quite heavily, most importantly we made the OS and Size parameters Dropdown lists so the user could only select the values that we defined in our switch statements earlier.
The final stage of the Service Blueprint creation process was to point the output vCAC:VirtualMachine object of the vCO workflow at the custom resource that we had previously defined. We now had our custom made form set up and pointed at our ‘New Machine’ workflow. After adding the blueprint, publishing it and giving it the appropriate entitlements, our solution was complete. A quick test ensured that it worked as expected and efficiently got around our original problem, allowing us to dynamically deploy the machines from a singular, fully customised form.
If you would like to talk to us about assisting your organisation with designing a VMware private cloud, please contact us.


Introduction

Over the years I’ve worked in and for a number of organisations in a variety of roles, and often hear the word ‘strategic’ being used without any real definition of what, why, how, who and when. In sales we love to throw the word around by saying we will align to organisations’ strategic objectives or improve strategy. What we tend not to do is actually work out how we can help the customer begin to understand their strategy and objectives, let alone how we can give them a roadmap to get there.
In this article I describe some of my experience in writing a number of strategies, whether this is aimed at a specific area (e.g. client device, cloud etc.) or the overarching strategy for a service company.
“The world is changing at an ever increasing pace, the only way I foresee in succeeding is to become a master of change”

Will having a strategy be useful?

The question probably should be: will not having a strategy be useful? It’s a common misconception that strategy requires mountains and mountains of paperwork. What I can say is that it doesn’t need to exist only in someone’s head.
Defining a plan (yes, strategy is a plan), and managing and communicating it, requires some form of documentation. This could be a Word document, intranet page, poster, presentation or any document that fits your need. Having it written down a) allows you to share it, b) allows you to review the plan in the future, and c) enables you to focus on specific areas. Like most things, to manage effectively you need to be able to measure, monitor, govern, control and, most importantly, communicate effectively. Having a written IT strategy will certainly enable you to do this far more effectively than if you do not plan and write it down.

Where do I start?

From a green field to a long-established business you’ll need a starting point, and the best place to start is with your business goals. From there you should be able to define a skeleton of the priority areas.
Another important step I recommend is to conduct maturity assessments, even in a green field environment a maturity assessment can draw out current and future state possibilities and enable a gap analysis to be conducted. You can then align capability to objectives to identify likely priority areas.
Some good examples of maturity assessment are the Microsoft Core IO model and ITIL’s Process Maturity Framework. You can also look at the Capability Maturity Model Integration (CMMI), which has specific modules (e.g. CMMI for Services) that may be a good starting point.
I will, however, point out that Core IO, ITIL PMF, CMMI and other related maturity assessments are not always quick to pick up. It may be worth utilising an external consultant to assist in this area, as their knowledge and experience will cost a fraction of the price and time of trying to do this on your own.

Keeping the lights on

One area people seem to struggle with is time. When things are always on the go and keeping the lights on is the top priority, strategy isn’t important, right?
Well sure, if you’re busy 24/7 then there is no time, but perhaps there’s a reason why you have no time. It may be that your organisation really has overcommitted to that extent; the problem then is that, without spending time with your head up looking around, you may have missed the exact reason why you are overcommitted.
It may be that further resource is required, that time efficiencies are not being made, that projects with little or no value are taking up valuable time, or perhaps that systems or people are being managed ineffectively. What is important to understand is that, to identify these issues and present a case for change, someone has to put time into understanding the root cause and planning a way to change the outcome.

Why Change Fails

The organisation had not been clear about the reasons for the change and the overall objectives. This plays into the hands of any vested interests.

They had failed to move from talking to action quickly enough. This leads to mixed messages and gives resistance a better opportunity to focus.

The leaders had not been prepared for the change of management style required to manage a changed business, or one where change is the norm. "Change programmes" fail in that they are seen as just that: "programmes". The mentality of "now we're going to do change and then we'll get back to normal" causes the failure. Change, as the cliché goes, is a constant; so a one-off programme, which presumably has a start and a finish, doesn't address the long-term change in management style.

They had chosen a change methodology or approach that did not suit the business, or worse still had piled methodology upon methodology, programme upon programme. One organisation had Six Sigma, balanced scorecard and IIP methodologies all at the same time.

The organisation had not been prepared and the internal culture had 'pushed back' against the change.

The business had 'ram raided' certain functions with little regard to the overall business (i.e. they had changed one part of the process and not considered the impact up- or downstream). In short, they had panicked and were looking for a quick win, or to declare victory too soon.

They had set the strategic direction for the change and then the leaders had remained remote from the change (sometimes called 'Distance Transformation') leaving the actual change to less motivated people. Success has many parents; failure is an orphan.

My Experience of Strategy and Change

In both successes and failures to devise or execute strategy/changes I’ve noticed a number of similar traits:

Strategy Confuses People

Strategy is considered a one-time activity

Strategy is poorly designed and planned for

Strategy is poorly communicated

Strategy is kept confidential

Change is not communicated effectively

Training is not provided or effective

Documentation is considered a non-essential extra (and therefore often not created/maintained)

Change is not measured or controlled

People don’t recognise repetition of past mistakes

Help is not sought (internal/external)

Long term gain is overridden by short term expense

The right people are not engaged at the right time

While this list is not exhaustive, to me it provides a summary of common themes I’ve encountered in the last 13 years of my professional career.

Conclusion

Change is a complex and difficult game - a well thought out plan has a far greater chance of success. Recognising the need to continually review and manage change should enable you to make far greater progress towards your organisation's aims. Biting off more than you can chew can be just as dangerous as doing nothing. Taking the right approach, with the right processes and people, will set you on a journey that should bring benefit to the business and enable greater IT agility.


I remember the first VMUG that I ever attended. It was a little daunting as I didn't know what to expect, who I'd meet or whether I would appear amateurish compared to everyone else there. I recall being waved into a room that I'm now very familiar with by a very enthusiastic man who turned out to be already well known to the community that I was joining.
Since that day I've tried to attend VMUGs regularly and through those meetings I've contributed to, and gained from, a network of people who possess a vast wealth of knowledge and experience.
Fast-forward to 2014 and I've gone from a regular attendee of VMUGs to being an organiser of them along with co-leaders Jeremy Bowman, Barry Coombs and Simon Eady.
This month saw the first ever South West UK VMUG, which was held in Bristol at the mShed. Compared to a few years ago, when there wasn't enough interest or awareness to warrant a South West chapter, the support and participation that was shown during the planning and execution of this first event was instrumental in making it the success that it was. Additionally, as my work and Xtravirt are a significant part of my life, the support that they have provided allows me to explore and develop professionally and it is a testament to their people-based philosophy.
As this was the first one, it was important to strike the right balance between hosting an interesting and engaging event and trying to do too much, too quickly. As such, we opted to make this first meeting a half-day event and selected a theme to focus content around. The participation of several key industry sponsors (including Nutanix and Veeam) helped significantly by allowing us to hire a suitable venue and also by providing some relevant content to add to presentations from VMware.
For many of the attendees, this was their first VMUG. The majority of them were from the local Bristol and Bath area with a few coming from further afield. Most of the people that I talked with were in operational roles within VMware customer organisations with a few coming from resellers or other software / hardware vendors. For future events, we're hoping to extend our reach down into Devon and across to Wales a bit more and build up a following of repeat attendees.
We had originally planned for the first ever presentation to be delivered by VMware's EMEA CTO, Joe Baguley, but a change in his schedule forced us to move things around a little. Peter von Oven from VMware stepped into the breach, however, with a detailed tour of End User Computing. He was followed by an excellent technical and architectural overview of Nutanix's converged infrastructure.
In a combined vendor and community presentation, Nathan Prisk from Falmouth University gave an excellent presentation about his VDI rollout project, the challenges faced, the benefits gained and a story about how the recent stormy weather had tripped up data centre power systems in an unexpected way.
Joe Baguley arrived in the nick of time from his event with Lotus F1 in Oxfordshire to deliver a very well received and, as usual, very engaging and thought-provoking talk that involved only a single PowerPoint slide. In hindsight, closing with this presentation was very effective and rounded the day off brilliantly before we decamped to the nearby Pitcher & Piano bar.
Our planning for the second event (provisionally 3rd June 2014, also at the mShed in Bristol) was well underway several weeks ago. I look forward to it and hope that it will be as successful as our first one.


Strictly by invitation only, it was quite an honour to be one of the few bloggers invited along to VMware’s London launch event for their new vCloud Hybrid Service (vCHS) offering. With an amazing view of the city from the Paramount skyline bar at the Centre Point building, the scene was perfectly set for an inspiring event.
VMware’s vCloud Hybrid Service became public in the US in September last year. Swiftly afterwards, VMware announced their plans to bring the service to EMEA in 2014 and, as of today, it is now generally available in Europe.
The launch of the service in London today has been anticipated for several weeks following a beta programme that was oversubscribed ten-fold. Initially, vCHS will be available via a single UK data centre. An additional data centre is due to come online in the 2nd quarter of this year and VMware already have plans to expand the service into more European countries.
The relative importance to VMware of this launch was perhaps best emphasized by the presence of their CEO, Pat Gelsinger, who flew in from California for it. VMware have invested heavily in vCHS and will continue to do so as demand for public cloud services grows.
Obviously, VMware aren’t the first to market with a public cloud offering (think Amazon AWS or Microsoft Azure for instance), but a significant portion of the launch briefing was focused around how vCHS benefits existing VMware customers more than a move to a 3rd party cloud provider does. For this, two of the service’s beta participants talked about their experiences.
Betfair’s business activities, as part of the online gaming industry, are heavily regulated within the UK. One of their IT challenges is providing the business with sufficient agility to grow and develop. However, Betfair found that the potential benefits of cloud economics are balanced against the complexity of maintaining regulatory compliance when using cloud service providers. The key differentiator that they picked out in vCHS for them was the integration with their existing virtual platform (vSphere). Being able to migrate workloads from their on-premise platform to their dedicated vCHS space and (using other parts of the vCloud Suite) presenting business users with a single interface to request and manage virtual infrastructure made their adoption of vCHS for development and testing purposes possible.
Cancer Research UK’s story is similar. Their key driver is to reduce their spend on “tin and wires” as they’re not an IT business. As a charity, regular and predictable costs are far more preferable to infrequent capital outlays for growth and hardware refreshes. Cancer Research wanted something they could just plug into and use to maximize their IT efficiency and move away from legacy systems.
So why the UK and why now?
The feedback from EMEA customers indicated that many were concerned about data locality and sovereignty. A Vanson Bourne survey of 200 IT decision makers, conducted earlier this year on behalf of VMware, indicated that:

86% recognised a business need to keep data within UK borders

85% said current clouds were not integrated with their own internal infrastructure

81% said that they need to make public cloud as easy to manage and control as their own infrastructure

Xtravirt is one of VMware’s launch partners for vCHS in the UK, and as a Senior Consultant I am looking forward to engaging with customers to accelerate their deployment of vCHS, leveraging Xtravirt’s expanded professional services offering to help customers achieve maximum benefit from their hybrid cloud.

Background

Recently I’ve been involved in a VDI refresh for one of our customers: around 800 desktops using VMware’s vSphere and View products. As with any VDI solution, success can only be attributed to careful planning and design as well as a thorough understanding of the environment.
This blog post came to life after the initial workshops and the subsequent discovery that a previous upgrade of the customer’s VMware View environment was no longer able to meet adoption demands. A major pain point related to the performance of the infrastructure and virtual desktops for the users. A full assessment identified the storage architecture as one of the major bottlenecks. Multiple RAID5 LUN groups (5 disks each) had been provisioned, with as many as 100 virtual desktops or ‘Linked Clones’ located on each datastore. The shortage of spindles (and therefore IOPS and throughput), combined with RAID5’s write penalty (4x back-end I/Os per write), resulted in an architecture poorly suited to desktop workloads (generally a 20% read, 80% write I/O profile), which differ greatly from server workloads.
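To illustrate why that layout struggled, here’s a back-of-the-envelope sketch (the per-desktop and per-spindle IOPS figures are assumptions for illustration, not measurements from the assessment):

```python
RAID5_WRITE_PENALTY = 4  # each front-end write costs four back-end I/Os

def backend_iops(frontend_iops, read_ratio, write_penalty=RAID5_WRITE_PENALTY):
    """Translate a front-end IOPS demand into back-end disk IOPS."""
    reads = frontend_iops * read_ratio
    writes = frontend_iops * (1 - read_ratio)
    return reads + writes * write_penalty

desktops_per_lun = 100
iops_per_desktop = 10  # assumed modest steady-state figure per desktop

demand = desktops_per_lun * iops_per_desktop
required = backend_iops(demand, read_ratio=0.2)  # 20% read / 80% write
available = 5 * 80  # five spindles at roughly 80 IOPS each

print(required, available)  # -> 3400.0 400
```

Even at a modest 10 IOPS per desktop, the 80% write mix multiplies into several times what five spindles can serve — hence the poor user experience.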

Objective

The VDI refresh project was granted additional funding to identify and implement a storage solution that would eliminate these performance pain points. The VDI provision was to remain VMware-based, updated to Horizon View v5.2, but had to deliver measurable performance within 10% of a native physical desktop. Administration overheads were to be reduced where possible by using a less complex infrastructure.
As a truly vendor-agnostic organisation, we provided the necessary assistance and guidance in conjunction with our customer to ensure the technologies they were reviewing would be fit for purpose. As it transpired, the technology and offerings provided by Tintri addressed our customer’s requirements.
In this blog post I want to share the initial decision-making criteria, followed by a brief overview of some of the product features that assisted us during the deployment.

Technology chosen

Following a successful proof of concept and extensive load testing, the Tintri 540 was selected. You can read more about the company and their offerings on their website but for this solution here are the key items identified:

NFS solution – simple, able to leverage existing Ethernet infrastructure, and eliminates two of the previous problem points: VMFS locking and SCSI reservations.

Minimal configuration and setup required.

A self-optimising storage appliance without the overhead of manual tuning.

Comprises eight 3TB disks and eight 300GB MLC SSDs, providing the required total capacity (13TB) and a good amount of flash to serve read and write I/O.

Support for up to 1000 VMs providing enough capacity for day 1 and predicted future growth.

Supports up to 75,000 IOPS. All read and write I/O is delivered from flash and provides low latency performance for VMs.

Note: To achieve the highest possible VDI density, Tintri appliances require 10GbE connectivity between the appliance and core switching. The Ethernet-based infrastructure in this implementation consisted of dedicated redundant switches for storage traffic, but running only at 1GbE. While this reduced the headroom considerably, it still permitted 80 virtual machines per ESXi host, well within the design and capacity-planned consolidation ratio.

Previous VDI

The initial deployment of the virtual desktop infrastructure was based on MS Windows 7, serving approximately 700 users. The majority were ‘Linked Clones’, with fewer than 20 persistent desktops. The virtual desktops remained powered on, controlled by each Horizon View pool policy to enable quick access and logon. Various desktop workloads were in use; however, none were extreme use cases in terms of I/O profile – typically task workers and knowledge workers, with a small number of power users.
As with all our VDI engagements, we completed an assessment of the physical and virtual desktops before proceeding, identifying use cases and mapping these to the different pools to ensure the new environment was sized correctly and able to handle peaks and additional overhead.

The Tintri Dashboard

Here’s a quick overview of the management user interface, focusing on items used during the initial deployment to assist with performance measurement.

The dashboard provides real-time insight and monitoring. You’re able to drill down further into all of the metrics for further analysis and pull in metrics from VMware’s vCenter to provide deeper insight.

Within the Datastore performance sub-heading the main IOPS, throughput, latency and Flash hit ratio counters are presented in real-time (10 second average), and a 7 day range (10 minute average).

To the right-hand side, you can view which VMs are ‘changers’, in terms of performance and space, and by what degree they have changed.

Note: Other VM names have been removed from the screenshot to protect the customer’s data.
The Diagnose -> Hardware screen provides visibility into the status of hardware components such as disks, fans, memory, CPUs and controllers.

Real world performance

IOPS can be monitored in real-time, 4hrs, 12hrs, 1 day, 2 day or 7 days to a granular level that can even reveal details of a single I/O from any VM.
Using the Datastore chart you can click on different points to view specific offenders (such as VDI-T2-48 in the screenshot below) or hover over a point to bring up the data on screen.
The chart below reveals statistics from the first week of production for the 715 deployed virtual desktops (with a peak of 400 concurrent active sessions). Total IOPS generally remained under 4,000, with bursts caused by various logon storms throughout the day. The dramatic peaks are largely due to replica VMs or maintenance (recompose) operations.
Note: Horizon View Storage Accelerator (VSA) is enabled on each pool, which can dramatically decrease the number of read I/O that is required from the backend storage system. This feature caches common blocks across the desktops and serves them from a content based read cache (CBRC). This requires and consumes physical RAM (max size 2048MB) on each ESXi host.
You can read more about the VSA feature here.

IOPS versus throughput

The ability to compare two charts side by side proved to be a very useful feature during testing and go-live. In the screenshot below there’s a comparison between IOPS and throughput. Total IOPS peaked at 10,444 at 6:10 PM, with 8,396 read I/O (shown in yellow) and 2,048 write I/O (shown in blue). The replica disk shown below is contributing 13% to the overall total IOPS.

Latency

Latency is a vital statistic to monitor because it measures how long a single I/O request takes end to end, from the VM (guest OS) to the storage disks. If latency is consistently greater than 20–30ms, the all-round performance of the storage and virtual machines will suffer greatly.
In the example screenshot below, green indicates latency occurring at the host (guest OS), rather than the network, storage or disk. The total latency is 2.68ms, made up of host (2.05ms), network (0.12ms), storage (0.51ms) and disk (0ms). Maintaining consistent latency around this point will provide excellent end-to-end performance.
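The arithmetic behind that total is simply the sum of the per-stage latencies (values as quoted above, in milliseconds):

```python
# Per-stage latencies reported by the appliance, in milliseconds
stages = {"host": 2.05, "network": 0.12, "storage": 0.51, "disk": 0.0}

total_ms = round(sum(stages.values()), 2)
print(total_ms)  # -> 2.68
```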

Flash utilisation

This chart excerpt reveals the amount of I/O (read and write) that’s being served from the flash disks. As can be seen, 100% is being served with only a couple of small drops to 98% meaning the best possible I/O performance is being delivered from flash rather than mechanical, spinning disk.

Virtual Machines

Drilling into IOPS and throughput is all very well where forensic analysis and investigation is required, but what is interesting is how this correlates to virtual machines.
This screenshot is taken from a real-time graph. Virtual machines can be seen running on the same datastore, and the usual ‘sort’ activities can be performed by clicking the metric column headings. Double-clicking a VM displays a graph, with historical data and the ability to show two graphs side by side – comparing IOPS and throughput, for example.

Contributors

On each of the graphs presented throughout the management user interface, ‘Contributors’ are shown down the right-hand side, giving visibility of individual virtual machines and their contribution to the overall IOPS, throughput or latency. Below we can clearly see a couple of replica VMs recording high IOPS, a result of Linked Clones reading from the parent image (replica) disk.

7 day zoom – IOPS versus Latency

Taking advantage of the side-by-side view again, a 7-day view of IOPS and latency clearly reveals the peaks and troughs of IOPS throughout the week. In this example the majority of I/O activity on the Tintri storage is write-based (shown in blue), which means the VMware View Storage Accelerator is really taking the initial hit and reducing the read I/O requirement from the storage.
Total end-to-end latency (host, network, storage and disk) remains consistently low (around 3ms), with the occasional spike, which is to be expected. In the example screenshot below, green indicates latency occurring at the ESXi host (guest OS), rather than in the network, storage or on disk.

Conclusion

For this project the Tintri storage appliance has proven able to deliver: reduced management, no additional performance tuning required, and the capability to handle all workloads during peak periods. Performance monitoring shows the I/O throughput is well within the device’s capability, delivered with a high flash hit percentage (that’s I/O served from SSD) and low end-to-end latency. Virtual desktop performance has been validated against the initial requirement to be within 10% of native physical performance; testing revealed that in certain use cases virtual desktop performance exceeded that of the physical equivalent.
It’s clear that the time investment and due diligence completed by the customer provided a solid starting point and was a contributing factor in the success of the project.
If you would like to talk to us about accelerating your VDI platform, please contact us.

At VMworld last year, there was much continuing buzz about the “Software Defined Datacenter (SDDC)”. This initiative was announced by VMware last year, but now the products have started to appear, making it a reality. We’ve worked with virtualisation and abstraction of resources for a good few years now, but the next level is to bring seamless automation and policies to control resource allocation. When it comes to storage, VMware’s solution for the SDDC is Virtual SAN.

What is Virtual SAN?

VMware VSAN is a vSphere host-based storage solution providing fast, scalable and resilient storage to any vSphere environment. The idea is to have hosts with internal storage (each must contain at least one SSD and one traditional disk); as long as you have three or more such hosts, VSAN can create shared storage for you using a “RAIN” model. There is only one SSD in a “Disk Group”, but its job is to provide write buffering and read caching, i.e. it’s not included as storage capacity. Write performance is where low-cost storage systems suffer, but having an SSD to front the spindle-based disks makes VSAN a good choice for most applications. The number of replicas and stripes depends on your resilience requirements, and policies can be set on a per-VM basis. The beauty of this system is that once you provide the required storage components, VSAN takes over and configures them according to your policies for performance and/or availability. The whole environment is scalable: if more performance or storage is required, disks can be added later to scale up, or you can scale out by adding more hosts.
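As a rough feel for how replicas eat into capacity, here’s a sketch (my own figures and slack factor, not official VMware sizing guidance):

```python
# With FTT (failures to tolerate) = 1, VSAN keeps FTT + 1 copies of each
# object. SSDs act only as cache, so only HDD capacity counts.

def vsan_usable_tb(hosts, hdd_tb_per_host, ftt=1, slack=0.3):
    """Approximate usable TB, keeping ~30% slack for rebuilds and growth."""
    raw = hosts * hdd_tb_per_host
    copies = ftt + 1
    return raw / copies * (1 - slack)

# Three hosts with 8 TB of HDD each, tolerating one host failure:
print(round(vsan_usable_tb(3, 8), 1))  # -> 8.4
```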
The most important thing to bear in mind is that this is not an appliance. All data is handled at the VMkernel level, cutting out expensive trips through hardware interfaces and across multiple buses. The result is extremely fast response times, making it well suited to a wide range of applications. This is a significant change in strategy, as storage moves back to the host system while still providing its shared and resilient aspects. More and more architects are finding that while all-flash storage is brilliant at delivering lots of IOPS, it doesn’t help much if the pipe to the host(s) can’t carry the required throughput. Having resilient shared storage locally solves that problem and delivers many times the throughput of the traditional network connectivity options currently in use. Best of all, the solution is generally far cheaper than storage systems offered by big-name vendors.
It goes without saying that the performance of such a system relies on the sum of all components involved, so skimping on those would be counter-productive. They still need to be “Enterprise-Level”, and even though one can use non-approved hardware, it’s not recommended, as that would seriously affect not only performance but also the uptime of the system. That’s especially true for SSDs, given that consumer-grade SSDs have a relatively short write lifespan, and replacing them regularly would not help with uptime.
There are many more things that could be said on the subject, but the focus of this article is my views on and enthusiasm for VSAN, not everything there is to know about it. For that, I would like to point you towards an excellent collection maintained by Duncan Epping here.

Who should use VSAN?

VMware VSAN needs some time to prove itself, and it’s certainly not going to replace traditional dedicated storage systems overnight. However, I do think that VSAN will enable virtualisation for a lot of companies that can’t afford enterprise-class shared storage but can make do with extremely fast, affordable shared storage. Sure, there are appliances out there that do similar things, but I don’t feel they scale in the same way, as one generally has to buy a whole appliance/enclosure to scale up or out. Coupled with its automation and integration with the hypervisor, VSAN looks to me like one of the simplest, quickest and cheapest solutions in terms of overall CAPEX and OPEX.
I think the biggest subscribers to VSAN will be companies starting their journey towards virtualisation that don’t initially want to invest in dedicated, resilient shared storage just to prove capability.
There are also companies that want to have shared storage but with good performance for individual applications that they might want to keep separate from the rest of the environment. These could be test/development, environment for a specific group/application or even a disaster recovery environment. By combining compute and shared storage, VSAN becomes a very attractive option for such applications.
Last but not least, VSAN is well suited to smaller VDI environments, enabling deployment without investment in expensive storage systems. Despite not having expensive hardware at the backend, VSAN delivers great performance and resilience at low cost. All this makes it possible for smaller companies, previously prevented from going down that route by the costs involved, to embrace virtual desktop technologies.

Can I get it now?

At the time of writing, VSAN is still in “Public Beta”. If you have hardware-compliant hosts (and SSDs/HDDs to put into them) or a lab with sufficient resources, there is no reason why you can’t start experimenting with it. To get your copy, click here.
If you would like to talk to us about assisting your organisation with storage requirements for your data centre, workspace or cloud project, please contact us.

Preamble

Small to medium IT environments are typically simple to manage and maintain for the average sysadmin. By utilising a combination of scripts, tools and other utilities, they are generally able to keep these environments in a manageable state, and to specification. Almost.
Most would agree, however, that it is almost impossible for a team of people to keep systems and services in a particular desired state by hand. Configuration drift is always prevalent when multiple people are responsible for maintaining systems. This is never going to work as a long-term solution, and most certainly not if the business is to scale: IT teams will find themselves fire-fighting on a daily basis, and as a result will struggle to keep up with the demands of the business.
Some sort of configuration management tool is required to help teams manage their infrastructure, and today I will be looking at one of these in particular: Puppet.

Introduction

Puppet is an open source tool from a company called Puppet Labs. As the previous paragraph suggests, it is designed to manage the configuration of Unix/Linux and MS Windows systems by way of a set of declarations set up by the user. These declarations define, in high-level terms, the desired state of the system nodes managed by Puppet. Puppet then enforces this state upon nodes in an automated fashion, in most cases without the need to supply commands specific to individual OS types.
This post will be looking at Puppet Enterprise, which is licensed per cumulative number of nodes. I will be using the free version of Puppet Enterprise, which allows management of up to 10 nodes.
Puppet Enterprise can be used to automate the provisioning of services, and can handle configuration and setup of all layers on managed nodes – from the OS, to networking, to middleware and even the application layers – providing for a fully automated infrastructure. Its abilities are not limited to private cloud usage, though: Puppet can also extend to other cloud services, allowing you to manage infrastructure in your public or hybrid cloud environments too.
At its roots, Puppet is all about describing the desired state of nodes in terms of what are referred to as "resources". This is done using a DSL (Domain Specific Language). To give a basic example of what this looks like, let's take a look at a user resource on an Ubuntu Linux machine managed by Puppet, by issuing the command "puppet resource user".
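The original post showed a screenshot of the output at this point; it looks roughly like the following (the user name and attribute values here are illustrative):

```puppet
user { 'xtravirt':
  ensure => 'present',
  gid    => '1001',
  home   => '/home/xtravirt',
  shell  => '/bin/bash',
  uid    => '1001',
}
```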
Here we can see the current state of these users on this particular machine. This format is written in the Puppet configuration language, and if we were to save it to a file called "xtravirt.pp", it could be used as what is known as a Puppet manifest file. A manifest describes the desired state of a resource, so if the "xtravirt" user did not exist on this system, we could manually apply this state by running "puppet apply xtravirt.pp" on the node. Puppet would then ensure that the user exists, creating it by running the appropriate Linux command. Puppet is all about automation, though, so realistically this would be set up once and Puppet would handle it for us on its interval check across all nodes - by default, this run interval is every 30 minutes.
Another great feature of Puppet is the ability to simulate a change before applying it. We could do this by running "puppet apply sean.pp --noop" (no operation). Puppet would then output the simulated changes for you to review and make sure you are happy with them first. In this case, the user "Sean" has a desired state defined in the sean.pp manifest file.
To get a basic, fully automated environment up and running, you can follow the Puppet Enterprise quick start guide found here. This will allow you to try out Puppet and follow along with this blog post. As a high-level overview, the process entails the following tasks:

Ensure DNS is set up correctly - all machines should be able to resolve DNS correctly for your deployment.

Network connectivity - nodes need to be able to communicate on certain ports - detailed in the quick start guide

Install the Puppet master - the master node is responsible for holding all of the configurations and desired states of nodes. Note that you can have more than one master node/server. The client nodes then perform periodic check-ins against the master to receive their desired configuration states.

In my case, I deployed a Linux VM running Ubuntu 12.04 server as my Puppet master node. I downloaded the Puppet Enterprise tarball using wget and installed it on the system (master.development.lan), specifying the master, console and database roles when prompted by the installer script. I also created an alias CNAME record in DNS to point puppet.development.lan to the same system. Once complete, I was able to access the console at https://puppet.development.lan.

The console is the GUI for Puppet, and is primarily used for classification (which is essentially telling managed nodes what to do) and reporting (i.e. report on what nodes are doing, what has changed, etc...)

From this point on, we can use various built-in classes and modules to define how our nodes should behave. We can also create groups with different classes attached, so that we can manage groups of nodes in different ways.
I always like to learn by example, so below we'll run through an automation scenario using Puppet.

Scenario

Every time we deploy a Windows Server 2008 R2 VM to our "Management" cluster, we would like to ensure that various configurations are applied to this VM, and that going forward they are adhered to. Let's keep this example simple and say that we need PowerCLI to be installed on each of these nodes.

Process

To start, we'll need to set up one VMware template and a Customization Specification for vCenter to use when cloning the template. This involves the following couple of steps:

Create a basic Windows Server template, and place the puppet-enterprise-3.1.0.msi installer in the C:\deploy folder on the template machine

Create a Customization Specification with a run once command to install the .msi file, specifying the PUPPET_MASTER_SERVER parameter for the installation that points to our master puppet server

The above will ensure that each time a VM is deployed using this specification, it is flagged to be managed by our master Puppet server. All we have to do when it is deployed is accept the node via the "Pending node requests" tab in the Puppet console.
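For illustration, the run-once command in step 2 might look something like the following (the msiexec switches shown are typical usage rather than taken from the original; the master FQDN is the one used elsewhere in this post):

```
msiexec /qn /norestart /i C:\deploy\puppet-enterprise-3.1.0.msi PUPPET_MASTER_SERVER=puppet.development.lan
```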

Next we'll get into actually defining a module to manage these nodes. This module will contain a single "class" which will define the software package that should always be installed on our nodes.

Normally, you can simply download existing modules from Puppet Forge (a repository of modules written by the community). This, however, is going to be our own basic module, so instead of using the built-in "puppet module search/install" commands, we'll create a directory structure and a couple of files on our master server to make up a simple module ourselves.

Using SSH on the master Puppet server, ensure you are running with elevated privileges (sudo -s)

Create a new directory called "vmware_mgmt" under "/etc/puppetlabs/puppet/modules/"

Under the new "vmware_mgmt" directory, create another directory called "manifests"

Create a file called "vmware_mgmt.pp" under the manifests directory and populate it with the following:
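The file contents appeared as a screenshot in the original; based on the description that follows, the class would look something like this (the target paths are assumptions for illustration):

```puppet
class vmware_mgmt {
  # Folder to hold downloaded packages on each managed Windows node
  file { 'C:/packages':
    ensure => directory,
  }

  # Fetch the installer from the Puppet file server on the master
  file { 'C:/packages/VMware vSphere PowerCLI.msi':
    ensure  => file,
    source  => 'puppet:///files/VMware vSphere PowerCLI.msi',
    require => File['C:/packages'],
  }

  # Ensure the package is always installed
  package { 'VMware vSphere PowerCLI':
    ensure  => installed,
    source  => 'C:/packages/VMware vSphere PowerCLI.msi',
    require => File['C:/packages/VMware vSphere PowerCLI.msi'],
  }
}
```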

This is our basic vmware_mgmt class. It defines that a folder called "packages" should exist on our managed nodes; that this folder should contain a file called "VMware vSphere PowerCLI.msi", downloaded from our Puppet file repository (located on our master server); and that this .msi package should always be installed.
This essentially ensures that the package is downloaded, placed in the folder and installed, if it is not already installed on the node concerned. You can take a look at this page to learn more about writing custom Windows manifests.

Next we should create a basic metadata.json file in the /etc/puppetlabs/puppet/modules/vmware_mgmt directory
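A minimal metadata.json might look like the following (the module name prefix and field values here are illustrative):

```json
{
  "name": "xtravirt-vmware_mgmt",
  "version": "0.1.0",
  "author": "Xtravirt",
  "summary": "Ensures VMware management tooling is installed on Windows nodes",
  "dependencies": []
}
```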

Note: there is a lot more to writing a complete module. You can visit this page for best practices and other guidelines.

Now that we have a module defined, in the Management Console, in the left panel, click "Add classes" and start typing "vmware_mgmt" in the search text box. Once your class name appears, put a check on it, and then choose "Add selected classes".

Click "Add group" in the side panel and create a group for your nodes that should have PowerCLI installed

Add the "vmware_mgmt" class to the group

While editing the group, add any Windows nodes that you have deployed and accepted to be managed by Puppet by typing their names into the "Add a node" text box in the edit node group area

Finish the group creation by clicking "Update"

We should now ensure that we have the actual .msi installer ready for Puppet to fetch and send to nodes when required. There are a variety of options available here: a UNC path, local to the node, or the master Puppet server itself. You may have noticed that in our vmware_mgmt.pp file we defined a source location using source => 'puppet:///files/VMware vSphere PowerCLI.msi'; this points to a file on the Puppet master server. To support this, edit the "fileserver.conf" file under /etc/puppetlabs/puppet/ to specify a location from which to serve files.
I simply added the following to my configuration file:

[files]
path /etc/puppetlabs/puppet/files
allow *

I then made sure to place a copy of the VMware vSphere PowerCLI.msi file in /etc/puppetlabs/puppet/files on my master server.

Finishing up

We now have two options to see the results applied to our nodes in this group:

Once off "run once" - this allows us to invoke a single Puppet run on any number of nodes we select. From the console navigate to "Live management -> Control Puppet -> Run once -> run" (choosing the nodes you wish to invoke Puppet on from the list on the left before clicking run)

Wait for Puppet to run on our nodes at its default 30-minute interval, at which point nodes in our custom group will pick up the changes automatically

Conclusion

Quite a lot of setup went into this, but we now have something re-usable that can automate the deployment of any number of nodes in our infrastructure. We looked at how to setup and install a basic Puppet Enterprise environment, create a custom Windows VM template that automatically installs the puppet agent on deployment and connects to our master puppet server, and finally we looked at how to define a configuration for Windows nodes that ensures a specific software package is installed.

There are many more powerful features available to use with Puppet, and a lot more can be done. Puppet has a bit of a steep learning curve, but once you have it deployed, configured, and you have your various classes and modules setup, it really shows its power in being able to completely manage and automate an entire infrastructure from the ground up.

Horizon Workspace is VMware’s one-stop-shop product for the End-User Compute experience in a corporate environment. It provides users access to applications, data and virtual desktops via either a single pane of glass Web browser interface, or through a range of mobile device applications. Since the recent release of version 1.5, it’s been gaining some traction in the marketplace and as such, I’ve recently been doing some work in the Xtravirt lab on this, as well as paying special attention to Horizon Workspace during my visit to VMworld last month.
One thing I’ve noticed during my digging is that it’s a powerful product, with lots of scope and some interesting features on the roadmap. However, it’s a complicated product under the hood, with many interacting components and complex relationships. This blog entry is essentially a whistle-stop tour of the components of Workspace, how they scale, and some of the surrounding architecture.

Lighting the Fuse…

This part of the article isn’t intended to give you a blow-by-blow guide to installing Horizon Workspace, but it’s worth describing briefly for a bit of background.
Workspace is deployed as a vApp on top of vSphere and has a number of prerequisites for a successful installation. One key thing to get right is DNS: it’s important to pre-stage the names for each of the appliances in DNS (including reverse lookups), as the vApp relies on DNS for configuration and maintenance of the component appliances, as well as communications between them. Equally, you need to establish an IP pool in vCenter to support these. A load balancer should be present in the estate, hosting the Fully Qualified Domain Name that users will use to access the solution, complete with a trusted SSL certificate.
Oh, and make sure you have sufficient resources to run the initial installation – the Configurator’s initial installation script does not suffer fools who run out of resources (like me in our lab – schoolboy error!). If this happens, it’s a re-install from scratch…
When you install the vApp, it creates a total of six VMs. Five of them are the appliances established as a minimum for the estate, while the sixth, which isn’t powered up (data-va-template), is a template VM used to deploy further appliances.
Once the vApp is installed via the vSphere console, the configuration wizard needs to be completed on the Configurator’s console. After that, the basic estate is up and running and can be configured (mostly) via web interfaces.

Appliances in Workspace

So, we have a set of appliances. Next we need to consider what each is used for, how many of each we need, and how to configure them.

Configurator

As its name suggests, the Configurator manages and maintains elements of the central configuration – common items required by all appliances are maintained and distributed from here (such as root password management, networking, the vCenter connection and certificates within the vApp). It also hosts the wizard used to carry out the initial configuration, and has a web page for managing a number of key aspects of the estate. Several are detailed below:

System Information – a status page for the deployed appliances, which also allows eligible appliances to be placed in Maintenance Mode.

Module Configuration – This page allows the administrator to enable (though not disable) the various functionality modules of the estate, such as Web applications, View integration etc…

FQDN and SSL – For configuring the Fully Qualified Domain Name used by users to access the estate and the SSL key chain for the estate.

License Key – This is managed centrally for the solution.

Password – the central admin password used on all of the vApps.

Log File Location – A text page describing where the various logs are located, rather than a settings page.

There is also a page describing the database connection. It should be noted that Workspace requires a database to function. For testing, an internal one is provided; for production, an external one (Postgres 9.1 or later is recommended) should be used. This is discussed later.
So how large does this VM need to be? Not very, as it doesn’t do masses of work. Out of the box, it comes with a single vCPU, one gigabyte of memory and a 6GB disk. This is sufficient and doesn’t need changing. Only one configurator is needed – it’s not customer facing and it won’t break the estate if it is down for a while.

Connector

The Connector appliance handles a number of roles, including user authentication (Active Directory and RSA SecurID), connectivity to Active Directory, and synchronising ThinApp repositories and View pools.
Multiple Connectors are likely to be required in a production estate: for example, one set serving internal users authenticating via Active Directory and another serving external users authenticating with RSA SecurID, with additional Connectors needed for resilience.
According to VMware’s testing, each connector can handle up to 30,000 simultaneous users, with the out-of-the-box configuration of 2 vCPU and 4GB RAM. VMware recommend retaining this sizing, but scaling outward for load. The key thing to consider is that the figure quoted is simultaneous users, and even in a 30,000 seat estate, it’s unlikely that this many simultaneous requests would hit a single node.
Only a single authentication mechanism is supported per node, so this may also define the design decision as to how many Connector appliances are needed.
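Taken together, the per-node capacity, the one-authentication-method-per-node rule and the need for resilience suggest a simple sizing calculation. The sketch below is an illustrative reading of the figures in this article, not an official VMware formula:

```python
import math

def connectors_needed(users_per_method, per_node_capacity=30000, min_per_method=2):
    """Total Connector appliances: one pool per authentication method,
    each pool sized for its simultaneous users and padded for resilience."""
    total = 0
    for users in users_per_method:
        for_load = math.ceil(users / per_node_capacity)  # nodes needed for load alone
        total += max(for_load, min_per_method)           # never fewer than the HA minimum
    return total

# Two methods (AD for internal users, RSA SecurID for external), 2000 users each:
print(connectors_needed([2000, 2000]))  # 4 -- driven purely by resilience
```

At realistic estate sizes the resilience minimum, not load, tends to set the Connector count.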
For ThinApp packages, a repository is required if these are to be distributed using Workspace. Due to how ThinApp functions, a Windows file share (not a basic SMB/CIFS NAS appliance) is required to host this repository. ThinApp packages only require the executable and accompanying DAT file (with the same AppID) in a Workspace environment, so storage needs aren’t massive.

Gateway

The Gateway in some respects is poorly named. It enables a single user-facing domain name for users, but beyond that, it largely serves as a policeman routing requests to the correct appliance node – so if a user selects File resources, requests go to the appropriate Data appliance.
On a fresh installation, a Gateway has two vCPU, 2GB RAM and a 10GB disk. VMware recommend increasing this considerably to 4 vCPU and 16GB RAM.
In terms of how many are needed, there are several design considerations. For High Availability, multiple Gateways should be placed behind a load balancer; from a load perspective, a Gateway will support up to 2000 users.
One item of note is that VMware recommend a minimum of 1 gateway for every two Data appliances. The Data appliance puts the greatest load on Horizon Workspace, with the highest number of requests, all passing via the Gateways.

Service

This is the intelligence behind the solution. The administration web page is hosted here (even though logon is via the Gateway). Application catalogues, entitlements, reporting and local Horizon Workspace based groups are all defined and managed here.
It’s the Service appliance that connects to the Database.
Out of the box, one of these is installed, and is configured with 2 vCPU, 4GB RAM and a pair of virtual disks totalling 18GB. VMware’s recommendation is that two of these are deployed for resilience and that the vCPU and RAM configuration be increased to 4 and 8GB respectively. At this level, up to 100,000 users can be handled by each node without issue.

Data

The Data appliance handles the file services part of the solution, including the User Interface component, quota management, file sharing and hosting data itself.
Being user facing, and given the nature of its role, the Data appliance is under considerable load. As such, VMware recommend a Data appliance per 1000 users. It is recommended that the appliance is increased to at least 4 vCPU and 8GB RAM, possibly even to 8 vCPU/16GB RAM.
It is also recommended that, in a production environment, at least two are established. This is an architectural suggestion.
The Data appliance can have one of two roles: it can act as the master node for the estate, or as a standard node hosting user data.

Much of VMware’s scaling recommendation depends on throughput and the expected dataset size, as well as design decisions such as not putting all of your users in one basket – do you want to stop 10,000 users working when a file node goes down, or only 1000, leaving 9000 operational?
When a user accesses a file through the web interface, it is possible to preview the document within the browser. This can be implemented using two different methods: the first is to use LibreOffice, which is free and integrated into the Data appliance, while the other is to implement MS Office Preview Server. Where LibreOffice is enabled, it may be necessary to increase the RAM/CPU provisioning of the Data appliance.
Data storage space must be provisioned in addition to the Data appliances. While VMDKs are supported, due to the high number of files involved and the relative performance of VMDKs used in this fashion, it is recommended that NFS storage is used instead, as it is generally easier to scale and performs better with high file counts. It should be noted, though, that each Data appliance will require its own export.
Storage sizing is a relatively simple proposition. For each user, take the proposed quota, multiply by three (to handle version retention) and add 20%. For example, 1000 users with a 100GB quota will require 1000 x (100GB x 3 x 1.2) = 360,000GB, or around 351TB of capacity. NFS appliances fit well here as they often support hot expansion, so extending storage as it’s required becomes more tenable.
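The rule above reduces to a one-line calculation. The quota and user counts below are illustrative:

```python
def data_storage_gb(users, quota_gb, versions=3, overhead=0.20):
    """Per-user quota, tripled for version retention, plus 20% headroom."""
    return users * quota_gb * versions * (1 + overhead)

# 1000 users with a 100GB quota, as in the worked example
total_gb = data_storage_gb(1000, 100)
print("%.0fGB (about %.0fTB)" % (total_gb, total_gb / 1024))
```

Varying the quota and user count quickly shows how sensitive the total is to the per-user quota decision.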

External Services

As stated, a number of services can be provisioned externally from the Horizon Workspace vApp package.

Document Preview using MS Office Preview Server

Rather than use LibreOffice for document preview, it is possible to set up Microsoft Office Preview Servers. These have the advantage of off-loading rendering from the already busy Data appliances, as well as rendering Microsoft Office documents using Microsoft’s own engine rather than a third-party one. On the flip-side, it entails licence costs for Windows Server and Office, as well as additional VMs to manage and protect.
Office Preview requires at least one Windows Server 2008 R2 VM with MS Office 2010 Professional x64 and the Horizon Data Preview agent installed. Size-wise, the VM needs at least 4 vCPU and 4GB RAM. As conversion of documents is processed in real time, this is quite CPU and memory intensive; it may be necessary to scale up and out if many users are expected to use preview services on many devices, or if large documents such as PowerPoint presentations need rendering.
The Preview service requires an account with permissions to add local users to the Server and UAC to be disabled.

Database

As mentioned previously, in a production environment, a Database server is required by the Service appliances for retention of their data. Horizon Workspace 1.5 supports either Oracle 11g or Postgres 9.1, though VMware recommend Postgres (possibly related to the fact they offer the VMware vFabric Postgres appliance).
For CPU and RAM purposes, both databases should run adequately with 4 vCPU and 8GB RAM. The documentation states that the database supports 100,000 users in 64GB, with 20GB per 10,000 users beyond that. VMware’s recommendation is that 32GB is sufficient for most engagements.
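One possible reading of those documentation figures as a quick sizing check (the numbers are taken from the guidance quoted above; treat them as illustrative and confirm against the official sizing guide):

```python
import math

def db_storage_gb(users):
    """64GB covers up to 100,000 users; add 20GB per further 10,000 users."""
    if users <= 100000:
        return 64
    extra_blocks = math.ceil((users - 100000) / 10000)
    return 64 + 20 * extra_blocks

print(db_storage_gb(150000))  # 164
```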

So what do we end up with…?

So although the vApp gives us a basic five-appliance estate, a production estate requires somewhat more, depending on the services required. For example, a 2000-seat estate might look something like the diagram below.
There are a couple of Gateways tucked behind a load balancer; although one can handle 2000 users, we need two for resilience and to support our Data appliances. There are three Data appliances: two user-facing nodes following the rule of 1000 users per appliance, plus a Master. Two Service appliances are provided for resilience. There are four Connectors, two for RSA authentication and two for AD authentication, driven purely by the need for resilience and the one-authentication-method-per-Connector rule. Last, but not least, a single Configurator.
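Pulling the sizing rules from the preceding sections together, the appliance count for an estate can be sketched as below. This is one interpretation of the rules quoted in this article, not an official VMware formula:

```python
import math

def estate_size(users, auth_methods=2):
    """Rough appliance count for a production Horizon Workspace estate."""
    data_nodes = math.ceil(users / 1000) + 1   # 1000 users per Data node, plus a Master
    gateways = max(math.ceil(users / 2000),    # 2000 users per Gateway
                   math.ceil(data_nodes / 2),  # at least 1 Gateway per two Data appliances
                   2)                          # and at least two for resilience
    connectors = auth_methods * 2              # one auth method per node, paired for HA
    services = 2                               # paired for resilience
    configurator = 1
    return gateways + data_nodes + services + connectors + configurator

print(estate_size(2000))  # 12, matching the worked example
```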
As you can see, the five VM estate soon becomes one with twelve VMs, before we consider the database, NFS and ThinApp repository and possible Office Preview servers. The estate can end up with quite a significant footprint, but this isn’t too surprising when consideration is given to the various roles it serves and the number of users it needs to support.
It’s pretty clear that careful design is a critical task when implementing this product - ascertaining the use cases, sizing the solution as accurately as possible and then sitting down and putting the design on paper.
If you would like to talk to us about assisting your organisation with VMware Horizon Workspace or any aspect of the VMware Horizon Suite and its management, please contact us.


Introduction

(Gran Via Conference Centre)
Well, this year’s VMworld Europe has been and gone for another year. Barcelona was a pleasant place, but it wasn’t all “sun and sangria”; there was treasure to be had, and Xtravirt sent a ragtag band of hardened consultants to do some digging. This particular consultant decided to focus on End User Computing, given that’s been the theme of a number of recent projects.

Horizon Mirage

Horizon Mirage hit release 4.3 at this VMworld. I attended a number of sessions on best practice and so on, which were quite interesting, but I managed to catch up with Alon Goldin, who’s been involved with Mirage since before VMware took over Wanova. He pointed out a few useful features that have been added.
Firstly, one nice addition to the Windows 7 migration wizard is that it’s now possible to apply both a Base Layer and App Layers as a single task, rather than as separate jobs. This should speed up deployments and reduce complexity nicely.
A new management policy has been added that allows deployment of images without the need to upload from an endpoint first. This is useful in scenarios where user data on a client is minimal (for instance, redirected document folders) and protecting the existing data is less critical. A time-saver in these instances.
With Horizon Mirage 4.3, the Client agent has been optimised for use in a virtual machine and is now officially supported by VMware within persistent View desktops. This is useful in many ways, particularly for persistent desktops, where managing and maintaining compliance isn’t as simple as recomposing non-persistent desktops.
The Web Management console has seen some changes too, with the addition of a Protection Manager role that’s permitted to edit policies and build collections. VMware’s intention is to move away from the legacy MMC console to the Web Console in the long term, pretty much in-line with the rest of the VMware portfolio (such as vSphere 5.5).

VMware Horizon View 5.3 and nVidia – Dedicated Graphics

One subject creating a bit of a buzz of late is support for high-end graphics in virtual desktops. Up until this week, VMware’s support has been somewhat limited: for example, NVIDIA’s GRID series GPUs were limited to Virtual Shared Graphics Acceleration (vSGA) in VMware View, which, in itself, is superior to the normal VMware SVGA driver, but lacked the horsepower required by CAD users (or gamers). However, as announced at VMworld Europe, VMware Horizon View 5.3 now supports the NVIDIA GRID GPU in Virtual Dedicated Graphics Acceleration (vDGA) mode.
What this means is that a persistent virtual desktop can be attached directly to an NVIDIA GPU, essentially bypassing the virtualisation layer, complete with native NVIDIA driver support. The mainstream server vendors were demonstrating this capability in the Solutions Exchange, running complex graphical models over VDI sessions. Given that a single NVIDIA GRID adapter has two (K2) or four (K1) GPUs, it offers great potential to host a handful of CAD users (or gamers) per server node, if your hardware has available expansion slots.

VMware Horizon Workspace 1.5

Horizon Workspace is viewed by VMware as a centralised portal for all things EUC. As a central portal, it provides users with the following services:

File services – A private equivalent of the popular consumer offering from Dropbox, complete with synchronisation capabilities, file sharing and client applications for Android, iOS and Windows (desktop), as well as browser access, complete with preview services using either MS Office Preview Server or LibreOffice Preview.

Web Application Publishing – Web application shortcuts can be defined and published via Horizon Workspace. There is integration into single sign-on capabilities where available.

Thick Application Delivery - ThinApp packaged applications can be distributed via Horizon Workspace, complete with policies and access control (either within Horizon or via Active Directory Groups). ThinApp deployment currently requires that the client be a member of the same Active Directory as the workspace estate, although this is going to change soon. Likewise, Citrix XenApp application delivery is in development for imminent release.

Access to VMware View desktops – VMware View desktops can be accessed via the Horizon Workspace portal. While provisioning is all carried out in View, presentation via Workspace is possible. If Blast protocol support is installed, the View session is directly accessible via an HTML5 browser, without the need for a client.

There were a number of sessions on installation, scaling and other subjects too, enough for a healthy blog post on the whole subject of Workspace (watch this space).
Another feature, still in its infancy, is mobile device access. For Apple’s iOS, the single unified app from Horizon Workspace 1.5 has been replaced by a File app and an Applications app. Android is evolving even faster, with a product feature that essentially behaves like VMware Player but for Android – VMware Switch. Basically, this allows a managed Android image to be run on a user’s Android device in secure isolation, separating private applications and data from work functionality – an Android phone within a phone.

Hands-On Labs

As well as the technical breakout sessions, one stand-out area was the Hands-on Labs; judging by the queues, they were consistently popular. Accessible either through BYOD devices or through time-slot-limited thin clients, these provided access to a range of VMware and partner technical lab sessions where guests could try many of the VMware technologies.
I tried a couple of the lab sessions, predominantly on Horizon Workspace and ThinApp, plus a demo of the latest version of NetApp’s Virtual Storage Console.

The Workspace lab I followed was an introductory guide, demonstrating provisioning file storage, applications and desktops to end users. A useful guide for administrators, rather than implementers.

The ThinApp lab was more broad-ranging, covering how to package applications and then how to implement them in Horizon Workspace, Horizon View or as part of a Horizon Mirage App Layer. There was also a sneak peek of a 64-bit ThinApp package – this feature was formally released as part of ThinApp 5.0 at VMworld Europe.

The NetApp Virtual Storage Console lab demonstrated the vSphere integration tool for NetApp storage. I’ve had quite a bit of experience with NetApp tools back to Virtual Infrastructure 3.0 days, including SnapManager for VMware, and this is by far the most impressive, including full integration into the vSphere 5.x vCenter web console. It has also streamlined many processes for configuration that required additional work (such as implementing RBAC). The lab was a pretty impressive tour of the application, including rapid VM provisioning, storage provisioning, backup and recovery.

The Solutions Exchange

The Solutions Exchange is a more traditional conference environment with many software, hardware and services vendors demonstrating their products. I picked up a number of key items here. In summary –

Liquidware Labs are adding VMware Horizon Mirage support to a number of products. One element is to extend their Stratusphere FIT VDI assessment tools to be able to carry out assessments for Mirage migrations. Another element is to use ProfileUnity to replace the user layer of Mirage to allow decentralised environments to manage user data transfer more efficiently (rather than replicating everything to a single point, as Mirage would).

NetApp demonstrated their E Series storage systems. While not as fully featured or general purpose as a FAS series filer, they’re aimed at high performance, data-intensive work. In particular, they push it as part of their StorageGRID object-based storage solution, though I was advised that they were about to bring out a completely new solution in this area.

HP, Dell and Lenovo were amongst a contingent of hardware vendors demonstrating VMware View with shared or direct graphics support using NVIDIA GPUs, as well as thin clients to support this.

Conclusion

So, to conclude, VMworld proved to be quite the showcase of the latest and greatest from an end-user compute perspective, with both gains in performance, particularly on the graphical front, as well as new features for client and application management and end-user access.


It’s all about the Storage!

During our customer engagements, we’re privileged to see multiple deployments utilising differing technologies whether that be for Data Centre or VDI workloads. Storage is a key factor in any deployment and in this article I’m going to look at some of the storage approaches for VDI deployments and discuss their benefits and disadvantages.

Performance

Let’s start with performance. Calculating IOPS in a VDI deployment is a hotly debated subject. You can “assume” industry averages for each workload, but these leave little room for growth and don’t cater for peak workloads, such as logon storms. Taking the maximum figures is a safer bet, but you’ll pay a premium for a lot of performance that will go unused most of the time.
So how do we find the correct sizing? One approach is to use the peak average. Using a planning tool to look at the total IOPS hour by hour will show when your largest IO spike occurs; determining how many machines are online during that short period and dividing the IOPS by that number gives you a “peak average”. This approach can provide a more realistic IOPS requirement, but the figures need to be monitored over a long period to ensure key business periods (such as month end, patching and AV scans) are included. It is a more realistic approach than a day-long or industry average.
If your planning tools don’t present this in a report, you’ll need to track the data hour by hour yourself, which can be labour intensive. Some planning tools offer a 95th percentile rule, where the top 5% of IO samples are discounted; this can reduce the storage requirement, but it can actually contribute to a slowdown, as you’re effectively not providing the peak performance when it’s most required.
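Both approaches can be sketched in a few lines of Python. The monitoring data in the example is made up purely for illustration:

```python
def peak_average(hourly_iops, machines_online):
    """IOPS per machine at the busiest hour (parallel per-hour lists)."""
    peak_hour = max(range(len(hourly_iops)), key=lambda h: hourly_iops[h])
    return hourly_iops[peak_hour] / machines_online[peak_hour]

def percentile_95(samples):
    """Nearest-rank 95th percentile: the top 5% of samples are discounted."""
    s = sorted(samples)
    return s[(95 * len(s) + 99) // 100 - 1]  # integer ceil(0.95 * n) - 1

# Illustrative data: a logon storm spikes IOPS while few machines are online
print(peak_average([200, 5000, 3000], [50, 500, 400]))  # 10.0 IOPS per machine
```

Comparing the peak average against the 95th percentile of the same dataset shows exactly how much performance the percentile rule is discounting.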
A final note on performance: make sure you monitor the correct workload. Don’t monitor XP machines and then deploy Windows 7 machines with completely different agents installed, as the figures will be skewed. If this is the only option open to you, update your findings during the proof of concept or pilot deployment when running the final build.

What storage should I use?

Now you have some performance figures, what do you do? Get some quotes from storage vendors, but make sure you’re sitting down, as the cost is probably going to be high. Another option is to deploy one of the myriad storage optimisation technologies on the market.
These optimisation technologies offer great potential: increased IOPS, de-duplication, and removal of the CapEx barrier to deployment by reducing cost. On the downside, the solution may be a little more complex in design and may complicate operational tasks, and many products are still in their infancy, frequently changing versions or architecture – so choose carefully and test thoroughly.

Licensing

You’ll need to consider the licensing model: is it per GB or per user, and concurrent or named user? It can make quite a difference, especially when coupled with maintenance.

Sizing

Sizing comes in two flavours. First, you’ll need to work out how much space you’ll require, and this depends on a number of aspects: where the user persona is stored, whether assignments are pooled or dedicated, whether you’re using linked clones (Citrix MCS or VMware View Composer) or full clones, and what the de-duplication rate is for your selected storage.
When sizing for pooled desktops, you need to size for the peak concurrency, plus room for growth and breathing space. When sizing for dedicated desktops you’ll need to cater for 100% of users and growth, as each user is assigned to a persistent desktop.
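As a sketch, pooled and dedicated sizing differ only in the concurrency factor. The growth and headroom percentages below are illustrative assumptions, not recommendations:

```python
import math

def desktops_needed(users, peak_concurrency=1.0, growth_pct=10, headroom_pct=10):
    """Pooled: size for peak concurrency; dedicated: peak_concurrency = 1.0."""
    base = math.ceil(users * peak_concurrency)  # desktops in use at the busiest point
    return math.ceil(base * (100 + growth_pct) / 100 * (100 + headroom_pct) / 100)

print(desktops_needed(1000, peak_concurrency=0.75))  # pooled, 75% concurrency
print(desktops_needed(1000, 1.0, headroom_pct=0))    # dedicated: every user gets one
```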
The other sizing aspect is the impact on the host. If your solution utilises a virtual appliance, you’ll need to account for its CPU and memory requirements when sizing your hosts, or deduct that resource from what’s available for desktop workloads.
Some storage designs can produce additional or surplus space which is a by-product of adding spindles to provide performance; this should not be used for other purposes, or seen as available space as it will impact the performance of your planned workloads - guard it from misuse!

Linked versus Full clones

Linked clones reduce storage requirements by storing a single base image and linking multiple “delta” disks to it, which can create massive savings. You’ll still need to account for a number of base images, perhaps several per datastore depending on the broker you use.
Linked clones are great for pooled desktops, but not so great for dedicated desktops, as you’re tied to the base image/replica and therefore the VMFS datastore. As long as you plan your storage for growth and performance they’re a perfectly viable solution, but you’ll lose Storage vMotion capability and perhaps vMotion across clusters, which can impact maintenance.
Using full clones for dedicated desktops provides less of a tie than linked clones: it gives you full mobility within the data centre, and you just need a storage solution that can de-duplicate the data to a sensible point to make it affordable. Additionally, orchestration of the build and integration into the broker may be required.

Local or shared storage

Optimising local storage provides the lowest-cost storage, but in some cases can introduce increased complexity or the loss of key hypervisor features such as VMware’s HA or DRS. HA, however, can be provided by other means for pooled desktops (such as by the broker), but not for dedicated desktops.
Without DRS you’ll have to be more cautious with your sizing per host and manage capacity at the host level rather than the cluster, as you’re unable to automatically balance workloads across the cluster. While desktop workloads may not be as sustained as server workloads, it’s quite easy to find that some hosts in a cluster consume excessive CPU while others are underused. In certain cases it doesn’t take many guests with runaway CPU processes to put a host under pressure; thankfully you can manage these processes with tools like RES and AppSense.
Maintenance is also complicated with local storage, as you’ll have to manually drain a server of its users before entering maintenance mode rather than moving those workloads off to a standby host; you may even have to wait for an out-of-hours maintenance window. With shared storage, DRS is possible (and Storage DRS if using full clones) and dedicated desktops can be hosted and protected by HA without any additional solutions, but it potentially costs more than local storage. It’s a question of balance and what your main requirements are.

Management

Finally, regardless of your virtual infrastructure make sure that you work closely with the storage team so that your management servers have guaranteed / isolated performance. This ensures that you can manage your environment even if your workloads are consuming all the performance for their allocated disks.
If you would like to talk to us about assisting your organisation with storage requirements for your data centre, workspace or cloud project, please contact us.


As a virtualisation practice market leader we monitor industry events to see what’s trending and making waves, and generally to seek out snippets that grab our attention. Around this time last year the 44CON Security Conference was brought to my attention through my Twitter stream. A quick bit of digging and I soon learned this event is held annually in London, offering an independent, non-vendor approach to current security issues for customers and vendors. While vendor-sponsored, it still purported to offer a content schedule free of product bias. Tempted by this, I made a note in my diary to investigate the next conference and establish exactly what was on offer and how I could align it to my role within Xtravirt.

Purpose

In a nutshell the conference covers many aspects; items that I drew upon were:

Opportunity to actively participate in guided workshops to learn more about common security flaws and pitfalls

Open-floor panel discussions during the evening, chaired by recognised InfoSec experts

For me?

A few items that I wanted to focus upon:

As an advisor within the Technology Office I wanted to hear the stories from presenters of how they're still fighting many of the same data centre issues

In a world where cloud computing is apparently adopted by everyone, I wanted to meet people who'd been responsible for, or part of, securing such environments and learn what they experienced

If there's no sight of securing cloud environments then I'd aim to find out why

Open my eyes a lot more to this aspect of the industry

Get my hands dirty in one of the guided workshops

So how did I get on?

How did the event unfold?

The networking opportunities exceeded my expectations. As a frequent attendee of virtualisation industry events, I usually find my networking peers present, so arriving at an event without knowing who you’re going to meet can be a little daunting. That certainly wasn’t the case here: after picking up my badge, it was relatively plain sailing once a few conversations had been had.

Keynote

The keynote session by Haroon Meer from Thinkst discussed the quantity and quality of InfoSec conferences globally: much of the content is likely to be repeated, and the quality of speakers may not be to the audience’s liking, putting the value of the conference in jeopardy. Is this the speaker’s problem or the organiser’s? Further discussion leant this more toward the organiser and their due diligence, plus the pressure applied by sponsors to shape content in return for promotion. Big events command high entrance fees and travel expenses, whereas local events are often offered for minimal cost or even free, and smaller events are seeing growing adoption due to their greater geographical spread. Smaller events also provide an opportunity for up-and-coming speakers to pave the way upward; the downside is that their signal-to-noise ratio is often far lower.
There was far more content and context but the overall messaging seemed to resonate across many conferences regardless of whether they’re IT related or not.

Context clues

Moving on from the 147-slide epic, I threw myself into a classroom session hosted by Carbon Black. Their session opened by talking about the approach to cyber-attacks and how the traditional ‘prevention play’ tactic is dead, but there are other ways. Global anti-virus companies promote the idea that a single install will cover all eventualities, but as we know this simply isn’t the case. How do you protect your company? Say ‘no’ to everything? That’s simply not going to work; users always find a way.
Tracking down threats and assessing whether they pose a real risk can be broken into four headline areas.

1. Visibility – do you know what’s going on in your environment? How many versions of the same product are deployed? A threat to one version may not be a threat to another.

2. Metadata – do you know your environment? Use your data to consider what you think is an anomaly. This is where the global anti-virus companies can’t help you.

3. Frequency – irregular patterns of activity don’t necessarily mean there’s a problem. If you have a grasp of your metadata, you’d know whether it was a problem.

4. Relationships – combine the three topics above to create a relationship mapping, and then you have far more intelligence than any one global anti-virus company would ever know.

Zero false positives and zero false negatives is far more achievable with this style of approach.
The classroom exercise presented attendees with an environment using real-world anonymised data of discovered files, and from there we had to review the versions, frequencies and relationships to each other, applying the points above through a human identification approach. What made this exercise fascinating was how my thought process changed as I moved from one identification stage to the next, and the two guys I worked alongside also challenged some of their previous decisions. Collectively we challenged each other as well as ourselves. Gut feel and experience (the human touch) meant we achieved a higher success rate initially, but the further we progressed, our limited ability to remember previous decisions led to a far poorer outcome. A thoroughly enjoyable session.

Culture & CNA Behaviours

Char Sample presented my next session; she discussed Culture and Computer Network Attack behaviours. Much of the talk was based upon her recent work and discussed Hofstede’s cultural dimension framework and how much of this assisted and provoked more questions in her studies. Out of respect for the level of depth in the work Char has completed I’ll just impart a few areas that really stood out for me.
The opening gambit posed the scenario about applying new methods to old problems:

Rather than thinking about IP addresses think about what the attacker is thinking to give an idea of the next move

Psychological profiling provides mixed results, and placing people into different buckets usually peaks at around 10 'types of people'

Introduce the cultural angle and, as an example, consider how we approach problems: we'll all arrive at the same answer but establish it in many different ways. Why? The way we're culturally brought up and the experiences we're exposed to shape this. In this session 'culture' was defined as "the collective mental programming of the human mind which distinguishes one group of people from another". Another everyday analogy was offered relating to football and the World Cup: every team plays football, but they all do it differently, and at times it's clear to observe.

The session continued through many cultural facets and how we're moulded into a way of functioning throughout our lives, the influence of culture on cognition being inescapable and habitual. An example comparison was thrown out to the audience: Eastern culture takes a more holistic approach to problems, with everything considered to form an answer, as opposed to the Western approach of doing what's needed, fixing the challenge and moving on. Applying this thought process to software development could help a would-be attacker by considering the originating development team's location and style of code creation. Perhaps an initiative is needed to offer code reviews within designated universities to understand what role culture and personality play in introducing blind spots and bugs.
A very deep session that provoked many questions from the audience and opened up an area outside of the typical offensive and defensive stereotype attitudes.

Cyber Defence or Defending the Business?

The session delivered by Bruce Wynn focused on the pressures and challenges of how areas of the business are forced to make important decisions about cyber protection, and how this can often distract from, and cause oversights in, protecting the business itself. The content at times resonated with recent discussions I'd been party to, which drew me further into the session.
There's a perception by some that a traditional technical approach using penetration testing, AKA 'pen-testing', is a one-off exercise that will mitigate all concerns once issues have been addressed. That is of course not the case, and in many respects it could be seen as opening the door to wider abuse. Penetration testing provides the tester with a full report of your organisation's technical vulnerabilities, which presents immediate areas to consider:

Are you using a trusted and known company or an independent contractor?

What happens if the recommendations aren’t implemented for a period of time?

Does the trusted 'pen-tester' have the opportunity to gain access themselves?

The trusted ‘pen-tester’ has the responsibility to keep the information safe, but what if it’s shared internally?

Assuming a test has been undertaken and identified issues addressed, where’s the update cycle? Defining a baseline ‘standard’ version or design in itself places an organisation into a known published state. A compromise will always exist and there will always be a need to update or upgrade.

Know what you have

What's important to your business? An example of a well-known brand and its products was discussed openly in the session. When the audience were challenged as to what the most important aspect of their business was, no one managed to provide the correct answer. In the context of the discussion it had nothing to do with the product or its design; it was in fact the financial aspects, due to the nature of how the company trades. Until the right questions are asked, never assume what's important.
Where third-party supply chains are involved, are you able to trust the suppliers? What about their suppliers? You may pass confidential information to a close provider, but can you ensure that information doesn't leave their environment? Keeping your own house in order is of course a must. IT system administrators and security team members have varying degrees of privileged access to the heart of IT systems for internal and external functionality.
This was my last session for the day and was certainly a good way to bring it to a close, but did I manage to answer the question…

Who isn't moving to cloud?

Organisations that aren't adopting cloud tend to be those for whom data retention regulations are the make or break of the company. This is something I've gleaned from my attendance at the CloudCamp conferences, where Kuan has drummed home that "laws are local and the internet is global". Once data is out of your physical control, where does it go? A vendor will tell you exactly where, but there's a huge element of trust: trust in the administration and the vendor's ability to maintain their internal governance, in the transmission method of data to and from your organisation, and ultimately in where the data resides and that it isn't moved elsewhere. Of the people I spoke with, there is certainly a view that workloads shouldn't simply be shifted: locale-critical data should be considered for remaining in-house, public cloud used for less mission-critical workloads, and SaaS considered when refreshing service provision.
I think the message was clear here, people are moving to the Cloud but in small leaps of faith.


In preparation for an upcoming project, I'm installing vCAC 5.2 in my home lab. Anyone who has installed vCAC will have used the vCAC pre-requisite checker tool. This tool is simply fantastic. vCAC has a huge number of pre-requisites that need to be configured, and this tool does a great job of capturing everything. I like that it provides instructions on how to resolve issues when components need attention. There is also a 'Fix Issue' button which allows for an automated fix of a handful of the requirements.
With the checker reporting I was good to go, I proceeded with the install. All was going well until I came to install the DEM worker, where I was met with the following error.
I completed a few basic checks to ensure DNS was functioning correctly in the lab; however, I found no issues there. Upon further investigation I looked to see what services the vCAC Server Setup had installed previously.
There is only one service, the "VMware vCloud Automation Center Service", and it wasn't started.
The vCAC server setup allows you to specify a service account to assign to this service, which I had ensured was a local admin on the server. When trying to start the service, it halted with a permission error.
After granting the account the right to ‘Log on as a service’ the service did start and I was able to finish the installation of the DEM worker.
It seems odd to me that the Pre-requisite tool checker doesn’t check this, as it seems such a comprehensive tool.
Anyway, problem resolved.
If you would like to talk to us about assisting your organisation with cloud automation solutions or their management with the vCenter Operations Management suite, please contact us.


VMworld Europe Cloud Management Launch

At today’s VMworld Europe general session VMware announced the launch of the new Cloud Management suite of products. Of the products launched, vCenter Operations Management Suite version 5.8 was discussed in detail, and I run through a highlight summary here with some of my thoughts too.

vCenter Operations Management Suite 5.8

The new version of vC Ops defined three key areas of focus for VMware’s approach to simplify and automate Operations Management.
Intelligent Operations

Headline features aside, what does this mean? VMware have released a version with an abundance of new features. There's the capability to link into Microsoft applications such as Microsoft SQL and Exchange with out-of-the-box (OOTB) dashboards. There's support for monitoring of Microsoft Cluster Services (MSCS) and Database Availability Group (DAG) clusters. There are additional OOTB storage dashboards providing visibility into physical storage infrastructure and data paths (HBA, Fabric, and Arrays), Hyper-V support, and the ability to monitor and manage hybrid cloud deployments with Amazon AWS. In a nutshell, VMware are addressing the core services as well as the service provision.

The New Features

Let’s have a look at the features and overview how they look.

Intelligent Operations

Enhanced monitoring of Microsoft applications
Clear to see and easy to review is the traditional red/amber/green presentation.
Out of the box dashboards for Tier one applications are now available with the release of specific Management Packs for Microsoft applications.
What's in a Management Pack?

Knowledge

Based upon research conducted with SMEs on application specific deployment and common issues

Discovery

Automatically discover application components, their inter-dependencies and the connection to their underlying infrastructure

Policies

Built-in monitoring policies for common applications that include default metrics, collection intervals, thresholds and alerts

Dashboards

Pre-configured and pre-defined application specific dashboards for visibility and troubleshooting

Unified Management
There's an acceptance of other vendors' hypervisors; below I cover Hyper-V, with screenshots evidencing connection and statistics:
What can you expect to see?
Using the vCenter Hyperic and Hyperic Management Pack you'll be able to review results in a custom user interface:

Discovery

Hyperic agent deployed in Hyper-V Host

Discovery of Hyper-V hosts & associated VMs

Topology

Relationships created in vC Ops

Hyper-V Host -> Virtual Machine -> Operating System

Monitoring of critical levels

CPU

Storage

Network

Memory

Out of the box Hyper-V dashboards

Cluster, Host, VM Utilisation

Top 25 by CPU, Memory, DISK IOPS, Network, etc

Database Capacity and Performance

Disk Space Used, Usage by VM, Latency, Commands per Second

Load Heat maps

CPU, Memory, Disk, Network

Additional Items

Support for SCOM Maintenance

Identify items from SCOM that are in maintenance mode

Hyper-V Events

Two Options for getting Hyper-V Information

Through vCenter Hyperic and the Hyperic Management Pack for vCenter Operations

Through Microsoft SCOM and the SCOM Management Pack for vCenter Operations

Hyper-V Data and Dashboards are the same for each source

Amazon AWS Support

Amazon’s web service management is now also accessible and I’ll run through a few highlights below.

AWS Management Pack

Results available in the Custom UI

Pulls data from AWS Cloudwatch

Leverages the REST API exposed by AWS

Supports multiple AWS services such as:

Elastic Compute Cloud

EC2 instances

Elastic Block Store (EBS) volumes

Elastic Map Reduce (EMR)

Elastic Load Balancing (ELB)

Auto Scaling Group (ASG)

Configurable by Service

Only bring in the services you wish to monitor

Out of Box Dashboards bring it all together

Monitoring of EC2 Instances

Pulls all default metrics from Cloudwatch

Imports AWS alarms as vC Ops Hard Threshold violations

Group by region

AWS currently has eight regions globally. You can subscribe to specific regions.

E.g. to subscribe to Eastern USA, use the region identifier us-east-1 in the region field

Regions drive dashboards

Visibility into relationships between AWS objects

EMR resources and EC2 instances

Auto Scale Grouping

Automatically aggregate instance metrics on groups

AWS Entity Status

Determine the power state of the AWS resources

I'm very excited about this announcement, as I personally have completed a number of customer rollouts of the management suite over the past couple of years with Xtravirt. As a VMware Management Competency Partner, an official Consulting Partner of Amazon Web Services in the AWS Partner Network, and having gained the Microsoft® Silver Competency as a Midmarket Solution Provider for SMB Customers, the future looks great for the use of this new version across our customer base. General availability for vC Ops 5.8 has been cited as mid-December 2013, and from my experience with the beta version so far I see plenty of opportunity to introduce these features to our customers, for their current data centre deployments and for cloud migration exercises.
If you would like to talk to us about assisting your organisation with VMware vSphere, VMware vCloud, AWS or Microsoft Hyper-V based solutions and their management by the vCenter Operations Management suite, please contact us.


VMware are continuing to evolve their Horizon Mirage product, regularly adding new features. One of these latest additions with the release of version 4.2 is the web management console. Rather than being just a traditional end user tool, the web management console is for helpdesk personnel to undertake a range of client-side tasks on selected devices. Tasks such as enforcing layers, system reboots or reverting a client to a previous snapshot.
Documentation for the web management console feature is, in my opinion, limited, so I thought a guide would prove useful to people looking to use the feature; hence this blog post.

Plumbing it in

The installation is not too difficult; it uses a standard Microsoft Installer, and requires a server with Microsoft IIS 7 (or later) and Microsoft .NET framework v4.0 installed. As the console is intended for use by support personnel it is unlikely to be under intensive load or require high levels of resilience, so co-existence with the Mirage Management Server role has proven to be acceptable.

Accessing the console

To log in and access the console you will need a web browser and Microsoft .NET v4.0 (or later) installed on your client. The portal provides access to two functions. The first, and more important, is the Helpdesk interface, where daily routine tasks can be completed. The second is the Protection Manager dashboard, where status reporting for Mirage can be viewed.
If you are using Microsoft's Internet Explorer, you will need to be on version 9 or above (this is noted in VMware's documentation; anything less results in a 'Browser is not supported' message). No other browsers are listed in the official documentation; however, when tested with Internet Explorer 8, the browser warning did state that Mozilla Firefox, Google Chrome and Apple Safari are supported (though unfortunately nothing on which versions).
Figure 1: Don't use Internet Explorer 8!
The Helpdesk interface of the Horizon Mirage Web Manager can be accessed at http://(WebManagerServer)/HorizonMirage, while the Protection Manager dashboard can be accessed at http://(WebManagerServer)/HorizonMirage/Dashboard.
In either case, you will be required to provide authorised credentials.
Figure 2: Mirage web management console
It is probably worth taking a look at role-based access in Horizon Mirage prior to letting your local helpdesk users onto the system. The Horizon Mirage console has a 'Users and Roles' section that permits administrators to set up and grant role-based access. A number of pre-defined roles are already available, which can be granted to Active Directory groups for ease of use.
Figure 3: Role Based Access

Web management console

Once a user is logged into the web management console it is possible to search by either User or Device. For example, in the screenshot below it can be seen that the client ‘MXP’ was entered:
Figure 4: Web management - searching for a device
Once found and selected, a console specific to that client is presented and available to work with.
Figure 5: Web management - client console
Reviewing the top toolbar (from left to right) the following action functions are available:

Enforce Layers - Enforces all layers (Base and Apps) assigned to the client. There is no function to select here; it will just enforce what is assigned with a requirement to review a confirmation screen

Set Drivers - Sets the Driver Library for the client. This is functionally similar to Enforce Layers and is used to update operating system drivers on a client

Protection Manager dashboard

The Protection Manager dashboard provides the administrator with a high level view of the status of the estate. Once logged in the administrator is presented with the following screen.
Figure 8: Dashboard - opening screen
Each of these sections is active and can be clicked upon to drill into and provide further information.
Figure 9: Dashboard - report
Note the search button – this returns the Administrator to the regular web management console view.

Conclusion

My exposure to and experience of this tool in real-world deployments has certainly proven it to be very useful and powerful. It is also clear that much of the work VMware are applying to the product at the moment improves manageability and ease of use as well as adding functionality. Even from a deployment perspective there's little effort needed to get it packaged and distributed to client machines, as it does not have a dependency on an MMC. So far, it ticks a lot of boxes for me.
If you would like to learn more about VMware’s Mirage, other aspects of the Horizon suite or require assistance with your End User Computing challenges please contact us, we have a lot of experience to share.


Working for Xtravirt, not only are you consistently working with cutting edge technologies in large-scale enterprise environments, but you’re also encouraged to participate in and contribute to the online community.
I've been to a few VMUGs now (London and the UK), and have always found the community sessions among the most enjoyable. While the vendor presentations are also good and provide a great opportunity to learn about new technology, they're there to deliver a marketing message. A community member standing in front of the group, talking about something they are obviously very passionate about, with real-world experience, is what the user groups are all about. So, during the closing speech of the January 2013 London VMUG, Alaric Davies (a member of the London VMUG steering committee) was seeking out community member presentations for the next VMUG. Whilst I didn't volunteer at the time, it got me thinking.
After a few days an idea sprang to mind: why not talk about the pain points of working on a 4,000-seat VDI deployment? VDI and EUC are still industry buzzwords, with every year being labelled the 'Year of VDI'. I speak with people embarking on VDI projects; some are just finishing, others are struggling and some have failed. Having just come off a successful 4,000-seat EMEA VDI project, surely that would get a few people in the room?
My colleague, Grant Friend, and I prepared an overview of this session idea, submitted it to the VMUG committee and were fortunate enough to be approved. I'm sure I'm not alone when I say I'm not a fan of death-by-PowerPoint presentations; personally, I find some of the best presentations are those with audience participation. With this in mind we kept the slide deck short and sweet, dropping in trigger points to explain how we had overcome project challenges, to see if others were experiencing the same issues and, if so, how they overcame them.
The presentation was well attended and we had some good audience participation, the hot topics were:

Application auditing - how was it done?

MS Windows 7 image - getting user buy-in

Stateful or stateless?

It seems that no matter how hard people try and no matter what the flavour of product, things always get missed. There always seems to be that hidden application somewhere that only one person uses which is critical to a business process. A common theme from the audience contribution was auditing web applications and the difficulty they presented. The outcome of this conversation was that attention to detail, careful analysis and interviews with the business were critical areas for success.
The next hot topic was locking down the Windows 7 image, and how far is too far? The key point I tried to get across here was that user buy-in is key to any VDI project. If your users aren't happy, your project has a high risk of failing. In my experience, I've seen many people lock down their Windows images so that they look like something out of the 1990s; yes, they run fast, but who in their right mind wants to replace their existing desktop with a VDI desktop at an 800x600 resolution and a 32-bit colour scheme? The key here is to leave some visuals but remove some of the background items that consume resource; easy examples would be to leave the orb but disable the 'show window contents while dragging' feature.
The final talking point of the afternoon was the argument between stateless and stateful desktops. All too often people state which desktop type they want the project to use without thinking through whether it is right for every use case. While stateless is usually the end goal, we found success in using stateless desktops for the quick-win use cases and stateful desktops for the more complex ones. We could then get user buy-in, get everyone working on the VDI solution, then work with business owners and vendors to get applications working in the new environment without impacting user performance, the end goal being that all users end up on a stateless desktop.
Following the success of the VMUG we were approached by the EMEA vBrownBag team to run through the same presentation again in one of their online sessions, which has since aired and can be viewed here.
I had great fun delivering both of these sessions and felt a great sense of pride and achievement in being able to share some of the knowledge learned with other community members. I’ll certainly be putting myself forward to speak again at future events.
If you would like to talk to us about assisting your organisation with an End User Computing based solution, please contact us.


I was recently involved in a data centre transformation project, collapsing and migrating smaller distributed IT solutions across EMEA to a central location. Part of my remit was to investigate and establish application dependencies. Some of that information was relatively easy to obtain, but for a few applications their original experts had left the organisation, leaving gaps in the existing knowledge. Another factor that complicated my investigation was confirming what network communication actually existed between those applications, specifically at port-level detail. Establishing that in a reliable fashion was important, as some applications were due to communicate over the WAN after migration.
That's where "netstat" came in quite handy. I could have used TCPView, but some of the systems I was dealing with were quite old and running MS Windows NT/2000, on which TCPView is not supported. More importantly, the organisation was not entirely sure about the inner workings of these applications and, therefore, these systems were under strict change control. So, built-in tools were the way to go. Netstat has always been part of MS Windows NT and was therefore my tool of choice.
I approached the investigation in two ways. Firstly, where machines were identified as being a part of an application I’d review which connections remained persistent and active. Secondly, on the same machines I’d observe and monitor ports that were open but only listening. This was the most interesting as it invariably revealed the machines were undertaking other tasks that the customer wasn’t aware of. Not surprisingly, I found a few of those!
I used netstat with another built-in function “findstr” to filter out the unwanted entries like this:
netstat -an | findstr -i ESTABLISHED
This command lists all connections and ports from the local machine to remote machines for ESTABLISHED connections. These can change over time or some might be missing as it’s a “point-in-time” snapshot of the connection state, but it does give a good idea of conversations going on between machines. This process can be repeated every now and then to ensure connections are not missed.
Depending on the system, there might be a large number of established connections. In case of a data centre migration investigation, the focus should be on machines that are connecting from a remote network. That said, the rest of the connections might also be of interest and might reveal unknown connections.
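As a purely illustrative sketch (the sample output and addresses below are invented, not from the engagement described), repeated netstat captures can be summarised with a short script to show which remote endpoints hold established connections:

```python
from collections import Counter

# Sample "netstat -an" output, invented for illustration.
SAMPLE = """\
  TCP    10.0.0.5:49201     192.168.1.20:1433    ESTABLISHED
  TCP    10.0.0.5:49305     192.168.1.20:1433    ESTABLISHED
  TCP    10.0.0.5:49388     172.16.4.9:445       ESTABLISHED
  TCP    0.0.0.0:135        0.0.0.0:0            LISTENING
"""

def established_remotes(netstat_output):
    """Count remote address:port endpoints with ESTABLISHED connections."""
    remotes = Counter()
    for line in netstat_output.splitlines():
        parts = line.split()
        if len(parts) == 4 and parts[-1] == "ESTABLISHED":
            remotes[parts[2]] += 1  # third column: remote address:port
    return remotes

print(established_remotes(SAMPLE))
# Counter({'192.168.1.20:1433': 2, '172.16.4.9:445': 1})
```

Merging the counters from snapshots taken at different times builds up a reasonable picture of which conversations are persistent rather than transient.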
For example, here’s a screenshot of one of my machines:
Here you can see my machine is connected to a well-known service using port 5938 (remote machine - third column from the left).
For listening connections, I simply changed the string to:
netstat -an | findstr -i LISTENING
As you can see, the string has a minor change but this time, it lists all the ports the machine is listening on (for the local machine – second column from the left). It’s useful to run this to see exactly what is running as checking the running services doesn’t always provide an accurate picture. Also, it’s a useful way to reveal if an application is talking on non-standard TCP ports e.g. someone manually changing the SMTP port from 25 to 26.
There are just over a thousand well-known ports, and we might be interested in others as well, but generally the focus is on the ports that are less than 4096 and important to the role of that system. As before, in a data centre migration project, the ports of particular importance are those from which a service is provided and/or which will have to be accessed from a remote location.
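Purely as an illustration (the function and sample output below are invented), captured netstat output can be filtered for listening ports below that threshold in much the same way:

```python
# Extract local LISTENING ports below a threshold from "netstat -an" output.
SAMPLE = """\
  TCP    0.0.0.0:135      0.0.0.0:0    LISTENING
  TCP    0.0.0.0:445      0.0.0.0:0    LISTENING
  TCP    0.0.0.0:3389     0.0.0.0:0    LISTENING
  TCP    0.0.0.0:49152    0.0.0.0:0    LISTENING
"""

def listening_ports(netstat_output, max_port=4096):
    """Return the sorted local ports in LISTENING state below max_port."""
    ports = set()
    for line in netstat_output.splitlines():
        parts = line.split()
        if parts and parts[-1] == "LISTENING":
            # Second column is the local address, e.g. "0.0.0.0:3389".
            port = int(parts[1].rsplit(":", 1)[1])
            if port < max_port:
                ports.add(port)
    return sorted(ports)

print(listening_ports(SAMPLE))  # [135, 445, 3389]
```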
Now let's take this one step further. If you have Windows XP/2003 or above you can add the switch “o” to the command i.e.
netstat -ano | findstr -i ESTABLISHED
Doing that, exposes PID (Process ID) information as well on the far right of the output, as shown in the screenshot below:
Now this is extremely useful if used in conjunction with “Task Manager”. Process ID information is generally switched off in Task Manager but can be switched on simply by:

Selecting the "Processes" tab

Clicking "View"

Clicking "Select Columns..."

Ticking the "Process ID" box

Sorting the resulting processes list by “PID”, shows the following result:
In the screenshot above the highlighted PID matches the PID in the command window capture previously shown. So, at the time of capture and in this example, SkyDrive had three connections made from my machine to the service. The remote IP addresses do indeed belong to Microsoft. How to verify that is left as an exercise for the reader. Where possible, this switch allowed me to extract the information required with even greater ease.
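To illustrate the join between the two tools (the sample output, PIDs and process names below are invented, not taken from the engagement), the PID column of "netstat -ano" can be matched against a PID-to-name map such as one transcribed from Task Manager:

```python
# Sample "netstat -ano" output; the fifth column is the owning PID.
SAMPLE = """\
  TCP    10.0.0.5:49201    131.253.40.5:443    ESTABLISHED    1044
  TCP    10.0.0.5:49305    131.253.40.9:443    ESTABLISHED    1044
  TCP    10.0.0.5:49388    192.168.1.20:1433   ESTABLISHED    2210
"""

# PID-to-process-name map, e.g. read from Task Manager's PID column.
PROCESSES = {1044: "SkyDrive.exe", 2210: "sqlservr.exe"}

def connections_by_process(netstat_ano_output, pid_to_name):
    """Group ESTABLISHED remote endpoints by the owning process name."""
    grouped = {}
    for line in netstat_ano_output.splitlines():
        parts = line.split()
        if len(parts) == 5 and parts[3] == "ESTABLISHED":
            name = pid_to_name.get(int(parts[4]), "unknown")
            grouped.setdefault(name, []).append(parts[2])
    return grouped

print(connections_by_process(SAMPLE, PROCESSES))
```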
Using this method, I was not only able to discover services running on machines that nobody knew about, but was also able to establish communication relationships between old distributed systems. As a result, I was able to migrate those services with greater confidence, having pre-staged the pre-requisite firewall changes. Most importantly, all of this was undertaken in accordance with the client's policy of not making any changes to the software environment of these machines.
If you would like to talk to us about assisting your organisation with data centre transformation, please contact us.


Last week I was lucky enough to be able to do a live demonstration of Xtravirt's vPi, the free and open VMware integrated OS based on Raspbian for Raspberry Pi devices, on the weekly vBrownBag EMEA session hosted by Gregg Robertson and Arjan Timmerman.
In this presentation I cover off the basics around Raspberry Pi, then move on to what vPi has to offer in terms of its default feature set. After this I then carry out the live demonstration showing what it is capable of, and how the various tools, SDKs, and scripting languages can be put to use together to create some impressive automation capability.
You can watch the presentation and lab demonstration in the linked Vimeo recording below.
http://vimeo.com/71875957
If you would like to find out more about vPi, or download a free copy, click this link.


At a recent engagement I was involved with the design and deployment of a Virtual Desktop Infrastructure (VDI) and hit upon a problem where the MS Windows 7 computers appeared to always be using the ‘Classic’ theme.
Simply put, users were seeing the 'Classic' look rather than the expected Windows 7 theme.

The problem

As users logged in they would be presented with the Windows 7 theme momentarily, only for it to be replaced by the 'Classic' theme. A little digging suggested this was happening when users logged in and their credentials traversed domains, with cross-domain policy processing applying a very restrictive policy. Through further investigation, however, I established it was random: even users in the domain local to the computer were experiencing the same problem.
There were a number of steps I went through before resolving the issue and below I outline my tests and results. As my Maths teacher always said, “It’s always best to show your working out”.

The environment

This deployment consisted of:

MS Windows 7 32bit clients

Citrix XenDesktop 5.6 - Pooled and dedicated desktops

AppSense Environment Manager 8.2

MS Windows 2008 R2 domain hosting the end point computers known as the computer domain

Users are members of one Active Directory Forest with multiple MS Windows 2003 child domains at various functional levels from Windows 2000 to 2003, known as the user domain

Two way trust in place between all domains, including shortcut trusts

Group Policy

To begin with I opted to check the most obvious place, the restrictive GPO. One of the settings in the restrictive policy was forcing the 'Classic' Start menu, found on the User side of the policy in 'Administrative Templates\Start Menu and Taskbar' (see screenshot below). I've read many articles stating that this setting isn't compatible with Windows 7 (even the supported-on list doesn't cover Windows 7), so it shouldn't be taking effect. This is covered under 'Changes to legacy Group Policy settings' in http://technet.microsoft.com/en-us/library/ee617162(v=ws.10).aspx
I configured a policy in the computer domain using loopback processing to reverse this setting to ‘Disabled’. After many reboots and having ensured the group policy was synchronised and being applied correctly the problem still continued.
Result: No change

Performance Options

As I delved deeper into this problem I looked toward the visual effects settings found in the Performance Options under Advanced System Properties, in particular 'Use visual styles on windows and buttons'.
When checked, you receive the updated Windows 7 Start button; left unchecked, the old 'Classic' Start button is shown. Unfortunately, in this deployment not all users have the ability to change this setting. Even if they did, the environment consisted of pooled desktops, which meant they wouldn't always be presented with the same desktop.
However, as a test I forced this option within the master image and pushed out an update.
Result: No change

Personalization Themes

This led me to think more about the pre-defined Windows themes and how they contain these configurations out of the box.
The configuration is managed within Control Panel under Appearance and Personalization. The themes are simply files that set various visuals; they can be found in ‘C:\Windows\Resources\Themes’, with each theme having its own sub-directory.
As I previously mentioned, the users are not permitted to adjust or apply themes, nor would it be desirable to allow them to change the settings. I decided the simplest way to apply this would be via a logon script that forces the theme and the desired effects configuration.

The script runs the relevant API to set the theme and then after 10 seconds closes the console. This gives the theme enough time to take effect before the desktop is presented. For it to complete properly the Theme needs to be in the theme location detailed in the script above.
AppSense Environment Manager formed part of this deployment so I was able to utilise the scripting tools to execute the script when the user logged on.
Result: It worked; however, it increased the logon time by 10 seconds. Watching the system run through its configuration while displaying a ‘Please Wait’ message as it applied the theme wouldn’t present a positive start to the user experience. While this worked, it wasn’t ideal.
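The original script isn’t reproduced here, but the logic it follows can be sketched in a few lines. This is a hedged illustration only: the theme path and the process closed at the end are assumptions, not the actual script used in the deployment.

```python
import os
import subprocess
import sys
import time

def apply_theme(theme_path: str, settle_seconds: int = 10) -> str:
    """Open a .theme file (Windows applies it on open), give it time to take
    effect, then close the window it leaves behind. Windows-only sketch; the
    process name passed to taskkill is a hypothetical placeholder."""
    if sys.platform != "win32":
        return "skipped: Windows only"
    if not os.path.exists(theme_path):
        return "skipped: theme not found"
    os.startfile(theme_path)            # applies the theme
    time.sleep(settle_seconds)          # the 10-second settle described above
    subprocess.call(["taskkill", "/f", "/im", "control.exe"])  # hypothetical target
    return "applied"

print(apply_theme(r"C:\Windows\Resources\Themes\corp.theme"))  # corp.theme is hypothetical
```

The 10-second sleep is exactly where the logon-time penalty described above comes from; shortening it risks the desktop appearing before the theme has finished applying.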

Registry Keys

The registry is where I next chose to concentrate the bulk of my efforts and there were three areas I focused upon.

Set ‘Use visual styles on windows and buttons’

As detailed earlier in this article, I tried setting ‘Use visual styles on windows and buttons’ under Performance Options. That failed, but I was adamant this was the correct setting.
I looked at the registry key for this setting:

Within this should be a string value of ThemeActive, set to 1 to enable the setting.
However, this registry key alone is not enough; the relevant API needs to be called to make the change, which in turn requires a reboot to take effect. I knew this wouldn’t work on pooled desktops but wanted to see the outcome on dedicated desktops; alas, this didn’t work.
Result: No change
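For reference, the value described above can be expressed as a .reg fragment. A minimal sketch follows; note that the key path shown is the commonly documented ThemeManager location, an assumption on my part since the exact key isn’t reproduced in the text above.

```python
# Build a .reg fragment that sets ThemeActive. The ThemeManager path below is
# an assumption (the commonly documented location for this value).
REG_PATH = r"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ThemeManager"

def theme_active_reg(enabled: bool = True) -> str:
    value = "1" if enabled else "0"
    return (
        "Windows Registry Editor Version 5.00\n"
        "\n"
        f"[{REG_PATH}]\n"
        f'"ThemeActive"="{value}"\n'
    )

print(theme_active_reg())
```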
I then considered instigating a desktop refresh and trying to force the setting to apply using a scripted action:

I threw the key creation and the above line of script into AppSense and rebooted a client machine a number of times.
Result: No change

Change Visual Effects through the Registry

A colleague pointed me in the direction of where the visual effects are configured via a hex value in the Desktop registry settings. I’m not going to go into the detail of how this key was configured as there is an excellent post on it here.
The registry location is:

HKEY_CURRENT_USER\Control Panel\Desktop

The binary key in question is UserPreferencesMask and needs to be altered to include the correct hex value to configure custom performance options.
Once again a restart is required and AppSense was configured to write this registry key at logon and thus the relevant API should be called.
I could see the registry key was being applied but it made no change to the appearance. I checked the ‘Default User’ configured on the image to ensure this was configured as desired and it had the default hex value of 9E3E03.
Result: No change
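UserPreferencesMask is a REG_BINARY value in which each visual effect corresponds to a single bit, which is why the default shows up as the hex value 9E3E03. The bit positions used in the demo below are purely illustrative assumptions (the post linked above maps them properly); this sketch only shows the mechanics of flipping one:

```python
# Toggle individual bits in a UserPreferencesMask-style binary value.
# 9E 3E 03 is the default quoted above; which bit controls which visual
# effect varies, so the position used in the demo is hypothetical.
DEFAULT_MASK = bytearray.fromhex("9E3E03")

def set_bit(mask: bytearray, byte_index: int, bit: int, on: bool) -> bytearray:
    out = bytearray(mask)
    if on:
        out[byte_index] |= 1 << bit
    else:
        out[byte_index] &= ~(1 << bit)
    return out

tweaked = set_bit(DEFAULT_MASK, 1, 0, True)   # enable a (hypothetical) effect bit
print(tweaked.hex())                          # → 9e3f03
```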

Themes Registry Key

With my previous experience of VDI and virtualisation I knew this could be applied in the registry. It then occurred to me that the best course of action would be to configure a vanilla desktop with the theme I wanted and export the relevant registry keys. Once exported, I could import these into AppSense and have the System set them at logon.
However, there are a lot of settings and I really wanted to keep the amount of options to be configured to a minimum. I knew the Windows Aero Theme had the majority of the settings I required but wanted to keep a couple of features to provide users with the feel of Windows 7, i.e. ‘Font Smoothing’. Any features that would be detrimental to the performance, i.e. ‘Drag Full Windows’, should be configured as disabled.
With all these thoughts in mind I set about gathering the registry keys that were required. This involved using a registry comparison tool, such as Regshot, to compare the registry before and after making the change to the desktop appearance.
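What a tool like Regshot does boils down to diffing two key/value snapshots. A minimal sketch of that idea (the paths in the demo are illustrative):

```python
def registry_diff(before: dict, after: dict) -> dict:
    """Regshot-style comparison: report values added, removed and changed
    between two flat {key_path: value} snapshots."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return {"added": added, "removed": removed, "changed": changed}

before = {r"HKCU\Control Panel\Desktop\FontSmoothing": "0"}
after = {r"HKCU\Control Panel\Desktop\FontSmoothing": "2",
         r"HKCU\Control Panel\Desktop\DragFullWindows": "0"}
diff = registry_diff(before, after)
```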
The list of the registry settings that I required focused around three main areas:

In addition to the Windows Aero Theme I left the Basic Theme in place. I would recommend this because it allows Windows 7 to downscale desktop settings if there is a problem with visual effects, and it allows easy deployment of a basic theme if one is required.
Here’s a full extract of the Registry Keys
I’d previously mentioned that I’d managed to get the AppSense actions to run under the System account; this was to ensure consistency and to stop any user restrictions hindering the settings. Attached is an output of the AppSense EM configuration which can be easily imported into another policy. Just open in an EM console and copy/paste to a production policy.
Result: Settings configured as expected and a worthwhile dive into desktop appearance registry settings.

Conclusion

Although I managed to provide a workable and consistent solution, my investigation and evidence pointed to the fault being caused by an incorrectly configured base image and/or an incorrectly configured default user profile. Even though I checked both of these and ensured they were configured as expected, I still firmly believe one of them was the culprit.


Moving a Server from On-Premise to Windows Azure™

With the IT industry pressurising organisations to move their technology services into some form of cloud service, both technical and software licensing challenges present themselves.
In this blog post I’m going to cover the considerations needed, and provoke some thought on this topic, specifically using public cloud Infrastructure as a Service (IaaS) offerings with Microsoft Windows Azure™ as the reference example. It’s worth highlighting that these considerations also apply to anyone running under Microsoft Service Provider License Agreements.

I want to move an on premise workload to a Cloud Service Provider

To move a server workload to a cloud service provider you must ensure the associated operating system and application license(s) include ‘License Mobility’, or have mobility features built into their license, and, importantly, you must have Software Assurance.

What’s covered by License Mobility through Software Assurance?

At the time of publishing this blog post I reviewed the Microsoft Product Use Rights (MPUR) document; it provides a great level of detail around the License Mobility aspects and I’ve extracted the following key points relevant for this discussion:

Any aspects of Microsoft product licensing with respect to License Mobility require that products are licensed and their Software Assurance is up to date and active.

All products that are currently eligible for “License Mobility within Server Farms” and covered by Software Assurance are eligible for License Mobility.

These specifically defined products are also eligible for License Mobility through Software Assurance alone:

Microsoft System Center™ – all Server Management Licenses (MLs), including SMSE and SMSD

Microsoft Dynamics™ ERP products are not available through Microsoft Volume Licensing and are not activated online but have mobility rules that allow for similar use as License Mobility through Software Assurance when deploying in shared environments.

Windows Server™, the Windows® client operating system, and desktop application products are not included in License Mobility through Software Assurance.

Customers can exercise License Mobility through Software Assurance rights only with Authorised Mobility Partners; the list of Authorised Mobility Partners is available here.

After reviewing the MPUR document I’ve included the more popular operating systems and applications here as having License Mobility incorporated in their license:

I highly recommend you refer to the Microsoft Product Usage Rights list to confirm the official statement prior to moving your operating system and / or application to a public cloud. Where 3rd party licensing is part of the application ensure that the vendor provides an official statement of support too.

What can’t I move?

While moving workloads between on-premise and cloud services is not technically too complicated, it does have licensing implications, as I touched on above. The list below is by no means definitive, but it’s worth noting a few items that aren’t completely supported:

Microsoft Remote Desktop Services (RDS)

Microsoft only permits this in Remote Administration Mode, as the RDS CALs are not covered under License Mobility and cannot be allocated within Azure.

Citrix

XenApp and XenDesktop rely on RDS client access licenses.

VDI

RDS Client access licenses are not eligible.

Windows Client

RDS Client access licenses are not eligible.

Windows Client is not covered for License Mobility.

Any Microsoft product that does not have Software Assurance and does not include License Mobility in the Product Usage Rights.
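Taken together, the MPUR points above reduce to a short decision rule. The sketch below encodes them; the product flags are inputs you would determine from the MPUR document itself, and the category names are my own shorthand, not official terms:

```python
def eligible_for_license_mobility(product: dict) -> bool:
    """Encode the rules above: Windows Server, the Windows client OS and
    desktop applications are excluded outright; everything else needs both
    active Software Assurance and License Mobility rights in the PUR."""
    excluded = {"windows-server", "windows-client", "desktop-application"}
    if product["category"] in excluded:
        return False
    if not product["software_assurance_active"]:
        return False
    return product["license_mobility_rights"]

# Illustrative inputs only -- always confirm against the current PUR document.
server_app = {"category": "server-application",
              "software_assurance_active": True,
              "license_mobility_rights": True}
windows_server = {"category": "windows-server",
                  "software_assurance_active": True,
                  "license_mobility_rights": False}
```

A failed check here mirrors the ‘What can’t I move?’ list below: RDS CALs, the Windows client and anything without Software Assurance all fall out of the same rule.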

Summary

If you thought you could just pick up your current environment and drop it in the cloud, you may find that it’s not just technical issues that require consideration. I’ve touched on only a few areas above and you’ll notice none of them concern technology, choosing a service provider or an industry standard hypervisor; rather, they concern organisational readiness in terms of product version, product life-cycle and product licensing.
At Xtravirt we assist organisations with these types of challenges and no two projects are the same, so please contact us to learn how we can assist and support your journey of moving from on-premise into the cloud.


Introduction

This blog post has been developed to give some insight into the technical aspects of migrating services to a public Microsoft cloud solution, and how to bring it back on premise.
The focus in this article is on the Windows Azure™ Platform.

Source Architecture

For the purposes of this blog I am assuming that the source architecture comprises physical or virtual servers running a Microsoft Windows Server™ OS.

Strategy

Aligning the IT strategy with the business strategy is key to providing IT services that meet the demands of the business. The use of Enterprise Architecture tools and methodologies provides a solid foundation for mapping out your target architectures.

Financial Analysis

It is important to understand the cost models involved in both the source and destination architectures. Financial modelling should be conducted on a per-service basis, and both CAPEX and OPEX models are typically explored. Everything from staff costs, training, power, cooling, hardware, software and a wide range of other facilities needs to be included to understand the current costs. Modelling likely usage costs for public cloud resources is key to understanding whether moving a service to the public cloud makes good business sense.
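As a purely illustrative sketch of that modelling (every figure below is an invented placeholder, not real pricing), amortising the up-front spend gives a per-month number you can put next to a metered cloud estimate:

```python
def monthly_on_prem_cost(capex: float, amortisation_months: int,
                         monthly_opex: float) -> float:
    """Amortise up-front spend (hardware, facilities) and add running costs
    (staff, power, cooling) for a comparable per-month figure."""
    return capex / amortisation_months + monthly_opex

def monthly_cloud_cost(vm_hourly_rate: float, hours: float,
                       egress_gb: float, egress_rate: float) -> float:
    """Metered model: compute time plus data egress."""
    return vm_hourly_rate * hours + egress_gb * egress_rate

# Placeholder figures for a single service, not real Azure pricing.
on_prem = monthly_on_prem_cost(capex=120_000, amortisation_months=36,
                               monthly_opex=1_500)
cloud = monthly_cloud_cost(vm_hourly_rate=0.12, hours=730,
                           egress_gb=200, egress_rate=0.10)
```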

Licensing in the Public Cloud Domain

It would be wrong to assume that what you are licensed for on-premise will carry over to the cloud. Any service that you want to move will need to be licensed for “Licensing Mobility” and the cloud provider will need to be either Microsoft or an authorised License Mobility partner.
There are a number of on-premise systems that you can’t license for cloud usage. If an on-premise service contains one of these products then that is an immediate no-go for moving that particular service to the public cloud.

Assessment

Depending upon the scale and complexity of the source environment, a combination of automated and manual discovery and assessment techniques may be required. It’s highly recommended to run the Microsoft Assessment and Planning (MAP) Toolkit to analyse the environment. (Note: while MAP provides a good level of information, it’s recommended that additional steps are taken to qualify licensing violations.)

Connectivity

To be able to fully utilise a public cloud and have confidence in its capability to deliver the services you need, connectivity is key. Redundant private and Internet-based links are recommended. In addition, it is highly recommended to establish a site-to-site virtual private network. For this you will require a supported device (Microsoft Windows Server™ 2012 Remote Access Services, or a Cisco or Juniper device) and an available static public IP address.

Use Cases

One of the key points I would raise about IT in general is that there is rarely a single solution that fits all. The main areas where I would initially look at using Windows Azure™, from an Infrastructure as a Service (IaaS) point of view, are the following:

Microsoft’s IaaS offering on Windows Azure™ is in continuous development, and can be considered for critical line-of-business systems. Transitioning workloads to Windows Azure™ should be staged, by environment, as with any workload, server and/or DC transformation initiative.

Technical Feasibility

Once we have established that the service(s) is suitable from a business, cost and licensing point of view, we must also establish whether the current service state is in a supported configuration to be moved.

Technical Checklist

Before moving a server into the public cloud ensure only a single network card on the virtual machine exists and is set to use DHCP.

The maximum data disk size in Windows Azure™ is 999GB.

Where possible make sure your on-premise disks are VHD format not VHDX otherwise they will need to be converted before storing.

The OS disk in Windows Azure™ has a maximum supported size of 127GB, article here.

Template virtual machines must be SYSPREP’d prior to upload.

If the on-premise application uses the drive letter, D:, this needs to be re-assigned. Windows Azure™ assigns this drive for non-persistent storage.

You cannot migrate virtual machines with snapshots.

If you’re planning on moving domain controllers or creating new ones, the NTDS and SYSVOL directories need to be placed onto a Data drive (not D:) and Windows Azure™ Disk Caching must be disabled (further reading here)

Using System Center™ Application Controller is the easiest method.

As with any virtualisation of service the same rules apply, any physical dependency that can’t be virtualised will strike the service off the list (Dongles, HBAs, Smart Card Readers etc…).
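The checklist above lends itself to automation. The sketch below encodes those constraints as written (the limits reflect Windows Azure™ at the time of writing; the input structure is my own invention):

```python
def azure_iaas_ready(vm: dict) -> list:
    """Return the checklist violations for a candidate VM, per the
    constraints listed above."""
    problems = []
    if vm["nic_count"] != 1 or not vm["dhcp"]:
        problems.append("needs exactly one NIC, set to DHCP")
    if vm["os_disk_gb"] > 127:
        problems.append("OS disk exceeds 127GB")
    if any(size > 999 for size in vm["data_disks_gb"]):
        problems.append("a data disk exceeds 999GB")
    if vm["disk_format"] != "vhd":
        problems.append("disks must be VHD, not VHDX")
    if vm["has_snapshots"]:
        problems.append("snapshots must be removed before migration")
    if "D:" in vm["app_drive_letters"]:
        problems.append("D: is reserved for non-persistent storage")
    return problems

candidate = {"nic_count": 1, "dhcp": True, "os_disk_gb": 150,
             "data_disks_gb": [500], "disk_format": "vhdx",
             "has_snapshots": False, "app_drive_letters": ["C:", "E:"]}
```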

Technical Process

In-place service migration

There are two main methods for in-place migration of an on-premise service to the public cloud:

Utilise System Center™ Virtual Machine Manager to import the service into a virtual workload, move the data into the Virtual Machine Manager library, then utilise System Center™ Application Controller to copy the data to Windows Azure™.

Convert the source service using Disk2vhd (or another 3rd party tool) then upload using PowerShell 3.0/SCVMM/SCAC/the CSUpload command-line tool (further reading)

Advantages

Reduced Time to transition

Exact like for like copy

Agile Deployment

Low Cost

Disadvantages

Legacy data/configuration will be copied

System configuration may not be suitable for Public Cloud – e.g. Disk Size

Side-by-Side Service Migration

Outside of the in-place migration the other method is to build out the service architecture onto a new platform and migrate the service data/configuration and connectivity.

Advantages

Clean Environment

Only required data is copied across

Running two systems side-by-side can allow for a shorter service outage window

Disadvantages

Possible Higher Cost due to the running of two systems side-by-side

Potentially Greater Complexity

Possible licensing implications

End to End Process for converting a physical server and moving to Windows Azure™

The steps in summary:
The timeline above provides a high-level representation of the technical steps taken once a service has been deemed suitable for migration.

What if I want to go the other way?

Moving a virtual machine from Windows Azure™ to on-premise requires some consideration, but is achievable. You can’t directly move the virtual machine; you can, however, download the Virtual Hard Drives (VHDs) contained within the blobs in Windows Azure™ and then attach them to newly created on-premise virtual machines. You will either need to know some PowerShell 3.0 (the Save-AzureVhd cmdlet) or use a 3rd party tool to achieve this. Another way would be to use Windows Server™ Backup to back up the system and data volumes to another data volume and then copy that data down, but this is rather convoluted; using the PowerShell cmdlet is much simpler.

Making life easier moving forward

One of the cool features of System Center™ Virtual Machine Manager 2012 SP1 is the ability to create capability profiles. Out of the box three are provided:

Microsoft Hyper-V™

Citrix XenServer™

VMware ESX Server™

If you want to include governance within a hybrid cloud it would be wise to create a capability profile for your cloud provider.
The following is an example System Center™ Virtual Machine Manager profile that will ensure all virtual machines fit within the Windows Azure™ specification.

Having the capability mapped between Azure and your private cloud platform will provide a greater degree of flexibility when moving workloads between the two clouds.
To streamline this process, hardware templates corresponding to the Windows Azure™ VM options should also be created.
For example, if your test and development functions are utilising Windows Azure™, when a system is ready to move into production the virtual machine can be downloaded from Azure and placed into the private cloud.
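A capability profile is essentially a set of ceilings a VM must fit under. The sketch below checks a VM specification against an Azure-shaped profile; the vCPU/RAM/disk ceilings are illustrative of the Azure VM sizes of the era, not authoritative values:

```python
# Ceilings modelled loosely on the largest standard Windows Azure VM of the
# time (8 vCPU / 14GB RAM) -- treat these numbers as assumptions.
AZURE_PROFILE = {"max_vcpu": 8, "max_ram_gb": 14, "max_data_disk_gb": 999}

def fits_profile(vm: dict, profile: dict = AZURE_PROFILE) -> bool:
    """True if the VM spec stays within every ceiling in the profile."""
    return (vm["vcpu"] <= profile["max_vcpu"]
            and vm["ram_gb"] <= profile["max_ram_gb"]
            and max(vm["data_disks_gb"], default=0) <= profile["max_data_disk_gb"])
```

A VM built against such a profile in SCVMM can then move between the private cloud and Azure without resizing.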

Summary

“To get through the hardest journey we need take only one step at a time, but we must keep on stepping”
As this Chinese proverb states, the main aim here is to keep working on continual service improvement. As with many transformation activities, not everything can ascend to a modern state: some services will need to remain, some will be due for retirement, others will need to be upgraded, and a number won’t make sense to move.
Cloud IaaS features provide many advantages over CAPEX heavy data centre builds but it’s not necessarily a click of the heels to get you there. Hopefully this article has helped shine some light in areas that were hidden or confusing.


During a recent client engagement I was presented with the opportunity to upgrade a client’s VMware environment from vSphere 4.1 to vSphere 5.1, including their SRM estate, also from 4.1 to 5.1. As part of the due diligence of planning I collected and compiled a fair amount of resources, which can be found in a separate blog posting here. While the information in that blog posting is very helpful, I wanted to make this article about the experience rather than a step-by-step update guide.
For those of you who are not aware how the new vSphere 5.1 management infrastructure is laid out, VMware has introduced two new core components in this release: Single Sign On and the VMware Web Client. If you’ve been involved with VMware products in recent years you’ll recall the Web Client was available previously, but it has been substantially upgraded (hence why I’m referring to it as new) and is likely to be the only access method in the next version of vSphere.

The Upgrade Process

As part of my initial discovery I established there were 4 individual environments requiring upgrade; these were consistent in their build revision at vSphere 4.1 Update 2.
The steps I followed:

Upgrade vCenter 4.1 to vCenter 5.0 on the primary site. This is quite straightforward and I previously documented the process on my personal blog here.

Upgrade SRM 4.1 to SRM 5.0 on the primary site. The upgrade is extremely simple and the only advice I would give is to ensure you take a database backup prior to the upgrade.

Update the SRA software to the SRM 5.0 version for the primary site.

Upgrade vCenter 4.1 to 5.0 on the recovery site.

Upgrade SRM 4.1 to 5.0 on the recovery site.

Update the SRA software to SRM 5.0 for the recovery site.

Use the Test Failover feature within SRM to ensure all the components are communicating and functioning correctly between the two sites.

Install the SSO service on the primary site. In this deployment a separate virtual machine was dedicated to cover role separation and allow for future growth.

Make a note of the Lookup Service URL as this will be needed in the next steps.

Upgrade the Inventory Service from version 5.0 to 5.1 on the primary site. This is an extremely straightforward process and you will be asked to enter the Lookup Service URL mentioned in the previous step.

Upgrade vCenter from 5.0 to 5.1 on the primary site. Again, the upgrade is very straightforward and you will be requested to provide the Lookup Service URL during the installation.

Upgrade SRM from 5.0 to 5.1 on the primary site.

Update the SRA software to SRM 5.1 for the primary site.

Install the SSO service on the recovery site.

Upgrade the Inventory Service from version 5.0 to 5.1 on the recovery site.

Upgrade vCenter 5.0 to 5.1 on the recovery site.

Upgrade SRM 5.0 to 5.1 on the recovery site.

Update the SRA software to SRM 5.1 for the recovery site.

Use the Test Failover feature within SRM to ensure all the components are communicating and functioning correctly between the two sites.

Install the Web Client software on the Primary and Recovery sites.

At the time of completing this engagement the Web Client was not able to manage the SRM component so the last step was more in readiness for future compatibility.

Why 4.1, to 5.0 then 5.1?

The primary reason for needing to stagger the upgrade from 4.1 to 5.0 and then 5.0 to 5.1 was SRM. It’s possible to upgrade vCenter straight from 4.1 to 5.1, but doing so prevents the SRM component from being upgraded to 5.0 or 5.1. This is recorded in the SRM 5.1 Release Notes; the applicable excerpt is below:

Upgrade an Existing SRM 4.1.x Installation to SRM 5.1.0.1

Upgrade versions of SRM earlier than 5.0 to SRM 5.0.x before you upgrade to SRM 5.1.0.1.

“IMPORTANT: Upgrading vCenter Server directly from 4.1.x to 5.1 is a supported upgrade path. However, upgrading SRM directly from 4.1.x to 5.1 is not a supported upgrade path. When upgrading a vCenter Server 4.1.x instance that includes an SRM 4.1.x installation, you must upgrade vCenter Server to version 5.0 or 5.0 u1 before you upgrade SRM to 5.0 or 5.0.1. If you upgrade vCenter Server from 4.1.x to 5.1 directly, when you attempt to upgrade SRM from 4.1.x to 5.0 or 5.0.1, the SRM upgrade fails. SRM 5.0.x cannot connect to a vCenter Server 5.1 instance.”

Conclusion

In this engagement I found the upgrade process relatively straightforward, and I was fortunate that I did not have to use internally or externally signed certificates (apologies to those of you who do!). The success of the upgrade was very much down to the planning, and I cannot emphasise enough how much time should be spent investigating the current environment: the build revisions, its dependencies and which components depend on each other. VMware update their release notes with Knowledge Base articles as ‘known issues’ are discovered, so always check the text on the website rather than relying on the bundled versions within a download.
If you would like to talk to us about assisting your organisation with VMware vSphere 5.1 or VMware vCloud 5.1 based solutions, please contact us.


Recently during a customer engagement I was involved with deploying a technology from one of our partners, Atlantis Computing. Their diskless storage appliance, called ILIO, presents the RAM from a virtual machine to the host as an NFS Datastore. One aspect of the project required a little script intervention to assist with the migration of the ILIO controller and supporting virtual machines, which I wanted to share.
Now, in this blog I’m not going to reveal performance metrics and compare it to SANs at the same price point, as that has been done by many others, including the venerable Brian Madden in this blog post. However, it is possible to get approximately 30-35k IOPS when this storage acceleration is backed by fast DDR3 RAM, which isn’t to be sniffed at.

Configure for best Performance

Before jumping straight to the script I wanted to highlight a few items which can easily be overlooked but prove incredibly detrimental to the overall performance of the ILIO appliance if not correctly configured. Out of the box the ILIO controller needs a few small tweaks to enable it to reach those speeds but before you open the ILIO Center management application, check your physical and virtual hardware configurations first.
These are by no means definitive but core items to consider:

In the BIOS of the host hardware, make sure that Power Management is set to “Max performance”

Change the ILIO virtual machine appliance NICs to VMXnet3. A large performance increase can be observed using these over the E1000 NICs

Set a CPU reservation for 2x host CPU speed

Set a Memory reservation for the whole amount of memory presented to ILIO

Set CPU Hyperthreaded Core Sharing to “None”

These settings ensure that ILIO will always have the resource it requires without any concern for contention, delivering a fast local NFS Datastore that will outperform anything else at a similar price point. However, notice that word ‘local’. ILIO can theoretically be configured as a top-of-rack storage array, but Atlantis advise this is no longer a supported deployment method, and you will not see anywhere near the performance maximums without using multiple 10Gb/s NICs. In this deployment the environment was configured to use a 1Gb/s network.
In this deployment locally presented storage will be used, but this presents a couple of concerns that must be understood and mitigated:

Disaster Recovery – how do you recover from a host failure?

Maintenance – how do you perform a host upgrade with minimum downtime?

Disaster Recovery

Locating the ILIO controller and virtual machines on shared storage is imperative to mitigate host failure and take full advantage of VMware’s HA (High Availability) feature. Luckily, as we are using ‘Diskless’ ILIO, the shared storage doesn’t need to be particularly fast, as ILIO only reads from it when starting up and restoring a SAN snapshot.
Once invoked, VMware HA will only return to service the virtual machines (VMs) that were running at the point the host failed. As it’s likely you’ll only have a subset of your total VMware View desktop estate assigned to the failed host, you’d observe a number of orphaned VMs within VMware vCenter and View Manager, but at least there’d be enough VMs immediately available for those users who were already logged on to re-connect. Dealing with the migration of orphaned VMs is discussed further down.
At this point you may be wondering just how you recover data that sits on non-persistent storage. The answer is a feature called SnapClone, which is similar to a SAN snapshot. To use this feature you require a disk to be attached to the ILIO appliance, either VMDK or vRDM. The idea is that you deploy all the VMs you require (or are licensed for), shut them all down and perform a backup; this writes a copy of the data kept in RAM to the disk. When the appliance starts, it copies this data from the disk back into memory. Using this feature means you don’t have to manually clean up your ADAM database and hosts every time ILIO is shut down.

Maintenance

By following Atlantis recommendations, you will have assigned the second NIC of the ILIO appliance to an internal vSwitch. This means that if you want to put the ILIO controller and associated VMs onto another host, there is a fair amount of work required. Having run through this several times, I decided it would be much more efficient to just script it, so I have done just that.
This script will unprotect the replica, vMotion the ILIO controller, de-register and re-register the VMs on the new host and finally clean up after itself.
Before this script can be used, you will need to size your ILIO Controllers so that you can comfortably run two of them on a single host. Atlantis provide a calculator with the ILIO deployment tool to help you estimate the storage requirements of your desktops, however with the use of the ‘Floating Pools’, ‘Redirected Profiles’ and ‘Refresh on Logoff’ features, you can keep the storage requirements down even further.
To allow the ILIO appliance to connect to an NFS Datastore on any host within the VMware HA cluster, each appliance needs to be migrated and the NFS mount performed manually. However, if you would prefer not to have disconnected Datastores cluttering up your host(s), the script below could be amended to dismount and mount the NFS Datastore(s).
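Atlantis provide a calculator for the sizing mentioned above; as a rough illustration of the arithmetic only (the dedupe ratio, per-desktop write figure and overhead below are invented placeholders, not Atlantis guidance):

```python
def ilio_ram_estimate(desktops: int, writes_per_desktop_gb: float,
                      dedupe_ratio: float,
                      controller_overhead_gb: float = 4.0) -> float:
    """Very rough RAM-datastore sizing: post-dedupe write data per desktop,
    times desktop count, plus appliance overhead. Floating pools, redirected
    profiles and refresh-on-logoff all shrink writes_per_desktop_gb."""
    unique_gb = writes_per_desktop_gb / dedupe_ratio
    return desktops * unique_gb + controller_overhead_gb

# e.g. 100 floating-pool desktops writing 2GB each at a 5:1 dedupe ratio
needed_gb = ilio_ram_estimate(100, 2.0, 5.0)
```

Sizing with headroom like this is what allows two controllers to co-exist on one host during the migration described above.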

I’m often accused of being easily pleased by shiny buttons and new features, but the folks at VMware’s Horizon Mirage team have added some genuinely nice extras to the latest flavour of Mirage, now called Horizon Mirage…

Introduction

I’m often accused of being easily pleased by shiny buttons and new features, but the folks at VMware’s Horizon Mirage team have added some genuinely nice extras to the latest flavour of Mirage, now called Horizon Mirage to keep it consistent with the now bundled Horizon Suite. In terms of version numbering, we’ve reached the heady heights of version 4.0.
So, for those of you kind readers who took the time to read my previous blog item "Mirage: VMware reaches out to the PC…", this is somewhat of a follow-up piece, as the last one was primarily based on version 3.x. Version 4.0 adds some nice tweaks to the existing functionality, such as improvements in the Windows 7 migration wizard, but the most important feature for me is the introduction of Application Layers.
Much had been made of the concept of layering on Mirage prior to version 4.0, but for me, there was a key element that was missing that in some ways made Mirage a little cumbersome. Functionally, Mirage provided Base Layers that provide a template including the base operating system and a set of core applications, as well as a post-application script. This was (and still is) a great feature, providing a base standard that could be deployed, used for conformance and so on. Where it became a little fuzzy was application handling in that Base Layers alone lacked flexibility. If you wanted to provide any flexibility for applications, the choice was either multiple base layers or a 3rd party application delivery mechanism, such as Horizon Workspace (using ThinApp packages) or Microsoft SCCM.
Version 4.0 brings a further, somewhat different approach to the party – Application Layers. Application Layers are applied onto client Endpoints in addition to Base Layers. Essentially, they provide a means to deploy applications to clients as discrete components, separate from the OS-centric Base Layers.
From a manageability perspective, this is great – it now means that only a few base layers are really necessary – departmental or user variations can be dealt with in Application Layers.
So, how does this work…..?

Capturing an Application Layer

Fundamentally, capturing an Application Layer as a process is not too dissimilar to ThinApp or other software packagers. You provide a basic operating installation and put the packaging tool (in this case, the Mirage Agent) onto the client. Hit ‘Record’, install the application (or applications, if you want to capture more than one in the layer), then hit ‘Stop’. Mirage then scoops up all the differences between the client before and after the installation.
From a more-in-depth perspective, this takes a little more thought (doesn’t it always?). Firstly, you need to use the operating system that you plan to run the application on when deployed (so an application captured on Windows 7 can’t be deployed to Windows XP clients). You also need to be aware of the application’s requirements:

Will it register unique identifiers ‘per installation’ that can affect licensing or use? A good example is McAfee ePO Agent that has a unique GUID per client in the registry – you don’t want a hundred PCs registering the same GUID to the ePO server!

Are there application dependencies, such as Java? If so, do you want these in your Application Layer, or are they already in your Base Layer? In some cases, the latter may be more appropriate, but the recommendation is that the Client you’re packaging on should adhere to the Base Layer configuration as much as possible.

Whether the application is 64-bit or 32-bit only matters for compatibility with the Endpoint OS – a 64-bit OS can take either, while a 32-bit OS can’t take 64-bit applications.

Application Layers can handle the installation of drivers and Windows Services – often an issue (or at least not terribly convenient) with Application Virtualisation methods. So you CAN package iTunes…

It can’t deliver Windows OS components – such as .Net Framework, Windows Updates, Windows licenses, user accounts etc. In most cases, these can be covered through Base Layers though.

Disk encryption software and applications that change the boot record are only partially supported – that said, I’ve deployed applications onto machines with encrypted disks without issue.

So, once you’ve battered your way through this, you can go ahead! Generate a Windows machine with the Mirage Client installed, but do nothing else – don’t centralise it or anything. Instead, just confirm that the client is visible in the Mirage Admin console as ‘Pending’.
Next, go to Common Wizards and select Capture App Layer.
It’s a pretty straightforward wizard (selecting the client you want to capture the application on, selecting an upload policy, and where you want to put the layer). One thing that is quite nice is that it carries out a validation – so if the PC has any pending reboots, for example, it’ll tell you to deal with them first. Once the wizard is complete, the job is visible in the Console’s Task Monitoring screen. This is important, as you’ll need it later.
Meanwhile, the Mirage Client audits the endpoint’s current state…
This takes a little while, but then the fun can begin…
For this example, I’ve just installed two simple applications with default settings (VMware View Client and the VMware Horizon Agent). If there are client-specific operations that need to run after the layer is applied, such as an executable or a script that generates something unique on a specific client, batch scripts following the naming convention post_layer_update_*.bat can be placed in the capture machine’s “%programdata%\Wanova\Mirage Service” path.
I’d advise a reboot of the endpoint after everything is installed, even if one isn’t requested, just to ensure all necessary files are in place. Once these applications are installed, the capture process can be ended.
Remember my point above about the Task Monitoring screen on the Management Console? This is where we end our capture. Right-clicking the task and selecting the ‘Finalize…’ option launches a final wizard. It summarizes what applications (and components) are installed, then allows you to apply a name and version to the application layer. It’s also possible to update an existing layer here. Once complete, the final state is captured from the client, so completing the creation of the layer.
One thing to note is that the Mirage Client returns to pending after this is complete, leaving the client available for further use as required.
Next, we deploy our layer…

Deploying an Application Layer

This is pretty straightforward. Find the Update App Layers wizard on the Common Wizards page in the Management Console. This asks what you want to apply and what you want to apply it to. The target can be either a specific client (CVD – Centralised Virtual Device) or a collection. My choice would be to create a collection for each application layer, similar to SCCM. Collection membership can be defined using a variety of rules, including the user’s AD group memberships or physical attributes.
Once applied, any clients will immediately start applying the layer in the background, in the same manner as all other Mirage tasks.
The client will then prompt for a reboot - which can be delayed, but the next reboot will apply the layer.
After the reboot, the applications will be present and ready for use. Mirage, in the background, will run a further conformance check to make sure that all is well.

A Few Thoughts….

One common question is ‘how is this different from application virtualisation such as ThinApp or traditional thick application installations such as MSI packages via SCCM?’
When compared to ThinApp (or similar mechanisms), the intent is to provide a layering distinct from the endpoint’s operating system by encapsulating the application in its own bubble. This is a great approach for a large number of applications, but poses issues with others. If an application requires greater integration into the parent operating system, or even hardware, this is not straight forward (and in many cases not possible) due to this separation.
When an MSI package is deployed to a PC, regardless of the mechanism (from CD or via a delivery system such as SCCM), it is left purely to the control of the local Windows Installer service on the PC to manage the installation (or removal) of an application, not always 100% successfully. In general, traditional application installation on the PC provides the greatest integration into the operating system, avoiding the problems associated with application virtualisation. If the application has a complicated installation routine, however, the process can be fraught with problems and points of failure, possibly limiting the ease with which such a package installation can be automated.
Mirage provides a middle ground. The net result is similar to an MSI package installation in the way that the application layer deposits the binaries, drivers and registry settings into the operating system natively – applications can even be removed using the Windows control panel applet. By virtue of the way the layer is inserted into an endpoint rather than as a scripted installation, collating a layer for a complex multi-package bespoke application is much easier than manual disk swapping or horrendously complex nested scripts.
There is a degree of separation reminiscent of application virtualisation in that the layers can be removed or added independently of the Windows stack via the Mirage framework. From a repair perspective, application layers, being clearly denoted in this way, can be repaired more easily than traditional methods – simply by telling the client to Enforce Layers from the Management Console – returning them to the original state.
All this is not without its caveats. For example, there are concerns around conflicting file versions – a DLL of one version in one layer replaced by an incompatible version in another. Equally, some applications still won’t work with this method (MS SQL is mentioned in the VMware documentation, for example), so other options are recommended, VMware ThinApp being complementary in this case. It’s notable that VMware markets ThinApp alongside Mirage, much in the same way Microsoft markets App-V as one of numerous application delivery options. In this game, there is seldom one answer.
So Mirage Application Layering is pretty straightforward to implement and use. It offers a great enhancement to Mirage, providing a more focused way of deploying applications than the product was capable of previously.
If you’d like any assistance with a Horizon Mirage project, or simply want to learn more about it or any aspect of VMware Horizon, please contact us – we’d be more than happy to use our real world experience to support you.

Consulting throws up many challenges during the design and implementation stages but none more than the actual environment integration. Being at the ‘coal face’ invariably provides a point at which things don’t always go to plan and it’s this real world experience that we at Xtravirt excel at.
In this, my first blog posting, I’m going to discuss VMware snapshots and the possibility that you can recover from corrupted ones.
Particular events can create situations where a VM might start rebooting or shut down completely, and during this unplanned process one or more snapshots for that machine may get corrupted.
A common scenario for this kind of corruption is when:

A VM starts displaying the message in the console:

“The redo log of <Machine Name>.vmdk is corrupted. Power off the virtual machine. If the problem still persists, discard the redo log.”

Pressing OK to the message mentioned above causes the machine to display the message again

Powering-off the VM might not be possible and could be displaying the message in the console:

“The attempted operation cannot be performed in the current state”

Depending on the type of failure, recovery from such a situation is possible, at times with all data intact. The latter is especially true for backup solutions that take a snapshot as part of their process which then becomes corrupt just after it’s taken; there isn’t a lot of changed data at that point, so a complete recovery in this example is achievable.
I’ve recovered from such scenarios a few times and thought the process should be documented to help others. This blog posting came about as I felt that while different KB articles document the process in parts, I couldn’t find one that guides someone through the whole recovery process.
Some of the assumptions that I am making here are:

The failure is occurring on VM(s) with one or more snapshots, created either manually or via an automated mechanism eg: a backup solution

The virtual machine is displaying errors about inconsistent, corrupt or invalid snapshots

The person working through the issue is familiar with VMware operations and can deal with minor variations in the discussed scenario

The force shutdown process described is for ESXi 5.x hosts (while the syntax for other versions will differ, the process remains the same)

Virtual Machine Restore Process

Step 1: Save Virtual Machine Logs

The first action is to save logs for this VM; these can be found in the virtual machine folder on the datastore. This is to avoid losing potentially valuable diagnostic data in the event of a catastrophic failure. Due to the state the virtual machine is in, it might not be able to save vmware.log but the other log files should be copied directly from the datastore to a safe location.

Step 2: Shutdown Virtual Machine

This is to avoid having any further damage to the current snapshots before a copy of the machine is made. It’s possible for vCenter to lose control of the virtual machine in such situations and power operations might not work from the VI Client. If that happens, refer to “Force Virtual Machine Shutdown Process” section near the end of this posting for techniques to force the shutdown of the machine.

Step 3: Make a copy of the Virtual Machine folder

Once the virtual machine is shut down, make a copy of the virtual machine folder to another location on the same or another datastore. Name the folder something appropriate eg: <Machine Name>-Backup.
Note: A clone is not what is required and it probably won’t work in such a situation.

Step 4: Attempt to fix the snapshots

First check if the datastore has enough space remaining; snapshots do become corrupted if there isn’t enough space available. As there might be other snapshots in the background, estimate generously and if there isn’t enough space, use Storage vMotion to migrate machines off that datastore, to have a safe level of headroom available.
Once there is enough space available, try taking another snapshot, and if successful, try committing it. This operation might fix the snapshot chain and consolidate all data into the disks. If this process fails, then follow the remainder of the process to manually restore the machine from remaining snapshots.

Step 5: Confirmation of existing virtual disk configuration

Go into the VM settings and confirm the number and names of the existing virtual disks. As there are snapshots present, the disk(s) will be pointing to the last-known snapshot(s). Also, make note of the datastore the machine resides on.

Step 6: Command-Line access to ESXi server

Gain shell access to an ESXi server in the cluster which can see the datastore with the virtual machine in question. The ESXi server should also have access to the datastore where the repair will be carried out. As SSH may be disabled (by default), you may have to start the service manually.
Note: Seek approval (if security policy requires it) before this is done.
Once SSH is enabled, use PuTTY (or a similar tool) to connect and log in using “root” credentials.

Step 7: Confirmation of snapshots present

Once logged in, change directory to:

/vmfs/volumes/<Datastore Name>/<Machine Name>

Run:

ls -lrt *.vmdk

to display all virtual disk components.
Make note of which “Flat” and “Delta” disks are present. While it can vary in certain situations, the virtual machine’s original disks will, by default, be named after the virtual machine. If there is more than one virtual disk present, the additional disks will have “_1”, “_2” and so on appended to the base name. If there are snapshots present, they will by default have “-000001” appended to each disk name for the first snapshot, “-000002” for the second, and so on. Make note of all this information.
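The chain can also be sanity-checked from the descriptor files: each snapshot’s small text .vmdk records a parentCID that should match the CID of the disk beneath it. A minimal sketch of the idea, using a hypothetical descriptor written to /tmp (real descriptors sit alongside the -flat/-delta files in the VM folder):

```shell
# Hypothetical snapshot descriptor, for illustration only.
cat > /tmp/demo-000001.vmdk <<'EOF'
# Disk DescriptorFile
CID=fb183c20
parentCID=27ae6a23
parentFileNameHint="demo.vmdk"
EOF

# In a healthy chain, this snapshot's parentCID equals the CID line of
# the parent descriptor (demo.vmdk here); a mismatch suggests a broken chain.
grep -E '^(CID|parentCID|parentFileNameHint)' /tmp/demo-000001.vmdk
```

On a real host, run the grep against each descriptor in /vmfs/volumes/&lt;Datastore Name&gt;/&lt;Machine Name&gt;/ and walk the chain from the newest snapshot down to the base disk.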

Step 8: Repair of the virtual disks

Start with the highest set of snapshots and for each disk in that set run the following command, where <Source Disk> is the source snapshot:

vmkfstools -i <Source Disk> <Destination Disk>

Please note: <Source Disk> is the base .vmdk name, ie: not the one with -flat, -delta or -ctk in the name. <Destination Disk> is the new disk, where all disk changes need to be consolidated. The new name should be similar to the source but not identical; <Machine Name>-Recovered.vmdk is one example for the first disk. Keep the same naming convention throughout for all disk names eg: <Machine Name>-Recovered_1.vmdk, <Machine Name>-Recovered_2.vmdk and so on.
For example:

vmkfstools -i <Machine Name>-000002.vmdk <Machine Name>-Recovered.vmdk

for the first disk in the set, and:

vmkfstools -i <Machine Name>_1-000002.vmdk <Machine Name>-Recovered_1.vmdk

for the second disk in the same set, and so on.
Repeat the process for all disks in the snapshot set identified earlier in step 7. If the process is successful, move on to step 9.
If there is failure on one or more disks in the set, the following error message may be displayed:

Failed to clone disk: Bad File descriptor (589833)

If that error occurs, skip that disk and keep running the process for the other disks, as they might still be useful. However, the set will probably not be fit to run in production, so the next most recent snapshot set should be tried. Follow the same process until all disks in a snapshot set are successfully consolidated into a new disk set. If this is an investigation into the events leading up to the failure, additional sets might have to be consolidated in the same way. All remaining sets should consolidate successfully.

Step 9: Restoration of the virtual machine

Using the “Datastore Browser”, create a new folder called “<Machine Name>-Recovered”, either on the same datastore or another. Move the newly-created “Recovered” vmdk file(s) to the new folder. Also, copy <Machine Name>.vmx and <Machine Name>.nvram to the new folder and rename both files to become <Machine Name>-Recovered.*
Download <Machine Name>-Recovered.vmx to the local machine and edit it in Wordpad or similar. Replace all instances of <Machine Name>-00000x (where “x” is the last snapshot the machine’s disks are pointing to) with <Machine Name>-Recovered. Repeat for other disks if present e.g. _1, _2 and save the file. This should make the .vmx match all newly-consolidated disks. Rename the original vmx file in the datastore to <Machine Name>.vmx.bak and upload the edited <Machine Name>.vmx back into the same location. Once uploaded, go to the “Datastore Browser”, right-click the vmx file and follow the standard process of adding a virtual machine to inventory, possibly naming it “<Machine Name>-Recovered”.
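With shell access, the same search-and-replace can be done with sed instead of downloading the file. A sketch using hypothetical names (a stand-in vmx is created in /tmp so the edit can be demonstrated; substitute your own machine name and snapshot suffix):

```shell
# Stand-in for <Machine Name>-Recovered.vmx - hypothetical content.
cat > /tmp/MyVM-Recovered.vmx <<'EOF'
scsi0:0.fileName = "MyVM-000002.vmdk"
scsi0:1.fileName = "MyVM_1-000002.vmdk"
EOF

# Point each disk entry at the consolidated -Recovered disks, keeping the
# naming convention from step 8 (MyVM-Recovered.vmdk, MyVM-Recovered_1.vmdk).
sed -i -e 's/MyVM_1-000002/MyVM-Recovered_1/g' \
       -e 's/MyVM-000002/MyVM-Recovered/g' /tmp/MyVM-Recovered.vmx

cat /tmp/MyVM-Recovered.vmx
```

After the edit, every fileName entry should reference a -Recovered disk and no -00000x names should remain.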
Once in the list, edit the VM settings and disconnect the network adapter. It might require connecting to a valid VM network first but the main thing is that the network adapter should be disconnected.
Once done, take a snapshot of the VM and power the machine up. At this point, a “Virtual Machine Question” will come up. Answer it by selecting the “I copied it” answer. If the disk consolidation operation was successful for all disks, the machine will come up successfully. The machine can now be inspected and put into service or investigated for a problem.
Once operation of the machine has been tested and the decision has been made to bring it into service, shutdown the virtual machine, reconnect the virtual network adapter to the correct network and power it back up. After boot is complete, login to the machine to confirm service status, network connectivity, domain membership and other operations. If all operations are as expected then the restore process is complete and the snapshot can be deleted.

Force Virtual Machine Shutdown Process

First Technique: Using vim-cmd to identify and shutdown the VM

While connected to the ESXi shell and logged in as “root”, run the following command to get a list of all VMs running on the target host:

vim-cmd vmsvc/getallvms

The command will return all the VMs currently running on the host. Note the Vmid of the VM in question. Get the current state of that VM as seen by the host first, by running:

vim-cmd vmsvc/power.getstate <Vmid>

If the VM is still running, try to shut it down gracefully using:

vim-cmd vmsvc/power.shutdown <Vmid>

If the graceful shutdown fails, try the power.off option:

vim-cmd vmsvc/power.off <Vmid>
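The escalation above (graceful shutdown first, hard power-off only if that fails) can be wrapped in a small POSIX shell function. This is a sketch, not a supported tool – the DRYRUN flag prints the vim-cmd calls instead of executing them, so the logic can be followed (and tested) away from an ESXi host:

```shell
# Attempt a graceful shutdown of a VM by Vmid, escalating to power.off.
# With DRYRUN=1 the commands are only echoed (for illustration off-host).
stop_vm() {
  vmid="$1"
  run() {
    if [ "${DRYRUN:-0}" = "1" ]; then echo "would run: $*"; else "$@"; fi
  }
  run vim-cmd vmsvc/power.shutdown "$vmid" || run vim-cmd vmsvc/power.off "$vmid"
}

DRYRUN=1 stop_vm 42   # prints: would run: vim-cmd vmsvc/power.shutdown 42
```

On a real host, drop the DRYRUN flag and pass the Vmid noted from vim-cmd vmsvc/getallvms.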

Second Technique: Using ps to identify and kill the VM

Warning: Only use the following process as a last resort. Terminating the wrong process could render the host non-responsive.
While connected to the ESXi shell and logged in as “root”, list all processes for target virtual machine on the current host by running:

ps | grep vmx

That will return a number of lines. Identify entries containing vmx-vcpu-0:<Machine Name> and others. Make note of the number in the second column of numbers, which represents the Parent Process ID. For most of the lines returned for that machine, this number should be the same in the second column. One line belonging to “vmx” will contain that number in both first and second columns. That is the ProcessID of the target virtual machine.
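The column-matching described above can be automated. Using a hypothetical, simplified sample of ps output (the exact column layout varies between ESXi builds, so check yours first), the vmx master process is the row whose first and second numeric columns are equal:

```shell
# Hypothetical sample of 'ps | grep vmx' output, trimmed for illustration.
sample='35123 35120 vmx-vthread-5:MyVM /bin/vmx
35124 35120 vmx-vcpu-0:MyVM /bin/vmx
35120 35120 vmx /bin/vmx'

# The parent vmx process is the line where PID (col 1) == parent ID (col 2).
echo "$sample" | awk '$1 == $2 { print $1 }'   # prints: 35120
```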
Once identified, terminate the process using the following command:

kill <ProcessID>

Wait for a minute or so as it might take some time. If after that, the VM hasn’t powered-off, then run the following command:

kill -9 <ProcessID>

The method in this section will not result in a graceful shutdown, but it should terminate the machine, allowing the recovery to take place. If the machine still cannot be terminated, further investigation of the host will be required and the only option left will be to vMotion the other virtual machines off this host and reboot it.

Final Words

The beauty of virtualization is that one can test most service scenarios without actually causing impact to service and this process is no exception. For that reason, I would strongly recommend practicing this process in your lab environment so that you are well prepared in case disaster strikes.
If you would like to talk to us about assisting your organisation with VMware vSphere troubleshooting, please contact us.

Working within a Consulting practice presents new challenges with every project or engagement you’re involved with. Of course, the challenges aren’t always technical and as an outsider the learning points around a business hierarchy, internal process or people can be equally as absorbing. In this post it’s a technical challenge I’d like to share from a recent engagement that I found to be a real head scratcher, as usual the answer was obvious once I’d managed to fathom it out.

Setting the scene

Our customer had procured additional IBM HS22 blades to increase their compute capability, further to a data centre project we at Xtravirt had delivered the previous year. These extra blade servers were installed (by a 3rd party) into their existing ‘H Series’ BladeCenter chassis, and I was brought in to assist with the ESXi builds, configuration and environment assurance during the expansion.
For this article there’s no need to divulge the entire equipment specification other than the networking hardware. The BladeCenter had 2 x BNTs (Blade Network Technologies) installed each with 4 external ports connected, overviewed in a diagram later in this blog post.

The environment

The ESXi configuration took next to no time to apply as we’d previously introduced the concept of Host Profiles which, in this type of environment where high density compute is concerned, is ideally suited. VMware’s Update Manager and pre-defined baselines ensured the build and patching levels mirrored that of the current live environment. The new hosts were kept outside of the live production cluster so as not to disrupt any service provision and also to allow the customer to review and accept the new hosts before expanding out the cluster. All was ticking over very well until the testing started…

The head scratching moment

A test virtual machine was introduced to one of the new ESXi hosts to facilitate a pre-defined test schedule and report; a few Command Prompt windows were opened with a continuous ‘PING’ issued to different IP subnets to evidence the functionality of the network. Using vMotion, the virtual machine was migrated between the new hosts to ensure no loss of service; however, the testing revealed actual loss of network connectivity, but only on some blades.
Starting with the simple things first I checked the status of NICs, were they up or down? Realistically I had expected to see a uniform outcome given that a converged network infrastructure should be presented consistently to all blade servers within a single blade chassis.
In VMware vCenter this is what I observed for a server that was working (ESX61):
For a server that wasn’t working (ESX62):
Note: In these screen grabs it can be seen that the Observed IP ranges differ but I can qualify that the VLAN presentations were consistent on both servers.
The 4 active networks had ‘shifted’ up 2 vmnics and looking at the MAC Address order I realised these too were out of line.
The basic diagram below shows the rear of the BladeCenter chassis with 2 x I/O modules populated each with a BNT (IBM Blade Network Technologies).
This physical presentation translates to a logical presentation within VMware ESXi, the table below elaborates this. The working host, ESX61, conformed to the table.
The ESXi host with the shifted NIC presentation, ESX62, clearly showed a difference.

Further investigation

Taking a step back from the console I sketched out the rough path of how I understood the physical to logical transition took place. Aspects I felt that would be an instant win related to the IBM Blade BIOS and the BladeCenter’s BNT configuration for the blade slots. Before charging down either of those routes I simply swapped a working and non-working blade server between slots, the purpose was to prove whether the ‘fault’ remained to the slot or followed the blade.
What happened? The fault followed the blade, which meant the configuration of the blade was ‘suspect’. With the fault clearly related to the blade configuration, I compared the BIOS of a working ESXi host against one of the troublesome ones – both blades were identical.
I took a step further back, to the hardware procurement and installation, and soon established from the 3rd party installer that some of the blades had their 8 x NIC provision and configuration enabled using a local installation of the Emulex OneCommand software on a USB key, whereas the other blades had this applied using the Emulex OneCommand VMware vCenter plug-in. This was the key differentiator and highlighted that the blades configured within vCenter were reporting their NIC order incorrectly.
If I explain the requirement and need for Emulex OneCommand software it’ll start to pave a route toward the resolution and you’ll start to see why the difference followed the NIC provision.

IBM HS22 Blade NIC roles

The introduction of the Emulex 10GbE Virtual Fabric Adapter Advanced II (Part 90Y3566) daughter board to a blade not only increases the NIC quantity but also assigns 2 of them (vmnic4 & vmnic6) a personality of ‘iSCSI’, a minor inconvenience especially for a large blade installation. To change this default state the Emulex OneCommand software is required, it’s available as a standalone MS Windows application and also as a VMware vCenter plug-in direct from Emulex’s website.

Emulex OneCommand software

The MS Windows application can be installed locally on the blade (if it’s running a Microsoft operating system), on another networked computer (blades must be participating on the network to be administered), on a Windows PE disk or bootable USB key.
The VMware vCenter plug-in has to be installed on the vCenter server and then ‘Enabled’ within the vSphere Client. This plug-in will only be able to provide adapter information and the option to configure a blade’s NIC once the ESXi host has been joined to the network.

Why were the NICs out of order?

The issue with changing the NIC personality from ‘iSCSI’ to ‘NIC-only’ after a server has been configured to participate on the network is that the newly presented NICs will not cause the existing NICs to shuffle and re-order by MAC address. Instead, the 2 new NICs appear at the end of the list rather than in the positions you’d expect; this preserves the first-time NIC enumeration.
This VMware KB article describes the changing of vmnic numbers post PCI card installation; this was in effect what was happening – new hardware being introduced.
So you see, the centralised management through the application is great assuming all the devices are configured that way. In my scenario I had a mixture which is why the blades were consistently inconsistent.
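A quick way to spot this condition on a host is to compare the vmnic numbering with the MAC address order, since a first-time enumeration normally assigns vmnics in ascending MAC order. A sketch against hypothetical, simplified `esxcli network nic list`-style output (on a real host, feed in the actual name and MAC columns):

```shell
# Hypothetical vmnic-to-MAC mapping, for illustration only.
nics='vmnic0 34:40:b5:00:00:a0
vmnic1 34:40:b5:00:00:a1
vmnic2 34:40:b5:00:00:9e
vmnic3 34:40:b5:00:00:9f'

# Flag any adapter whose MAC sorts lower than its predecessor - a hint
# that NICs were added after the first enumeration, as on ESX62.
echo "$nics" | awk 'prev != "" && $2 < prev { print $1 " out of MAC order" } { prev = $2 }'
```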

The solution?

Initially I considered creating a new Host Profile as a workaround but soon realised the use of a Distributed Virtual Switch (DVS) meant I couldn’t leave the NICs in their mismatched state. The DVS Uplinks were already defined and active in the live environment so I had no option other than to re-install IBM’s OEM VMware ESXi on each of the blades where their NICs were incorrectly assigned. With all the NICs present the discovery during installation worked perfectly and the application of the Host Profile pulled them back into line without too much effort.
If you would like to talk to us about assisting your organisation with resolving an issue or providing a solution, please contact us.

This is the third and final post in a three part series discussing the default user profile in Windows 7 for VDI. If you haven’t already read my first posts, I’d recommend doing so first.
In the first article I covered off the default configuration options along with some battlefield tips, and also some food for thought discussion topics for consideration when preparing your ‘Master’ image.
The second article discussed and walked through the creation of the unattend.xml answer file, using the Windows Automated Installation Kit (AIK).
This final post covers the use of the unattend.xml file created in Part 2 to copy the default profile in your Windows 7 build, and preparing the image for Sysprep and deployment within a VDI infrastructure.
Microsoft have published a knowledge base article describing the process of customising the default profile, which can be referenced here.
So we have a Windows 7 virtual machine, built, configured, tweaked to our own/company specification(s). Now we need to convert this virtual machine to be our ‘Master’ image using the unattend.xml file.
From the template machine, create a folder on C:\ named deploy and place your unattend.xml file in this folder. The screenshot below shows this.
Once done, locate the Command Prompt shortcut in your MS Windows Start menu, right-click it to bring up its context menu, and choose Run As Administrator.
From here, we need to issue the command that will launch sysprep, generalize the machine, shut it down and call the unattend.xml file with our ‘CopyProfile’ parameter set. Enter the following:

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Deploy\unattend.xml

Note: This command assumes that your system drive is labeled ‘C’, you have created and placed your unattend.xml file in a folder named ‘Deploy’ and your xml file is called ‘unattend’.
Once the command is executed, Windows will launch and start the sysprep process and when complete the virtual machine will shutdown.
This process has created a virtual machine that will appear like an ‘Out of the box build’ with our customizations, however we need to be sure the process has worked before finalizing the image.
Power on the machine once again. You will notice that the VM console shows the computer being prepared for first use and you are presented with the Windows Setup screens again; run through these as you have done previously (Sysprep generated this).
Once Windows has finished running through the setup process, logon to your desktop. You should notice all your customizations are still in place.
To ensure the CopyProfile command has completed successfully open the following file:

C:\Windows\Panther\unattendgc\setupact.log

Search for the following:

[shell unattend] CopyProfile from C:\Users\%username% succeeded.

[shell unattend] CopyProfile succeeded.

The image is now ready to be shut down and have a snapshot taken, to be used as a master image within your desktop broker.
If you would like to talk to us about assisting your organisation with an End User Computing based solution, please contact us.

[Update] 5 July - vPi has now been released: http://xtravirt.com/product-information/vpi/
You may well have been living under a rock if you have not yet heard about the Raspberry Pi. For those of you who are unfamiliar with the Raspberry Pi, it is a slightly bigger-than-credit-card sized computer that is both cheap, and capable, developed by the Raspberry Pi Foundation. It runs a 700MHz ARM CPU and the latest revisions come with 512MB RAM. Storage is handled by an SD card, and you even get HDMI output as well as network connectivity right out of the box.
So, if you are a VMware evangelist and love the Raspberry Pi, what do you do? You create vPi of course! That was the thought of Xtravirt co-founder Alex Mittell when he decided to put together the vPi project. vPi is a modified version of the Raspbian distribution (which is based on Debian). It aims to provide a "plug and play" platform for administrators, consultants, or anyone else, to connect to any VMware vSphere virtual infrastructure and quickly and easily perform administration tasks, gather information, or run scripts against it. You could even think of it as a beefed-up mobile vMA appliance, with huge scope for customisation.
Here is a quick list of some of the features and utilities that are included with vPi:

Xtravirt are currently working on the vPi project with the aim to release it out into the wild as a community supported project. The hope is to get everyone involved and using vPi. We would love to see other virtualization evangelists, script writers and automation experts writing content and improving on the vPi project, hence the reason we want to keep it as a free and open initiative. Join us on the Xtravirt forums to discuss the vPi project, whether it be ideas, questions or anything else related!
For now, here are a few screenshots showing example usage of some of the utilities to whet your appetite.

Figure: Some of the VMware utilities built in to the vPi image
Figure: Some more utilities and examples run on the vPi
Figure: The Ruby vSphere Console fling being demonstrated
Figure: The excellent vGhetto script collection on the vPi and demo of the updater script
Figure: ESXCLI - the ARM compiled version in action on the vPi
Figure: Running the vGhetto perl health check script
Figure: vGhetto health check script report output after running from the vPi
We are working hard to get the image ready for release, so keep your eyes on the Xtravirt blog and Twitter for updates. Otherwise, feel free to create a new thread or join an existing discussion on the forums!


This is the second in a three part series discussing the default user profile in Windows 7 for VDI. If you have not already read my first post, I would recommend doing so, it can be found here.
In my first article, I covered off the default configuration options along with some battlefield tips and some food for thought discussion topics. This article will discuss and walk through the creation process of the answer (‘unattend.xml’) file, utilising the Windows Automated Installation Kit (AIK).
Microsoft has published a knowledge base article describing the process of customising the default profile, which can be referenced here.

So what is the answer file?

The answer file configures settings during the installation of Windows. In this scenario, it will be passed to Sysprep when building the master image, feeding the configured parameters into Windows Setup; that master image is then used to deploy virtual desktops.
I would like to point out that the answer file is a very powerful tool, able to set a huge number of configuration options for a Windows 7 deployment; however, we will be concentrating solely on the default profile scenario.

Let’s get started

I would suggest having a (virtual) machine with a clean install of Windows 7 available to use as our build machine; it can be deleted afterwards, well within the evaluation time allowed by Microsoft.
If you do not already have the Windows AIK for Windows 7, you need to download it. The AIK can be downloaded from here.
Once the AIK is downloaded and the ISO is mounted, the AIK splash screen should launch as shown in Figure 1.
Figure 1: AIK Splash Screen
From the options menu, select Windows AIK Setup. The Setup Wizard will then start as shown in Figure 2.
Figure 2: AIK Setup Wizard
Run through the wizard to complete the installation.
Once installed, launch Windows System Image Manager (SIM) from the Start Menu as shown in Figure 3.
Figure 3: Launch System Image Manager
Once SIM is launched, the following screen will appear as in Figure 4.
Figure 4: System Image Manager
Before creating the answer file, we need to copy a Windows Image file (WIM) to the computer running SIM from the installation media of the chosen flavor of Windows 7.
Unmount the Windows AIK ISO file and replace it with the Windows 7 media that will be used for our master image for VDI.
Locate the install.wim file and copy it to the machine you are working on.
The WIM file can be found in the following location:

<Media_Source> > Sources > install.wim

Open the image file: from the SIM File menu, click Select Windows Image.
Figure 5: SIM File Menu
If your media has multiple versions of Windows available, select the version relevant for the VDI deployment.
Figure 6: Image Selection
When prompted to create a new catalog file, click Yes.
Figure 7: Create New Catalog
Once complete, we need to create a new answer file. From the File Menu, click New Answer File and you’ll see the Answer File Pane populates similar to that shown in Figure 8.
Figure 8: Answer File Pane
You’ll see from the layout that the answer file is made up of the different phases of the Windows setup process, called configuration passes.
From the Windows Image pane, you can expand Windows Components, right-click the required component and add it to your answer file.
From the Windows Image Pane, expand Components, and navigate to amd64_Microsoft-Windows-Shell-Setup_6.1.7600.16385_neutral, right-click and select Add Setting to Pass 4 specialize as shown in Figure 9.
Figure 9: Windows Shell Setup
You should notice section 4 of the answer file will now be populated with our chosen configuration option.
Click on the configuration pass just added and the configuration options will appear in the Properties window.
From the dropdown menu, select true as the value for the CopyProfile field.
Figure 10: CopyProfile Configuration
This is the only change required to ensure that the currently logged on user profile is copied to the default user profile during Sysprep.
Save the answer file: from the File menu, select Save Answer File As and name the file unattend.
Once the file is saved, open it in a web browser and notice that CopyProfile is set to true.
Figure 11: XML Output
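For reference, a CopyProfile-only answer file saved from SIM looks broadly like this. This is a sketch: the component attributes shown are the standard Windows 7 amd64 values, but compare them against your own generated file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral"
               versionScope="nonSxS">
      <!-- Copies the configured local profile over the default profile -->
      <CopyProfile>true</CopyProfile>
    </component>
  </settings>
</unattend>
```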
In summary, this article has walked through the process of creating an unattend.xml file that can be used with Sysprep in a Windows 7 image deployment to copy a configured user's profile as the default profile for that image.
In Part 3 of this series, I walk through the process of finalising the image using Sysprep and ensuring it is ready to be provisioned by your virtual desktop broker of choice.
If you would like to talk to us about assisting your organisation with an End User Computing based solution, please contact us.

One of the great things about being an Xtravirt consultant is that we’re generally frontrunners for the latest and greatest technologies, and get to apply them in real world use-cases – and this is an excellent case in point.

Last year, VMware acquired California-based company Wanova. Wanova's key product is Mirage, a client-server software stack that brings the management, standardisation and protective measures usually found in a VDI solution to the thick-client Windows PC. Mirage complements VDI approaches or, in some circumstances, provides a better alternative.

How does it work?

It’s probably easiest to first discuss the components of the solution. It’s a very straight forward client-server affair made up of the following components:

Mirage Servers. These provide the processing muscle for the Mirage solution, where data is transferred to and from. Where multiple servers are used, they can share NAS storage and be presented as a load-balanced cluster using a load balancer (such as Windows NLB).

Mirage Management Servers. Where the Mirage Servers provide the muscle, the brains for the solution are provided using the Mirage Management server.

Mirage Management Console. This is an MMC based application provided for administration.

Mirage Web Server. The Web server provides end-users the ability to recover data from the Mirage solution in the event of loss on the client.

Mirage Client. The end point device requires the Mirage client – this provides all the functionality required in about 5MB.

Branch Reflector. This is a PC with the Mirage Client installed (usually on a branch site, hence the name) that is ‘promoted’ to act as a local cache for drivers and layers to be downloaded to clients.

So far, so good, and as you can see it’s quite a simple structure. But what are the mechanics?
The concept is pretty straight forward – Mirage works on the basis that a client is the sum of a set of layers. These boil down to:

User settings and data

Applications

Operating system

The Mirage Client is installed on an installation of a Windows desktop operating system (XP, Vista or 7, in either 32 or 64 bit flavours) and registered via the management server – this is referred to as ‘Centralising the Endpoint’. In English, this means Mirage takes an audit of the PC, processes what it needs to protect and ships it back to the Mirage Server estate for storage.
The auditing and processing phase allows Mirage to apply file- and block-level de-duplication to data held locally and at the Mirage Server estate, reducing the amount of data to be transferred. The data is then compressed and transferred back. The de-duplication/compression is the special sauce. Once a number of clients are registered in Mirage, centralisation drops to essentially just the user data: the first client uploads practically the whole PC, whereas subsequent clients do not need to upload common operating system and application files, or even some data files, e.g. where two users hold the same Excel document.
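As an illustration of the idea (not Wanova's actual algorithm), block-level de-duplication can be sketched in a few lines of Python; the block size and hash choice here are assumptions for the sketch:

```python
# Sketch: fixed-size block-level de-duplication of the kind Mirage uses to
# shrink uploads. Only blocks the server hasn't seen need to be transferred.
import hashlib

BLOCK_SIZE = 4096  # assumed block size, purely illustrative

def new_blocks(data: bytes, known_hashes: set) -> list:
    """Return only the blocks whose hashes are not already known to the server."""
    to_upload = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in known_hashes:
            known_hashes.add(digest)
            to_upload.append(block)
    return to_upload
```

With a populated `known_hashes` set (the "subsequent clients" case above), most blocks of a common OS install would already be present and never leave the client.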
Once uploaded, Mirage essentially becomes an incremental backup tool, uploading changes subject to policy (schedule, file types etc.). This backup capability leads, inevitably, to the ability to recover. It’s possible to rescue the entire installation, or just recover the application settings and data to another device, including virtual machines. It’s even possible to recover to a client running a newer operating system, opening up avenues for VDI migrations and hardware refreshes.
But were this merely a backup tool, then it wouldn’t be nearly so interesting to VMware. The next part is where it gets interesting. Mirage is able to define the differences between the operating system, applications and data. This capability allows Mirage to be used to manage these abstract layers.
By taking a client PC with the Mirage Client software and registering it instead as a Reference Client, it is possible to establish standardised layers; base Layers containing operating system and additional software that might be required. These layers can serve a number of purposes:

Ensuring a Client is returned to a standard is a useful measure for ensuring a PC maintains a baseline configuration, not to mention fixing faults on clients.

Upgrades: as base layers can be version controlled, it is possible to use them for deploying patches, additional applications, or the biggest item of all, an in-place migration from Windows XP or Vista to Windows 7.

It should be noted that Mirage Servers can be provisioned with hardware drivers; this is to provide support when applying an upgrade to a client.
As mentioned above, in a multiple site scenario, Branch Reflectors can be used to further lighten the load on WAN links. While these machines don’t cache uploads from the client, they are able to act as a cache for layers and drivers, so when a client requires these, they don’t need to pull them over the WAN link. Consider them in the same light as, for example, an SCCM distribution point.

Windows 7 Migration

With Mirage, it’s possible to essentially ‘slide out’ an old operating system and ‘slide in’ a Windows 7 installation. Again, it boils down to the layers. By defining OS/application layers, Mirage can substitute one for another.
The Migration wizard takes the audit of a given target PC and packages up the files required and downloads them to the PC, again, with de-duplication and compression. All this happens in the background. The client is smart enough to throttle bandwidth to reduce impact on the user.
When the data is in place, the user is prompted to reboot the PC. Mirage engages its Pivot process.
The Pivot process basically swaps the legacy operating system and its associated applications and boot loader and swaps them for the downloaded layer. The legacy install gets dropped into a Windows.old folder.
During the reboot process, the PC starts up with a splash screen explaining to the end user what's going on. Under the hood, it's loading the correct device drivers and using the Microsoft User State Migration Tool (installed on the server and fed down during the migration) to switch the user profile and data over to the Windows 7 install and re-connect the PC to Active Directory. Another reboot and the client is ready for use. From the prompt to the last reboot takes around 20-30 minutes.

Migration Observations

There are a number of things to consider when planning an OS upgrade using Mirage.

Base Layers can include more than just the operating system; they can include applications, so it’s possible to establish multiple Base Layers to cover different departments, for example

When developing Base Layers, consider them in the same way as you would with any imaging approach. For example, applications that hard-code local identifiers (application GUIDs) should be avoided or at least managed. Fortunately, Mirage can be configured with a post-migration script that can be used to install such applications cleanly

Some device drivers include associated software, for example, SoundMax audio adapters leave legacy software components that tend to generate a ‘missing hardware’ warning when the generated layer is deployed to different hardware. It can be worth removing the driver/software prior to recording the Base Layer

Disk partitioning. Separate boot/system partitions aren’t supported, at least in terms of upgrading operating system or recovering the whole system

Disk encryption. The Mirage agent can be installed and will protect a system with encrypted disks as the client software runs within the operating system layer. However, the disk must be decrypted in order to do any operating system layer work

So where does Mirage sit in a VMware End User Compute world?

So, given it’s a VMware product, where does Mirage fit in VMware’s end user suite? The answer is not quite straight forward at this time.
If you're looking at an estate predominantly formed of roaming users, Mirage can be a better fit than VDI, even compared with roaming virtual desktops. Obviously, straightforward VDI needs a connection, but even View Local Mode has limitations that make Mirage attractive, mainly from a performance and licensing-efficiency standpoint; for example, a laptop isn't left running two operating systems. There are also the practicalities of checking roaming desktops in and out to consider. Mirage is very WAN tolerant thanks to its ability to de-duplicate and compress data, not to mention stop and resume transfers, so it suits road-warriors quite well.
Within an estate where roaming users aren't a major consideration, Mirage is complementary. It can be used to ensure that thick clients adhere to a standardised software stack, while also being used as a means to migrate users into and out of a virtual environment. It's also useful in legacy environments where users have had carte blanche to keep data on 'their' PC, even beyond 'My Documents'; it can handle protecting and restoring user files even in this circumstance.
With respect to ThinApp and Mirage, the layering approach fits nicely, especially with Mirage 4.0's improvements in application-level layering. A layer could include ThinApp packages, providing an alternative means of deploying them.
Overall, Wanova was a crafty acquisition on VMware’s part. Mirage is a product that gives them footprint right down to the client device as a management and protective tool, but one that plays well both with a VDI implementation, or as an alternative in some circumstances.
If you would like to talk to us about assisting your organisation with a VMware Mirage based solution, please contact us.


Recently at a customer site I worked on deploying a XenDesktop virtual infrastructure utilising Wyse T10 Thin Clients. One issue I discovered was locally attached USB printers. You'd think it would be a straight forward enough task - install the drivers in the Windows image and attach the USB printer. This is certainly what Wyse advise you should do and as long as it’s supported by Microsoft and Citrix then you should be good to go. However, this certainly wasn’t happening no matter what USB printer I connected.
Before I dive into how I got this working, it's worth explaining a little history of USB devices and in particular what a VID/PID is, because we'll need to understand it for portions covered later. All USB devices have a Vendor ID (VID) and a Product ID (PID) as their identifier, much like a MAC address for a network card. A VID is a 16-bit value that identifies the manufacturer of a USB device. A PID is also a 16-bit value and is used to identify the particular product from the manufacturer. Together these form a 32-bit code for each and every USB product.
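As a quick illustration of how the two 16-bit values combine into one 32-bit identifier, here is a small Python sketch; the function name is mine, not a Wyse or Citrix API:

```python
# Sketch: pack a USB Vendor ID and Product ID into the combined 32-bit
# identifier described above (VID in the high 16 bits, PID in the low 16).
def usb_device_id(vid: int, pid: int) -> int:
    """Combine two 16-bit VID/PID values into one 32-bit device identifier."""
    if not (0 <= vid <= 0xFFFF and 0 <= pid <= 0xFFFF):
        raise ValueError("VID and PID must each fit in 16 bits")
    return (vid << 16) | pid
```

For example, the printer from the Wyse log later in this post (VID 090c, PID 1000) packs to 0x090C1000.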
When I connected a USB key to the Wyse client it would be identified by the device and passed through to the endpoint, but when I connected a printer it wouldn't. This initially led me to believe that USB printer redirection on the Windows client wasn't enabled, in this case the USB printer class. This is handled by the Citrix VDA, and USB classes can be added or removed via the Citrix XenDesktop GPO settings. So I checked the registry of the endpoint, specifically in two locations:

The class for printers is 08h, which would fall under 'ALLOW: # Otherwise allow everything else'. In theory, then, this should work.
Citrix have published a knowledgebase article discussing USB configuration here, CTX132716.
In addition to the registry settings there is the Citrix GPO setting 'Client USB device redirection', found under User Configuration in the GPO. When I checked, it was explicitly set to 'Allowed', as per the screenshot below.
So with the endpoint configuration all checked and correct the next logical step was the Wyse client itself. I considered that maybe the Wyse client wasn’t passing the USB printer device through to the endpoint. Thankfully the Wyse system event log provides some good detail and after plugging the printer in the log provided the detail of the VID/PID of the printer.
Once the device is connected, the Wyse client logs that the USB device has been found, along with the complete ID of the device. The first hex block is the VID ('090c' in the example) and the second hex block is the PID ('1000').
If you’re unable to get this information from the Wyse client you can also obtain it from device manager on a Windows client. In the example below the USB mouse attached to my laptop is used.
Now, having all the relevant information to hand, I needed to switch my focus to the Wyse client and force redirection of the printer on it. This is completed via the Wyse Device Manager (WDM), specifically within the INI file used to configure the device. As I didn't want to apply this change to all devices, I could create a MAC INI file tied to the Wyse client, or a USER INI file tied to a specific user. As the printer is locally attached, I created a MAC INI file with the following line:

Device=vusb ForceRedirect=0x04f2,0x0112,0x03,0x01,0x01

(The hex string after ForceRedirect= is the exact VID/PID of the device)
The Wyse device was rebooted to force a re-negotiation with the WDM and discover the INI file. Remember if you use a MAC based INI file and the Wyse terminal is swapped out due to failure you’ll need to create a new MAC INI file for the new terminal.
If you're using a global wnos.ini file for the Wyse devices, you'll need to use Include=$mac.ini for a MAC-based INI file or Include=$un.ini for a user-based INI file. In addition, the files need to be in the 'inc' folder for MAC INI files and the 'ini' folder for user INI files, under the wnos folder.
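Pulling that together, the folder layout on the file server looks broadly like this (the MAC address and user name in the file names are hypothetical examples):

```
wnos/
    wnos.ini                ; contains: Include=$mac.ini (or Include=$un.ini)
    inc/
        008064aabbcc.ini    ; MAC INI: Device=vusb ForceRedirect=...
    ini/
        jsmith.ini          ; User INI, used with Include=$un.ini
```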
Now when I plugged the printer into the Wyse terminal it was discovered as you would expect from the usual Windows process with the drivers installed as per normal. If I wanted to use a pooled/stateless image I would have to ensure the drivers were already installed in the master image for the printer driver installation to successfully complete.


If you deploy Citrix XenDesktop and use Machine Creation Services (MCS), you'll need to create a Host Connection so the Broker can access the hypervisor. The connection is made using either Microsoft's SCVMM for Hyper-V, the SDK for vSphere, or a direct connection to XenServer. Host Connections are used by MCS to provision machines.

A Quick word on PVS vs. MCS

Choosing whether to use MCS or Provisioning Services (PVS) is a topic for another day, as there are benefits to both. I think Citrix did not do MCS any favours in the XenDesktop 5 documented FAQs by stating, “Until additional scalability information is available, Machine Creation Services should be used only for small to medium size VDI deployments”. This caused a poor initial perception of MCS and led many to believe that PVS was the only viable solution for large VDI deployments. I've since heard from Citrix Professional Services that MCS has been, or is being, tested to the same limits as PVS.
MCS scalability is related to the scalability of:

Host Connections

Should you decide that MCS fits your requirements, creating Host Connections allows you to define the network that the Virtual Desktops will be placed on and the storage the machines will sit on. Tying both networks and disk together in this way can offer mixed benefits:
For example, in VMware View you have to create multiple snapshots of your master image to provision machines to differing networks, whereas in XenDesktop you just define the host connection to use when deploying your catalogue. However, in VMware View you can dynamically select the storage when provisioning a pool; in XenDesktop you have to select the appropriate Host Connection and ensure it has the correct network assigned.
You can define a Host Connection to use local or shared storage; which way you go depends on your configuration. But beware: you'll need to make sure your capacity plan/design is adhered to, as it's possible to reuse storage and networks across multiple host connections, so keep an eye on this.
You need to take care to ensure that Host Connections are balanced and don't conflict with other host connections and thereby compete for resources. That is, the number of desktops supported by a host connection should not exceed the number of IP addresses available, or the performance of the disks assigned to it; if you do reuse a network or a set of disks between multiple Host Connections, the capacity of those Host Connections will be contended.
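That sizing rule can be expressed as a simple pre-flight check. This Python sketch is purely illustrative: the function and the IOPS-per-desktop figure are my own assumptions, not Citrix guidance:

```python
# Sketch: sanity-check a planned Host Connection against the two limits
# discussed above - available IP addresses and the disk's IOPS budget.
def host_connection_ok(desktops: int, free_ips: int,
                       max_iops: int, iops_per_desktop: int = 10) -> bool:
    """True if the desktop count fits both the IP pool and the IOPS budget."""
    return desktops <= free_ips and desktops * iops_per_desktop <= max_iops
```

Remember that if two Host Connections share a network or a set of disks, `free_ips` and `max_iops` must be divided between them, not counted twice.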

A final few words

Give your Host Connections meaningful but short names. The Host Connection name is used when deploying base disk images; if it's too long for your file system, provisioning may fail when multiple pools are deployed, as the truncated names will conflict. I use a format of “Host-Cluster-VLAN”, e.g. “VC01-CLU1-1234”


I’ve managed to get a few VDI projects under my belt now ranging from small 50 seat deployments up to 4000 seat cross country deployments. Some have been from cradle to grave, whilst others have been bit part roles.
I've seen multiple issues in some deployments, many of which point back to the default profile. One common theme I've noticed (especially with Windows 7) is that people often underestimate the importance of the Windows 7 default profile in their master image. Now don't get me wrong, there are some brilliant user persona products on the market, whether built-in solutions or standalone products, but I see these predominantly as 'enhancers'.
Over my next three blog posts I’m going to describe how best to create a default user profile. Starting with this post I’ll discuss the initial configuration options, what should be done in the default profile and how it should be done.
The second post will concentrate on creating the unattend.xml file containing the 'CopyProfile' parameter, utilising the Windows Automated Installation Kit.
The third and final part of this series will cover customising the default user profile in the unattend.xml file.
Microsoft have published a knowledge base article describing the process of customising the default profile which can be referenced here.
However many of my customers still experience issues getting this right first time, so the aim of these posts is to breakdown each step and help walk you through the process.
Firstly, I must start by re-iterating a point Microsoft state in their KB article: the only supported method for customising the default user profile is by using the Microsoft-Windows-Shell-Setup\CopyProfile parameter in the Unattend.xml answer file, which is passed to the System Preparation Tool (Sysprep.exe). From Vista onwards this is the only supported method, unlike in the days of Windows XP where it was acceptable (and supported) to simply copy a temporary profile over the default.

Step 1: Configuring the default profile

Always ensure that when configuring the default profile you use a local Administrator user account; the process will not work with a domain user account.
Ensure you remove all user accounts except the built-in Administrator account from the template machine. Note, any service accounts can be added in via GPO at a later stage.
Start to configure any settings you want managed in the default profile. I’m not going to go into too much detail here, as each use case is different. It’s worth pointing out that VMware, Citrix and Quest have their own best practice guide, scripts and tools for customisation, which should be followed if applicable on that platform.

I would highly recommend reading each guide and understanding what each change does, rather than just applying all changes or running the recommended scripts, and deciding whether those changes are applicable to your business case.
For example, one of the changes the VMware View script makes is disabling the themes service within the image. This is fine, if your requirements are for the classic interface but from experience, most companies want the Aero/Orb theme for their users. A cut back version yes, but the whole concept of pushing VDI is bringing enterprise desktop environments into the 21st century, not pinning users back with a classic windows theme not dissimilar to Windows 98.
If utilising a persona management application, you may want to hold back some changes and apply them from this level so they are easier to manage even if the master image changes, or different business units require different optimisations. Consideration needs to be given here, to ensure that you are not applying too many changes at logon or startup, which could have a negative impact on the performance of your VDI infrastructure.
Finally, I would strongly suggest making use of snapshots throughout your image creation for failback purposes. So often people rush through making a number of configuration changes, find something doesn't work further down the line, and have to roll back a whole heap of changes to discover the issue. Document each configuration change/snapshot so you can quickly and easily roll back to a point at a later stage. I tend to get an image to a production-ready state, then clone off the final snapshot as a new virtual machine. That way I have a clean master image, yet can still go back and use my original template again if required.
So, to close - decide what your master image contains, discuss with each use case owner and determine at what level the persona management works and importantly that you are meeting the customer requirement. After all, getting the base image architecture wrong will pave the way for all manner of issues as the applications are deployed.
Read Part 2 in the series. If you would like to talk to us about assisting your organisation with an End User Computing based solution, please contact us.


The title of this post is a quote delivered by Steve Herrod, CTO R&D of VMware during his keynote speech in the General Session at VMworld Barcelona. To put this quote into context the area he was discussing focused around VMware's Horizon product suite.

What is the Horizon suite?

Broadly speaking, VMware have created an architecture that attempts to deal with user applications and user data while allowing them to be presented through any device.

What’s the point of creating this architecture?

Organisations are now faced with many service provisioning dilemmas. Do they:

1. Provide all users with a standard desktop and/or laptop with internal security and compliance?

Problem/Disadvantage: This presents internal IT departments with a headache; device management (by this I'm referring to hardware, OS and applications) is only part of it. Today's users are far more technically literate and want to be able to access social networks, online shopping and television feeds, even while at work. If they can't do this they'll certainly try to find a way or workaround to do so.

2. Allow users to bring their own device into the organisation, while still being able to use a standard desktop and/or laptop?

Problem/Disadvantage: This is Point 1 (above), plus users’ devices introduced to the network are not managed by IT, have an unknown security policy and an unknown configuration, yet still need to function on the corporate network. The corporate network mustn’t be compromised by these devices, but nor must it be a barrier to them functioning. These are incredibly high risks.

VMware’s approach

Quite simply, the Horizon suite takes on the role of a mediator or broker. Users connect into their corporate infrastructure and are presented with applications that suit their device and need. Governed by management policies, applications are only presented to the relevant users and groups, making use of single sign-on via directory service pass-through authentication.
The diagram here, a cut-down version of a VMware original, shows users and their devices meeting the broker; the broker reviews the request and provides the service(s) defined by pre-defined user rule sets.
During the VMworld keynote presentation, the idea that applications were the sole purpose of this device enablement was dispelled. A demonstration showed that a broken laptop didn’t mean a user was out of action until it was fixed: a user managed through Horizon is able to use another device immediately and continue to work. Admittedly the choice of device could limit productivity, but in the example shown the user lost their MS Windows laptop and was able to continue working on their Apple MacBook. This may seem too good to be true, but VMware do have many multi-platform ‘type 2’ hypervisors and application virtualisation techniques, so the groundwork had already been completed.
Another demonstration was aimed specifically at the use of corporate managed applications on an Apple iPhone; until now only VMware Mobile offered this feature. The audience witnessed a corporate application running on a personal mobile device in isolation from other running applications. The isolation, as demonstrated, prevented sensitive data from being copied to non-corporate applications.
Even non-MS Windows tablet devices feature in the Horizon solution, using the VMware View Client for a MS Windows VDI session. The View Client isn’t new; the challenge here is dealing with device-native gestures, passing them through to the View Client and preventing them from being cumbersome in navigation or within applications. Rather than battle against them, VMware have introduced their own gesture layer, User Interface Virtualisation. This feature allows typical swipe, tap and tap-and-hold gestures, but they’re controlled by the interface and passed directly through to the VDI session operating system. Additional features on top of this provide quick access to application switching and tricky tasks such as selecting, copying and pasting text. Ideal if a user were to swap between device manufacturers.

Wrapping up

While I’ve only lightly touched on the Horizon product suite, it’s clear to see that VMware’s direction is very much towards End User Computing (EUC) as opposed to dealing with just Virtual Desktop Infrastructure (VDI). These technologies are very often muddled and considered to be one and the same but, as I hope you can see, they’re clearly not.
For those of you familiar with Brian Madden (http://www.brianmadden.com) and his prolific blog posts, you’re probably aware of his book, “The VDI Delusion”; if you’re not, it’s worth a read. It reminds you of technologies past, market statements of intent and vendors promising their utopian solution. Of course, a utopian solution doesn’t exist. There are many possibilities and vendor technologies to assist with EUC, and it’s all about understanding what is best for the requirement.

Over the past few months I have been a part of a number of Auto Deploy designs and Proof of Concepts. This has allowed me to really learn the feature that was introduced as part of vSphere 5 and is now updated with vSphere 5.1. I also spent a fair amount of time learning and practicing with the feature in preparation for my VCAP5-DCA, which I sat recently. From these engagements and my studies, I have picked up a fair amount of tips and tricks for Auto Deploy and accumulated quite a few great resources to help people looking to learn it and deploy it within their environment.
The tips and tricks I have learnt and covered in this article are applicable to versions 5.0 and 5.1 of vSphere.

Tips and Tricks

Host Profiles

I have made this the first of the tips mainly because, while Auto Deploy is a relatively simple solution, there are advanced settings required in host profiles to ensure your stateless ESXi hosts connect to the network. The Host Profiles applied to your hosts/cluster are extremely important to ensure your hosts are available for use in the shortest possible time.
Below are some of the advanced settings I had to apply for my stateless Auto Deployed hosts to work. These are over and above the obvious settings of configuring a syslog server, a scratch location and pointing your hosts at the network core dump collector.

This setting allows you to configure where the Auto Deployed ESXi host retrieves its host name from.

Set it to the ‘Obtain hostname from DHCP’ option to ensure the ESXi hosts obtain their names via their DHCP static entries.

VMware-FDM driver

This should not need to be a tip or trick as it should be obvious, but it seems many people forget to add this to their Image Profile. If you do not, when you add the Auto Deployed ESXi host to an HA cluster the host will not have the HA driver installed and therefore cannot participate in HA failovers. Adding it is relatively simple, as shown below:
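The original command listing wasn’t reproduced here, but the process is broadly as follows (the profile name, vendor and depot paths below are placeholders; the vSphere-HA depot is published by your vCenter Server):

```powershell
# Add the ESXi offline bundle and the vCenter HA depot as software depots
Add-EsxSoftwareDepot C:\Depot\ESXi-5.1.0-offline-bundle.zip
Add-EsxSoftwareDepot http://vcenter.example.local/vSphere-HA-depot

# Clone the standard profile and add the HA agent (FDM) package to it
New-EsxImageProfile -CloneProfile "ESXi-5.1.0-799733-standard" `
    -Name "ESXi51-Custom-FDM" -Vendor "Custom"
Add-EsxSoftwarePackage -ImageProfile "ESXi51-Custom-FDM" -SoftwarePackage vmware-fdm
```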

With the commands above you have now created an EsxImageProfile and added the VMware-FDM driver to the ImageProfile. Simple but very important. The above steps are just an excerpt of creating an EsxImageProfile and are not all the steps you need to follow to create a whole EsxImageProfile for the usage by Auto Deploy.

Saving your EsxImageProfile

After you have spent a fair amount of time creating an EsxImageProfile, it is good practice to save it, because the Image Profile is only held in the PowerCLI session and is lost when you exit the command line. There are two formats you can save/export your EsxImageProfile to: an ISO or a bundle. You will also need to specify where you want to save the exported ImageProfile so you can use it for another DeployRule, or distribute it to another location to ensure consistency across all your images.
Continuing from the steps we followed above to add the VMware-FDM driver to the Image:
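The original commands weren’t reproduced here either, but with the same placeholder profile name the export would look roughly like this:

```powershell
# Export the image profile as an offline bundle (reusable by Image Builder)
Export-EsxImageProfile -ImageProfile "ESXi51-Custom-FDM" `
    -ExportToBundle -FilePath C:\Depot\ESXi51-Custom-FDM.zip

# Export the same profile as a bootable ISO (useful for stateful installs)
Export-EsxImageProfile -ImageProfile "ESXi51-Custom-FDM" `
    -ExportToIso -FilePath C:\Depot\ESXi51-Custom-FDM.iso
```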

With these two commands, you have now exported your custom-built ImageProfile to a .zip bundle and .ISO file.

Setting the Execution Policy

This is another basic piece, especially if you use PowerCLI daily, but if you are new to PowerCLI it may not be as obvious. If you don’t set your execution policy then none of your Auto Deploy cmdlets will be available for you to run in PowerCLI. When you open PowerCLI, you will most likely see a warning/failure message.
Enabling the Execution Policy is very simple.
Open PowerCLI and type:

Set-ExecutionPolicy RemoteSigned

When it asks if you want to change the Execution Policy, type Y to confirm that you wish to change it, as shown below.
The AutoDeploy Cmdlets will now be available to use.
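With the cmdlets available, a typical next step (sketched here with placeholder names for the image profile, host profile, cluster and address range) is to create and activate a deploy rule:

```powershell
# Map hosts in an IP range to an image profile, host profile and cluster
New-DeployRule -Name "ProdHosts" `
    -Item "ESXi51-Custom-FDM", "Cluster01-HostProfile", "Cluster01" `
    -Pattern "ipv4=192.168.10.50-192.168.10.80"

# Activate the rule by adding it to the working rule set
Add-DeployRule -DeployRule "ProdHosts"
```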

Converged Networking

An interesting point to note, a warning perhaps, is the use of Auto Deploy in a converged networking configuration, as there is a limitation on which network card types are supported. VMware have published a KB article about this, although it doesn’t appear to be widely known or publicised.
VMware's KB article states, "You cannot provision EFI hosts with Auto Deploy unless you switch the EFI system to BIOS compatibility mode."
The statement refers to the requirement for ‘legacy’ NICs: traditional 1GbE NICs that are controlled by the server BIOS rather than the dedicated converged networking hardware. The enablement varies greatly between manufacturers; it can be a BIOS configuration, or additional hardware may be required.

Top Resources

There are lots of great resources both from VMware officially and from virtualisation community blog sites, the ones that helped me the most are listed below.

I hope that the above tips, tricks and resources prove helpful if your boss asks you to evaluate Auto Deploy for your environment, or if you’re just learning it to keep your knowledge up to date.
Gregg

I was very fortunate to attend VMworld yet again, and with this being my third attendance in a row; it allowed me to see the differences in both my personal interest and growing expertise. So I thought I would give my perspective on this year’s VMworld and the areas that caught my interest.

Day 1 (Partner Day)

Monday of VMworld is dedicated to VMware Partners, with session content focused on partner relationships and the partner ecosystem; as Xtravirt are a VMware Solutions Partner, I was able to attend. This proved highly beneficial, as a number of the sessions and discussions showed how dedicated VMware are to their partners and how much they are willing to help even SMB partners grow their sales and market share. If you work for a partner and are thinking of coming to VMworld US or EU next year, I would highly recommend signing up for the Partner Day and attending the partner tracks, as a number of great announcements and tips were shared.

Day 2

Day 2 was the first full day of VMworld for everyone, whether you were a blogger, a partner or an attendee. The day started early with the VMworld keynote; I was fortunate enough to get a great spot in the bloggers’ area and found the keynote and its announcements interesting. I won’t go into too much detail as you can watch the recordings on the VMworldTV YouTube channel here. One of the big announcements of the day that caught my interest was the release of the new VMware Cloud Management suite.
After the keynote I attended a session on vSphere 5 design and then hit the Solutions Exchange, where I was able to get answers to a number of questions about a couple of products I had my eye on. The Solutions Exchange was unfortunately in an adjacent building, which meant you had to factor a 10 minute walk into your planning if you were moving between buildings; in my opinion it wasn’t situated as well as in previous years. Mind you, the walking helped to burn off the calories on offer from the abundance of cakes.
Recently I have been deploying vCenter Operations Manager and vCenter Configuration Manager for customers. There have been a number of questions around the metrics and thresholds of vCenter Operations Manager and how to customise them, not forgetting the reporting thresholds and how to prevent information overload and being bombarded by alerts. With vCenter Operations Manager 5.6 this has now been addressed with the addition of intelligent alerts and thresholds based on group management policies. I am really looking forward to using the updated product, and was able to complete a Hands-On Lab using vCenter Operations Manager 5.6 with the new ‘root cause’ description feature, shown in the screenshot below.
Add to this the ability to click on the link, find out what the error means and gain guidance from VMware on how to fix it, and this is another step towards managing your vSphere Private Cloud and Public Cloud all in one place.
In the evening I attended the combined vExpert/VCDX/Office of the CTO party, to which fellow Xtravirt colleague Darren Woollard and I were invited as VMware vExperts. The event was amazing to say the least: we were able to chat to like-minded people from the vExpert group, as well as VCDXs and Office of the CTO employees. Darren and I made sure we introduced ourselves to as many people as possible and spoke with Kit Colbert, Damian Karlson, Josh Atwell, Andrea Mauro and even VMware’s CTO Steve Herrod. We snuck in an invite to one of the London VMware User Group meetings, and he said he would try to make one next May. Fingers crossed.
L-R: John Troyer (VMware Communities); Erik Ullanderson (Director Global Certifications); Steve Herrod (VMware CTO)

Day 3

Day 3 started with the second keynote, focused on End User Computing (EUC) and setting out to prove that End User Computing doesn’t equal VDI. The sessions were really great, and there was yet another demo, this time of Mirage, from Vittorio Viarengo, VP of Marketing for EUC at VMware. This keynote really showed how even mobile phones and tablets are being targeted by VMware as tools for the enterprise. He detailed how users can utilise these devices in their daily jobs and how easy it will be to do your work from them whilst keeping corporate data safe and secure. I would recommend watching the keynote here as it gives some great insight into VMware’s vision for the future of EUC. The rest of my morning was spent in the Hands-on Labs doing the vCenter Configuration Manager lab and the vCloud Automation Center 5.1 lab, which were both really good.
The remainder of my day was booked up. I had been asked to participate in a customer reference video about my deployment of VMware vCenter Configuration Manager at a client of Xtravirt’s. After this videoing session I contributed to another interview with the VMware UK social media crew about my experience of VMworld 2012, and to chat about the London VMUG. The VMware UK social media video, which also includes fellow LonVMUG attendee Barry Coombs, is here: http://www.youtube.com/watch
In the evening it was the VMworld party, which was fairly good, although yet again a headline band wasn’t booked, unlike in the US. Instead the crowd was subjected to a couple of cover bands and some Spanish dancing meshed with street dance.

Day 4

Day 4 was my last day at VMworld and, as I was flying out in the afternoon, quite short. I attended session INF-VSP1475, VMware vSphere 5 Design Discussions, which was really informative and sparked ideas around designs for my day job and for my planned attempt at the VCDX. The remainder of the day was spent watching the highly interesting TechTalk vBrownbags in the VMworld hang space / bloggers’ area and chatting about all the projects and technologies everyone is currently undertaking. This part of VMworld is less understood by some people in management (fortunately Xtravirt’s management is not among them), but it’s so valuable. Being part of the community and knowing who is doing what can help you in future deployments, especially as you may need to call on the assistance of your peers. This aspect is really worth its weight in gold.
The day was now over and this year’s VMworld was finished for me, so I made my way to the airport. I really enjoyed this VMworld and am very grateful I was able to attend again. The direction VMware seem to be heading with all their tools and new solutions makes it a tall order to keep up to date whilst doing your day job. However, a week like VMworld gives you the opportunity to update that knowledge and, as previously stated, to make connections in the community and within VMware that will help you deliver bigger and better solutions for your company and customers.
Gregg

In this article I want to overview my perception of a VMworld conference, what the marketing engine offers and what the few days in Barcelona meant for me.

What's all the fuss about?

The VMworld conference is the de facto ‘must-go-to’ event in the world of virtualisation. Not only are attendees treated to product launches and new technologies appearing on the horizon (no pun intended), but the conference is buzzing with like-minded techies rubbing shoulders with each other. A dedicated area for vendors, the Solutions Exchange, is provided, where all manner of free gifts, software demonstrations and business contacts are there to be extracted. As a veteran attendee I’ve built up many contacts over time, learned how to survive the Solutions Exchange and the evening gatherings and, surprisingly, still manage to learn about new technologies. Attending this conference shouldn’t be underestimated though; the days are long and tiring but phenomenally rewarding.

Vendors

While delegates are expected to pay a fee to gain access to the conference, it’s certainly not enough to cover the event. In the US, the Moscone Center in San Francisco or The Venetian hotel in Las Vegas has to cater for upwards of 17,000 eager attendees; in Europe, over 8,000 people now sign up. So VMware partners with the global giants to fund aspects of the conference, and in return they are given air time at General Sessions, plus many opportunities to brandish their logos on every serviette, sign and local piece of transport. For the more subtle vendors, or those with a smaller marketing budget, the Solutions Exchange is where they make their mark. You’d expect to find most, if not all, of the market players that plug into virtualisation in one way or another. Booths are manned by keen account managers, technical SMEs and sales teams all vying for your attention. Vendors will typically squeeze as much as they can into their allotted space; it’s not uncommon to see fully populated SANs, or a blade chassis, as well as software demonstrations in a lab environment. Remarkably I’ve even seen Microsoft attending and touting their hypervisor; their hook-in was the chance to win Xboxes, which gets most techies excited.

People

The famous and the best within the virtualisation industry are usually in attendance, maybe not at both conferences but at least at one of them. Whether at a technical, account management or architectural level, you’ll find many of these people in and around the conference. The popularity of social networking tools and blogging now promotes their profiles, which in turn promotes yet another reason to attend. It’s not just technical content that makes VMworld a success; it’s the people networking too. This is everybody’s opportunity to meet, greet and engage. Many attendees already know their networking peers electronically, and it’s the conference that helps to finalise the associations.

Learning

There are numerous routes to learn more about VMware’s product suite; it’s not just about sitting in a darkened hall listening to a presenter hiding behind a lectern. Of course, throughout the week you can knock yourself out attending session after session, scribbling notes and overloading on information (if that’s your thing). Alternatively, take time out to attend the Hands-on Labs, a dedicated area where you’ll find hundreds of VDI sessions offering the opportunity to explore literally every aspect of VMware’s suite of products. This avenue to access the latest technology is incredibly popular; after all, it’s not every day you can deploy an entire Private Cloud, break it, and then fix it. After this, why not walk around the Solutions Exchange, find the VMware area and pick the brains of product suite experts? Demonstrations are always on hand for all the technologies. If you’ve exhausted that route then schedule a session to ‘Meet the Experts’. Here the VMware subject matter experts are available to discuss existing and new technologies on a one-to-one or one-to-few basis. Finally, you mustn’t forget the vendors. Hunt down the ones you’re already aligned with, or potentially planning to be, and quiz them; it makes their day go quicker if they’re occupied.

The evening events

There’s no point denying that attending a conference of this size comes with some perks too. If your people networking is working well you’ll soon find yourself receiving invites from vendors and solutions partners to post-conference drinks and nibbles. Some vendors go mad and hire an entire nightclub or a local brewery, whereas others just provide champagne and canapés. The VMware User Group (VMUG) typically arranges a meet-up to bring the user group community together. There’s a vExpert meet-up, VMware customer and partner recognition dinners, and so on; you get the idea. During the three-day conference VMware provide a party for all attendees. In recent years the US VMworld has offered party headliners of the likes of the Foo Fighters and Bon Jovi, whereas Europe tends just to have a themed event. Either way, there’s free food and drink on offer.

This year for me?

Well, it was a whirlwind of people networking and partner conversations. I notched up more contacts and linked Twitter IDs to physical people, which in turn of course leads to more connections. I attended an NDA session for a partner, Nutanix, and through this added more people to my contacts, both VMware and community based. You can see how the people networking perpetuates so quickly.
I had the added incentive of blogging this year on my own website. Through my extracurricular activities outside of my daily consulting role I’ve notched up the VMware vExpert accolade in 2011 and 2012. As part of this community there’s an opportunity to apply for a blogger’s pass for either of the VMworld conferences; this year I applied and was accepted for the event in Barcelona. The challenge I then set myself was to bring something different to my blog postings, as there are many bloggers in attendance dissecting technical sessions and business direction. So I went for something that traced the day of a delegate: quite simple and easy to do with a pocket camera. I snapped a shot wherever I happened to be, starting from the Sunday morning when I set off to the airport through to the end of the conference on Thursday. Each day I extracted the images, plugged them into iMovie, set a little music for background noise and, hey presto, a one- to two-minute video appeared on my site.
You can see the videos here:

While I attended a couple of End User Computing technical sessions, I found learning from others far more beneficial. Chatting with other delegates is the only time you’ll hear the real-world stories of what has (or hasn’t) worked. The most memorable moments of the week, though, were meeting and chatting with Steve Herrod (CTO & Senior Vice President of R&D, VMware) and, very briefly, Pat Gelsinger (CEO, VMware). That’s name dropping for you.
After reading all this, I certainly hope to see you at one of the future VMworld conferences.
Darren

Having worked with Citrix’s long-standing XenApp technology in its many forms for 18 years, and now VMware View for the last three, I keep coming up against the question “which one is right for my organisation?”. It used to be a much simpler decision, with Citrix the clear leader in the market on both features and performance. But do IT decision makers have enough independent information to make the technology decision today?
Before I begin, I’d just like to caveat that there are other good solutions in the marketplace, and each has its own use cases, but for this post I’m keeping my focus limited to the top two established market leaders.

What are the differences?

First, let’s look at the primary differences, and bring XenDesktop into the picture. It is much easier to compare Citrix XenDesktop and VMware View as they both work in a similar way: they deliver desktop operating systems and applications hosted on a hypervisor in the data centre. Citrix XenApp, however, shares a server operating system across many users, providing potentially greater user density, partly by reducing the number of operating system instances required to support the connecting users. The comparison challenge, when developing a desktop strategy influenced by technology options, comes from XenApp being bundled with XenDesktop. Most XenDesktop deployments I have worked on have a mixture of XenDesktop and XenApp. The general rule of thumb applied when developing the business case is the 80/20 rule: 80% of users delivered by XenApp and 20% with a full Windows 7 virtual desktop delivered by XenDesktop. However, use case analysis that includes applications may further influence the actual design, creating a different split.

A simple decision?

Given the potentially increased user density per physical host and other costs such as infrastructure and licensing, with Microsoft licensing being a significant influence, it initially appears the decision should be fairly straightforward. However, it’s not as simple as it seems.
To understand why, the decision needs to be taken in the context of what the business is trying to achieve, and some of the realities these solutions and end-to-end architectures bring with them. Assuming that budget is available based on a realistic business case, the key influences on the decision can be summarised in three points:

What the business wants to achieve

What the use cases are

Scope of applications

VMware’s View solution has matured a lot over the last few years, and the difference in the user experience and device support between it and Citrix has narrowed greatly. Both vendors’ solutions now provide a realistic solution to support most use cases and client devices that are broadly used, even in businesses considering BYOD. The difference is in the effort to implement and manage each solution, with applications being the biggest challenge. Application virtualisation has helped to reduce application deployment challenges but there are still many hurdles to overcome.
One difference between XenApp and dedicated virtual desktops is the risk of an application having a negative effect on all logged-on users. This creates the need to silo applications onto dedicated hardware, effectively increasing the compute resources needed to support a set number of users. Application silos occur for a number of reasons, including:

High resource utilisation

Memory leaking

Application compatibility

Multiple application versions

So while you can achieve greater user density with XenApp, application silos are often required, which reduces the average density achieved across the entire estate. There is also the need for increased testing in XenApp: applications, both purchased and developed in house, may need additional testing and optimising to work effectively in the XenApp environment.

Reduce complexity

By adopting a virtual desktop strategy, many of these complexities are reduced. Applications need to be proven on the required operating system, such as Windows 7, but for many organisations this activity is happening, if not completed already. Each virtual desktop has its own operating system and allocated resources, greatly reducing the impact on other users when there is an issue. It is also easier to cater for the requirements of web-based applications, with more control over individuals’ web-based settings and browsers. This reduced complexity when dealing with applications comes at a price, including greater infrastructure, but with the advances in hardware and software, virtual desktop infrastructures are becoming simpler and more cost effective.
The more applications and complexity you have in an environment, the more compelling using virtual desktops over XenApp hosted desktops can be. But where the tipping point occurs will vary from one organisation to another.
There is then the choice between Citrix XenDesktop and VMware View. For many organisations the decision is influenced by experience with a given vendor’s technology and internal skills. However, the decision should not be made on this alone; rather, it should be weighed up in a wider decision-making process. There will also be instances where the decision comes down to a single essential capability or feature of a particular vendor’s solution. All these points demonstrate the need to fully understand the business requirements, use cases and application landscape.
With applications moving to web-based architectures, and HTML providing more functionality as a user interface, is investing in complex desktop environments the right thing to do? For many there is a compelling reason to do so, but with the pace of change it is worth striving to keep complexity and cost to a minimum. Looking at vendors’ roadmaps and aligning these with the business vision will help future-proof any investment made now. Where there may not be a single solution that meets all requirements, many organisations will find that the ecosystem of tools in the virtualisation marketplace will help meet short-term business goals while providing a credible step towards truly flexible, device-independent application access.

Many organisations have added a virtualization capability to fulfil infrastructure needs, which in turn, has led to many running critical backend system workloads. The level of dependency and trust in virtualization is forever growing, particularly as multiple vendors provide wider reaching interoperability, as well as competitive vendor compatibility. The level of trust in delivering business critical applications using virtualization varies based on a number of factors:

Size of organisation

Security

Compliance

Performance

Needs

Country laws etc.

Product Developments
Vendors have been focusing on developing new products, and feature improvements for existing products, targeting the needs of delivering business critical applications using virtualization. The primary focus is to provide seamless, configurable and centralised capability to deliver business critical applications. Integrated features such as high availability protecting networking, storage and compute resources are a few examples. Tooling is also playing a key role, with monitoring and management that, given the right business requirements as input, can provide meaningful KPI data; on-target availability and performance, for example, will strengthen trust and prove the capability of virtualization to deliver business critical applications.
The Use Case Approach
As with any technology project, approaching each scenario with a use case approach will lead to the right technology decisions and design. If we use the example of a financial trading company, it will produce many use cases, but it’s also likely that there will be common use cases throughout, e.g.:

Low latency

Highly available

Auditable

Any device, anywhere

Each key virtualization vendor has business critical application focused virtualization products and a supporting roadmap that fit these use cases. In some cases, organisations may need to employ a mix of products to arrive at the right solution for them.
5 Steps
So, one of the early activities is to build those use cases. The level of depth will vary, but here are 5 suggested steps to follow:

Build standard and edge use cases (edge cases being one-offs or highly unusual requirements)

With the app landscape catalogued, usage statistics collected, users defined, and use cases built, you are now ready to move forward with a technology selection phase. I’ll cover the next phase in an upcoming Part II.


This post covers a recent experience I had when updating persistent desktops in Citrix Machine Creation Services.
If you’ve got a deployment of dedicated virtual desktops in Citrix XenDesktop, you may have a requirement to update the master image. This may be when the number or type of changes made to the master image is large (patches or applications, for example), meaning that newly provisioned machines take a long time to apply updates when first used.
Updating pooled desktops is easy: it can be done through the GUI. For dedicated desktops, however, the update needs to be done via PowerShell, and in all cases it can have an impact on your storage.
It’s worth noting that once you change the master image for a dedicated Desktop Group, the existing desktops will not be affected as the updated master image only applies to new desktops. This is great though, as the existing desktops have probably had the updates applied already through enterprise management tools, and this is the best way, I believe, to manage dedicated desktops once they’ve been deployed.
If you need to update your master image, you will first need to load the Citrix PowerShell snap-in. You can run this from PowerShell on the Desktop Delivery Controller:

Add-PSSnapin Citrix*

A quick note on PowerShell: while some people like to craft PowerShell “one-liners”, for important tasks such as this I prefer to write all my commands out in a script and run each line in turn, judging the output and the value of variables before progressing to the next task. You can use any script editor; the built-in PowerShell Integrated Scripting Environment (ISE) within Windows is a useful free tool. Just press F8 to run the selected line of code.

Once you’ve loaded the Citrix snap-in you need to get the provisioning scheme details. You can just run the Get-ProvScheme cmdlet (PowerShell also resolves the shorthand “provscheme” to it), but as this returns a lot of information, it’s best to capture it all in a variable and then loop through, displaying just the relevant pieces:

#Get Provisioning Scheme
$ProvisioningScheme = Get-ProvScheme

#Loop Each Desktop Group and Get Master Image
ForEach ($Group in $ProvisioningScheme)
{
    Write-Host "######"
    $Group.ProvisioningSchemeName # This is the Desktop Group
    $Group.MasterImageVM # This is the current snapshot
}

So now we have a list of Desktop Groups and their associated master images, each separated by the hash marks. The first value is the Desktop Group; the second value is the master image and shows the current snapshot.

Now that the master image in use is known, you can use this information to run the following command, which will return the new/current snapshot of your image (note that you only need to specify the VM name, not the full snapshot path), e.g.:
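The original command was shown in a screenshot that has not survived; the sketch below uses the equivalent cmdlets from the Citrix MCS PowerShell SDK. The hosting unit name, scheme name, and VM name are placeholders, and picking the last item as the newest snapshot is an assumption:

```powershell
# List the snapshots for the master VM; only the VM name is needed,
# the provider returns the full snapshot paths
$Snapshots = Get-ChildItem -Recurse -Path "XDHyp:\HostingUnits\HostingUnit1\MasterVM.vm"

# Point the provisioning scheme at the new snapshot (a long-running task)
Publish-ProvMasterVmImage -ProvisioningSchemeName "DedicatedDesktops" `
    -MasterImageVM $Snapshots[-1].FullPath
```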

Once the command is running, you can check the progress as normal in your hypervisor and watch the provisioning process; finally, within the Citrix Desktop Studio Actions tab you’ll see confirmation that the task was successful. You can also run the provscheme command again to check that the Desktop Group is using the new image.
With the PowerShell console still open, now is the ideal time to set new parameters for the Desktop Group, such as the memory, CPUs, or disk size if required. For example, to alter the amount of memory new VMs are deployed with, run this command:
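The command itself was in a screenshot; a sketch using Set-ProvScheme, where the scheme name and memory value are placeholders:

```powershell
# Deploy new VMs in this provisioning scheme with 4 GB of RAM
Set-ProvScheme -ProvisioningSchemeName "DedicatedDesktops" -VMMemoryMB 4096
```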

You’re now resting on your laurels after successfully updating your desktops, but watch out: before you leave for the day there are a couple of gotchas.
In XenDesktop 5.5, you will soon start to notice that when users log off, their machines shut down and don’t restart. This issue is fixed in XenDesktop 5.6. It is caused by XenDesktop marking the existing machines as out of date and pending an image update; in fact, you can see this “out of date” message in Desktop Director. Remember that updating the master image does not affect existing machines, so I’m assuming this code is sitting there for a future purpose or has been reused from updating pooled desktops. The fix for those affected is in CTX132211.
The second is a matter of space: for each update you perform, a new master image must sit on each data store/storage repository defined in the host connection. This can soon consume a large amount of space and become cumbersome to manage; space is one of the main considerations when updating dedicated images and planning your storage requirements. You can mitigate this with de-duplication, but that’s a blog for another day.
Finally, it’s worth noting that there is currently no function in Citrix XenDesktop similar to VMware View’s “recompose” to update an existing dedicated desktop to use a new master image. If you want to do that, you need to delete the user’s machine and issue them a new desktop provisioned from the new image. Remember that you’ll potentially lose some user settings if you re-issue new desktops, unless you are completely managing the user’s persona and application delivery. However, issuing new desktops may be a viable action in certain circumstances, such as a broken desktop, or when the user’s difference/delta disk has grown due to installed applications that are now in the base image.


VMware SRM v5 with an EMC RecoverPoint SRA v2 – Array Pair missing

In a recent deployment of VMware’s SRM v5 I was unable to successfully create a Protection Group. The first step in the Protection Group wizard presented an empty Array Pair pane.
The screenshot below shows this empty pane.

Figure 1: Empty Array Pair

At this point you would expect the pane to be populated with the SRA previously configured for the Selected Site. From here you cannot continue.
I naturally assumed the SRA Array Pair was disabled, but upon reviewing the status of the SRAs they were indeed enabled. Puzzled, I set about checking the status of the VMware SRM service on both dedicated SRM servers; these were both running. The MS Windows event logs revealed no errors from either the VMware SRM service or the EMC SRA.
Using the bountiful resources available on the trusty internet, I couldn’t locate an article, VMware or EMC forum post, or blog entry that referred to this exact problem. At this point I began to wonder if the problem wasn’t at the VMware layer but at the underlying storage presentation, or even in the communication with the EMC RecoverPoint appliance(s).
The investigation now continued on the EMC RecoverPoint installation. Initially checking through each of the categories within the console revealed many green ticks. All references to status, replication, pairing, and VMware vCenter connectivity were reporting correctly.

Figure 2: RecoverPoint appliance configuration

The Consistency Group Status showed the animation of data traffic, the storage access reported correctly as either Direct Access or No Access.

Figure 3: Consistency Group Status

It was actually from this screen that the answer presented itself, specifically in the Policy tab. By default a list of categories is shown, with all of them collapsed.
Expand the category:


Recently I deployed VMware vCenter Configuration Manager 5.5 and came across a number of hurdles and pain points along the way. Some of them were due to the configurations of the SQL servers, in that they weren’t configured as requested, and others were down to the unusual way VCM is configured and how problematic it can be to reinstall if you make a mistake and need to start again.
So, to prevent other administrators from burning valuable time figuring out how to fix the same hurdles I experienced along the way, I thought I would write a post describing how I worked around each problem.

Uninstalling and Reinstalling VCM

Due to my VCM server having a problem with the SQL Server Reporting Services database, I had to stop the installation halfway through. The application attempted a rollback but returned a number of errors stating “INSTALL.LOG not found”. I had to accept these errors and allow the rollback to complete, but when I tried to run the installation again I received the error below.

To remove this error and allow the installation to proceed, go to the location where VCM is installed, then the Uninstall folder, then Packages, and then each folder underneath, and run the uninstall agent for each piece. You will get the “INSTALL.LOG not found” error, but the workaround I found is to cut and paste the log file to the machine’s desktop; when the uninstaller asks for the location of the log file, you point it there.

Make sure you remove every single package before running the installation again.

CM Agent won’t uninstall

I ran into a very strange problem where, even though I ran the CMAgent uninstaller as shown in the screenshot above, when I went through the VCM Checker I would receive the error below, stating the CM Agent was already installed.

The relatively simple but effective way of getting this uninstalled is to mount the ISO for VCM and run the CMAgent installer.

Once installed, uninstall it from the Packages folder location as detailed above, and it will uninstall completely and successfully, allowing the Checker to pass all its checks.

SSRS Insecure State

During the installation of VCM, specifically the SSRS portion, I pointed the installation at the SSRS database and instance. I received the error: “Insecure state detected while validating SSRS Instance MSSQLSERVER. The instance is not configured for HTTPS, please consult documentation before continuing”. The error isn’t a showstopper and you can continue, but for me this wasn’t an option as I wanted the SSRS instance secured correctly. After a fair bit of research and trying a few different options (this is where I had to reinstall VCM, as mentioned in my first hurdle above), I found the solution to the “problem”.
The problem is "fixed" by adding a certificate to the web server URL to create and allow SSL connectivity to the Reporting Server.

You will need to get an internally signed certificate from your internal CA or an externally signed one

Go to Web Service URL and, in the SSL Certificate drop-down, select the certificate you have installed on the machine; click Apply, and ensure no errors appear in the results panel at the bottom of the page

Now when you go through the installation you will not receive the error because HTTPS / SSL is enabled.
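The binding in the steps above can also be scripted through the SSRS WMI provider. This is a sketch only: the namespace version (v10 here) and instance name (RS_MSSQLSERVER) vary with your SQL Server release, and $Thumbprint is a placeholder for your certificate’s thumbprint:

```powershell
# Sketch: bind an existing machine certificate to the SSRS Web Service URL.
# Namespace version and instance name depend on your SQL Server release.
$rsConfig = Get-WmiObject -Namespace "root\Microsoft\SqlServer\ReportServer\RS_MSSQLSERVER\v10\Admin" `
    -Class MSReportServer_ConfigurationSetting

# Arguments: application, certificate hash, IP address, port, locale ID
$rsConfig.CreateSSLCertificateBinding("ReportServerWebService", $Thumbprint, "0.0.0.0", 443, 1033)
```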

SQL Integrity Instance Error

When VCM was running its VCM Checker utility, it kept failing on the SQL checks, resulting in the error below.

For this problem, I spent ages trying to get it to work and even completed a whole rebuild, including the deletion and recreation of all the VCM databases, but to no avail. Only after stepping through the installation document line by line, specifically around the SQL components VCM requires, did I find the solution: the local language of the SQL server was incorrect, even though the languages of the SQL instances were all correct and the collation was correct.

The local language was corrected, and the SQL instances were recreated to include the now-correct language in the collation; it then passed all the checks.

Dashboard Reports Fail in VCM

After VCM was installed I thought all my hurdles were behind me, but unfortunately, after running a template collection and then trying to view the report, I received the error shown in the panel below: “You must use Internet Explorer with the Run as administrator option to view dashboard reports when working locally on the Collector.”

2. If you do not see the folder ECM Reports on the screen, run this command:

<InstallPath>\WebConsole\Files\Reports\RSInstall.bat

where <InstallPath> is the base path of your VCM installation. For me, the Files folder was in the L1033 folder within the WebConsole folder.

3. Navigate to ECM Reports > ECM.

4. Click the Security tab.

5. Click New Role Assignment.

6. For the group or user name, type:

<ServerName>\ECMSRSUser

where <ServerName> is the short name of your server

7. Select the Content Manager role.

8. Click OK.

9. Restart the SQL Server Reporting Services service.
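The restart in step 9 can also be done from an elevated PowerShell prompt; the service name below assumes a default SSRS instance:

```powershell
# Restart SQL Server Reporting Services (default instance service name)
Restart-Service -Name "ReportServer"
```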

After completing the steps above, I was able to view my reports after running a collection against one of the VCM Templates.

No Agent Proxy Machine Found

The next problem I encountered was that every time I tried to configure the settings of my vCenter servers under Licenced Virtual Environments I would receive the error below. I had my vCenter server showing under Agent Proxies and the agent was the most current and it wasn't showing any errors.

My problem came about because the servers had not been trusted and the managing agent status had not been enabled. I browsed to Administration > Certificates in the VCM administration portal and followed the steps detailed from page 27 of the VCM Administration Guide to set the managing agents as enabled and trusted.
Even though there were a few errors above, and there is a fair amount of configuration to do to get vCenter Configuration Manager 5.5 working, when it is working it’s an amazing tool, and one I would highly recommend to anyone looking for PCI, ISO, and vSphere 5 hardening-guideline compliance, to name a few.

During a recent customer engagement deploying Citrix XenDesktop 5.6 to 4000 users across EMEA, we came across an issue for a group of users that had a requirement to access a Citrix Metaframe XP farm hosted on a W2K server … [More]

During a recent customer engagement deploying Citrix XenDesktop 5.6 to 4000 users across EMEA, we came across an issue for a group of users that had a requirement to access a Citrix Metaframe XP farm hosted on a W2K server, to reach some legacy apps hosted by a third party. To meet this requirement we used something called ICA Piggyback.
So what is ICA Piggyback? Essentially, it allows us to use a double-hop process to bounce from one farm to another, letting us make use of different ICA client versions.
I published an ICA file from our XenApp 6.5 farm to the specific group of users; however, we had complaints that when the published desktop was maximised to a full window, the session would disconnect. Also, if the user left the desktop as a window, the session would disconnect at random intervals, even when the desktop was in use.
The desktops published from Citrix XenDesktop were running Citrix Receiver 3.3, so I started to investigate the issue by ensuring we had met current supported levels.
A quick look at the receiver documentation showed the target destination was unsupported, not surprising really….

Figure 1: Citrix Receiver 3.3 System Requirements

My first approach was to contact the third party to see if it was possible to access the legacy applications on a supported platform, to no avail. Therefore, it was back to the drawing board.
I started searching through the Citrix documentation looking for a client that supported both W2K and Metaframe XP, eventually finding the XenApp Plugin for hosted Apps version 11.0.150.5357.

Figure 2: Citrix XenApp Plugin System Requirements

After finding what was going to be the client for the job, I had to figure out how to get it to the users. After some head scratching, I figured I had two options: package the application, or deploy it from a XenApp server and use a piggyback method.
We had a number of Citrix XenApp 6.5 servers available in a farm, yet I didn’t want to install such an old client on these servers in case I lost any functionality going forward. We had a separate requirement to host some legacy IE6 applications, so I deployed out a small Citrix XenApp 5.0 farm hosted on W2K3 R2.
After a number of functional tests for performance and stability, this allowed for smooth connections from a hosted desktop running Citrix Receiver 3.1, passing through a XenApp 5 farm, into a Metaframe XP farm.
Whilst this is not a permanent solution, it does provide a handy workaround and restores functionality for a number of users whilst the final legacy apps are retired and replaced.

I had been looking forward to the London VMUG meeting a great deal; aside from the interesting and thought-provoking sessions, I hadn’t been able to get along to a London VMUG since May 2014. VMUG meetings are also a … [More]
