DefinIT – http://www.definit.co.uk
#vROps Webinar Series – Part 11 – Getting more out of #vROps with PowerCLI
http://www.definit.co.uk/2016/11/vrops-webinar-series-part-11-getting-more-out-of-vrops-with-powercli-2/
Sat, 26 Nov 2016

Time to publish the recording for the 11th episode of the vROps Webinar Series. This time we were joined by Vinith Menon, who spoke about getting more from your vROps builds with PowerCLI. Vinith demonstrated many useful ways of leveraging PowerCLI to manage your vROps environments and to communicate with the vROps API.

#vROps Webinar Series – Announcing Part 11 – Getting more out of #vROps with PowerCLI
http://www.definit.co.uk/2016/11/vrops-webinar-series-part-11-getting-more-out-of-vrops-with-powercli/
Tue, 22 Nov 2016

Another month has gone and Christmas is now looming large! It has been extremely busy, but we still want to keep the momentum of the webinar series going as we reach the business end of the year. This time around we will talk about getting more out of vRealize Operations Manager using PowerCLI.

For this session we will be joined by Vinith Menon, who will show us all kinds of PowerCLI goodness.

So without further ado, save the date in your calendars and join us for the next episode of the vRealize Operations Webinar Series 2016.

Deploying to AWS with Software Components on vRealize Automation 7
http://www.definit.co.uk/2016/11/deploying-to-aws-with-software-components-on-vrealize-automation-7/

Recently I’ve been working on some ideas in my lab to leverage the AWS endpoint in vRealize Automation. One of the things I needed was to get Software Components working on my AWS-deployed instances.

The diagram to the right shows my end-state network – the instance deployed by vRA into AWS should sit in a private subnet in my VPC, use my local lab DNS server, and be able to reach my vRA instance. This allows me to use the vRA guest agent for Software Components on the deployed VMs. I also wanted the deployed VMs to use their local NAT gateway for internet traffic, rather than paying for the data over my VPN connection.

Configure a VPN connection

As I wrote in my previous article, the vRealize Automation Guest Agent requires a connection back to the vRealize Appliance in order to be able to receive tasks. Make sure you have a VPN connection configured and you can communicate back and forth with your internal network.

Create DHCP Option Set

A custom DHCP Option set allows you to specify the DNS domain name and DNS servers that are set by the DHCP server when handing out IPs to your Instance. It’s needed here because the vRA Guest Agent will need to use my lab DNS server to resolve my lab vRealize Automation deployment.

VPC > DHCP Option Sets > Create DHCP options set
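For the CLI-inclined, the same option set can be created with the AWS CLI – a sketch only, and the domain name and DNS server IP below are placeholder values for my lab, not anything prescribed by vRA:

```shell
# Create a DHCP option set pointing instances at the lab DNS
# (domain name and server IP are lab-specific placeholders)
aws ec2 create-dhcp-options \
    --dhcp-configurations \
        "Key=domain-name,Values=lab.local" \
        "Key=domain-name-servers,Values=192.168.1.10"
```

The command returns the new `dopt-` ID, which you need for the association step.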

Assign DHCP Option Set to VPC

Assign the newly created DHCP option set to the VPC so that it’s used whenever an Instance is deployed.
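The association can also be scripted – a sketch with the AWS CLI, where both IDs are placeholders for the option set created above and the vRA-facing VPC:

```shell
# Associate the custom DHCP option set with the VPC
# (both IDs below are placeholders)
aws ec2 associate-dhcp-options \
    --dhcp-options-id dopt-0abc1234 \
    --vpc-id vpc-0def5678
```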

Create a custom AMI

Shut down the instance that was just configured, then from the Instance actions select Image > Create Image.
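The console steps above map onto two AWS CLI calls – a sketch only, with a placeholder instance ID and an AMI name I made up for illustration:

```shell
# Stop the instance first so the image is taken from a consistent disk,
# then register it as a custom AMI (instance ID and name are placeholders)
aws ec2 stop-instances --instance-ids i-0abc1234
aws ec2 create-image \
    --instance-id i-0abc1234 \
    --name "vra-guest-agent-base" \
    --description "Custom AMI with vRA guest agent pre-installed"
```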

Configure the Image

Check that the new AMI is available – EC2 > AMIs

Configure vRealize Automation

At this point the AMI should be collectable in vRealize Automation, so we need to trigger a data collection from the AWS Compute Resource:

Once that completes, create a new blueprint and drag in an Amazon Machine:

Configure the Reservation policy:

Switch to the Build Information tab and select a machine image. Use the filter to select private AMIs.

Configure the key pair (if not specified, the key configured on the reservation will be used), and select the instance types you want to be available.

To test the Software Component I don’t need to configure the Machine Resources, but I do need to set the osfamily custom property on the Properties tab.

Drag a Software Component onto the Amazon Machine – I’m using a simple test one that writes custom text into a file in /tmp/text.txt:
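The install script behind that test component can be as simple as the sketch below. The `TEXT` variable name is hypothetical – in vRA it would be bound to a Software Component property, but here it just defaults to a sample value so the script stands alone:

```shell
#!/bin/bash
# Minimal install script for the test Software Component.
# TEXT would normally be bound to a vRA component property at request time;
# the default here is only for standalone testing.
TEXT="${TEXT:-Hello from vRealize Automation}"
echo "$TEXT" > /tmp/text.txt
cat /tmp/text.txt
```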

Save, publish and entitle the new Blueprint.

Requesting the Amazon Blueprint

Select the instance type and provisioning values:

Select the subnet and a security group that will allow the AMI to talk back to the vRealize Automation appliance and manager service (I have recycled an existing one).

Once the request has been submitted, you can view the Execution Information to ensure the blueprint deployed successfully:

View the machine under the Items tab to get the IP address, and export the SSH key to connect to the VM. As you can see, the contents of text.txt have been updated:

Creating an AWS Hardware VPN Connection with Ubiquiti EdgeRouter X
http://www.definit.co.uk/2016/11/creating-an-aws-hardware-vpn-connection-with-ubiquiti-edgerouterx/
Mon, 07 Nov 2016

When you’re working with Amazon and vRealize Automation Software Components, one of the requirements is for the Guest Agent (gugent) to talk back to the vRealize Automation APIs – the gugent polls the API for tasks it should perform, downloads and executes them, then updates each task with a status.

This means that Virtual Machines deployed as EC2 instances in an AWS VPC require the ability to talk back to internal corporate networks – not something you’d want to publish on the internet! That’s where AWS’s VPN connections come in – you can create several types of VPN that allow such communication over a secure (encrypted) virtual private network.

For the purposes of this post, I’m going to look at setting up the “AWS Hardware VPN”, which is described by Amazon:

You can create an IPsec, hardware VPN connection between your VPC and your remote network. On the AWS side of the VPN connection, a virtual private gateway provides two VPN endpoints for automatic failover. You configure your customer gateway, which is the physical device or software application on the remote side of the VPN connection.

As with all things AWS, you have to create two of anything in different Availability Zones to call it Highly Available, or for any kind of SLA to apply. The VPN is no different, and as you can see from the diagram above you create two tunnels to provide a Highly Available connection to AWS.

Create a Customer Gateway

A Customer Gateway defines your VPN endpoint from AWS’s point of view. Create a new one from the VPC Management console:
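The same Customer Gateway can be created from the AWS CLI – a sketch, where the public IP is a placeholder for the EdgeRouter’s internet-facing address, and the BGP ASN matches the 64512 that appears later in the downloaded Vyatta configuration:

```shell
# Define the on-premises VPN endpoint (public IP is a placeholder;
# 64512 is a private ASN and must match the router's BGP config)
aws ec2 create-customer-gateway \
    --type ipsec.1 \
    --public-ip 203.0.113.10 \
    --bgp-asn 64512
```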

Create a Virtual Private Gateway

A Virtual Private Gateway is needed to provide a VPN concentrator on the Amazon side of things. It must be attached to the VPC you’re using for AWS in vRealize Automation.

Now we can see the VPG is attached to the DefinIT-vRA VPC.
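The create-and-attach steps can also be done with the AWS CLI – a sketch only, with placeholder IDs for the new Virtual Private Gateway and the VPC:

```shell
# Create the Virtual Private Gateway (the AWS-side VPN concentrator)
aws ec2 create-vpn-gateway --type ipsec.1

# Attach it to the vRA-facing VPC (IDs below are placeholders)
aws ec2 attach-vpn-gateway \
    --vpn-gateway-id vgw-0abc1234 \
    --vpc-id vpc-0def5678
```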

Create a VPN Connection

Now we have the components configured, we can link them together with a VPN connection. Note that VPN charges apply once you complete the connection configuration.

Select the VPG and Customer Gateway we just configured, ensure you select Dynamic routing and give it a good name.

Creating the VPN connection took a couple of minutes – wait until you see the “available” state.
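The equivalent AWS CLI call is sketched below – gateway IDs are placeholders, and `StaticRoutesOnly=false` selects the dynamic (BGP) routing the post uses:

```shell
# Link the Customer Gateway and Virtual Private Gateway with a
# dynamically-routed (BGP) VPN connection; IDs are placeholders
aws ec2 create-vpn-connection \
    --type ipsec.1 \
    --customer-gateway-id cgw-0abc1234 \
    --vpn-gateway-id vgw-0def5678 \
    --options '{"StaticRoutesOnly":false}'
```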

Download the VPN Configuration

Since the EdgeRouter range is based on Vyatta, we can download the VPN configuration for Vyatta gateways. Hit the “Download Configuration” button and select Vyatta:


By default the configuration file will include a line to advertise 0.0.0.0/0 via BGP, which isn’t really desirable. Modify BOTH instances of this line:

set protocols bgp 64512 network 0.0.0.0/0

and add the network that you want to advertise:

set protocols bgp 64512 network 192.168.1.0/24

This subnet (192.168.1.0/24) is the subnet I have my vRA infrastructure on – the deployed VMs need to talk back to the vRA infrastructure to do my Application Components. If you want to advertise multiple subnets, simply duplicate this line and change the subnet.
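If you prefer to script the edit rather than do it by hand, a sed one-liner over the downloaded config does the job. The file name is a placeholder, and the sample file is generated here purely so the sketch is self-contained:

```shell
# Create a two-line sample standing in for the downloaded Vyatta config
# (the real file has two such lines, one per tunnel), then swap the
# default-route advertisement for the lab subnet
printf '%s\n' \
  'set protocols bgp 64512 network 0.0.0.0/0' \
  'set protocols bgp 64512 network 0.0.0.0/0' \
  > vpn-config.txt
sed -i.bak 's|network 0.0.0.0/0|network 192.168.1.0/24|' vpn-config.txt
cat vpn-config.txt
```

`-i.bak` keeps a backup copy of the original config alongside the edited one.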

Configuring the EdgeRouter

Firstly, take a backup of your configuration in case you need to roll back.
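On EdgeOS (Vyatta-derived) one way to do this is from configuration mode – a sketch of the device commands, with an assumed backup file name; check your EdgeOS version’s documentation for the exact `save` behaviour:

```shell
# On the EdgeRouter: 'save <file>' in configure mode writes a copy of
# the running configuration without touching the boot config
configure
save /config/pre-aws-vpn-backup.boot
exit
```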

SSH to your EdgeRouter, enter configuration mode (type configure) and paste in the modified configuration script from AWS:

Commit and save your new configuration

commit
save

The VPN connection should now come up. To verify this I logged on to my EdgeMax dashboard and could see that the new VPC tunnel connections were indeed connected:

Logging on to my VPC console, I can see that the VPN connection’s tunnel details are also up.

Enable route propagation

The final step is to enable route propagation on the Route Tables tab in the VPC dashboard. Select the route table and then select the Route Propagation tab, click Edit and then check the Propagate box:

In a second or two, the subnet behind the EdgeRouter should be in the Routes table:
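Route propagation can also be enabled from the AWS CLI – a sketch with placeholder IDs for the VPC’s route table and the Virtual Private Gateway:

```shell
# Let the VGW propagate BGP-learned routes into the VPC route table
# (IDs below are placeholders)
aws ec2 enable-vgw-route-propagation \
    --route-table-id rtb-0abc1234 \
    --gateway-id vgw-0def5678
```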

Testing Connectivity

Now when I spin up a new EC2 instance in the DefinIT-vRA VPC, I can configure the Security Group to allow ICMP and SSH from my local subnet.

Testing confirms that I can ping and SSH to the instance.
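From a machine on the 192.168.1.0/24 lab subnet, the check boils down to something like this – the instance’s private IP and the key file name are placeholders:

```shell
# Ping across the VPN to the instance's private IP (placeholder),
# then SSH in with the key pair selected at request time (placeholder)
ping -c 3 10.0.1.25
ssh -i ~/.ssh/vra-lab.pem ec2-user@10.0.1.25
```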

Security “fun” and embracing 2FA
http://www.definit.co.uk/2016/11/security-fun-and-embracing-2fa/
Wed, 02 Nov 2016

So the other day my Skype account was briefly compromised: digging through the activity logs showed a successful login from Russia, following many attempts from IP addresses all around the world (China, Korea, Argentina – the list goes on). You can see the successful login attempt in the picture below.

My initial reaction was stress and panic. As I didn’t know precisely where I had been compromised, I ran scans on my local machines while resetting passwords aplenty. Once I had calmed down a bit and reviewed where I had gone wrong, I set about upping my game.

You see, I had been using what I considered reasonably secure 8-character (or longer) alphanumeric passwords (and avoiding using the same password on more than one account), so I felt pretty secure.

What I discovered – purely my own fault – was that one of my old passwords was not in line with the above but was still active (though unused), so it was indeed the weakest link.

My guess at this point is that after numerous brute-force attempts (which I could see in the activity logs), mister hacker and his bots managed one successful login to my Skype account and briefly spammed people (interestingly, not all of my Skype contacts were sent a nasty link).

I was already using two-factor authentication (2FA) for a few things, but after the incident I went all in.

It took a little while to get everything that I wanted locked down and enabled for 2FA but all of the products I use seemed to support either 3rd party 2FA or have a suitable offering themselves.

Setting it up was a pain for a few things as some of the documentation was not so easy to find and frankly some applications do not cope so well with 2FA.

A couple of the 2FA offerings I am using now, which work very well:

Microsoft 2 Factor Authentication

Google Authenticator

In short, this was a lesson learned. I was complacent – it was just one password, but it was clearly enough. It doesn’t matter how small the weakest link in your personal security is: it can and will be exploited if left long enough.

#vROps Webinar Series Slide decks – available for download
http://www.definit.co.uk/2016/11/vrops-webinar-series-slide-decks-available-for-download/
Tue, 01 Nov 2016

If you have been following and enjoying the vROps Webinar Series that Sunny and I have been running this year, you may find it useful to know that Sunny has put all the slide decks up, available in one place, for you to download and share if you so wish.

If you have any questions or comments, just drop either Sunny or myself a comment on our respective blogs or contact us on Twitter.

Adding and removing #vROps nodes
http://www.definit.co.uk/2016/10/adding-and-removing-vrops-nodes/
Sat, 29 Oct 2016

I was asked recently if there were any materials I could direct people to regarding expanding existing vROps deployments. I did a brief search on the web to see what material was “out there” on how to perform what is a straightforward task. To my surprise, very little came up. So I have decided to create this guide, with all the things you need to consider when you are going to either expand one of your active vROps deployments or remove a node from one.

Prerequisites

DNS, DNS, DNS – did I mention DNS? Make sure your new node has a DNS entry (and a reverse lookup).

vROps relies on DNS (if you use VMware products regularly this will not be news to you). If you have not already got DNS records in place for your vROps cluster, get it done. vROps will not play nicely if there are problems in this area and you have more than one node.

Make sure the node you will add is using the same version as the existing cluster.

You can check the version by logging into the vROps instance and clicking the “about” button on the top right corner of the vROps UI.

If you are going to add a node, make sure it is the same size as the existing nodes. A quick check in the vSphere client will help you see how the existing nodes are sized.

You will be in an unsupported configuration if you try to mix and match your node sizes in a cluster.

Extra Small – 2x vCPU – 8GB Memory

Small – 4x vCPU – 16GB Memory

Medium – 8x vCPU – 32GB Memory

Large – 16x vCPU – 48GB Memory

Deploying a node and adding it to your cluster

Once you have determined the correct node size, go ahead and deploy the OVA (I am assuming you are deploying the appliance, but the same rules apply to a Windows deployment) and then start the VM for the first time.

Go to the FQDN or IP address of your new node in your preferred web browser.

You will be presented with the following:

Click “Expand an Existing Installation”

On the above screen click next.

Lots of boxes to fill in now; give your node a friendly name (it does not affect DNS) and set the node type to Data.

Enter the FQDN of your Master Node and click validate.

Check the box to accept the certificate.

Click next.

Enter the admin password for vROps and click next.

Everything is now set, just click finish.

The node will now go through the process of being added to the cluster

The browser page will refresh and jump to a new page (likely prompting you about certificates if you are using self-signed ones), and you will then be presented with a view of the vROps cluster.

After a good while you will see the option to complete the process; click “Finish Adding New Node(s)”. A pop-up dialogue will appear – click OK.

You will see the node firing up while the process completes.

After a while you will be taken to the normal vROps login page. Enter your admin credentials, go to Administration and then Cluster Management, and you will see your new node up and running as a member of the cluster.

Nice and easy eh?

Removing a node safely from an existing vROps Cluster

Important notice – this process will restart your vROps cluster.

If the node you want to remove is the Replica node you will need to disable HA first and then go ahead with the following steps.

Log in to vROps on the admin URL (not on the node you are going to remove):

https://master-node-fqdn-or-IP/admin

Select the node you want to remove FIRST.

Then click the green and red “take node offline” button.

Give a reason (think of something sensible for auditing purposes)

Notice the node is now “not running”

Click the red cross to remove the node from the cluster.

Read the warning, check the box and click yes.

The cluster will begin to remove the node.

Cluster then restarts.

As you can see the data node has disappeared and the cluster is restarting.

That’s basically it, once the cluster is online you are good to go.

#vROps Webinar Series 2016 – Part 10 – A Deep Dive into vROps API
http://www.definit.co.uk/2016/10/vrops-webinar-series-2016-part-10-a-deep-dive-into-vrops-api/
Sat, 29 Oct 2016

Time to publish the recording for the 10th episode of the vROps Webinar Series. This time around we spoke about the vRealize Operations Manager RESTful API and how to use it. After 20 minutes of slide-ware, I jumped into the lab and, thanks to the demo god, we demonstrated a number of use cases and browsed through the documentation to make it easier for you to consume and use.
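As a taste of the kind of thing covered, the vROps 6.x Suite API can be driven with plain curl – a sketch only, where the hostname and credentials are placeholders and the endpoint paths should be checked against your version’s API documentation:

```shell
# Acquire an auth token from the vROps Suite API
# (hostname and credentials are placeholders; -k skips cert validation
# for self-signed lab certificates)
curl -sk -X POST "https://vrops.lab.local/suite-api/api/auth/token/acquire" \
    -H "Content-Type: application/json" -H "Accept: application/json" \
    -d '{"username":"admin","password":"VMware1!"}'

# Use the returned token on subsequent calls, e.g. listing resources
curl -sk "https://vrops.lab.local/suite-api/api/resources" \
    -H "Authorization: vRealizeOpsToken <token>" \
    -H "Accept: application/json"
```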

Big thanks to @sunny_dua for doing the session while I was MIA – you are a legend, buddy!

So without further ado, here is the recording for this session:

#vROps Webinar 2016 – Announcing Part 10: A Deep Dive into vROps API
http://www.definit.co.uk/2016/10/vrops-webinar-2016-announcing-part-10-a-deep-dive-into-vrops-api/
Fri, 21 Oct 2016

The month has been extremely busy, but we still want to keep the momentum of the webinar series going as we reach the business end of the year. This time around we will talk about the vRealize Operations Manager API. The API is your friend if you are trying to automate things which you would normally do in the GUI. While the GUI is a favourite of most, the geeks prefer the API, since it helps them to programmatically initiate tasks and go out for a coffee. By the time they are back from the LONG coffee break, the work is done.

This session will help you understand the API framework of vROps and, as always, we will jump into the lab to run a couple of scenarios which we want to access through the API – and geek out!!

So without further ado, save the date in your calendars and join us for the next episode of the vRealize Operations Webinar Series 2016.

NOTE – Don’t forget to mark your calendars by saving the Date!! Feel free to forward the invite to anyone who might be interested. It’s open to all!!

Share &amp; Spread the Knowledge!!

There is no one cloud solution to rule them all…
http://www.definit.co.uk/2016/10/there-is-no-one-cloud-solution-to-rule-them-all/
Tue, 18 Oct 2016

I have been musing on this for a little while and decided to write this post/rant/opinion piece – feel free to post your thoughts and opinions in the comments.

OK so here it is, one thing I have observed for a good while now is how much noise there is about how -you- should be in the cloud (Public) and if you are not you’re already dead.

I call Bull****!

Public, Hybrid and Private clouds are solutions, not final destinations. Regardless of whether you are a customer, a partner, or any other third thing, you should be purely focused on what is best for you or (if you provide IT services) your customer. It is plainly obvious to me that in -all- of the customers I have visited, there has been no appetite or reason to mass-adopt any single solution/option, as it simply would not fit how they do business and function day to day. There will of course be exceptions to this rule, but they are indeed exceptions, not the norm.

As I have heard in the past, the reason you don’t hear lots about private cloud success stories is because they are private (the clue is in the name) – not every customer wants to shout about it, for their own reasons. And let’s not forget that the speed of business is in no way on a par with the speed of IT innovation.

It is reckless to blindly suggest to folk that only one type of cloud will meet all their needs. Public cloud is here to stay, but the sweeping statements about it being the final destination for everyone are nonsense. All three cloud types are here to stay; they give customers choice (which is critical).

From my point of view, it is my professional duty to provide my customers with informed and appropriate solutions for their needs. For many I know this is plainly obvious but with all the noise at present I felt I needed to at least write something about my thoughts on the matter.

Again, feel free to comment if you wish – I am keen to hear what you think.

Installing .NET Core, PowerShell and PowerCLI on macOS Sierra
http://www.definit.co.uk/2016/10/installing-net-core-powershell-and-powercli-on-macos-sierra/
Tue, 18 Oct 2016

So, this is something I’ve been waiting to write up for a while! PowerShell for macOS has been available for a while now, but what a lot of PowerCLI fans have been waiting for is the ability to use PowerCLI directly from their Mac.

Today, amidst all of the noise from VMworld, PowerCLI Core dropped as a Fling! That means that although it’s not ready for production use yet, it is ready to start testing – and I’m way more excited than I should be!

At the moment it’s a limited subset of PowerCLI functionality (just as PowerShell Core is a limited subset of PowerShell), but both PowerShell and PowerCLI are actively adding functionality at a really good rate – and VMware Flings have a pretty decent track record of graduating to production releases (H5 client, Migrate2VCSA, VSAN HCL, Embedded Host Client – the list goes on!)
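From memory, the rough install sequence at the time of the fling looked like the sketch below – treat the module path, zip name and module name as assumptions and check the fling’s own README, as they may well have changed:

```shell
# 1. Install PowerShell for macOS from Microsoft's GitHub releases (pkg installer)
# 2. Unzip the PowerCLI Core fling into the user module path
#    (path and zip name are assumptions from the fling's instructions)
mkdir -p ~/.local/share/powershell/Modules
unzip PowerCLI_Core.zip -d ~/.local/share/powershell/Modules

# 3. Launch PowerShell, load the core vSphere module and connect
#    (module name and vCenter hostname are placeholders)
powershell -c "Import-Module PowerCLI.ViCore; Connect-VIServer vcenter.lab.local"
```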

How do you keep your IT skills relevant without burn out?
http://www.definit.co.uk/2016/10/how-do-you-keep-your-it-skills-relevant-without-burn-out/

I was having a chat with my Dad recently about how those who work in IT keep their skills up to date. He is now retired but spent 25+ years in the IT business, so I always value his opinion. What occurred to me while we chewed the fat was the following.

<cliche quote>If there is one thing we can be sure of in IT, it is that change will happen and faster than we think.</cliche quote>

But seriously, how the heck can we keep up? I mean, seriously – every few months the next new “big thing” hits the ground in the shape of a new method, new tech or even a new tech start-up. Regardless of what it is, if you are even slightly serious about your IT career you have to pay attention and, at the very least, be aware of what it does, how it does it and what impact it has or will have.

Then you have the obvious learning on the job, finding those unique tips and tricks that only come from experience – and let’s not forget courses (if you are fortunate enough to be sent on one) and the endless certification treadmill.

As we all know, IT by nature requires you to stay relevant – that’s why most of us are in the job, right? We enjoy gadgets and technology; we get excited by the “latest thing”. But this in itself does not give you all you need to keep up.

Unless you are one of the very blessed few who can absorb information like a sponge, or perhaps have no other commitments in life (thus giving you time to learn all this stuff), what can we do?

Well, what occurred to me is that the speed of change in the average business is quite a bit behind the bleeding edge of IT itself. How many of you have seen large numbers of businesses suddenly adopt the latest tech overnight? There is always a lead time. Sure, some businesses are very “agile” and at least take a look at what’s hot and try it out, but in my experience so far the speed of change is slower than the propaganda and noise would have you believe.

If you have been in IT for at least a while, you should already have the ability to identify (or have a good idea) what new tech is going to be “big” and therefore earmark it as one to watch and one to learn – but you don’t necessarily need to know it all -now-.

Here is my little epiphany: there is always going to be a gap between new tech emerging and the time you really need to get to grips with, learn and understand it. Find the sweet spot that is just ahead of the speed of business adoption, without placing yourself so far ahead that you are chasing your tail with the number of new things to learn. This of course takes some experience and practice, but I think all it really requires is that you pay attention. Watch the trends; listen to trusted and respected peers and the customer. Just because it’s the new thing doesn’t mean they need it (yet).

There are also regional differences – the US is typically faster to adopt tech than the UK or EU, so that is something to factor in, and it comes with experience.

As a good friend told me, you don’t need to be an expert in these new things; you just need to have the basics down, and then when things get mahoosively popular you can concentrate on the advanced stuff.

Please feel free to comment on this post – I would be keen to hear what you think and how you keep up with IT without burning out.

#vROps Webinar Series – Part 9 – What’s New with vRealize Operations 6.3
http://www.definit.co.uk/2016/10/vrops-webinar-series-part-9-whats-new-with-vrealize-operations-6-3/
Sat, 01 Oct 2016

Here is the recording for episode 9 of the vRealize Operations Manager Webinar Series 2016. During this episode we discussed the new features and functionality of vRealize Operations Manager 6.3. With this release of the product, we can clearly see that VMware is working on enhancing the user experience and the user interface, with some great new features and UI changes.

I would encourage you to watch this session to understand the full potential of the product and how you can use the new features to meet your requirements and ease operations in your virtual/cloud environments.

#vROps Webinar 2016 – Announcing Part 9: What’s New with vRealize Operations Manager 6.3
http://www.definit.co.uk/2016/09/vrops-webinar-2016-announcing-part-9-whats-new-with-vrealize-operations-manager-6-3/
Mon, 19 Sep 2016

It’s the time of the month when we invite you to join the next episode of our year-long vROps Webinar Series. As we move towards winter, we would like to take a zoomed-out view of the vRealize Operations Manager solution with a What’s New episode. For the past 8 months we have gone pretty deep into most of the product features, and we believe it is time to review the product in its current form and shape. In my opinion, this could not be done better than by sharing the new features and functionality available in the latest version of vRealize Operations Manager.

While throughout the journey of this series we have discussed various versions of the product, this time around our focus will be on vRealize Operations 6.3.

There are a number of blog articles available which talk about the new features available in the product, however with this episode we will look into the new features in action through a Live Demo as always. We think it is important to see the new features in action to understand the use cases associated with those new features.

So without further ado, save the date in your calendars and join us for the next episode of our vRealize Operations Webinar Series 2016.

It seems that, yet again, VMware’s certificate tooling does not replace a key certificate, and this is the root cause of the problem. When I deployed the VCSA, I configured the PSC as a subordinate Certificate Authority and followed the documented procedure to replace the certificates. Clearly this one was missed!

I attended a workshop with Jad El Zein in Barcelona last year and it was one of the best sessions I attended, especially as it was just on the news of vRA7’s release. This workshop was this year’s equivalent, but because the session included a lot of people just getting started with vRA, much of the time was taken up explaining what vRA does, so there was no real time for the more in-depth details on the new features that are coming (some already announced and in Tech Preview). I certainly don’t begrudge the guys just starting the chance to hear the basics from Jad, but I think a more expert session would also be good.

vRealize Automation and NSX Design Experts Panel – #MGT9220

This design panel was hosted by Jad El Zein (I’m not stalking him, I promise) and had Cody De Arkland, Francesco Vigo and Grant Orchard on the panel. I went primarily to hear some more customer views, and maybe some of the challenges that they’re facing, to compare with my customers in the UK and the solutions that were proposed. The session didn’t disappoint, with Cody taking the lion’s share of the questions – as a customer he had a unique insight into the challenges and solutions he’d already faced. The other panelists were really knowledgeable too, so this was a really good session.

vSphere 6.x Host Resource Deep Dive – #INF8430

Having heard Frank Denneman speak before, I don’t miss a chance to listen to him when one comes up. This guy is on another level when it comes to understanding and explaining things in a really simple way, from the inner workings of flash drives to his current series on NUMA. Frank covered a lot of NUMA design considerations and the implications of things like DIMM placement. I’d not heard Niels Hagoort speak before, and it’s definitely a tough ask to share the stage with Frank, but he did a great job covering some advanced configuration for network cards and analysis of technologies like RSS and VXLAN offload. This was a really in-depth and technical session, one of the best I’ve attended.

VMware {Code}

I spent some time at the VMware {Code} booth and got to hang out with Kimberley Delgado, Luc Dekens, William Lam and Alan Renouf – it was a proper who’s who of VMware automation.

Hall crawl and vendor chat

The Solutions Exchange is full of the usual suspects, touting their toys and competitions in exchange for your valuable contact details. It has to be my least favourite part of VMworld, but I do like to take a look around the outside of the hall at the smaller and startup vendors to see what they’re offering and what’s new.

I’ve met the guys at Velostrata before, at the London VMUG, and I have to say that I was hugely impressed with their cloud mobility solution. I thought they were still in startup phase, but they’re now at v2.0 of the product. These guys are definitely ones to watch out for – I have frequently said that whoever cracks proper cloud mobility first is going to be big, and they have the best solution I’ve seen so far! Check them out at velostrata.com and request an evaluation.

It was great to catch up with the guys at Pluralsight, who I’ve mentioned quite a few times on my blog. I am lucky to have access to Pluralsight via the vExpert program – it’s one of the best perks because they have such an awesome and huge catalog of training. If you’re not familiar with them, go and take a look and sign up for a trial; if I didn’t already have access, it’s something I’d genuinely pay for myself.

#vROps Webinar 2016 – Part 8 : SDDC Operations with vROps Custom Dashboards
http://www.definit.co.uk/2016/08/vrops-webinar-2016-part-8-sddc-operations-with-vrops-custom-dashboards/
Wed, 31 Aug 2016 14:52:14 +0000

With today being the last day of the month, we wanted to make sure we share this recording with you folks out there and keep up the monthly cadence.

Iwan Rahabok did a great job on this one, talking about the concept of SDDC operations, and I would recommend this session to anyone who wants a jump-start into transforming their operations for SDDC. I quickly want to thank Iwan here for spending his valuable time with us and giving back to the community through this webinar series, blogs, books and more.

Once again I would like to thank my friend and partner in this project, Sunny, as without him this would not be possible.

I’m not going to lie, the start to the keynote was weird. There were drums and poetry. I will say no more. Another thing I’m not going to do (because many people do it far better than me – I’m looking at you, Julian) is give a blow-by-blow account of the keynotes; for me the headlines are that VMware announced VMware Cloud Foundation and Cross-Cloud Services.

VCAP6-DCV Deploy

After the keynote I headed over to the exam center to sit the VCAP6-DCV deploy. I covered my exam experience here, so I won’t repeat that!

vBrownbag

I sat through a few vBrownbag community talks after the exam, two in particular I enjoyed:

This was my first breakout session this year and it had some really cool demos using docker-machine to build containerised applications out in vCloud Air. The speaker Kevin Gorman was engaging and took the audience from the basics of containerisation and a comparison of Virtual Machines and Containers to some more advanced theory around how to successfully create next-gen microservice applications.

VMware Code Hackathon

My evening’s entertainment was taking part in the first VMware Code Hackathon! I was on a team with PowerShell and automation legend Luc Dekens and some other great guys, developing a new class for Luc’s PowerShell DSC module for vSphere. We actually made some great progress in the 3 hours but unfortunately our lab let us down and we weren’t able to show our work. You can, however, see the addition on Luc’s Github page for the project.

The hackathon has been one of the best learning experiences I’ve had at VMworld (and I’ve had a lot of really good ones!) – it was great to be part of a team that forms and creates something in just a few hours, as well as learning by osmosis from working with really smart people.

Oh, and there was beer and food too! Rumour has it that there’s a good chance this will happen at VMworld Barcelona too, so if you’re going, get involved!

#vROps 6.3 – SNMP filtering
http://www.definit.co.uk/2016/08/vrops-6-3-snmp-filtering/
Tue, 30 Aug 2016 09:42:07 +0000

One of the most common questions I get from customers about vROps is alert filtering, and until very recently the only way to filter was either by email alerting or via the REST plugin. However, nine out of ten customers wanted to use SNMP, and with vROps it was a case of switching on the fire-hose, pointing it at the receiver, and letting that endpoint or middleware handle the torrent of data. This was naturally not always well received, and in some cases rendered the option a dead end.

However I was delighted to see in 6.3 that you can now indeed filter SNMP as well.

So how do we do this?

From the home page, click Content.

Then, from the Content menu, select Notifications.

In the main window, click the green plus icon to create a new notification filter.

You can then create the filter you want and, when happy, click Save.

Every year I try to make use of the VMworld discount at the exam centre, and this year was no exception: I sat the VCAP6-DCV Deploy exam, and the results are still pending! Overall it’s a good exam; there were very few of the spelling and grammatical errors I’ve complained about in the past, performance was OK and the level of difficulty was good.

In terms of study, I have to confess that I didn’t really study for this exam – I’ve been using vSphere 6 since its first beta and I felt fairly confident in the blueprint content. There’s a tonne of stuff out there if you are studying; a quick Google search will find what you need. We’ll see if my confidence was misplaced when my results come through!

My exam strategy is not a new one: I complete all the questions I am 100% confident on, and write down the numbers of the questions that I’m not sure on, to circle back to if I have time at the end. I got through the first pass with about 45 minutes remaining and about 6 questions partially completed or not touched. At that point I went back through and used the documentation (which was pretty quick to access – top tip!) to answer the questions I felt I could complete. By the end of the allotted time I’d completed all but 2 questions; one was partially complete and the other untouched.

Overall, I think this exam is a step up from the VCAP5-DCA and is in line with the quality that seems to be coming through the pipeline now. I’ll update this page later when my results come in.

Update 15:59 29/08/2016

I just received my results and I’m pleased to say that I passed with a score of 377!

#VMworld 2016 – Day 0, #vmunderground opening acts, #beVCDX
http://www.definit.co.uk/2016/08/vmworld-2016-day-0-vmunderground-opening-acts-bevcdx/
Mon, 29 Aug 2016 17:18:23 +0000

I landed in Las Vegas about 9:30PM local time on Saturday evening, having not executed my plan to sleep on the flight! I had planned to sleep on the Toronto to Vegas leg, which would have meant I could head over to the Sips and Stogies pre-event; however, a very rowdy hen do a couple of rows away meant that I didn’t sleep at all, so I grabbed a taxi down to my hotel, the Excalibur, and got myself checked in.

After a relatively decent sleep I headed off to registration, which was a really slick process once I joined the right wifi network! I grabbed my VMworld 2016 bag, which contained the standard water bottle, t-shirt and half a rain forest of advertising.

I headed over to the vBrownbag opening acts, which included a vBrisket BBQ, and finally managed to catch up with my colleague and chief troublemaker, Gregg.

The vBrisket BBQ was tasty and offered the opportunity to network – there were some great conversations, a lot of them focussed around VCDX. It was great to meet Andy Smith, who passed VCDX at the same round of defences that I did. It was also great to chat to some of the guys who are currently aiming at VCDX and have recently had unsuccessful defences – Brett and Rebecca. They currently feel the same pain as I felt when I failed the first time – hopefully I was able to encourage them that we all felt like that, and passed second time!

From the New York New York casino we headed over to the Mandalay Bay for the Welcome Reception, which is the first opportunity to crawl the Solutions Exchange, with some beer and food. As is the way at VMworld, the chats and the people you meet are way more important than the sessions. It was great to meet Niran Even-chen, Agustin Malanco and Harold Simon on the VMware stand.

The final event of the day for me was the #vmunderground party, held back in the New York New York casino, which was another great networking event. By this point the time zone change was catching up with me and I was flagging! It was good to catch up with a few guys there including Julian Wood.

It’s amazing how much value you can get even before the event starts, just meeting and talking to some really smart people. I’m looking forward to tomorrow, the first of the general sessions and getting stuck in to the main event.

What’s new in #vROps 6.3 part 2
http://www.definit.co.uk/2016/08/whats-new-in-vrops-6-3-part-2/
Thu, 25 Aug 2016 21:16:52 +0000

In this post I will drill down to look at some of the enhancements, improvements and new additions one by one.

New Home Dashboard – as you can see, this is a very useful and helpful way of presenting a high-level set of data about your environment, along with many ways to interact with and interrogate the data you are being presented with. Among other things you can filter on Health/Risk/Efficiency, alter the scope, and see alerts relative to the objects you are interested in.

Workload Balance & DRS – To further enhance the integration with the workload placement engine, this new dashboard lets you see your clusters and their respective hosts, and also allows you to set the DRS level per cluster.

Workload Balance – This has been enhanced from the previous version, allowing you to filter by CPU-only or memory-only demand, and adding the ability to re-balance with vROps actions.

Monitoring goals – This has been asked for by many users and customers; it allows you to recreate a default monitoring policy by answering a questionnaire.

vROps Self-Service Dashboards – Another request from many customers and users alike, who wanted to be able to effectively monitor vROps itself. With this set of dashboards, your wish has been granted.

What’s new in #vROps 6.3
http://www.definit.co.uk/2016/08/whats-new-in-vrops-6-3/
Tue, 23 Aug 2016 16:40:32 +0000

The latest release of vROps has some cool new things; at a very high level, below are the key stand-out improvements and changes.

Many new hardening policies (6.0, ESXi, vCenter, VMs, Network) with an increased number of checks and reporting

New SDDC Health Dashboards (installed as a MP)

New self-service dashboards (vROps)

Improved UI/UX Visuals

Heatmap improvements

Improved dashboards (free widget layout)

New Widget (recommended actions)

Improved Log Insight integration (vLI 3.6)

#vROps Webinar 2016 – Announcing Part 8 : SDDC Operations with vROps Custom Dashboards
http://www.definit.co.uk/2016/08/vrops-webinar-2016-announcing-part-8-sddc-operations-with-vrops-custom-dashboards/
Wed, 17 Aug 2016 08:30:02 +0000

This time around Iwan Rahabok will lead the next session of the vROps Webinar Series, while Sunny and I will support him to deliver some awesome content which Iwan has developed over the past few months.

Yes, this time around we will move our focus from vROps as a product and its features to the concept of running your SDDC operations with vRealize Operations dashboards. Just to clarify, this is not a session where we will teach you to create dashboards; rather, we will share how a set of customised dashboards can help any organisation’s IT get insight into storage, network and compute within the SDDC. While vROps is primarily a performance management and capacity planning tool, we will also take you through the other important aspects, such as availability and configuration.

The session will start by discussing the concept. Then you will see the concept turn into reality with the custom dashboards we are going to showcase. Later, we will also share details on how to get all those dashboards into your environment in a few easy steps. Hopefully this will either get you started on your vROps journey, or accelerate it for those who have already started and are now looking to maximise the investment they have made in vROps.

So join us for this edition and learn more about how to operationalize the Software-Defined Datacenter.

#vROps Webinar 2016 – Part 7 : Working with Alerts & Symptoms
http://www.definit.co.uk/2016/07/vrops-webinar-2016-part-7-working-with-alerts-symptoms/
Sat, 30 Jul 2016 09:03:50 +0000

Time to release the recording for the latest part of the vROps Webinar Series. We completed the 7th session of the series, where we spoke about vRealize Operations Manager alerts and symptoms.

Alerts, as we all know, will always remain the heart and soul of how operations teams run their data centers, whether old school or modern software-defined.

In all cases you need alerts, and more importantly you need meaningful and actionable alerts. In this part of the series, Simon and I concentrated on making you aware of the alert constructs in vROps and, as usual, shared experiences of how we help customers leverage the strong feature set of vROps to customize alerts and their related symptoms, recommendations and actions, to drastically reduce the mean time to resolution of issues.

The journey so far has been fantastic, and we will continue to add more content as we progress through the rest of the year.

Stay tuned for more and enjoy this recording!!!

Adding an AWS endpoint to vRealize Automation 7
http://www.definit.co.uk/2016/07/adding-an-amazon-web-services-endpoint-to-vrealize-automation-7/
Thu, 28 Jul 2016 09:46:43 +0000

Although it’s fairly limited, you can add AWS as an endpoint for vRealize Automation 7 and consume EC2 AMIs as part of a blueprint. You can even add the deployed instances to an existing Elastic Load Balancer at deploy time. In this post I’ll run through the basics to get up and running and deploy your first highly available (multiple Availability Zone, load balanced) blueprint.

Preparing AWS for use as a vRA endpoint

There are some obvious pre-requisites for attaching an AWS endpoint – for example, you need to have a VPC configured. There are plenty of resources out there for creating a VPC, so I won’t extend this post by replicating them. This is what I’m using:

A VPC with a network CIDR of 10.0.0.0/16

Subnet “Pub-10.0.1.0/24” in “eu-west-1a”

Subnet “Pub-10.0.0.0/24” in “eu-west-1b”

EC2

Elastic Load Balancer enabled and pointing to both subnets on port 80
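For reference, the console prerequisites above can also be scripted with the AWS CLI. This is a rough sketch rather than a tested recipe – it assumes the CLI is already configured with credentials, and the resource IDs (vpc-…, subnet-…) and the load balancer name are placeholders you would substitute with the values each command returns:

```shell
# Create the VPC and the two public subnets in different AZs
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1234abcd --cidr-block 10.0.1.0/24 --availability-zone eu-west-1a
aws ec2 create-subnet --vpc-id vpc-1234abcd --cidr-block 10.0.0.0/24 --availability-zone eu-west-1b

# Classic ELB listening on port 80, spanning both subnets
aws elb create-load-balancer --load-balancer-name web-elb \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstancePort=80 \
  --subnets subnet-aaaa1111 subnet-bbbb2222
```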

AWS endpoints are not configured using a user name and password; instead, you need to create a user within AWS’s Identity & Access Management console. You can find it on your AWS console under Security and Identity:

Create a group and assign policy

AWS best practice is to assign permissions at the group level rather than at the user level, so let’s create a group for vRA. Select the Groups page and then “Create New Group”. I’ve called my group “DefinIT-Lab”; then click Next Step.

If you look at the AWS User Roles and Credentials Required in the vRA7 documentation, you’ll see that we need to assign the Power User role to our user. To do this we can filter the list of policies and attach the PowerUserAccess policy to the new group.
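For context, PowerUserAccess is essentially an “allow everything except IAM administration” policy. Its policy document looked roughly like the following at the time (paraphrased from memory, so treat it as illustrative and check the actual policy in the IAM console):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "NotAction": "iam:*",
      "Resource": "*"
    }
  ]
}
```

This is why the vRA endpoint can enumerate and deploy EC2 resources under this role, but can’t itself manage IAM users.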

Review the group name and attached policies, then click Create Group.

Create a user and an access key for vRA

Select the Users page and then “Create New Users”. You can bulk add users, but for my purposes I need just the one for my vRA instance. I’m creating a user called DefinIT-vRA. Ensure the “Generate an access key for each user” option is ticked.

Once successfully created, the user’s credentials are also available – and before you try to access my AWS account, the user in this post has been deleted! Be sure to make a note of the credentials – once you finish the create wizard you won’t see the Secret Access Key again. You can also download the credentials as a CSV file if needed.
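If you do download the CSV, the two values vRA needs can be pulled out programmatically. A minimal Python sketch – the column headers and sample values are illustrative (the keys below are AWS’s well-known documentation examples), so adjust to match the actual file:

```python
import csv
import io

# Illustrative credentials.csv content -- the headers are an assumption based
# on the IAM console export; the key values are AWS's documentation examples.
sample = (
    "User Name,Access Key Id,Secret Access Key\n"
    "DefinIT-vRA,AKIAIOSFODNN7EXAMPLE,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\n"
)

def vra_credentials(csv_text):
    """Map the IAM access key pair onto the fields vRA's credential form
    expects: Access Key ID -> User Name, Secret Access Key -> Password."""
    row = next(csv.DictReader(io.StringIO(csv_text)))
    return {"username": row["Access Key Id"], "password": row["Secret Access Key"]}

creds = vra_credentials(sample)
print(creds["username"])  # -> AKIAIOSFODNN7EXAMPLE
```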

At this point the user has no permissions, so we need to assign a group and some permissions. Fortunately, we created a group for that purpose just now! Select the newly created user and click “Add User to Groups” under the Groups tab.

Configuring vRA for AWS

Creating an AWS Endpoint in vRA

Firstly, let’s set up some credentials based on the user we created earlier. Log into vRA with a user that has Infrastructure Admin permissions and select Infrastructure > Endpoints > Credentials. Click “New” to create a new credential, then enter a Name and Description that suit you. Enter the Access Key ID generated for your user as the User Name, and the Secret Access Key as the Password.

Next select Endpoints and click New > Cloud > Amazon EC2

Add a Name, Description and select the Credentials we just created.

vRA will now kick off a data collection against AWS using your credentials. To check on its progress you can select Data Collection from the contextual menu.

Create a Fabric Group for AWS

Next, create a new fabric group for the AWS regions. You need to be logged in with a user that has Fabric Administrator rights. Select Infrastructure > Endpoints > Fabric Groups > New and enter a Name, Description and select the Fabric Administrators. I want to make use of the EU regions, so I called mine “AWS Free Tier EU”, and I used my existing AD group “vRA Fabric Admins”. Next select the regions you want to be able to deploy to – bearing in mind you will need a VPC in each to be able to deploy to them.

Create a Reservation for AWS

The General tab is configured as you would any other Reservation in vRA. Create a Name, assign to a Tenant, Business Group and Reservation Policy (optional), and assign a Priority.

On the Resources tab it gets a little more in depth. I have a VPC configured in eu-west-1, so I’m going to create a reservation there – I select the AWS Free Tier-eu-west-1 compute resource. I’m going to set the Machine Quota to 10, for my own peace of mind. Next specify how you’d like to handle the key pairs for the deployed VMs – you can select:

Not specified

Auto-Generated per business group

Auto-Generated per machine

Specific key pair

They’re pretty self-explanatory, but I’m going to set it to use my existing key pair.

<SNIP> There’s a bit more to this post that I’ll update soon – I don’t normally publish half-finished posts, but this one is for Steven Viljoen, who needed some help on Twitter!

Setting up an #AWS endpoint in #vRA7 shouldn’t be this difficult. Add IAM keys to credentials and boom…Unauthorised! Any hints welcome!

#vROps Webinar 2016 – Announcing Part 7 : Working with Alerts & Symptoms
http://www.definit.co.uk/2016/07/vrops-webinar-2016-announcing-part-7-working-with-alerts-symptoms/
Thu, 21 Jul 2016 17:54:58 +0000

Time to announce the next part of the year-long webinar series on vRealize Operations Manager. With the last part of the series, we started focusing on content within vROps. We will continue the trend and talk about a major function of vROps: alerts. Alerting is the most used, and most abused, part of vROps, and with this session we want to give you some insight into the entire life-cycle of the alerting function.

We will touch upon defining alerts, making sure they are actionable, understanding how they are managed within vROps, and the components which make up an alert.

Wednesday

08:30-09:30 – Advanced NSX Troubleshooting: Tips & Tricks for Experienced Users [NET8680]
10:00-11:00 – How I Learned to Stop Worrying and Love the vRealize Automation API [MGT8332]
11:30-12:30 – Save Time With Everything and Anything as a Service (XXXAAS) using vRealize Automation (vRA) [MGT8085]
13:00-14:00 – PowerNSX and PyNSXv: Using PowerShell and Python for Automation and Management of VMware NSX for vSphere [NET7514]
14:30-15:30 – Evolving the vSphere API for the Modern Era [INF8255]
(Alt) Multisite Networking and Security with Cross-vCenter NSX: Part 1 [NET7854R]
16:00-17:00 – Multisite Networking and Security with Cross-vCenter NSX: Part 2 [NET7861R]

Heading to Las Vegas! #VMworld 2016
http://www.definit.co.uk/2016/07/heading-to-vmworld-us-2016/
Tue, 19 Jul 2016 10:24:55 +0000

I’ve been very fortunate to be able to go to VMworld Europe for the past 3 years, mainly thanks to the vExpert program and the availability of the blogger pass. This year I knew that I wouldn’t be able to get to Barcelona because of work, so I thought I’d apply for a VMworld US blogger pass – I’m very excited to have been given one! As with previous years, I’ll be blogging at least once a day with my thoughts and any useful info I’ve gleaned – I definitely don’t want to take the pass for granted!

The blogger pass is awesome, but I still need to fund my flights, hotel, and some spending money while there. I wanted to thank my blog sponsors VMTurbo and Veeam for their continued support, which has made it possible to pay for these flights without it costing my family. On that note, I still have one more slot for a sponsor, so if you represent someone who’d like to advertise on DefinIT.co.uk, get in touch!

I will be flying to Las Vegas (via Toronto?! cheap flights) on Saturday 27th, arriving late evening, and I’m staying at the Excalibur hotel. I am looking forward to comparing the US and EU events – in sheer numbers the US event is far bigger, and it will be fantastic to be part of such a huge gathering of people. This will be my first visit to the US, so it will be interesting seeing Las Vegas – I’m fully aware that Vegas is a crazy place and not representative of the US, but it will be fun seeing the crazy!

I’m not sure if I’ll take advantage of the cheap certifications this year – in previous years I’ve taken VCIX-NV, VCP-NV and VCAP5-DCD, all with great results, so it’s something I’d consider, although at this point I’m not 100% sure what exam I’d take (I’ve done a lot of betas recently!)

Obviously there are some pretty awesome sessions to be attended (I’ve not picked yet as the session planner hasn’t been published, but I’ll post here when I have) – I’ll be focussing on my normal bag:

vRealize Automation

NSX

VSAN

PowerCLI, APIs and automation

Plus I’ll also be looking to pick up any sessions on emerging technologies like Cloud Native Applications, and some of the technology previews for upcoming feature releases. Hopefully there will be some sessions like last year’s excellent vExpert vRA7 briefing.

As it’s my first VMworld US, I am open to suggestions from any veterans on what are the “must-do” activities around the conference, so far on my list…

I quite like a bit of Fall Out Boy, so the VMworld Party should be fun, I have to be honest that I’ve not been so much a fan of the last few acts in Barcelona!

I’m assuming there will be a VCDX reception, it’s always great to meet up with fellow VCDXs

Likewise I’m hoping there will be a vExpert event, though given the number of vExperts now it might be a big event!

As and when the after hours events come out they’re added to running-system.com here.

Lastly, I’m really looking forward to meeting some of my fellow vExperts, VCDXs and members of the community while I’m in the US. While there are a few of my European friends and colleagues attending, I’m not naturally the most outgoing of people, so please come and say hi if you see me!

#vROps Webinar 2016 – Part 6 : Understanding vROps SuperMetric & Views
http://www.definit.co.uk/2016/06/vrops-webinar-2016-part-6-understanding-vrops-supermetric-views/
Wed, 29 Jun 2016 22:42:40 +0000

Sunny just completed the editing and upload of the latest episode of the vROps Webinar Series 2016. With this we have completed six sessions this year. I am glad that we are able to dish out sessions every month, and the interest level in these sessions just keeps on growing. I’m looking forward to delivering more content for the rest of the year.

Session Details:- In this episode, we talk about content within vROps. To begin with, we speak about Views and SuperMetrics. With this you will get a lot of insight into how you can use these features to build some useful reports and dashboards within your vROps environments.

Once again I would like to thank my friend and partner in this project, Sunny, as without him this would not be possible.

So without further ado, here is the recording for this session:

#vROps Webinar 2016 – Announcing Part 6 : Understanding vROps SuperMetric & Views
http://www.definit.co.uk/2016/06/vrops-webinar-2016-announcing-part-6-understanding-vrops-supermetric-views/
Tue, 21 Jun 2016 09:31:30 +0000

Time to announce the next part of the year-long webinar series on vRealize Operations Manager. This time around, Sunny and I will take you into the world of content within vROps. I will be honest in saying that this is an ocean in itself and we might not be able to touch upon every aspect of developing content within vROps, so we plan to start by giving you an overview of the various options available.

Among all the options available, we will talk in detail about SuperMetrics and Views, and will continue with other options such as Dashboards and Reports in the upcoming sessions. We want to ensure that we go through all the options to help you develop useful content within your environments.

NOTE – Don’t forget to mark your calendars by saving the Date!! Feel free to forward the invite to anyone who might be interested. It’s open to all!!

My Top Ten albums of all time
http://www.definit.co.uk/2016/06/my-top-ten-albums-of-all-time/
Sun, 19 Jun 2016 20:41:59 +0000

So recently a few folk whom I respect a great deal posted their top 10 albums of all time. Initially I wasn’t sure, but having gone through the exercise myself I found it really enjoyable. Sure, it’s quite self-indulgent, but seeing other people’s lists gives you a small glimpse into their lives; after all, musical taste is a hugely personal thing.

I approached my top 10 by considering which bands and albums had the biggest impact on me at a given time in my life and that I still enjoy today. It was kinda cool to go through my own playlists and see what “oldies” still crop up in the playlists I listen to on a regular basis.

This top 10 list comprises only albums from bands or solo artists; I did not include film soundtracks (of which I am also a huge fan).

10. Dire Straits – Alchemy – Live

When the Brothers in Arms album landed in the 80s I was blown away; I loved every track. But then a friend of mine way back then pointed me in the direction of this live album. In truth it was hard to choose between the two albums, but for me this one held more memories.

9. Levellers – Levelling The Land

In the early to mid 90s I was a crusty wannabe – sure, I never quite dived all the way in, but with a few very good friends back then we had an absolute blast. Every song on this album holds good memories of simply being very carefree and enjoying simple things. It goes without saying the Levellers had a great sound, mixing folk and rock in a fantastic and very catchy manner while also being very punchy with their lyrics.

8. Def Leppard – Hysteria

This album introduced me to rock music (back in 1986), and what an introduction. I cannot think of a single track on this album that I don’t love. I think it was the first album I purchased (on tape) and I practically wore it out. I still think it’s the best album they have ever produced.

7. Metallica – And Justice for All

Now this album introduced me to metal. I can remember begging a friend to lend me this album after hearing it for the first time, then walking home in the evenings with the driving riffs on my Walkman (far too loud, IIRC). The huge sound of the guitars on this album really struck me; ever since, any track from any artist that has something similar will always pique my interest.

6. Sting – Ten Summoner’s Tales

I can easily recall first hearing the track “Shape of My Heart” in Wembley Drum Center (back then I was an aspiring drummer) and it stuck in my head for days. I was quick to get a copy of the album and was then introduced to the legend that is Vinnie Colaiuta (on the drums). The varying time signatures, laid down in an effortless fashion to fantastically written songs, only pushed me to enjoy and learn more on the drums.

5. Foo Fighters – In Your Honor

I really like Nirvana, but I was in crusty land when they were big, so in terms of memories the Foos hold a lot more for me. This is the album of theirs I have liked the most so far, and it was a hard choice given the amount of quality they continue to produce.

4. Porcupine Tree – Deadwing

A good friend of mine introduced me to these guys, and everything about them is creative and hugely talented. I can remember many times simply chilling on my daily commute to work listening to them, this album being my personal favourite. I really liked how they followed their own path musically and in their arrangements while still being able to write immensely catchy tracks. Also, the drumming (Gavin Harrison) is nothing short of awesome.

3. Guns N’ Roses – Use Your Illusion II

So many of the tracks GnR wrote in general hold great memories for me but this album out of all of them holds the most. I was into them right before I went into my “crusty” phase. I can remember listening to this album (as well as others) with my mates, loving the raw sound and being amazed by just how good Slash was on the guitar.

2. Pink Floyd – The Division Bell

I am ashamed to say I had not really listened to Pink Floyd until this album landed in 1994 – that was when I started to pay attention to what they had to offer. While I know many will argue that some of their older albums may be a better choice, for me it was this album that introduced me to them, so for that reason alone I am most fond of it over the others. Listening to these guys in my car, as loud as I liked, was just a joy.

1. Joe Bonamassa – Live at the Royal Albert Hall

What to say? I was introduced to Mister Bonamassa in 2010 (late to the party, I know) and I was utterly blown away. This album/DVD is, for me, practically perfect: he has his own blues sound, clearly enjoys every second that he plays, and has fantastic vocals to boot. The track I have linked is nothing short of a religious experience. I would argue there are few harder working musicians/artists out there today, producing album after album of great quality and gigging at a rate I personally think few can match.

So there we have it: my current Top 10 favourite albums. Feel free to comment.

]]>http://www.definit.co.uk/2016/06/my-top-ten-albums-of-all-time/feed/07524#vROps Webinar 2016 : Part 5 : Design and Deployment considerationshttp://www.definit.co.uk/2016/05/vrops-webinar-2016-part-5-design-and-deployment-considerations/
http://www.definit.co.uk/2016/05/vrops-webinar-2016-part-5-design-and-deployment-considerations/#respondFri, 27 May 2016 13:25:03 +0000http://www.definit.co.uk/?p=7493As promised, I am posting the recording for the 5th Session of vROps Webinar Series 2016. Both Sunny and I successfully delivered the session on Design and Deployment considerations.

Session Details:- In this instalment of the series, we discussed the steps and thought processes that should be used before and during the design and deployment of vRealize Operations Manager. During the session, among other things, we covered planning, core components, correct sizing, HA, clustering, DR and future growth.

Once again I would like to thank my friend and partner in this project Sunny as without him this would not be possible.

So without further ado, here is the recording for this session:

Note : It is recommended that you watch the video in HD quality for a great experience.

]]>http://www.definit.co.uk/2016/05/vrops-webinar-2016-part-5-design-and-deployment-considerations/feed/07493#vROps Webinar 2016 – Announcing Part 5 : Design & Deployment Considerationshttp://www.definit.co.uk/2016/05/vrops-webinar-2016-announcing-part-5-design-deployment-considerations/
http://www.definit.co.uk/2016/05/vrops-webinar-2016-announcing-part-5-design-deployment-considerations/#respondFri, 20 May 2016 06:52:12 +0000http://www.definit.co.uk/?p=7484Time to announce the next part of the year-long webinar series on vRealize Operations Manager. This time around, Sunny and I thought about discussing the architecture of vROps. To some, it might sound strange, as for smaller deployments you might not have to worry much about sizing and architecture, since it is pretty simple to install and configure a small or a medium node for a small shop. However, as your monitoring needs grow and you start adding solutions for monitoring data sources beyond vSphere, you will need to think about scaling up or scaling out. As your monitoring environment weaves into your incident ticketing system, you will start to see the need for HA of vROps, and since you have a DR strategy for your workloads, you will start thinking about DR for your operations tools as well.

We have seen these questions and situations come up in many of our engagements and hence we thought that we should share some of our experience around this area. Below are the Webex Details:

NOTE – Don’t forget to mark your calendars by saving the calendar invite!! Feel free to forward the invite to anyone who might be interested. It’s open to all!!

]]>http://www.definit.co.uk/2016/05/vrops-webinar-2016-announcing-part-5-design-deployment-considerations/feed/07484Log Insight services not starting after cluster IP changeshttp://www.definit.co.uk/2016/05/log-insight-services-not-starting-after-cluster-ip-changes/
http://www.definit.co.uk/2016/05/log-insight-services-not-starting-after-cluster-ip-changes/#respondMon, 16 May 2016 15:26:29 +0000http://www.definit.co.uk/?p=7480I ran into this problem at a customer site where the IP addresses of all the Log Insight nodes had been changed due to some IP address conflicts. I think the problem occurred because the IP addresses were all changed and the VMs shut down, without giving the application time to update the node IPs.

The symptoms:

The web interface was down, although a netstat -ano | grep -i “443” showed the service was listening

service loginsight status|restart|stop|start hung and then timed out on the Master node

Running /usr/lib/loginsight/application/lib/apache-cassandra-*/bin/nodetool status showed the two data nodes down (DN), still listed under their old IP addresses

All of the nodes were up with their new IP addresses, however Cassandra on the Master node was still looking for the old IP addresses for the Worker nodes. The Worker nodes were in a similar state, knowing their own new IP addresses but not being able to update the Master node because they didn’t have the Master node’s new IP address.

To fix the problem I edited the latest Log Insight configuration file on each of the nodes and updated the IP addresses for Workers under the daemon hosts sections, which were still on the old IP address. As you can see below, the Master node is defined by FQDN and not IP address, which is why the service starts on the Master, but then hangs waiting for a second node.

Once the Worker nodes’ configuration was updated with the correct IPs I was able to start the loginsight service (service loginsight start).
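To illustrate the kind of edit involved, here is a minimal Python sketch. Note the real Log Insight configuration file format differs (and varies between versions), and the IP addresses and the daemon-hosts snippet below are hypothetical stand-ins:

```python
# Hypothetical old -> new worker IP mapping; substitute your real addresses.
OLD_TO_NEW = {
    "10.0.0.21": "10.0.1.21",   # Worker 1
    "10.0.0.22": "10.0.1.22",   # Worker 2
}

def update_worker_ips(config_text: str) -> str:
    """Replace every stale worker IP in the config text with its new address."""
    for old_ip, new_ip in OLD_TO_NEW.items():
        config_text = config_text.replace(old_ip, new_ip)
    return config_text

# Stand-in for the daemon hosts entries in the Log Insight config file.
sample = '<daemon host="10.0.0.21" port="16520"/>\n<daemon host="10.0.0.22" port="16520"/>'
print(update_worker_ips(sample))
```

The same substitution has to be made on every node whose config still references the old addresses, before restarting the loginsight service.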

I then checked the Master node’s Cassandra status using the nodetool to ensure that the IP address for the Worker node was updated to the new IP.

At this point the Log Insight web interface was available, but the Admin/Cluster page showed an error, so I restarted the Master node’s Log Insight service, which resolved the issue.

]]>http://www.definit.co.uk/2016/05/log-insight-services-not-starting-after-cluster-ip-changes/feed/07480What if I want full metrics? in #vROpshttp://www.definit.co.uk/2016/05/what-if-i-want-full-metrics-in-vrops/
http://www.definit.co.uk/2016/05/what-if-i-want-full-metrics-in-vrops/#respondThu, 05 May 2016 16:06:41 +0000http://www.definit.co.uk/?p=7446If you have used vCOps (the previous version of vROps) you will likely remember the option in the admin panel where you could change the number of metrics being collected from “balanced” to “full”.

As many found, this option drop down was not present in vROps and there appeared to be no replacement.

Here is the good news: the ability to collect those metrics has not been removed, rather it’s just a little more hidden (by design).

So how do we enable “full” metrics?

The answer is policies! Before you start, a quick word of caution: if you plan to modify the default policy, don’t; instead, clone it and then carry out your changes on the clone. Better still, create a new policy and apply it to a custom group containing a few objects that you are keen to have full metrics collected against.

A good example I came across recently was the lack of visibility of vSphere host hardware objects. When you check in vSphere under Hardware Status there are all manner of useful bits of information that you may wish to monitor and see in vROps, and OOTB they are not enabled.

Click edit on the policy you wish to modify, then click option “5. Collect Metrics and Properties” in the left-hand menu.

You now have a few options on how to proceed; remember, in this instance I am interested in getting host hardware information enabled.

You will now see, listed on one long page, all of the currently disabled metrics that can be captured from a host system.

To narrow things down, type “fan” in the search box; you will then see two metrics listed, which you can enable as shown in the image. Once done, click save in the bottom right corner and voila, you have a policy that will now give you the fan metrics for a host system.

You can elect to enable a lot more but remember this will put greater pressure on your vROps cluster so choose wisely!

This time around I could not attend, at short notice, due to circumstances beyond my control, but as you will see the session was awesome!

Session Details:- With this instalment of the series, we have a new speaker, Iwan Rahabok, who will entertain and educate us. He is no stranger to the world of vRealize Operations Manager and has written a couple of books on vROps as well.

During this session we will help you understand a few basic concepts of using the right counters to monitor performance and capacity in your infrastructure. We will deep dive into the concept of the consumer and the provider layer and help you with solving issues which you might face as a provider of infrastructure to your business.

I also want to personally thank Sunny & Iwan for their continued support of this series.

Here is the recording for this session:

Note : It is recommended that you watch the video in HD quality for a great experience.

]]>http://www.definit.co.uk/2016/04/vrops-webinar-2016-part-4-contention-based-performance-capacity-management/feed/07442Deploying F5 BIG-IP Virtual Edition (VE) in AWShttp://www.definit.co.uk/2016/04/deploying-f5-big-ip-virtual-edition-ve-in-aws/
http://www.definit.co.uk/2016/04/deploying-f5-big-ip-virtual-edition-ve-in-aws/#respondWed, 20 Apr 2016 10:00:46 +0000http://www.definit.co.uk/?p=7307Recently I was asked to develop some vRealize Orchestrator workflows against the F5 BIG-IP iControl REST API, but I was not able to test freely against a production appliance. After a lot of attempts to get in contact with F5 for a 90-day trial of the full version, or to purchase a lab license, I came up empty-handed. The free version you can download from F5’s website is version 11.3, which does not feature the iControl REST API introduced in 11.4.

What I did notice while on the F5 site was the links to the AWS Marketplace where you can rent F5 BIG-IP Virtual Editions by the hour – $0.83/hr + AWS usage fees.

If you happen to have a license for BIG-IP there’s also a Bring Your Own License version, which would be handy. You can view all the options on F5 Network’s AWS Marketplace page.

Here’s how you set up your F5-in-AWS!

Prepare a VPC

First of all, we need a VPC with three networks configured – Management, Internal and External. Log into the AWS Console and select the VPC Dashboard. You can either configure a new VPC, or configure the networks required in an existing VPC. For this post I’m going to use the “Start VPC Wizard” to create a new one.

Select the “VPC with a Single Public Subnet” and click Select. Next we configure the first subnet – Management. AWS veterans will probably want to skim this bit, but for the uninitiated, here’s what we’re doing:

IP CIDR block – this is a large IP block we can use within our VPC and sub-divide into smaller subnets. I’ve kept the default class B subnet (/16).

VPC name – the name of the VPC we are creating

Public subnet – a subnet of the IP CIDR block assigned to the VPC, in this case a /24 subnet.

Availability Zone – AWS regions are broken up into Availability Zones. We don’t need to worry about that too much here, except that all of the components must be assigned to the same Availability Zone – I’ve selected “eu-west-1a”.

Subnet Name – the name of the Subnet we are creating – “Management”.

The rest can stay as default

Hit “Create VPC” to finish.
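For the uninitiated, the subnet carving the wizard does can be sketched with Python’s standard ipaddress module. The 10.0.x.0/24 ranges are illustrative defaults from my run; the wizard may suggest different ones for you:

```python
import ipaddress

# A /16 CIDR block for the VPC, carved into /24 subnets for the three
# networks we need: Management, External and Internal.
vpc = ipaddress.ip_network("10.0.0.0/16")
management, external, internal = list(vpc.subnets(new_prefix=24))[:3]

print(management)  # 10.0.0.0/24 - the 10.0.0.x management addresses used later
print(external)    # 10.0.1.0/24
print(internal)    # 10.0.2.0/24
```

Each /24 gives 251 usable addresses in AWS (five are reserved per subnet), which is plenty for this lab setup.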

Configuring Networks

Select the “Subnets” page from the VPC Dashboard – you’ll see the Management network we just created. Hit the “Create Subnet” button and create two more subnets (don’t forget to select the same Availability Zone!):

External

Internal

Now you should have 3 networks configured:

Next, we need to check the configuration of our Management network to use the Internet Gateway that is created by default with our VPC. Select the “Route Tables” page from the VPC Dashboard, then select the route table that has “1 Subnet” in the “Explicitly Associated” column. Select the “Subnet Associations” tab and ensure the Management subnet is associated:

Configure Security Groups

Select the “Security Groups” page from the VPC Dashboard – you should see a single default group has been created. Click “Create Security Group” and create a new security group:

allow-ssh-https

Select the new group, then select the “Inbound Rules” tab and configure to allow SSH and HTTPS:

Configuring EC2

The final pre-requisite is to create a Key Pair to be used for authentication with the F5 instance. Open the EC2 dashboard and select the “Key Pairs” page. You can import your own Key Pair if preferred, but for ease of use we’ll hit “Create Key Pair”. Name the Key Pair something sensible, and save the resulting .pem file somewhere safe – losing it will mean losing access to your VM.

Launching F5 BIG-IP Virtual Edition

I elected to use the “F5 BIG-IP Virtual Edition 200Mbps – Good” option: I signed in to my AWS account, selected the option and clicked continue. From there you can “Launch with EC2 Console”. Launch it in the same Region as your newly created/configured VPC.

Next, we choose an Instance Type. From the previous page, we can see the minimum supported EC2 Instance Type is an m3.large ($0.146/hr), but there are plenty of options to go bigger:

Ensure that the Network and Subnet selected are the correct VPC and the Management network we configured earlier – the rest of the settings can remain as they are, except the network interfaces.

Add a second network interface and assign it to the External network. We can’t add a third interface just yet – we’ll do that later.

Hit “Next: Add Storage” and configure the storage for the instance. I picked SSD because I wanted to have some performance, but to be honest I think Magnetic would be fine in this instance.

Click “Next: Tag instance” and configure any tags you’d like – I left mine blank, then click “Next: Configure Security Group”. Select an existing security group and then select the group we created earlier “allow-ssh-https”. Then click Review and launch:

Review the settings again before finally hitting “Launch” – you’ll be prompted for your key pair – select the key pair created earlier and check the box to say you’ve got it!

Configure External Access

While the instance is starting we can add the final network card by selecting the Network Interfaces tab from the EC2 console page, then using the “Create Network Interface” button. Name the interface something relevant like “Internal”, select the Internal network subnet we created earlier, let the IP assign automatically and select the default VPC security group for the interface.

Once it’s created we need to attach it to the F5 instance – select the network and use the actions menu to attach the network interface to the F5 instance:

There should now be 3 networks assigned to the instance:

To make our management interface accessible to the internet we need to assign an elastic IP address to the interface. Select the Elastic IPs page from the EC2 console and click “Allocate New Address”, and confirm. Select the new address and from the Actions menu select “Associate Address”:

Select the F5 instance and associate it with the Management private IP address (if you followed my subnets, it’s the 10.0.0.x address) and click associate:

And that’s it – you can access the BIG-IP management website and begin to configure the instance:

You can also SSH to the instance using the private key file we created earlier:

ssh -i "KeyPairName.pem" admin@<IP or FQDN>

]]>http://www.definit.co.uk/2016/04/deploying-f5-big-ip-virtual-edition-ve-in-aws/feed/07307Adding a vCloud Air (PAYG/Gen2) instance to vRealize Orchestrator as a vCloud Director hosthttp://www.definit.co.uk/2016/04/adding-a-vcloud-air-payggen2-instance-to-vrealize-orchestrator-as-a-vcloud-director-host/
http://www.definit.co.uk/2016/04/adding-a-vcloud-air-payggen2-instance-to-vrealize-orchestrator-as-a-vcloud-director-host/#respondTue, 19 Apr 2016 13:26:56 +0000http://www.definit.co.uk/?p=7401Big thanks to Jose Luis Gomez for this solution, his response to my tweet was spot on and invaluable!

I’ve been trying to configure vCloud Air as a vCloud Director host in vRealize Orchestrator in order to create some custom resource actions for Day 2 operations in vRealize Automation. What I found was that there’s *very* little information out there on how to do this, and I ended up writing my own custom resource mapping for the virtual machines to VCAC:VirtualMachine objects – at least that way I could add my resource action. But this still didn’t expose the vCloud Director functionality for those machines. To do this I needed vCloud Air added as a vCloud Director host.

As per Jose’s advice, I duplicated the “com.vmware.library.vCloud.Host/addHost” action, named it “addHost_vCA_G2”:

I then modified the following line to include “/api/compute”:

newHost.url = "https://" + host + ":" + port;

Becomes

newHost.url = "https://" + host + ":" + port + "/api/compute";

I then duplicated the “Add a connection” workflow to create “Add a connection (vCloud Air Gen2)” and swapped the old action for the new action:

Now I can add vCloud Air (PAYG/Gen2) as an endpoint in the normal way:

The out-of-the-box “IaaS vCD VM” Resource Mapping now works in vRA and I can create custom Resource Actions against the vCloud:VM object type.

]]>http://www.definit.co.uk/2016/04/adding-a-vcloud-air-payggen2-instance-to-vrealize-orchestrator-as-a-vcloud-director-host/feed/07401#vROps what if I want to scale up?http://www.definit.co.uk/2016/04/vrops-what-if-i-want-to-scale-up/
http://www.definit.co.uk/2016/04/vrops-what-if-i-want-to-scale-up/#respondWed, 13 Apr 2016 21:42:49 +0000http://www.definit.co.uk/?p=7295A lot of noise is being made about vROps being able to scale out, and rightly so: it works -very- well.

However what if you want to scale up your node or nodes, going from say small to medium?

Reasons why?

Perhaps your POC has proved so useful you want to move it into production?

You have lots of really useful historical data on your existing smaller build and you don’t want to redeploy (and therefore lose that historical data).

You have limitations where scaling out is simply not an option but you need vROps to take on more work.

There are likely other compelling reasons for you to want to scale up your vROps nodes, so let’s look at how we can do that.

First of all, it’s important to know that this action is indeed supported, but you really need to give it some careful consideration. Before electing to do this work, I would urge you to take a look at the sizing spreadsheets VMware have made available so you can be sure you are making the right decision.

Once you know what size you are going up to, you will need to consider the virtual hardware version: for instance, a medium-sized node is deployed at hardware version 7, limiting it to 8 vCPU, so you will NEED to increase the hardware version to accommodate any increase in node size.

Finally, the process to increase the node size is quite straightforward.

First of all take the cluster offline in the admin section. (this is critical)

Gracefully shutdown each of the nodes until all of the vROps appliances are powered off.

Increase the vCPU, RAM and disk where required.

Power on your Master node first and allow it a few minutes..

Power on your HA replica (if you have one) and wait a few minutes..

Power on the data nodes..

Once they are all up and running verify the cluster can see all of the nodes and that they are ready for the cluster to be switched back on.

Switch on the vROps cluster.

Job done.

It goes without saying that vROps is a beast, so make sure your hardware can handle the increase (hosts and storage) and be mindful of the reference architecture.

At present I am not aware of any KB article from VMware for this, but as soon as I find one I will update this post.

]]>http://www.definit.co.uk/2016/04/vrops-what-if-i-want-to-scale-up/feed/07295#vROps Webinar 2016 – Announcing Part 4 : Contention Based Performance & Capacity Managementhttp://www.definit.co.uk/2016/04/vrops-webinar-2016-announcing-part-4-contention-based-performance-capacity-management/
http://www.definit.co.uk/2016/04/vrops-webinar-2016-announcing-part-4-contention-based-performance-capacity-management/#commentsMon, 11 Apr 2016 11:33:20 +0000http://www.definit.co.uk/?p=7282We hope you are having fun with the vROps Webinar Series and learning in the process!

With the next instalment of this series, we have a new speaker who will entertain us. He is no stranger to the world of vRealize Operations Manager and has written a couple of books on vROps as well. I am referring to Iwan Rahabok, who has been my partner in crime on all the vROps related work which we do, inside VMware or with the community.

During this session we will help you understand a few basic concepts of using the right counters to monitor performance and capacity in your infrastructure. We will deep dive into the concept of the consumer and the provider layer and help you with solving issues which you might face as a provider of infrastructure to your business.

NOTE – Don’t forget to mark your calendars by saving the calendar invite!! Feel free to forward the invite to anyone who might be interested. It’s open to all!!

]]>http://www.definit.co.uk/2016/04/vrops-webinar-2016-announcing-part-4-contention-based-performance-capacity-management/feed/37282When everyones an Architect.. no one is..http://www.definit.co.uk/2016/04/when-everyones-an-architect-no-one-is/
http://www.definit.co.uk/2016/04/when-everyones-an-architect-no-one-is/#respondFri, 01 Apr 2016 10:34:45 +0000http://www.definit.co.uk/?p=7186I think this is the first time I have written a “rant” post (for a while at least), so if you don’t want to hear me whine, run away now!

In the past 12 months I think I have met more folk with Architect in their job title than I truly believe is valid. Now, I will be clear: this is not a slight on the individuals; all of them have been exceptional (at least the ones I have had the pleasure to work with).

An architect is a person who plans, designs, and oversees..

However, what I feel is happening is a devaluation of the job title and its meaning. While I understand many people pursue that role/title as an end goal for their current career plan, in my opinion it’s getting quite ridiculous how many now seem to exist.

Too many chiefs (in title) not enough indians?

Maybe I am off base, what do you think?

]]>http://www.definit.co.uk/2016/04/when-everyones-an-architect-no-one-is/feed/07186#vROps Hidden Gem – Automation Action Frameworkhttp://www.definit.co.uk/2016/03/vrops-hidden-gem-automation-action-framework/
http://www.definit.co.uk/2016/03/vrops-hidden-gem-automation-action-framework/#respondThu, 31 Mar 2016 18:23:31 +0000http://www.definit.co.uk/?p=7201In any large complex product it is often the case that there are “hidden” gems deep inside, and vROps is no exception.

The Automation Action Framework (AAF) is one of these, a really powerful vROps in-built automation tool.

Now, when I have discussed this capability with my peers and friends, many ask “why not use vRO?” My reply is quite simple: “what if you do not have vRO, or the in-house skills to utilise it properly?” and “with the AAF you do -not- need any scripting knowledge or have to set up any workflows.”

Make no mistake, vRO is the daddy when it comes to serious automation in your environment, and the AAF has only a set number of actions available to it. However, vRO requires a good deal of time and effort to get up and running, while in comparison the AAF can be set up and ready to roll in a short space of time.

So what can the AAF actually do?

The following actions are available in vROps and can be manually initiated or automated:

Delete Powered Off VM

Move VM

Power Off VM

Power On VM

Set CPU Count And Memory for VM

Set CPU Count And Memory for VM Power Off Allowed

Set CPU Count for VM

Set CPU Count for VM Power Off Allowed

Set CPU Resources for VM

Set Memory for VM

Set Memory for VM Power Off Allowed

Set Memory Resources for VM

Shut Down Guest OS for VM

With vROps you can either have it recommend the actions listed above and then manually click a button, or fully automate them. The latter should be planned carefully, but you can learn more about how to do so in a recent webinar I presented, “Building self-healing environments”, along with my good friend Sunny Dua.

As with many things in vROps, you can define permissions so you can pick and choose who can do what (support staff, admins, etc.).

The bottom line is that there is some real strength you can utilise here in vROps; it’s up to you how you wish to use it.

]]>http://www.definit.co.uk/2016/03/vrops-hidden-gem-automation-action-framework/feed/07201What’s in my vRA7 EBS Payloads?http://www.definit.co.uk/2016/03/whats-in-my-vra7-ebs-payloads/
http://www.definit.co.uk/2016/03/whats-in-my-vra7-ebs-payloads/#commentsWed, 30 Mar 2016 15:52:35 +0000http://www.definit.co.uk/?p=7192As I discussed in my previous post, vRA7 Event Broker integration is a big change from previous versions of vRA – we no longer receive the same objects that used to be passed from vCAC/vRA to vCO. Instead we receive the mysterious “payload” – a properties object.

I wanted to create a workflow that I could enable to log all of the keys, values and types of the properties object for each stage of the vRA7 MachineProvisioning workflows, and create a reference for myself on the payload for each stage.

To do this I created a new workflow “debugProperties” and added an input variable called “payload”, type Properties. Next I added a single scriptable task and cycled through the properties. Some of the properties’ values are actually other properties objects, so there’s a function to test the type and iterate through if required.
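vRO scriptable tasks are written in JavaScript, but the traversal logic is easy to sketch. Here it is in Python for illustration, modelling the payload as a nested dict with hypothetical keys:

```python
# Mirror of the debugProperties logic: walk every key in the payload, record
# its name, type and value, and recurse when a value is itself a nested
# Properties object (modelled here as a dict).
def log_properties(props, indent=0):
    lines = []
    for key, value in props.items():
        if isinstance(value, dict):  # nested Properties object
            lines.append("  " * indent + f"{key} (Properties):")
            lines.extend(log_properties(value, indent + 1))
        else:
            lines.append("  " * indent + f"{key} ({type(value).__name__}) = {value}")
    return lines

payload = {  # hypothetical example payload
    "lifecycleState": {"state": "VMPSMasterWorkflow32.BuildingMachine"},
    "machine": {"name": "DefinIT-001"},
}
print("\n".join(log_properties(payload)))
```

In the real workflow the same recursion runs over the vRA Properties object and writes each line to the vRO log instead of returning a list.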

To make it work, you will need to tell your blueprint to pass the payloads to Orchestrator using the following properties:

I also added a custom property of my own, to watch it go through the events.

I then added a subscription to the MachineProvisioning Events, using the following conditions to ensure it only ran on the Blueprint with “DEBUG” in the name (each machine build generates a lot of events, so we don’t want every build to run every event!)

As you can see, even for one run there are a lot of events triggered – and this is just a standard VM build followed by an expire.

The output of these workflows looks like this:

Finally, for my own reference I wanted to view my example data in an easier format – so I have created a spreadsheet and pulled in the relevant data, collating it by workflow stage. The spreadsheet is also available on my vRA7 EBS Examples github page

Not the world’s most exciting vRA7 post, but hopefully a useful reference!

]]>http://www.definit.co.uk/2016/03/whats-in-my-vra7-ebs-payloads/feed/27192#vROps Webinar 2016 : Part 3 – Capacity Planning De-Mystified!http://www.definit.co.uk/2016/03/vrops-webinar-2016-part-3-capacity-planning-de-mystified/
http://www.definit.co.uk/2016/03/vrops-webinar-2016-part-3-capacity-planning-de-mystified/#respondSat, 26 Mar 2016 16:29:47 +0000http://definit.mcgeown.co.uk/?p=7188Here is our 3rd instalment of vROps Webinar Series 2016. In this session Sunny and I delivered the session on vROps Capacity Planning, with some demo scenarios which will help you define your capacity monitoring goals.

Session Details:-

Capacity Planning Demystified!

Capacity Planning is an integral part of vRealize Operations Manager, and through this session we want to de-mystify the concepts of Capacity Planning in vROps using features such as projects and the out-of-box capacity views. We look at defining the capacity planning policies based on the business requirements of an organization.

A big thank you to Rawlinson Rivera for providing all the lab equipment required for the live demos, and thanks again to my co-host Sunny, you’re a legend!

So without further ado here is the recording for this session on Capacity Planning

Note : It is recommended that you watch the video in HD quality for a great experience.

The KB describes it occurring when “more than one VMware vRealize Orchestrator instance is configured for different tenants“. The issue I faced is not the same: in my case, I had the system default tenant configured to use the embedded vRO, and the customer tenant configured to use the system default (which would be the embedded vRO!)

The article itself does not give a work-around for this issue, but it’s possible to resolve it by editing the customer tenant Orchestrator Server configuration (Administration > vRO Configuration > Server Configuration) to use the external load balanced URL for the appliances (or for a PoC/small deploy with a single appliance, the appliance URL).

]]>http://www.definit.co.uk/2016/03/vrealize-automation-7-xaas-blueprint-form-displays-failed-to-retrieve-form-from-provider/feed/07156vROps Webinar 2016 – Announcing Part 3 : Capacity Planning De-Mystified!http://www.definit.co.uk/2016/03/vrops-webinar-2016-announcing-part-3-capacity-planning-de-mystified/
http://www.definit.co.uk/2016/03/vrops-webinar-2016-announcing-part-3-capacity-planning-de-mystified/#respondSun, 13 Mar 2016 16:08:17 +0000http://www.definit.co.uk/?p=7151vROps Webinar Series 2016 is back, and as promised during the last session we will now deep-dive into vRealize Operations Manager policies, specifically around the area of Capacity Planning.

Capacity Planning Demystified!

Capacity Planning is an integral part of vRealize Operations Manager, and through this session we want to de-mystify the concepts of Capacity Planning in vROps using features such as projects and the out-of-box capacity views. We will also look at defining capacity planning policies based on the business requirements of an organization. And yeah, this will all be done through LIVE DEMOS!

]]>http://www.definit.co.uk/2016/03/vrops-webinar-2016-announcing-part-3-capacity-planning-de-mystified/feed/07151Is there any #vROps reference architecture?http://www.definit.co.uk/2016/03/is-there-any-vrops-reference-architecture/
http://www.definit.co.uk/2016/03/is-there-any-vrops-reference-architecture/#respondThu, 10 Mar 2016 12:17:38 +0000http://definit.mcgeown.co.uk/?p=7147Given the flexibility in which you can choose to use and deploy vROps, a question that frequently comes up is “is there a best practice?”

While that phrase is getting pretty tired, it is still valid if you are just starting a design for a new vROps build or trying to make the best of a bad implementation. Rather than me trying to tell you how you should use vROps in your place of work, I would direct you to a very useful PDF VMware have produced: the vRealize Operations Manager Reference Architecture.

Needless to say, this is a tremendously useful resource and should be a first port of call for anyone validating their vROps design choices.

]]>http://www.definit.co.uk/2016/03/is-there-any-vrops-reference-architecture/feed/07147vRealize Automation 7 Custom Hostname with Event Broker (EB) Subscriptionhttp://www.definit.co.uk/2016/03/vrealize-automation-7-custom-hostname-with-event-broker-eb-subscription/
http://www.definit.co.uk/2016/03/vrealize-automation-7-custom-hostname-with-event-broker-eb-subscription/#commentsWed, 09 Mar 2016 17:54:21 +0000http://www.definit.co.uk/?p=6957The new Event Broker service in vRA7 is one of the most exciting features of this latest release; the possibilities for extensibility are huge. At this point in time you can still use the old method of workflow stubs to customise machine lifecycle events, but at some point in the future this will be deprecated and the Event Broker will be the only way to extend.

With this in mind, I wanted to use the Event Broker to do something that I am asked on almost every customer engagement – custom hostnames beyond what the Machine Prefixes mechanism can do.

As it turns out, Event Broker extensibility is not the simplest thing to get your head around – there is a very large (>100 page) extensibility document to parse!

For the purposes of this post, I will be using the BuildingMachine lifecycle state. This is because I want to modify the hostname before the VM is built – in much the same way as you would with the old ExternalWFStubs.BuildingMachine method.

One of the key differences between using the old WFStubs method and the new EB method is that it does not pass the same objects: there is no vCAC:Entity or vCAC:VirtualMachine – only a Properties object which holds the request properties. These include any custom properties from the blueprint.

My naming problem

I have a Machine Prefix set up for my lab which names machines “DefinIT-” plus a three-digit identifier. I want to be able to name my VMs based on properties in my blueprint – so let's say I want to include a value for location, type of VM and application, and then have the three-digit unique ID that is generated from the Machine Prefix.

In my blueprint, I have created three custom Properties:

DefinIT.Hostname.Location = “LAB”

DefinIT.Hostname.Type = “VM”

DefinIT.Hostname.Application = “WEB”

My desired outcome is that the machine that would be DefinIT-001 gets renamed as LAB-VM-WEB-001.

Create an Action

Not wishing to reinvent the wheel, I’ll use the renameVirtualMachine action from Ted Spinks’ blog article “Manage Hostnames with vRealize Automation – Part 2: Use a small vRO workflow“. Create the action as described in the article and then drag it onto the canvas. You can see that it requires two inputs – a vCAC:Entity representing the Virtual Machine, and a string for the newHostname.

Creating the main workflow

I created a new workflow – updateVirtualMachineName and created a “payload” input of type “Properties” – this is the properties object that the vRA EB will pass to Orchestrator.

Move to the Schema tab and drag a new scriptable task onto the canvas. Edit it and name it “getVirtualMachineID“. Using the Visual Binding tab, drag the payload IN parameter to the input pane so we get the Properties as an input to this task. Create two OUT parameters that map to attributes in the main workflow. These are both strings: “newHostname“ and “virtualMachineID“:

Edit the scripting tab and enter the script (copy from below).

Basically we retrieve a properties variable “machine” from the “payload” input (2), then retrieve the “id” value of “machine” (6). We will use this in a minute to retrieve the vCAC:Entity.

Next we retrieve another Properties object, this time from the machine (9). That Properties object contains the properties defined in the blueprint – including our Hostname variables.

We also retrieve the “name” property of the “machine” object (11), which is the vRealize generated hostname (e.g. DefinIT-001). Using string replacement, we strip out the “DefinIT” portion of the hostname to leave the unique ID (e.g. “-001”) (14).

Then we retrieve the variables by name from the “machineProperties” object for the new hostname (16-18). Finally, we build the string and assign it to the “newHostname” out parameter (20).
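The script itself did not survive this export, so here is a hedged reconstruction based purely on the step-by-step description above – this is not the author's original code. The small Props type and the sample payload values are illustrative stand-ins for vRO's Properties objects so the logic can be read (and run) outside Orchestrator; the numbered comments match the step references just described.

```javascript
// Stand-in for vRO's Properties type: only the get() accessor is mimicked.
function Props(obj) { this._o = obj; }
Props.prototype.get = function (key) { return this._o[key]; };

// Hypothetical payload shaped as the walkthrough describes – in vRO this
// arrives as the "payload" input bound from the Event Broker subscription.
var payload = new Props({
  machine: new Props({
    id: "1234-abcd",                       // hypothetical entity id
    name: "DefinIT-001",                   // vRA-generated hostname
    properties: new Props({
      "DefinIT.Hostname.Location": "LAB",
      "DefinIT.Hostname.Type": "VM",
      "DefinIT.Hostname.Application": "WEB"
    })
  })
});

// --- scriptable task logic; numbers reference the steps described above ---
var machine = payload.get("machine");                                    // (2)
var virtualMachineID = machine.get("id");                                // (6)
var machineProperties = machine.get("properties");                       // (9)
var generatedName = machine.get("name");                                 // (11)
var uniqueId = generatedName.replace("DefinIT", "");                     // (14) leaves "-001"
var location = machineProperties.get("DefinIT.Hostname.Location");       // (16)
var type = machineProperties.get("DefinIT.Hostname.Type");               // (17)
var application = machineProperties.get("DefinIT.Hostname.Application"); // (18)
var newHostname = location + "-" + type + "-" + application + uniqueId;  // (20)

console.log(newHostname); // "LAB-VM-WEB-001"
```

The "newHostname" and "virtualMachineID" variables map to the two OUT parameters of the scriptable task.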

Close the scripting action and then search for the “getVirtualMachineEntityFromId” action, and drag it onto the Schema. Search for the “renameVirtualMachine” action we created earlier and drag that onto the Schema too. The schema should look something like this:

Use the Visual Binding view of “getVirtualMachineEntityFromId” to link up the “virtualMachineId” attribute and drag the “actionResult” OUT to the Out Attributes pane (I changed the attribute name to “vCACVMEntity”). Don’t forget to go to the IN tab and bind “host” to NULL – the action will find the host for us.

Next bind the input parameters for the “renameVirtualMachine action”, mapping the “newHostname” and “vCACVMEntity” attributes.

Save and close the workflow.

Creating the event subscription

Log into vRealize Automation and navigate to Administration > Events > Subscriptions, then click to create a new Subscription. Select the “Machine provisioning” topic and click next.

As discussed, we need the workflow to run at BuildingMachine, before any work has been done with the VM. There’s a bug in the EB subscription wizard that won’t let you create an “All of the following” condition without adding three clauses (you can add three, then remove two of them later, it’s just the initial creation that fails!).

Next, name the subscription, leave the priority as default and make sure you check the “Blocking” option. If we don't create a blocking subscription, provisioning won't wait for the result of the Orchestrator workflow and will continue to the next state – by the time the workflow attempts to rename the vCAC:Entity, it will be too late. Click finish to complete the subscription.

Select the subscription in the list and click the “Publish” button.

Prepare the blueprint

Finally, we need to add the custom properties to the blueprint to ensure the “payload” is passed through to orchestrator from the Event Broker.

Edit the blueprint and add the three properties from before, as well as the following:

This property is critical because it tells the EB to pass through all of the properties related to the BuildingMachine lifecycle state. The value of “*” is a wildcard and allows us to inject all the values relating to BuildingMachine.
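The screenshot showing the property itself did not survive this export. For reference – and assuming the standard vRA 7 extensibility convention, so verify against the documentation for your version – the pass-through property for the BuildingMachine state takes this form:

```
Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.BuildingMachine = *
```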

Download

Conclusion

Getting my head around how the Event Broker passes information to the Orchestrator workflow has been a very steep learning curve, but I know that there are some excellent resources soon to be published from VMware that will make adoption of the EB subscription model much easier to consume.

For me the Event Broker is the single biggest leap forward for vRealize Automation 7 – the sheer flexibility and power of the tool is huge. It's well worth trudging through the documentation and the scant examples out there to learn this tool.

I hope this example allows others to get their head around it all quicker!

Can I secure #vROps?
http://www.definit.co.uk/2016/03/can-i-secure-vrops/
Tue, 08 Mar 2016 13:42:05 +0000

Those of you used to using vSphere on a regular basis will already be aware of the hardening guide for ESXi and vSphere, but what about vROps?

If the vROps appliance needs to be hardened, there is already a VMware-provided guide and tool to accommodate this.

VMware vRealize Hardening Tool 2.0.0
The vRealize Hardening Tool automates the hardening activity by applying appliance-specific configuration changes to a system. For more information about hardening vRealize, and on how to use the vRealize Hardening Tool, see VMware's documentation.

Please remember that this is an introductory session to policies; we intend to go deeper into policies in the next session in March, where we deep dive into more detail around capacity planning with vRealize Operations Manager. You will hear more about that as we progress into the next month.

A big thank you to Sunny Dua for presenting this session and to Iwan Rahabok for providing all the lab equipment required for the live demonstrations…

Note: It is recommended that you watch the video in HD quality for a great experience.

Running Platypus in Docker on Photon with AppCatalyst
http://www.definit.co.uk/2016/02/platypusindockeronphotonwithappcatalyst/
Wed, 03 Feb 2016 08:09:08 +0000

Just a quick little post this morning! Anyone who works with the vRealize Automation APIs should definitely check out Grant Orchard and Roman Tarnavski's awesome little side project, Platypus.

It only took me a couple of minutes to get it running on my MacBook – here’s how!

vROps Webinar 2016 – Announcing Part 2: Understanding Policies
http://www.definit.co.uk/2016/02/vrops-webinar-2016-announcing-part-2-understanding-policies-2/
Mon, 01 Feb 2016 14:17:36 +0000

vROps Webinar Series 2016 is back and, as promised during the last session, we will now take you into the world of vRealize Operations Policies.

What is a Policy?

“A policy is a deliberate system of principles to guide decisions and achieve rational outcomes. A policy is a statement of intent, and is implemented as a procedure or protocol”

While the role of vRealize Operations Manager is to help you with Performance & Capacity Management of your Software Defined Datacenter (SDDC), it is important that you feed the guiding principles of your business environment into vROps to get those rational outcomes. These outcomes span the Health, Risk & Efficiency of your SDDC environment.
Join us to learn more about policies and leverage the knowledge to enhance your SDDC environments. As always, it would be a combination of theory and hands on.

Sharing of this article is highly appreciated because Knowledge Increases By Sharing

#vROps 6.2 – upgrade and utilization dashboards
http://www.definit.co.uk/2016/01/vrops-6-2upgrade-and-utilization-dashboards/
Fri, 29 Jan 2016 08:25:30 +0000

As you will see, the upgrade is simple, and even though it's early days I haven't seen anything that has been broken!

If you are interested in seeing an example of the new utilization dashboard scroll to the bottom of this article.

Upgrading from 6.1

I will be doing this upgrade on a VA, so to begin with I will need the vRealize Operations Manager – Virtual Appliance Operating System upgrade .pak file (vRealize_Operations_Manager-VA-OS-6.2.0.3445569.pak).

Once applied, I will then perform the product upgrade of the VA using vRealize_Operations_Manager-VA-6.2.0.3445569.pak.

Performing the upgrade is simple enough: log in to the admin area, click Software Update and then click Install a Software Update.

Browse for the .pak file and click upload

The staging process can take a while..

Once the staging was complete, it was a matter of clicking next and agreeing to the EULA, and the upgrade process would begin (note the warning regarding the restart of the cluster).

The upgrade process took roughly 30 minutes or so (a single master node in my lab at the moment). I took screenshots while the upgrade took place, FYI.

Ability to Import Single Sign-On Users
As an Administrator, you can now add and authorize new users for vRealize Operations Manager by importing them from a Single Sign-On source.

Telemetry Enablement on Upgrade
This release includes a one-time dialog after you upgrade that allows you to participate in the VMware Customer Experience Improvement Program. This program collects anonymous product configuration and usage data to enhance future versions of vRealize Operations.

Portable Licensing
The portable licensing feature adds the ability for customers to license use of the product in vSphere as well as non-vSphere environments.

#vROps 6.2 what's new?
http://www.definit.co.uk/2016/01/vrops-6-2-whats-new/
Fri, 29 Jan 2016 07:25:36 +0000

vRealize Operations 6.2 was released last night and is now available for download!

Looking quickly at what's new, there are some good enhancements, but when you compare this to the 6.1 release it's perhaps a little light. Nevertheless, there appear to be some cool new features and enhancements to be had in this version.

There does not appear to be any sizing/scale increases.

Upgrading from existing 6.1 versions can be done via a .pak file.

The main focus of this update appears to be on stability (which is no bad thing given a few horror stories I have heard)

New Workload Utilization Dashboard
The Workload Utilization Dashboard enables you to see the object workload utilization for Cluster, DataCenter, and Custom DataCenter containers. The new dashboard incorporates an updated Utilization widget, capable of operating in either a capacity or workload utilization mode.


#vROps a new diva in the datacentre
http://www.definit.co.uk/2016/01/vrops-a-new-diva-in-the-datacentre/
Tue, 26 Jan 2016 16:06:36 +0000

For the last few months, and certainly very recently (at the London VMUG meeting), I have had the chance to talk to peers and #vExperts and share “war stories” with regards to vRealize Operations Manager.

What has become a consistent theme in all the stories is just how much compute resource vROps requires when you “go big” – and not just what is clearly defined in the sizing spreadsheet.

For example, some of the large deployments I have either been involved in or heard about (monitoring upwards of 30,000 VMs) required the deployment of Large vROps nodes.

A single Large node collecting data for a large deployment as outlined above requires the following resources:

16x vCPU

48GB RAM

2TB Storage

1700 IOPS

As you can see, a single node is not to be sniffed at, and you would need seven of them if you required HA – that's a total of 112 vCPUs and 336GB RAM. When you consider the IOPS required per node, we are already well into SSD/flash territory, so that will also need to be considered. Another important issue that has come up is that vROps really does need (at this scale) a 1:1 ratio of pCPU to vCPU, or else it has been seen to behave erratically.
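The cluster totals quoted above can be sanity-checked in a few lines, using only the per-node figures for a Large node listed in this post (the seven-node count is the HA deployment described here):

```javascript
// Per-node figures for a Large vROps node, as listed above
var perNode = { vCPU: 16, ramGB: 48, storageTB: 2, iops: 1700 };
var nodeCount = 7; // HA deployment discussed in the post

var totalVCPU = perNode.vCPU * nodeCount;           // 112 vCPUs
var totalRamGB = perNode.ramGB * nodeCount;         // 336 GB RAM
var totalStorageTB = perNode.storageTB * nodeCount; // 14 TB storage
var totalIOPS = perNode.iops * nodeCount;           // 11,900 IOPS

console.log(totalVCPU, totalRamGB, totalStorageTB, totalIOPS);
```

The aggregate IOPS figure in particular makes it clear why shared spinning disk rarely cuts it at this scale.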

Then you will need to consider things like DRS affinity and/or anti-affinity rules so as not to have your nodes ever sharing a host. Resource pools would also need to be considered.

With all the above to consider, vROps is no longer just another monitoring tool; in my opinion it should be treated like a tier-one application (even if it's in your management cluster). I know of many businesses and organisations that are now extremely dependent on the alerting, capacity planning and other features vROps brings to the table as a product. It has become the hub of a massive quantity of data, and with more features and functions being added with each release this will only increase.

With all that's being thrown at it, and with that only set to increase, vROps (like a diva) will need and demand special attention when it comes to planning, deployment and day-to-day running.

vRealize Operations Webinar Series: 2015: Part 1: Building Self Healing Environments with #vROps
http://www.definit.co.uk/2016/01/vrealize-operations-webinar-series-2015-part-1-building-self-healing-environments-with-vrops/
Sat, 23 Jan 2016 10:20:52 +0000

A series of webcasts on vRealize Operations Manager 6.x, helping you learn anything and everything about the solution. Examples include vROps Policies, Alert Definitions, the Automated Action Framework, integration with third parties, etc. These include PowerPoint and live demonstrations. Each session ranges anywhere between 60 and 90 minutes.

With 100+ registrations and more than 70 live attendees, it was definitely a great start to this series. On popular demand of many who could not attend the session due to time-zone differences, we have recorded the session and you can watch the same right here. It is recommended that you watch the video in HD quality for a great experience.

As mentioned during the session, if you have any feedback regarding the session or any requests, feel free to leave it in the comments section or use Twitter to reach out to us. Hope you enjoy this session. Stay tuned for more in the upcoming sessions.

With a new year comes new things and I am delighted to announce the start of a vROps based webinar series.

These will consist of a set of live webcasts on different topics around vRealize Operations Manager open to public to attend live or view the recorded sessions later.

Personally I am delighted to get the chance to work with Sunny Dua on this new project. Sunny is a well known and respected Senior Consultant at VMware and can often be found on Twitter as @Sunny_Dua. If you have used vCOps or vROps then it's very likely you will have come across his name and his blog (http://vxpresss.blogspot.co.uk/). On top of all that, he is also a member of the VMware CTO Ambassador program.

We thought that we needed to uncover some great use cases which we have been delivering for customers and share them with anyone who is interested in learning about vROps. Let me list the basic rules of this series so that you can hook onto it and learn more about vROps.

WHAT

A series of webcasts on vRealize Operations Manager 6.x, helping you learn anything and everything about the solution. Examples include vROps Policies, Alert Definitions, the Automated Action Framework, integration with third parties, etc. These will include PowerPoint and live demonstrations. Each session will range anywhere between 60 and 90 minutes.

WHEN

We plan to do 12 sessions this year, one for each month. We aim to deliver these in a time zone which is suitable for most regions; however, it is impossible to cover the entire world. To solve this, we will use the recording capabilities of WebEx and share the sessions via our blogs. We will run this on every 3rd Friday of the month.

FOR WHOM

While this session is open for anyone and everyone, the people who would really benefit from it are Administrators, Consultants, Architects, Support Professionals, etc. The idea is to make you a pro so that you can share the goodness of vROps.

HOW

This would be a WebEx session where you can join as a participant. You can either dial-in using a toll-free number or get a call back from WebEx server. As a participant you would be muted to begin with and would be able to ask questions or contribute either via WebEx chat or audio during the Q&A session.

ABOUT SPEAKERS

We will start with Sunny and myself and will add more experts from the field as we move along in the year…

WHEN WILL THIS START ??

As I mentioned earlier: the third Friday of this month, i.e. 22nd January 2016. Here are the complete details:

So save that calendar invite and we will see you for the first instalment. We will also share a short survey after the event, as we want to ensure we have your feedback to improve the series as we go through the year.

Sharing of this article is highly appreciated because Knowledge Increases By Sharing

vRealize Orchestrator REST API – "Connection pool shut down"
http://www.definit.co.uk/2015/12/vrealize-orchestrator-rest-api-connection-pool-shut-down/
Tue, 01 Dec 2015 17:29:10 +0000

If you use the in-built vRealize Orchestrator instance shipped with the vRealize Automation appliance, then you might run into this issue when working with the REST client:

The vRA appliance version I have (6.2 – note to self: need to update the lab!) includes version 1.0.4 of the REST plugin. According to the release notes, this was fixed in 1.0.5 – typical!

So the solution is to upgrade the REST API plugin.

vROps multinode SNMP gotcha
http://www.definit.co.uk/2015/11/vrops-multinode-snmp-gotcha/
Fri, 27 Nov 2015 20:43:43 +0000

So, if you are already familiar with vROps, you will know that you can now have multiple nodes in your cluster:

8 nodes if you are on version 6.0.x

16 nodes if you are on version 6.1

Why does this matter if you plan to use SNMP with your vROps cluster?

Quite simple really: what I discovered today is that, even though you have a master node, SNMP traps can and will be sent from all of the nodes to the destination you have configured.

Why is this a problem?

If you do not configure your SNMP trap receiver (let's say a vRO instance) to receive SNMP traps from all of the possible node IPs (or FQDNs), you will miss the traps sent from the other nodes in your vROps cluster.

That’s a pretty big deal if you are reliant on those traps for event/alert notification!

As far as I can determine traps will be sent from the nodes that have been configured as collectors for any given monitored source. But of that I am not 100% sure.

So there we have it: if you plan to use SNMP, or already have it configured, make sure you configure your SNMP receiver to account for all of the nodes in your vROps cluster.

#UKVMUG – Curry – Beer – Sessions – Weddings
http://www.definit.co.uk/2015/11/ukvmug-curry-beer-sessions-weddings-2/
Fri, 20 Nov 2015 13:25:33 +0000

This was my second UKVMUG and my first vCurry evening, so I was really looking forward to getting up to the event and seeing everyone.

The vCurry event was excellent and a great chance to enjoy good food in great company. I had the chance to meet many people I would not normally get to see – Frank @fbuechsel and Brad @VMUG_CEO to name a couple! To top off a really great evening, we (my table/team) won the vQuiz, which was organised and conducted by Stuart @Virtual_Stu (thanks Stuart!), and a big thank you to Matt @Twickersmatt for the beers (one of the prizes).

Needless to say for me that was a great way to start the conference.

The next day the Usercon started in earnest, for which I made the error of assuming bacon would be served for breakfast – suffice to say @sammcgeown has been giving me grief ever since…

The opening keynote delivered by @JoeBaguley, titled “Containers, Microservices, Turtles, Chickens and Other Animals”, was superb – kittens, chickens, “tri-modal”, Docker, whiteboxes and unikernels were just some of the subject matter. If you did not get the chance to see it, I would highly recommend keeping an eye out for the recording (which I believe will be available on the VMUG website).

Next up, after quickly hitting the vendor floor, I sat in on @sammcgeown's Ravello home lab session. This was of great interest to many, as naturally the need for a home lab these days is pressing – but how do you choose what is right for you? Sam demonstrated the strength of the Ravello product and how it is a very real and valid candidate for your new home lab, or at the very least an extension to your existing one.

After that session I quickly made ready for my own session, titled vROps Rox. This was my first ever session at the UKVMUG, so I was keen to be as prepared as I could be.

I am glad to say my session went really well (no technical hitches) and had a great deal of interaction from the attendees – lots of discussion on deployment considerations and the Automation framework. I want to say a really big thank you to everyone who came along!

After finishing my session I took time to hit the vendor floor once more and catch up with attendees – just like VMworld, the UKVMUG is a great place to meet people from all around the nation as well as abroad!

The only other session I got along to was a packed @Steiner_Matthew vROps session, which was superb, specifically covering performance monitoring.

The day was finished off with a closing keynote by @jtroyer, “Architecting Your IT Career”, which was excellent – another session to add to your watch list once it becomes available.

The event was rounded off in a bittersweet fashion, with the vendor prizes being handed out and then the sad news that not one but three of the four UKVMUG/LonVMUG committee members would be standing down. Alaric aptly named it on his slide deck “The Red Wedding” – while not quite as brutal, it is without question a big shame, and they will be very sorely missed, as they really have set the benchmark for VMUG events in the UK. Alaric, Jane and Stuart have worked tirelessly for years for the good of the community, so it will be big shoes to fill for whomever steps up to take their place alongside Simon Gallagher. I wish the three of you the very best in your new ventures!

So, quite a Usercon, but incredibly worthwhile – thank you again to the organisers and to the VMUG team (@VMUGBrandi, @VMUGEmily) who travelled over from the States to support and (as Alaric put it) “made everything look great”.

Already looking forward to the Usercon next year!

#UKVMUG Ravello Home Lab Winner!
http://www.definit.co.uk/2015/11/ukvmug-ravello-home-lab-winner/
Fri, 20 Nov 2015 11:11:09 +0000

First of all, thank you to everyone who came along to my session at the UKVMUG yesterday – it was great to see so many people at a round-table discussion; sorry for those that had to stand! I hope it was helpful, and maybe a few of you will be building some awesome labs in the cloud!

Ravello very kindly sponsored a free home lab, equivalent to the vExpert 1000-hour account, as a prize for my session at the UKVMUG yesterday. Using a high-tech random number generator and an Excel spreadsheet, the winner was picked. So, without further ado, congratulations go to…

Chris Good

I’ve passed Chris’ details onto my contact at Ravello who will be setting him up with his account – enjoy your lab!

Once again, thank you to everyone who came and participated in the session, I very much enjoyed it, and thank you to Ravello for sponsoring the home lab!

Photo credit: Oliver Happy (@OliverH4ppy)

Disclaimer: I am a Ravello user and I receive a free vExpert 1000h account from Ravello – however I am not paid to endorse them, and I have no official affiliation with them – I just think it’s cool tech!

MindMap: vRealize Automation Roles
http://www.definit.co.uk/2015/11/mindmap-vrealize-automation-roles/
Tue, 10 Nov 2015 20:45:19 +0000

I use mind maps quite a lot for study – I find the visual representation of information makes it a lot easier for me to remember! Below is a mind map I created for learning the roles in vRealize Automation, which I used during my presentation for #vBrownBag on VCP6-CMA objective 2.

Apologies in advance if this post is jumbled nonsense – I'm still way too excited!

This morning I woke to the news that I have passed my VCDX-CMA!

This was my second attempt at VCDX and although the first failure was a painful experience, the lessons learned from it were invaluable to take into the defence the second time around. Failing doesn’t have to be a negative experience – if there is one thing that I will take from the VCDX program it is that there is ALWAYS more I need to learn, and I can always do better. Learning has to be a way of life (in this industry especially!) and the minute you stop, you start falling back.

My defence went better than last time (clearly!) but I still wasn’t confident that I’d pass. In fact as the wait went on I started thinking more and more like I’d failed, but I think it’s easier to remember the bits you struggled with than the questions you answered effortlessly!

According to the VCDX directory, it looks like I'm only the 5th person in the UK to hold the VCDX-CMA! *Edited: there are some additional CMAs for double VCDXs added now*

What next?

Ideally I’d like to submit a DCV and an NV design at some point – I have the VCAP exams for DCV and the VCIX exam for the NV track, so it’s just a case of having time to find and write up a design.

For now though, I think I’ll just enjoy this one and take a break!

Thanks

A massive thank you has to go to…

Ruth, my wife – she’s been very understanding about lots of late nights and weekends working, missing time with her and the kids to study and also had to deal with me being stressed out at times. She always encourages me, always pushes me to be better and to go further and always believes in me, even when I don’t.

Gregg Robertson – he originally pushed/tricked/coerced me into aiming for the VCDX and has been a great study partner, also passing his VCDX-DCV this round – massive congratulations.

Lior Kamrat – Lior was the 3rd member of our little study group and unfortunately didn’t make it this time round. His design was very similar to mine in a lot of ways so it was great to bounce off each other’s ideas and preparation. I am sure that his experience this time around will be the platform for success next time – as mine was.

Larus Hjartarson – Larus has written some excellent articles on preparing for the VCDX defence (especially his scenario preparation) which were invaluable resources getting ready. I also had the privilege of meeting up with Larus a few times at VMworld this year to do mock defences and discuss strategy.

Karl Childs and Chris Colotti – these guys worked really hard to put together the defences and manage the VCDX program, even when it looked like there wouldn’t be a slot for me to defend Karl worked feverishly to make sure that it was sorted.

My panellists – I’m not supposed to discuss who my panellists were, but I wanted to thank them for their time and the effort that they put in – they spend a lot of personal time and effort on the defences on a voluntary basis.

Xtravirt – my employer, they gave me the opportunity to step up to consulting nearly two years ago now, and through that I have had the opportunity to develop and hone my skills and become someone at VCDX level!

Wednesday

I had lined up several sessions so I was quick to get along to my first session – Operational Remediation with vRealize Operations… Tying it All Together – #MGT5735. The session was excellent and gave a great overview and introduction to what is possible with remediation in vROps 6.1. Big thanks to Chima Njaka for this session.

Aside from a few other sessions, I spent a lot of the afternoon in the VMUG lounge with my colleagues from @xtravirt (see pic below) and meeting lots of other folk from around the globe (this is what makes this conference so great for me), putting faces to Twitter handles for the first time, and so on. Networking with your peers at an event such as this is incredibly valuable, educational and really enjoyable.

After hitting the solutions exchange once again in the late afternoon it didn’t take long before the VMworld Party began. In short it was flipping awesome, great food and side entertainment with Faithless 2.0 headlining.

Thursday

As my flight was at 14:15 I only had the morning. I was once again a booth babe at the VMUG lounge for a short while, and then spent the rest of the morning in the bloggers lounge meeting and having discussions with great people such as @alexgalbraith, @pmcsharry, @julian_wood and @LiorKamrat (to name a few), once again underlining the value of attending VMworld by the sheer quality of its attendees.

All in all #VMworld2015 was superb. My only real concern is the lack of real depth in the advanced technical sessions – this is an important part of attending such an event, so if the content carries on becoming lighter and lighter, an extremely important aspect of the conference will be lost. I hope this gets picked up and addressed.

Nevertheless I had a fantastic time and hopefully I will be fortunate enough to attend again next year!

I was fortunate to attend a vExpert briefing for vRA.Next, which was announced this morning to be vRealize Automation 7. The briefing was run by Jad El-Zein (@virtualjad) along with Grant Orchard (@grantorchard), Brian Graf (@vbriangraf), Kimberly Delgado (@KCDAutomate) and Jon Schulman (@vaficionado) – if that list of names doesn’t fill you with confidence for vRA.Next, then I suggest you follow them on twitter and trust me that it’s a crack team!

So, my highlights:

Completely automated deployment…almost. The deployment of appliances and installation of IaaS components and pre-requisites will be wizard driven; the Windows Servers will need to exist and have an agent installed, and the MSSQL server will also need to be installed. Anyone who’s done a distributed vRA install will know that this is a massive improvement over the current state of affairs.

The vRealize Automation appliances will be clustered automatically for core services such as identity, cafe (portal), vPostgres and embedded vRealize Orchestrator (Embedded vRO is now recommended for production).

A new identity service. No more vSphere SSO or PSC – VMware Identity Manager (vIDM) is a new, highly scalable and high-performing federated identity platform. Any SAML identity source is supported, with more than 3 million users per source.

An initial setup wizard that creates your first tenant, configuring things like fabric groups, business groups and vSphere endpoints automatically. It will even import your existing vSphere templates as clone blueprints.

The old CDK is gone! Instead you can use any event within vRA that is pushed through the RabbitMQ message bus to trigger extensibility through workflow subscriptions.

vRealize Orchestrator has a new HTML5 Control Center which is your single admin point for plugin configuration as well as adding metrics and monitoring for all workflows being executed.

There’s no need for unique tenant URLs – the new vIDM platform allows a single logon interface for all tenants. (Though you can keep your URLs if you want!)

vIDM can also be used to control authentication from IP source, e.g. to restrict logon to a specific subnet regardless of whether the credentials are valid or not. This has some cool ramifications for having the web layer in a DMZ, for example.

Functionality is slowly being migrated from the old IaaS/DynamicOps layer to the appliance – this is fantastic news. The migrated portions (such as vSphere Endpoint configuration) are now accessible through the vRA API, as well as gaining the speed and stability that the appliances provide.

The new blueprint designer is awesome. Added to that what was AppD is now called App Services and allows you to take a base blueprint (e.g. a CentOS VM) and drag and drop software components that you’ve scripted on top (e.g. Apache, then PHP). You can also drag and drop XaaS (vRO workflows) onto the blueprint, as well as existing blueprints to create nested blueprints.

Much fuller integration between NSX and vRA. There’s a whole raft of improvements in the integration between vRA and NSX – e.g. you can drag a new routed network onto a blueprint and it will automatically create a new Logical Switch and Distributed Logical Router to attach the Logical Switch to. Similarly load balancing applications is a drag and drop operation, as is applying existing security groups.

All blueprints can be imported and exported in YAML, which opens up exciting possibilities for storing versioned blueprints and retrieving programmatically.

There are over 60 lifecycle events out of the box on which you can trigger Orchestrator workflows, but you can create custom filters based on properties and events to extend functionality – the only limitation is what you can imagine!

There are still several months of development to go between now and the GA of vRA 7 and the development seems to be moving at a great pace. Between beta 1 and beta 2 there was a huge amount of change, and even the version demoed today had new features and UI.

Someone did ask the inevitable upgrade question – there is an upgrade path but it will have caveats – e.g. if you’ve got vCloud Director (or vCloud Air) endpoints, or are using physical endpoints then it’s likely you will need to remove them and re-import post upgrade.

I’m properly excited for this next version of vRA – some of my biggest problems with 6.x have been removed or improved, and the new extensibility features look awesome. This looks like a really mature release and builds in great new functionality.

One phrase stuck out and for me sums up the whole release beautifully – “extensibility gone wild”.

After the keynote (highlights were the docker announcements) I was on “booth babe” duty at the VMUG lounge. It was great to meet so many folk who were existing VMUG members and leaders as well as prospective new members. If you are at VMworld this year you should definitely stop by!

After lunch I hit the solutions exchange to catch up with a few vendors I had queries for, and also took the time to collect the #vExpert hoodie from @simplivity and the #vExpert tile from Tegile (thank you!). Suffice to say it was very busy and naturally noisy, but judging by the many happy faces of people with various items of “swag” under their arms, things were going well!

The only session I was booked in for was “5 functions of software defined availability”, presented by Frank Denneman and Duncan Epping. Naturally the session was excellent, but perhaps wrongly I was hoping for more of a deep dive, as a good deal of the content seemed to be geared to folk who were new to the subject matter. If naught else, it was an excellent refresher.

After the session I had the pleasure of meeting Steve Flanders (@smflanders), who had just wrapped up his vBrownBag session on Log Insight, so it was great to not only listen to but also discuss the product.

To finish the day I headed out to Goucho’s with my fellow blogger @sammcgeown here at Definit for a beer and a good meal!

All in all today was excellent and tomorrow promises to be even busier!

My trip started with a farcical attempt to fly – my 11:50am flight on Sunday didn’t leave ‘til 3:30pm, but in light of William Lam’s travel woes on the same day, I don’t think I’ll complain too heavily. After a quick stop off at the Fira to register and grab my VMworld bag, I headed off to meet DefinIT co-author Simon (@simoneady) at our AirBnB apartment (which, by the way, is awesome and a whole load cheaper than a hotel).

Sunday evening we headed out to the Hard Rock Cafe Barcelona for the annual vRockstar event, which was pretty good. The venue was packed but it was great to meet some old and new faces and drink some free beer. The less said about the rum and coke (90% rum, 5% coke, 5% ice) the better.

Monday morning we were up early to get to the venue for my (foolishly) early exam, which I blogged about yesterday. Short story, I took the VCIX-NV expecting to fail and passed! The exam is long, so after that I grabbed some food and spent time hanging out at the blogger space, networking and generally discussing VCDX strategy at length with Lior Kamrat (@LiorKamrat). For the rest of the Partner Day I did some VCDX mock design and troubleshooting scenarios with Lior and Larus Hjartarson (@lhjartarson), which was a really good session and helpful for confidence building prior to my defence next week. In the evening I headed out to the PernixData party down Las Ramblas, in the same bar as last year (Ocana). This was a slightly more chilled affair than vRockstar and was mostly spent drinking mojitos and meeting loads of nice guys. It’s always a pleasure catching up with Frank Denneman and James Smith from PernixData.

For the last few years at VMworld I’ve taken advantage of the discounted exam price and booked a “have-a-go” exam – typically an exam I’ve been wanting to do but not necessarily had the time I wanted to study for it. Since I have been fairly immersed in the NSX world for the last week, sitting in an NSX design and deploy class and surrounded by some very smart networking guys, I changed my “have-a-go” exam from the VCP6-CMA to the VCIX-NV.

The exam experience was a double-edged sword – on the one hand I really enjoyed the tasks and found all the questions to be fair. On the other hand I found the latency and the interface to be a real struggle: I needed to reload the web interface 20-30 times, each time costing me 30 seconds – that’s 10-15 minutes of wasted time. I also had to be swapped over to another terminal because mine crashed, with 10 minutes to go to the end.

The exam room was cold…very cold. Nearly four hours in a t-shirt in a heavily air-conditioned room and I was shivering. I should’ve learned from the last time but I didn’t! It’s also a long time to go without a drink too – I was gasping by the time I left.

My initial impression was that I’d failed the exam – I didn’t complete several (3-4) of the questions and I made a mistake early on which I had to spend a long time unpicking. After an hour and 20 minutes I’d only completed 4 of the 18 questions. So when I hit the “finish” button I assumed I’d have a 10-day wait for a failure – fortunately the exam was marked and the result sent to me a couple of hours after I’d finished: a pass with a decent score!

I’d recommend anyone planning to do the exam to go through the blueprint – Martijn Smit (@smitmartijn) has a great VCIX-NV study guide based on the blueprint which you can also download in PDF format. There are two VMware Hands On Labs that cover all of the blueprint functionality, so I would strongly recommend doing those too – and you can do the HOL but not follow any of the guides – break it and fix it.

It’s PEX (Partner Exchange) day at VMworld today, so it’s busy – but not the Tuesday (first day) of VMworld busy. Last night saw a fantastic #vRockstar party and a great chance to meet and have a beer with many of the vTwitterati.

Today of course has been largely overshadowed by the big news concerning Dell’s announcement that it is to purchase EMC, which has triggered a great deal of discussion and a healthy amount of banter and snark! It should be interesting to see how it all pans out, and tomorrow’s keynote will attract a great deal of interest regarding any comments on the matter.

The hang out area looks awesome as you can see..

For the evening’s entertainment we are spoilt for choice: vExpert, VMUG and PernixData are all having their respective parties, so I am looking forward to getting along, meeting more of the community and enjoying a beer (or more).

Tomorrow it will really begin and I am looking forward to getting stuck in to a few sessions, helping out in the VMUG Lounge and enjoying more of the goodness that is VMworld.

Building a vRealize Automation NSX Lab on Ravello

Tue, 29 Sep 2015 17:06:15 +0000

As a vExpert, I am blessed to get 1000 CPU hours access to Ravello’s awesome platform and recently I’ve been playing with the AutoLab deployments tailored for Ravello.

If you’re unfamiliar with Ravello’s offering (where have you been?!) then it’s basically a custom hypervisor (HVX) running on either AWS or Google Cloud that allows you to run nested environments on those platforms. I did say it’s awesome.

As an avid home-lab enthusiast Ravello initially felt weird, but having used it for a while I can definitely see the potential to augment, and in some cases completely replace the home lab. I spent some time going through Nigel Poulton’s AWS course on Pluralsight to get a better understanding of the AWS platform and I think that helped, but it’s definitely not required to get started on Ravello.

One more thing to add before I start the setup – even if I didn’t have 1000 hours free, the pricing model means that you could run your lab on Ravello for a fraction of the cost of a higher spec home lab. It’s definitely an option to consider unless you’re running your lab 24/7.

Now hit Publish and configure your application. I have had the best success with selecting Performance and Amazon, set a decent length of time for the Auto-stop and make sure the option to start all VMs automatically is unchecked – AutoLab needs to build in the correct sequence.

Once it’s published, power on the NAS and DC VMs by selecting both and clicking Start – this will start the build.

Those VMs will need about an hour to build, so in the meantime you can sort out NSX and vRA.

Once the build has completed, connect to the DC VM using RDP and run the validate PowerShell script. This will let you change the default password and set the VC config to add ESXi hosts automatically. It will also prompt you to download PowerCLI, which you should save to the B:\ network drive.

The build will fail – that’s OK; it’s because we don’t have PowerCLI downloaded yet. Once PowerCLI is in B:\, run the script again and it will complete:

Back in the Ravello Application, select the three hosts and the VC and start them.

Once Host1, 2 and 3 have started, open the console and select the PXE build option:

They can then be left to their own devices while they build.

Once the ESXi hosts have deployed and the vCenter Server has built, we can log on to the vCenter server using RDP, run the AutoLab Script Menu and select option 1 to validate the build:

The vCenter should now be available – use the vSphere Client to connect, and you can view the standard AutoLab setup.

Deploying NSX Manager to Ravello

The extended OVA functionality required to deploy the NSX Manager is not available directly on Ravello, so you can’t just upload the OVA and expect it to work. The easiest method I found was to extract the NSX OVA using 7-Zip and use the upload tool to upload the OVF.
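As a sketch, the extraction with the 7-Zip command line looks like this – the OVA filename and output folder are placeholders for whichever NSX Manager build you’ve downloaded:

```shell
# An OVA is just a tar archive: extract the OVF, manifest and VMDK disks
# (filename is a placeholder – substitute your downloaded build)
7z x VMware-NSX-Manager-6.x.ova -oNSX-Manager
```

You can then point the Ravello import tool at the extracted OVF and disks.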

Once the appliance has been uploaded it needs to be verified; I’ll be using the settings below:

Hostname: nsx

IP: 192.168.199.20

Subnet: 255.255.255.0

Gateway: 192.168.199.1

DNS: 192.168.199.4

Search: lab.local

Make sure you drop the RAM down to 8GB, otherwise it won’t start on Ravello. (I’ve configured a static IP address and used a NAT (port forwarding) to provide HTTPS 443 access but I’ll remove that later – it’s better to use the VC or DC to access it!)

Once the VM is verified, you can drag it onto your canvas and publish the application. The VM will boot and you can then configure the NSX manager via the console:

Log in using admin/default, and enter privileged mode (enable) using the password “default”. Type setup to begin the initial configuration:
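With the screenshots stripped out, the console session looks roughly like this (prompts abbreviated; “default” is the factory password mentioned above):

```
nsx login: admin
Password: default
nsx> enable
Password: default
nsx# setup
```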

Once rebooted, check you can access the NSX admin console via the DC or VC:

From here, the NSX install/deploy is as you would do it in a physical environment.

Deploying vRealize Automation to Ravello

Using the same method as the NSX manager deployment, extract the Identity Appliance and vRealize Automation Appliance and upload the OVF appliances directly to Ravello using the import tool.

Identity Appliance

vRealize Automation Appliance

Hostname: sso

IP: 192.168.199.21

Subnet: 255.255.255.0

Gateway: 192.168.199.1

DNS: 192.168.199.4

Search: lab.local

Hostname: vra

IP: 192.168.199.22

Subnet: 255.255.255.0

Gateway: 192.168.199.1

DNS: 192.168.199.4

Search: lab.local

Once the Identity Appliance is uploaded, navigate to your Ravello Library > VMs and select it. The configuration needs to be verified before you can move on:

Now drag the SSO appliance from the add VMs to your canvas, publish and power on:

Once it’s published, switch to the console, where you’ll see a warning about the hypervisor – press any key and ignore it! It’ll also want a password input, since the OVF environment it’s expecting isn’t there – enter a password twice to continue. The appliance will pick up a DHCP address, as the IP configuration isn’t included in the OVF. From the console, take a note of the IP and access it via the VC or DC servers.

Log in using the root user and the password you specified on boot. Change the IP address to the specified static IP and reboot. The SSO appliance is now ready to continue in the normal install process.

Next, open the Library > VM page and select the vRealize Automation appliance to verify the VM settings. The only real configuration I’ve made is to add the static IP address:

Once again, drag the vRA VM from the menu and publish the VM using the “Update” button. Once the VM is provisioned, open the console to acknowledge the warning, as with the SSO appliance:

Enter an initial password and let the appliance boot until you see the splash screen – grab the IP address and then use it to configure the appliance, as with the Identity Appliance, from the DC or VC server.

Log in using root and the password you entered in the console, and configure the IP address:

The vRealize Automation appliance is now ready to configure.

To create the last component, the IaaS server: from the Application canvas, click + and drag the “Empty” VM template onto the canvas.

Select it and configure as follows, ensuring you configure the name, IP addressing, CD mapping and resources. Click save, then publish the VM.

Install and configure the Windows OS ready for the IaaS installation with a static IP address.

My application now looks like this:

That concludes this post – getting everything ready for configuration has been a long process, but overall a lot less taxing than I expected!

Missing actions in vRealize Automation

Mon, 21 Sep 2015 11:36:16 +0000

Recently the team I am working with came across an interesting bug/issue with actions missing from deployed VMs. We had checked and double-checked the entitlements, yet the actions that should be available to the end-user/customer were not listed.

Everything appeared to point to a permissions issue until one of the team members noticed something with regards to blueprints in the catalog.

So, as I stated before, the end-users of the deployed VMs were not getting the actions assigned via the entitlements – only 4 actions were available.

The entitlements however were far more comprehensive as you can see.

We tried various avenues of investigation until one of the team members noticed that the blueprints’ catalog entries did not show any descriptions.

To correct this we clicked on each of the respective blueprints (which already had descriptions)..

and selected edit.

Then immediately exiting (making no changes) but clicking OK to exit the blueprint configuration. You will notice that even with no changes made, clicking OK seems to trigger something to happen (to the right of the Cancel button).

Returning to the catalog the descriptions were now present but also all the missing actions were now available to any newly deployed VMs and also any previously deployed VMs (which is great news as you can imagine).

I do not yet know why this resolves the issue, but suffice to say it was a welcome fix, albeit a confusing one.

I plan to ask VMware to learn more and update this post accordingly when I have a more informed answer/reason.

PernixData FVP 3.0 GA and Lab Install

Wed, 16 Sep 2015 12:56:01 +0000

I’ve been running “Pernix-less” since vSphere 6 was released, simply because I can’t afford to wait on learning new versions until 3rd party software catches up. It makes you truly appreciate the awesome power of FVP, even on my less than spectacular hardware in my lab, when it’s taken away for a while.

Obviously support for vSphere 6.0 was the big one I was waiting for, but don’t discount the rather understated “Performance and scalability improvements”. Not sure if renaming the database is a headline for release, but I’ll let that go. I’m really, really, REALLY hoping the license activation has improved because I found it a little clunky and frustrating before…we’ll see…

Prerequisites

You need to be running ESXi 5.1, 5.5 or 6.0 (no 5.0 support)

You need a server running 64-bit Microsoft Windows Server 2008 or 2012 for the management server

This should have 4 vCPU and 4GB of free RAM for a production installation. It’s possible to install with less in the lab, but…well it will directly affect the performance, there’s no point in scrimping on CPU/RAM for a performance solution. A VM with 4vCPU and 8GB RAM is recommended.

.NET Framework 3.5.1 SP1 or later (activate the feature)

FVP Management Server will install Java SE 7, and Java Runtime Environment (JRE) 1.8u40. The setup program overwrites any existing installation of the JRE so be careful if you’re sharing a server with something else that uses JRE – best practice is to give FVP a dedicated server anyway.

A service account for FVP to run under, access the SQL database and connect to vCenter Server. I’m using an account I’ve created svc_pernix@definit.local which is added to the local administrators group on my FVP Management server.

This account will also be added to the “Log on as a Service” policy

I have added it to my vCenter appliance as an administrator

I have also granted admin rights on my SQL server (this can be dropped to DBO rights on the Pernix database after installation).

Installing the PernixData Host Extension

The PernixData Host Extension needs to be installed on all hosts in the cluster on which you are activating FVP. You can install it using esxcli or VMware Update Manager. Since I’m only installing on 3 hosts I’ll use esxcli, but I think that’s probably the threshold for installing manually – any more and I’d go with VUM. Make sure you download the correct package for a manual or VUM install.

Copy the extension zip to somewhere accessible by your ESXi hosts. I’ve dropped it on an NFS share that’s mounted on all my hosts.

SSH or console onto the hosts and check there are no old FVP extensions installed

esxcli software vib list | grep pernix

Put the host into maintenance mode (type out this command as the “-” character can be pasted incorrectly!)
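For reference, the manual steps on each host might look like the following – the offline bundle path is a placeholder for wherever you copied the zip (and note the warning above about pasting the “-” character):

```shell
# Enter maintenance mode before installing the host extension
esxcli system maintenanceMode set --enable true

# Install the extension from the offline bundle (path is a placeholder)
esxcli software vib install -d /vmfs/volumes/nfs01/PernixData-host-extension.zip

# Exit maintenance mode once the install completes
esxcli system maintenanceMode set --enable false
```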

Installing the PernixData Management Server

Run the installer and accept the license agreement

I was unable to change the location of the install – not sure why, but I’ll be reporting it back to Pernix. Select the “Complete” installation to install the Management server and the CLI tools. If you just want the Management Server or CLI tools you can do a custom install.

Configure the connection to vCenter Server using the previously created account (make sure to use DOMAIN\user format, a UPN such as svc_pernix@definit.local is not supported).

As noted in the screenshot, you can run the Manager server as “Local System” and enter details for a vCenter Server local user to connect to the vCenter Server, and then configure a SQL Server Authentication user for the database connection. I imagine this may be required in some high security implementations, but it’s more complex and may cause confusion later down the road.

Enter the SQL server details, and either select a pre-created database or enter the name of a new database – I’m going for “PernixDataFVP”. I’m also using Windows authentication to connect as I’m running under the context of the service account.

Select the name or IP address of the management server – this is the address vSphere clients must be able to access. Configure the HTTP or HTTPS port if required – I’ve left mine as default.

After that, continue through the installer pages until we get to the installation

Because I’m doing an install in my lab I have less than the required RAM/CPU allocated. The Management server will install OK.

And finish!

Configuring FVP

So far the install process has been identical to previous versions of FVP – now here comes that new management interface mentioned in the release notes!

Log onto the PernixData Management Console using the HTTP address we configured earlier. Remember, if you changed ports during the install then you need to change them here:

FVP will then go online and authorise your license…and that’s it, done! This is a MASSIVE improvement on the old method, which involved license servers and downloading response files. It’s how it should work – well done Pernix!

Creating FVP Clusters

Next, select FVP from the drop down and you can see an overview of your FVP clusters – I’ve not got anything configured yet.

Click “Create” under FVP Clusters to begin the creation of your first FVP cluster (or use the FVP Clusters dropdown)

Name your new FVP cluster and select the vSphere cluster you want to enable FVP on:

Click on the new cluster to open the cluster details

Configuring Acceleration Resources

Select “Configuration” to begin configuring acceleration resources. Click “Add…” to select the flash (or RAM) you wish to use. In this cluster I have a 40GB SSD in each host that I will be using to accelerate Read/Write performance. If you wish to use RAM to accelerate your storage then you can allocate a minimum of 4GB, in 4GB intervals, up to 1TB of RAM per host. You cannot use RAM and flash together to accelerate on the same host.

Add Datastores to be accelerated

Select the Datastores/VMs tab and click “Add Datastores” to open the datastore dialogue. Select the datastores you wish to accelerate – in my case two NFS datastores on my Synology NAS. Adding the datastore automatically adds all VMs associated with that datastore to the FVP cluster, and any new VMs added to the datastore will receive the datastore’s default policy.

Next select the Write policy – the two options are Write Through, or Write Back. I won’t go into detail on the differences because 1) that would be a blog post in itself, and 2) the legendary Frank Denneman has already done so here and I’m likely to fall far short of his explanations. I’m going to select “Write Back” with a local and 1 peer copy, to provide write acceleration as well as some data protection.

To add only specific VMs, or to change the Write Policy for individual VMs, you can add VMs individually to the cluster using the “Add VMs…” button. This works in the same way so I won’t go into details, but I like the idea of setting a datastore policy that fits most VMs – e.g. Write Back with Local host and 1 peer – then assigning more peers to VMs where the data is more critical, or Write through to VMs that aren’t in need of write acceleration.

Other configuration

Fault Domains – configuring a fault domain allows you to replicate write data intelligently – e.g. to ensure a copy of each write is replicated to a different rack, or blade chassis to protect against the failure of such. For my lab, there’s no point in doing that, so I will leave the default Fault Domain in place, with all hosts in it.

Blacklist – if you want to ensure a VM is never accelerated, you can add it to the black list

Network Configuration – by default FVP will choose the vMotion network for all of its replication traffic, but you can specify a network dedicated just to FVP. Since my lab is all on 1GB networking, I like to dedicate a single 1GB NIC just for FVP rather than sharing with vMotion. To do this, ensure you have a VMkernel on each ESXi host on the network you wish to use, and ensure they can all communicate with each other.
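One quick way to confirm the FVP VMkernel interfaces can all reach each other is from an ESXi shell on each host – the vmk number and peer IP below are placeholders for my lab:

```shell
# List VMkernel interfaces to confirm which vmk sits on the FVP network
esxcli network ip interface ipv4 get

# Ping a peer host's FVP VMkernel address via that specific interface
# (vmk2 and 192.168.3.12 are placeholders)
vmkping -I vmk2 192.168.3.12
```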

I have a network called “VLAN-3-Pernix” and a VMkernel configured on each host, so I can select “Choose one network for all hosts” in the “Edit Networks” dialogue, then select my network of choice:

Disclaimer: In the interests of transparency, I want to state that I am part of PernixData’s PernixPro scheme, and as such I receive an NFR license and some additional access to briefings. I would also say that I was a fan of FVP long before I was a PernixPro and would still be, even if I wasn’t part of it – it’s just a very good software solution!

vROps 6.1 – first glance

Fri, 11 Sep 2015 15:04:05 +0000

I must admit I have been quite keen to get a look at vROps 6.1, so I was quick to upgrade my lab from 6.0 to 6.1 (with no problems, I might add) and have a good look at the new bits and pieces.

The bits I will cover briefly in this post are..

Upgrading from 6.0

EPO functionality

Dashboards

Automated actions (ON?)

Upgrading from 6.0

Ensure you upgrade the OS of the appliance first (if you have one) – vRealize Operations Manager – Virtual Appliance Operating System upgrade (vRealize_Operations_Manager-VA-OS-6.1.0.3038037.pak )

This was successful but the staging process took a while (at least 10-15 minutes for me)

Once the staging was complete it was a matter of clicking next and agreeing to the EULA and the upgrade process would begin. (note the warning regarding the restart of the cluster)

The upgrade process took roughly 30 minutes or so and I have a 2 node HA cluster.

Once the upgrade was completed and validated, the first thing I wanted to check out was the inclusion of Hyperic functionality. While it’s not like-for-like just yet, it’s a big step forward and not too soon – Hyperic is really starting to look jaded and dated.

EP Ops Adapter

A few things to note for the agent installation.

The new EPO agent cannot run alongside a Hyperic agent on any given OS. I installed the agent onto a Windows server which had an existing Hyperic agent and, while it installs “successfully”, it conflicts with the Hyperic agent (even if you disable the Hyperic agent). So if you are going to run some tests, bear this in mind. I found it worked perfectly if you removed the Hyperic agent first of all and then installed the EPO agent (no reboot required).

You will need the SSL certificate thumbprint for your vROps instance; this can be obtained from http://vrops-url/admin by clicking the golden coin icon at the top right of the UI.
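If you prefer the command line, the thumbprint can also be pulled with openssl – a sketch, assuming a placeholder hostname for your vROps instance (the admin UI method works just as well):

```shell
# Grab the vROps certificate and print its SHA-1 thumbprint
# ("vrops.lab.local" is a placeholder hostname)
echo | openssl s_client -connect vrops.lab.local:443 2>/dev/null \
  | openssl x509 -fingerprint -sha1 -noout
```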

Once it has been installed you can validate it is up and running by checking in the services and the log files if you so wish.

Quickly navigating to “All Objects” in “Environment”, you can expand the new “EP Ops Adapter”, which, for any of you familiar with Hyperic, does not include as much as a Hyperic adapter – but it’s still a good start, in my opinion.

You can select each server/VM that has an agent installed and get more information as per the example below.

I was lucky enough to have a late preview of 6.1 before it went GA, and one of the bits I really liked was a new widget for dashboards.

Capacity Utilization Dashboards

I will allow the screenshot to speak for itself: pretty flipping awesome.

The only thing that made me raise an eyebrow (though it will need more investigation) is "Automated Actions" being enabled by default – I am not sure that's a hot idea, but we will see.

What have the guys at VMware added? I have listed what I consider the highlights below.

The maximum of 8 nodes has been doubled to 16!

SSO integration has been added (requires vSphere 6.0)

Support for SRM has been added

vRealize Hyperic functionality has been added
With the addition of End Point Operations Management, the value of vRealize Hyperic functionality has been extended to the vRealize Operations Manager core product, without the need to deploy vRealize Hyperic

Remote collector resiliency
New functionality enables you to assign solutions to collector groups. Collector groups provide high availability access to data collection for the solution.

Support for IPv6
You can deploy vRealize Operations Manager in Internet Protocol version 6 (IPv6) environments.

Support for Windows Server 2012 R2
However it is still recommended to go with the appliances.

Dashboard and report enhancements
New functionality enables you to post a dashboard as a report, and post a report to a shared drive.

Automated workload placement and re-balancing
You now have the ability to re-balance workloads to optimize performance and preserve license optimization.

Telemetry
A new collection of deployment and usage statistics for vRealize Operations Manager has been added, to help improve product usability and performance.

Upgrade options
Direct upgrade path from 6.0, or migrate 5.8 to 6.0 and then upgrade to 6.1.

So some really great enhancements in 6.1. However, there is one key disappointment for me: HA is still limited to a single logical DC. #sadpanda

Joining vSphere 6 Platform Services Controller Appliance to an Active Directory domain via the command line
http://www.definit.co.uk/2015/09/joining-vsphere-6-platform-services-controller-appliance-to-an-active-directory-domain-via-the-command-line-2/
Thu, 03 Sep 2015 14:48:40 +0000

With a Platform Services Controller appliance deployed as part of a vCenter Server installation, either integrated as part of the vCSA or as a separate PSC appliance, you can easily join the PSC to an Active Directory domain using the Web Client.

When you’ve deployed the PSC as the single sign on layer of a distributed vRealize Automation deployment, you don’t have the vSphere Web Client to configure it in the same way. This means that you can’t add an integrated Active Directory identity source to the default tenant, either using the PSC machine account or an SPN for Kerberos.

Joining the PSC to the domain is actually a really simple operation – it uses the Likewise command line utility domainjoin-cli in exactly the same way as you do for ESXi 6.0 hosts.

Log into your PSC as the root user via SSH or the console, then run the command (I used “find / -name domainjoin-cli” to locate the executable):

domainjoin-cli join <domain> <user> <password>

Reboot the PSC node for the change to take effect. If you are using multiple nodes, make sure you join all of them to the domain. You may also need to create an SPN for the PSC load balancer URL, as per KB2090617.

As you can see below, you can then configure the integrated Active Directory identity source for the default tenant:

The vROps HA conundrum
http://www.definit.co.uk/2015/08/the-vrops-ha-conundrum/
Thu, 27 Aug 2015 14:54:29 +0000

One of the great new features included in vROps is High Availability; however, when you look a little closer at how it works, careful thought needs to go into whether you want to use it or not.

I have had several discussions with my colleagues on the subject about whether you should or should not enable it in any given deployment of a vROps cluster.

So the following are my thoughts and bullet points for you to consider when faced with same dilemma.

By its very name, I wrongly assumed that it could be used as a way to tackle BC/DR concerns; it turns out the HA feature cannot span multiple logical datacenters – KB article – Forum discussion. I am hoping this gets resolved in a future release as it would be -very- handy.

So what other things do I need to take into account?

The Master node behaves like an index for your cluster – lose it and you lose your cluster – so HA can protect it, although it is no substitute for a proper backup solution. ".. Global xDB is used for data that cannot be sharded, it is solely located on the master node (and master replica if high availability is enabled)"

HA takes several minutes to "kick in", so one could argue: why not rely on vSphere HA instead (especially if your management cluster is tight on resources)?

HA would protect you against a LUN/Datastore failure assuming you had sensibly separated your nodes.

HA adds an additional node so if you are tight on resources it might not be an option for you.

Removal of data nodes (if you need to downsize your cluster) will result in data loss unless you have HA enabled.

These bullet points are by no means exhaustive, but they are essential information while you mull over the design choices for your next vROps cluster.

With a fully distributed vRealize Automation instance one of the critical components to maintaining uptime is determining whether any particular service is “up”. Out-of-the-box monitors allow us to detect if the port we are load balancing is open, but don’t determine whether the service on that port is functioning correctly.

Important: None of these monitors should be created until vRealize has been fully installed – doing these as you go along will result in installation failures. For example, if you create the monitor on the IaaS web service before the DEM roles are installed, the web service will always be down because it’s waiting for a DEM role.

Creating a NetScaler Monitor

To create the monitor open the NetScaler configuration page and open Traffic Management, Load Balancing then Monitors. Select the “https-evc” monitor and click “Add” – this pre-loads the settings from this monitor, which populates most of the settings we need.

Enter a name for the monitor, and leave the other parameters the same. Select the "Special Parameters" tab and configure the send string with the URL to monitor – e.g. for the PSC SSO it's going to be a GET for /websso/HealthStatus.

Assigning a NetScaler Monitor to a Service

Assign the monitor to the PSC Services (or Service Groups) configured for PSC by opening the Configuration > Traffic Management > Load Balancing > Services page and selecting the PSC service for HTTPS/443 and clicking Edit.

Click on “1 Service to Load Balancing Monitor Binding” under Monitors

You can see the default TCP connection monitor. Click “Add Binding”

Click the “Select Monitor” box and select the monitor that was created for this service (e.g. VRA-HTTPS-PSC-SSO) and then click Bind.

All being well, you should now see the new monitor bound and with a state of “Up” – the last response should be a success.

Repeat this for each Service or Service group you need to configure that monitor for.

vRealize Automation Load Balancer Monitors

The following monitors can be used for a distributed vRealize Automation installation.

vSphere 6 PSC (SSO) Monitor

The status of the SSO service on each PSC node can be monitored using the following address:

https://<psc-node-fqdn>/websso/HealthStatus

As you can see below, the desired response is “GREEN”.

vRealize Appliance (cafe) Monitor

The status of the Cafe service on each vRealize Appliance can be monitored using the following address:

https://<vra-appliance-node>/vcac/services/api/status

The desired response is “REGISTERED”

Warning: If all vRealize Appliances are shut down, this load balancer configuration will cause the appliances to fail on start-up unless you add the load balancer URL to the /etc/hosts file or temporarily change the monitor back to tcp-default. This is because the start-up process refers to the services via the load balanced URL – which won't be available until at least one appliance has initialised. Rebooting one appliance at a time will be fine. Thanks to Carl Prahl, Gregg Robertson and Omer Kushmaro for help working this out!

Be sure to add the line outside of the “# VAMI_EDIT” section, otherwise it will not persist past the next reboot.
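A minimal sketch of that edit, shown against a copy of the file so it can be demonstrated safely (the VAMI_EDIT_BEGIN/END marker names, the 192.168.50.60 load balancer IP and the vra.definit.local name are illustrative assumptions; on the appliance you would edit /etc/hosts itself):

```shell
# Demo file standing in for /etc/hosts on the appliance
# (marker names are assumptions based on the VAMI-managed section)
HOSTS=/tmp/hosts.demo
printf '%s\n' '127.0.0.1 localhost' \
  '# VAMI_EDIT_BEGIN' \
  '# lines here are rewritten by the VAMI on reboot' \
  '# VAMI_EDIT_END' > "$HOSTS"
# Insert the load balancer entry ABOVE the VAMI-managed section so it persists
sed -i '/# VAMI_EDIT_BEGIN/i 192.168.50.60 vra.definit.local' "$HOSTS"
cat "$HOSTS"
```

Anything placed between the markers is regenerated on boot, which is why the entry has to land above them.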

vRealize IaaS Web Monitor

The status of the Web service on each vRealize IaaS Web role can be monitored using the following address:

https://<vra-iaas-web-server>/WAPI/api/status

The desired response is “REGISTERED”

vRealize IaaS Manager Monitor

The status of the Manager service on each vRealize IaaS Manager role can be monitored using the following address:

https://<vra-iaas-manager-server>/VMPS2

The desired response is “BasicHttpBinding_VMPSProxyAgent_policy”
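When troubleshooting, the same checks the monitors perform can be run by hand, e.g. `curl -sk https://<vra-appliance-node>/vcac/services/api/status` and look for REGISTERED. A small sketch of the check logic itself, demonstrated on canned response bodies so it runs anywhere (the JSON body is illustrative, not the exact payload the appliance returns):

```shell
# check BODY EXPECTED: prints UP if the expected keyword appears in the body,
# mirroring the receive-string test the load balancer monitors perform
check() {
  case "$1" in
    *"$2"*) echo UP ;;
    *)      echo DOWN ;;
  esac
}
check '{"serviceInitializationStatus":"REGISTERED"}' 'REGISTERED'  # healthy cafe / IaaS Web
check 'RED' 'GREEN'                                                # unhealthy PSC
```

Substituting the body with a live `curl` response gives you a quick command-line version of each monitor in the table above.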

VCE Vision, vCOPs to vROps migration
http://www.definit.co.uk/2015/08/vce-vision-vcops-to-vrops-migration/
Mon, 17 Aug 2015 22:26:13 +0000

Recently I have been working with various products that complement, or are accessed by, vCOPs/vROps via management packs.

As vROps is still fairly new in comparative terms to other products like vCOPs and VCE Vision, when planning migrations from vCOPs to vROps it is naturally important to check compatibility and what is required for a successful migration.

If you are doing a greenfield deployment of vROps there is nothing to be concerned about – you simply need the VCE Vision appliance running version 2.6.1-2.6.5 and the latest Management Pack installed on your vROps cluster.

However! What I encountered when attempting a migration was, well, not so great.

Even after a successful and painless upgrade of the VCE Vision appliance, when attempting a migration from vCOPs to vROps a few things occur.

First of all, if you use the supported Vision management pack for vROps, it does not detect that the old and the new adapters are alike, so a migration cannot occur. I then decided to try the unsupported older management pack, which gave me the magical green tick. Sadly, after the migration it was very evident that the only thing migrated to the unsupported management pack adapter was the connection settings. No historical data was migrated, and after several unsuccessful attempts using various combinations of what I had already been through, it was clear something was not right.

So here comes the bad news.

After speaking to VCE and then being passed on to VMware it would seem for now a migration simply is not possible.

“VCE Vision MP is not on the compatibility list of adapters that support the migration process. Due to the lack of compatibility, this is likely why we will be unable to import the historical data from vCOps to vROps for this adapter.”

So that's it then, for now, until it is supported: if you plan to migrate from one to the other, bear the above in mind and plan accordingly – whether that be keeping your vCOPs instance running side by side as a reference until you are comfortable with the data collected by vROps, holding fire until migration is supported, or making the jump and binning vCOPs wholesale.

Now that the prerequisites for the IaaS layer have been completed, it's time to move on to the actual installation of the IaaS components, starting with the database. We then move on to the first Web server, which also imports the ModelManagerData configuration to the database, populating it with all of the info the IaaS layer needs out of the box. We then install the second Web server before moving on to the active Manager server. The second Manager server is passive and the service should be disabled – I'll cover installing DEM Orchestrators, Workers and the vSphere Agents in the next article.

Install IaaS Database

There are three methods of creating the IaaS database; depending on your setup and security model you can:

Point the installer at the MSSQL server and let it create the database and populate it for you. This is the simplest option but requires the service account to have sysadmin privileges on the MSSQL instance. These can be pared back after install though; this is my preferred option and the one I’ll cover in this article.

Create an empty database using scripts provided, then use the installer to populate it for you. This is relatively simple and normally works for environments where database administrators are responsible for a shared MSSQL cluster and don’t want to delegate control. The service account needs dbo permissions on the created database for this option.

Create and populate the database using the scripts provided. This option is for database admins who want full control and want to verify the configuration before it’s deployed. The service account still needs dbo permissions to run, but the installer does not do any of the configuration.

Create the database

Log onto the first IaaS server as your service account and run the IaaS installer file that we downloaded to c:\vRA in the pre-requisites.

Click Next, accept the EULA and click Next.

Enter the root credentials for the vRealize Appliance and accept the certificate, then click Next. Select the Custom Install option and select IaaS Server. Change the install location if required and click Next

Tick the Database option and configure the database server and database name. Everything else can stay as default. Click Next and ensure that nothing fails the prerequisites. Click Next

Review the summary, then click Install. Once the install is complete, click Next, then finish the wizard.

Configure MSDTC

MSDTC needs to be configured on the SQL server as well as the IaaS Web/Manager servers (where the pre-requisite script handily configures it for you). I will be publishing an article on MSSQL and MSDTC clustering for vRealize Automation soon; this section covers the basic configuration of MSDTC on a stand-alone SQL box.

Open Component Services, expand Computers, My Computer, Distributed Transaction Coordinator, then right click Local DTC and select Properties. Select the Security tab and configure the settings as below:

Install the primary Web Server

Run the IaaS installer again and this time select the Website and ModelManagerData options.

On the “Administration & Model Manager Web Site” tab you can normally accept the defaults for everything except the certificate, which should be the one you generated and imported earlier. If you can’t see your certificate, try unchecking the “Display Certificates using certificate-friendly names.” or adding a friendly name using mmc.exe, then clicking Refresh.

Enter the credentials for the service account to run the vRealize services. Create an encryption passphrase to protect the data at rest (in the database). Use 8 or more alphanumeric characters, but avoid special characters which can cause problems during the installation. Configure the MSSQL database server and name.
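As an aside, a compliant encryption passphrase (alphanumeric only, eight or more characters) can be knocked up on any Linux box; a quick sketch, where 16 characters is an arbitrary choice above the minimum:

```shell
# Generate a 16-character alphanumeric passphrase: no special characters,
# per the installer note above (16 is an arbitrary length above the minimum of 8)
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo
```

Keep whatever you generate somewhere safe – the same passphrase is needed again when installing the Manager servers.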

Install the secondary Web Server

Before doing this step, log onto your load balancer and ensure that the Web service is up and running:

Log onto the secondary web server as the service account and run the IaaS installer as before, this time though we are only installing the Website component. Everything else is configured as the first web server.

Install the active Manager server

Before doing this step, log onto your load balancer and ensure that the Web service is up and running:

Log onto the active Manager server using the service account and run the IaaS installer. Run through until you get to the custom IaaS server install and select "ManagerService". Enter the FQDN of the IaaS Web server load balancer and ensure the "Active node" option is selected. Select the IaaS Manager certificate and test that nothing is using the 443 binding. Click Next.

The prerequisite checker will run – ensure that any problems detected are resolved, then click Next.

Enter the service account credentials, the Security Passphrase used during the Web service installation and the database server details. Click Next, then click Install.

Once the install is completed, click Next, untick the “Guide me..” box and click Finish.

Install the passive Manager server

Before doing this step, log onto your load balancer and ensure that the Web and Manager services are up and running:

Log onto the passive Manager server using the service account and run the IaaS installer. Run through until you get to the custom IaaS server install and select "ManagerService". Enter the FQDN of the IaaS Web server load balancer and ensure the "Disaster recovery cold standby node" option is selected. Select the IaaS Manager certificate and test that nothing is using the 443 binding. Click Next.

The prerequisite checker will run – ensure that any problems detected are resolved, then click Next.

Enter the service account credentials, the Security Passphrase used during the Web service installation and the database server details. Click Next, then click Install.

Once the install is completed, click Next, untick the “Guide me..” box and click Finish.

Open Services.msc and check that the “VMware vCloud Automation Center Service” is not running, and is set to Manual:

One of the trickiest parts of deploying vRealize Automation is the IaaS layer – people sometimes look at me like I’m a crazy person when I say that, normally because they’ve deployed a PoC or small deployment with just a single IaaS server. Add in 5 more servers, some load balancers, certificates, a distributed setup and MSDTC to the mix and you have a huge potential for pain!

If you've followed my previous posts, you'll know that I've got a HA Platform Services Controller configured, and a HA vRealize Appliance cluster configured with Postgres replication – all good so far.

There are loads of ways to deploy the IaaS layer, depending on the requirements and the whim of the architect who designed it – but the setup I deploy most often is below:

Active/Active IaaS Web components on two servers and load balanced

Active/Passive IaaS Manager components on two more servers and load balanced, with a DEM Orchestrator also running on each host

Two more servers running the DEM Worker and Agents – typically vSphere Agents.

Keeping the web components separate ensures your users’ experience is snappy and isn’t degraded during deployments. The manager service is load balanced, but has an active and a passive side which needs to be manually started in the event of a failover. The DEM Orchestrator roles orchestrate the tasks given to the DEM Workers and don’t put much load on the servers. Finally, the DEM Workers and Agents are placed on two more servers – the DEM Orchestrators load balance the workloads sent to each DEM Worker and in the case of a failed DEM Worker will resume the workload on the other Worker. The vSphere agents are configured identically and provide high availability for the service, workloads will just use whichever agent is available for a particular resource.

I have already deployed and configured a Microsoft SQL Server 2008 R2 Failover Cluster, with a local MSDTC configured on each node. I hope to publish a blog post on this in the near future – for this article it’s not important, just that there’s a MSSQL database available and MSDTC is configured on the database server.

Pre-requisites

The pre-requisites here are for ALL the IaaS components.

Create DNS records

Create an A and PTR DNS record for the two load balanced URLS

Deploy six Windows Servers

I’m using Windows Server 2012 R2, deployed from my basic template and joined to the domain. Each is assigned a static IP address and because they’re domain joined their DNS records are updated.

VRA-WEB-01

VRA-WEB-02

VRA-MAN-01

VRA-MAN-02

VRA-IAS-01

VRA-IAS-02

Configure BackConnectionHostNames or set DisableLoopbackCheck

This step is required on the servers hosting the Web component of IaaS.

There's a known issue with accessing websites hosted by IIS using NTLM authentication via a name other than the server's DNS hostname, which results in a "HTTP 401.1 – Unauthorized: Logon Failed" error. This is one of the most common reasons the IaaS installation fails. It can be solved by setting BackConnectionHostNames or by setting DisableLoopbackCheck; the latter is the least secure option as it completely disables the loopback protection, whereas BackConnectionHostNames selectively disables it for only the names you list, making it the much better option.

There are two steps to this – set “DisableStrictNameChecking” to “1”, then add a multi-string value BackConnectionHostNames with the load balancer URL as the value.

I have written this PowerShell script to configure the BackConnectionHostNames, just edit the $loadbalancerUrl value to match your FQDN:

Note: It’s important to reboot the web server after this step to ensure the settings are in effect.

Create a service account

vRealize Automation IaaS requires a service account to run the services and provide access to the database. Create an account and add it to each of the six Windows Servers as a local administrator.

Generate Certificates for IaaS Components

To try and keep this blog post to a reasonable size, I won’t go through the process of requesting certificates. I have generated two certificates using the vSphere certificate template as detailed below:

Each of the web and manager servers should have the relevant certificate (including the private key) installed in the Computer > Personal certificate store, and should be visible in IIS manager when it’s installed. It’s also worth ensuring a “Friendly Name” is set in the certificate store – this makes it easier to find in the installation process.

Make sure you reboot the Web/Manager servers after running the script.

Configure NetScaler Load Balancer for IaaS Web

The IaaS web load balancer is a relatively simple affair, load balancing the two active web services on HTTPS/443. The health check should look for the presence of “REGISTERED” on the URL /WAPI/api/status – however the initial check should be set to a basic HTTPS/443 connection during the installation phase.

Create the Servers

Create the Services

Create a VIP

Ensure the persistence is set to “SOURCEIP” with a timeout of 2 minutes.

Configure NetScaler Load Balancer for IaaS Manager

The load balancer requirements for the Manager service are pretty simple – it's basically an active/passive service and should be configured to point to both nodes, testing the /VMPS2 address for a "ProxyAgentService" response. However, the initial check should be set to a basic HTTPS/443 connection during the installation phase, and the secondary (passive) server should be disabled to ensure it doesn't come up during the install process.

Create the Servers (be sure to disable the passive one!)

Create the Services

Create a Virtual Server

Since it’s active/passive, no persistence is required.

My VCAP5-DCD experience
http://www.definit.co.uk/2015/08/my-vcap5-dcd-experience/
Fri, 07 Aug 2015 08:59:26 +0000

A few months ago I decided to tackle the VCAP5-DCD exam, and when booking I gave myself two months to study before the date of the exam.

I was keenly aware that, at least in this calendar year, the VCAP5-DCD exam had undergone some changes, e.g. it no longer has multiple choice questions (think VCP5-DCV in terms of format).

There was a wealth of knowledge out there from people who had either passed, failed or were currently studying, so it didn't take too long to get an understanding of what disciplines and knowledge VMware were looking for in the exam.

I elected to study for around an hour a day to hopefully cover all of the material required without burning myself out. While this did work for the most part, there was the inevitable last few days of cramming and panic.

As I am sure you have heard many times before, I cannot say much about the actual exam, but I did finish it with 13 minutes to spare; I was aware others had struggled to get everything done in the allocated time.

Being honest, I did not expect to pass, as some of the questions were really quite brutal in their ambiguity (my opinion). To counter this during the exam, I completed the questions in a set order (master design, design and then the others) before re-reading each question to double-check I had not missed something – in some cases I had, and I was able to correct some mistakes.

There is no question time management is the key, you have to be ruthless and disciplined to ensure you get time to cover the design questions as well as the other questions in the exam but it is possible!!

Also factor in the usual "what answer is VMware actually looking for?" – as with any tech-related exam, they are looking for -their- answer, not necessarily the answer or solution others would use out in the field.

Topics you are going to NEED to cover in your studies

Storage (requirements, dependencies, features, limitations of the various types of storage)

Auto Deploy (yes really)

Understand the following terms and how to identify them.

Requirements

Risks

Constraints

Assumptions

Disaster recovery and Business Continuity particularly understanding what RPO, RTO, WRT & MTD are and how they differ.

Fully understand (or at least have a working understanding of) technologies like vSphere HA/DRS/SIOC/NIOC/VASA/VAAI/ALUA

Good working knowledge of networking.

A good understanding of how tiered applications work and are designed.

As I said earlier, I was a little surprised I had passed! I found it very challenging, but I shall not complain – I know many before me have studied very hard and still failed. Do not underestimate this exam.

vRealize Automation Infrastructure Tab displays incorrect labels
http://www.definit.co.uk/2015/07/vrealize-automation-infrastructure-tab-displays-incorrect-labels/
Fri, 24 Jul 2015 08:24:35 +0000

Having just completed a particularly problem-prone distributed IaaS install, this was almost the straw that broke the camel's back. Logging into vRealize Automation for the first time as an Infrastructure Admin displayed the infrastructure tab and all menu labels as big ugly references, and no functionality:

Rebooting the IaaS web servers restored the functionality of the IaaS layer but still did not fix the label issue; it took a further reboot of both vRealize Automation appliances, then the IaaS web servers, to finally see the correct labels.

As part of some testing I’ve been doing for vRealize Automation DR scenarios, I wanted to test changing the IP address of a HA PSC pair using a script (think SRM failover to a new subnet).

What I didn't want to do was simply edit the connections directly – quite often the VMware appliances run scripts on start-up to ensure the configuration is correct and consistent – so I wanted to find a more supported and reliable way.

Fortunately the VAMI scripts are deployed on most appliances and are included on the PSC. I was able to work out a process (mostly by trial and error!) of getting the IP change to stick.

# Update the network IP address (this is for IPv4, there are options for IPv6 too, and DHCP)
/opt/vmware/share/vami/vami_set_network eth0 STATICV4 192.168.10.52 255.255.255.0 192.168.10.1
# This updates the IP in /etc/hosts - requires the FQDN as an argument or sets it to localhost.localdomain
/opt/vmware/share/vami/vami_set_hostname vra.definit.local
# This makes the changes "stick" on reboot
/opt/vmware/share/vami/vami_ensure_network_configuration eth0
reboot

I successfully used the Guest Script Manager package from the VMware Center of Excellence to store and execute the script via vRealize Orchestrator, as well as running a bash script directly on the host. This worked during my testing to modify both of the IP addresses in a PSC HA cluster, and allowed (with some DNS changes) fail-over to a completely different subnet.

The recommendations for the vRealize Appliance have changed with 6.2, the published reference architecture now does not recommend using an external Postgres database (either vPostgres appliance, a 3rd party Postgres deployment or using a third vRealize Appliance as a stand-alone database installation). Instead the recommended layout is shown in the diagram below. One instance of postgres on the primary node becomes an active instance, replicating to the second node which is passive. In front of these a load balancer or DNS entry points to the active node only. Fail-over is still a manual task, but it does provide better protection than a single instance.

The cafe portal and APIs are still load balanced in an active/active configuration and are clustered together.

Prerequisites

The following pre-requisites should be configured before deploying

Single Sign On

As I discussed in my previous post, there are now three options for single sign on in vRealize Automation 6.2: the trusty old Identity Appliance, whose availability is based solely on vSphere HA; vSphere 5.5 SSO; and vSphere 6 Platform Services Controllers – the latter two can both be clustered behind a load balancer to provide high availability.

IP Addressing and DNS Records

Four IP addresses and DNS records are required – one for each node, and one for each load balanced service. My DNS records are configured as below:

Load Balancing

Load balancing vRealize Appliance web (HTTPS) and postgres traffic is a very simple affair and doesn’t require a lot of advanced configuration – most load balancers will be fine. As I’ve already deployed the NetScaler VPX Express for the HA PSC setup, I will continue using that.

Typically I prefer to use NSX Edge Gateways, however these are not supported for the PSC configuration.

vRealize Appliance SSL Certificate

A certificate must be generated with both appliance short names, FQDNs and IP addresses, as well as the short name, FQDN and IP address of the load balanced URL.

Create an openSSL cfg file (mine is saved as Z:\Certificates\vra-app\vra-app.cfg):
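A minimal sketch of what such a config needs to contain, based on the name/IP requirements above, together with the key and CSR generation. The DN fields and the 192.168.50.60 load balancer IP are illustrative assumptions (the node IPs match the deployment table later in the post); substitute your own names and IPs:

```shell
# Write a minimal OpenSSL request config covering both nodes and the
# load balanced name (SAN values are illustrative; match your environment)
cat > /tmp/vra-app.cfg <<'EOF'
[ req ]
default_bits = 2048
prompt = no
distinguished_name = dn
req_extensions = v3_req
[ dn ]
CN = vra.definit.local
O = DefinIT
[ v3_req ]
subjectAltName = DNS:vra, DNS:vra.definit.local, IP:192.168.50.60, DNS:vra-app-1, DNS:vra-app-1.definit.local, IP:192.168.50.61, DNS:vra-app-2, DNS:vra-app-2.definit.local, IP:192.168.50.62
EOF
# Generate the private key and the CSR to submit to the CA
openssl req -new -nodes -newkey rsa:2048 -config /tmp/vra-app.cfg \
  -keyout /tmp/rui-orig.key -out /tmp/rui.csr
# Sanity-check the subject before submitting the request
openssl req -in /tmp/rui.csr -noout -subject
```

Checking the CSR with `openssl req -in rui.csr -noout -text` before submission saves a round trip to the CA if a SAN is missing.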

Submit your request to your Certificate Authority using an advanced certificate request, copy the contents of the rui.csr file and paste it into the request form, selecting your VMware template.

Download the base 64 encoded certificate and save it in the same folder as the key. Also download any root and subordinate CA certificates in the chain in base 64 format.

Configure the NetScaler Load Balancer

Log onto the NetScaler admin console and go to Configuration, Traffic Management, Load Balancing, Servers and click Add to enter the first vRA Appliance node’s IP. Repeat for the second node.

Go to Configuration, Traffic Management, Load Balancing, Services and click Add to configure a service on TCP port 5432, and SSL port 443 on each appliance node (they will show as down for now – that’s fine)

Go to Configuration, Traffic Management, Load Balancing, Virtual Servers and click Add to configure a virtual server for port 5432 on the load-balanced IP for Postgres (configured earlier with the vra-db.definit.local DNS entry). Also create a virtual server on SSL port 443 on the load-balanced IP.

For the SSL/443 virtual server, click “No Load Balancing Virtual Server Service Binding” and add the two relating services, with the default weight of 1.

Click on “No Server Certificate” and add the SSL certificate (vra-app.cer) and private key (rui-orig.key) generated earlier, then click on “No CA Certificate” and add the CA certificates for the appliances.

On the right hand side, click on the “Advanced” option to add “Persistence” and then configure for SOURCEIP with a timeout of 30 minutes.

For the TCP/5432 virtual server, bind only the first node's service (failover is manual). No certificates or persistence are required.
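For reference, the equivalent NetScaler CLI looks roughly like this. The entity names and the VIP placeholders are my own inventions – the GUI steps above achieve the same result:

```
add server vra-app-1 192.168.50.61
add server vra-app-2 192.168.50.62
add service svc_vra_app1_443 vra-app-1 SSL 443
add service svc_vra_app2_443 vra-app-2 SSL 443
add service svc_vra_app1_5432 vra-app-1 TCP 5432
add lb vserver vs_vra_app SSL <vra-app VIP> 443 -persistenceType SOURCEIP -timeout 30
bind lb vserver vs_vra_app svc_vra_app1_443 -weight 1
bind lb vserver vs_vra_app svc_vra_app2_443 -weight 1
add ssl certKey vra-app-cert -cert vra-app.cer -key rui-orig.key
bind ssl vserver vs_vra_app -certkeyName vra-app-cert
add lb vserver vs_vra_db TCP <vra-db VIP> 5432
bind lb vserver vs_vra_db svc_vra_app1_5432
```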

Deploy the two vRealize Appliance nodes

Deploy two appliances using the OVF deploy wizard, ensuring you enable SSH on each node. I've configured my nodes as below.

Property      Node 1 Value              Node 2 Value
Name          vra-app-1                 vra-app-2
IP Address    192.168.50.61             192.168.50.62
Subnet Mask   255.255.255.0             255.255.255.0
Gateway       192.168.50.1              192.168.50.1
DNS Name      vra-app-1.definit.local   vra-app-2.definit.local

Finish the wizard but do not power on the appliance after completion.

Configure vRealize Appliance nodes

Edit both vRealize Appliance virtual machines using the vSphere Web Client, and add a 20GB disk to each before powering them on:

Download the scripts attached to KB2108923 and copy them to the /tmp folder on the first appliance using WinSCP or similar.

SSH to the first appliance node using PuTTY and move to the /tmp folder. Unzip the file:

unzip 2108923_dbCluster.zip

This unzips a .tar file in the /tmp directory. Extract that using the tar command:

tar xvf 2108923_dbCluster.tar

There are now two scripts in the /tmp directory, configureDisk.sh and pgClusterSetup.sh

Use the command “parted -l” to identify the unformatted disk – on a brand new vRA 6.2 appliance it should be /dev/sdd.

Now configure the disk using the script extracted before:

/tmp/configureDisk.sh /dev/sdd

Now prepare the postgres instance for clustering using the second extracted script:
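The invocation was along these lines – only the -r and -p options are shown here, supplying separate replication and postgres passwords; see KB2108923 for the full usage:

```
/tmp/pgClusterSetup.sh -r <replication_password> -p <postgres_password>
```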

To use the same password for all three, just omit the -r and -p options – not a good idea for production, but OK to use in lab/testing.

Repeat this process (add a disk, extract and run configureDisk.sh, then pgClusterSetup.sh) on the second appliance.

Now SSH to the second appliance and configure the second node to run as a replica of the first using the “run_as_replica” script.

./run_as_replica -h <Primary Appliance> -b -W -U replicate
[-U] The user who will perform replication. For the purpose of this KB this user is replicate
[-W] Prompt for the password of the user performing replication
[-b] Take a base backup from the master. This option destroys the current contents of the data directory
[-h] Hostname of the master database server. Port 5432 is assumed

Configure the primary vRealize Appliance node

Configure NTP settings

Select the admin tab, then Time Settings, then select “Use Time Server” and specify the time servers. For a lab environment just using one is OK, but for production it’s recommended to use 3-5. Click Save Settings to apply.

Verify Database Settings

The database settings should have been updated to use the load-balanced URL configured earlier. Click on the vRA Settings tab, then Database, to check:

Install SSL Certificates

Return to the administrative interface on the first vRealize Appliance node, select vRA Settings and then Host Settings. Select “Update Host” and enter the load balancer FQDN under “Host Name”.

Select “Import” under SSL Configuration and paste the contents of “rui-orig.key” in the RSA Private Key field. Open the downloaded certificate in notepad and paste the contents into the “Certificate Chain” field. Immediately underneath, paste the next certificate in the chain. If you have an intermediate CA, paste that first, then the root CA.

Click Save Settings to submit – both the host name and the SSL configuration should be updated with the FQDN of the load balancer.

Configure Single Sign On

Click on vRA Settings and then SSO, and configure the FQDN of the load-balanced URL for the HA PSC. Change the SSO Port to 443 (7444 is correct for vSphere 5.5 SSO or the Identity Appliance). Enter the SSO admin password. If you check the “Apply branding” box it will change the SSO sign-on page from “VMware vCenter Single Sign-On” to “vRealize Automation Single Sign-On”, so if it's a shared SSO platform you need to think about the implications for users – they may get confused!

Click Save Settings, and then (after quite a few minutes) SSO will be initialized.

Click on “Services” tab and wait until all 23 services are initialized – you can monitor progress by SSHing to the first appliance and running “tail -f /var/log/vcac/catalina.out” to view progress. This can take 15-30 minutes – be patient!

Add the second vRealize Appliance to the cluster

Open the vRealize Appliance administrative interface on the second node (https://vra-app-2.definit.local:5480) and log in using root and the password you specified during the deploy.

Configure NTP settings

Select the admin tab, then Time Settings, then select “Use Time Server” and specify the time servers. For a lab environment just using one is OK, but for production it’s recommended to use 3-5. Click Save Settings to apply.

Configure Cluster

Select the vRA Settings tab, Cluster and enter the name of the first node and the root password.

Click “Join Cluster” and accept the certificate.

It takes a few minutes, as with the first node, for all the services to initialize and register – however, no additional configuration is required on the second node; the join-cluster process configures everything required.

Providing a highly available single sign-on for vRealize Automation is a fundamental part of ensuring the availability of the platform. Traditionally, vRA (formerly vCAC) used the Identity Appliance and relied on vSphere HA to provide the availability of the SSO platform, but in a fully distributed HA environment that's not really good enough. It's also possible to use the vSphere 5.5 SSO install in a HA configuration – however, many companies are making the move to the latest version of vSphere and don't necessarily want to maintain a 5.5 HA SSO instance.

The vSphere 6 Platform Services Controller can be deployed as an appliance or installed on a Windows host – personally I am a huge fan of the appliances and I tend to use them in my designs because of their simplicity and ease of use. A pair of PSCs can be deployed as a highly available SSO solution for vRealize Automation 6.2, replacing the Identity Appliance or vSphere 5.5 SSO, using either a NetScaler or F5 load balancer to load balance connections and provide the availability.

Personally, I’d prefer to use an NSX Edge Services Gateway to load balance the PSCs, but at the time of writing the Edge does not support the “Ability to have session affinity to the same PSC node across all configured ports”. See KB2112736 for more details.

So, this guide will show you how to create a highly available pair of Platform Services Controllers, configure one as a subordinate Certificate Authority to a Microsoft Certificate Services CA, and then load balance them with a NetScaler VPX. Although I am using just two nodes, the same method can be used to load balance up to four.

Pre-requisites

Firstly, we need to ensure some of the pre-requisites are completed.

Create DNS A and PTR records

An A and PTR (forward and reverse) DNS record needs to be created for each PSC and the load balancer address.

Configure Distributed Port Group

Ensure that the target Distributed Port Group’s port binding is set to Ephemeral. This is a requirement for the vSphere 6 vCenter Server Appliance deploy because it’s pushing the appliance to an ESXi host not a vCenter Server. Once the appliances have been deployed they can be migrated to a non-Ephemeral port group.

Deploy the first PSC Node

Run the vCenter Server Appliance installer (vcsa-setup.html on the ISO), at this point you might need to install the Client Integration Plugin. Click Install.

Accept the EULA and configure the target ESXi host

Accept any SSL warning and then configure the appliance name and root password. Select the PSC install type.

Configure the new SSO domain. The PSC appliance is sized at 2 vCPUs, 2GB of RAM and a 30GB disk.

Note: For vRealize Automation it must use the default domain “vsphere.local” and site name “Default-First-Site”.

Select the target storage and configure the networking.

Note: As I mentioned earlier, only Ephemeral port groups are visible here.

Configure NTP (critical for vRA deployments) and enable SSH, because we'll need it later to configure PSC HA.

Review and deploy the node:

Deploy the second PSC Node

The second PSC node is identical to the first, save the name (I’m using vra-psc-2) and the SSO configuration. Instead of creating a new domain, we join the second node to the SSO for the first node, and select the previously created site:

And complete the wizard

Configure the VMCA as a Subordinate CA

The certificates are generated in a folder called VMCA, under the folder configured in the script for output. Copy the output to a separate folder before generating the second CA certificate. To be clear, you need a VMCA certificate generated for each PSC node, using the PSC node’s FQDN, not the load balanced FQDN.

Here’s my two:

Install and configure the VMCA certificates

Upload and install the certificates using the instructions in Derek’s article – I’ve uploaded the two required files to /root/ssl

Run the certificate-manager script to replace the “VMCA root certificate with custom signing certificate and replace all certificates”

/usr/lib/vmware-vmca/bin/certificate-manager

Configure the certificate configuration, ensuring that the “hostname” field is the FQDN of the PSC node:

Once the install is completed, restart the appliance, or restart all the vCenter services using 'service-control --stop --all' and 'service-control --start --all'.

Repeat on the 2nd PSC node using the certificates generated for it.

Configure PSC with HA Scripts

Download the HA scripts from VMware and copy them to the /tmp directory on the first PSC node. How you copy the zip file up to the appliances is up to you.

If you find SCP problematic with the different shells, it's possible to temporarily mount an NFS share and copy the file from there – e.g.:
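Something along these lines works, assuming an NFS export is available (the server IP and export path here are made up):

```
mount -t nfs 192.168.50.10:/export/tmp /mnt
cp /mnt/VMware-psc-ha-6.0.0.2503195.zip /tmp/
umount /mnt
```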

However, if you’ve enabled bash as the default shell for the root user, just copy them up using WinSCP.

Once the scripts are on the appliance, create a /ha directory “mkdir /ha” then unzip the scripts to the /ha directory:

unzip VMware-psc-ha-6.0.0.2503195.zip -d /ha/

Still on the first PSC node, run the gen-lb-cert.py script using the FQDN of your load balancer:

python gen-lb-cert.py --primary-node --lb-fqdn=vra-psc.definit.local

Copy the /etc/vmware-sso/keys folder to /ha/keys

cp -r /etc/vmware-sso/keys /ha

The /ha folder now looks something like this:

Copy the contents of the /ha folder over to the second node’s /ha folder. How you achieve this is up to you, WinSCP or a shared NFS mount work for me. It’s important to ensure that the “keys” folder is copied across, with the contents of /etc/vmware-sso/keys.

Configure certificates on the second PSC node

Configure NetScaler VPX Load Balancer

Let me just caveat this with “I am a NetScaler newbie” – this worked for me but if you’re a guru, you may know better!

Ensure that you have SSL Offloading enabled before you start – System, Settings, Configure Basic Features. Failing to do so means your port 443 Virtual Server won't come up until you do – you'll see in my screenshots below that I didn't have it enabled, but by the magic of blogging I'll make it look good and tell you here!

Configure Certificate Trusts

Download the following files from the /ha folder: lb.crt, lb_rsa.key and root.cer.

Once again, create a Virtual Server for each of the ports and bind the relevant Services (node 1 with weight 1, node 2 with weight 10) to each Virtual Server. Port 443 is the only one that requires the Server Certificate Binding.

Deploy the VMTurbo Appliance

Deploying the appliance is simply a case of importing the downloaded OVA. There's nothing really to configure and it took 61 seconds in my lab environment, so it's pretty quick! Network configuration is via DHCP, and you can configure a static IP by logging into the console and running “ipsetup”.

Out of the box the appliance has 4 CPUs and 16GB of RAM – I don’t have any hosts that will support 4 CPUs in my lab, and 16GB of RAM would be overkill for my tiny environment. I have dropped it to 2 CPU and 8GB for now.

Even with the reduced specs, the appliance booted quickly and presented a login screen:

Configuring a Static IP

If you want to assign a static IP, log in using root/vmturbo and run ipsetup:

Once the basics are configured, it’ll ask if you want to configure a proxy server – I don’t, so I skip this using F10.

The networking services are restarted and that’s the network configured with a static IP!

Logging on to the user interface

Open your web browser and enter the IP address of the appliance

Log in using administrator/administrator credentials, and a setup wizard will begin.

You should have received a license key by email when you signed up, so select “I have a license”.

Next configure a Target – for me this is my vCenter Server Appliance, I used a service account with read-only access to the vCenter server.

Once the target is added, it will take a few moments to collect the inventory

Configure the email notifications

And that's the appliance's basic configuration complete – obviously it's going to need some time to pull in data and monitor your environment before it can show anything useful.

Unable to connect NSX to Lookup Service when using a vSphere 6 subordinate certificate authority (VMCA)
Mon, 29 Jun 2015
http://www.definit.co.uk/2015/06/unable-to-connect-nsx-to-lookup-service-when-using-a-vsphere-6-subordinate-certificate-authority-vmca/

After deploying a new vSphere 6 vCenter Server Appliance (VCSA) and configuring the Platform Services Controller (PSC) to act as a subordinate Certificate Authority (CA), I was unable to register the NSX Manager with the Lookup Service. Try saying that fast after a pint or two!

Attempting to register NSX to the Lookup Service would result in the following error:

Initially I thought that the NSX manager needed to somehow import the VMCA certificate to trust the Lookup Service certificate, however after reaching out to the NBSU ambassadors list I had a reply from Julienne Pham, a Technical Solutions Architect and CTO Ambassador with VMware Professional Services, who pointed me to the correct solution.

It seems that changing the PSC and vCenter certificates (even with the Certificate Manager tool) does not correctly update the service registration information. To quote VMware KB 2109074:

…the vCenter Server system uses a new certificate, but the service registration information on the Platform Services Controller is not updated

To resolve this issue, we need to use the ls_update_certs.py script to register the services correctly.

Retrieve the old SSL certificate’s thumbprint

If you haven’t updated the VCSA certificate yet, you can just view the vCenter certificate and find the sha1 thumbprint value. If, like me, you’ve already updated it, you’ll need to use the Managed Object Browser (MOB) to view it.

Enter valid administrator credentials and then edit the “VALUE” field to have just the <filterCriteria></filterCriteria> tags, then click “Invoke Method”.

Use the browser’s built in search function (CTRL+F) to find the FQDN of your VCSA server – you’re looking for entries like the below image, where the sslTrust value is a Base64 encoded string which represents the old certificate.

Copy the Base64 string into a text editor and save it somewhere.

Open the saved certificate and find the SHA1 Thumbprint:

Copy the thumbprint back to your text editor and edit the string to replace the spaces with colons:
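As an alternative to the Windows certificate viewer, the whole extraction can be done with openssl. This sketch simulates the Base64 sslTrust value with a throwaway self-signed certificate – in reality you'd paste the string from the MOB into old-cert.b64, and the CN here is made up:

```shell
# Simulate the sslTrust value with a throwaway self-signed cert
# (in reality, save the Base64 string from the MOB to old-cert.b64)
openssl req -x509 -newkey rsa:2048 -nodes -keyout old.key \
  -subj "/CN=vcsa.definit.local" -days 1 -outform der -out old-cert.der
base64 old-cert.der > old-cert.b64

# Decode the Base64 sslTrust string back to DER and print the SHA1
# thumbprint (already colon-separated, so no space editing needed)
base64 -d old-cert.b64 > decoded.der
openssl x509 -inform der -in decoded.der -noout -fingerprint -sha1
```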

Update the Lookup Service registration certificates

You need to have a copy of your PEM encoded certificate chain on your VCSA. If you followed Derek Seaman’s excellent guide, you’ve already enabled bash and changed the root user’s default shell, and you also have the certificate chain in PEM format (under /root/ssl). If you haven’t, check out the “Install VMCA Certificate (VCSA PSC)” step on the linked article.

Connect to the VCSA with your SSH client and use the following command to update the lookup service certificate:
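A sketch of the call, based on KB 2109074 – on my appliance the script lives under /usr/lib/vmidentity/tools/scripts/, and the placeholders (lookup service FQDN, thumbprint, certificate path, password) are yours to fill in:

```
cd /usr/lib/vmidentity/tools/scripts
python ls_update_certs.py --url https://<psc_or_vcsa_fqdn>/lookupservice/sdk \
  --fingerprint <old_sha1_thumbprint> \
  --certfile /root/ssl/<new_machine_cert>.crt \
  --user administrator@vsphere.local --password '<sso_password>'
```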

We regret to inform you that your attempt to achieve VCDX certification on June 09-11, 2015 in Frimley, UK was unsuccessful.

It wasn't entirely unexpected, but I still hoped my assessment of the defense had been too pessimistic, so it was nonetheless disappointing. It's a big hit to not achieve something I have been focusing on for months, and it is hard not to feel embarrassed that I didn't make the grade. I am looking forward to receiving some feedback from the panelists and will be gearing up for another attempt in October.

Attaining VCDX is a step on a long learning journey and from the start I approached this defense as that – a learning experience. It’s an opportunity to focus and set a goal, and to push myself to reach that next step.

Design Defense

I felt that I defended my design relatively well – although the panelists exposed some issues, I was able to present explanations and reasoning behind each one, and I had anticipated a lot of the questions they posed. Going in, I felt that the defense of my design was something I should be all over – it was written from start to finish by myself. I may be wrong, and if the feedback is that I didn't defend it well, I'll need to reassess this.

What will I do differently?

Plug the gaps – where the panelists exposed an issue in my design I will re-work the solution to close that gap. Where something needed explanation I’ll expand it.

Not “over answer” questions – one of the things I learned from doing mocks was that I wasted time explaining things that I didn’t need to. I’ve seen advice to have a high, medium and low level explanation prepared for everything, and I think that’s a good approach. Give the minimum explanation unless pressed for more.

Design Scenario

I had not anticipated how nerves in the design scenario would affect me. The design methodology is something I do day in, day out – it’s not something that would normally cause me to stumble. What actually happened was that I panicked trying to do a whole detailed design in 45 minutes and lost the structure and method that I normally use. Normally, I would have a couple of weeks to work through the design process. Instead, I started to pull out the Business Requirements and then jumped ahead to a conceptual diagram and then moved back to start the use cases and then jumped to a physical design. Not following through the design method stopped me from moving through the design in a logical manner and I think that’s where I came undone.

What will I do differently?

Follow the methodology – clearly the panelists are not expecting me to produce a low-level design in 45 minutes; they want to see me work through the method, and they can't if I don't! Not following the method led to random jumping around with no structure, and probably didn't show the panelists the confidence required.

Don’t rush in – stop, breathe, think – not for too long, but I started talking and ran down random avenues because I didn’t take a beat and calm myself.

Mock scenarios – I didn’t spend as much time doing mock scenarios as I did preparing to defend my design, certainly not with the time and pressure of the real defense. I’ll be getting as many mock scenarios together as I can.

Troubleshooting Scenario

Troubleshooting isn’t something that you end up doing as often when you’re working as a consultant/architect but it is something I do a lot in my lab. Having panicked in the previous scenario I tried to recover my composure somewhat for this one, but once again I think I rushed in too quickly. The scenario is not about the solution, it’s about the method – a structured, logical approach is key. I found it hard to judge how I did in this one, I did rush headlong down one particular avenue on more of a “gut feeling” than I should have, but we shall see what the feedback is.

What will I do differently?

Go slower – establish the facts and the basics before focusing in too quickly on what I think the problem is. The CMA troubleshooting scenario is 30 minutes long and after 20 minutes I had run out of ideas in the narrow field I had focused on.

Mock scenarios – again, the focus of my mocks was on the design defense and not the scenarios, something I should have prepared better for – with the timer.

All of these things are my reflections on the defense, and why I failed. I could quite easily be wrong and have completely missed the reasons I failed – hopefully the feedback will confirm my thoughts and help me re-focus on the right areas. It has been a huge learning experience, from getting my design ready to submit, preparing to defend it, and then defending. I will continue to learn and develop and push myself for this next milestone, and beyond.

People say it’s a huge achievement just to be invited to defend, and I don’t disagree, but I am more determined than ever not just to get the invite, but to successfully defend VCDX.

]]>http://www.definit.co.uk/2015/06/my-vcdx-cma-defense-experience/feed/05912Automate the deployment of vROps (supporting vDS)http://www.definit.co.uk/2015/05/automate-the-deployment-of-vrops-supporting-vds/
http://www.definit.co.uk/2015/05/automate-the-deployment-of-vrops-supporting-vds/#respondMon, 18 May 2015 12:50:17 +0000http://www.definit.co.uk/?p=5898Recently I have been looking at William Lam‘s excellent post on automating the deployment of vROps.

After having a play around with it, to suit my own needs, I made some modifications to the Powershell script so it would support distributed switches.

I read Christian’s comments on the vCommunity – or lack of it – yesterday and although some things resonate with me, and a lot of other people, I don’t quite agree. I want to be clear this is a response to Christian and not an argument, I respect Christian as someone who does contribute to the vCommunity.

I think that there is a strong "vCommunity" – but I don’t think you will see much of it by following some twitter "superstars".

Christian says:

As clickbait replaces journalism, hyperbole and FUD seems to be replacing what used to be based on technical merit.

And this is something that I agree with…there are a lot of "big names" on twitter and on their blogs who are using their large audience to spread FUD (Fear, Uncertainty, Doubt). I’ve seen so many running battles over this kind of nonsense recently that I 100% understand why Christian would feel like he does.

As a side note to those working for vendors who engage in this online FUD/counter-FUD: for a customer or an independent, it leaves people feeling stone cold towards you and your product if your best marketing is to undermine the opposition. Either your product sells on its own merit, or it does not. Give your customers enough credit not to patronise them by engaging in this.

…changing your personality, well probably not as healthy. Also, it probably shows that your previous “personality” wasn’t real either. Again, not so healthy

Again, I agree with Christian.

Where I disagree with Christian is here:

we are not collectively working towards anything but our own self indulgence or self worth, or whatever might seem to be the best “move” at any given time

and here

I won’t kid myself into thinking that I can influence this trend in any way, shape or form

Let me explain why.

Let's take for example the VMUGs – I know for a fact that the people who run the London and South West VMUGs are not doing it for themselves, for personal profile or for any other reason than to build a community. I know them personally, and they are not self-promoting. They help people connect, they help people develop – and they don't get paid for organising these events, which take a lot of time and effort to produce. The awesome thing about VMUGs is that it's in people's common interest to be involved.

How about blogs? Well, true, some people do use their blogs to spread FUD, or counter-FUD, and it's ugly. Some blogs are about profile. But how many others are just about getting info out there? Take this blog you are reading right now – I have spent countless hours of my own personal time writing articles for it. It's not going to make me rich, and it's not going to make me famous. Very few people make more money from their tech blogs than the time they put in is worth.

Even on twitter there's a lot to be said for the community – if I tweet out a question on the #vExpert or #VCDX tags I am almost guaranteed to get an answer from people trying to help. You could argue that it's not altruistic, and they are just trying to prove what they know…but what do I care if they are? I asked for help, they helped me – for free. It's true, you can't say that ALL VCDXs, or ALL vExperts, contribute in this way, but I'd be willing to bet that the majority do. I have just submitted for VCDX and I can testify to the number of current VCDXs who have helped – and there's a decent study group of people working together towards the same goal.

Finally, the question of influence. We all influence everything we are involved in just by being involved – it is our choice as to whether that influence is positive or negative. The old adage "don’t feed the trolls" can apply just as much to FUD – the more you respond with righteous indignation, the more people will see it. Give back to the community, go meet some awesome people at VMUGs, write a blog post that is just about the tech because it might help someone out in future. Contribute on twitter to helping people solve their problems – pay it forward!

I know the frustration that Christian feels, but I also know that he does contribute to the community in these ways – he’s helped me before and I hope that I’ve helped him at some point.

Thanks for reading – and for contributing to a great community.

Sam

vSphere 6 Lab Upgrade – vCenter Orchestrator to vRealize Orchestrator
Thu, 02 Apr 2015
http://www.definit.co.uk/2015/04/vsphere-6-lab-upgrade-vcenter-orchestrator-to-vrealize-orchestrator/

I tested vSphere 6 quite intensively when it was in beta, but I didn't ever upgrade my lab – basically because I need a stable environment to work on and I wasn't sure that I could maintain that with the beta.

Now 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

Upgrading the vCenter Orchestrator Appliance

Upgrading the vCenter Orchestrator Appliance is child’s play – just log onto the admin interface at https://vco.fqdn.com:5480 using the root credentials.

Select the update tab, then click “Check Updates”. You should see appliance version 6.0.1 available, then click Install Updates

Accept the EULA and then confirm the update

Sit back, relax and wait while the upgrade package downloads and the appliance upgrades

Reboot when prompted

Once the reboot is completed, you have your newly upgraded vRealize Orchestrator Appliance!

vSphere 6 Lab Upgrade – VSAN
Thu, 02 Apr 2015
http://www.definit.co.uk/2015/04/vsphere-6-lab-upgrade-vsan/

I tested vSphere 6 quite intensively when it was in beta, but I didn't ever upgrade my lab – basically because I need a stable environment to work on and I wasn't sure that I could maintain that with the beta.

Now 6 has been GA a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

Upgrading to VSAN 6.0

The upgrade process for VSAN 5.5 to 6.0 is fairly straightforward:

1. Upgrade vCenter Server
2. Upgrade ESXi hosts
3. Upgrade the on-disk format to the new VSAN FS

Other parts of this guide have covered the vCenter and ESXi upgrade, so this one will focus on the disk format upgrade. Once you’ve upgraded these you’ll get a warning on your VSAN cluster:

Because I’m only running a 3 host VSAN there are some special considerations:

If I evacuate the data from each host as I upgrade the disk format (which is an absolute must in a production environment) it will fail. This is because my Failures To Tolerate is set to 1, which means VSAN needs three components per object to rebuild (two data mirrors and a witness). If I evacuate one of my three hosts, there are not sufficient hosts remaining to maintain that protection level. For this reason, I must use the “Ensure accessibility” maintenance mode – which means if I lose hardware while doing the upgrade, I will lose data.

When I put each host in maintenance mode, I need to make sure that I have enough capacity in the other VSAN nodes to accommodate the data being moved around.

I also need to use an option to allow reduced redundancy while doing the upgrade, which again exposes me to data loss IF I lose hardware while doing the upgrade.

Having said all of that, I have a good backup of my lab and actually, I don’t care that much if I lose them

If you're familiar with PowerShell, RVC reminds me a little of using a PSDrive to navigate the structure of vCenter, but it's not exactly intuitive.

The first command I want to run is the vsan.cluster_info command to check the health of the cluster. To do this I need to know the path to my cluster in RVC. As you can see from the image on the left, the VCSA host name is vcsa-01, the datacenter is “DefinIT Lab” and the cluster is “HPN54L”. I can use these to build my path to the cluster.

Note that the space must be escaped with a “\”, and that the path includes the “computers” folder.

vsan.cluster_info /vcsa-01/DefinIT\ Lab/computers/HPN54L/

There’s a lot of output from that command, but you should see various clues such as “VSAN enabled: yes” and see the role of each host in the cluster “Cluster role: master”.

You can also see the “Auto claim: on” setting – this means VSAN will automatically claim new disks (automatic mode). This is fine for the normal operation of VSAN but needs to be changed to manual for the upgrade process. Use the vsan.cluster_change_autoclaim command with the --disable (or -d) option to disable auto-claim.
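For example, using the same cluster path as the other RVC commands (the exact flag placement is my assumption – check the command's help first):

```
vsan.cluster_change_autoclaim /vcsa-01/DefinIT\ Lab/computers/HPN54L/ --disable
```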

You can also see from the “Node evacuated” message, that I’m still in maintenance mode. Somewhat counter-intuitively, the cluster needs to be out of maintenance mode in order to upgrade! This is because it needs to evacuate data on the disks, and it can’t if the other nodes won’t accept it.

Next up, use vsan.disks_stats to verify the health status of all devices in the cluster:

vsan.disks_stats /vcsa-01/DefinIT\ Lab/computers/HPN54L/

Verify that all components are OK in the Status Health column.

Next up use the vsan.check_state command to ensure that everything is in-sync:

vsan.check_state /vcsa-01/DefinIT\ Lab/computers/HPN54L/

You can see that 0 objects are inaccessible and everything is in sync:

Another method to do this is to use the vsan.resync_dashboard command:

vsan.resync_dashboard /vcsa-01/DefinIT\ Lab/computers/HPN54L/

Finally, I need to do a “what if” scenario to check what would happen if I lost a host:

vsan.whatif_host_failures /vcsa-01/DefinIT\ Lab/computers/HPN54L/

My lightly used VSAN would not have a capacity problem if I lost a host:

So, from the results above I can conclude that my VSAN cluster is healthy, and I can move on to the disk upgrade.
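The pre-flight checks above can be strung together in one place. RVC ships with vCenter and isn't available here, so in this sketch the rvc call is stubbed to just echo what it would run – in a real RVC session, run the quoted commands directly at the prompt:

```shell
# Stub so the sketch runs anywhere; these are RVC commands, not shell commands.
rvc() { echo "rvc> $1"; }

CLUSTER='/vcsa-01/DefinIT\ Lab/computers/HPN54L/'

rvc "vsan.cluster_change_autoclaim --disable $CLUSTER"  # switch to manual disk claiming
rvc "vsan.cluster_info $CLUSTER"          # VSAN enabled? host roles?
rvc "vsan.disks_stats $CLUSTER"           # per-disk health status
rvc "vsan.check_state $CLUSTER"           # inaccessible/out-of-sync objects
rvc "vsan.resync_dashboard $CLUSTER"      # any resync currently running?
rvc "vsan.whatif_host_failures $CLUSTER"  # capacity impact of losing a host
```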

Performing the disk format upgrade

Upgrading the disk format is a single command for the whole cluster, using the option I mentioned before to proceed even with reduced redundancy (required for a three-host VSAN).
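The command itself was only shown in a screenshot; from memory (an assumption on my part – run help at the RVC prompt to confirm the command name and flags on your build), the vSAN 6.x on-disk format upgrade from RVC looks like:

```
vsan.v2_ondisk_upgrade --allow-reduced-redundancy /vcsa-01/DefinIT\ Lab/computers/HPN54L/
```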

This goes through the process of updating all the disks in the cluster and can take quite a while depending on the number and size of disks, and the number of objects stored on them. The first time I ran the upgrade, it failed because one object did not upgrade successfully, even though all the disks were correctly upgraded.

I simply re-ran the upgrade command to upgrade the last object:

And that’s my VSAN upgraded, I can verify the version of the disks in the web client:

vSphere 6 Lab Upgrade – Overview
http://www.definit.co.uk/2015/04/vsphere-6-lab-upgrade-overview/
Wed, 01 Apr 2015 19:23:50 +0000

I tested vSphere 6 quite intensively when it was in beta, but I didn't ever upgrade my lab – basically because I need a stable environment to work on and I wasn't sure that I could maintain that with the beta.

Now that 6 has been GA for a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

I will be upgrading:

vCenter Server Appliance – currently 5.5 update 1

vSphere Update Manager – currently 5.5 update 1

3 HP N54L resource hosts

1 Intel NUC management host

In my lab I run various VMware software suites, although I typically run them in nested environments to keep my lab install relatively clean.

High level plan

Having read a lot of vSphere 6 docs, my upgrade plan is as follows:

Upgrade vCenter Server Appliance

Upgrade vSphere Update Manager

Upgrade ESXi

Upgrade VSAN

Upgrade nested labs and other software suites

vSphere 6 Lab Upgrade – Upgrading ESXi 5.5
http://www.definit.co.uk/2015/04/vsphere-6-lab-upgrade-upgrading-esxi-5-5/
Wed, 01 Apr 2015 19:11:49 +0000

I tested vSphere 6 quite intensively when it was in beta, but I didn't ever upgrade my lab – basically because I need a stable environment to work on and I wasn't sure that I could maintain that with the beta.

Now that 6 has been GA for a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

Checking for driver compatibility

In vSphere 5.5, VMware dropped the drivers for quite a few consumer grade NICs – in 6 they’ve gone a step further and actually blocked quite a few of these using a VIB package. For more information, see this excellent article by Andreas Peetz.

To list the NIC drivers you’re using on your ESXi hosts, use the following command:
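The command itself was in a screenshot that hasn't survived the migration; the usual way to do this (my assumption – verify against your ESXi version) is via esxcli. esxcli only exists on an ESXi host, so it's stubbed here to keep the sketch runnable – on a real host, delete the stub and run the commands as-is:

```shell
# Stub: esxcli is only present in an ESXi shell/SSH session
esxcli() { echo "esxcli $*"; }

esxcli network nic list           # each NIC with its loaded driver module
esxcli network nic get -n vmnic0  # driver and firmware detail for one NIC
```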

As you can see from the results, my HP N54Ls are running 3 NICs, a Broadcom onboard and two Intel PCI NICs. Fortunately the Broadcom chip is supported and the e1000e driver I’m using is compatible with vSphere 6 and is in fact superseded by a native driver package.

Upgrade to ESXi 6 using vSphere Update Manager (VUM)

Open the VUM administration console in the vSphere Client – you can’t do this in the Web Client yet. If you’re in Compliance view, use the link in the top right hand corner to get to the Admin view.

Select the ESXi Images tab, and then select the Import ESXi Image… link

vSphere 6 Lab Upgrade – vCenter Server Appliance
http://www.definit.co.uk/2015/04/vsphere-6-lab-upgrade-vcenter-server-appliance/
Wed, 01 Apr 2015 15:23:33 +0000

I tested vSphere 6 quite intensively when it was in beta, but I didn't ever upgrade my lab – basically because I need a stable environment to work on and I wasn't sure that I could maintain that with the beta.

Now that 6 has been GA for a while and I have a little bit of time, I have begun the lab upgrade process. You can see a bit more about my lab hardware over on my lab page.

Upgrading the vCenter Server Appliance

Download and mount the VMware-VCSA-all-6.0.0-2562643 ISO image (mounted as G:\ on my workstation).

Browse the ISO and run the Client Integration Plugin "G:\vcsa\VMware-ClientIntegrationPlugin-6.0.0.exe" – it's a simple next, next, finish sort of install.

Next, open “G:\vcsa-setup.html” in your browser of choice (I’m using Chrome) and when prompted, allow the plugin to launch:

We now have the option to Install or Upgrade…click Upgrade

Ensure you have a supported vCSA version to upgrade:

Accept the EULA and click next

Enter the ESXi host details and credentials – it's worth pointing out here that you do need to have enough RAM/CPU available to deploy the vCSA appliance to the host. Check that you're not deploying to a host in maintenance mode(!) and, if you're using a vSphere Distributed Switch, that there's an ephemeral portgroup available.

Select the correct version of the appliance you're upgrading from, and enter the credentials for SSO and root. In my lab I'm not bothered about migrating historic performance data, but it's a consideration for production environments. Enter the details of the source ESXi host and check that it is not in lockdown or maintenance mode. Also check that DRS is set to manual or disabled before upgrading – you wouldn't want the old vCSA being migrated away while you're trying to upgrade!

Click yes to the warning – basically it’s going to set the postgres password and open port 22 to transfer data over SSH.

Select the appliance size – since this is my lab and it will manage 4 hosts and ~30 VMs I will go with Tiny

Select the datastore on which to place the vCSA and enable thin provisioning if required

In order for the new vCSA to copy data from the old vCSA it needs to be on the network, so select a network and configure either DHCP or a static IP address. I have a DHCP server on this network, so I will use DHCP. After the upgrade is completed the new vCSA will use the same network/IP as the old one.

Review the summary and complete the wizard.

At first the deploy seemed to be going well and I could see the new appliance booting…

…however, that soon ended!

I had to manually power off and delete the deployed vCSA.

I re-ran the upgrade wizard using IP addresses rather than DNS names, which resolved the issue (my DNS server was on the same host and ran out of memory).

This time it ran through correctly, taking about 37 minutes on my resource constrained management host.

I could log straight into my vCenter, browse the inventory and manage my hosts.

Upgrading vSphere Update Manager

Upgrading vSphere Update Manager was simple, just mount the ISO VMware-VIMSetup-all-6.0.0-2562643 on the Update Manager Server and run the Update Manager Executable “G:\updateManager\VMware-UpdateManager.exe”

Assuming your OS and SQL server are supported, the installer detects the earlier version of VUM and offers to upgrade it. After that it’s a simple upgrade wizard with very little to write about!

It was almost there – but not quite – when I opened the Update Manager tab in the Web Client, I got the following message

There was an error connecting to the VMware vSphere Update Manager

The installer had changed the service to run under LocalSystem, rather than my specified AD Service Account – once I changed the Log On account and restarted the service, it all kicked back into life.

Next up – upgrading ESXi to 6.0.

Creating a vRealize Log Insight 2.5 cluster with Integrated Load Balancing
http://www.definit.co.uk/2015/02/creating-a-vrealize-log-insight-2-5-cluster-with-integrated-load-balancing/
Wed, 11 Feb 2015 15:36:32 +0000

vRealize Log Insight 2.5 improves on the clustering in previous versions with an Integrated Load Balancer (ILB) which allows you to distribute load across your cluster of Log Insight instances without actually needing an external load balancer. The advantage of this over an external load balancer is that the source IP is maintained which allows for easier analysis.

The minimum number of nodes in a cluster is three; the first node becomes the Master node and the other two become Worker nodes. The maximum number of nodes supported is six, though according to Mr Log Insight himself, Steve Flanders, the hard limit is higher:

@sammcgeown yes though hard limit in product is much higher. How big do you need?

The Log Insight appliance comes in four sizes: an Extra Small, for use in labs or PoC, through to Large, which can consume a whopping 112.5GB of logs per day. Those figures scale in a linear fashion for clusters, so a 3-node cluster of large instances can consume 337.5GB per day, from 2,250 syslog connections at a rate of 22,500 events a second. See more on sizing vRealize Log Insight.
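Those cluster figures are just the single-node numbers multiplied out. A quick sanity check of the linear scaling (per-node figures derived from the cluster numbers in this paragraph):

```shell
# Large node: 112.5 GB/day, 750 syslog connections, 7,500 events/sec per node
nodes=3
gb=$(awk -v n="$nodes" 'BEGIN { printf "%.1f", 112.5 * n }')
conns=$(( 750 * nodes ))
eps=$(( 7500 * nodes ))
echo "${gb} GB/day from ${conns} connections at ${eps} events/sec"
# → 337.5 GB/day from 2250 connections at 22500 events/sec
```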

Load Balanced Cluster Pre-requisites

Configure a minimum of three nodes in a Log Insight cluster.

Verify that all Log Insight nodes and the specified Integrated Load Balancer IP address are on the same network.

The Log Insight master and worker nodes must have the same certificates, otherwise the Log Insight Agents configured to connect through SSL will reject the connection. When uploading a CA-signed certificate to the Log Insight master and worker nodes, set the Common Name to the ILB IP address when generating the certificate signing request. See Generate a Certificate Signing Request.

Select your Issuing Certificate Authority and then save the certificate that is returned (I called mine LogInsight.cer).

Now create a new text file and paste in, in order, the contents of the private key file (rui.key), the certificate (LogInsight.cer) and the Issuing CA certificate. Save the new file as LogInsight.pem.
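That concatenation is easy to get wrong by hand, and it's scriptable. A sketch with placeholder files standing in for the real key and certificates (rui.key and LogInsight.cer are the names from above; the issuing CA file name is assumed):

```shell
# Placeholders so the sketch runs anywhere - use your real files instead
printf -- '-----BEGIN RSA PRIVATE KEY-----\nkey\n-----END RSA PRIVATE KEY-----\n' > rui.key
printf -- '-----BEGIN CERTIFICATE-----\nserver\n-----END CERTIFICATE-----\n' > LogInsight.cer
printf -- '-----BEGIN CERTIFICATE-----\nissuing-ca\n-----END CERTIFICATE-----\n' > IssuingCA.cer

# Order matters: private key, then the issued certificate, then the issuing CA cert
cat rui.key LogInsight.cer IssuingCA.cer > LogInsight.pem
head -n 1 LogInsight.pem
# → -----BEGIN RSA PRIVATE KEY-----
```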

This certificate will need to be added to all of the LogInsight instances later.

Deploy the Master vRealize Log Insight node

The OVF deploy is simple:

Select the OVF template

Accept the OVF additional configuration items

Accept the EULA

Select a name and location

Select the size required

Select the storage

Choose a network

Configure the network settings

Review and deploy

Once the node has completed its boot-up and is at the console screen, you can proceed.

Open your web browser of choice and go to the IP address of the Master node and you should see the startup wizard. For the Master node we start a new deployment:

Note: If you see "Failed to start new deployment", you can reboot the appliance and try again; however, I found that it caused problems later in the deployment ("Login failed" during the wizard) and required a re-deploy of the appliance. When I redeployed, I rebooted pre-emptively and still saw the "Failed to start new deployment" message; rebooting again allowed me to proceed, this time without the "Login failed" messages. The "Login failed" errors actually stop you from completing the wizard, so if you see those it's time to redeploy.

After a reboot, clicking the "Next" button takes you straight to the next step, which is to set the admin password (this is a different account to the "root" user we set during the OVF deploy, which is used for SSH access).

Configure the Admin user

Add the license key

Configure notifications

At this point I skipped the NTP and SMTP settings to configure them later (I have found the wizard to be unreliable: NTP would not pass tests and SMTP would not send a test email). Finish the wizard and you should be presented with the login page:

Configure the Master Node

Log in to the master node using “admin” and the password you set. In the top right hand side of the dashboard view, you can open the Administration page

Select “Time” under the “Configuration” heading. Time is always critical for clustering, as well as for accurate logging. I have a local NTP server set up on my network to which everything syncs – so I configured and tested that:

Similarly, go to the SMTP section and configure the SMTP settings:

And again, if required, select the Authentication page and configure Active Directory support

AD Groups can be added via the “Access Control” page

Finally, import the certificate via the SSL page and reboot

Deploy the Worker nodes

Deploy two more instances in the same way as the first – but this time use the “Join Existing Deployment” option, and then specify the first node (for me vrli-01.definit.local) and click “go”. You’ll then see a message to go and approve the Worker on the Master node (and you’ll get an email, if you set that up).

Click allow to join the worker to the cluster.

The Worker will then show a nice green tick, and the Master will have a whole load of information which can be summed up as “configure DNS and NTP, and think about load balancing” – all of which we have done, or will do.

Click OK on the Worker node and it will take you back to the login page. Log in using the admin credentials and you can see there are very few options available to us, just some general information and the SSL page.

Import the SSL certificate we generated before and restart the Worker

Enabling the Integrated Load Balancer

Now that the cluster has been deployed and the two Worker nodes are registered with the Master, we can enable the Integrated Load Balancer (ILB). So long as the pre-requisites have been met, this is simply a case of ticking the box and entering an IP address for the load balancer:

Once saved, it will take a few seconds to configure and then the status will go green – “Available”

I’ve created a DNS record for the load-balanced IP to direct clients to – this means any future changes can be implemented easily.

Post deployment configuration

Having a load balanced cluster is all well and good, but if the nodes are on the same physical host and that goes down, you could be faced with a loss of data while HA recovers them. Be sure to create an anti-affinity rule to keep them on separate hosts!

Updating Cisco SG300 firmware the command line way
http://www.definit.co.uk/2015/02/updating-cisco-sg300-firmware-the-command-line-way/
Mon, 09 Feb 2015 15:14:26 +0000

I recently had the “pleasure” of upgrading my lab switch, the excellent Cisco SG300-20. I hadn’t had a chance to update the firmware since it was released six months ago because of the downtime. For some reason I prefer configuring the SG300 from the command line – a hangover from my old networking days I suppose, but somehow it doesn’t feel right to me to use the GUI!

I found an article by Chris Wahl which ran through the steps required to do it via the GUI. If you’re only interested in doing the update, then I suggest following Chris’ article – otherwise follow me for some CLI goodness!

@sammcgeown and you’re going to post the CLI version instructions, right?

Note: You can’t go directly from version 1.1.2 to 1.4.0.88 – you have to use 1.3.7.18 as an intermediate update

You need a TFTP server set up – I like Open TFTP Server but there are plenty of other free ones around – just unzip the firmware package from Cisco and place the two files (.ros and .rfb) in the TFTP root. It’s worth noting that the rfb file is a boot image, and the ros file is the firmware image.

It’s a pretty good idea at this point to back up your switch configuration and current firmware/boot image. Also, your switch will reboot a couple of times during the process so make sure nothing important is happening on the network – I shut down my whole lab environment.

Open a PuTTY session to your SG300 and use the below command to update the boot image first:

copy tftp://<tftp-server>/path/to/bootimage.rfb boot

Use “sh ver” to check that the Boot version is now correct:

Use “wr” to ensure your current running-config is saved to startup-config, and then “reload” to reboot the switch.

Once the switch has rebooted, SSH back in and upload the firmware package:

copy tftp://<tftp-server>/path/to/firmware.ros image

Now the image has been copied to flash but is not yet active; use “sh bootvar” to list the versions available:

Check which filename is the new version marked “not active”, and then use the following command to activate it on the next boot:

boot system image-2

Once again, “wr” and “reload”. When the switch comes back up you can check it’s using the new firmware:
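To recap, the whole switch-side session comes down to the sequence below. The TFTP server IP and file names are placeholders, the image slot number comes from the “sh bootvar” output – and remember the 1.3.7.18 intermediate hop if you’re coming from 1.1.2:

```
copy tftp://192.0.2.10/sg300/boot.rfb boot
sh ver
wr
reload
copy tftp://192.0.2.10/sg300/firmware.ros image
sh bootvar
boot system image-2
wr
reload
```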

vRealize Orchestrator (vRO/vCO) – Troubleshooting SOAP operations
http://www.definit.co.uk/2015/01/vrealize-orchestrator-vrovco-troubleshooting-soap-operations/
Wed, 28 Jan 2015 12:09:19 +0000

Recently, I’ve had a bit of a SOAP baptism of fire – the project I am working on makes hundreds of SOAP calls to multiple SOAP APIs on multiple hosts. During this time I’ve encountered some common and rare problems, and troubleshooting them seems to be a bit of a black art, if the number of results in Google is any measure.

To demonstrate some of these troubleshooting methods I will use a global weather SOAP service, http://www.webservicex.com/globalweather.asmx?WSDL. I’ve added the web service to vRO using the “Add a SOAP host” workflow, and then used the “Generate a new workflow from a SOAP operation” workflow to create a new workflow: GetWeather. This simple workflow runs successfully:

Using the SOAP Interceptor

The SOAP Interceptor allows you to both examine and manipulate the body of the XML SOAP request sent to the SOAP host, and the response returned to Orchestrator.

The SOAPInterceptor class has four methods that you can use to assign a function to process the request head/body, or the response head/body. It’s worth noting that you can use just one of the methods if required – you don’t have to define all of them.

Now, with these functions you can begin to dig a little deeper – for example, you can write the SOAP body out to the System.log:

Now when I run the workflow, I can see the XML request body:

And if I add a handler for the response body…

I can see the XML output from the SOAP host that is returned to me:

Using Fiddler to see the whole SOAP message

Viewing the SOAP body is useful and allows a certain amount of troubleshooting – but sometimes you need to see the bigger picture, and that’s where I use Fiddler. Fiddler describes itself as “The free web debugging proxy for any browser, system or platform” – it basically allows you to intercept and analyse local traffic à la Wireshark, but also to create a local proxy server to monitor remote traffic through.

Using Fiddler as a proxy allows you to see the full SOAP request and response.

Configuring Fiddler as a proxy server

Configuring Fiddler is really easy – once it’s downloaded and installed you can simply go through Tools > Fiddler Options… and then select the Connections tab. Tick “Allow remote computers to connect” and then specify the port you want your proxy to listen on – simple as that!

Next, add your SOAP host again, but this time specify a proxy when you do:

Recreate the workflow from the new SOAP host’s operation, or modify the existing workflow to use the new proxied SOAP host.

Viewing the full SOAP conversation

Now in the Fiddler console, you should be able to locate the call to the SOAP host – labelled 1 – then once it’s selected you can see the XML sent to the host (I like the SyntaxView) – labelled 2 – and the response from the SOAP host – labelled 3.

Problems encountered with Orchestrator and SOAP

Below are some of the problems I encountered while working with SOAP hosts – I hope to add a few more later.

WSDL and the response do not match

Orchestrator is strict about parsing responses from a SOAP host – if it does not match the WSDL it will throw all sorts of spurious errors. The chances are that if there’s a problem, it’s to do with the WSDL.

One such misleading message is below – SOAP host is not available:

By using the SOAP Interceptor to process the responseBody, I found another cryptic message which allowed me to track down the real problem: “Invalid XML request body”.

Finally, after comparing the WSDL to the response (viewed using Fiddler) I found that the response contained an item that was not defined in the WSDL.

XML generated by Orchestrator isn’t interpreted properly

The XML generated by Orchestrator is always valid – but sometimes the SOAP host does not like the way it’s shaped – for example I had a problem with the XML namespace definitions.

Orchestrator likes to define namespaces in the nodes – e.g. the below code defines the namespace “tns” on each element sent to the Weather service:

One of the APIs I used was not expecting the namespace in this format, so I was able to use the SOAP interceptor object again to modify the namespace code using the String.replace() function to strip out the definition in the “tns” tags themselves, and add it to the “axis2ns” tag:
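Outside of vRO, the reshaping can be sketched with two text substitutions – this is an illustration only, with a made-up sample body, an assumed namespace URI, and sed standing in for the String.replace() calls used in the interceptor:

```shell
# Sample body shaped the way Orchestrator emits it: "tns" declared on each node
body='<axis2ns:Envelope><tns:CityName xmlns:tns="http://www.webserviceX.NET">London</tns:CityName></axis2ns:Envelope>'

# 1) strip the per-node xmlns:tns declarations
# 2) declare tns once, on the axis2ns envelope tag instead
reshaped=$(printf '%s' "$body" | sed \
  -e 's| xmlns:tns="[^"]*"||g' \
  -e 's|<axis2ns:Envelope|<axis2ns:Envelope xmlns:tns="http://www.webserviceX.NET"|')
echo "$reshaped"
```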