When Rubrik launched in 2014 with version 1 of the product, it focused on protecting VMware vSphere workloads. It could protect a VM's VMDKs and index the data within them, making the files searchable for easy file-level recovery. You could also live mount a vSphere virtual machine backup snapshot directly from the Rubrik cluster over NFS, essentially bringing the backup snapshot to life as a running virtual machine within seconds. And not just one VM: many virtual machine snapshots can be live mounted at the same time.

But innovation for us doesn't stop at virtual machines. We added support for protecting Microsoft SQL Server databases using the native SQL APIs. And yes, of course, we can live mount a SQL database backup snapshot from the Rubrik cluster back into SQL Server, making the backup snapshot available for querying in MSSQL.

We also support the live mounting of backup snapshots for Hyper-V VMs and managed volumes (containing Oracle backups).

With CDM4.2 now officially released, I'd like to look at one of the new features we've added: Windows Volumes. And yes, you've guessed it, that includes the live mounting of Windows Volume snapshots.

Windows Volume protection in CDM4.2 enables you to take a backup of a full Windows volume, and therefore of the system state. Prior to CDM4.2, Rubrik could take file-level backups of Windows and Linux machines, but the system state wasn't being protected. In CDM4.2 we can now protect the full drive volume, as well as make any backup snapshot of that volume available within seconds via our live mount feature, by exposing the backup snapshot as an SMB share from the Rubrik cluster.

After a snapshot has been taken of a full Windows volume, we can recover individual files from the volume snapshot (multiple-file restores are also a new feature in 4.2), in the same way as with a VM-level or file-level backup: either by searching for the filename or by browsing the backup snapshot. In addition, we can choose to mount the snapshot, which exposes the data within it as an SMB share:

When selecting the mount option, you will see a list of all the volumes that were backed up with Rubrik:

Selecting the volume and clicking next presents us with a list of available Windows hosts that have the Rubrik Connector installed. We can also choose not to mount the snapshot to a specific host, but rather to simply expose the backup snapshot via SMB by providing a link to the SMB share:

For someone with a strong background in automation and scripting, the ability to expose the backup data of a full Windows volume via SMB, without the need to mount the snapshot to a particular host, is a game changer. Remember, this is Rubrik we're talking about: if it's in the UI, it was first in the API, and if it's in the API, we can use it in automation. Just think of the possibilities. You can take a backup snapshot from any point in time, programmatically live mount it via the Rubrik REST API, which returns an SMB link, and use the data in that snapshot for dev/test purposes. When done, just unmount the snapshot again using the API.
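As a rough sketch of that flow, the endpoint paths below are placeholders for illustration only, not the documented Rubrik API; consult the API documentation for the real resource names:

# Placeholder endpoints for illustration only
# 1. Request a live mount of a volume snapshot, exposed over SMB
curl -k -u admin:secret -X POST \
  "https://rubrik.example.com/api/v1/volume_snapshot/<snapshot-id>/mount"

# 2. The response contains the SMB share path; use it for dev/test work

# 3. When finished, tear the mount down again
curl -k -u admin:secret -X DELETE \
  "https://rubrik.example.com/api/v1/mount/<mount-id>"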

For this post, I have selected the C: volume of a Windows host called "mysqldev". I've also opted to mount the snapshot back onto the "mysqldev" host. In the Rubrik UI, we can see the live mount has succeeded by clicking "Live Mounts -> Windows Volumes":

The backup snapshot has been mounted on the "mysqldev" VM under C:\rubrik-mounts:

At Rubrik, we believe that backup infrastructure, and the data held within it, can serve a bigger purpose than just an insurance policy. If you're backing up the data, why not use it in a meaningful way? This feature, as with many other such features in CDM4.2, is just another way we help you get more out of your backups. Don't Backup, Go Forward!

So, more than a month has passed since I joined Rubrik on the 2nd of January 2018. Those who know me will know that before moving to Rubrik, I worked for Computacenter for nine years. So what's it like leaving a secure role in a large corporate environment after such a long time to join a four-year-old startup?

Well, there are many aspects to consider. Firstly, my last role at Computacenter was all about cloud automation and orchestration; backup was the last subject I was eager to discuss. So not only did I decide to leave a comfortable role well matched to my interests and skillset, but I also decided to join what most people believe to be a backup company. Not many people saw that coming. That said, after month one, I am more than pleased to have made the move, as the company culture is excellent and the technology rock solid and very interesting. Simply put, Rubrik seems to have made the impossible possible: it has made the subject of backup and data protection not suck anymore.

The first thing that an institutionalised corporate junkie like me has to get used to in the startup world is the onboarding process. My laptop (a MacBook Pro) was ordered with a single email to Rubrik IT, sent from my personal email address, in which I stated my preference. It was delivered within a business day, still sealed in the original Apple packaging, with no corporate image, approved software or branding applied. Contrast this with the traditional corporate culture in which I dwelled for my entire professional career, where locked-down corporate Windows builds and prescribed, approved software applications are the order of the day.

It did not take long for me to understand why my laptop was so "vanilla". I soon realised that everything, and I really mean everything, we use internally is based on SaaS applications. HR, email, documentation, messaging, conferencing, expenses: it is all SaaS. So there really isn't any need for complicated corporate builds. All you need is a reliable machine (reliability to be determined) and an internet connection, and you're good to go.

Rubrik has an extensive set of learning content for the onboarding process. This is followed by a boot camp training week, which in my case was held in only my second week with the company, at the Rubrik HQ in Palo Alto.

The second aspect of my move to Rubrik that I'm getting used to is that it's my first role within a vendor. Previously, I was employed either by non-IT-specific businesses or by channel/service providers. What that means is that we make things, with features and quirks, just like any other vendor. One thing I quickly realised is the rapid pace of product development, with (if you're not careful and specific about what you want to know) an overwhelming amount of information coming out of the engineering teams about improvements, changes and updates planned within the lower bowels of the product. When I joined, people kept hammering home just how fast Rubrik moves as a business, but you don't realise what that means until you're in the thick of it. I can see how being involved in startups is so addictive, and why so many people go from startup to IPO, or in many cases acquisition, and then on to the next startup.

The rapid pace isn't limited to product development; it shows in every other aspect of the business. The sales cycle is short, the deal sizes are impressive, and towards the end of a quarter, your inbox is on fire with customer win notifications. Just the sheer volume of those emails was something to get used to!

Transparency is one thing that I'm also finding fascinating. All employee calendars are open and shared. If I wanted to know what Bipul (the CEO) was up to, I could find out by looking at his packed calendar.

One month in, I'm learning at a rapid pace to try and get up to speed and also to stay up to speed, not only with the technology but also with how the business operates. I'm doing quite a bit more travelling, but at the moment, I enjoy getting out and about and meeting new people. I've also been to a few data centres to get my hands dirty with installing some Briks. That's been a lot of fun as well. I've been learning the Rubrik API and thinking about all of the use cases and scenarios where we can use the power of Rubrik and the API to make data genuinely cloud-mobile.

There's just too much that I can say about my first month at Rubrik to cover it all in a single post. I hardly have enough time in the day to learn what I need and want to about the company and the technology, so blog posts are not my current priority. I'm confident that this will change as time goes on, and that there will be many posts coming out of this site shortly. But for now, bring on the future because from where I'm sitting, it's looking very bright with Rubrik!

We had an issue today where the vRealize Automation (vRA) 7 Event Broker Service (EBS) would time out. The timeouts happened intermittently, during different stages of the provisioning lifecycle. We noticed that something was not right when extensibility workflow calls to vRealize Orchestrator (vRO) would return after the vRO workflows completed, but the provisioning lifecycle state for the virtual machine would fail to change or progress, eventually timing out with an EBS timeout message.

While doing some investigation, I found that the RabbitMQ configuration in this distributed HA vRA deployment did not look right. From the vRA Cafe appliance VAMI portal, I could see that each appliance showed only its local RabbitMQ instance as a cluster node. Issuing the following command on the command line of each vRA appliance confirmed my suspicion that the RabbitMQ configuration was not clustered (server names have been changed for this post):
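The check can be done with the standard RabbitMQ tooling available on the appliance; this is the general shape of it:

# Run on each vRA appliance; on a healthy deployment both nodes
# should appear under running_nodes in the output
rabbitmqctl cluster_status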

For the EBS to function correctly, it needs access to the RabbitMQ queues on both appliances. Consider an event placed in a queue on appliance 1 while appliance 2 is waiting on a notification from that queue before processing a related task: if the message never makes it to the queue on appliance 2, the task times out after a default period of 30 minutes. If the RabbitMQ instances on the vRA nodes are not clustered, messages in RabbitMQ queues on one appliance are not visible to the other appliance.

To fix the issue, take snapshots of the vRA appliances, log into vranode2 via SSH, and issue the following commands to form a RabbitMQ cluster:
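The commands below show the general shape of the fix, using the standard rabbitmqctl tooling and assuming the default rabbit@<hostname> node naming; adjust the node name to match your first appliance:

# On vranode2: stop the RabbitMQ application (the Erlang node keeps running)
rabbitmqctl stop_app

# Clear the local state so the node is able to join an existing cluster
rabbitmqctl reset

# Join the cluster formed by the RabbitMQ instance on vranode1
rabbitmqctl join_cluster rabbit@vranode1

# Start the application again
rabbitmqctl start_app

# Confirm that both nodes are now listed in the cluster
rabbitmqctl cluster_status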

I've been with Computacenter for eight and a half years now, during which time I have been privileged to work with many talented people and for many wonderful customers. However, every good thing does eventually come to an end. After much deliberation, I've decided that the time has come to bring this chapter of my life to a close.

It is not often that you hear of IT professionals who stay with the same business for eight or more years. Many of my friends in the industry have changed companies several times during my time at Computacenter. Within Computacenter, however, many people have been with the business much longer than my eight years; some have been there for more than 20. When asked why, I'm sure most, if not all, of them would tell you that it is an excellent place to work, with many opportunities and a great culture.

However, sometimes opportunities come and find us, rather than us actively looking for them. I've had my fair share of offers over the past eight years to join other businesses, but I've never really felt that those opportunities were right for me. This time, though, I could not just sit back and pass on the opportunity.

So, I am pleased to announce that as of the 2nd of January 2018, I will be joining Rubrik as a Solutions Architect within the EMEA region. I'm looking forward to the new challenges that await in vendor land.

I've learned a lot during my time at Computacenter, and I will always be grateful for all the support I enjoyed from individuals within the business. However, I believe the move to Rubrik will take me out of my comfort zone and help me grow even further, and I am excited for what lies ahead and grateful for the opportunity. Onwards and upwards!

Custom properties in VMware vRealize Automation (vRA) give us the ability to set data on vRA objects and to change configurations that affect the behaviour of objects in vRA. For example, when set on a vSphere virtual machine component in a composite blueprint in vRA 7, the property "VirtualMachine.Admin.ThinProvision" results in the virtual machine being deployed with thin-provisioned disks in vSphere. It is a custom property that the out-of-the-box vSphere provisioning workflow consumes when set; we do not have to do anything other than specify a value of "true" for the property to affect the resulting virtual machine. VMware has built the out-of-the-box workflows to make use of custom properties such as "VirtualMachine.Admin.ThinProvision" when they are specified on various components. These properties are documented in the "Custom Properties Reference" documentation provided with vRA 6 and 7.

Custom properties that ship out of the box with vRA, however, are only a small part of where the concept of custom properties can be used to extend the capabilities of your automation solution. Just as the built-in workflows make use of custom properties, so can your workflows take advantage of custom properties that you define using vRealize Orchestrator (vRO).

In this post, I demonstrate how to programmatically select a vRA catalog item using a vRO workflow. That is by no means representative of a real-world implementation, but it provides a technical goal for this post to achieve. It is also a chance for me to show my methodology for coding against vRA in vRO. The vRA 7 plugin for vRO provides a lot of objects, properties and methods, and not knowing why they exist and how to use them is a primary source of frustration for those new to vRO.

During the course of this post, I walk you through vRA custom properties: how to define them in the property dictionary and how to assign them to a composite blueprint in vRA7.2. I then walk you through building a vRO workflow that selects a catalog item based on the value of the custom property defined in vRA. We look at several vRO scripting classes provided by the vRA plugin, along with their properties, methods and method return types. We also look at how to find sufficient information about these objects using the API Explorer in vRO, so that you can build a workflow from scratch to do just about anything you can dream up.

For this post, to keep things as simple as possible, we base our catalog item selection criteria on the operating system type of a single vSphere virtual machine component in a composite blueprint. This post was written against vRA7.2 and vRO7.2.

Deciding How to Determine the OS Type

Remember, a vSphere virtual machine component in a composite blueprint is a vRA object that defines the basic deployment parameters of a new virtual machine in vSphere. It therefore has properties that vRA uses to, for instance, request a new VM clone from vSphere. None of the properties on the vSphere virtual machine component allows an administrator to specify the guest operating system, nor does vRA interrogate the selected vSphere VM template for that information. We therefore need to attach this information manually somewhere on the blueprint or the component, so that we can use it later. There are a few ways in which we can attach this information. Our first option is to embed the guest OS type in the naming convention of the blueprint name itself. That is by far the simplest and most common method I have seen in the field. However, it is neither flexible nor scalable if we want the blueprint's name to convey many other "features" or "capabilities" the blueprint might have on offer.

Our second option is to specify the guest OS type as a custom property on the vRA blueprint. Retrieving the custom property value for evaluation is more complicated than just obtaining the blueprint name, but it is a cleaner and more robust solution to the problem. It also makes it possible to define any other custom properties we might need to make a catalog item decision at request time in vRO.

Our third option is similar to option two and also makes use of custom properties. However, instead of attaching the custom property to the blueprint, we attach it to the vSphere virtual machine blueprint component object directly. That, however, adds another layer of complexity to our JavaScript code in vRO, as we need to retrieve the vSphere virtual machine component after obtaining the blueprint, and then get all of its properties.

To find a middle ground between simplicity and complexity for this post, I am going to go with option two: define a new custom property in the vRA property dictionary, then attach that property, with an appropriate value, to the blueprint. I then demonstrate how to use vRO to:

Get a list of all published catalog items

Get the associated blueprint for each catalog item

Read the custom properties of the blueprint

Select the correct blueprint based on the custom property values

For this post, we have the following blueprints published as catalog items:

Blueprint Name: Windows 2012 R2

Blueprint Name: CentOS 7 64-bit

Figure 1 below shows the catalog items configured in vRA7.2

Figure 1: vRA Catalog items configured in vRA7.2

Using the Property Dictionary

Although a custom property can be “made up” and specified directly on a vRA object without being defined first, the property dictionary is where you should define custom properties “properly” before using them. Using the property dictionary has several benefits, including (but not limited to):

Minimises typing mistakes when defining properties, as properties defined in the property dictionary can be selected from a drop-down list when attaching the property to a vRA object such as a blueprint

The ability to specify a custom label for the property which allows for user-friendly field names in request forms

The ability to specify an input data type, the input control type, input validation and even lists of data that can be selected by a user using a drop-down form control.

The custom property we define for this post is only used with a static value in each blueprint, and is therefore not displayed to the user at request time. We could, therefore, have skipped the property dictionary altogether and just defined the property on each blueprint directly using the “Custom Properties” tab. However, it is still a good idea to define the property in the property dictionary, as if it were to be displayed to the user at request time. Getting into the habit of always using the property dictionary when defining custom properties helps you keep your properties consistent throughout the environment and reduces the risk of errors due to misspelt or inconsistent custom property names. It also means that all custom properties carry user-friendly labels, and are therefore “safe” to be made visible on request forms displayed to users.

Defining our custom property

It was not my original intent to do a walk-through of the vRA user interface, as I assume in this post that you are at least somewhat familiar with it. However, to ensure that everyone knows how to define custom properties using the property dictionary, for this step I walk through the vRA interface for doing so.

In the next steps, we use the property dictionary in vRA to define a new custom property called “vvcp.blueprint.guestOSType”

Ok, so we have a new property defined within the property dictionary called “vvcp.blueprint.guestOSType”. We could have named the property anything we wanted, as long as the property name is unique. The “vvcp” part of the property name is simply an abbreviation for VirtualvCP, and it acts as a visual clue for me, as the administrator/developer, that the property is a custom property defined in the VirtualvCP vRA tenant rather than a built-in vRA custom property. It also prevents us from defining custom property names that might conflict with built-in vRA custom properties.

Attaching our custom property to the blueprint

The “vvcp.blueprint.guestOSType” property is used on each blueprint to store the guest operating system type for the VM that the blueprint deploys. We, therefore, need to attach the custom property to each blueprint and provide an appropriate value for the custom property. We can attach the custom property by editing the blueprint and clicking on the silver cog icon as indicated in figure 3 below:

Figure 3: Click the silver cog to edit the blueprint properties in vRA

The “Blueprint Properties” window opens. Attach the custom property to the blueprint by completing the following steps:

Click the “Properties” tab

Click the “Custom Properties” tab

Click “New.” A new blank line is displayed.

Under the “Name” field, click the arrow to the right of the dropdown field, or simply start typing the custom property name. While typing, notice how vRA suggests properties to select.

Double click the “Value” cell and enter the guest OS type. For the blueprint in this example, we have entered “windows”. Optional: you can set “Overridable” to “No” for this custom property, as we do not need to override it with any other value once it has been set.

Click “OK.”

Click “Finish” to exit the blueprint editor.

Figure 4 below shows the steps required to attach a custom property to a blueprint and to provide a value for the custom property.

Figure 4: Attaching a custom property to the blueprint

Note: When we come to code in vRO, custom property names and values are pulled through to vRO in the same letter case as specified in vRA. vRO uses JavaScript as its scripting language, which is case sensitive. I therefore always use lowercase or camelCase in my custom property names and values for consistency. In addition, we cast everything to lowercase in vRO when testing values, to ensure that a letter case mismatch does not produce incorrect comparison results.
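For example (the variable names here are illustrative):

// Cast both sides to lowercase so a letter case mismatch cannot
// cause the comparison to fail unexpectedly
if (customPropertyValue.toLowerCase() == requestedOSType.toLowerCase()) {
    // the values match
}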

Repeat the process for the Linux blueprint, but enter “centos7x64” as the custom property value, as shown in figure 5 below.

Figure 5: Custom properties for blueprint CentOS 7 x64

Now that we have attached our custom property (“vvcp.blueprint.guestOSType”) to both our blueprints, we can look at how to read the value of the custom property within a vRO workflow and programmatically make a catalog item decision using vRO.

Using vRA custom properties in vRO

I acknowledge that the section above on vRA and custom properties may seem mundane to readers who are already familiar with vRA. However, I did not want to assume that everyone reading this knows what custom properties are and how to define them properly. It also provides a solid foundation for the next part of the blog post.

With our two blueprints in place, each configured with a custom property to store the guest operating system type information, we are finally able to get to the real reason for this blog post, which is to look at some vRO code.

The next section of the post covers the basics of vRO, such as accessing the vRO client and creating a folder and a new workflow. If you are familiar with vRO, you can skip this section and go to Building the workflow.

Accessing the vRO Client

Head over to the vRO client and log in with an account that has administrative privileges. If you do not have a local copy of the vRO client, you can download it from https://<vro-appliance>:8281

Figure 6 below shows the login screen of the vRO client.

Figure 6: vRO Login screen

When creating a new tenant in vRA, I always create a new vRO instance, dedicated to that particular tenant. For the current vRA tenant, VirtualvCP, I have deployed a vRO appliance called vra7vrovvcp01.lab.virtualvcp.local. The port over which the client connects to the vRO server is TCP 8281. Therefore the “host name” field contains “vra7vrovvcp01.lab.virtualvcp.local:8281”.

I could probably write a separate blog post about why I recommend a dedicated vRO instance for each tenant, so that explanation is beyond the scope of this post. Let’s just say for now that it keeps things clean for authentication purposes and for the execution of workflows triggered from vRA via the Event Broker Service (EBS), which is also outside the scope of this post.

Creating a New Folder Structure and Empty Workflow

For us to do anything meaningful at this stage in vRO, we need to create a new workflow, and we also need a folder in which to place it. Although it is not technically required in order to create new folders and workflows in vRO, I normally switch the vRO client from “Run” mode into “Design” mode. I tend to work in “Design” mode most of the time when writing workflows in vRO, as it gives you access to create and define actions and configuration elements/attributes, as well as import and browse resources.

To switch to design mode, click the “Run” dropdown list to the right of the “VMware vRealize Orchestrator” logo, and select “Design”, as shown in figure 7 below.

Figure 7: Switching between vRO client modes

To create a new folder:

Ensure that the “Workflows” tab is selected (blue workflow icon)

Right click on the vRO server name and click “Add folder”

Specify a name for the new folder. I normally create a folder called “Sandpit” for workflows I use to experiment

Figure 8 below shows the process of adding a new folder in the workflow tab of vRO.

Figure 8: Add a new folder in vRO for workflows

To create a new workflow in the “Sandpit” folder:

Right-click the new “Sandpit” folder

Click “New workflow”

Enter “Select Catalog Item” as a name for the new workflow

Figure 9 below shows how to create a new, blank workflow in the “Sandpit” folder

Figure 9: Creating a new blank workflow

The workflow editor opens in full screen. The workflow editor can look quite daunting when you first start using vRO. Don’t be alarmed. I promise it is not as bad as it seems. For those new to vRO, have a look at figure 10 below where I explain key aspects of the workflow editor’s “General” tab.

Figure 10: Workflow Editor overview

Right at the top of the workflow editor is the workflow name in bold. This might at first seem like redundant information, as there is a “Name” field (marked as 3 in the image). However, it is useful when you work in other tabs, as it allows you to glance at the workflow name and its exact spelling if you need to add it somewhere in code (for example, while logging).

The “General” tab, which is currently selected.

The workflow name. This field is editable and can be used to change the workflow name within the workflow editor if required.

The workflow ID. Although it is not relevant to this post, the ID is important as it identifies the workflow internally to vRO. You also need the ID to make calls directly to the workflow from the vRO REST API, as the REST API URL contains the ID. Note that the workflow ID is generated when the workflow is first created and is unique to the workflow. Even when the workflow is exported and imported into another vRO instance, the ID remains the same.
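For example, a workflow run can be started over REST by POSTing to the workflow’s executions URL; the server name and ID below are placeholders:

POST https://vro-server.example.com:8281/vco/api/workflows/<workflow-id>/executions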

The “Version” field is used to track changes in the current version of the workflow in relation to previous versions. vRO contains a version control system that allows you to roll back to earlier versions of your workflow and compare differences between versions. Versioning is also critical when importing exported workflow packages: by default, vRO overwrites older versions of workflows already in the vRO library with the newer versions contained in the package being imported, based on the workflow version numbers. vRO also uses workflow version numbers when synchronising workflows between different vRO instances (for example, Dev -> Test -> QA -> Prod). As a general rule, always increment at least the minor version manually, and enter notes about the changes made in the current version, before saving changes to a workflow. If you do not increment the version number manually, vRO, by default, prompts for a version increment when saving a changed workflow. However, the automatic prompt simply increments the version without giving you the chance to enter notes about the changes made.

Use the description field to provide a brief summary of what the workflow does. You can also include details such as the author name, or anything else you would like to record.

The attributes section of the “General” tab is where you can create attributes (think variables) that are accessible to all objects and elements within the workflow. More on attributes later.

The inputs tab is where we define input parameters to the workflow.

The outputs tab is where we define the output data that the workflow returns when it completes. These values are used as inputs to other processes (for example the vRA EBS) and workflows in vRO.

The “Schema” tab is where we define workflow elements. We normally spend most of the time during workflow development in this tab.

The “Presentation” tab is where we define the workflow input parameter presentation. The presentation is used in workflow forms when a workflow is run within the vRO client. This tab also enables us to set mandatory fields and define input data validation.

Building the Workflow

If you opted to skip the section above, you will not have a new workflow with which to complete the steps in this post. Please open the vRO client and create a new workflow called “Select Catalog Item”.

Now that everyone knows how to create a new empty workflow, we can get started on building out our workflow. Before starting a new workflow, or any other coding project such as a PowerCLI script, it is a good idea to write down what you would like to achieve and how you plan to go about creating the workflow or script.

At a high level, our workflow needs to programmatically select a vRA catalog item based on the value of a vRA custom property called “vvcp.blueprint.guestOSType”. Therefore the workflow needs to have the following components and tasks:

An input parameter to request the desired OS from the user as a mandatory input

A scriptable task that writes to the vRO system log that the workflow is starting, and confirms the input parameter value by logging it to the system log as well. The scriptable task then evaluates the input parameter and assigns its value to a local workflow attribute

A scriptable item that gets the catalog item matching the guest OS type given by the workflow input parameter. The scriptable item runs through the following steps to complete the catalog item selection process:

Requests a list of available catalog items from vRA

Returns the associated blueprint for each catalog item in turn

Gets the blueprint properties and finds the “vvcp.blueprint.guestOSType” property

Checks the value of the property and, if a match to the input parameter is found, writes the catalog item to a workflow attribute via a scriptable item output parameter

A scriptable item that sets the workflow success state attribute to true, sets the workflow output parameter to the selected catalog item, and logs to the system log that the workflow run is ending.

Setting Workflow Attributes

Our workflow requires local attributes to store information and data. A local attribute is like a variable that all objects in the workflow can read and write. An attribute has a name, a type, a value and a description. We need to define at least the following workflow attributes in the workflow’s “General” tab to get started:

Attribute Name | Type | Value | Description
attWorkflowName | String | Select Catalog Item | The name of the workflow. Used in vRO system logging messages
attErrorCode | String | Not set | The workflow stores exception messages in this attribute if an exception occurs
attSuccess | Boolean | False | Set to true once we are sure that the workflow has completed all of its tasks successfully. Workflows that call this workflow can evaluate its success state using this attribute value
attvCACCafeHost | vCACCAFE:VCACHost | Set to your vRA CAFE server vRO inventory object | Used by the workflow scriptable item elements to request information from vRA
attCatalogItem | vCACCAFE:CatalogItem | Not set | Once a catalog item is selected, it is stored in this attribute
attGuestOSType | String | Not set | Assigned a value in the first scriptable item from the inGuestOSType input parameter

Figure 11 shows the attributes configured on the workflow.

Figure 11: Workflow attributes configured in vRO

Setting Workflow Input Parameters

For the workflow to be able to select a catalog item based on an operating system type, we need to have a way to tell the workflow which operating system we need. We use a workflow input parameter for this purpose. Input parameters are similar to workflow attributes. An input parameter has a Name, Type and Description. It does not have a value field as the value is set by workflow inputs when the workflow is run.

Setting Workflow Output Parameters

When the workflow completes, it needs to be able to pass information back to the system or workflow that called it. This information is passed back via output parameters. If another workflow has called our workflow, the calling workflow can use the values in the outputs of our workflow. An output parameter has a Name, Type and Description. It does not have a value field as the value is set by workflow elements during the workflow execution.
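For the workflow built in this post, the parameters amount to the following (the output parameter name is my own choice for illustration; the post only describes it as the selected catalog item):

Parameter | Direction | Type | Description
inGuestOSType | Input | String | The guest OS type requested by the user (mandatory)
outCatalogItem | Output | vCACCAFE:CatalogItem | The selected catalog item, copied from the attCatalogItem attribute when the workflow completes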

Building the Workflow Schema

We can now start to create the schema for the workflow. Click on the “Schema” tab to access the schema editor. The workflow schema is made up of elements that are dropped on the schema canvas into the process flow diagram. The elements that can be dropped onto the canvas are listed in the left pane of the editor. These elements are grouped into categories. From the general category, drag a “Scriptable task” onto the canvas between the green start marker and the grey end marker. Then, hover the mouse cursor over the new “Scriptable task” element, and click the yellow pencil to open the scriptable task editor, as shown in figure 14 below:

Figure 14: The workflow schema editor

The scriptable item editor opens. The editor has the following tabs:

Info: Provides several fields. Most of the fields on the Info tab are outside the scope of this post; we are only interested in Name and Description. Figure 15 below shows the Info tab configuration for the “Start workflow” item.

In: The “IN” tab is where we select and map workflow attributes and workflow inputs to new temporary local parameters that exist within the scriptable item. Follow the steps in the image below to select the “attWorkflowName” workflow attribute and the “inGuestOSType” workflow input parameter as inputs to the scriptable item. Figure 16 shows the IN tab parameters, as well as the chooser dialog used to select available workflow parameters and attributes as inputs to this scriptable item.

Figure 16: The “in” tab configuration for the “Start workflow” scriptable item. Also displayed is the chooser dialog that is used to select available workflow parameters and attributes as inputs to this scriptable item

As shown in figure 16 above, the “IN” tab parameters have four fields. The local parameter is only accessible within the scriptable item itself, and its name can differ from that of the mapped workflow source parameter. However, when mapping a local parameter to a workflow source parameter using the “Chooser” dialog, the name of the local parameter is automatically set to match the source parameter name, provided that the name is not already in use by another local parameter.

Out: The “OUT” tab is where we select and map any local scriptable item parameters to workflow attributes and workflow outputs. As we would like to update the values of two of the existing workflow attributes (attSuccess and attGuestOSType) with the values of local scriptable item parameters, we map the local parameters of each to their respective workflow source attributes as shown in figure 17 below.

Exception: The “Exception” tab is simple, and I have seen many people ignore it. Do not fall into the habit of doing so. The exception tab allows us to bind a workflow attribute to the scriptable item, to be used as a storage location for any exception messages that occur during the execution of the script in the scriptable item.

Click on the “Not set” link and select the “attErrorCode” attribute from the list in the “Chooser…” as displayed in figure 18 below.

Visual Binding: The “Visual Binding” tab is very helpful for quickly determining which local parameters are mapped to which workflow inputs, outputs and attributes. In the image below, we can see that the workflow input parameter “inGuestOSType” is mapped to an IN parameter, “inGuestOSType”, of the “Start workflow” scriptable item. We can also see that the workflow attribute “attWorkflowName” is mapped to an IN parameter, “attWorkflowName”, of the same scriptable item.

The “Start workflow” scriptable item also has the two local OUT parameters, “attSuccess” and “attGuestOSType”, mapped to workflow attributes. Figure 19 shows the visual binding tab for our “Start workflow” scriptable item.

Figure 19: The “Visual Binding” tab for the “Start workflow”

Scripting: The “Scripting” tab of the scriptable item element is where the real action happens, as this is where we insert JavaScript code. Throughout this post, I attempt to explain all the code in detail. The scripting tab also gives us quick access to the API Explorer, a tool we frequently use to understand which objects are available, what their properties are, and what their methods return. Figure 20 shows the Scripting tab with the API Explorer visible on the left-hand side.

Figure 20: The “Scripting” tab of the “Start workflow” scriptable item. The API Explorer is visible on the left

Coding the Start Workflow Scriptable Item

Let’s start with the code. Go ahead and paste this block of code into the scripting tab for our “Start workflow” scriptable item.

//These are comments and ignored by the JavaScript interpreter
/* These are multi-line comments and also
Ignored by the JavaScript interpreter */

System.log() accepts a string between the parentheses () and logs the input string to the vRO system log. For example, System.log("Hello World!") logs the text "Hello World!" to the system log.

if (inGuestOSType != "") {

The if statement checks to see if the inGuestOSType input parameter contains an empty string. If it is not empty, we continue by assigning the value of inGuestOSType to the workflow attribute attGuestOSType:

attGuestOSType = inGuestOSType;

If it is empty, we throw an exception, and the default exception handler (which is implemented later in this post) is invoked.
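Putting those pieces together, the complete “Start workflow” script looks something like this (the log message wording is my own):

System.log("Workflow: " + attWorkflowName + " starting");
System.log("Requested guest OS type: " + inGuestOSType);

if (inGuestOSType != "") {
    // Store the input value in the workflow attribute for later elements to use
    attGuestOSType = inGuestOSType;
} else {
    // Trigger the default exception handler (implemented later in this post)
    throw "Input parameter inGuestOSType is empty";
}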

Configuring and coding the “Get Catalog Item” Scriptable Item

Back on the workflow schema tab, drop another Scriptable item between the “Start workflow” element and the grey end element. Edit the scriptable item and ensure the following settings are applied, as shown in Table 4:

Table 4: “Get Catalog Item” Scriptable Item configuration settings

After the “Get Catalog Item” scriptable item has been configured, the workflow schema should match what is shown in figure 22.

Figure 22: Select Catalog Item Workflow Schema

Let’s walk through the process of writing the code. It would be easy for me to just paste the code and then talk through it, but I would really like to try and convey the thought processes I have when writing code, especially code that interacts with vRA. When you look at a block of code that someone else wrote, it always seems that they knew exactly what they were doing, and their code seems so good and complex; I always think that surely the person who wrote it is a genius. However, when I code, it takes a lot of trial and error, yet in the end it always looks like polished code that was written line by line in five minutes. Let me assure you, it never takes five minutes, and it is never written line by line without having to go back and make changes. I am sure everyone thinks the same about code written by others!

Ok, so where do we begin? We know that we must retrieve a list of available catalog items from vRA, so let’s start there.

The vRA plugin that ships with vRO 7.2 should have functions available to retrieve objects such as catalog items from vRA. So, let’s open the API explorer and search for the term “findCatalog”. Ensure that “Scripting class”, “Attributes & methods” and “Types & enumerations” are all checked. As shown in figure 23 below, the vRO API explorer has found a method that might just do the job.

One of the methods found is called findCatalogItems(), and it is part of the vCACCAFEEntitiesFinder scripting class. This method should do the job. Select it from the list and click the “Go to selection” button. Then click the “Close” button on the search dialog window.

Back in the script editor, notice how the API Explorer has selected the findCatalogItems method as shown in figure 24:

The API Explorer gives us all the information we need to work with this method. The signature, vCACCAFECatalogItem[] findCatalogItems(vCACCAFEHost host, String query), tells us that the method returns an array of vCACCAFECatalogItem objects (vCACCAFECatalogItem[]). We know it returns an array because the two square brackets ([]) following the vCACCAFECatalogItem type indicate an array. We also know from the signature that the method name is findCatalogItems and that it accepts two parameters: a host object of type “vCACCAFEHost” and a query of type “String”. Finally, just to spell it out in plain English, the API Explorer tells us the “Return Type” is Array of vCACCAFECatalogItem.

Why does it matter that we know the return type, you may ask? Well, methods return objects, and each object has a set of properties and methods, depending on its type. To know which properties are accessible on an object returned from a method, we need to know the object type, so that we can inspect that type in the API Explorer.

Before we start digging into the object type “vCACCAFECatalogItem”, let’s just write a bit of code first.

We need to define a new variable to contain the object that the findCatalogItems() method returns. We do this with the “var” keyword.

To start off, we declare a new variable called catalogItemList and, at the same time, initialise it to null. Although it is not necessary to initialise variables when you declare them, I want to make it syntactically clear that the variable starts out null, as we will be testing whether it is still null at a later stage, after attempting to assign it a value.

var catalogItemList = null;

Now that we have a variable that is null, we can go ahead and try to assign a value to it by calling the vCACCAFEEntitiesFinder.findCatalogItems() method, which should return an array of vCACCAFECatalogItem[] if successful. The call looks something like this (an empty query string is assumed here to match all catalog items):
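catalogItemList = vCACCAFEEntitiesFinder.findCatalogItems(attvCACCafeHost, "");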

After this code executes successfully, the catalogItemList object that we have created and initialised to null should now contain the result returned from the vCACCAFEEntitiesFinder.findCatalogItems() method.

We know that the findCatalogItems() method returns an array of objects. This array could contain one object, or it could contain many. At this point, we do not care what type of objects the array might contain; we just need to test the array size first to see whether the findCatalogItems() method returned a result.

When looking in the API Explorer, we find that there is a JavaScript scripting class called “Array”. Inspecting this class, we can see the properties and methods shown in figure 25.

Figure 25: Array properties and methods in the API Explorer

As shown in figure 25 above, we can get the size of any array by inspecting its length property. In our case, if the findCatalogItems() method returned at least one catalog item in an array, that array has been assigned to our variable catalogItemList. We should therefore be able to access the property “catalogItemList.length”.

We use an if-statement to determine whether the catalogItemList object passes two tests. The first test (evaluation) determines whether the catalogItemList object is not null. The second determines whether the length of the array is greater than 0. If the length property does not exist (catalogItemList is probably not of type “Array” and therefore has no length property), the evaluation returns “false”.

The AND operator (&&) tells JavaScript that both conditions in the if-statement need to evaluate to “true” for the if-statement to return true. If either one of the evaluations returns “false”, the if statement also returns “false”. So, if both tests return “true”, JavaScript executes the code block between two curly brackets (or braces, depending on where you are from).

We use an “else” statement after the code block’s closing curly bracket to handle the case where the if-statement turns out to be false (catalogItemList is null or its length is 0), and we use the “throw” keyword to throw an exception, as no catalog items were returned from vRA. The default exception handler in the workflow handles the exception, although we implement the default exception handler later. As a sketch:
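if (catalogItemList != null && catalogItemList.length > 0) {
    // We have at least one catalog item; the rest of the script goes here
} else {
    // No catalog items came back from vRA; invoke the default exception handler
    throw "No catalog items were returned from vRA";
}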

Running the workflow at this stage using the “Debug” button in the workflow editor should produce the following in the system log (which can be viewed in the Logs tab of the workflow editor):

Ok, we are now ready to start digging into the array that came back from vRA. From now on, all code that we write lives within the if-statement’s curly brackets, as we know there that we are dealing with an object that is not null and contains at least one element. Arrays are interesting things, and I find them rather difficult to explain to others. For readers from a PowerShell/PowerCLI background, arrays should make much sense. For those new to scripting and programming, arrays might take a while to get your head around. I am not going to try to explain arrays here, as there are many articles on the internet that do a much better job of explaining the concept than I ever could.

By nature, arrays can contain multiple elements (objects), and each object within the array has its own set of properties and methods. We therefore have to evaluate each object in turn. Many people would do this with an index-based for loop, along the lines of this sketch:
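for (var i = 0; i < catalogItemList.length; i++) {
    var catalogItem = catalogItemList[i];
    System.log("Catalog Item Name: " + catalogItem.name);
}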

The code above is perfectly legal JavaScript, and sometimes, when you need the index number of the current object you are processing within an array, it might even be necessary. However, I like clean and simple code, and while the code above might look geeky and clever, it is neither simple nor clean. I recommend not using code like that unless it is necessary. There is a much better way, sketched below:
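for each (var catalogItem in catalogItemList) {
    System.log("Catalog Item Name: " + catalogItem.name);
}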

The for each statement above produces the same result as the for loop demonstrated before it, yet it is much easier to read and write. Keep your code simple; you will thank yourself when you come back to look at it again months later!

In the for each loop, we declare a new variable called catalogItem that only exists in memory for as long as we stay in the loop. Every time the code loops round to the top, the next item in catalogItemList is assigned to the catalogItem variable. The loop therefore runs as many times as there are objects within the catalogItemList array, and each line of code within the curly brackets executes for each object in the array in turn.

You might be wondering what the catalogItem.name is all about. How did I know that would work? Again, we have to look at the API Explorer. We are aware from the signature of the findCatalogItems() method that it returns an array of type vCACCAFECatalogItem. Searching for vCACCAFECatalogItem, we see a scripting class for it as shown in figure 26 below:

Figure 26: API Explorer detail of vCACCAFECatalogItem

The API Explorer reveals all of the properties and methods available on the object held by our for each loop’s catalogItem variable, which has one of the elements of the catalogItemList array assigned to it. All of the elements within catalogItemList are of type vCACCAFECatalogItem, which therefore also makes our catalogItem of type vCACCAFECatalogItem.

In the line of code that reads System.log("Catalog Item Name: " + catalogItem.name);, we simply access the “name” property of the catalogItem that the loop is currently evaluating. If you run the workflow in debug mode, you should see the following log output:

The log entries clearly show that we can read the name property of each of our vRA catalog items. That is a good start. However, the custom property “vvcp.blueprint.guestOSType” is attached to the backing blueprint of each catalog item, not to the catalog item itself. We therefore need to work on obtaining the blueprint for each catalog item in turn within our for each loop.

NOTE: Just a heads-up, we are going to end up with a few nested for each loops here!

Now, this is where things can get tricky with vRA and vRO. It would have been great if the vCACCAFECatalogItem object type had a method to get the vRA blueprint; something like catalogItem.getBlueprint() would have worked a treat. Sadly, that is not the case. The API Explorer does not list a method that could help us, so we need to have a stab at this manually. Fortunately, vRO can help us out a little in our endeavour to find the blueprint.

Save the workflow and exit the editor. It might give you a validation failure, but just ignore that for now. We need to browse the vRO inventory to look at the catalog item and composite blueprint vRO object representations of the actual vRA catalog items and composite blueprints. Our aim is to identify property values on the catalog item that correspond to property values on the blueprints.

To view the vRO inventory of the catalog items, ensure the vRO client is in “Design” mode (1), then click the inventory tab (2), expand vRealize Automation (3), expand the vRA server (4), expand Catalog (5), and select a catalog item. I have selected CentOS 7 64-Bit, as shown in figure 27 below.

Figure 27: vRO Inventory – Catalog Item Properties

From figure 27 above, we can see the providerBinding property has a value of “Binding id: virtualvcp!::!CentOS764Bit …” That looks like something we should be able to use to identify the blueprint. Make a note of the providerBinding property value before moving on.

Next, we need to browse the composite blueprint inventory object in vRO to determine if the same value exists in any property of the blueprint object. Under the server, expand “Administration”. Then expand “Composite Blueprints” and select the blueprint that matches the catalog item’s providerBinding id by name, in my case this was the CentOS 7 64-bit blueprint, as shown in figure 28 below.

Figure 28: vRO Inventory – Composite Blueprint Properties

From figure 28 above, we can see that the blueprint externalId matches the providerBinding Binding Id property value in the catalog item. We should be able to use these two properties to associate a catalog item with a blueprint in vRO.

Open the workflow in the editor again, and edit the “Get Catalog Item” scriptable item. In the API Explorer, browse to or find the vCACCAFECatalogItem Scripting Class. In the scripting class, we can see that VMware has provided us with a getProviderBinding() method, which returns an object of type vCACCAFEProviderBinding as seen in figure 29 below.

Figure 29: API Explorer – vCACCAFECatalogItem

Clicking on the vCACCAFEProviderBinding link in the API Explorer takes us directly to the vCACCAFEProviderBinding Scripting class in the explorer. Here we can see that it has a method called getBindingId() which returns a string, as seen in the image below. I think this is what we need, but we should test it using System.log before we know that we can trust it!

Figure 30 below shows vCACCAFEProviderBinding in the API Explorer.

Figure 30: API Explorer – vCACCAFEProviderBinding

Back in our script editor, we enter the following line of code below the line that reads System.log("Catalog Item Name: " + catalogItem.name); (the log wording is my own):
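// Log the binding id so we can verify what comes back
System.log("Provider Binding Id: " + catalogItem.getProviderBinding().getBindingId());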

Now that we have a way of obtaining the provider bindingId for our catalog item, we need to look at getting a blueprint that corresponds with the Id. We use the API Explorer to search for methods that return a composite blueprint, as seen in figure 31 below.

Figure 31: API Search – getCompositeBlueprint()

The search turns up a method called vCACCAFEEntitiesFinder.getCompositeBlueprint(). Select that method and click “Go to selection”, then close the search window.

Back in the API Explorer, the getCompositeBlueprint() method signature tells us that the method expects two parameters, a vCACCAFEHost and a blueprintId, which is the Id of the composite blueprint. Figure 32 below shows the information for the getCompositeBlueprint() method.

Now, if we look at the inventory object in vRO of the composite blueprint, we notice that the composite blueprint ID is CentOS764Bit as shown in figure 33.

Figure 33: vRO Inventory - CompositeBlueprint Properties

However, the bindingId that we currently have is:

virtualvcp!::!CentOS764Bit

We need to pass an id as a parameter to the getCompositeBlueprint() method; however, passing the current bindingId will not work, as it contains “virtualvcp!::!” in the string. We therefore need to extract the composite blueprint Id, CentOS764Bit, from the bindingId string. Fortunately, there are characters in the bindingId that we can use to split the string: the exclamation marks (!) can serve as delimiters to form a new array object.

The “String” object in JavaScript has a handy method called “split”. Since our bindingId object is of type “String”, we can use the split method on it as well. The split method takes any string and splits it into an array, based on a delimiter of your choosing. For example, a line along these lines:
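var bindingId = catalogItem.getProviderBinding().getBindingId().split("!")[2];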

What the code above does is tell JavaScript to take the “String” result from the getBindingId() method and split the string on “!”. It then returns element index 2 (the 3rd element when counting from 0) and assigns it to bindingId.

With the change made to our existing code, the current state of the “Get Catalog Item” scriptable item script looks something like this:
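// A sketch of the script so far (log wording is my own)
var catalogItemList = null;
catalogItemList = vCACCAFEEntitiesFinder.findCatalogItems(attvCACCafeHost, "");

if (catalogItemList != null && catalogItemList.length > 0) {
    for each (var catalogItem in catalogItemList) {
        System.log("Catalog Item Name: " + catalogItem.name);
        var bindingId = catalogItem.getProviderBinding().getBindingId().split("!")[2];
        System.log("Blueprint Id: " + bindingId);
    }
} else {
    throw "No catalog items were returned from vRA";
}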

Now that we have the ID of the composite blueprint, we can call the vCACCAFEEntitiesFinder.getCompositeBlueprint() method. We pass in the attvCACCafeHost and bindingId objects as parameters, and assign the result of the method to a new variable called blueprint:
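var blueprint = vCACCAFEEntitiesFinder.getCompositeBlueprint(attvCACCafeHost, bindingId);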

The getCompositeBlueprint() method returns an object of type “vCACCAFECompositeBlueprint”. Therefore, provided that a blueprint was found in vRA that matches the input ID, our blueprint variable is now of type vCACCAFECompositeBlueprint. Looking at vCACCAFECompositeBlueprint in the API Explorer shows us a list of properties and methods that we can use, as shown in figure 34.

Figure 34: API Explorer – vCACCAFECompositeBlueprint

Before we continue with getting the properties of the blueprint object, we need to check to see if the getCompositeBlueprint() method returned something. We can do this with a simple if-statement:
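// Only proceed if a blueprint was actually returned
if (blueprint != null) {
    System.log("Blueprint name: " + blueprint.name);
    // continue working with the blueprint here
}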

For logging purposes, we are interested in the object’s name property. Also available is a property called “properties”, which should contain the blueprint’s properties. However, we can also see that there is a method called getProperties(), which returns an object of type java.util.Map. Searching for “java.util.Map” in the API Explorer yields no results, as it is undocumented there. However, we can find more details about this object and how it works on the internet. The following page explains the “java.util.Map” type better than I ever could:

Based on the log output, we know we are dealing with a HashMap in this instance. A HashMap is a collection of key-value pairs, so we need to iterate through it to get each of the key names. For that, we set up another “for each” loop:
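// Iterate over the keys of the java.util.Map returned by getProperties();
// vRO's JavaScript engine can iterate Java collections with for each
var blueprintProperties = blueprint.getProperties();
for each (var key in blueprintProperties.keySet()) {
    System.log("Property: " + key + " -- value: " + blueprintProperties.get(key));
}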

The logged output includes the name of our vRA custom property (vvcp.blueprint.guestOSType). That is all well and good; however, what does not make any sense is the value. It is not what you probably expected to see. The truth is, we have yet another object, and we need to drill into it to extract the actual value of the custom property. The reason is that in vRA, a custom property has a name, a value, and some other properties as well, such as the following boolean-valued properties: “Encrypted”, “Overridable” and “Show in Request”. All of these have to be accessible in vRO, so the vRA plugin in vRO presents them in the form of an object.

We can see from the logged output that the key value is an instance of type “vCACCAFEComponentFieldValue”. A quick search for this type in the API Explorer provides the information shown in figure 35.

Figure 35: API Explorer – vCACCAFEComponentFieldValue

The vCACCAFEComponentFieldValue type provides a method called getFacets(), which has a return type of java.util.Map. So, what we have here is a blueprint with a set of custom properties; each property has a name and a value, and each value is in turn another set of properties. We should, therefore, be able to use the same methodology to retrieve these properties as we did with the blueprint properties.

Before we continue to build this out, let us review what our code should look like at this moment:
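Something along these lines (the variable names are mine and may differ from the original):

for each (var catalogItem in catalogItemList) {
    var bindingId = catalogItem.getProviderBinding().getBindingId().split("!")[2];
    var blueprint = vCACCAFEEntitiesFinder.getCompositeBlueprint(vCACCafeHost, bindingId);
    if (blueprint != null) {
        var blueprintProps = blueprint.getProperties();
        for each (var key in blueprintProps.keySet().toArray()) {
            var fieldValue = blueprintProps.get(key); // vCACCAFEComponentFieldValue
            var facets = fieldValue.getFacets();      // java.util.Map
            for each (var facetKey in facets.keySet().toArray()) {
                System.log(" -- KEY : " + facetKey + " -- VALUE : " + facets.get(facetKey));
            }
        }
    }
}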

At this point, you can be forgiven for feeling the need to give up, as it seems that we have yet again ended up with just an object rather than a value that we can use. However, bear with it; we are almost there.

Each value has now returned a new object of type vCACCAFEConstantValue. The API Explorer describes this type as shown in figure 36.

Figure 36: API Explorer – vCACCAFEConstantValue

Ok, so the object does have a method, called getValue(). Let’s update our System.log() line (continuing the sketch from above) to use this method:
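System.log(" -- KEY : " + facetKey + " -- VALUE : " + facets.get(facetKey).getValue());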

The output now shows yet another object; this time it is of the type that matches the property type in vRA. In this case, it is either of type vCACCAFEBooleanLiteral for boolean property types or vCACCAFEStringLiteral for String property types. However, we can see the values for each of these objects in the log as “ -- VALUE : xxxxx”. We should, therefore, be able to append the “value” property to our line of code, so that it reads:
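System.log(" -- KEY : " + facetKey + " -- VALUE : " + facets.get(facetKey).getValue().value);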

We have almost completed our script for the “Get Catalog Item” scriptable item. However, we still need to make a few more changes to the code. First, we need to ensure that we are only requesting the fieldValueKeys for the “vvcp.blueprint.guestOSType” property. We can achieve this with an if-statement:
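For example, wrapped around the facet loop from the sketch above:

if (key == "vvcp.blueprint.guestOSType") {
    // only inspect the facets of the property we care about
}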

Now we need to match the value of the “vvcp.blueprint.guestOSType” property of each blueprint against the attGuestOSType workflow input parameter. If it matches, we select the catalog item by assigning the catalog item currently being processed in the for-each loop to the workflow attCatalogItem attribute.

Once the script has found a matching catalog item, we would like the script to exit, as there is no need to continue iterating through the remaining catalog items. Once we find our catalog item, we need to break out of the “for each (var catalogItem in catalogItemList){…}” loop. As we are using nested “for each” loops, breaking out of a child loop using the “break” JavaScript keyword won’t cause the parent loop to break, so we need to set up a variable to monitor the success of a child loop.

To begin, we define a variable at the top of the script called “catItemFound”, and we set that to “false”. Then at the beginning of the “for each (var catalogItem in catalogItemList){}” loop, we use a simple if statement to test the value of the catItemFound variable. If it returns true, then the break keyword stops the execution of the loop. If it returns false, the loop continues. We can then set the value of catItemFound to true anywhere in any of the nested loops. Obviously, we would only set the value to true once we are confident that we have found a result. Below is our completed script code:
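As a hedged reconstruction (the getProviderBinding() call is my assumption; the attribute and variable names are those used throughout this post), the completed script might look something like this:

var catItemFound = false;

for each (var catalogItem in catalogItemList) {
    if (catItemFound) {
        break;
    }
    // Extract the composite blueprint ID from the catalog item's binding ID
    var bindingId = catalogItem.getProviderBinding().getBindingId().split("!")[2];
    var blueprint = vCACCAFEEntitiesFinder.getCompositeBlueprint(vCACCafeHost, bindingId);
    if (blueprint != null) {
        var blueprintProps = blueprint.getProperties();
        for each (var key in blueprintProps.keySet().toArray()) {
            if (key == "vvcp.blueprint.guestOSType") {
                var facets = blueprintProps.get(key).getFacets();
                for each (var facetKey in facets.keySet().toArray()) {
                    // Compare the actual property value with the workflow input
                    if (facets.get(facetKey).getValue().value == attGuestOSType) {
                        System.log("Found matching catalog item: " + catalogItem.getName());
                        attCatalogItem = catalogItem;
                        catItemFound = true;
                    }
                }
            }
        }
    }
}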

With the attCatalogItem workflow attribute set, we are free to continue with the workflow. Drag two scriptable items onto the workflow schema and change their names to “Set Success” and “End workflow”, as demonstrated in figure 37 below.

Figure 37: Workflow Schema

We can use the “Set Success” scriptable item to set the workflow success state attribute (attSuccess) to “true”. The scriptable item has no IN attributes and only one OUT attribute, namely attSuccess. Figure 38 shows the visual binding for our “Set Success” scriptable item.

Figure 38: “Set Success” Visual Binding

The script consists of a single line of code:

attSuccess = true;

With the workflow success state set to true, we can continue with the “End workflow” scriptable item. We can use the “End workflow” scriptable item to:

Set the output parameter values to their matching workflow local attributes

Set the workflow success state

Log that the workflow has ended

We need the following configuration for the “End workflow” scriptable item:

Figure 39 shows the configuration for the “IN” tab of the “End workflow” scriptable item.

Figure 39: “End Workflow” scriptable item “IN” tab configuration

Figure 40 shows the configuration for the “OUT” tab of the “End workflow” scriptable item.

Figure 40: “End Workflow” scriptable item “OUT” tab configuration

Figure 41 shows the configuration for the “Exception” tab of the “End workflow” scriptable item.

Running the workflow at this point, with an input value of “windows” for inGuestOSType, results in the workflow completing with the output parameters set as shown in figure 43.

Figure 43: Workflow run status

So far, so good. The workflow is working as expected. However, we have not done anything to catch exceptions. Fortunately, for a simple workflow such as this, the default exception handler provided by vRO should be more than sufficient. We can use the default exception handler to set the workflow success state to false and to log the exception.

As this workflow is intended to be called from other workflows, we have the option to either end the workflow on an exception or end it gracefully via the “End workflow” scriptable item. Ending the workflow on an exception, using the exception handler’s default configuration, causes the workflow run to be shown as “failed” in vRO (marked with a red X). Conversely, ending the workflow gracefully via the “End workflow” scriptable item still gives us the chance to log the exception, mark the workflow success state as false, and pass the exception information to the calling workflow (if called by another workflow). However, the workflow run in vRO is shown as having completed successfully. That might sound like an undesired outcome, but if your workflow is called by a master workflow, this behaviour might be beneficial, because the calling workflow can then decide what to do about the exception. There is no right or wrong way to go about handling exceptions, but I would go as far as to say that failing to handle exceptions is the wrong way. It does not matter how we handle the failure; it just matters that it is handled in some meaningful way.

In this post, we handle the exception by exiting the workflow gracefully via the “End workflow” item. If the workflow you are working on is never going to be called by any other workflow, then I would suggest failing the workflow on an exception, so that if an exception occurs, the workflow run in vRO is listed as failed. It all depends on the circumstances and the goal you are trying to achieve.

In the schema editor, browse the “Generic” category and drag a “Default error handler” object onto the canvas as shown in figure 44.

Figure 44: Schema editor – Default error handler

The object has one property: an error code on the Exception tab. If an attribute in the workflow is already used by other items’ output exception bindings, that attribute should automatically be configured on the default error handler item. In our case, as shown in figure 45, it has selected attErrorCode, which is what we want.

Figure 45: Default error handler – Output exception binding

From the logging category on the left-hand side of the schema editor, drag a “System Error” item between the Default error handler item and the Exception item, as shown in figure 46 below.

Figure 46: Workflow Schema – “System error” item placement

The “System error” item is a scriptable item that is preconfigured to log an “error” message to the system log, rather than the “Information” message that results from using System.log(). We use the System.error() method to log whatever is in the attErrorCode attribute, which contains the exception message.

The “System error” item also gives us an opportunity to add other code. We add a line of code to set the attSuccess attribute to false since an exception was raised somewhere in the workflow.
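In script form, that boils down to two lines:

System.error(attErrorCode);
attSuccess = false;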

Configure the “System error” item as follows:

Figure 47 shows the configuration for the “IN” tab of the “System error” item.

Figure 47: “System error” item “IN” tab configuration

Figure 48 shows the configuration for the “OUT” tab of the “System error” item.

Figure 48: “System error” item “OUT” tab configuration

Figure 49 shows the configuration for the “Exception” tab of the “System error” item.

Figure 49: “System error” item “Exception” tab configuration

Figure 50 shows the configuration for the “Scripting” tab of the “System error” item.

Figure 50: “System error” item “Scripting” tab configuration

If an exception occurs in the workflow’s current state, the workflow exits and is marked as “failed” in vRO. No output parameters are set, as output parameters are set in the “End workflow” item, which is bypassed by the exception handler. To make the exception handler use the “End workflow” item, simply drag and drop the “Exception” item (red exclamation mark) onto the “End workflow” item, as shown in figure 51.

However, now the exception does not cause the workflow to exit immediately and be marked as failed in vRO. Instead, the run is marked as successful by vRO, while the attSuccess attribute is set to “No” or “False” and the exception message is logged. Finally, the workflow output parameter outSuccess displays “No”, and the outErrorCode parameter contains the exception message. These parameters can be read by calling workflows. A workflow run where an exception has occurred is shown in figure 53 below.

Figure 53: Failed workflow run exits gracefully

As stated at the beginning, this post implemented option 2 of the 3 options, where the custom property was applied to the blueprint rather than to the blueprint components. If you would like to know more about reading custom properties from individual blueprint components, please leave a comment below, and I will write a blog on how to achieve that.

I have been dabbling in the world of vRO plugin development. Yes, I know, vRO is a product that doesn't get much love from the VMware community, and I do not think that is fair. People seem to have decided that the product is too complicated and, where possible, would rather write a PowerCLI script to automate things. The truth is that when you take a little bit of time to look at vRO, you will find that developing vRO workflows is not that complicated, and the possibilities are endless. Yes, I realise the irony of saying vRO isn't that complicated in a blog post that is largely targeted at my future self for when I run into this issue again! So, if you are finding workflow development too complicated a task, this post is not for you, as I doubt you will be interested in plug-in development.

I have been developing workflows for a few years, and up until now, I have not come up against an automation problem with vRealize that I could not solve in some way with vRO. However, I sometimes find that the plugins available for vRO can be restrictive in some ways, and workarounds have to be made during the vRO workflow development process. These workarounds are, in my opinion, some of the things that can make workflow development more complicated than it needs to be. If you are unfamiliar with the term, plugins are the things that provide the JavaScript objects and classes that you can use in your workflows in vRO. They, in turn, interact with external systems via either Java SDKs or REST APIs. So, to further enhance my abilities with vRO beyond simply being able to write good, functional workflows, I have been thinking about plugin development. What if I could write vRO plugins, especially for APIs for which there aren't any published plugins available? Moreover, what if I could develop a plugin to provide those features that the vRA plugin does not provide, or provides in a less than desirable way?

So right from the outset, I hit my first problem. vRO plugin projects are built with Apache Maven, using a published archetype that is provided by your vRO appliance. Maven has an archetype plugin for this. With the JDK and Apache Maven installed, you should be able to build the basic starting point for a plugin with the following command:
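The command should look something like the following (the catalog URL below assumes a vRO 7.x appliance on the default port; substitute your own appliance FQDN):

mvn archetype:generate -DarchetypeCatalog=https://vro-appliance.example.com:8281/vco-repo/archetype-catalog.xml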

Running the above command on my version of Maven resulted in Maven not being able to find the archetype catalog defined in the command. Strange! It turns out that there is a problem with the latest versions of Maven and the archetype plugin for Maven (I have Maven version 3.3.9 and maven-archetype-plugin version 3.0.0).

To get around the problem, force Maven to use archetype plugin version 2.4 with the following command:
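Something like this does the trick, calling version 2.4 of the archetype plugin explicitly via its fully qualified goal (again, substitute your own appliance FQDN):

mvn org.apache.maven.plugins:maven-archetype-plugin:2.4:generate -DarchetypeCatalog=https://vro-appliance.example.com:8281/vco-repo/archetype-catalog.xml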

Last night I was searching for a domain name for a new personal project that I would like to kick off. Personally, I don't find searching for a new domain name a fun thing to do. I wanted to see if I could find a domain name made up of a combination of words, including terms like tech, cloud, river, stream, sphere and many others. As I started my search, I quickly found that the domain names I came up with were already taken. I then decided to look at synonyms for some of these terms. It was at this point that I noticed something peculiar about the word "cloud". This is not a serious post, just a bit of fun, so check this out:

Have you ever been in a meeting or attended a presentation where the presenter tried to articulate what the term "cloud computing" actually means, only to find that the next person describes it differently? I think everyone in their own minds has an idea of what is meant by cloud computing, but I'm also willing to bet that most people define it differently. Most IT professionals will instantly describe it as XaaS (Anything as a Service), which is correct. Or is it?

Anyway, the point I'm trying to make is that everyone has a definition of cloud computing, and who am I to say that your definition is less accurate than mine? But I still get the feeling that the term cloud computing is a bit loose in its definition within technology circles.

So last night I spotted something interesting. The definition of a real cloud is "a mass of water particles in the air", and that quite obviously has nothing in common with cloud computing. However, some of the synonyms for the term "cloud" include:

Notice how most, if not all, of those terms indicate poor visibility? To me, it's a funny coincidence that just as so many find it difficult to articulate cloud computing to others with clarity, the synonyms for the term "cloud" literally define "poor visibility" or a "lack of clarity". It's almost like a self-fulfilling prophecy.

Anyway, this finding actually derailed my domain name search, and I am yet to find that name that just fits with what I'm trying to achieve.

This blog post has the potential to be very controversial. I'm sure there will be many in the IT industry who will want to protest against a post like this, but there will also be others who will agree with it.

Disclaimer: Following a review of the first draft of this article, and after careful consideration, I opted to remove about three paragraphs of text. The removed paragraphs outlined some of the current buzzwords that drive some of us mad. They also included an extract of text from the website of a well-known international consultancy (and no, it's not the one I work for ;-) ) that, quite simply put, is a paragraph formed entirely out of BS buzzwords and phrases. You know, one of those monologues that consist of a lot of fancy buzzwords but don't tell you anything. I decided to remove the text as I don't want this article to look like an attack on any individuals or organisations. I didn't mention any names of persons or organisations in this article, nor did I have any particular names in mind when I was writing it. However, I am conscious of the fact that some people will be drawing conclusions. Therefore, any conclusions drawn by the reader are their own, and do not necessarily represent the truth, or align with my intent for this article. You might also be reading some parts of this article and think, "this guy is writing about my organisation!". Well, if you've been around the IT industry long enough, you will know that this issue is everywhere. No, it's not just your company. I'll place a bet that it is in every IT business out there.

The issue I outline here today is by no means new. It's been around since the dawn of time, when two distinct camps of individuals started meddling with computers. In the one camp, you'll find technically gifted people: those who really couldn't care less about pretty pictures on presentation slides, or about inventing new buzzwords to drive their personal image or to hide the fact that there are things about technology that they simply don't know or understand. Then you get the other camp, where we find many people from different walks of the "IT life". They are those who have an interest in technology but couldn't cut it as real techies through no fault of their own, or who, to climb the corporate ladder, had to let go of some of the technologies that they once knew. There's nothing wrong with that. The camp also has those who couldn't care less about technology, but who are good at selling things, including technology. There's nothing wrong with that either. We do need people who are good at selling things, including the solutions we as techies build. This post does not simply target either one of the two camps as a whole. No, it deals mainly with the behaviour of a few individuals in the latter camp. However, I do acknowledge that in some cases, there will be those from the technical side who are guilty of falling into the same traps as those I'm about to outline below.

So, what's my problem? Well, simply put, my problem is the invention of, and constant verbal and written diarrhoea of, meaningless sentences and paragraphs filled with buzzwords by some in the industry; in other words (and for a moment, excuse the language), bullshit. You might think it makes you sound intelligent and that it has the potential to launch you into a nice cushy job. Who knows, maybe it will (we will discuss how and why in a minute), but in reality, to the rest of us, it makes you look like, well, a bullshitter really, trying to brown-nose his/her way to the top. I guess, on a subconscious level, it's not so much the buzzwords that I have an issue with, but rather the reason or intent behind their use.

To try and explain why I have a problem with the level of BS in the industry, let us consider the following hypothetical scenario. Let's just for a moment think about why all your BS and smooth talk could get you into a nice cushy job. Let's assume that the person responsible for hiring you into your next job doesn't have a clue about what he/she is doing in the IT industry and is therefore just as much of a bullshitter as you are. In that case, you might just have a shot at landing an offer for a cushy job. I take exception to the saying "You can't bullshit a bullshitter." There are hiring managers in this and every other industry who used an awful lot of BS to get to where they are today. The reason? Because they don't understand their respective industries and, in the case of the information technology sector, the technology trends that drive it. So along you come with your fancy meaningless buzzwords and smooth tongue, which impresses the clueless hiring manager to the point where they offer you a cushy job. That would be all well and good if you were locked in your comfortable office, never to be seen or heard from again while collecting your paycheck. I'd be happy with that. Honestly. At least then you'd be silenced. But no! Because you've secured a job via the BS route, you now feel the need to continue further down the path of BS, at the very least to maintain equilibrium with your peers and directors/managers and their opinions and beliefs in your abilities. At this point, your BS spills into the public domain, littering the world with your buzzword-filled blog posts, tweets, LinkedIn posts and YouTube videos.

Like all companies, those in the IT industry are run by business-minded people, not technologists. Come on, let's not kid ourselves here. The days when an engineer or two or three could build a technology company out of a garage without strong business-minded people helping them are over. The fact is that you have to have business-minded people at the top of even the most innovative technology business to make it work in the current climate. However, you have to ensure that the technologists take care of and drive the message around the technology and innovation that the business is building, promoting and selling. I'm sure everyone will agree with that statement, right? However, what seems to happen now is that there are a lot of people who built their careers on who they know and how good they are at BS, rather than on what they understand and their ability to adapt and learn new things. That is of course not only the case in IT but also in every other industry. However, I can only speak from an IT point of view, because that is all I know. These people are now in charge of forming a "vision" and "strategy" for those businesses' technology innovation and sales. They think they know some things about the industry because they attend daily Webex meetings, speak to so-called industry experts and browse business and technology websites to try and spot the current trends and bandwagons to join. The problem is that if you've never really been able to understand the existing technologies, how are you going to form a vision and drive direction using new and emerging technologies? If you think you can get by simply by doing some of the Webex, meeting and internet browsing activities stated above, how do you know that the information you're looking at and basing your decisions on isn't just more BS? More BS generated by someone who potentially finds themselves in the same situation as you? Consider the source of your information. Failing to do so could result in a situation where the blind are leading the blind.

So, am I saying that if you don't understand existing technologies, you don't have a place at the top, driving strategy and direction? No, that's not at all what I'm saying. Neither am I blanket-labelling everyone non-technical who finds themselves in a leadership position a bullshitter. The vast majority of business leaders are not technical, and they are also not bullshitters, because they earned their positions on merit and, most importantly, on their proven leadership abilities. Here is where leadership comes in. There's obviously nothing wrong with being non-technical. However, you need to admit to at least yourself that you don't know what you're doing with the nuts and bolts of technology, and accept that it's not your job to know. You then need to identify and rely on those individuals in your organisation who do know current and emerging technologies and who understand how to make those technologies work together. That doesn't mean just picking any "technologist" who seems to have a bit of available time and a loose interest in the area where you're trying to build capability. IT is a vast field, and not all technologists will have technical ability in all technologies. That is simply not possible. Pick your candidates carefully.

You'll probably find that the people best suited for the job aren't the polished types in suits with straight-cut jackets, who know how to say a lot of words with a vast and impressive vocabulary, yet still say nothing. You'll probably find that the right person is the individual who couldn't care less about their image, but does care a great deal about technology and about helping their customers. Equip those guys with what they need to do their jobs and lead them well. If you nail that bit, you won't need to get on the buzzword bandwagon, as the results will speak for themselves.

Just because you are in a position of leadership doesn't mean people see you as a leader. One of my favourite speakers on leadership is John C. Maxwell, who says: "True leadership cannot be awarded, appointed, or assigned. It comes only from influence, and that cannot be mandated. It must be earned." I firmly believe that one of the biggest challenges the world is facing right now is a lack of leadership. Not just a lack of leadership in any particular industry, but in the world in general. The world needs leaders, not bullshitters. There will always be vacancies in leadership, so why not just step up, cut the BS and be the leader this world needs?

I also see a lot of mud-slinging on Twitter, with vendors and partners trying to prove that they or their solutions are better than others. There's a lot of name-calling going on, and that's not good for anyone. It's yet more BS. It puts your organisation in a bad light, and it creates friction between you and the ones you're putting down. I've been guilty of this myself, unfortunately allowing my frustrations to get the better of me and putting vendors and others down on Twitter for various reasons. I know what it's like to go back in hindsight and delete a bad Tweet or two. I'm a work in progress, and that is a start. Yes, sure, in private I could have unfavourable opinions of others in the industry, but who hasn't got those? It's how we deal with our personal views and opinions in public that determines how we move forward. I do think that if you honestly believe your product or service is better than the rest, then let the product or service do the talking.

I could have pulled buzzwords into the post and made fun of them, but there's no need. There are a lot of buzzwords out there, but it's not necessarily the buzzwords that I have a problem with, it's the motive and the reason for the buzzwords that gets my back up. I find great joy in seeing my colleagues and fellow techies in the industry demo some of the most exciting solutions that they've built for their customers. What makes me even happier is the no BS approach they take. It's pure skill in the use of technology, no BS. I wish the rest of the IT world would either catch on or just butt out.

Following on from my original vRetreat blog post, I thought it would make sense to report on some of the technical IT discussions that happened on the day. For this blog post, I am going to focus on the presentation by Darren Swift from Zerto.

So who and what is Zerto? Well, as stated on the "About Zerto" page on their website, "Zerto provides enterprise-class disaster recovery and business continuity software specifically for virtualised datacenters and cloud environments."

In simple terms, Zerto provides hypervisor-level replication and automation with no hypervisor vendor-specific lock-in. It provides continuous replication (no snapshots) of virtual machines between hypervisors and replaces traditional array-based replication solutions that were not built to deal with virtualised environments.

Zerto Hypervisor-based Replication is made up of two components. The Zerto Virtual Manager (ZVM), and the Virtual Replication Appliance (VRA).

The ZVM, as the name suggests, is the manager of the solution, and it manages replication for the entire vSphere domain, keeping track of application data replicating in real time. From a scalability point of view, the ZVM can accommodate the replication management of 5000 VMs per vCenter.

The VRA is a module deployed onto physical hypervisor hosts as a VM made up of 2 vCPUs and 4 GB of RAM. The VRA is responsible for the continuous replication of data from selected virtual machines. It compresses and sends the data to the remote site over WAN links.

Each VRA can handle 1500 virtual disks. This architecture provides excellent scalability. As the number of VMs to replicate increases, you can meet the additional replication requirements by deploying additional VRA modules.

The use cases for Zerto mentioned were:

Recovery from ransomware - Zerto provides file-level recovery for all files when used for backup/business continuity. Zerto also keeps a journal history, 24 hours by default; however, the recommendation is a 96-hour journal. Any files infected with ransomware can be restored from the journal within the specified time window. The journal should not be used as a replacement for regular backups; it is a tool to use when rapid recovery of data is needed.

Migration/replication between vSphere & Hyper-V and to Azure or AWS - a great use case. Using an appliance deployed in Azure (Zerto Cloud Appliance), you can migrate your on-premises workloads to Azure using Zerto's continuous replication. You can also use Azure as a DR target for your on-premises ESXi or Hyper-V VMs.

If your requirement is to migrate to Azure, a migration license is available at a quarter of the cost of an enterprise license. It should be noted that a migration license is for one-time use only.

If you are interested in what Zerto has to offer, especially in the ever-growing hybrid cloud space, I recommend that you head over to zerto.com and request a trial license.

I was honoured to have been invited to attend the inaugural vRetreat event in the UK. The event, arranged by Red-Track Ltd, took place at the Porsche Experience Centre at Silverstone on 27 January 2017, and was attended by several well-known bloggers and virtualisation community members. The day was made possible by Zerto, Veeam and Cohesity, who presented on their respective products and upcoming capabilities within their product suites. This provided ample opportunity for those present to discuss several product features and their possible use cases in the world of hybrid and public cloud infrastructure.

Now, normally I, like many, would see vendor presentations mainly as a necessary component to make the event possible, as someone has to sponsor the event in order to pay for the facilities to host such events. Even though that was still very much the case here, there was a difference this time. Rather than a vendor representative walking into a room and presenting, the vendor representatives were very much “part of the audience”. This enabled the presenters and attendees to get to know each other over the course of the event, thus creating an environment where discussions were more open and frank.

As the vRetreat was attended by some who had to travel quite a distance, accommodation was arranged for all attendees at the Silverstone Golf Club, a short distance away from the Porsche Experience Centre. This further enhanced the vRetreat experience as a whole, as all the attendees and some vendors got to socialise prior to the event and “break the ice”.

The day started with us arriving at the Porsche Experience Centre, a purpose-built facility opened by Porsche to give the public a chance to experience the latest Porsche cars on a purpose-built set of racetracks, with an assigned Porsche Driving Consultant (PDC) in the passenger seat. The Porsche Experience Centre is located next to the Hangar Straight of one of the world's best-known race tracks, Silverstone. Silverstone has been the host track of many motorsport championship races, including the British Grand Prix leg of the FIA Formula One World Championship. It is also the racetrack where the F1 Drivers' Championship first started back in 1950. As a massive F1 fan of more than 20 years, just driving through the main entrance gates of Silverstone felt like walking into a cathedral of motorsport.

Arriving at the Porsche Experience Centre, we were greeted by the centre's friendly staff, and after signing in and obtaining our passes, we sat down for some breakfast, which was prepared and well presented by the catering team at the centre. In addition to being a driving facility, the Porsche Experience Centre is also used by many businesses for corporate events, with conference rooms and catering available.

After breakfast, we took a few minutes to look around some of the new Porsche cars that were parked on the ground floor, which has a showroom kind of feel to it. What made this “showroom” experience even better was that all the cars are open, so you can open the doors and sit in the cars, unlike many car showrooms where all cars on display are locked, and for “eyes only”.

Following on from a lot of “umming and ahing” over some of the leather stitching on the seats and dashboards of some very lovely cars, we made our way to the conference room assigned to us. Although all of the attendees were massive car fans, we are still IT techies at heart, so we didn't find it too difficult to temporarily forget about the high-octane action happening on the tracks outside of the four conference room walls and concentrate on some 0's and 1's talk.

First up to present was Darren Swift from Zerto. Although I've not had many dealings with Zerto in the past, some of the talking points around their product did spark some curiosity from my side, especially on the public/hybrid cloud replication front. I'll cover Zerto in more detail in another post.

Following on from Darren, and after a short “break” which consisted of some more “umming and ahing” over a £140,000 Porsche Panamera parked in the showroom, we returned to listen to a presentation by Michael Cade from Veeam Software. Having used Veeam for many years (since Veeam FastSCP back in 2007), I was more familiar with the software in this presentation, but again, we had some good discussion about Veeam Agent as well as the Veeam solution for backing up Office 365.

Again, following on from Michael’s presentation and yet another short break, this time drooling over the interior of the new Porsche 718 Cayman S and Boxster S models, we returned for the last techie chat session and presentation of the day, this time by Ezat Dayeh from Cohesity. As I’ve not had any dealings with Cohesity myself prior to this, it turned out to be a very informative presentation, and I’ll be sure to look into Cohesity in due course. The storage analytics side of the product looks to have a lot of potential. All of the presentations will be covered in greater detail by either myself or the other attendees present at vRetreat.

So, done with the techie stuff, we made our way to the restaurant in the Porsche Experience Centre, where a fine three-course lunch was provided, again by the excellent catering staff. Following on from that, we made our way back to the conference room, where a Porsche Driving Consultant gave us a safety briefing. He explained how it was all going to work, the cars we'd be driving, the tracks available to us and what they are used for, and so on. This was also a time for us to ask any questions we had. One of the questions asked by one of the attendees was “How long will we actually be driving for?”, to which the Porsche man replied “About 180 minutes”. By this time, you could feel the excitement building as 10 men turned into the 10 boys they actually are. You know the saying: boys never grow up, our toys just get more expensive as we get older.

We made our way outside to the cars, where each of us was assigned a Porsche Driving Consultant (PDC) who would look after us for the remainder of the day on the track. My driving consultant explained that we would have about 90 minutes in the first car, a 718 Cayman S, followed by a short break, and then about another 90 minutes in a 911 991.2. Now, as someone who is actively looking to buy a 911, hearing that in about 90 minutes' time I'd be behind the wheel of a new 911 on a racetrack, with a licence to drive it as hard as I can, was like music to my ears. It doesn't really get much better than that!

Driving the 718 Cayman S

Cayman S

The Porsche 718 Cayman S is the coupe version of the 718 family, with the Boxster S being the convertible brother of the Cayman S; it therefore also carries the same 718 model number. Many believe that the Boxster and Cayman are a “poor man's 911”, but you could not be more wrong if that's your opinion. The 911 and 718 are two completely different cars, with different engine layouts, weight distribution and handling characteristics. The 718 S model is a mid-engined two-seater sports car with a 2.5L turbocharged powerplant capable of delivering 350 hp (257 kW) and maximum torque of 420 Nm. The manual Cayman S gets from 0-62 mph in 4.6 seconds, and the “automatic-manual” PDK version does the 0-62 mph sprint in just 4.2 seconds with launch control.

The Cayman S I got to drive was fitted with the PDK transmission, which was one of the very many things that completely shattered my preconceived opinions about everything I thought I knew about cars (and I'm a huge car fan). Up until that day, I was almost convinced that a sports car has to come with a manual transmission. Well, the Cayman S and its PDK box very quickly got me to change my opinion about that. You see, the PDK dual-clutch transmission has just that: two clutches. Here's my attempt at a simple explanation of how the two clutches are used in the PDK transmission:

The first clutch is responsible for 1st, 3rd, 5th, 7th and reverse, and the second clutch is responsible for 2nd, 4th and 6th. With the car in 1st gear, clutch 1 is engaged and driving the car forward; in the meantime, clutch 2 has already pre-selected 2nd gear. When the gearbox changes from 1st to 2nd, it releases the drive from clutch 1 and engages clutch 2, simultaneously, in one mechanical move, which results in a gear change with no loss of power or drive. It is astonishingly quick and smooth. There is simply no way that a manual transmission can make a shift so quickly and smoothly. It should be noted that the response from the transmission when calling for another gear by pressing the paddle on the back of the steering wheel is instant. It's almost like playing a video game. Just, it's not!

Anyway, back to the driving. Pulling out of the car park and heading towards one of the race tracks, the Porsche Driving Consultant (PDC) directed us to the figure-of-8 track, a large square surface with a figure-of-8 race track painted on it. This is an ideal place to learn the car and how the weight transfers between acceleration, braking and turning. We spent a short while there and soon made our way to track 1, where I really got to drive the car fast around a twisty track designed with a lot of different types of slow and fast turns.

We then made our way over to the straights. There we got to test acceleration and braking: accelerating from 0-60 mph as fast as possible and then hitting the brakes as hard as possible. The car has to slow down to a complete stop from 60-0 mph in half the time it took to accelerate from 0-60 mph. In other words, if the car takes 4 seconds to go from 0-60 mph, it has to be able to get back to 0 from 60 mph in 2 seconds. It is a rather violent experience. This exercise showcases the car's stopping ability as well as its stability under heavy braking. Some exercises also focus on braking hard and steering the car into another lane in the process, thus teaching you precision car control under heavy braking to avoid obstacles in the road.

“The straights” is also the place where we got to play with the car’s launch control feature (PDK only). At one stage I was sitting on the start line and I saw Simon Gallagher pull away like a rocket with launch control next to us. He had a smile on his face like a naughty boy and this captured the whole day in a few moments for me. The PDC then said: “Yes, if you pull away like that and you don’t have a smile on your face, then you’re not alive”. I agree. I don’t think that it’s possible to get to play with cars like that and drive them to their maximum on a track without managing to crack a smile.

Following on from the straights, we did some more fast racetrack driving, this time on track 2.

After about 90 minutes it was time to park the car and grab a coffee before picking up the keys to a 911 (991.2) Carrera 4S.

Driving the 911 Carrera 4S

991.2 911 Carrera 4S

The 911 is the icon in the range. If you're reading this and you don't know what a Porsche 911 Carrera is, then I assume that you're not a car fan at all, or that you've not been on planet Earth for the last 40+ years.

The new 911 991.2 is fitted with a 3.0L twin-turbo engine, which is quite different from the traditional normally aspirated 911 engines. Yet another misconception I had before arriving there on the day was that somehow the new 3.0L turbo engines are somewhat inferior to the 3.8L normally aspirated engines in previous-generation “Carrera S” models. I expected a dampened sound and a laggy turbo experience in the low rev ranges (like you get with most turbocharged engines), with a sudden burst of power when the turbo kicked in. Again, I was wrong. Very wrong.

The 3.0L twin-turbo engine (fitted with the optional sports exhaust) is still very loud; sure, maybe a tiny bit quieter than the normally aspirated engine to a finely tuned ear, but take nothing away from her. When you put your foot down, she roars with that distinct 911 sound behind you. As for power delivery: smooth! From the bottom ranges to max RPM, it doesn't feel like a turbo. It still feels like raw, smooth power all the way through the rev range. Every time I accelerated hard down the back straight, I'd say to the PDC: “Gosh, this engine is good!”

As for grip, the 911 is in a league of its own. Even on that damp and sometimes even wet track, the car just obeys your every input. Amazing grip under acceleration, braking and turning in at high speed.

In addition to all of the driving activities undertaken in the 718 Cayman S, in the 911 we did some work on the skid pans. Basically, these are slippery areas that are constantly being sprayed with water. They provide the perfect environment to learn how to correct the car when you encounter a spin. The PDC, again very professional, would explain how the car will react and what you need to do in order to catch the spin and correct it. It's harder than it sounds, but after very many runs on all of the skid pans, I managed to keep the car pointing in the right direction. I did almost lose complete control of it only once, but still managed to keep it from swapping ends.

At the end of the 90 minutes in the 911, I was ready to make my way to a Porsche dealership and place my order for a new 911. But the sensible, realistic me kicked in and stopped me from doing that. Needless to say, if I had a spare £100,000 sitting around, I'd know where to go and spend it.

I’m still looking to buy a 911 Carrera S 997 Generation 2 (2009 - 2012) probably with PDK. I have been for quite a while now. It’s all about finding the right one at the right time.

All in all, a big thumbs up for the vRetreat day. Great discussions around the presentations with some of the brightest minds in the community, great food, amazing facilities, and very friendly and professional staff all round at the Porsche Experience Centre. And yes, if you have £300 sitting around and you don't know what to do with it, buy yourself a 90-minute 911 driving experience at the Porsche Experience Centre, Silverstone. You won't be disappointed, and you'll be smiling for days on end. I'm still smiling days later!