REST API

In today's data centers, it is not uncommon to find servers with only 2 x 10GbE network interfaces, especially with the rise of Hyper-Converged Infrastructure over the last several years. For customers looking to deploy NSX-T with ESXi, there is an important physical network constraint to be aware of, which is only briefly mentioned in the NSX-T documentation here.

For example, your hypervisor host has two physical links that are up: vmnic0 and vmnic1. Suppose vmnic0 is used for management and storage networks, while vmnic1 is unused. This would mean that vmnic1 can be used as an NSX-T uplink, but vmnic0 cannot. To do link teaming, you must have two unused physical links available, such as vmnic1 and vmnic2.

As shown in the diagram below, an ESXi host with only two physical NICs cannot provide complete network redundancy, because each pNIC can only be associated with a single switch (VSS/VDS or the new N-VDS) and pNICs cannot be shared across switches.

For customers, this means that you need to allocate a minimum of four pNICs to provide redundancy for both overlay traffic and non-overlay VMkernel traffic such as Management, vMotion, vSAN, etc. This is easier said than done, as not all hardware platforms can be easily expanded, and even when they can, there is still a significant cost in expanding the physical network footprint (switch ports, cabling, etc.).

UPDATE (06/12/18) - As of NSX-T 2.2, which was recently released, there is now a UI in NSX-T Manager for managing the migration of VMkernel interfaces to the N-VDS. For automation purposes, you may still find this article useful, but you now have the option of using the UI.

During the VMware Fusion 2017 Tech Preview, I was experimenting with the new Fusion REST API and had built a small prototype PowerShell module as a way to learn how the API works. This allowed me to provide valuable feedback to the Fusion Engineering team on improving the REST API UX. I was pleased to see that the majority of the feedback was indeed implemented for Fusion 10, which GA'ed a few weeks back.

Given how useful the PowerShell module was for my own use, I figured I would also publish it for others who might be interested in automating VM management using the new Fusion REST API, especially those with a PowerShell/PowerCLI background. Another nice thing about the module is that it can run on macOS/Linux via PowerShell Core or on Windows using full-blown PowerShell. I have been slowly tweaking the module to include the updated REST API changes, and I am pleased to announce that the VMware.Hosted PowerShell module, which supports the new Fusion 10 REST API, is now available!

The module includes the following 14 functions:

Connect-HostedServer

Disconnect-HostedServer

Get-HostedNetworks

Get-HostedVM

Get-HostedVMNic

Get-HostedVMSharedFolder

New-HostedVM

New-HostedVMSharedFolder

Remove-HostedVM

Remove-HostedVMSharedFolder

Resume-HostedVM

Start-HostedVM

Stop-HostedVM

Suspend-HostedVM

If you have ever used PowerCLI before, these functions should feel very familiar. We have the basic Connect/Disconnect-HostedServer functions, which set a variable called $DefaultHostedServer. This variable contains some basic information about the Fusion API endpoint as well as the base64-encoded credentials that are required when connecting to the new Fusion API. Below are a few examples using the new Fusion module. They are pretty basic, and I have only implemented a subset of the Fusion REST API, so any community contributions are most welcome!
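For instance, here is a minimal sketch of connecting to the local vmrest endpoint and performing a couple of basic operations; the server address, credentials and parameter names are placeholders/assumptions on my part, so adjust them for your own setup (Get-Help on each function will show the actual parameters):

Import-Module VMware.Hosted

# Connect to the local vmrest endpoint; this populates the $DefaultHostedServer variable
Connect-HostedServer -Server "127.0.0.1" -Username "admin" -Password "VMware1!"

# List all VMs known to Fusion
Get-HostedVM

# Power on a specific VM (the -Id parameter name is an assumption; check Get-Help Start-HostedVM)
Start-HostedVM -Id "<VM-ID>"

# Clean up the session
Disconnect-HostedServer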

In case you have not heard the news, the VMware Fusion and Workstation team just released their 2017 Tech Preview releases, which you can read more about here and here. A couple of years back, VMware released a slimmed-down desktop hypervisor based on VMware Fusion called AppCatalyst, which was optimized for developers wanting to run Docker containers. Although the feedback for AppCatalyst was positive, the large majority of customers preferred to see the AppCatalyst-specific features, such as the RESTful API, included natively within Fusion rather than in a separate product.

Although it could not be said at the time, the feedback was heard loud and clear, and the plan was to pull the AppCatalyst REST API directly into Fusion. With the Fusion 2017 Tech Preview, you can now interact with your virtual machines running on Fusion using the new Fusion REST API, which also includes some additional capabilities that were not there with the AppCatalyst REST APIs, such as network and port forwarding management.

UPDATE (09/27/17) - VMware Fusion 10 has just officially GA'ed, and there have been a number of updates and enhancements since the Tech Preview. From an Automation/API standpoint, there are several major updates that I would like to call out.

First, there are several new command-line options for the vmrest utility, including support for both HTTP and HTTPS API endpoints. Credentials are also now supported, so you can set up a shared username/password and ensure that only authorized folks can log in to the API, and lastly, the default port is now configurable. Along with these widely requested features from the Tech Preview, there is also a nice debugging option while using the Fusion UI for troubleshooting purposes.

Secondly, the Fusion Swagger REST API docs have received a total revamp in terms of organization and cleaned-up documentation. Below is a screenshot of the Swagger interface for the GA version of Fusion 10, which should make it even easier to consume the REST API.

Getting Started

Step 1 - Once you have installed the Fusion 2017 TP release, you will need to start the REST API endpoint, which is provided by /Applications/VMware Fusion Tech Preview.app/Contents/Public/vmrest. You can just type vmrest and it should start automatically, or if you prefer to run it in the background, simply append & to the command.

Over the weekend, while taking a break from putting together some furniture as it was time for my daughter's nap, I got the chance to explore and create a new Alexa Skill which integrates with a few of VMware's APIs. This is something I have wanted to try out for some time but had not had the spare time for. I had even purchased an Amazon Echo Dot, but it is currently just being used as a music player for the family. A couple of weeks back, I saw an awesome blog post from Cody De Arkland where he demonstrates how to easily integrate the new vCenter Server 6.5 REST APIs into an Alexa Skill which can then be consumed using an Amazon Echo device.

Cody's write-up was fantastic, and I was able to get everything up and running in about 20-25 minutes with a bit of minor trial and error. It was great to see how easy it was for a non-developer like Cody to consume the new vCenter Server REST APIs, which include basic VM management as well as access to the VMware Appliance Management Interface, or VAMI for short. Given Cody had already done the hard work to create the initial Alexa integration, I figured it might be cool to extend his work and introduce Alexa to a few more of VMware's APIs, including the traditional vSphere API (SOAP) and the new vSAN Management API.

UPDATE (06/15/17) - Just added support for PowerCLI. It was a little tricky, as the Flask app is written in Python, so the poor man's workaround was to call PowerShell/PowerCLI using subprocess.

Since Cody's integration module was written using Python, it was pretty simple to add support for both pyvmomi (the vSphere SDK for Python) and the vSAN Management SDK. To install pyvmomi, you can simply run

pip3 install pyvmomi

and for installing the vSAN Management SDK, have a look at this blog post here.

Here is a quick video that I had recorded which demonstrates the use of both the vSphere API and vSAN Management API using my Amazon Echo.

You can find all my changes in this forked repo lamw/alexavsphereskill, and make sure to follow Cody's blog post here for instructions on how to get set up. For those wondering if Cody will be publishing an Alexa Skill for general consumption, I know he is working on some awesome updates to make it even easier to consume. Here is a sneak peek at just some of the recent updates that Cody is working on ...

One thing to note, which I was not aware of until Cody mentioned it, is that once your Alexa Skill is built, you can access it directly from your own personal Amazon Echo without needing to publish it. You need to activate the Alexa Skill by saying "Alexa, start [APP-NAME]", where [APP-NAME] is the name used in the "Invocation Name" field when setting up your Alexa Skill, as shown in the screenshot below. I should also mention that if you decide to change the Alexa Skill name itself, which I had initially done by calling it "vGhetto Control", make sure you update the Flask app name in __init__.py to the same name (spaces are converted to underscores) or you will run into issues.

Similar to the vCenter Server Appliance (VCSA) 6.0 release, the new VCSA 6.5 is also composed of multiple virtual machine disks (VMDKs). Each VMDK maps to a specific function and OS partition within the VCSA. There are now a total of 12 VMDKs, two of which are new in vSphere 6.5: vSphere Update Manager (VUM) and Image Builder. The following table provides a breakdown of the VMDKs in VCSA 6.5 compared to VCSA 6.0:

Disk   | 6.0 Size                       | 6.5 Size | Purpose                        | Mount Point
-------|--------------------------------|----------|--------------------------------|----------------------------------
VMDK1  | 12GB                           | 12GB     | / and Boot                     | / and Boot
VMDK2  | 1.2GB                          | 1.8GB    | VCSA's RPM packages            | N/A (not mounted after install)
VMDK3  | 25GB                           | 25GB     | Swap                           | SWAP
VMDK4  | 25GB                           | 25GB     | Core                           | /storage/core
VMDK5  | 10GB                           | 10GB     | Log                            | /storage/log
VMDK6  | 10GB                           | 10GB     | DB                             | /storage/db
VMDK7  | 5GB                            | 15GB     | DBLog                          | /storage/dblog
VMDK8  | 10GB                           | 10GB     | SEAT (Stats, Events and Tasks) | /storage/seat
VMDK9  | 1GB                            | 1GB      | Net Dumper                     | /storage/netdump
VMDK10 | 10GB                           | 10GB     | Auto Deploy                    | /storage/autodeploy
VMDK11 | N/A (previously InvSrvc, 5GB)  | 10GB     | Image Builder                  | /storage/imagebuilder
VMDK12 | N/A                            | 100GB    | Update Manager                 | /storage/updatemgr

In addition to the VMDK/partition changes, there are a couple of enhancements for when you need to increase disk capacity in the VCSA. Just like in VCSA 6.0, you can still hot-extend any one of the VMDKs while the system is running.

The first change is that the old vpxd_servicecfg command, which was used to expand the logical volume(s) and make the new storage capacity available to the OS/application, has been replaced with the following command: /usr/lib/applmgmt/support/scripts/autogrow.sh

The final difference is that in previous releases, you could only resize the Embedded VCSA or External VCSA node, but not the Platform Services Controller (PSC) node. In 6.5, this has changed, and you can apply this method to any of the VCSA nodes. Thanks to Blair for reminding me of this one!

Let's walk through an example of increasing the Net Dumper partition (VMDK9) and exercising this new VAMI API.

Step 1 - Log in to the VCSA using SSH and run a quick "df -h" to check the current size of your Net Dumper partition, which by default will be 1GB, as seen in the screenshot below.

Step 2 - Next, we will increase the VMDK to 5GB. In this example, I am using the vSphere Web Client, but if you want to completely automate this process end-to-end, you can use the vSphere API/PowerCLI to perform this operation, as shown in the sketch below.
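If you did want to script this particular step, a rough PowerCLI sketch might look like the following; the vCenter hostname, credentials and VM name are placeholders for this example:

# Connect to vCenter Server (hostname and credentials are placeholders)
Connect-VIServer -Server "vcsa.example.com" -User "administrator@vsphere.local" -Password "VMware1!"

# VMDK9 (Net Dumper) shows up as "Hard disk 9" on the VCSA VM; grow it to 5GB
Get-VM -Name "VCSA-6.5" | Get-HardDisk -Name "Hard disk 9" | Set-HardDisk -CapacityGB 5 -Confirm:$false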

Step 3 - To quickly try out the new VAMI API, we will use the new vSphere API Explorer that is included in VCSA 6.5. Simply open a web browser and enter the following URL: https://[VCSA-HOSTNAME]/apiexplorer. Select the "appliance" API, then click on the login button and enter your vCenter Server credentials.

Step 4 - Scroll down to the POST /appliance/system/storage/resize operation and expand it. To call this API, just click on the "Try it out" button. If the operation completes successfully, you should see a 200 response, as shown in the screenshot below.

Steps 3 and 4 can also be performed directly through PowerCLI using the new CIS cmdlets (Connect-CisServer & Get-CisService), which expose the new VAMI APIs. Below is a quick snippet that performs the exact same operation:
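Since the snippet itself is not reproduced here, below is a rough equivalent of what it looks like; the CIS service name is my assumption, mapped from the /appliance/system/storage/resize endpoint above, and the hostname/credentials are placeholders:

# Connect to the vCenter CIS endpoint which backs the VAMI APIs (hostname/credentials are placeholders)
Connect-CisServer -Server "vcsa.example.com" -User "administrator@vsphere.local" -Password "VMware1!"

# Retrieve the appliance storage service and invoke the resize operation
# (the service identifier is an assumption derived from the REST endpoint above)
$storageService = Get-CisService -Name "com.vmware.appliance.system.storage"
$storageService.resize()

# Disconnect when done
Disconnect-CisServer -Confirm:$false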

There were a few questions recently about the required syntax for specific VMware AppCatalyst operations when consuming the REST API using cURL. I figured I would put together a quick "cheatsheet" that contains cURL examples for the entire VMware AppCatalyst API, which would not only help me in the future but could also benefit others. Like many, I learn by example, and having explicit samples to start with is a great way to get familiar with a new technology or product. If you are new to VMware AppCatalyst and would like a quick rundown on how to get started, be sure to check out my getting started article here for more details.

While going through the AppCatalyst API, I did find a couple of API operations that had some inconsistencies and did not strictly adhere to the JSON format. Thanks to Roman Tarnvski for providing the solution. I am hopeful that these issues will be resolved in a future update of AppCatalyst, as I do like the ease of use of the API. For the majority of the API, the self-documentation via the AppCatalyst API Explorer is accurate, as you can see from the screenshot below.

Before you can interact with the AppCatalyst REST API, you will need to start the AppCatalyst Daemon by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst-daemon

Once the AppCatalyst Daemon is running, you can open a new terminal and start working with the REST API via cURL or any other tool of choice.

1. Create a new VM from the default Photon OS VM template:

You technically only need to specify the unique "id" property, but you can also give the VM a display name by using the "name" property.
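Since the original cURL samples are not reproduced here, the sketch below shows the equivalent call from PowerShell using Invoke-RestMethod; the /api/vms path and default port of 8080 are assumptions on my part, so verify them against the AppCatalyst API Explorer:

# Create a new VM from the default Photon OS template
# (the endpoint path and port are assumptions; confirm them in the API Explorer)
$body = @{ id = "photon-vm1"; name = "My Photon VM" } | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost:8080/api/vms" -Method Post -Body $body -ContentType "application/json"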

To retrieve a specific VM, you will need to power it on before this operation is allowed. I did find it strange that this was the case, but perhaps this could be enhanced in the future to remove the requirement, especially if you want to pull out details such as the "tag" property.

The "guestPath" property is not an absolute path within the guestOS, but rather a logical name. For more details about shared folders in AppCatalyst, please have a look at this article here. Currently there is only one "flags" property with the value of 4 which enables read/write, please refer to the article in the link above for more details about folder sharing in AppCatalyst.
