Parsing JSON with the help of structs when the structure of the data is known

In this case we got a valid response, which means we now know the structure of the data. My preferred approach is to create a struct which represents the data within Go. For this it would be great to have documentation or an SDK which defines a representation of all responses. With AOS 5.0 there is a way to use Swagger to generate this.

But today we can just create our own struct based on the response. Copying and pasting the response into JSON-to-Go will return this struct.

We can just use this struct and parse the response into it. You may think: “This will end up with a lot of structs and they are just huge.” This could be the point where you have a look at the generic interface approach.

I created a simple example which extends the basic authentication part and adds the following:

Making use of json.Unmarshal to parse the data into the variable clustersGetResp and print a few values to stdout.


// Parse/Unmarshal JSON into the struct
if err := json.Unmarshal(bodyText, &clustersGetResp); err != nil {
    log.Fatal(err)
}

// Print the parsed data to stdout
fmt.Println("ID: " + clustersGetResp.Entities[0].ID)
fmt.Println("Name: " + clustersGetResp.Entities[0].Name)

Retrieve VM config (vCPU, Mem, IP Address….)

In the last example we made use of parsing JSON into structs. I prefer this way, but there is a “simpler way” to achieve the same: making use of the empty interface. The aim of this example is to retrieve some information about a specific VM.

So we would like to get the information about a VM called “docker-mac”. BTW: I am using this VM to demonstrate the integration of Nutanix and Docker.

The first step would be to retrieve the UUID of the VM because the Nutanix REST API makes use of a unique identifier.

Let’s get a list of all VMs and extract the UUID of the VM with the name “docker-mac”.


...
req, _ = http.NewRequest("GET", v2_0(NutanixHost)+"/vms", nil)
...

// create interface
var f interface{}
var uuid string

// Unmarshal into interface f
if err2 := json.Unmarshal(bodyText, &f); err2 != nil {
    panic(err2)
}

// type assertion to access f's underlying map[string]interface{}
m := f.(map[string]interface{})

// the response will include "entities" which holds the data of the VMs we are searching for
e := m["entities"].([]interface{})

// we can iterate through the slice and search for the name of the VM
for k := range e {
    t := e[k].(map[string]interface{})

    if t["name"] == "docker-mac" {
        uuid = t["uuid"].(string)
    }
}

So we request a list of all VMs first and then search by name to get the UUID (Universally Unique Identifier) of the VM called “docker-mac”.

Now we are able to request more info about this specific VM by appending the UUID to the request URL, where bd74362d-2cb0-4d06-a95b-dfb7403c5a01 is the UUID in this example.

A special case! Getting the IP address!

You may have noticed that there is no key/value which shows the IP address of the VM. First of all, there are two different kinds of IPs for a VM. If you are using AHV, then you are able to retrieve an IP for a VM from IP pools which are managed by AHV itself. The other, maybe more common, way is that the VM gets an IP via DHCP or it is statically set, all without the “knowledge” of Nutanix.

AHV managed IP

To retrieve the IP when it is managed via the AHV IP pool, the way to retrieve the IP address is as follows: just add the include_vm_nic_config parameter to the GET request and the NIC details will be included.
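A sketch of adding that parameter with `url.Values` (the base URL is a placeholder for your v2.0 entry point):

```go
package main

import (
	"fmt"
	"net/http"
)

// withNicConfig adds the include_vm_nic_config parameter to a /vms GET so
// that NIC details (including "requested_ip_address") are part of the response.
func withNicConfig(base string) (*http.Request, error) {
	req, err := http.NewRequest("GET", base+"/vms", nil)
	if err != nil {
		return nil, err
	}
	q := req.URL.Query()
	q.Set("include_vm_nic_config", "true")
	req.URL.RawQuery = q.Encode()
	return req, nil
}

func main() {
	req, _ := withNicConfig("https://192.168.178.130:9440/api/nutanix/v2.0")
	fmt.Println(req.URL.String())
}
```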

Keep an eye on the key “requested_ip_address”

The implementation is a little bit ugly because of the interface/map type assertions, but it can be done.

// the response will include "entities" which holds the data of the VMs we are searching for
e := m["entities"].([]interface{})

// we can iterate through the slice and search for the IP of the VM
for k := range e {
    t := e[k].(map[string]interface{})
    fmt.Println(t["name"])

    if t["vm_nics"] != nil {
        n := t["vm_nics"].([]interface{})

        for k2 := range n {
            t2 := n[k2].(map[string]interface{})
            fmt.Println(t2["requested_ip_address"])
        }
    }
}

IP Address of the VM if not managed by AHV

At the moment v2 of the API is not able to provide the IP, so there is a fallback to API v1, which does include the IP. A simple GET to https://192.168.178.130:9440/PrismGateway/services/rest/v1/vms/ will return all VMs and their IP addresses.

The implementation is pretty simple because the info is directly at the top level of the VM object.


// Defines the HTTP request
// send a GET to the Nutanix API v1 and receive the details of all VMs
req, _ = http.NewRequest("GET", v1_0(NutanixHost)+"/vms/", nil)
...

// Unmarshal into interface f
if err2 := json.Unmarshal(bodyText, &f); err2 != nil {
    panic(err2)
}

m = f.(map[string]interface{})

// the response will include "entities" which holds the data of the VMs we are searching for

The first step when interacting with the Nutanix REST API with Go is to authenticate against the API. To do so you need to know that Nutanix makes use of user authentication. There is a role concept for users, which means a user can have different rights when connecting.

There are three roles a user can be assigned to:

Viewer: This role allows a user to view information only. It does not provide permission to perform any administrative tasks.

Cluster Admin: This role allows a user to view information and perform any administrative task (but not create or modify user accounts).

User Admin: This role allows the user to view information, perform any administrative task, and create or modify user accounts.

The first user in PRISM (GUI) which is directly related to API authentication is “admin” with the default password “admin”, which has to be changed on the first connect to PRISM. I prefer to set the admin password to the default password “nutanix/4u” in my test environments, but this is up to you. The admin user has all three roles assigned.

Using Google Chrome developer tools to learn how PRISM authentication works (optional and before AOS 5.0)

At the moment the documentation of the REST API lacks some easy examples and explanations how the authentication should be done. I started to learn this by making use of the Google Chrome developer tools when I am connecting to Nutanix via the Web GUI.

The first step is to open Google Chrome, show the developer tools and switch to the Network tab. Clear all and start the recording. Type in the CVM or cluster IP/DNS to open the PRISM GUI.

It looks like the client tries to check if we are already connected, which in this case fails because the response code shows “401”, which means we are not authorized to access this URL at the moment.

Let’s type in the user and password now and stop the recording once we are successfully connected to the GUI. You should find a POST to https://192.168.178.130:9440/PrismGateway/j_spring_security_check. There are three interesting parts:

It is posting the username and password in Form data as j_username and j_password

It is receiving the Set-Cookie: … in the Response header which means a cookie can be used for all subsequent http methods

It is receiving the Location: https://192.168.178.130:9440/PrismGateway/nmb/loginsuccess which is like a redirection in this case

The https://192.168.178.130:9440/PrismGateway/nmb/loginsuccess GET will be requested by the client with the cookie, which seems to check if it works. A status code of “200” and a response of “Success” means cookie authentication is working. Then a request to https://192.168.178.130:9440/PrismGateway/services/rest/v1/users/session_info follows, which gets some user session info like userDTO.

I believe the rest can be ignored for our task now. We learned this basic workflow for authentication.

Request session_info to check if we are already authenticated

Request pre_login_details, which may be used to react to different API versions

Send username and password with the request

Set or receive a cookie for subsequent http methods

Check if we are connected via loginsuccess
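The form-based login PRISM itself performs could be sketched in Go like this (a sketch under the workflow above; the URL path is taken from the recorded POST, while `baseURL`, user and password are placeholders):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/cookiejar"
	"net/url"
)

// login POSTs j_username and j_password as form data to
// j_spring_security_check and lets a cookie jar keep the session cookie
// for all subsequent requests made with the returned client.
func login(baseURL, user, pass string) (*http.Client, error) {
	jar, _ := cookiejar.New(nil)
	client := &http.Client{Jar: jar}

	form := url.Values{}
	form.Set("j_username", user)
	form.Set("j_password", pass)

	resp, err := client.PostForm(baseURL+"/PrismGateway/j_spring_security_check", form)
	if err != nil {
		return nil, err
	}
	resp.Body.Close()
	if resp.StatusCode != 200 {
		return nil, fmt.Errorf("login failed with status %d", resp.StatusCode)
	}
	return client, nil
}
```

A subsequent GET through the returned client (for example against /PrismGateway/nmb/loginsuccess) would then carry the session cookie automatically.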

Nutanix REST API Authentication with GO

There are two methods to authenticate to the Nutanix REST API.

1. Basic Authentication:
The user provides user-id and password every time a request is sent, as the auth header.

2. Session Authentication:
The user credentials are stored in a cookie.

GO Authenticate via “Basic Authentication”

I wrote an example which shows how basic authentication works. The general workflow is simple: EVERY HTTP method sends the username and password in an encoded fashion. The Nutanix REST API requires base64 encoding, which is provided by the “encoding/base64” package.

I created the func EncodeCredentials, which encodes the input parameters username and password as required.

Go

// EncodeCredentials encodes the username and password with base64
// encoding, which is required for HTTP basic authentication
func EncodeCredentials(username string, password string) string {
    return base64.StdEncoding.EncodeToString([]byte(username + ":" + password))
}

Before we send the request we make sure username and password are included in the request header. The key we need to set is “Authorization” with a value of the string “Basic ” + encodedString.

Go

// before the request is sent, set the HTTP header key "Authorization"
// with the value "Basic " + the base64-encoded credentials
req.Header.Set("Authorization", "Basic "+EncodeCredentials(username, password))

The request can be sent and we are able to handle the response. In this example I am checking for some response codes, but it is up to you to implement more.


resp, err := httpClient.Do(req)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()

// Status code 401 Unauthorized means user+password was not valid
// https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
if resp.StatusCode == 401 {
    log.Fatal("Username or password not valid for host: " + NutanixHost)
}

// Response status code 200 should be sent if the credentials are valid
// all others could be ignored or handled if needed
if resp.StatusCode != 200 {
    log.Fatal("Connection to host: " + NutanixHost + " not possible")
}

The last part prints the response body to give you feedback and the received info about the user.

Go


// read the data from resp.Body into htmlData
htmlData, err := ioutil.ReadAll(resp.Body)
if err != nil {
    fmt.Println(err)
    os.Exit(1)
}

// print the response body (htmlData) to give you feedback
fmt.Println(string(htmlData))

That’s it. Just remember: you only need to make sure the header of the request includes the base64-encoded string.

GO Authenticate via “Session Authentication”

I wrote an example which shows how session authentication works. The general workflow is: send a “basic authentication” HTTP GET with base64-encoded credentials and set a cookie. Use the cookie in all subsequent HTTP methods.

I will only focus on the parts which change to the “basic authentication” part.

The way the HTTP client is created has changed. First the cookie jar is created, then the newly created HTTP client uses this cookie jar.

Go


cookieJar, _ := cookiejar.New(nil)

// create a HTTP client
var httpClient = http.Client{Transport: tr, Jar: cookieJar}

There is a second request, but this time we don’t set the Authorization header. The reason is simple: the HTTP client makes sure the cookie with the session credentials is sent.
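The whole session flow could be sketched like this (a sketch, not the full example: the first request carries the Authorization header, the cookie jar then supplies the session cookie for the second; base URL and credentials are placeholders):

```go
package main

import (
	"encoding/base64"
	"net/http"
	"net/http/cookiejar"
)

// fetchTwice sends a first GET with basic authentication, which sets the
// session cookie, and a second GET without any Authorization header,
// relying on the cookie the jar stored.
func fetchTwice(baseURL, user, pass string) error {
	jar, _ := cookiejar.New(nil)
	client := &http.Client{Jar: jar}

	// request 1: basic auth sets the session cookie
	req, _ := http.NewRequest("GET", baseURL+"/PrismGateway/services/rest/v1/cluster", nil)
	cred := base64.StdEncoding.EncodeToString([]byte(user + ":" + pass))
	req.Header.Set("Authorization", "Basic "+cred)
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()

	// request 2: no Authorization header, the jar sends the cookie
	req2, _ := http.NewRequest("GET", baseURL+"/PrismGateway/services/rest/v1/vms/", nil)
	resp2, err := client.Do(req2)
	if err != nil {
		return err
	}
	resp2.Body.Close()
	return nil
}
```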

This blog series is dedicated to the Nutanix REST API and how to automate tasks based on the language Go leveraging the REST API.

The official documentation can be found here: http://developer.nutanix.com/ but this tutorial was started before that was announced and will focus on the whole process of development with Go and the Nutanix REST API.

I will cover the typical tasks you would like to automate. Some examples:

The PRISM GUI is using the REST API. This means everything you can do in PRISM can be done via the REST API, and even more. I believe it makes no sense to explain every method of the REST API right now. Instead I will show some basic examples in this tutorial and explain them by implementing use cases. But feel free to browse through the different methods/objects.

Your first API call using the REST API Explorer

Connect to https://CLUSTER-IP_or_CVM_IP:9440/console/api/

Click on /vms and you are able to see the standard HTTP methods like GET/PUT/POST/DELETE which are used to modify/create/get VMs based on the Nutanix platform.

Retrieve a list of all VMs which are running on Nutanix!

In this case we use the GET method to retrieve a list of all VMs which are running on Nutanix. So click on “GET /vms/”

The Implementation Notes say: “Get the list of VMs configured in the cluster”. This means we would get a list of ALL configured VMs if we call a “GET” using the right URL. Okay, you may ask: “What is the URL I need to send a GET to?”

URL entry points

For v0.8 the entry point is : https://CLUSTER-IP_or_CVM_IP:9440/PrismGateway/api/nutanix/v0.8

For v1 the entry point is : https://CLUSTER-IP_or_CVM_IP:9440/PrismGateway/services/rest/v1

So in this case we are using the v1 API and the URL is:

https://CLUSTER-IP_or_CVM_IP:9440/PrismGateway/services/rest/v1/vms/

Copying and pasting this into the browser will show something like this:

Or you could use a tool like curl, but you need to handle the authentication as well. I will talk about authentication in the second part of this tutorial.

But back to the REST Explorer, because this can be done more easily. If you scroll down in the /vms GET you will find a button called “Try it out!” which will do exactly the same for you! Click Try it out!

You will see that the response format, which is JSON, can be viewed much better now and we get some nice details!

First you will see the same URL I already showed you in the “Request URL”

Second you are able to scroll through the response and may search for the key “vmName” and the corresponding values to find all VM names. “Response Body”

Third the response Code is displayed. “200” which means: “Everything worked great” 🙂 “Response Code”

But this lists all VMs which are configured and not only the ones which are running. We would like to change this. Let’s first search the response for any key which shows the actual state of the VM!

You may find a key called “powerState”. Let’s then try to filter the response and only retrieve the VMs which are “on”.

Using Nutanix FilterCriteria

For this case we are able to use the option “filterCriteria” in the REST API Explorer to only find all VMs which are powered on. Type in “powerState==on” in the filterCriteria field and try it again.

This request fails with a response code “500” and it says: “invalid filter criteria specified”. You may ask: “Why? It is stated exactly like in the response! And where on earth should I learn more about this?”

The answer: “There is a KB article which sheds some light here: KB 000001962”

It says, for all who are not able to access the KB:

If you would like to learn more about the filters you can use on a query, use the arithmos_cli on the CVM to get more details.

In this case connect to a CVM and type:

arithmos_cli list_attributes_and_stats entity_type=vm | grep power

which shows the attribute is called “power_state” instead of “powerState”. Let’s try it again with the filter criteria “power_state==on”.

Boom!… It works!
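For completeness, the same filter could also be applied from Go; a sketch (the entry point is the v1 URL from above, the host is a placeholder):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// filteredVMsRequest builds a /vms GET with the power_state filter. The
// filterCriteria value must be URL-encoded, which url.Values takes care
// of ("==" becomes "%3D%3D").
func filteredVMsRequest(base string) (*http.Request, error) {
	req, err := http.NewRequest("GET", base+"/vms/", nil)
	if err != nil {
		return nil, err
	}
	q := url.Values{}
	q.Set("filterCriteria", "power_state==on")
	req.URL.RawQuery = q.Encode()
	return req, nil
}

func main() {
	req, _ := filteredVMsRequest("https://192.168.178.130:9440/PrismGateway/services/rest/v1")
	fmt.Println(req.URL.String())
}
```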

This completes part 1 of this tutorial! It will get an update soon with the new API coming with the Asterix release (v2 and v3)!

If you would like to learn more about the REST API there are some resources you may have a look into:

This post is related to AHV only!!! Make sure a recent backup of the VM exists!

In the last weeks customers asked me how to move a VM from one AHV container to another AHV container on the same cluster. The answer is: “There is no PRISM/GUI option for this and the manual task is pretty difficult”. So I wrote a script called move_vm, which I show in this post to simplify this.

But why should you move a VM?

There are several reasons for this.

container settings don’t fit

different containers for different organization units

DR/backup concepts based on containers

Automation based on containers

… and more

Example: Let’s say the customer started with two containers:

ISO – just for templates and CD/DVD ISO images

prod – productions environment

Now he realizes that some of the server VMs would benefit from compression but some would not. He used the best practices to figure out which server VMs would fit.

You may have noticed that there are two VMs now with the same name. I believe it makes sense to keep the old VM until you are sure the new copy works.

You can use the option “–delete” to delete the source VM. The advantage is that the new network adapters will have the same MAC address as the source VM!

Move just one vDisk/disk of the VM from container prod_comp back to prod

I renamed the new Move_VM_Test1 to Move_VM_Test2 for the next part.

Let’s say you would like to move just one vDisk from container prod_comp back to prod because you found that inline compression makes no sense. An example may be the “transaction log” vdisk of an MS SQL Server.

First we need to find out which vdisks exist and how the mapping looks. This can be done with the “–list_mapping” option.

which means that there are four vdisks and two CD/DVD drives. Let’s say we identified that the second vdisk, “scsi.1”, is the one we would like to move back to container prod. In this case we need to specify the whole mapping when calling the move_vm tool to only move the second vdisk. Copy and paste is the way to go!

You may ask: “Why is the whole VM cloned and not only the vdisk?” Yep, you are right. That would be the better way, but this is how the tool works at the moment. It is more a copy than a move, but it works and it is pretty fast because only the vdisk “scsi.1” needs to be copied.

You could specify the option “–delete” to delete the source VM and to make sure the new network adapters get the same MAC address.

There we go.

For all the people who want to know more, this is an overview of how this tool works:

Upload the vDisk from the source VM to the image service. This is needed because a direct copy is not possible.

In a recent customer Proof of Concept (PoC) I encountered a common task: importing a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV) and, after testing, exporting the VM back to the production environment. The import via the AHV Prism GUI can be done without too much effort, but there is no interface or command line tool which exports a VM. I wrote a script for the export which can be run in a Linux/Mac environment or on the NTNX-AVM. Jump directly to the export part to skip the import.

USE CASE: Import a VM from VMware vSphere to Nutanix Acropolis Hypervisor (AHV) and export it back after testing!

There is a decent post from Artur Krzywdzinski on how to import/migrate a Windows 2012R2 from VMware vSphere/ESXi to AHV. There is even a detailed documentation on the Nutanix support portal. Search for “MIGRATING WINDOWS VMS FROM ESXI TO AHV”.

The way this works can be described as follows:

Install all needed drivers into the VM before migrating. (drivers for disk devices, video, network)

Copy the VM to a Nutanix NFS share which is mounted on the source ESXi/vSphere via Storage vMotion if available; otherwise copy it with the command line/GUI yourself.

Convert the VMware vmdk to a format AHV can read/write

Create a VM based on the converted vmdk files with the same settings as in ESXi/vSphere

…. something else maybe… Start the VM… done

1. Windows VM Migration Prerequisites

I advise you to read the full documentation if you migrate VMs. I only list the basic steps needed in my test lab.

Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.

Optional: Clone any ESXi VMs you want to preserve.

Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later.

Mount the AHV container as an NFS datastore to vSphere.

Before you migrate a source VM, ensure that the source VM does not have any hypervisor snapshots associated with it.

This is a Windows Server 2012 R2 VM with VMware Tools installed.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter with no snapshots.

Optional: Clone any ESXi VMs you want to preserve.

Yes, I would like to do this. The VM which I will migrate will be the clone and not the original VM. Just in case we mess something up at any point, it would be nice to have the original one.

In this case I clone the VM directly to the NFS datastore mounted from the Nutanix cluster. So jump to the “Mount the AHV container …” part and continue here when you have finished that step.

I clone the Windows2012R2 server to the prod datastore which resides on the Nutanix cluster.

Nutanix recommends using AOS 4.5.x or later and AHV-20160217.2 or later. See the Hypervisor Details page on the Nutanix Support Portal for all the AHV versions

I am using the Nutanix CE edition so it is not that easy to make sure these requirements are met. Let’s start with AOS 4.5.x or later.

I connected today (04.10.2016) to my Nutanix NUC cluster and chose the option “Upgrade Software” in the top right corner in PRISM (gear wheel).

As you can see, “2016.06.30” is installed, but is this equal to or later than AOS 4.5.x? Yep, it is. The version format seems to be YYYY.MM.DD, so this should be okay. There is a table called “AHV AND NOS/AOS VERSION UPGRADE COMPATIBILITY” on the Nutanix portal which makes it easier to understand. The newer the NOS/AOS, the newer the AHV version which is required. In this case I believe the version should be 4.6.3 for NOS/AOS.

Now the AHV-20160217.2 hypervisor part. The Nutanix CE shows: Nutanix 20150513, which does not meet this requirement. But anyway, it works.

BTW: In this case there is an update available but I will upgrade after I finished this post.

Mount the AHV container as an NFS datastore to vSphere

First we need to make sure that the source ESXi/vSphere environment is able to mount the NFS container/datastore. To do this we need to whitelist the ESXi/vSphere environment on Nutanix. In my case the ESXi/vSphere environment (192.168.178.80 vCenter / 192.168.178.81 ESXi) is in the same subnet as the Nutanix CE edition (192.168.178.130). Make sure that in your environment the ESXi host and the CVM are able to reach each other via IP and no firewalls are blocking traffic!

Choose the “gear wheel” in the top right corner in PRISM, select “Filesystem Whitelists” and enter the IP range which should be able to mount the container/datastore. In my case I used the whole subnet 192.168.178.0/24.

Now we are able to mount the NFS datastore. I would like to mount the Nutanix container “prod” into vSphere.

In the vSphere client choose to Add NFS datastore and insert the needed values like I did. I used the Nutanix cluster IP as the Server address and “/prod” as the folder.

There we go. A datastore called “prod” is available on the source ESXi environment.

2. Install Nutanix VirtIO

In my case I created a clone of the source VM so I install the drivers only into the clone. If you skipped this part install them to the source VM.

Download the Nutanix VirtIO drivers from the Nutanix portal. I prefer the ISO image because it seems to be easier to mount it via the vSphere Web Client than to copy something to the source VM.

Mount the ISO and install the drivers.

Set the Microsoft Windows SAN policy to online. This makes sure all disk devices will be mounted after migration.


diskpart

SAN POLICY=OnlineAll

3. Migrate the VM disks to Acropolis Distributed Storage Fabric (DSF)

To migrate a VM to the Acropolis Distributed Storage Fabric you only need to Storage vMotion the VM to the mounted NFS datastore. I already copied the VM to the container/datastore “prod” when cloning the source VM. If you didn’t do this, you need to move all data of the VM to the prod container/datastore via Storage vMotion.

In the vSphere Web Client choose “migrate” for the VM and use the datastore-only option so all vmdisks will be moved to the container/datastore “prod”.

4. Convert the VM disks to AHV by importing them

To import a VMware vmdk to Nutanix AHV you need to use the Image Service/Image Configuration. So click on the “gear wheel” in the top right corner and select “Image Configuration” (maybe the name has changed already).

Choose “Upload” and enter the following:

Attention!!! Make sure you use the “-flat” !!!

nfs://127.0.0.1 will always work!

As you can see, it may not be easy to know the exact filenames. Use the vSphere Web Client datastore browser to get all needed details.

5. Create a new Windows VM on AHV and attach the Windows disk to the VM

Power Off the source VM now.

Create a VM in the Nutanix Prism GUI with the same settings as the source VM.

1 CPU – 2 GB RAM – 20 GB Disk – 1 Network Adapter in my case.

Add a disk and choose to clone from the image service where the disk should be imported already.

Add a network adapter and connect it.

Now you can start the VM and it should run. You need to configure network and maybe an option here and there. But basically the VM is imported.

As you may have noticed, the import works without the NTNX-AVM. But for the export there is no PRISM/NCLI/ACLI command to export a VM. So I wrote a script which helps with this part, based on the great post from Artur Krzywdzinski.

The script will export the AHV VM to a container of your choice where you can copy it via NFS to ESXi/vSphere or to somewhere else.

In my case I would like to export the WinClone VM back to ESXi, so all tools like the VMware Tools are already installed. I will export the WinClone to the ISO container, just to make sure there is no confusion with the “prod” container where the import took place.

Connect to the NTNX-AVM via SSH. Now let’s export the VM with the export_vm command.

STEP 1 – using export_vm to export the VM to a container

The export takes some time because all VM disk data needs to be converted into the VMware vmdk format.

Mount the ISO container to the target ESXi/vSphere environment.

Step 2 – Register the VM into the ESXi/vSphere

Now I would like to create a new VM on ESXi/vSphere based on the exported files. I will just register the .vmx file! The vmdk file is at this point not a proper ESXi/vSphere file, so it needs to be converted. I am using migrate (move VM), which will do the same but lets me avoid the command line. This KB article should help if you want to do it the manual way.

Browse the ISO datastore and register the .vmx file as Win-Clone-2. The “-2” is only needed if the original clone still exists.

Step 3 – Prepare the VM for the first boot

Upgrade the virtual Hardware.

Set the VM OS to Windows.

Change the SCSI controller from “BusLogic” to “LSI Logic SAS”.

Add a network device, or all needed ones.

Now the nice part begins. Instead of manually converting all vmdisks we just migrate the VM to another datastore, in the best case directly to where the VM should reside.

Step 4 – Last step! Power on the VM.

I asked myself how an admin could use the NTNX-AVM. So I decided to show and provide some real-world examples of how this powerful automation VM can be used.

USE CASE: A daily health report should run on the Nutanix cluster and send to a specified email address!

Let’s start with the script itself. There is no script provided by Nutanix except the Nutanix Cluster Check (ncc). It does a decent job, but because of the hundreds of tests and the amount of output it may not be the easiest to start with. So, based on the script provided by BMetcalf in the Nutanix Community, I developed a script called “daily_health_report.sh” for the NTNX-AVM. It is automatically installed with the NTNX-AVM starting today.

It runs the following commands remotely on a CVM, which gives you a good overview of the current cluster status.


nodetool -h 0 ring
genesis status
cluster status
allssh df -h
ncli alerts ls
allssh 'ls -lahrt ~/data/logs | grep -i fatal'

Okay, we have a script, but how do we run it once a day? For this I introduced jobber to the NTNX-AVM.

Learn jobber the fast way

Connect via SSH to the NTNX-AVM and run:


jobber list

In this case no job is known. I prepared an example which runs the script daily_health_report.sh every day at 04:00.

How does the daily_health_check.sh work?

First of all, this script will not run in your environment as-is because all parameters for the daily_health_check.sh are set up for my lab environment. Okay, let’s make sure it will run in your environment.

STEP1 – Enable SSH access from NTNX-AVM to the cluster CVMs

The script makes use of ssh/scp to run the commands remotely on one of the CVMs. To run a script non-interactively we need to enable password-less authentication between the NTNX-AVM and the CVMs. I wrote a script which enables password-less authentication.

This script creates a key pair and deploys the keys to the CVMs. When you run it you need to specify the cluster IP/name and the PRISM admin password.

install_key.sh --host=192.168.178.130 --password=nutanix/4u

A test ssh connection should work now without requesting a password.

STEP2 – Edit the jobber file

Use an editor of your choice, like “vi”, and edit the line which starts with “cmd : daily_h…”; adjust the parameters to your needs.

STEP3 – Test the job

There it is. Sorry for the German Thunderbird version, but you should get the idea of how the email looks. An email with one attachment called “daily_health_report-<DATE>.txt”.

USE CASE: Run a monthly “ncc health_checks run_all” and send the output to a specified email address!

Some Nutanix people would say: “Why don’t you use the ncc instead?” Good point. This post shows how to run ncc every x hours and send an email. But how do you run ncc once a month and get all ERROR/FAIL messages in the body?

For this case I created the ncc_health_report.sh script, which runs “ncc health_checks run_all” and sends an email.

STEP1 – Extend the “.jobber” file to add this job

The example which can be found on the NTNX-AVM in “~/work/src/github.com/Tfindelkind/automation/NTNX-AVM/jobber/example/monthly_ncc_health” defines a job which runs on the 1st of each month and calls the ncc_health_report.sh.

Since I started at Nutanix I have thought about a way to write and run scripts/tools around the Nutanix ecosystem. But there are different languages which are used by the community: Perl/Python/Golang/PowerShell etc. So I asked myself: “Where the heck should I install the runtime and the scripts/tools? Because the CVM is a bad place for this.”

The answer took me a while but here we go:

Nutanix automation VM called NTNX-AVM

So there is no image which fits all; instead the NTNX-AVM is based on recipes which define the runtime/scripts/tools which will be installed. The foundation of these are the cloud images which are designed to run on cloud solutions like AWS/Azure/OpenStack. These images provide good security from scratch. Another advantage is that the images are already deployed, which means there is no different way to install them other than “importing” a vendor-controlled image. This is good for maintaining the whole project.

NTNX-AVM v1, when deployed, provides golang, git, govc, java, ncli (CE edition), vSphere CLI and the automation scripts from https://github.com/Tfindelkind/automation preinstalled. So, for example, you can move a VM from container A to container B with the move_vm binary, which leverages the Nutanix REST API; this is not possible in AHV out of the box.

I introduced a job scheduler system called jobber (https://github.com/dshearer/jobber) to automate tasks/jobs. The advantages are that you are able to review the history of already executed jobs and you have more control when something goes wrong.

Use cases for the NTNX-AVM

Backup Nutanix VM’s to a NFS store like Synology/Qnap/linux…

Move VM from one container to another one

Do some daily tasks like generate reports of specific performance counters you would like to monitor which are not covered by Prism

anything which talks to Nutanix REST API and needs to be scheduled.

…. there will be more

Installation of NTNX-AVM on Acropolis Hypervisor (AHV)

For an easy deployment and usage I created a simple bash script which will do all the hard work.

The deployment for VMware and Hyper-V will follow. At the moment the process is more manual. I will post a “HOW-TO install”.

What you need is a Nutanix cluster based on AHV (>=4.7) and a client where you are able to run the bash script. Ubuntu, Debian, Red Hat, CentOS and macOS should work fine as a client. The Community Edition (CE) is the base of my development environment and is fully supported.

This is how the environment looks like before the deployment. My three node cluster based on Intel NUC.

We start at your client system, in my case a MacBook Pro. Download the latest stable release of DCI from https://github.com/Tfindelkind/DCI/releases. In my case the version v1.0-stable is the latest build available. The “Source code (tar.gz)” will work for me.

Change to the Download folder and unpack/untar the file:

cd Downloads

tar -xvzf DCI-1.0-stable.tar.gz

You can see there are several recipes available but let’s focus just on NTNX-AVM v1 based on CentOS7.

NTNX-AVM recipe config file

IMPORTANT: The NTNX-AVM needs an internet connection when deployed, because all tools need to be downloaded.

Now we need to edit the recipe config file of the NTNX-AVM to make sure that the IP, DNS, etc. are set up the way we need them. Use a text editor of your choice to edit the “/recipes/NTNX-AVM/v1/CentOS7/config” file.

You should edit following settings to your needs:

VM-NAME The name of the VM guest OS.

VM-IP The fixed IP

VM-NET The network of the VM

VM-MASK The netmask of the network

VM-BC The broadcast address of the network

VM-GW The gateway

VM-NS The nameserver

VM-USER The username for the NTNX-AVM which will be created

VM-PASSWORD The password for this user -> support for access keys will be added soon.

You need to escape some special characters like “/” with a “\” (backslash)

VCENTER_IP IP of the vCenter, when used

VCENTER_USER User of the vCenter

VCENTER_PASSWORD The password for this user

This is an example file for my environment:
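For illustration, a filled-in config could look roughly like the sketch below. All values are examples for my 192.168.178.0/24 lab network (the gateway, nameserver and vCenter lines are assumptions), and the exact file syntax may differ in your DCI version. Note the escaped “/” in the password, as mentioned above:

```text
VM-NAME NTNX-AVM
VM-IP 192.168.178.200
VM-NET 192.168.178.0
VM-MASK 255.255.255.0
VM-BC 192.168.178.255
VM-GW 192.168.178.1
VM-NS 192.168.178.1
VM-USER nutanix
VM-PASSWORD nutanix\/4u
VCENTER_IP
VCENTER_USER
VCENTER_PASSWORD
```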

NTNX-AVM with DHCP enabled

If you don’t want to specify a fixed IP, DNS, etc., you can roll out the NTNX-AVM with DHCP. To do this, edit the “/recipes/NTNX-AVM/v1/CentOS7/meta-data.template” file and remove the network part so the file looks like the one below. The “ifdown eth0” and “ifup eth0” entries are related to a bug in the CentOS 7 cloud image.

instance-id: iid-VM-NAME

bootcmd:

 - ifdown eth0

 - ifup eth0

local-hostname: VM-NAME

Deploy the NTNX-AVM to the Nutanix cluster

Now we are ready to deploy the VM to the Nutanix cluster with the dci.sh script.

We need to specify a few options to run it:

--recipe=NTNX-AVM Use the pre-built NTNX-AVM recipe

--rv=v1 It’s the first version so we use v1

--ros=CentOS7 In this case we use the CentOS7 image and not Ubuntu

--host=192.168.178.130 This is the cluster IP of Nutanix; a CVM IP will work too
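Putting those options together, the call could look like this. The snippet only assembles and echoes the command string so it can be checked before running the real dci.sh from the unpacked DCI directory; the host IP is the example value from above:

```shell
# Assemble the dci.sh call from the options described above.
RECIPE=NTNX-AVM       # pre-built NTNX-AVM recipe
RV=v1                 # recipe version
ROS=CentOS7           # recipe OS variant
HOST=192.168.178.130  # Nutanix cluster IP (a CVM IP works too)

CMD="./dci.sh --recipe=$RECIPE --rv=$RV --ros=$ROS --host=$HOST"
echo "$CMD"
```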

First it will download the CentOS cloud image. Then it will download the deploy_cloud_vm binary.

It will read the recipe config file and generate a cloud seed CD/DVD image. This means all configuration like IP, DNS, etc. will be saved into this CD/DVD image called “seed.iso”.

DCI will upload the CentOS image and seed.iso to the AHV image service.

The NTNX-AVM VM will be created based on the CentOS image, and the seed.iso will be connected to the CD-ROM. At the first boot all settings will be applied. This is called a NoCloud deployment, based on cloud-init. It will only work with cloud-init-ready images.

The NTNX-AVM will be powered on and all configs will be applied.

In the background all tools/scripts will be installed

The CentOS cloud image and the seed.iso have been uploaded to the image service.

The NTNX-AVM has been created and started.

Using the Nutanix Automation VM aka NTNX-AVM the first time

Connect via ssh to the NTNX-AVM IP. 192.168.178.200 in my case. First of all we need to make sure that all tools are fully installed because this is done in the background after the first boot.

So let’s check whether /var/log/cloud-init-output.log shows something like:

The NTNX-AVM is finally up, after … seconds
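A quick way to check is to grep the log for that line. The snippet below simulates the log file locally so the check itself can be verified as written; on the real NTNX-AVM, point LOG at /var/log/cloud-init-output.log instead:

```shell
# On the NTNX-AVM use: LOG=/var/log/cloud-init-output.log
LOG=./cloud-init-output.log
printf 'The NTNX-AVM is finally up, after 642 seconds\n' > "$LOG"  # simulated content

# Report whether the first-boot installation has finished.
if grep -q 'finally up' "$LOG"; then
  echo "setup finished"
else
  echo "still installing"
fi
```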

You should reconnect via ssh once all tools/scripts are installed to make sure all environment variables will be set.

Everything is installed and we can use it.

Test the NTNX-AVM environment

Let’s connect to the Nutanix cluster with the “ncli” (nutanix command line) and show the cluster status.

ssh nutanix@192.168.178.200

ncli -s 192.168.178.130 -u admin -p nutanix/4u

cluster status

That’s it. NTNX-AVM is up running.

Today I started to implement the ntnx_backup tool, which will be able to backup/restore an AHV VM to/from an external share (NFS, SMB, ….) and which will leverage jobber as the job scheduling engine.

Now it’s time to create a Nutanix cluster. But there are some default settings I would like to change before I create the cluster. This is not mandatory, but it will increase the usability in the future. Just jump to the create cluster part if you want to skip this.

Changing the AHV hypervisor hostname (optional)

Use an ssh client like PuTTY or my favorite mRemoteNG to connect to the AHV (host) IP. Use the default password “nutanix/4u” when connecting as the “root” user. Use a text editor like vi/nano to edit the “/etc/hostname” file and change the entry to the hostname you would like to have.

The following table shows the hostnames I used in this setup.

DNS-Name        Type   IP
NTNX-NUC1       AHV    192.168.178.121
NTNX-NUC2       AHV    192.168.178.122
NTNX-NUC3       AHV    192.168.178.123
NTNX-NUC1-CVM   CVM    192.168.178.131
NTNX-NUC2-CVM   CVM    192.168.178.132
NTNX-NUC3-CVM   CVM    192.168.178.133

Changing the AHV hypervisor timezone (optional)

By default the timezone of the AHV hypervisor is PDT (Pacific daylight time). From a support perspective it makes sense that all logging dates use PDT, so that it is easier to analyse different log files side by side. But I would like to have the time in my timezone, which is Germany. To change the timezone, you need to link the correct timezone file to /etc/localtime. You can find the available files in “/usr/share/zoneinfo”.

Make a backup of the actual /etc/localtime: “mv /etc/localtime /etc/localtime.bak”

Make a link to the wanted timezone file: “ln -s /usr/share/zoneinfo/Europe/Berlin /etc/localtime”

Changing the CVM name (optional)

This is a tricky part. I could not find a solution to change the CVM name. It seems there is no way to do this.

Changing the CVM timezone (optional)

sudo mv /etc/localtime /etc/localtime.bak

sudo ln -s /usr/share/zoneinfo/Europe/Berlin /etc/localtime

@TimArenz reminded me that it may be easier and better to change the timezone after the cluster is created. This can be done via the Nutanix CLI (ncli):

ncli cluster set-timezone timezone=Europe/Berlin

Creating the 3 node cluster

There are two ways to create a multi-node Nutanix CE cluster: via the cluster init web page or via the command line.

Cluster init web page

Connect to: http://CVMIP:2100/cluster_init.html

Enter the needed values and start the creation.

Cluster create via command line

We need to connect to one of the CVMs of this setup via ssh with the user “nutanix” and password “nutanix/4u”.

The creation is pretty simple and involves two steps: invoke the create cluster command and set the DNS server.

cluster -s CVM-IP1,CVM-IP2,CVM-IP3 create

ncli cluster add-to-name-servers servers="DNS-SERVER"

cluster -s 192.168.178.131,192.168.178.132,192.168.178.133 create

ncli cluster add-to-name-servers servers="192.168.178.20"

The first connect to PRISM

Open a browser and connect to one of the CVM IPs. Enter the user credentials: “admin/admin”

When logging in the first time after the installation you will be asked to change the admin password.

The NEXT credentials which were used for the download need to be entered now. This means that Nutanix CE needs an internet connection to work. There is a grace period, which should be around 30 days.

I will focus on my own setup, based on the Intel NUC6i7KYK. The setup is pretty straightforward up to the point when the onboard network comes into play. The Intel driver included in Nutanix CE is not the right one for the Intel NUC6i7KYK onboard network.

Overview of the Nutanix CE install process

Make sure your environment meets the minimum requirements. The table shows that a minimum of two disks is needed, at least one of them an SSD. That’s the reason why I used 2x SanDisk X400 M.2 2280 in my environment. Remember that NVMe drives are not working at the moment.

Download the Nutanix CE disk image, which will be copied to a USB flash drive. This will be the install and boot device for this environment. The USB drive should be at least 8 GB in size, but I recommend using a device as big as possible; 32 GB flash drives start at around 10€. The reason is simple: if your environment for any reason starts to write extensive logs or data to the flash drive, an 8 GB drive may wear out. And maybe the image becomes bigger in the future.

Boot from the USB flash drive and start the installer with the right values (IP, DNS, …). This step will install the Controller VM (CVM), where all the Nutanix “magic” resides, to one of the SSD drives. All local disks will be mapped directly to the CVM. This means the Acropolis Hypervisor (AHV), which is KVM-based, is not able to use the storage directly anymore.

If chosen, a single-node cluster will be created. In my case, where I will build a three-node cluster, I will leave this option blank.

Step-by-Step Installation of Nutanix CE based on Intel NUC6i7KYK

The image itself is compressed with gzip (“.gz”). I used the tool 7-Zip to unpack it. A file like ce-2016.04.19-stable.img will be unpacked, ready to be copied to the USB flash drive.
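On Linux or macOS you don’t need 7-Zip; gunzip does the same job. The snippet below uses a dummy file in place of the real download so the commands can be verified as written:

```shell
# Simulate the downloaded archive with a dummy file, then unpack it.
printf 'dummy image content' > ce-2016.04.19-stable.img
gzip ce-2016.04.19-stable.img        # creates ce-2016.04.19-stable.img.gz

gunzip ce-2016.04.19-stable.img.gz   # restores ce-2016.04.19-stable.img
ls -l ce-2016.04.19-stable.img
```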

Now attach the USB flash drive and download the tool Rufus. This program can raw-copy an img file like this one byte by byte to a USB flash drive. Choose the right USB flash drive, then switch to “DD Image” (dd means disk dump). The last step is to choose the img file and hit “Start”.

Download the updated Intel e1000e network driver, because the version shipped with the image does not provide the right one. Unzip the file so you have got a file called “e1000e.ko”.

Now we need to copy the file “e1000e.ko”, which is a kernel module, to the USB flash drive. But the filesystem used on the USB flash drive is ext4, which MS Windows cannot access by default. So we need a tool like EXT2FSD to do so.

After installing EXT2FSD and rebooting, start the Ext2 Volume Manager. In my case I needed to choose a drive letter manually to be able to work with the USB drive. So scroll down to the right device in the bottom window, select the drive and hit the “F4” key, which should assign an unused drive letter.

Copy the file “e1000e.ko” to the following directory on the USB flash drive and overwrite the existing file: “/lib/modules/3.10.0-229.4.2.el7.nutanix.20150513.x86_64/kernel/drivers/net/ethernet/intel/e1000e/”.
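On a Linux client the same step is a plain cp onto the mounted ext4 partition, without EXT2FSD. The sketch below recreates the directory tree locally so the copy can be verified; on a real system MNT would be the USB drive’s mount point (e.g. /mnt/usb) and e1000e.ko the driver you downloaded:

```shell
MNT=./usb   # stand-in for the USB drive's mount point, e.g. /mnt/usb
DEST="$MNT/lib/modules/3.10.0-229.4.2.el7.nutanix.20150513.x86_64/kernel/drivers/net/ethernet/intel/e1000e"

mkdir -p "$DEST"                   # simulate the existing tree on the stick
printf 'dummy module' > e1000e.ko  # stand-in for the real kernel module

cp e1000e.ko "$DEST/"              # overwrite the shipped driver
ls "$DEST"
```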

The USB flash drive is ready to boot on the Intel NUC6i7KYK!

Attach the USB flash drive to your Intel NUC6i7KYK and boot it. Feel free to change the boot order right now so that the Intel NUC6i7KYK will always boot from the USB flash drive.

Now the Intel NUC6i7KYK is ready to boot from the USB flash drive.

After the boot you should see the login screen.

Log in as user “root” with the password “nutanix/4u”. Load the Intel network driver with the command “modprobe e1000e”. Use “exit” to return to the login screen.

The user “install” starts the installation.

Choose your keyboard setting. In my case I used “de-nodeadkeys”.

The following screen shows a small form. This is an example for a single-node setup.

You may miss the configuration for a 3- or 4-node cluster. If you would like to set up a multi-node cluster, your setup could look like this. This means that the cluster itself will be created later and we just install the environment. (Acropolis Hypervisor = host, CVM = Nutanix Controller VM)

There are two IPs which need to be configured. The host IP is the IP of the hypervisor. In the case of Nutanix CE the Acropolis hypervisor will be installed, which is based on the KVM hypervisor. There are a lot of changes compared to vanilla KVM, so it is not the same. The logic of all Nutanix functions is implemented in the Controller VM. This is the reason why the OS installed in this VM is called NOS (Nutanix OS). NOS is based on CentOS.

The installation takes a while. In the end you should see a login screen with a random hostname.