Saturday, June 02, 2018

I haven't blogged in a very long time, but felt this was something worth sharing. Sometimes tools handle the same kind of information inconsistently, and that can cause serious headaches and take real effort to resolve. In this particular case, Ansible requires AWS tags to be in list format for autoscaling groups (ASGs) but in dict format for elastic load balancers (ELBs) and security groups (SGs). If you try to use a list of tags for ELBs or SGs, you'll get the following error:

"argument tags is of type <type 'list'> and we were unable to convert to dict: <type 'list'> cannot be converted to a dict"

Of course, you'll get a similar but opposite error if you try to use tags in dict form for ASGs:

"argument tags is of type <type 'dict'> and we were unable to convert to list: <type 'dict'> cannot be converted to a list"
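As a rough sketch of the two shapes Ansible expects (the module parameters reflect the Ansible 2.x-era ec2_asg and ec2_elb_lb modules; the tag names and resource names here are illustrative, not from our actual config):

```yaml
# Sketch only: ASG tags as a list of hashes, ELB tags as a single dict.
- ec2_asg:
    name: example-asg
    tags:
      - environment: production
        propagate_at_launch: yes
      - owner: webteam
        propagate_at_launch: yes

- ec2_elb_lb:
    name: example-elb
    state: present
    tags:
      environment: production
      owner: webteam
```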

If you're like me, you want to define your config data once and manipulate it as needed. In our case we have a few hashes/lists of hashes of AWS tags that need to be combined to be useful with the various Ansible ec2 modules. I've set up a gist illustrating this at https://gist.github.com/ITBlogger/a5b1ac1ab7ac2f12c4d7f6f77be359e7. Lines 27 through 71 (https://gist.github.com/ITBlogger/a5b1ac1ab7ac2f12c4d7f6f77be359e7#file-aws_role_sample-L27) show how we manipulate the lists, combining them for one use and recreating them as dicts for the others.

That section shows how we use the Jinja2 combine filter and a with_items loop to walk the list of tags, adding each one to the merged_tags dict that is used later: https://gist.github.com/ITBlogger/a5b1ac1ab7ac2f12c4d7f6f77be359e7#file-aws_role_sample-L41

You can also see in the playbook vars that we've initialized merged_tags as an empty dict (`merged_tags: {}`): https://gist.github.com/ITBlogger/a5b1ac1ab7ac2f12c4d7f6f77be359e7#file-aws_playbook_sample-L16. Hopefully this makes sense given the gist examples.
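The merge pattern itself can be sketched in a self-contained playbook like this (the variable names and tags are illustrative; the combine filter needs Ansible 2.0 or later):

```yaml
- hosts: localhost
  gather_facts: no
  vars:
    merged_tags: {}        # must start out as an empty dict
    tag_list:              # example list of tag hashes to fold together
      - environment: production
      - owner: webteam
  tasks:
    - name: Fold the list of tag hashes into one dict
      set_fact:
        merged_tags: "{{ merged_tags | combine(item) }}"
      with_items: "{{ tag_list }}"

    - debug:
        var: merged_tags
```

Because set_fact re-evaluates merged_tags on every loop iteration, each item gets layered on top of the previous result, leaving one flat dict at the end.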

Hope you never run into this problem, but if you ever have to install Red Hat using an iSCSI boot disk, do not set up /var as a separate partition.

Doing so will seriously break the server: iSCSI state data is held in /var/lib/iscsi, and Red Hat needs that data in order to mount /var, so if /var is a separate LVM volume the data is unreachable at the very moment it is required.
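For example, in a kickstart file for an iSCSI-booted host the LVM layout might look like the sketch below (sizes and volume names are made up); the important part is that there is no separate logvol line for /var, so /var stays on the root volume:

```
# Hypothetical kickstart partitioning sketch for an iSCSI boot disk
part /boot --fstype=xfs --size=500
part pv.01 --size=1 --grow
volgroup vg_root pv.01
logvol swap --vgname=vg_root --name=lv_swap --size=4096
logvol / --fstype=xfs --vgname=vg_root --name=lv_root --size=1 --grow
# No "logvol /var ..." line: /var/lib/iscsi stays reachable at mount time
```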

Friday, May 03, 2013

Mirantis Fuel Web requires six separate networks to be created when building up an OpenStack cloud. The simplest way to provide them is with virtual networks (VLANs); however, this presents a few challenges because of the complexities of configuring VLANs.

This post is specific to Cisco switches, but may be relevant for other networking equipment as well.

For test purposes, this was all done on one switch and each bare metal server had one interface cabled to the switch. In production environments, it is likely that, at a minimum, a completely separate storage network would also exist, but more complicated networking schemes are quite possible.

For each port on the switch that will be connected to one of the bare metal servers, the following needs to be done:

Trunking mode must be used because each interface needs to be able to handle all 6 of the virtual networks. In the normal access mode, an interface can only communicate over a single VLAN.

For each interface, it's good practice to specify which VLANs it is allowed to carry.

Switchport trunk native vlan tells the switch which VLAN carries untagged traffic. VLAN tags are the mechanism that lets a single network interface communicate over multiple discrete subnets; in this case, only traffic on VLANs 100 through 104 will be tagged.

Spanning-tree portfast trunk tells the switch that this trunk port is connected to a server or PC, so it can bypass the usual network-loop checks.

Switchport nonegotiate disables trunk negotiation (DTP) on the port, forcing the interface into the configured trunk mode, because the connected server does not understand the protocol the switch uses to negotiate trunking versus access mode.
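Putting those per-port commands together, the interface configuration comes out roughly like this (the interface name is an example, and the VLAN numbers assume VLAN 10 as native plus VLANs 100 through 104 for the tagged networks; on some switch models you may also need switchport trunk encapsulation dot1q before setting the mode):

```
interface GigabitEthernet0/1
 description bare metal OpenStack node
 switchport mode trunk
 switchport trunk allowed vlan 10,100,101,102,103,104
 switchport trunk native vlan 10
 switchport nonegotiate
 spanning-tree portfast trunk
```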

In the switch config the VLANs also need to be created and activated using:

vlan 10,100,101,102,103,104

For those who normally work with VLANs in switchport access mode, the switch does this automatically, but when working with trunks, allowing a VLAN on a port does not automatically create and enable that VLAN.

I am not a networking expert so some particulars here may have been missed, but I hope this is helpful.