"... an engineer who is not only competent at the analytics and technologies of engineering, but can bring value to clients, team well, design well, foster adoptions of new technologies, position for innovations, cope with accelerating change and mentor other engineers" -- CACM 2014/12

Sunday, December 31. 2017

I run check_mk for monitoring some servers. Currently, the check_mk host uses ssh connections to acquire data from each monitored host.

Journey into the SaltMine to keep nagios fed with check_mk shows some ways of avoiding ssh and instead using minion/master interactions to capture the data. I am leaning towards revisiting this by using SaltStack's inotify beacon to signal captured file changes, which then trigger events and orchestration to transfer the data from the minion/host to the check_mk/monitor. And I think it can be done in a way such that the salt master doesn't necessarily need to reside on the check_mk monitor. [As a note, the article shows some file-locking mechanisms which might come in handy when I try to tackle this.]
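To make the idea a little more concrete, here is a minimal sketch of the beacon/reactor wiring I have in mind. The spool file path, the reactor file name, and the event handling are all hypothetical; this is not a tested configuration.

# /etc/salt/minion.d/beacons.conf on the monitored host: watch the file the
# agent writes and fire an event whenever it is modified or closed after writing
beacons:
  inotify:
    - files:
        /var/spool/check_mk/agent_output:      # hypothetical spool file
          mask:
            - modify
            - close_write
    - disable_during_state_runs: True

# /etc/salt/master.d/reactor.conf on the salt master: map the beacon's event tag
# to a reactor/orchestration state that pushes the file to the monitoring host
reactor:
  - 'salt/beacon/*/inotify//var/spool/check_mk/agent_output':
    - /srv/reactor/push_agent_output.sls       # hypothetical reactor state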

But, first, I wanted to prove the theory in a different scenario. This example uses three hosts:

monitored host, which is running the salt-minion, and onto which the check_mk monitoring agent is to be installed,

monitoring host, which runs check_mk and also has a salt-minion installed, and the

salt-master, which controls the state and interactions between hosts

The monitoring host will use ssh to connect to the monitored host and access the agent. During the first ssh session, a manual intervention is typically required to confirm usage of the destination's public host key, which then goes into the ~/.ssh/known_hosts file. '-o StrictHostKeyChecking=no' could be used as a simple workaround, but is not very security-conscious. Instead, I came up with a series of SaltStack events and states to get the monitored host's public key into the monitoring host's known_hosts file.

There are a number of key sets in use:

When check_mk connects to an agent via ssh, it will typically use a local private key, and will require the matching public key in the monitored host's ~/.ssh/authorized_keys file. I use SaltStack states and pillars to distribute and install the public key, and make use of the command="/usr/bin/check_mk_agent" option in the authorized_keys file (see the sketch after this list).

Each host has a unique public/private key pair. SSH uses this to prevent man-in-the-middle attacks, and to ensure the host hasn't changed. This blog entry is about getting this monitored host's public key into the monitoring host's known_hosts file.
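As a rough illustration of both pieces, here is a minimal SLS sketch using Salt's ssh_auth and ssh_known_hosts states. The user, host name, and key material are placeholders, and the host key would in practice be gathered from the monitored minion (for example via the salt mine) rather than hard-coded.

# Sketch only: user, host, and key values below are placeholders.

# On the monitored host: install check_mk's public key, restricted to the agent command
check_mk_agent_key:
  ssh_auth.present:
    - user: root
    - enc: ssh-rsa
    - name: AAAAB3NzaC1yc2E...            # public key body, distributed via pillar
    - options:
      - command="/usr/bin/check_mk_agent"

# On the monitoring host: record the monitored host's public host key
monitored_host_known:
  ssh_known_hosts.present:
    - name: monitored.example.org
    - user: root
    - enc: ssh-rsa
    - key: AAAAB3NzaC1yc2E...             # host public key, e.g. pulled from the salt mine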

While working through this issue, I have been thinking about ways to maintain temporary variables somewhere. I first looked at Thorium, but it doesn't seem to have a very flexible two-level lookup system, as I need to access variables based upon minion_id and service type.

The consul pillar seems to be usable only on the master, but I am thinking something should be available on the minion, or accessible from the minion. I thought of the sqlite module for this, but it might be a bit heavy duty.

While researching this issue, I came across other interesting Salt articles, which I bookmark here for possible future reference:

stretch repository is missing packages #42715: this is interesting because in order to load 2017.7.1, I would have to obtain it from the SaltStack repository (as Debian doesn't have it yet), and since I run Stretch and Buster, this might cause some issues.

Wednesday, November 2. 2016

Dustin Spinhirne's OVN tutorial was recently mentioned on the OVS mailing list. His examples use a single host in an ESXi environment with virtual hosts simulated through namespace commands.

As an excuse to exercise my Vagrant automation skills, I created a Vagrant multi-host test environment running in VirtualBox. Interfaces are created automatically, ip addresses are assigned automatically, and guests can have readable names. Previous articles here explain how I built a recent custom kernel and how I built the latest version of OVS/OVN. The resulting kernel and packages are used to spin up this Vagrant/VirtualBox test environment for playing with OVN.

Tuesday, October 25. 2016

I have been using Salt for a number of infrastructure automation projects. There is still a lot to learn. For me, it is an excellent 'as built' documentation tool, not to mention the time savings and repeatability of building, rebuilding, and supporting infrastructure.

kitchen-salt: a Test Kitchen provisioner for Salt. The provisioner works by generating a salt-minion config, creating pillars based on attributes in .kitchen.yml, and calling salt-call. It is tested with kitchen-docker against CentOS, Ubuntu, and Debian (a sketch of a .kitchen.yml follows after this list).

Salt Community Projects: The one currently at the top of the list: HubbleStack. "... on-demand profile-based auditing, real-time security event notifications, automated remediation, alerting and reporting"
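For reference, a hypothetical .kitchen.yml along the lines kitchen-salt expects. The formula name, platform, and pillar contents are made-up placeholders, and the key names should be checked against the kitchen-salt documentation.

---
driver:
  name: docker

provisioner:
  name: salt_solo
  formula: myformula               # hypothetical formula under test
  state_top:
    base:
      '*':
        - myformula
  pillars:
    top.sls:
      base:
        '*':
          - default
    default.sls:
      myformula:
        some_setting: true         # hypothetical pillar data

platforms:
  - name: debian-8

suites:
  - name: default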

Thursday, October 13. 2016

Having gone through the exercise of getting Vagrant to build machines with multiple interfaces consistently, I am now ready for the next step.

The next step is being able to create multiple interconnected machines. I tested the basic configuration of two machines in a previous blog entry. I could continue on with manually defining machines and their relationships, but that becomes a lot of manual editing and book-keeping.

Instead, I wanted to see if it could be done with some simple lists and some code. So... having not programmed in Ruby before (the language upon which Vagrant relies), I picked up the third edition of 'Beginning Ruby', took a read through, figured out the basics of the language, and was able to successfully program an auto-build configuration.

Two sets of definitions are required: a machine list, and a list of connections between the machines. My test example:

# a list of machines to create within VirtualBox
machineNames = ['edge01','edge02','core01','core02','host01']
# a list of connections to be made between machines
# this example makes use of a series of point to point links
vlans = [
['edge01','core01'],['edge01','core02'],
['edge02','core01'],['edge02','core02'],
['core01','core02'],
['host01','core01'],['host01','core02']
]

In the example, there are a number of links, and each link is defined by identifying the machines taking part. For example, one link contains edge01 and core01. This link will have an ip subnet automatically assigned and will have a unique VirtualBox internal network.

By changing the machineNames list and the vlans list, a network of the desired topology will be provisioned with the 'vagrant up' command.
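For illustration, here is a minimal sketch of the kind of Vagrantfile loop that consumes those two lists. It is not the exact code used; the box name and the 10.100.x.x addressing scheme are assumptions. Each entry in vlans becomes a VirtualBox internal network, and every machine on that link gets an address in a per-link /24.

# Sketch only: box name and addressing are assumed, not the actual values used.
Vagrant.configure("2") do |config|
  config.vm.box = "debian-stretch-custom"        # hypothetical custom box

  machineNames.each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
      vlans.each_with_index do |members, idx|
        next unless members.include?(name)
        # one internal network and one /24 per link; the host part comes from
        # the machine's position within the link definition
        node.vm.network "private_network",
          virtualbox__intnet: "vlan#{idx + 1}",
          ip: "10.100.#{idx + 1}.#{members.index(name) + 2}"
      end
    end
  end
end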

Tuesday, October 11. 2016

After having gone through the basics of a Vagrant install, with more to do yet, I need to record some of the commands I used to get things going.

This solution is based upon using Debian Stretch/Testing as the 'box', which I build in a minimal configuration. I create a 'vagrant' user during the build process, and I also ensure that the first network card is of type 'Paravirtualized Network (virtio-net)' when the box is built under VirtualBox.

When using Vagrant to provision a test environment where each guest has a single network interface, life is good. However, when running Debian Stretch/Testing as a guest with multiple interfaces, the interactions between Vagrant and VirtualBox networking get a little weird.

As I write this, and thinking ahead a bit, I will need to try the virtio driver and see if I get something different, i.e. a better result which doesn't require a workaround.
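For the record, one way to experiment with that is to force virtio-net emulation through VirtualBox provider customization in the Vagrantfile; a hypothetical, untested sketch:

# Ask VirtualBox to emulate virtio-net instead of the default Intel adapters.
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
    vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
    vb.customize ["modifyvm", :id, "--nictype3", "virtio"]
  end
end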

Anyway, fundamentally, when emulating Intel network adapters like the PRO/1000 MT, VirtualBox assigns them PCI bus numbers highest first. On boot up, these are the default assignments in the guest prior to renaming:

You can see that PCI 00:11.0 is labelled as eth2, but is actually 'Adapter 1' in the VirtualBox network configuration list. This gets in the way of Vagrant's requirement that the first interface be of type NAT. Under Stretch's standard renaming convention, eth0 will be renamed enp0s8, and eth2 will be renamed enp0s17. Due to lexical ordering, Vagrant will pick up eth0/enp0s8 as the first interface, which in fact matches 'Adapter 3' in the VirtualBox network list, and therefore won't be assigned a NAT address, which results in no connectivity between Vagrant and the guest.

The first port is an implied ':forwarded_port'. The two port definitions, one set for each test guest, create VirtualBox internal networks 'net1' and 'net2' on Adapters 2 and 3. When using 'virtualbox__intnet', be sure to put the ip address after 'virtualbox__intnet', otherwise a :hostonly network will be created. The VIRTUALBOX INTERNAL NETWORK documentation could therefore be deemed slightly inaccurate.
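As a hypothetical illustration of the per-guest network stanzas being described (guest name and addresses are placeholders, not the actual configuration):

# Adapter 1 is Vagrant's implied NAT/forwarded_port interface; Adapters 2 and 3
# join the VirtualBox internal networks 'net1' and 'net2'.
config.vm.define "guest1" do |node|
  node.vm.network "private_network", virtualbox__intnet: "net1", ip: "172.16.1.11"
  node.vm.network "private_network", virtualbox__intnet: "net2", ip: "172.16.2.11"
end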

I am sure there is a better way to do it, but, doing things the hard way, I ended up using udev in Stretch to rename the interfaces into the opposite order. These changes are put in the 'box' file so they are automatically available each time. This requires two changes in /etc/udev/rules.d:
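As a generic illustration (not the actual rules used; the MAC addresses are hypothetical), a rename rule keyed on the adapter's MAC address can look like this:

# /etc/udev/rules.d/70-persistent-net.rules: pin each NIC to the wanted name,
# matched by its (hypothetical) MAC address, reversing the default ordering.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:aa:bb:01", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:aa:bb:03", NAME="eth2"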

Another diagnostic command shows some additional information, but I couldn't get the 'DEVPATH' to match properly. I don't know how to properly debug these rules to find what matches and what doesn't.
