Ansible and Junos Notes

18th October 2016

I’m working on a project to push out configs to Juniper devices and upgrade them if necessary. Ultimately it will be vendor-independent. In the first instance I thought about writing it all in Python, but there’s really no need because quite a lot of legwork has already been done for you in the form of ‘PyEz’ and the Junos Ansible core modules.

Juniper give you a few examples to get you started, but don’t really explain what each line in the YAML file does – I guess they expect you to figure that out. Below are a few notes on things I discovered – perhaps obvious to some, but they might help someone else.

Ansible’s agentless nature

Ansible has the upper hand over Chef and Puppet in that it is agentless. All you need is SSH. Or so they tell you.

Ansible actually needs the system it is making changes on to be running Python. So really, *that’s* the agent that Ansible talks to. Since most network devices don’t run Python (it is roadmapped for Junos soon), you’ve got a problem – you can’t use most of Ansible’s command modules to execute remote commands.

With Junos you have two choices:

Use the ‘raw’ module to send SSH commands to Junos: This opens up a channel, sends the command and closes it again. No error-checking, no frills.

Use the Juniper Ansible modules: In your playbook you need ‘connection: local’ so that the Python part runs on the controlling node. The module then uses Netconf to connect to the Junos device and issue commands.

There is no other way – since there’s no Python on Junos and no way to get into a shell over SSH, these are your only options.
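To illustrate the first option, a minimal playbook using the raw module might look something like this – a sketch only; the host group name and command are my own, not from Juniper’s examples:

```yaml
---
- name: Run a CLI command over plain SSH
  hosts: Devices
  gather_facts: no        # no Python on the device, so fact gathering would fail
  tasks:
    - name: Send 'show version' with the raw module
      raw: show version
      register: output

    - name: Display what came back (runs on the controlling node)
      debug:
        var: output.stdout_lines
```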

‘No module named jnpr.junos’ When Running Ansible

In the examples Juniper give, they don’t tell you that the Ansible module ‘Juniper.junos’ relies on a Python module called ‘jnpr.junos’. (It is mentioned elsewhere if you look for it.)

So if you’ve done an ‘ansible-galaxy install Juniper.junos’ you could be forgiven for thinking that you’ve downloaded the modules you need. You then gaily go on to have a crack at the example given above, but get this error:

To resolve this, you need to download the PyEz module for Python. On my Mac, I did this using ‘sudo pip install junos-eznc’
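For reference, the install plus a quick sanity check might look like this (the import line assumes PyEz’s Python package name, jnpr.junos):

```shell
# Install PyEz (junos-eznc) into the controlling node's Python
sudo pip install junos-eznc

# Verify the module Ansible was complaining about is now importable
python -c "from jnpr.junos import Device; print('jnpr.junos OK')"
```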

Authenticating

I thought I’d be able to issue a command like ‘ansible-playbook junipertest.yml -u UserName -k’. Ansible would then use the specified username and ask for the password interactively before it began – which it does. However I was persistently getting the following authentication error:

This was a bit confusing for a while. It seems you can’t use command-line switches to pass a username and password to a playbook. Instead, the Juniper.junos module wants you to make a variables section in the YAML file, and then pass the variable contents to the tasks that are specified later in the file. The result is that you are still interactively asked for a username and password, and can successfully authenticate.
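A sketch of what that variables section can look like, using vars_prompt (the variable names here are my own choice):

```yaml
vars_prompt:
  - name: USERNAME
    prompt: Junos username
    private: no           # echo the username as it is typed
  - name: DEVICE_PASSWORD
    prompt: Junos password
    private: yes          # hide the password
```

Tasks later in the playbook can then reference these as {{ USERNAME }} and {{ DEVICE_PASSWORD }}.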

My YAML file

My YAML file looks like this – it successfully retrieves the version running on the device. I have commented down the right-hand side of this to add explanations, so you will need to remove these:
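The original file isn’t reproduced here, but a playbook along these lines – a sketch, with module parameters as per the Juniper.junos galaxy role of that era – retrieves the version:

```yaml
---
- name: Retrieve facts from a Junos device
  hosts: Devices            # group from my hosts file
  roles:
    - Juniper.junos         # galaxy role providing the junos_* modules
  connection: local         # the Python/Netconf work happens on the controlling node
  gather_facts: no
  vars_prompt:
    - name: USERNAME
      prompt: Junos username
      private: no
    - name: DEVICE_PASSWORD
      prompt: Junos password
      private: yes
  tasks:
    - name: Gather device facts over Netconf
      junos_get_facts:
        host: "{{ inventory_hostname }}"
        user: "{{ USERNAME }}"
        passwd: "{{ DEVICE_PASSWORD }}"
      register: junos

    - name: Print version
      debug:
        msg: "{{ junos.facts.version }}"
```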

As you can see from the previous YAML file, you can retrieve junos.facts.version in order to get just that one ‘sub-fact’. Replace ‘version’ with any of the above facts and see what you get – e.g. junos.facts.hostname
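For example, assuming the facts were registered into a variable called ‘junos’, a hypothetical task for the hostname sub-fact:

```yaml
- name: Print hostname
  debug:
    msg: "{{ junos.facts.hostname }}"
```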

Ansible files

I wanted my work to be portable to a colleague’s computer with minimal fuss. Ansible looks for its config file in /etc/ansible/ansible.cfg, I believe, but it checks for an ansible.cfg in the current directory first. That is nice – it means you can override the system’s settings on a per-playbook basis.
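A minimal per-directory ansible.cfg might contain something like this (the settings are illustrative, not from my original file):

```ini
[defaults]
# use the hosts file that sits alongside the playbook
inventory = ./hosts
# handy for lab devices with fresh SSH host keys
host_key_checking = False
```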

My hosts file is pretty basic – it only contains a single IP address at the moment. According to docs.ansible.com you should be able to alias a host using this:

jumper ansible_port=5555 ansible_host=192.0.2.50

‘jumper’ in this example is a host alias – apparently it doesn’t even have to be a real hostname. However I found that this did not work for me – it tried to use the alias to connect to the host, rather than the IP address specified by ansible_host. The error was:

I edited the /etc/hosts file on my machine to include the same alias, and that now works fine. Not sure this is intended behaviour – why specify an ansible_host value in the ansible hosts file if your /etc/hosts file contains the IP address as well?

Update on 27th Oct: I’ve discovered the above does actually work – not entirely sure what I was doing wrong last time, but with this in the inventory it works fine:

[Devices]
line2001 ansible_host=192.168.30.20

Authentication

SSH Public Key Authentication

There are a few ways that authentication can be achieved. The preferred way is to use an SSH public key, so no password is required. This means generating a public/private key pair for the user and then transferring the public key to the Junos host’s config. Since my script is for pre-staging lots of devices, I felt that was overkill – if I were going to use Ansible for daily management of these devices, then it would be worthwhile, but that won’t be the case here.

Interactive Authentication

An alternative is the method described above, where the credentials are requested interactively. The ‘vars_prompt:’ section of the YAML file is what makes this happen.

The Juniper.junos module pays no attention to the ‘-u <USERNAME>’ command-line argument, nor does it observe the -k argument, which prompts for a password. I’m not sure why that is, but it is documented here. If you put those command-line switches in, they are accepted but ignored – instead, the $USER environment variable from your computer is sent, resulting (probably) in an auth failure.

‘Insecure’ Authentication

Instead of using ‘vars_prompt:’, you can write the username and password into a ‘vars:’ section of the YAML file. Obviously this isn’t secure, but since my script is for lab purposes, security of this information isn’t a concern. Just replace the ‘vars_prompt:’ section shown above with something like this:

vars:
  USERNAME: someusername
  DEVICE_PASSWORD: yourpassword

An alternative to doing this is to put the usernames/passwords in the hosts file, though again this is not recommended:
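Hypothetically, that could look something like this – the variable names are my own, and they become per-host variables that a playbook can reference via hostvars:

```ini
[Devices]
line2001 ansible_host=192.168.30.20 USERNAME=someusername DEVICE_PASSWORD=yourpassword
```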

Vault Authentication

The proper way to store usernames and passwords is in the Ansible vault. This is a file that is automatically encrypted with AES256 encryption. You can pull in just the password, or a variety of variables as per the example on this Juniper page. Quite cool, but too complex for my basic lab setup.
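The workflow, roughly (the secrets file name is my own):

```shell
# Create an encrypted variables file (prompts for a vault password)
ansible-vault create secrets.yml

# Edit it again later
ansible-vault edit secrets.yml

# Run the playbook, supplying the vault password interactively
ansible-playbook junipertest.yml --ask-vault-pass
```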

Formatting The Output

Instead of calling each fact one after the other, it is possible to do this and create a comma-separated list of facts:
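A ‘Print model’ task along these lines would do it – again assuming the facts were registered as ‘junos’, with fact names as exposed by PyEz:

```yaml
- name: Print model
  debug:
    msg: "{{ junos.facts.serialnumber }},{{ junos.facts.model }},{{ junos.facts.version }}"
```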

That’s all very well, but maybe you want to write this to a text file in that format? Simply create another task that uses the copy module to write the output to a file. Here’s the ‘Print model’ task again, followed by a new task called ‘Write details’:
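Something like this, hypothetically – the copy module’s content/dest parameters write the string straight to a per-host file:

```yaml
- name: Print model
  debug:
    msg: "{{ junos.facts.serialnumber }},{{ junos.facts.model }},{{ junos.facts.version }}"

- name: Write details
  copy:
    content: "{{ junos.facts.serialnumber }},{{ junos.facts.model }},{{ junos.facts.version }}\n"
    dest: "{{ inventory_hostname }}.txt"
```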

This results in a file in the current directory in the format <hostname>.txt:

$ more 192.168.30.12.txt
CW0212286591,EX2200-24T-4G,12.2R9.3
$

Issues

When running this playbook, the Juniper.junos module is supposed to write output to a file in the location specified by ‘savedir=’ in the YAML file. It does do this, but fails to prepend the hostname to the filename, so you get a file called ‘-facts.json’. This is a problem because the filename begins with a ‘-‘ and is therefore interpreted as a command-line switch by vi, cat and more.

Opening this file in the GUI reveals it to be a JSON formatted file containing all of the ‘facts’, just missing the hostname:

It occurred to me that it isn’t the hostname on my Ansible control machine that should be in the filename – it should be the hostname of the device I am configuring. As you can see above, the hostname is an empty string. That’s because my device is new – all it has on it is an IP address and username/password.

I used the ‘set system host-name <name>’ command to give the device a name. Re-ran the playbook and this time got the hostname on the file as expected: