RFC 4255 specifies a standard for using
the DNS to securely publish Secure Shell (SSH) key fingerprints. We've
discussed that here before and there is a gotcha you should be aware
of before deploying the records.

The sshfp utility generates SSHFP records from known_hosts files or from the output of ssh-keyscan.
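The RDATA of an SSHFP record is just an algorithm number, a fingerprint type, and the SHA-1 digest of the raw (base64-decoded) key blob, so producing one by hand is easy. A minimal sketch in Python; the host name is invented and the key blob is fabricated purely for illustration:

```python
import base64, hashlib

# Algorithm numbers per RFC 4255: 1 = RSA, 2 = DSA; fingerprint type 1 = SHA-1.
ALGOS = {"ssh-rsa": 1, "ssh-dss": 2}

def sshfp_record(hostname, keytype, b64key):
    """Return an SSHFP RR for one line of ssh-keyscan / known_hosts output."""
    fp = hashlib.sha1(base64.b64decode(b64key)).hexdigest()
    return f"{hostname} IN SSHFP {ALGOS[keytype]} 1 {fp}"

# A made-up key blob, not a real host key:
blob = base64.b64encode(b"\x00\x00\x00\x07ssh-rsa" + b"\x01" * 32).decode()
print(sshfp_record("mail.example.net.", "ssh-rsa", blob))
```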

This is all fine and dandy, but how can we collect SSHFP records for a mass of
nodes (i.e. hosts) in our network and publish those in the DNS? Depending on the brand of DNS server
in use, this could mean populating a zone file, using a Dynamic DNS Update,
using an SQL dialect to manipulate an RDBMS back-end (e.g. for PowerDNS), etc. In the
following unordered list of ideas, I'll refer to that step as populating the DNS.

In a recent chat on IRC, Pieter Lexis mentioned he wanted to work on a Puppet
module to collect SSH keys for SSHFP records. I hadn't thought of that, but I like the idea.
Using Puppet and storeconfigs, I can have the nodes transfer their public
keys to the puppet master, create the DNS records there, and then populate the DNS.
This is possibly the most elegant of solutions, provided you deploy nodes with
Puppet, as it caters for nodes being unavailable at a particular point in time.
Furthermore, we can ensure fingerprints are submitted only when host keys change
(i.e. when they are rolled by the SSH service).

Update: Pieter also points out that Puppet's facter already provides both the host's
RSA and DSA keys (as the sshrsakey and sshdsakey facts), which makes the task even easier.

Puppet stores all nodes' facts in YAML files on the Puppet master.
I've written
facts2sshfp, which slurps through those, chops off whatever isn't needed,
and prints out SSHFP records for all nodes. The program optionally produces YAML
or JSON output, and you can give it a Python template in a file to print
the key fingerprints however you want. (An example for creating SQL INSERTs for
PowerDNS is included.) Documentation is contained in the README.
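To give a flavour of the template idea, here is a hedged sketch of what such a PowerDNS template might look like. The column names follow the PowerDNS generic SQL schema; the domain_id, host name, and fingerprint are invented:

```python
# Hypothetical template in the spirit of facts2sshfp's template feature:
# a Python format string expanded once per collected fingerprint.
template = ("INSERT INTO records (domain_id, name, type, content, ttl) "
            "VALUES ({domainid}, '{fqdn}', 'SSHFP', '{algo} {fptype} {fp}', 3600);")

record = dict(domainid=3, fqdn="mail.example.net", algo=1, fptype=1,
              fp="dd465c09cfa51fb45020cc83316fff21b9ec74ac")
print(template.format(**record))
```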

If the network is tightly controlled and I don't mind distributing credentials
to all my hosts, I can have the node update the DNS database directly, either
by giving it access to the back-end RDBMS (think passwords) or by allowing it to perform a dynamic
update (think TSIG or SIG(0) keys).
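For the dynamic update route, a node (or the master) could emit a batch of commands for nsupdate. A sketch, with an invented server address, key file name, and fingerprint:

```python
# Build an nsupdate batch that replaces a node's SSHFP RRset.
# Server address, record data, and the key file below are made up.
records = [
    ("mail.example.net.", 1, 1, "dd465c09cfa51fb45020cc83316fff21b9ec74ac"),
]
lines = ["server 192.0.2.53"]
for name, algo, fptype, fp in records:
    lines.append(f"update delete {name} SSHFP")         # drop stale fingerprints
    lines.append(f"update add {name} 3600 SSHFP {algo} {fptype} {fp}")
lines.append("send")
print("\n".join(lines))
# Feed the output to: nsupdate -k Ksshfp-update.+157+12345.private
```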

I could create a special HTTP REST service to which nodes can POST or PUT keys. This
can be protected with SSL/TLS certificates I issue to each node. (Similar in concept
to the previous method.)
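A sketch of what such a service could look like, using only the Python standard library. Plain HTTP here for brevity; in practice this would sit behind TLS with per-node client certificates, and the host name and fingerprint are invented:

```python
import json, threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

STORE = {}  # hostname -> list of SSHFP rdata strings, awaiting DNS population

class SSHFPHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        host = self.path.strip("/")                    # e.g. PUT /mail.example.net
        length = int(self.headers["Content-Length"])
        STORE[host] = json.loads(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()
    def log_message(self, *args):                      # silence request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), SSHFPHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# What a node would do: PUT its fingerprint list to the service.
body = json.dumps(["1 1 dd465c09cfa51fb45020cc83316fff21b9ec74ac"]).encode()
urlopen(Request(f"http://127.0.0.1:{port}/mail.example.net",
                data=body, method="PUT"))
print(STORE["mail.example.net"][0])
server.shutdown()
```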

sshfp can be instructed to use ssh-keyscan to probe public keys of specified
hosts or domains, so I could do this centrally. For example, sshfp -a -n 127.0.0.2 -s ww.mens.de
performs a zone transfer for the specified name server and zone, and it then scans the SSH public
keys from the individual hosts. This will obviously fail to obtain fingerprints
for nodes which are currently unreachable, so I'll have to schedule it to run periodically,
check for unreachable nodes, handle failures, etc.