There are a lot of different ways this can be deployed - it would be best if you explain why you've chosen this particular architecture, what you're looking to accomplish with it, and what you're envisioning the workflow as being for a user updating a zone.
–
Shane Madden ♦ Jun 12 '12 at 16:32

@ShaneMadden: Our DNS setup requires many users to make modifications to BIND directly. The need is to track the zone/named.conf changes [with SVN] within a master/slave design.
–
deppfx Jun 13 '12 at 5:13


Can you clarify why it needs to be master/slave, though? If everything's run from SVN, there's no need for the BIND nodes to communicate directly with each other in a master/slave relationship at all; effectively, the "master" is the SVN repository. Multiple master servers with the identical zone data from SVN (updated on a cron or some such) will work just fine.
–
Shane Madden ♦ Jun 13 '12 at 5:26

@ShaneMadden: You are right. It is not necessary for them to be master/slave. But yes, they need to have the same data as a precautionary measure. Can you explain more about how we sync both masters? svnsync?
–
deppfx Jun 13 '12 at 12:18

Have you investigated dynamic updates? You can have a key pair per user and allow them to dynamically update the DNS. If you want multiple servers serving the same zone, you can have one of them (generally called the stealth primary) dynamically update the others. Unless you want to track the precise changes each user made, managing with SVN is overkill and you'll be re-inventing the wheel. You can freeze the zone on a regular basis, copy the zone data, and commit it into source control. Write a script for that or work something out with RANCID.
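For illustration, the dynamic-update and freeze/copy workflow described above might look like this (the key file, server address, zone names, and paths are all hypothetical):

```shell
# Add a record using a per-user TSIG key (key file path is hypothetical)
nsupdate -k /etc/bind/keys/alice.key <<'EOF'
server 192.0.2.10
zone example.com.
update add www.example.com. 3600 IN A 192.0.2.20
send
EOF

# Periodic snapshot into source control: freeze the zone so the file on
# disk is consistent, copy it into an SVN working copy, thaw, then commit
rndc freeze example.com
cp /var/cache/bind/db.example.com /srv/svn-wc/zones/
rndc thaw example.com
svn commit -m "Zone snapshot" /srv/svn-wc/zones/db.example.com
```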
–
nearora Jun 13 '12 at 23:46

3 Answers

If you're not attached to the master/slave architecture, then you can use SVN to keep them in sync without building a master/slave relationship in BIND; the BIND servers will be serving the same data, behaving the same as they would in a master/slave configuration, without talking to each other at all or needing different configurations on the two servers.

Your SVN repository should be independent from the BIND servers - it can reside on one of them if it needs to, but you'll probably want to avoid that if you can, and give it its own server. The SVN server will hold the master copy of the data, and the BIND servers will retrieve that data and serve it. From a logical perspective, the SVN server is the master and the BIND servers are slaves - but from a DNS perspective, we'll be making both BIND servers think they're masters, with a full copy of the zones.

svnsync is a tool to synchronize an SVN repository to a second copy of the same repository; depending on your architecture, you may want to use svnsync to send a copy of your repository to a backup site, for instance. You can never commit against the repository that's having data synced to it, so it's essentially read-only, but still covers you in the event of a loss of your primary repository.

So, to get this working, there's a few things to set up:

Set up a centrally accessible SVN server. There are several options for how to configure this; I've written a summary of the options (and some of the pros and cons of each) here. If desired, set up svnsync to a DR server, or use svnadmin dump --incremental to produce a flat-file dump of the repository to pull into backups.

Build the zone files up in the SVN server, and check out a copy of the zone files into the directory where you want them stored on the BIND servers. You'll want to make sure that the BIND server can pull the data without human interaction; the password or SSH key that the BIND server uses to authenticate to SVN should be saved so that updates can run unattended.
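As a sketch, the initial checkout on each BIND server might look like this (the repository URL, username, and paths are hypothetical):

```shell
# One-time setup on each BIND server; cache credentials for unattended runs
svn checkout --username bind-ro \
    https://svn.example.com/repos/dns/zones /etc/bind/zones
chown -R bind:bind /etc/bind/zones
```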

Configure BIND to read those zone files that were checked out from SVN. Set each zone block up with type master; - they'll each serve the answers for queries to the zone as authoritative (as would happen in a master/slave configuration, as well).
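For example, each zone block on both servers might look like this (zone name and file path are hypothetical):

```
// named.conf -- identical on both BIND servers
zone "example.com" {
    type master;
    file "/etc/bind/zones/db.example.com";
};
```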

(One server is still the "primary master" from a DNS perspective, due to its presence in the zone's SOA record, but that mostly just matters for dynamic updates - name resolution will work through either node)

Set something up to keep the zone files up to date. What this'll probably look like is a cron job that runs svn update against the SVN working directory that you checked out earlier, then sends a reload to your BIND process through its init script to tell it that the zones have changed.
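A minimal sketch of that cron-driven update, assuming the working copy lives in /etc/bind/zones and rndc is configured (script path and schedule are hypothetical):

```shell
#!/bin/sh
# /usr/local/bin/dns-svn-update -- run from cron, e.g.:
#   */5 * * * * root /usr/local/bin/dns-svn-update
cd /etc/bind/zones || exit 1
before=$(svnversion)
svn update --non-interactive --quiet
after=$(svnversion)
# Only poke BIND when the working copy actually changed
if [ "$before" != "$after" ]; then
    rndc reload
fi
```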

Work with your zones through your SVN client against the repository! After you commit, the servers will grab the changes, reload their zones, and serve the current data. Make sure you update the zone serial number on each change!
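Since forgetting the serial bump is the easiest mistake to make in this workflow, here's a small sketch of automating it. It assumes the serial sits on its own line tagged with a "; serial" comment, a common zone-file convention:

```shell
# Print a zone file with its serial incremented by one
bump_serial() {
    awk '/; serial/ { sub(/[0-9]+/, $1 + 1) } { print }' "$1"
}

# Demonstration against a throwaway zone snippet
cat > /tmp/db.example <<'EOF'
@ IN SOA ns1.example.com. admin.example.com. (
        2012061301 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; minimum
EOF
bump_serial /tmp/db.example
```

This could run from a pre-commit hook or a wrapper script so the bump never gets skipped.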

By the way, I definitely also endorse using a configuration management tool to distribute the zone files, as 200_success has suggested. If you do that, you'll just be removing the BIND servers' direct connections to the SVN server, and the cron job supporting updates; those aspects will be handled by the configuration management tool. The rest of these notes still apply; the workflow, the configuration of the BIND servers, and their behavior once configured remain the same.

BIND9 slaves only store their zones in memory. If the master is down, and the slave is restarted, then the slave will have no zone data to serve. (It sounds farfetched, but such situations can occur under extreme circumstances. It happened to me last weekend.) Therefore, pushing the zones to all BIND servers has the advantage of robustness.
–
200_success Jun 13 '12 at 8:16


@200_success, where did you get that nonsensical idea? The slaves, at least those under my care, most definitely keep their data in zone files, just like the master. On restart they reload that data from file prior to asking the master for any updates.
–
John Gardeniers Jun 13 '12 at 9:30

@JohnGardeniers - same here, the slave file is just stored in a more temporary location, so if bind9 is restarted and the master isn't available, it uses that. Also - keep in mind that your slave will only update on the refresh schedule defined in your master file.
–
MaddHacker Jun 13 '12 at 16:32

If you are using CFEngine or Puppet to manage your machines, use that to distribute your zone and configuration files and to instruct the processes to reload the new data. Your post-commit script can write the files to the CFEngine/Puppet master server and trigger a run.
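As a rough sketch, the Puppet side of that might look like this (the module name, paths, and resource names are hypothetical):

```puppet
# Distribute the zone directory and reload BIND whenever a file changes
file { '/etc/bind/zones':
  ensure  => directory,
  recurse => true,
  source  => 'puppet:///modules/dns/zones',
  owner   => 'bind',
  group   => 'bind',
  notify  => Exec['reload-bind'],
}

exec { 'reload-bind':
  command     => '/usr/sbin/rndc reload',
  refreshonly => true,
}
```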

If you aren't using CFEngine or Puppet, you should seriously consider doing so. Since you are managing multiple machines, it's really a smart thing to do.

By the way, I also suggest running named-checkzone from your pre-commit hook to prevent mangled zone files from being committed.
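A pre-commit hook along those lines might be sketched as follows (the repository layout, with zone files named zones/db.&lt;zonename&gt;, is an assumption):

```shell
#!/bin/sh
# SVN pre-commit hook: validate changed zone files with named-checkzone
REPOS="$1"
TXN="$2"

svnlook changed -t "$TXN" "$REPOS" | grep -v '^D' | awk '{print $2}' |
grep '^zones/db\.' | while read -r path; do
    zone=${path#zones/db.}
    tmp=$(mktemp)
    svnlook cat -t "$TXN" "$REPOS" "$path" > "$tmp"
    if ! named-checkzone -q "$zone" "$tmp"; then
        echo "named-checkzone failed for $path" >&2
        rm -f "$tmp"
        exit 1    # non-zero exit rejects the commit
    fi
    rm -f "$tmp"
done
```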