Authoritative DNS with redundancy, using nsd and Debian Wheezy

Following up on yesterday’s post about what motivated me to host my own DNS, I’ll do my best herein to detail how I pulled this off. This is written for Debian Wheezy because I haven’t finalized an upgrade plan for Jessie yet; with Wheezy LTS extending support to 2018, I hope some find this useful.

As noted in my previous post, I started with a now-deprecated tutorial from Linode. It, along with other sources I referenced, is linked in the Resources section at the end. I opted for nsd because it seemed the most approachable option for a novice (as approachable as DNS can be), is well regarded, and has a number of existing tutorials focused on it.

Getting started

The ensuing discussion is quite lengthy, and hosting DNS requires several up-front decisions. Best to cover them now, before too much time is wasted.

Choosing hostnames

I opted to use ns1, ns2, and ns3.ethitter.com as the hostnames for my nsd configuration, but there's no specification of, or restriction on, what subdomains are used; they just have to be valid subdomains, and they don't even have to share a domain name. I could have just as easily given each machine a name, like CloudFlare does: bart.ns.cloudflare.com, matt.ns.cloudflare.com, and rita.ns.cloudflare.com. Amazon Route 53 uses a series of similar domains across different TLDs: ns-1.awsdns-01.org, ns-2.awsdns-02.com, ns-3.awsdns-03.co.uk, and ns-4.awsdns-04.net.

As far as the hostnames themselves are concerned, most importantly, choose subdomains you're comfortable using for a long time, as changing the hostnames later is cumbersome and time-consuming.

Planning against downtime

As I’ve hinted at (and plan to write about further), when this idea struck, I was fortunate enough to already have three servers at my disposal. Given my self-documented paranoia, I’d chosen to locate those servers with different providers and on different continents, allowing my DNS to inherit the redundancy I’d established for my mailservers. That degree of separation isn’t necessary for everyone, but at the least, I wouldn’t consider hosting my own DNS if I didn’t have a minimum of two servers, each in its own datacenter (distinct providers are an added bonus).

Redundant servers alone aren’t enough, though. Equally important is a proper backup plan for each of those machines; nsd stores DNS zones in files, which is something to bear in mind when planning your backups approach.

Glue records

Glue records are an interesting detail that solve a funny, but somewhat basic, problem with hosting one’s own DNS: if the domain name for the nameserver is hosted by said nameserver, there has to be a way to know the address of the nameserver without asking it. It’s a bit circular, yes.

My primary nameserver is at ns1.ethitter.com. It knows its IP address, and the addresses for ethitter.com and so on, but without being able to connect to it via the IP address for ns1.ethitter.com, there’s no way to ask it what its IP address is. In other words, we can’t very well ask it where to connect to it if we don’t know where to connect to it to ask it in the first place.

Glue records address this detail. These records are set with the registrar for your domain and point to the IP addresses for your various nameservers. In my case, I use ns1.ethitter.com, ns2.ethitter.com, and ns3.ethitter.com, so I’ve set glue records for those three subdomains, which explicitly state the IPv4 and IPv6 addresses where each server can be reached, with Namecheap.

Pointer records

In my experience, nameservers are not dependent on specific PTR, or reverse DNS, records in the way mailservers are. Since each of my DNS daemons runs on the same host as a mailserver, my PTR records all return mail-specific subdomains, and that’s raised no issues. That said, if your PTR records aren’t needed for some other purpose, update them to reflect the nameserver hostnames you choose.

Creating the primary NS

If you’ve gotten this far, you’ve chosen hostnames and set the appropriate glue and pointer records for those domains. On to the fun!

Throughout this tutorial, I use ethitter.com as the domain to host DNS for, and ns1.ethitter.com, ns2.ethitter.com, and ns3.ethitter.com as my nameserver hosts. A bit of find-and-replace with those keywords should easily allow you to adapt this for the domains you’ve chosen.

As it happens, those are the real domains I use, and the examples that follow are taken nearly verbatim from the configurations that allowed anyone or anything to access this post. The secret was changed. 😉

Keys for slave servers

Once the primary and backup DNS are configured, zone updates are transferred via AXFR requests between servers. To ensure that only my servers can request and receive zones, a TSIG shared key is used to authenticate the transfer.

Getting dnssec-keygen

The dnssec-keygen tool is used to generate the keys we need; on Debian, it’s provided by the bind9utils package. Check if dnssec-keygen is installed with:

which dnssec-keygen

If nothing is returned, install the package:

apt-get install bind9utils

Generating keys

dnssec-keygen generates two files, but we only need to copy a value from one of them, and then both files can be discarded.
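The exact invocation isn’t preserved here, but a command consistent with the sample output below would be the following (the key name nsd3slave matches that output; in practice, generate one key per slave, named for that slave):

```
dnssec-keygen -a HMAC-SHA256 -b 256 -n HOST nsd3slave
```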

The above generates an HMAC-SHA256 key, an upgrade from the HMAC-SHA1 keys that many tutorials promote. SHA-1 is considered vulnerable to attack, hence the stronger algorithm (relatedly, major browsers are deprecating SHA-1 for SSL certificates).

The command generates an output similar to:

nsd3slave. IN KEY 512 3 163 GdcDfTYujAcfhlmGWE4KvyKwAIA=

From this, GdcDfTYujAcfhlmGWE4KvyKwAIA= is the piece we care about. Save the corresponding value from your output (don’t use GdcDfTYujAcfhlmGWE4KvyKwAIA=, it’s just an example) as we’ll use that in the next section.
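On the primary server, each secret is embedded in a key: declaration in nsd.conf. A sketch, assuming the slaves’ keys are named ns2 and ns3; the secrets shown are the example value from above, and each slave should get its own distinct secret:

```
key:
    name: "ns2"
    algorithm: hmac-sha256
    secret: "GdcDfTYujAcfhlmGWE4KvyKwAIA="

key:
    name: "ns3"
    algorithm: hmac-sha256
    secret: "GdcDfTYujAcfhlmGWE4KvyKwAIA="
```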

We’ll create the actual zone file next, but this zone declaration tells nsd which domain name to associate with each zone file, as well as listing the slave DNS servers that nsd should share that zone file with. Note the ns2 and ns3 after each IP address, which tell nsd to use the key with that name when communicating with that particular slave server.

Replace the name and zonefile values with those that correspond to the domain you’re adding. Define as many of the notify/provide-xfr pairs as you have slave servers and keys, replacing the IPs with yours and the key names with those that you defined in the preceding section.
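A sketch of such a zone declaration in /etc/nsd3/zones.conf on the primary server; the IPs shown (203.0.113.2 and 203.0.113.3) are documentation placeholders standing in for the slaves’ real addresses:

```
zone:
    name: "ethitter.com"
    zonefile: "/etc/nsd3/zones/ethitter.com.zone"

    notify: 203.0.113.2 ns2
    provide-xfr: 203.0.113.2 ns2

    notify: 203.0.113.3 ns3
    provide-xfr: 203.0.113.3 ns3
```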

Contents of a zone

Below is a very basic example of a zone file for ethitter.com. Notably, nsd uses the format standardized by BIND.

In reality, the ethitter.com zone is nearly 200 lines and includes almost every conceivable record type. That random fact aside, the following would be added to the /etc/nsd3/zones/ethitter.com.zone file referenced in the previous section.
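A minimal sketch of such a zone file; the IPs are placeholders, and mail.ethitter.com is a hypothetical mail host, but the structure mirrors what nsd (and BIND) expect:

```
$ORIGIN ethitter.com.
$TTL 300

@    IN  SOA  ns1.ethitter.com. hostmaster.ethitter.com. (
             2016012401  ; serial
             28800       ; refresh
             7200        ; retry
             604800      ; expire
             300         ; minimum
             )

     IN  NS   ns1.ethitter.com.
     IN  NS   ns2.ethitter.com.
     IN  NS   ns3.ethitter.com.

     IN  MX   10 mail.ethitter.com.

@    IN  A    203.0.113.10
ns1  IN  A    203.0.113.1
ns2  IN  A    203.0.113.2
ns3  IN  A    203.0.113.3
*    IN  A    203.0.113.10
```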

I’ll leave a detailed description of the above for a future post, but in short, this tells nsd where the mail for ethitter.com should be directed, sets the IPs for my nameservers (which reflect the glue records from earlier), and points both ethitter.com and the balance of its subdomains to my primary server at Linode. I’ve chosen a five-minute duration for most records (300 seconds) because I’m impatient, but values should be higher if they change infrequently. Longer lives protect against errors and loss of control of DNS.

Depending on where your DNS is hosted currently, your provider may offer a way to export a zone file via an interface or through an AXFR request. In other cases, manual copy and paste is the only option. Regardless, it’s important that the zone file you create includes all of the records that your circumstances require, because only wildcard (*) records will handle any gaps or omissions, and only if you’ve added them.

Testing the primary NS

Now that the required configurations have been made, it’s time to test our progress thus far. First, nsd needs to be restarted to recognize the myriad configuration changes we’ve made; it started automatically after apt-get install completed.

service nsd3 restart

If successful, you should receive a message like “Restarting nsd3…” before returning to the command prompt. Otherwise, nsd should provide some indication of what the issue was; if not, check the log at /var/log/nsd.log (or wherever you changed the path to in nsd.conf).

With the nsd daemon running, it’s time to build the database it uses to serve requests. nsd provides a command to generate that database from the human-readable flat files that hold the zone records:

nsdc rebuild

Calling the rebuild command, however, isn’t enough to get nsd to serve the latest zone files. After rebuilding the database, the daemon also needs to be reloaded:

nsdc rebuild
nsdc reload

At this point, nsd is running and ready to serve requests. To confirm, the dig command is used to make DNS requests to the nsd daemon we just configured:

dig @localhost ethitter.com soa

The above command asks the nsd daemon running on the server for the “Start of Authority” (SOA) record for ethitter.com. This is but one simple way to verify that nsd is running and responding to requests for the domains we’ve configured.

The result can be a bit overwhelming, but the important part is the ;; ANSWER SECTION midway through the output.
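An abbreviated illustration of what dig returns; the values here mirror the example zone, and your output will differ:

```
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345

;; QUESTION SECTION:
;ethitter.com.            IN    SOA

;; ANSWER SECTION:
ethitter.com.    300    IN    SOA    ns1.ethitter.com. hostmaster.ethitter.com. 2016012401 28800 7200 604800 300
```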

When successful, the ;; ANSWER SECTION should provide the same SOA details that we set in zones/ethitter.com.zone. If the results don’t match, restart nsd, rebuild the database and reload the daemon, then try the dig command again. If nsd still doesn’t provide the expected results, or refuses to answer queries for the domain specified, please revisit the zone declarations described above.

With the primary DNS configured and returning expected results, we’re now ready to configure some redundancy.

Adding slave(s)

Many of the steps to set up an nsd slave daemon are the same as were needed to configure the primary server, with really only two notable exceptions: zone files aren’t recreated on the slaves, nor are the keys used to secure zone transfers. The process to add a slave, described below, can be repeated for as many servers as you plan to run.

To begin, repeat the nsd installation steps used for the primary server, changing the identity string in nsd.conf to whatever you chose for the slave you’re configuring. This will provide the basic nsd configuration upon which each slave is built.

Keys

With the sample key and zone declarations removed, we first add our key to the slave’s nsd.conf:
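For the slave ns2, for example, that key block might look like the following; the name and secret must match the primary server’s declaration exactly (the secret is the sample value from earlier):

```
key:
    name: "ns2"
    algorithm: hmac-sha256
    secret: "GdcDfTYujAcfhlmGWE4KvyKwAIA="
```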

Notice that this block is identical to the corresponding key declaration on the primary server. That is intentional as the shared secret is used to communicate between the servers.

Regardless of how many slave-server keys are defined in the primary DNS, the only key that should be added to a slave is the one for the slave currently being configured. One slave shouldn’t include the key for another–only the key it needs to communicate with the primary server–because the slave servers aren’t configured to communicate amongst themselves, nor do they have reason to.

Zones

Again, for organization’s sake, I list the zone declarations in a file separate from the main configuration, nsd.conf. To prepare, we create the zone configuration file and the directory to hold zone files themselves:

mkdir /etc/nsd3/zones
touch /etc/nsd3/zones.conf

Next, the following line is added to the very end of nsd.conf (after the key block) so that the zone configuration file is actually used:
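Assuming the paths above, that line is:

```
include: "/etc/nsd3/zones.conf"
```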

This declaration is quite similar to that found on the primary server, with two key differences. First, notify is replaced with allow-notify, which restricts which servers the slave will accept NOTIFY messages (and thus zone updates) from. Second, provide-xfr becomes request-xfr, so the slave requests transfers from the primary rather than providing them.
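A sketch of the slave’s zone declaration in /etc/nsd3/zones.conf, assuming the primary server is at the placeholder address 203.0.113.1 and this slave’s key is named ns2:

```
zone:
    name: "ethitter.com"
    zonefile: "/etc/nsd3/zones/ethitter.com.zone"

    allow-notify: 203.0.113.1 ns2
    request-xfr: 203.0.113.1 ns2
```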

Testing

With that fairly small bit of setup, the slave server is ready to test. Continuing the pattern, this will be very similar to the testing steps for the primary server, but with some important deviations.

First, restart the daemon so it uses the latest configuration:

service nsd3 restart

Next, build the zone database for the first time, and reload the daemon so it can use it:

nsdc rebuild
nsdc reload

After rebuilding and reloading, nsd should report the following error:

warning: slave zone ethitter.com with no zonefile '/etc/nsd3/zones/ethitter.com.zone'(No such file or directory) will force zone transfer.

This is normal, and simply indicates that since we haven’t copied the ethitter.com zone file from the primary server to the slave, the slave will have to ask the primary server for that information. Again, this is expected, and something we’ll address in the next section.

Lastly, test that the slave nsd server is running using the dig command from earlier:

dig @localhost ethitter.com soa

Despite the previous warning about requiring a zone transfer, nsd should return the same SOA information seen in the Testing the primary NS section. If it does, then great, we can move on to the next section. If no information, different information, or an error results instead, confirm that the keys and zones are properly set on the slave and try the testing steps again.

Again, as a reminder, these steps to add a slave DNS can be repeated for as many backup servers as you plan to run. I’ve opted for two, as three total DNS responders seemed both quite standard, and reasonable for my meager needs.

Keeping slaves in sync

Now that one or more slave DNS are configured, the primary server is able to share zone files with the slaves. Except when new domains are added, all DNS changes will be made on the primary server, then shared with the slaves via an nsd command.

After making any zone changes, nsd first needs to be made aware of these changes:

nsdc rebuild
nsdc reload

Once those commands are called and you’ve used dig to verify locally that the changes are correct, pushing those updates to the slave server(s) is a matter of:

nsdc notify

If successful, nsd will report “Sending notify to slave servers…” before returning to the command prompt. If a key doesn’t match, or the slave took too long to respond, nsd will share the reason it failed to notify all slaves.

Saving zone files on slaves

Now that both primary and slave servers are configured and communicating, there’s one easy step to resolve the “slave zone ethitter.com with no zonefile” error that slave servers report when rebuilding their databases.

nsd provides a patch function that writes the zone data in its database back to the flat files from which the database can be rebuilt. Since the primary server holds the canonical versions, this command is only relevant on the slaves. To force nsd to write its zones to disk, simply call the following command on each slave:

nsdc patch

Once that completes, another call to nsdc rebuild should not generate the “no zonefile” error. If it does, confirm that nsdc patch was able to write the files to disk as expected.

While you can call nsdc patch as often as you’d like, it’s only necessary to do so when each slave server is first established. nsd installs a cron.d configuration that calls nsdc patch automatically, and since the master zone data is on the primary server anyway, the slaves’ zone files are purely backups. Running the command now confirms that the zones directory is writable and, again, resolves the “no zonefile” error.

Maintaining zones

With each zone update, there are four steps to ensure that the update will take effect and that all servers provide consistent results.

First, every zone update, regardless of how insignificant, should include an update to the serial number set in the zone’s SOA record. My example in the Contents of a zone section uses the serial 2016012401. The format isn’t terribly important, but it is imperative that the value increase with each successive revision; slave servers compare serial numbers to decide whether a zone has changed, so updates will not be detected if the serial stays the same. I use the date and a per-day revision number, but I could just as easily have started a counter at zero.
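The date-plus-revision scheme can be sketched in a few lines of shell (a hypothetical helper, not part of nsd; the date is hard-coded here for illustration):

```shell
#!/bin/bash
# Compute the next serial: YYYYMMDD plus a two-digit per-day revision.
today=20160125          # in practice: today=$(date +%Y%m%d)
current=2016012401      # the serial currently in the SOA record

if [ "${current:0:8}" = "$today" ]; then
    next=$((current + 1))   # same day: bump the revision
else
    next="${today}01"       # new day: restart the revision at 01
fi

echo "$next"   # prints 2016012501
```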

Once zone files–including their SOA records–are updated, activating those changes and synchronizing the slave servers is accomplished with these three previously discussed commands:

nsdc rebuild
nsdc reload
nsdc notify

External testing

Before making the final commitment and updating domains to use your new DNS infrastructure, a few free tools will identify issues that may exist with your configuration.

If these tools report no errors, your nameservers are ready to use.

Final thoughts

If you’ve made it this far, and all tests are passing, the servers are now ready to begin handling DNS requests for your domain(s). The last step is to update the nameserver settings with your domains’ registrar(s), followed by 24 to 48 hours of waiting for the changes to propagate. Given this delay, I strongly suggest testing your setup thoroughly, as switching back to the prior nameservers will incur the same lengthy delay.