Introduction

Active Servers

The pool.ntp.org project is a big virtual cluster of timeservers providing reliable, easy-to-use NTP service for millions of clients.

The pool is used by millions or tens of millions of systems around the world. It's the default "time server" for most of the major Linux distributions and many networked appliances (see the information for vendors).

Because of the large number of users, we are in need of more servers. If you have a server with a static IP address that is always available on the internet, please consider adding it to the system.

News

As you might have seen, several potentially critical security vulnerabilities in all versions of ntpd were announced a few days ago.

Most OSes have released back-ported fixes. Depending on your specific NTP and network configuration you might not be exposed, but the easiest way to make sure your systems aren't vulnerable is to apply the software updates and make sure ntpd has been restarted on the fixed version.

Alternatively you can read the announcement page linked above carefully and make configuration changes to mitigate the issues.

If you have built ntpd from source, the easiest fix is to update to 4.2.8. If you have trouble building that version, a "4.2.8p1-beta1" version with some fixes is also available from support.ntp.org.

If you are using the standard ntpd daemon to serve time to the public internet, it’s important that you make sure it is configured not to reply to “monlist” queries. This includes many routers and other networked equipment.

The configuration recommendations include the appropriate “restrict” lines to disallow any management queries to ntpd. Most Linux distributions will have shipped an updated version by now that simply disables “monlist” queries, which also solves the primary problem.
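As a sketch of what such “restrict” lines typically look like in ntp.conf (your addresses and exact options may differ; check the configuration recommendations for your setup):

```
# Refuse management/monitoring queries (including "monlist")
# while still answering ordinary time requests.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

# Allow full access from the local host.
restrict 127.0.0.1
restrict -6 ::1
```

The "noquery" flag is what blocks the mode 6/7 management queries abused in amplification attacks; "nomodify" and "notrap" prevent remote configuration changes.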

This week we had a period of weird behavior for the monitoring system for (mostly) German IPv6 servers.

After much back and forth on the mailing list and numerous debugging sessions we got this information from a network engineer at Hurricane Electric:

A bug was recently discovered in Force10 switches that causes unicast IPv6 NTP traffic to be erroneously broadcast to all ports. Because of this, there are currently access lists in place preventing some IPv6 NTP traffic from traversing the DECIX exchange, as it was causing a storm that generated nearly 1 terabit per second of traffic. This should be resolved in the near future.

The number of IPv6 servers active in the pool appears to be about back to normal.

This is also the answer to why we don't yet have IPv6 servers by default in all the pool zones. As you might know, only "2.pool.ntp.org" (and 2.debian.pool.ntp.org, etc.) currently returns AAAA records.
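You can see the difference with a couple of DNS lookups (results vary by resolver and over time, so the output is not shown here):

```shell
# The "2" zone returns both A and AAAA records.
dig +short AAAA 2.pool.ntp.org

# The other zones return A records only, so this yields no answers.
dig +short AAAA 0.pool.ntp.org
```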

The NTP Pool "backend systems" are moving racks at Phyber. To minimize the risk of things going wrong, we're doing it the simple, old-fashioned way: turning everything off, moving it, and turning it on again. That will mean about an hour where servers are not monitored and we can't add new ones or access the www.pool.ntp.org site.

In the new rack there'll be more power available, so when the move is done we'll have more capacity.

Over the last couple of months, a few of the "central servers" failed. It hasn't caused any service outage for the NTP clients, but some of you might have noticed that the NTP Pool management site has been sluggish at times.

A few months ago I bought a few new servers and sent them down to our friends at Phyber Communications, who wired them up in their hosting facility. Over the last weeks I've added Puppet declarations to configure them, and since earlier this evening they've been in production for the web sites and a few other services.

I have a long road map for the NTP Pool system and many of the items involve processing and storing more data to make our system better. The new servers are going to be helpful for that.

My other project for the last few months has been upgrading the GeoDNS server to support EDNS-CLIENT-SUBNET. It has been live for users of Google DNS for a while. We're still working out some kinks with the OpenDNS folks to get it fully enabled there.
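If your dig is new enough to support the +subnet option, you can ask a resolver that honors EDNS-CLIENT-SUBNET (Google DNS, for example) on behalf of a given client network and see the geographically targeted answer change. The subnet below is a documentation prefix used purely for illustration; substitute a real network of interest:

```shell
# Query through Google Public DNS, passing a client-subnet hint so the
# GeoDNS backend answers as if the query came from that network.
dig @8.8.8.8 pool.ntp.org +subnet=192.0.2.0/24
```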