UTC with Smoothed Leap Seconds (UTC-SLS)

UTC-SLS is a proposed standard for handling UTC leap seconds in
computer protocols, operating-system APIs, and standard libraries. It
aims to free the vast majority of software developers from even having
to know about leap seconds and to minimize the risk of leap-second
triggered system malfunction.

Overview

The international standard timescale Coordinated Universal Time
(UTC) is defined by a network of atomic clocks. It is kept
synchronized with Earth's rotation through the occasional insertion
and deletion of leap seconds. UTC is widely referenced in
computer-related specifications, in particular communication protocols
and application-program interfaces of operating systems. An increasing
number of computer systems are closely synchronized with UTC today.
However, the specifications of these systems usually do not dictate an
exact behavior of the system when a leap second is inserted or deleted
from UTC, that is, when UTC skips the last second (23:59:59) of a day
or inserts an additional second (23:59:60). Some currently widely
implemented methods for handling leap seconds are potentially
disruptive.

The Coordinated Universal Time with Smoothed Leap Seconds
(UTC-SLS) aims to solve this problem. It is a minor
variation of the UTC time scale with the following properties:

UTC and UTC-SLS are identical most of the time; they differ only
during the last 1000 seconds of a UTC day in which a leap second is
inserted or deleted.

During the last 1000 seconds of a UTC day with an inserted leap
second, a UTC-SLS clock slows down by 0.1% to 0.999 times its
normal rate, such that the last 1000 UTC seconds (including the
inserted leap second) span exactly the same time as the corresponding
999 "slow seconds" on the UTC-SLS clock.

During the last 1000 seconds of a UTC day with a deleted leap
second, a UTC-SLS clock accelerates by 0.1% to 1.001 times its normal
rate,
such that the last 1000 UTC seconds (excluding the deleted leap
second) span exactly the same time as the corresponding 1001 “fast
seconds” on the UTC-SLS clock.

At each full hour (and half hour), UTC and UTC-SLS are identical,
even right after a leap second.
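The three properties above fully determine the mapping between the two
scales. It can be sketched as a simple piecewise-linear function (a
sketch in Python; utc_sls_of_day is a hypothetical helper name, not
part of any standard API):

```python
def utc_sls_of_day(u, leap):
    # Seconds elapsed since midnight on the UTC scale (u) mapped to
    # seconds elapsed since midnight on the UTC-SLS scale, for a day
    # with leap = +1 (inserted leap second), 0 (ordinary day) or
    # -1 (deleted leap second).
    day_len = 86400 + leap        # SI seconds in this UTC day
    window = day_len - 1000       # smoothing covers the last 1000 s
    if u < window:
        return float(u)           # identical outside the window
    # 1000 UTC seconds map onto (1000 - leap) UTC-SLS seconds,
    # i.e. rate 0.999 for insertion, 1.001 for deletion
    return window + (u - window) * (1000 - leap) / 1000.0

# insertion day: both scales read 00:00:00 again at the next midnight
assert utc_sls_of_day(86401, +1) == 86400.0
# deletion day: likewise
assert utc_sls_of_day(86399, -1) == 86400.0
```

On an insertion day the window opens at 23:43:21 UTC, where the two
scales still agree exactly; halfway through the window, UTC-SLS reads
0.5 s less than UTC, and the difference has vanished again by midnight.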

The UTC-SLS specification provides a detailed explanation of the
advantages of this particular choice among all the alternatives
considered.

Specification

This proposal has now been published
as an IETF Internet-Draft for wide discussion.

Frequently asked questions

How widely was this proposal already discussed?

The UTC-SLS proposal was initially called UTS and has been discussed
repeatedly in various expert forums since 2000, including the USNO leapsecs
mailing list, the USENET group comp.protocols.time.ntp, and the ITU-R
SRG 7A Colloquium on the UTC timescale in May 2003 in Torino. Two
earlier publications that described this proposal (then still called
UTS) are:

uts.txt – the initial October 2000
proposal (written in the style of an ITU recommendation)

What criticism has been voiced against this
proposal?

The only specific criticism or suggestion for improvement that I
ever heard in relation to this proposal concerned its original name,
UTS, which has since changed (see below).

All other objections to UTC-SLS that I heard were not directed
against its specific design choices, but against the (very well
established) practice of using UTC at all in the applications that this
proposal targets:

Some people argue that operating system interfaces, such as the
POSIX “seconds since the epoch” scale used in time_t APIs, should be
changed from being an encoding of UTC to being an encoding of the
leap-second-free TAI timescale.

Some people want to go even further and abandon UTC and leap
seconds entirely, detach all civilian time zones from the rotation of
Earth, and redefine them purely based on atomic time.

While these people are usually happy to agree that UTC-SLS is a
sensible engineering solution as long as UTC remains the main time
basis of distributed computing, they argue that this is just a
workaround that will be obsolete once their grand vision of giving up
UTC entirely has become true, and that it is therefore just an
unwelcome distraction from their ultimate goal.

I do not believe that UTC in its present form is going to disappear
any time soon. Therefore, it makes perfect sense to me to agree on a
well-chosen guideline for how to use UTC in a practical and safe way
in selected applications.

Why is UTS now called UTC-SLS?

The original October 2000 proposal was to call this specification
UTS, for “Smoothed Coordinated Universal Time”. Daniel Gambis noted at
the May 2003 Torino meeting that the IERS already uses a
"Smoothed Universal Time (UTS)" internally, although he admitted that
this term
is not widely known. Nevertheless, his comment, combined with the
idea that having the designation start with “UTC-” makes it perhaps
slightly clearer to readers that UTC-SLS is just a minor modification
of UTC, as opposed to yet another definition of Universal Time,
changed my mind about the proposed name.

Why make UTC-SLS an IETF standard, rather
than an ITU-R recommendation?

The definition of time scales has traditionally been the domain of
astronomers and metrologists, and their respective international
organizations (IAU, IERS, BIH, BIPM). The ITU-R got involved only with
regard to time-signal radio stations, for which it defined UTC. The
whole notion of a leap second was born out of practical considerations
to do with standard-frequency transmitters and their carrier-phase
locked pulse-per-second time codes.

UTC has been adopted far more
widely than just in radio time signals and radio clocks. It has become
a standard reference in computer networking protocols,
programming-language standards and operating-system interfaces. The
question of how to deal with UTC leap seconds on computer networks
arose primarily with the widespread implementation of NTP, a standard
maintained today by the IETF. Since UTC-SLS is foremost an
implementation and interoperability guideline for NTP users, the IETF
seems the most appropriate forum.

In addition, the ITU-R working group in charge of UTC is at present
busy discussing a proposal to abandon Universal
Time entirely as the basis of civilian time keeping, in favor of
a purely atomic-time based timescale that has no leap seconds. This
proposal faces substantial opposition and it seems at present unlikely
to go through. However, until that discussion is off the table, the
working group is unlikely to be able to consider more pragmatic
recommendations on how to implement the existing UTC definition
smoothly in systems other than radio clocks.

Is Google already using UTC-SLS?

In a sense, but not quite. In a September
2011 blog
post, Google engineer Christopher Pascoe describes how Google
internally distributes via NTP a "smeared" version of UTC that does
not contain any leap seconds. This way, they avoid the headache of
having to audit millions of lines of application code for how it deals
with the potentially hazardous implementation of leap seconds in the
existing Linux/etc. NTP kernel code. That code simply jumps back by
one second at the start of an inserted leap second (23:59:60); in
other words, it replays the 1000 ms worth of timestamps of the
preceding second (23:59:59) and thereby makes time non-monotonic,
which may cause malfunctions in application programs that depend on
clock monotonicity.
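The monotonicity hazard can be illustrated with a toy model of that
kernel behaviour (an idealized sketch, not actual kernel code):

```python
def naive_kernel_time(u):
    # Idealized model of the classic kernel leap-second step on a day
    # with an inserted leap second: u is seconds elapsed since midnight
    # on the UTC scale (0 <= u < 86401). At the start of the inserted
    # second (u = 86400) the clock steps back by one second and
    # replays the timestamps of the preceding second.
    return u - 1.0 if u >= 86400.0 else u

# A reading taken during 23:59:60 can be *earlier* than one taken
# during 23:59:59 -- the clock is not monotonic:
assert naive_kernel_time(86400.2) < naive_kernel_time(86399.9)
```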

Google's "leap smear" looks very similar to UTC-SLS from an
application writer's point of view. However, their implementation
differs in two important respects:

Difference 1: Location of implementation

Google's smeared leap seconds are implemented in their core NTP
server(s). They distribute a smoothed form of UTC over the NTP
protocol, and drop from the protocol any indication that a leap second
is about to happen. Therefore, the PLLs in the tracking NTP clients or
connected operating-system kernels remain unaware of the leap second,
and merely experience what looks to them like a temporary frequency
deviation of their local oscillator.

UTC-SLS, by contrast, is intended not to affect NTP servers or the
NTP protocol in any way. The proposal envisions implementing the
smoothed leap seconds in NTP clients, or better still in the
kernel-clock implementations that they interface with.

The UTC-SLS approach has the big advantage that NTP clients remain
aware of the scheduled time of a leap second. This means that NTP
clients can

accurately implement the smoothing steps based on pre-announced
data, even when they temporarily lose connectivity with NTP servers
near the leap second;

continue to accurately monitor the frequency stability of their
local oscillators (which may be OCXOs expected to drift by far less
than a second per month), because they are not being lied to by the
NTP servers;

still leave an operating-system kernel the option to accurately
provide alternative leap-free timescales (e.g., TAI) to specialist
applications that ask for them (e.g., astronomical or geophysical
experiments, navigation systems).

With UTC-SLS, only application programs that do not worry about
utmost levels of frequency stability are briefly lied to about UTC and
thereby kept unaware of the leap seconds.

The Google approach mainly has the short-term practical advantage
that no changes are needed to existing OS kernels. It is therefore
easier and quicker to roll out locally with existing code. However, it
violates the NTP protocol specification and therefore should really
only be implemented in in-house networks that are isolated from the
world-wide public network of low-stratum NTP servers.

Difference 2: Shape of smoothing

Google appears to use half of a raised-cosine shape to bridge the
pre-leap and post-leap timelines; however, the blog post cited above
does not reveal the length of the time window over which the
correction extends.

UTC-SLS uses a linear ramp function that starts exactly 1000
seconds before the end of the leap second (midnight UTC).

The choice of the much simpler linear correction function in
UTC-SLS was motivated by the fact that it will be evaluated in the
same code that already applies time and frequency offset corrections
whenever an application calls gettimeofday(), and these corrections
are usually linear functions. Therefore, UTC-SLS can be implemented
very efficiently, and very similarly to existing kernel-clock PLL
code, by just temporarily tweaking some of the kernel clock's internal
parameters.
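To illustrate the point (a toy sketch with hypothetical names, not
actual kernel code): a kernel clock that already reports time as a
linear function of a raw hardware counter needs only two extra rate
changes to implement UTC-SLS for an inserted leap second.

```python
class SmoothedClock:
    # Toy kernel clock: reported time advances linearly from the last
    # update point at a programmable rate, just as existing PLL code
    # already does when applying frequency corrections.
    def __init__(self):
        self.ref_raw = 0.0   # raw (hardware) time at last rate change
        self.ref_out = 0.0   # reported time at last rate change
        self.rate = 1.0

    def read(self, raw_now):
        return self.ref_out + (raw_now - self.ref_raw) * self.rate

    def set_rate(self, raw_now, rate):
        # change the rate without any step in reported time
        self.ref_out = self.read(raw_now)
        self.ref_raw = raw_now
        self.rate = rate

clk = SmoothedClock()          # raw time == seconds since midnight UTC
clk.set_rate(85401.0, 0.999)   # window opens 1000 s before midnight
mid = clk.read(85901.0)        # 500 raw seconds later ...
clk.set_rate(86401.0, 1.0)     # ... nominal rate restored at midnight
assert abs(mid - 85900.5) < 1e-9               # 499.5 reported seconds
assert abs(clk.read(86401.0) - 86400.0) < 1e-9  # scales agree again
```

The reported time is continuous and monotonic throughout, and no code
path other than the rate-change bookkeeping that kernels already have
is exercised.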

The semi-raised-cosine correction has the advantage of a smooth
first derivative, and may therefore be easier to follow by some types
of PLL. This makes sense if the correction is applied before any
tracking kernel PLLs, as Google does in NTP servers. But it brings no
advantage if the correction is applied after any PLLs. (A
spline function formed by joining segments from two second- or
third-order polynomials would have similar properties but is more
efficient to evaluate in kernel-clock code, especially where
floating-point registers are not easily available. So if there really
were a need for a smooth first derivative of the correction ramp, I'd
advocate using a spline over a cosine.)
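For comparison, the two correction shapes can be written as functions
from the normalized window position x in [0, 1] to the fraction of the
one-second offset already applied (a sketch; the exact function Google
uses is not published):

```python
import math

def linear_ramp(x):
    # UTC-SLS correction shape over the smoothing window
    return x

def half_raised_cosine(x):
    # shape that Google's smear reportedly resembles
    return (1.0 - math.cos(math.pi * x)) / 2.0

# Both bridge the full offset across the window ...
assert linear_ramp(0.0) == half_raised_cosine(0.0) == 0.0
assert linear_ramp(1.0) == half_raised_cosine(1.0) == 1.0
# ... but the cosine's slope vanishes at the endpoints (smooth first
# derivative), while the linear ramp starts and stops abruptly:
eps = 1e-6
assert half_raised_cosine(eps) / eps < 1e-3      # slope ~ 0 at x = 0
assert abs(linear_ramp(eps) / eps - 1.0) < 1e-9  # slope 1 throughout
```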

Overall, the Google experience suggests that there is a justifiable
need for a smoothed version of UTC for use in computer APIs, if only
for due diligence reasons. Their current solution makes sense because
of its quick deployability in a highly heterogeneous environment. On
the other hand, UTC-SLS has many additional advantages and remains a
desirable and more robust candidate for a standardized, long-term
solution for the same problem.