[LEAPSECS] The Debate over UTC and Leap Seconds

M. Warner Losh imp at bsdimp.com
Tue Aug 10 16:18:26 EDT 2010


In message: <20100810194307.GB4307 at cox.net>
Greg Hennessy <greg.hennessy at cox.net> writes:

: On Tue, Aug 10, 2010 at 11:14:15AM -0600, M. Warner Losh wrote:
: > I think that he means that the WP7A folks are telling the software
: > community that either they suck, or it really isn't a problem.
:
: Well, they may have no desire to tell the software community they
: suck, but the software certainly sucks, in the sense that it doesn't
: represent reality, i.e. leap seconds.
:
: The two solutions are: 1) change the software standards to match
: reality, or 2) change reality to match the software standards. If
: there was a third, I've missed it.


The problem with (1) is that the software standards are reality.
There's a huge body of code that was written to those standards. Just
like there's a large number of deployed telescopes and such that
depend on DUT1 < 1s to work correctly, this software depends on the
POSIX definition of time_t.
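
To make that dependence concrete, here's a minimal sketch of my own
(not text from the standard) of what POSIX "seconds since the Epoch"
actually means: every day is exactly 86400 seconds long, so converting
a calendar date is pure arithmetic and leap seconds are invisible to
it:

    #include <stdio.h>

    /*
     * POSIX "seconds since the Epoch" defines every day to be exactly
     * 86400 seconds long, so converting a calendar date is pure
     * arithmetic -- no leap second table is ever consulted.
     */
    static long long days_since_epoch(int y, int m, int d)
    {
        y -= m <= 2;                       /* shift year for Jan/Feb */
        long long era = (y >= 0 ? y : y - 399) / 400;
        int yoe = (int)(y - era * 400);                    /* [0, 399] */
        int doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;
        int doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;   /* [0, 146096] */
        return era * 146097 + doe - 719468;  /* days 0000-03-01..1970-01-01 */
    }

    int main(void)
    {
        /* 2010-08-10 00:00:00 UTC; leap seconds are invisible here: */
        printf("%lld\n", days_since_epoch(2010, 8, 10) * 86400LL);
        /* prints 1281398400 */
        return 0;
    }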

So option (1) would require a lot of code to be rewritten, and even
then the standard would no longer match reality: it takes a long time
for changes to the core of the standards to ripple out to
implementations, and even longer for people to update their software.
That leaves you in a situation where the standard can match either the
ITU reality or the deployed-code-base reality, but not both.

The fact remains that the infrequent and erratic nature of leap
seconds makes them difficult to test for (end to end). Eliminating
them solves the problem. Scheduling them N years out helps a lot,
since the tables can be placed in the software now, and time and
materials can be planned more than 6 months in advance, so there are
no 'mid-year surprises' in somebody's budget for extra QA, GPS
simulator time, etc.
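
As a sketch of what "tables in the software" might look like (the
entries below are the real announced leap seconds through 2009;
everything else about the interface is my own illustration):

    #include <stdio.h>

    /*
     * A compiled-in leap second table: each entry is the POSIX time at
     * which a new TAI-UTC offset takes effect.  If leap seconds were
     * scheduled N years ahead, future entries could be appended today
     * and shipped long before they're needed.
     */
    struct leap { long long when; int tai_minus_utc; };

    static const struct leap table[] = {
        /* ... entries from 1972 through 1997 elided ... */
        {  915148800LL, 32 },   /* 1999-01-01 */
        { 1136073600LL, 33 },   /* 2006-01-01 */
        { 1230768000LL, 34 },   /* 2009-01-01 */
    };

    static int tai_minus_utc(long long t)
    {
        int off = 31;   /* offset in effect before the first entry shown */
        for (unsigned i = 0; i < sizeof(table) / sizeof(table[0]); i++)
            if (t >= table[i].when)
                off = table[i].tai_minus_utc;
        return off;
    }

    int main(void)
    {
        printf("TAI-UTC: %d s\n", tai_minus_utc(1281398400LL));  /* 34 */
        return 0;
    }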

In addition, since there has never been a negative leap second, I'd
wager that a large percentage of the gear that manages to handle
positive leap seconds correctly would fail if a negative leap second
ever happened, but that's not a standardization problem.
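
To see why, consider the end-of-day second sequences (this check is my
own illustration, not code from any deployed system): a positive leap
inserts 23:59:60, while a negative leap removes 23:59:59 entirely, and
the latter branch is effectively untested everywhere:

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Validity of the seconds field in the last minute of a UTC day.
     * leap = +1: 23:59:59 -> 23:59:60 -> 00:00:00 (second inserted)
     * leap = -1: 23:59:58 -> 00:00:00             (23:59:59 removed)
     * Deployed code has only ever had to exercise leap >= 0.
     */
    static bool valid_last_minute_sec(int sec, int leap)
    {
        if (leap > 0) return sec >= 0 && sec <= 60;
        if (leap < 0) return sec >= 0 && sec <= 58;
        return sec >= 0 && sec <= 59;
    }

    int main(void)
    {
        /* 23:59:59 on a negative-leap day never actually occurs: */
        printf("%d\n", valid_last_minute_sec(59, -1));   /* 0 */
        return 0;
    }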

The current status quo doesn't match what's described in the paper,
except for the least-demanding of applications. It has a number of
ill-specified edge cases where the standards (ITU vs POSIX and NTP)
don't mesh well. It is quite difficult for application writers who
need to keep real-time track of time during a leap second to do the
right thing, because the standards conflict on what the right thing
might actually be. This leads to domain-specific hacks that might work
well in one place, but fail in another. E.g., stepping time back a
second works great for a web server, but fails horribly for a
database. Freezing time works great for a database, but fails for
systems that use high-resolution time as a unique identifier. Skewing
the time over several hours would work well in most domains, but fails
for high-speed trading software that needs to know the legal time to
within a few milliseconds. What's a general purpose OS to do? What's
the 'right' way to 'fix' the software standard?
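
For what the skewing option might look like, here's a minimal sketch
(the 24-hour window and the linear rate are illustrative choices on my
part, not any standardized scheme):

    #include <stdio.h>

    /*
     * Sketch of the skewing ("smear") option: rather than stepping the
     * clock back or freezing it, the inserted second is amortized by
     * running the clock slightly slow across a window before midnight.
     */
    #define WINDOW 86400.0   /* smear over the 24 h before the leap */

    /*
     * t0:      POSIX time at the start of the smear window
     * elapsed: real (SI) seconds since t0; the window contains
     *          WINDOW + 1 of them because one extra second is inserted
     * returns: smeared POSIX clock reading -- monotonic, never steps
     */
    static double smeared_clock(double t0, double elapsed)
    {
        double rate = WINDOW / (WINDOW + 1.0);    /* ~11.6 ppm slow */
        if (elapsed <= 0.0)
            return t0 + elapsed;                  /* before the window */
        if (elapsed >= WINDOW + 1.0)              /* after: realigned */
            return t0 + WINDOW + (elapsed - (WINDOW + 1.0));
        return t0 + elapsed * rate;               /* inside: run slow */
    }

    int main(void)
    {
        double t0 = 1230681600.0;   /* 2008-12-31 00:00:00 UTC */
        /* lands exactly on 2009-01-01 00:00:00 (1230768000.000): */
        printf("%.3f\n", smeared_clock(t0, WINDOW + 1.0));
        return 0;
    }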

The paper posted here gets into none of this subtlety; instead it
glosses over the problems and suggests that non-functioning software
simply has to be fixed. But as you can see, "fixed" can be a very
slippery term.

Warner

