[LEAPSECS] Bloomberg announced its smear

Warner Losh imp at bsdimp.com
Sat Sep 24 17:46:50 EDT 2016


On Sat, Sep 24, 2016 at 3:32 PM, Steve Summit <scs+ls at eskimo.com> wrote:
> Brooks Harris wrote:
>> On 2016-09-24 11:39 AM, Stephen Scott wrote:
>>> Smearing is fine if you don't depend on a second being a second.
>>> I work in the broadcast industry where time synchronization is
>>> critical.
>>
>> The challenge here is that the broadcast industry needs fixed-epoch
>> deterministic local timescales to accomplish media (video and audio)
>> timekeeping.
>>[...]
>> Fundamentally, the early implementations of POSIX and the many systems
>> based on its heritage cannot represent "23:59:60" and so most systems
>> are *indeterminate* at (or near) the Leap Second.
>
> Right.  I think several things are clear:
> * Most code assumes Posix (warts and all) and needs the Posix
>   definition preserved, and can tolerate smearing perfectly fine.
> * Some code needs more perfect timekeeping, with perfectly
>   accurate seconds, and/or explicitly visible leap seconds,
>   and therefore definitely no smearing.  (See also RFC 7164.)
> * The typical "fallback" Posix implementation, which does
>   something variously ill-defined, nondeterministic, and/or
>   jumpy at a leap second, is really pretty unacceptable.
>   (Yet that's what most systems are still living with today.)

I don't know about 'unacceptable.' It's certainly less desirable, but
experience has shown that the vast majority of systems 'fail safe'
after a period of time and that the errors don't materially affect
their operations. That's generally acceptable, but some users will
have problems with it. Those users need to lie to these broken
systems about the time to avoid the undesirable behavior.

> My conclusion is that there's no One True Solution.  My hope
> (and I'm trying to arrange a demonstration) is that it's possible
> to implement some decent compromises, preserving Posix (with
> possible smearing) for the majority of programs that need it,
> providing true UTC for the few programs that need it, and
> absolutely getting rid of any awkward clock jumps at UTC midnight
> on leapsecond days.

It's also possible to cope with the jump at midnight. Several
different mechanisms exist to allow that, but they all require some
lie to be told. As you've concluded, there's no universal lie. The
lie you tell may be useful and reasonable for the systems in your
demonstration, but other systems may find the lie untenable.
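
To make the underlying problem concrete: POSIX time_t simply has no
slot for 23:59:60, so the leap second collapses onto the following
midnight. A minimal sketch, using timegm() (a BSD/glibc extension,
not standard POSIX):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* 2016-12-31 23:59:60 UTC, the leap second itself */
        struct tm leap = { .tm_year = 116, .tm_mon = 11, .tm_mday = 31,
                           .tm_hour = 23, .tm_min = 59, .tm_sec = 60 };
        /* 2017-01-01 00:00:00 UTC, the second after it */
        struct tm next = { .tm_year = 117, .tm_mon = 0, .tm_mday = 1 };

        /* Both normalize to the same time_t (1483228800): the leap
         * second has no representation of its own. */
        printf("23:59:60 -> %ld\n", (long)timegm(&leap));
        printf("00:00:00 -> %ld\n", (long)timegm(&next));
        return 0;
    }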

Some of the lies told:
(1) Stopping time around midnight: the second at midnight takes 2s to elapse.
(2) Repeating the last second before midnight (or the first second after it).
(3) Running the kernel on TAI and fixing the output in the timezone layer.
(4) Running the kernel on TAI and fixing the problem with an emulation
layer that preserves the 86400-seconds-per-day invariant.
(5) Smearing the seconds over some period of time (sketched below).
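
To illustrate (5), here's a minimal sketch of a 24-hour linear smear
in the style Google and Bloomberg have described (noon to noon, UTC,
around the 2016-12-31 leap second). The window and constants are
illustrative, not any vendor's exact algorithm:

    /* "true_time" is elapsed seconds on a leap-counting (TAI-like)
     * scale, aligned with the POSIX clock before the window opens. */
    #define SMEAR_START 1483185600.0   /* 2016-12-31 12:00:00 UTC */
    #define SMEAR_LEN     86400.0      /* POSIX seconds in the window */

    double smeared_posix_time(double true_time)
    {
        double elapsed = true_time - SMEAR_START;

        if (elapsed <= 0.0)
            return true_time;            /* before the window */
        if (elapsed >= SMEAR_LEN + 1.0)
            return true_time - 1.0;      /* after: leap absorbed */

        /* Inside the window, 86401 real seconds map onto 86400
         * POSIX seconds, so each smeared second runs ~11.6 ppm
         * long. */
        return SMEAR_START + elapsed * (SMEAR_LEN / (SMEAR_LEN + 1.0));
    }

The appeal is that the clock stays continuous and monotonic; the cost
is that no second in the window is an SI second, which is exactly the
property Stephen's broadcast case can't give up.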

All these lies have problems for some class of program. If your system
doesn't have that class of program on it, or you can test and fix such
programs before the leap second, then that's the best lie to tell. For
web-server / file-server type operations, generally the best lie is to
smooth things over with smeared seconds, especially since this lie can
be told to all systems via NTP or similar means. For real-time
control, generally the best lie is to repeat the second at midnight,
but use a purely monotonic timescale for the real-time control
programs and avoid POSIX interfaces as much as possible. For
development boxes, fixing the problem in timezones often suffices, so
long as you have a good source of leap-second metadata.
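
For the real-time case, the key move is to measure intervals against
a clock that never steps. On POSIX-ish systems that's CLOCK_MONOTONIC,
which is unaffected whether CLOCK_REALTIME is stepped, repeated, or
smeared. A minimal sketch:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* ... one iteration of the control loop ... */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        /* Interval arithmetic here is immune to any leap-second
         * games played on the wall clock. */
        double dt = (t1.tv_sec - t0.tv_sec)
                  + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("loop interval: %.9f s\n", dt);
        return 0;
    }

(For the development-box case, the timezone fix usually means the
"right/" zoneinfo tree, e.g. TZ=right/UTC, whose conversions carry
leap-second metadata and can render 23:59:60.)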

You're quite right that, in the absence of perfect software and
widespread adoption of better APIs, there is no silver bullet.

Warner

