[LEAPSECS] Crunching Bulletin B numbers

Warner Losh imp at bsdimp.com
Fri Feb 18 13:39:34 EST 2011

On 02/16/2011 16:52, Mark Calabretta wrote:

> On Tue 2011/02/15 18:51:58 PDT, Rob Seaman wrote
> in a message to: Leap Second Discussion List <leapsecs at leapsecond.com>
>
>> My point is just that archival data is sufficient to characterize the
>> real world behavior of the algorithms already developed.  We needn't
>> wait ten years to know if data limited to what was available ten years
>> ago can predict this year's UT1 to some level of confidence.
>
> I was hoping to see more evidence of geophysics in the prediction
> algorithms but, apart from a few that incorporate predictions of
> atmospheric and oceanographic angular momentum, they mainly seem
> to be mathematical extrapolation techniques.
>
> Considering that LOD can be affected by essentially unpredictable
> things such as earthquakes and volcanoes; magma currents in the deep
> mantle; the melting of the ice caps due to global warming; the
> southern oscillation index and Indian Ocean dipole; meteor impacts;
> etc., likely including some "unknown unknowns", the task is probably
> no less difficult than reliably predicting movements in the stock
> exchange 10 years from now.

Applying the short-term models that work really well out to 500 days
(with an error bar of about 100ms) over longer horizons works adequately
for most people, but the error would exceed the 1s tolerance in the
1000-1500 day time frame due to the factors you've listed.

> That said, if leap second insertions were simply deferred for 10
> years, DUT1 would probably grow to no more than about 6s (even
> including deceleration), which seems much preferable to letting it
> grow without limit.

That's part of the compromise that I've put forward.  Goal: publish
leap seconds out 10-20 years.  To get to that goal we can do things
like let DUT1 get to 1.1s and see what breaks, as Tom has suggested.
We can provisionally publish things out one, two or even three years at
first, until the models improve.  Based on the experience of purposely
pushing things past the edge, as well as publishing a few years out, we
can gradually stretch that time horizon.

Given that we know approximately what the end point will be 100 years
from now, we could even have a mechanical rule, like the leap-day rule,
that would put us in the right neighborhood of synchronization.  Having
a good, formulaic/mechanical method for declaring leap seconds would be
a vast improvement over the 'update your tables every 6 months' regime
we have today.
I'd bet that the following would keep us within 10s over the next 100 years:

#include <stdbool.h>

bool
leap_second_end_of(int year, int month)
{
    int m = (year - 2012) * 12 + month - 6; /* First leap June 2012 */

    if (month != 6 && month != 12)          /* leaps only in June/December */
        return false;
    return m >= 0 && m % 18 == 0;           /* none before 2012; then every 18 months */
}

which would give us a leap second every 18 months, starting June 2012.
Of course, we'd have to tweak the frequency every century to (a) steer
to 0s off and (b) track the observed LOD trends (or in other words, to
steer in phase and frequency to keep UTC synchronized to UT).

If we had something like that, then leap second compliance would
approach that of leap-day compliance in modern software.


> Regards,
> Mark Calabretta



> _______________________________________________
> LEAPSECS mailing list
> LEAPSECS at leapsecond.com
> http://six.pairlist.net/mailman/listinfo/leapsecs



