[LEAPSECS] Crunching Bulletin B numbers

Rob Seaman seaman at noao.edu
Tue Feb 15 11:12:32 EST 2011


Ian Batten wrote:


> The UK's standard time broadcast, which is funded by the government, contains DUT1 in a format which doesn't permit |DUT1|>0.9.


The point is that the state of the art appears to allow UT1 to be predicted to better than 0.1s over 500 days - perhaps even better than 0.05s over that period, which the "Earth orientation parameters prediction comparison campaign" (EOP PCC) called the "medium-term". For the purpose of stabilizing leap second scheduling, what interests us here is what the state of the art would be over the long term of 5-10 years.

The mean absolute prediction errors for DUT1 over those 500 days appear very well behaved across the several independent teams, increasing roughly linearly. It would be unwarranted to plan extensively on such a simple-minded extrapolation - presumably the predictions become uncorrelated over some period longer than 500 days. However, it seems entirely warranted to brainstorm on this list, and to suggest that a long-term prediction project could be extremely interesting.

Keeping |DUT1| <= 0.9s basically requires confidence in the prediction to better than 0.4s. A +/- 0.5 second buffer is needed to confidently add (or subtract) a leap second: being off more than 0.5s one way means a leap second can be added to leave it off less than 0.5s the other way. That fixed buffer leaves 0.9s - 0.5s = 0.4s of margin for prediction error.

So, say the predictive tolerance is 0.1s per 1.5 years. If a linear extrapolation proves a reasonable assumption (I'm personally skeptical), then that's 6 years before the 0.4s wall is encountered. (If the predictions really are good to better than 0.05s/500 days, that's 12 years - not bloody likely.) Throw a square root in there while waving your hands about errors adding in quadrature - the square root of that factor of four is two, so twice the 1.5-year baseline - and we could speculate that the 0.9s requirement could be met over 3 years.
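
To make that back-of-the-envelope arithmetic easy to replay, here is a small Python sketch - entirely my own framing, not anything the EOP PCC teams publish - that assumes the mean absolute prediction error grows as a power law of lead time, where the exponent p = 1 is the linear extrapolation and p = 2 is one pessimistic reading of the hand-waving above:

    # Assumes err(t) = err_base * (t / baseline_years)**p; this just replays the
    # arithmetic in the paragraph above, it is not taken from the EOP PCC reports.
    def horizon_years(margin_s, err_base_s, baseline_years=1.5, p=1.0):
        """Years until the assumed prediction error consumes the DUT1 margin."""
        return baseline_years * (margin_s / err_base_s) ** (1.0 / p)

    print(horizon_years(0.4, 0.10))          # 6.0  - linear growth at 0.1 s per 1.5 yr
    print(horizon_years(0.4, 0.05))          # 12.0 - the optimistic rate
    print(horizon_years(0.4, 0.10, p=2.0))   # 3.0  - pessimistic, faster-than-linear growth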

As we've discussed previously, it is not completely out of the question to gently and prudently relax the DUT1 tolerance in stages so as to reveal issues. One point here is that the +/- 0.5s buffer remains the same. If the limit were increased to |DUT1| < 1.9s, for instance, that gives the prediction algorithm more than three times the wiggle room, 1.4s versus 0.4s, not merely twice. The fairly conservative 3-year extrapolation could become 9 years. I think this would be of interest to many here with very different points of view.
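
Purely as an illustration of that scaling (the 1.9s figure is just the example above, not a proposal), the fixed 0.5s buffer is what makes the margin grow faster than the limit itself:

    # The 0.5 s scheduling buffer is fixed, so relaxing the limit grows the
    # prediction margin disproportionately.  Illustrative arithmetic only.
    LEAP_BUFFER = 0.5  # seconds; a leap second can always re-center |DUT1| below 0.5 s

    for limit in (0.9, 1.9):
        margin = limit - LEAP_BUFFER
        print(f"|DUT1| < {limit:.1f} s  ->  {margin:.1f} s of room for prediction error")
    # 0.4 s versus 1.4 s: 3.5 times the wiggle room for a limit only ~2.1 times larger.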

So, what is the state of the art for long-term predictions of UT1? Could the algorithms used by the EOP PCC teams simply be run on the historical Bulletin B numbers to find out?
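
As a concrete (and entirely hypothetical) starting point for that experiment, something like the following would score a naive straight-line extrapolation against the historical record - not the EOP PCC algorithms themselves, just a crude baseline. The file name and its two-column format (MJD, UT1-UTC in seconds) are assumptions for illustration only:

    # Hypothetical hindcast sketch.  "bulletin_b_dut1.txt" and its layout are
    # assumed; a real attempt should work with UT1-TAI (or remove the
    # leap-second steps) so the extrapolated quantity is smooth.

    def load_dut1(path="bulletin_b_dut1.txt"):
        """Return {MJD: UT1-UTC in seconds} from a whitespace-delimited table."""
        series = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2:
                    series[int(float(parts[0]))] = float(parts[1])
        return series

    def hindcast_mae(series, horizon_days=500, fit_days=365):
        """Mean absolute error of a straight-line extrapolation at the horizon."""
        errors = []
        for start in sorted(series):
            end = start + fit_days
            target = end + horizon_days
            if end in series and target in series:
                rate = (series[end] - series[start]) / fit_days  # seconds per day
                errors.append(abs(series[end] + rate * horizon_days - series[target]))
        return sum(errors) / len(errors) if errors else float("nan")

    if __name__ == "__main__":
        dut1 = load_dut1()
        print(hindcast_mae(dut1))                        # the 500-day "medium-term"
        print(hindcast_mae(dut1, horizon_days=5 * 365))  # a crude look at 5 years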

Rob
