> This periodic adjustment mainly benefits scientists and astronomers as it allows them to observe celestial bodies using UTC for most purposes
This is incorrect: dealing with leap seconds is a major problem in astronomy, requiring a data file update or even a recompile any time a new one is announced. Since they are only ever announced 6 months in advance, this creates a lot of logistical problems.
Astronomy algorithms usually work in Barycentric Dynamical Time (TDB), Terrestrial Time (TT), or UT1. In reality, the whole point of inventing UTC was so that people don't have to deal with those systems on a daily basis.
The process most astronomy programs go through is to get UTC from the user, then convert the Gregorian date to a Julian day number to get rid of the Gregorian calendar altogether. Then look up the number of leap seconds and add those to the JD to get International Atomic Time, then add 32.184 seconds to get Terrestrial Time. If Barycentric Dynamical Time is needed, you must first compute the velocity of the Earth relative to the solar system barycenter (which itself requires TDB), then compute the relativistic effects of that motion on Terrestrial Time. If you need UT1, it can only be obtained by observation from the International Earth Rotation and Reference Systems Service, and requires daily updates and interpolation of values in between observations.
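For concreteness, here's a minimal sketch of the first half of that pipeline (not any particular astronomy package's API; the leap-second count is an assumed constant that real code must look up in an up-to-date IERS table):

    def gregorian_to_jd(year: int, month: int, day: float) -> float:
        """Gregorian calendar date to Julian Day number (Meeus's formula)."""
        if month <= 2:
            year -= 1
            month += 12
        a = year // 100
        b = 2 - a + a // 4
        return int(365.25 * (year + 4716)) + int(30.6001 * (month + 1)) + day + b - 1524.5

    LEAP_SECONDS = 37      # TAI - UTC, valid since 2017-01-01; must track IERS announcements
    TT_MINUS_TAI = 32.184  # fixed by definition

    def utc_jd_to_tt_jd(jd_utc: float) -> float:
        jd_tai = jd_utc + LEAP_SECONDS / 86400.0  # UTC -> International Atomic Time
        return jd_tai + TT_MINUS_TAI / 86400.0    # TAI -> Terrestrial Time

    # 2022-07-26 00:00 UTC expressed as a TT Julian Date
    print(utc_jd_to_tt_jd(gregorian_to_jd(2022, 7, 26.0)))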
So, as you can see, leap seconds do nothing to make an astronomer's life easier. In fact, we jump through a lot of hoops to make the average person's life easier. It sounds like Meta wants to change the definition of time for everyone just to make their programming a little easier. I find the very premise outright ridiculous.
It's true that your average person will not be affected by the Sun setting a few seconds earlier. But eventually it will build up enough error that it has to be addressed, and Meta is just trying to make their problem someone else's problem.
> So, as you can see, leap seconds do nothing to make an astronomer's life easier. In fact, we jump through a lot of hoops to make the average person's life easier. It sounds like Meta wants to change the definition of time for everyone just to make their programming a little easier. I find the very premise outright ridiculous.
Meta (and everyone else behind this proposal) are arguing that leap seconds are more trouble than they're worth. Just as you are in the specific case of astronomy.
> It sounds like Meta wants to change the definition of time for everyone just to make their programming a little easier. I find the very premise outright ridiculous. ... Meta is just trying to make their problem someone else's problem.
Meta's argument is: we should get rid of leap seconds because even though they're helpful for astronomers, they're bad for everyone else.
Your counterpoint was: Actually leap seconds are _also_ bad for astronomers.
Which implies then that Meta is even more right to suggest that we get rid of leap seconds, because it seems like they help nobody.
I can see how you'd read it that way, but my point was that they were not invented for astronomers, with the implication being that the author didn't research the topic thoroughly enough to be making recommendations.
Astronomers may just understand that the problem exists: [the ITU group’s chairman] says he needs more input from people who use universal time. “Many people don't even know that leap seconds exist.”
The decision to ignore leap seconds reminds me of the decision by the Russian and some other Orthodox churches to ignore the changes introduced by the Gregorian calendar. The decision saved them some internal political problems, like the need to decide which saints would have their celebrations skipped, but now Orthodox Christmas is moving toward summer at a speed of about 1 day per 100 years. Eventually somebody will have to deal with this, but not now - there's a lot of time - same thinking.
Meanwhile, if you live in a Gregorian country you can get your Christmas tree for free on Orthodox December 12th…
Guilty as charged. Thinking of that - there may be no more Russian winter as we knew it at all, even in the Northern hemisphere. My friends are now growing apricots in the Russian temperate zone. Apricots! - in a place where the “Baptismal Frosts” (which arrive each year just before Christmas) used to kill all such fancy vegetation.
The Islamic calendar drifts by about 10 days a year, and Muslims seem to be able to cope fine with their holidays not consistently being in the same season.
Well to be fair, I don't think the Muslim calendar is _meant_ to have any solar correspondence anyway.
But more important than the holidays is the use of the calendar for keeping a natural sense of time (that is, one corresponding with the seasons / sun). Which is why most Islamic countries use the Gregorian calendar for civil purposes (or so claims Wikipedia, at least).
Many Islamic countries do in fact use the Gregorian calendar for civil purposes; at the very least Egypt does, which I can confirm by virtue of living here.
I'm not sure Orthodox Christians celebrate Christmas with a tree. At least in Russia the tree is put up for the New Year celebration, and it lasts till Orthodox Christmas for sure. Actually it usually lasts until the so-called Old New Year - the New Year in Old Style (meaning the Julian calendar, before the introduction of the Gregorian one by the Bolsheviks).
Far from being "most vocal", that 2003 article has a single quote from a single astronomer whose only warning is:
> If the change is implemented, “the old meaning of time will become ambiguous,” warns Steve Allen, an astronomer at the Lick Observatory in Santa Cruz, California.
This does seem like a problem that is solvable.
In 2015 the ITU (mentioned in that 2003 story as studying the problem) announced:
> The ITU World Radiocommunication Conference (WRC-15), currently in session in Geneva from 2 to 27 November, has decided that further studies are required on the impact and application of a future reference time-scale, including the modification of coordinated universal time (UTC) and suppressing the so-called “leap second”
> If the change is implemented, “the old meaning of time will become ambiguous,” warns Steve Allen, an astronomer at the Lick Observatory in Santa Cruz, California.
I'm with Allen. It's fine to drop leap-seconds; but then it's not UTC any more, it's a new time scale, and it needs a new name.
We've made this mistake before: in fact GMT has been redefined many times, and as a result the time interval between two GMT timestamps depends on which versions of GMT the two timestamps use, and so on the dates of the timestamps.
> a single quote from a single astronomer whose only warning is...
It's not a quote, but the article also has Allen saying that "it could cost between US$10,000 and US$100,000 for each telescope to correct the problem, and that corrections will be tricky to implement."
It also has another astronomer claiming problems: "But separating time from rotation would mean that telescopes would no longer function properly, according to Patrick Wallace, an astronomer at the Rutherford Appleton Laboratory in Oxford, UK. Most telescopes with a viewfinder wouldn't be immediately affected, but instruments that track satellites and other moving objects could feel the effects within a few years, Wallace warns."
It sounds like FB/Meta is correct in saying that astronomers did object. I'd suggest (like Meta does) that those objections are not especially important considering the trade-offs, though.
Astronomers use time for at least two different things. For recording the time of events and observations, astronomers usually use the Julian day, and for that leap seconds are a big pain. However, for precisely pointing a telescope without a viewfinder, you need the local sidereal time.
The local sidereal time is fairly straightforward to (approximately) compute from UT1, the current date and your longitude. And leap seconds keep UTC within 1 second of UT1, so you can just use UTC.
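As a rough sketch (the standard low-precision GMST formula, good to a fraction of a second, which is plenty here):

    def local_sidereal_time_hours(jd_ut1: float, longitude_deg: float) -> float:
        """Local sidereal time from a UT1 Julian Date; longitude is east-positive."""
        d = jd_ut1 - 2451545.0                       # days since J2000.0
        gmst = 18.697374558 + 24.06570982441908 * d  # Greenwich mean sidereal time, hours
        return (gmst + longitude_deg / 15.0) % 24.0  # 15 degrees of longitude = 1 hour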
But for the latter problem, a better solution would probably be to have a service similar to NTP that gives the current Greenwich sidereal time, rather than UTC. Then it could be corrected more often than the current leap seconds. That doesn't solve the problem for existing telescope systems, though.
Leap seconds are a "problem" in that they represent work and effort. That doesn't mean we need to discard them; they serve an important purpose.
I agree with gmiller here: https://news.ycombinator.com/item?id=32229049 that the whole thing smells like Facebook's engineers wanting to save themselves some work, at the cost of generating more problems for the future.
This kind of garbage attitude is why we have climate change, covid-19 and so many other things wrong with the world. Befitting that this one came from facebook.
It's not just Facebook; even the examples in the post include non-Meta systems. And it's not "some work": it's how we conceptualize time moving linearly, and that logic encoded in software.
You have one of my favorite comments on this entire thread - so I'm interested in your thoughts on who leap seconds are useful for, if not Astronomers (and clearly not Meta).
They are useful for ordinary people. Without them, at some point in the future people would go to work at 8am at sunset and go to bed at sunrise. Leap seconds are a way for everyone to agree to shift their schedules by 1 second to stay in sync with solar time. It's subtle enough that most people don't even know they exist, nor will it have an effect on most systems. Larger jumps would be much more difficult to ignore, and smaller jumps would be more frequent.
In the worst case, you would get 2 leap seconds per year. So in the worst case, 1800 years.
However, in the past 50 years, we've only had 27. So at the current rate, probably more like 6667 years to be off by an hour.
Of course, if the earth's rotation changed speed by so much that 2 leap seconds per year wasn't enough, we'd find a way to cram more in. In fact, in that case, the case for leap seconds would probably get stronger.
I would absolutely take the cost of work schedules being adjusted by an hour once every 6667 years over the cost of 6667 years of programmers having to deal with leap seconds and everyone else having to deal with the fallout of any bugs from that.
How long has the 9-5 even existed? Less than 200 years? At no point in 6667 years would anyone notice any consequences of the drift, culture and work schedules change so much faster than that.
The physical effort of setting a clock is not the issue. Switching to "leap hours" would involve a good part of the world living with about 9am sunrise (or a 3:30pm sunset) for hundreds of years. It'd be a really hard sell to say that's actually better. DST is adjusting to the amount of daylight available, so there's not much of a negative impact. Systems that need to be more accurate just set their time zones to UTC. And one second every year or so is far within the accuracy of a typical computer clock, so anything that needs something better is already going to have a complex time synchronization system.
For humans, it's no worse than folks today who live on a time zone boundary and cross it often. You might argue that for humans, the local solar time should be computed as a continuous function of longitude, leap seconds can be managed there, and we get rid of the concept of a time zone.
Computers are another story though, perhaps in the past "a typical computer clock" could be on the order of seconds off and be fine. When computers start to control things like wireless networks (cellular is a problem today, time accuracy is a design factor in evolving Wifi as well), those seconds turn into microseconds very, very quickly. The whole point of the article is to reduce unnecessary complexity in the existing complex time synchronization systems.
If you're running large infrastructure, the next leap second is going to cause an outage somewhere in your org, and the cost may be very large.
> Switching to "leap hours" would involve a good part of the world living with about 9am sunrise (or a 3:30pm sunset) for hundreds of years.
Can we switch to leap minutes instead? In the worst case it would mean adjusting the time once every 30 years, which means that astronomers and software engineers would have to deal with this problem no more than 3 times in a century. Fair enough IMHO.
So what is it about a system where everyone and every business adjusts their schedule slightly (and I guess independently of each other) that is better than the leap second system we have now? Which, as I said earlier, is essentially the same thing, but done in a standardized way.
Timekeeping at the abstraction level where business hours exist is a lot more tolerant of 40 seconds of error than at the abstraction level where timekeeping primitives are implemented.
My understanding was that leap seconds are only ever added. It’s not like this averages out over time. The problem is that because they are only added the effect is cumulative and eventually some future generation will be eating lunch at 00:00:00.
And, by some accounts, every time we do, auto deaths and other bad things increase for some period after the change. Not to mention the weeks of bitching on either side of the change about how bad/great the time change actually is.
This isn’t what would happen. Even if we had a huge number of inexplicably needed leap seconds (there are 3600 seconds in an hour), local offsets would just change, for the same reason that people on the other side of the planet have a different time on their watch than you do. And if for some reason we proved incapable of doing that, then social conventions on when the work day started would shift relative to the local time. There just is no danger of people going to work at sunset.
I’m not saying that abandoning leap seconds is a good idea, just that abandoning them wouldn’t consign future generations to whacky schedules.
It's not really a straw man because Meta's not really attacking that position, they're ceding that point.
Like a straw man would be Meta saying "astronomers think that leap seconds are helpful but in fact they're not useful for anything!"
Here, if anything, Meta is attempting to steel man leap seconds a bit by saying "look, sure, leap seconds are at least useful for some things, like astronomy, but they're still not worth it overall". And GP is claiming that it's not a very good steel man because even astronomers don't like leap seconds.
But Meta's not actually ceding the point. They're essentially arguing that (1) leap seconds are an esoteric concept that only astronomers care about, and that (2) we shouldn't care about their esoteric concerns, so (3) we should therefore dispose of leap seconds.
The GP's point is that astronomers find leap seconds annoying too, so Meta's argument is based on a faulty premise.
Ok sure, I don't think I quite got that from what GP wrote, but I can see that as a valid argument: "There are actually valid uses for leap seconds, but just not within astronomy. Meta's claiming that leap seconds were made for astronomy so that we assume that's why they exist and ignore the other real reasons leap seconds exist and the benefits they provide."
> If the actual leap seconds do not benefit astronomers, and do not benefit anyone else, they are purely a tax on us all.
May as well dispense with the calendar entirely and just measure time in powers of ten.
A day is about 86 kiloseconds, and a year about 30 megaseconds.
Maybe many gigaseconds into the future our descendants will learn about old Earth time keeping in school, along with Roman numerals and the imperial system.
Astronomers whose software uses UTC and isn’t able to change to UT1, and who need precision better than 3-5 seconds or so but no finer than 1 second, will be worse off. This seems like much less net inconvenience than what the tech industry has to deal with when it comes to leap seconds.
I hear the argument that we have to deal with this eventually to keep time on track with mean solar time, but the adjustments are so small that I don’t see any real impact on regular people.
You could very likely fund fixing the software of every astronomer on the planet for less than the cost of one outage caused by adding/removing a leap second when you look at Meta + Google + a few telcos.
I like Meta-bashing as much as any other Facebook hater, but their point is that leap seconds are by definition used to keep UTC in near-sync with the Earth's rotation, and their argument is that this is not something useful to do.
You made a case that leap seconds don't make astronomy easier.
But you seem to be implying that you do not want to get rid of them, which surprised me. After you made your case for how they actually make things harder for astronomers, I thought you'd be in favor of eliminating them.
You seem to be against eliminating them, but are not saying so clearly, or explaining why. Unless I am misunderstanding. Can you explain why you object to getting rid of leap seconds? Is it because of value to astronomy, or because of value in other domains despite the inconvenience to astronomy? Other? Or do I misunderstand?
> But eventually it will build up enough error that it has to be addressed
I also misread their argument until this point. But there is no reading that makes sense except that Meta's programming problems are what protect everyone else from having to deal with the drifting time.
I think this is right. There will still be sunrise and sunset calculations, which will have to account for the drift.
Seconds don't matter for sunset and sunrise to humans. Move a bit and you're off no matter what timezone you are in.
If you need really, really precise sunset and sunrise times, leap seconds don't help. You are more likely to hit a bug than be helped by them.
I'm all for ending leap seconds, and ending daylight saving while we're at it. In Spain, in summer, some people start work at 08:00 or 07:00; there's no need to change clocks to change habits.
I'm not really sure why sunrise and sunset times are being discussed when what leap seconds are designed to do is have noon occur when the sun is highest in the sky, which is a fraction of a split-second event.
One cool thing about leap seconds is that they are bidirectional, so a time proxy could introduce time travel into any supported system, vector clock issues aside.
It's not just a headache for Meta's programmers. Anyone involved with bookkeeping and synchronizing has an unnecessary bugbear to deal with. Every six months is simply too often. If it was switched to an adjustment made every 2-10 decades with at least a decade to update the file then the problem would be greatly lessened.
> Every six months is simply too often. If it was switched to an adjustment made every 2-10 decades with at least a decade to update the file then the problem would be greatly lessened.
You seem confused. It doesn't happen every 6 months, it gets announced 6 months in advance – since we know there won't be one this year, it will have only happened twice total in the 10 years prior to December this year.
That's hardly an enormous and constant disruption – turning it in to a multi-minute disruption "every 10 decades" would likely be more concentratedly disruptive, leading to more calls to put it off to avoid the immediate pain, leading to ever-more accumulated error.
Spreading the pain out into a leap second every few years feels like a much better and more sustainable solution. This keeps processes in-use and up-to-date rather than long-forgotten-and-everyone-who-knew-them-is-dead.
> Anyone involved with bookkeeping and synchronizing has an unnecessary bugbear to deal with.
It is unnecessary, but not because of leap seconds, which are a phenomenon of real life, not some abstract invention of astronomers. The unnecessary bugbear is caused by a wrong time system design. We need both atomic time (which corresponds to the number of clock oscillations elapsed since some fixed moment in the past) and astronomical time (which represents the orientation of the Earth in space).
The decision to mix them up was wrong, and is exactly what is causing “the unnecessary bugbear”. If you don't want the bugbear, fix the cause of the problem.
Palestine switched off Daylight Saving Time this year with only 4 days notice.
Anyone involved with bookkeeping and synchronizing should already know to never attempt to handle time conversions using your own program logic updated by hand, because the definition of time is constantly changing and there are already projects like tzdata dedicated to centralizing the handling of it.
There have already been 5 time zone updates this year: 4 changes to the date that Palestine switches daylight saving time (one of them with only four days' notice!) and Fiji deciding to quit using daylight saving time, plus the leap second announced on July 9.
That is a different problem. You usually work with timestamps and then convert the time to local time only when displaying it or doing calculations that depend on the particular timezone. So it's less of a problem, since at lower levels you don't have the concept of timezone (you may never have it, for example on embedded devices).
But if UTC shifts, it's another story. You assume UTC (or Unix) time to be a monotonic counter, and usually it's used this way; for example, you compare two timestamps to determine which of two events definitively happened first. Surely you don't take into account leap seconds... this is the problem.
Leap seconds wouldn't be that much of an issue if the time only ever increased: you have one more second, and it's the higher-level implementation that has the concept of time that (maybe) has to deal with it. But they become an issue if they can take the UTC clock backwards, since that can generate all sorts of strange bugs.
It's an even worse solution to make the duration of a second in a day longer or shorter, because you preserve the counter value, but you can no longer rely on the fact of 1 second being 1000 ms long!
By the way, the concept of the leap second is nonsense to me. What is the purpose? If the purpose is to keep the time in sync with the rotation of the Earth, nobody will ever notice an error of a couple of seconds, or even minutes. It makes more sense to take the current definition of a second, wait until the error is noticeable to a human being (let's say an error of 15 minutes, which would take centuries, and we will probably be long gone from the planet), and adjust it by shifting all the timezones by that amount of time.
Actually, UTC -is- continuous. Even with leap seconds. At no point does UTC go backwards, at no point does a second happen twice, and at no point does a second change in length -- your 'extra' second happens as 23:59:60; when there's a negative leap second, the last second of the day is 23:59:58. UTC is not expressed or defined as an offset from some time in the past.
The real problem for computing is when UTC is converted to unix epoch time, which is defined as an offset from the past, and by definition has exactly 86400 seconds in a day, every day, so some provision has to be made for those extra (or missing) seconds. And -that- is where the problem happens. But it's not UTC that's mucking around with the definition of time, it's the standard representation of time that's used in modern computing that causes the problems.
That being said, UTC is still probably to blame for most of the problems, because it effectively requires knowledge of more than just a timestamp to understand when something actually happened. And that extra knowledge (the map of when leap seconds have happened before) changes often and irregularly. Epoch time could totally be redefined to include leap seconds, and that would solve lots of problems, but there's no practical way to distribute that updated leap seconds table to every system that would possibly need it...
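A toy illustration of that lossiness in stdlib Python (calendar.timegm does no range validation, which makes the collision easy to see):

    import calendar

    # Unix time has exactly 86400 seconds per day by definition, so the leap
    # second 2016-12-31T23:59:60Z and 2017-01-01T00:00:00Z collapse onto the
    # same epoch value; the second between them is unrepresentable.
    t_leap = calendar.timegm((2016, 12, 31, 23, 59, 60))
    t_next = calendar.timegm((2017, 1, 1, 0, 0, 0))
    print(t_leap == t_next)  # True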
That would mostly be a local problem and would not continue to exist for a long period of time.
tzdata is complex, but having waded through it a few times, it seems needlessly so for the desired use cases. There are more editorial comments than data in the files, and they spend a lot of time and effort creating entries for historical time zones so that you can convert times in the year 1942 into territorial time zones that don't even exist any more with perfect accuracy.
Except, even then, it seems like it's not. If you dig into the time routines that use tzdata, you see they must use an unstable conversion process that _may_ converge, but has a simple counter protecting it in case it _does not_ converge, in which case it just gives you a best-guess time that may still be wrong anyway.
Really, the problem I have, and I suspect it's the majority of cases: I have a time in Some/Zone and I want to tell someone in Another/Zone what the time would be on their clock, today, right now. The time our grandfathers would have used during the War in summer is never going to be material to me in this instance, or in the majority of use cases.
This isn't meant to be overly critical of tzdata, just a recent annoyance I experienced when trying to find a minimal-code, cross-language solution to the above problem.
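For what it's worth, the common case is indeed small; a sketch in Python's stdlib zoneinfo (which reads the system tzdata; the zone names are just for illustration):

    from datetime import datetime
    from zoneinfo import ZoneInfo  # stdlib since Python 3.9

    # "What time is it on their clock, right now?"
    now_here = datetime.now(ZoneInfo("Europe/Madrid"))
    their_clock = now_here.astimezone(ZoneInfo("America/New_York"))
    print(their_clock.strftime("%Y-%m-%d %H:%M %Z"))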
Extra historical zones and comments don't add any real complexity, do they?
> If you dig into the time routines that use tzdata, you see they must use an unstable conversion process that _may_ converge, but has a simple counter protecting it in case it _does not_ converge, in which case it just gives you a best-guess time that may still be wrong anyway.
What kind of time zone data would be required to make it fail to converge when that time actually did exist?
> If you dig into the time routines that use tzdata, you see they must use an unstable conversion process that _may_ converge, but has a simple counter protecting it in case it _does not_ converge, in which case it just gives you a best-guess time that may still be wrong anyway.
What does this mean? Can you explain in a little more detail, or link to somewhere I might read about this? I haven't used tzdata myself, but I understand it in broad strokes as a list of time zones, and a list of which offsets apply to which timezones at which given dates. What am I missing?
See the code for __mktime_internal in glibc, specifically the case for which it returns EOVERFLOW; but in general, the whole file has many comments worth reading.
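For anyone who doesn't want to dig through glibc, here's a rough sketch (in Python, emphatically not glibc's actual code) of the guess-and-refine shape being described:

    import time

    # Guess an epoch value, convert it back to broken-down local time, and
    # correct by the difference; repeat. The iteration cap exists because
    # around DST/offset changes the search can oscillate or never match
    # exactly, in which case you get a best-effort answer that may be wrong.
    def naive_mktime(target: time.struct_time, max_iter: int = 10) -> int:
        guess = 0
        for _ in range(max_iter):
            bd = time.localtime(guess)
            delta = ((target.tm_year - bd.tm_year) * 31536000  # coarse year step
                     + (target.tm_yday - bd.tm_yday) * 86400
                     + (target.tm_hour - bd.tm_hour) * 3600
                     + (target.tm_min - bd.tm_min) * 60
                     + (target.tm_sec - bd.tm_sec))
            if delta == 0:
                return guess
            guess += delta
        return guess  # did not converge: best guess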
> So another option is to adjust to 1/100 second every few months
That's pretty awful. We have a lot of systems now which care about duration. Any scheme that makes a time span unpredictable will cause problems. Doing it more often means more problems.
The only improvement is that you hope the things that care about 1/100th-of-a-second differences are rare enough that disturbing things many times more often is a good tradeoff. This isn't a good bet, especially because as time goes on we build faster and more stable systems that care about tighter and tighter synchronization.
Because tzdata is based off of political boundaries and decisions. If I'm making a dataset that needs continuous timestamps for multiple decades then UTC is effectively off the table because of how under-specified and poorly implemented leap seconds are in various software. So what's left is TAI and GPS, fine enough but now users see the timestamped data as being several seconds off. So we use a translation layer but some HMIs don't allow that so it's best to just not do it at all.
This reality is not unique. Thousands of engineers have walked down this path and asked "why?".
It isn't every six months, the Earth doesn't rotate that regularly. They are introduced "at most" every 6 months, but usually far less. There have been 27 leap seconds since 1972, so less than one a year.
UTC timekeeping in computers depends on a database of leapseconds, which has to be updated. But many systems don't get updated, resulting in drift. If leap-[seconds|minutes] only occur every 2 decades, many systems will ship with no update mechanism, on the principle "Don't worry, the system will be obsolete before twenty years is up". Hah! Famous last words.
Further, if someone thinks it’s already a pain to adapt systems for leap seconds, imagine the pain of ignoring this for decades/centuries and then having to deal with a massive headache on the scale of the feared Y2K problem. In the spirit of continuous integration and “releasing often” and “let it crash” on smaller scales, leap seconds seem like a pretty good solution for robustness at the global systemic level.
> It sounds like Meta wants to change the definition of time for everyone just to make their programming a little easier. I find the very premise outright ridiculous.
It's also an incredibly common attitude amongst software engineers and software engineering organizations. "Simplify" often means pushing complexity off of us and onto others.
> ...Meta is just trying to make their problem someone else's problem.
> While the leap second might have been an acceptable solution in 1972, when it made both the scientific community and the telecom industry happy, these days UTC is equally bad for both digital applications and scientists, who often choose TAI or UT1 instead.
UTC was never intended to make astronomy applications easier. The only thing that has changed is that cheap computers make it easier for astronomers to use UTC as a starting point.
> If you need UT1, it can only be obtained by observation from the International Earth Rotation and Reference Systems Service, and requires daily updates and interpolation of values in between observations.
The GPS CNAV messages (on L2C and L5) have support for the Earth orientation parameters, but GPS isn't broadcasting those messages yet. Maybe we'll get them with the OCX deployment. BeiDou 3 is transmitting the Earth orientation parameters now. So this one tiny bit of your headache will get easier in the near future as receivers start to forward these data along.
Actually one of the few objections to dropping the leap second at the previous review in 2015 did come from the International Astronomical Union (IAU).
Their report says:
> Some members of the astronomical community have expressed great concern over any change to the current system. These concerns are based on the use of existing software that takes advantage of the current definition and uses UTC as a substitute for UT1. Their requirements for precision are such that the current 0.9-second tolerance is adequate, and their software has been designed accordingly. Should the definition of UTC be modified in any way that would permit this tolerance to be exceeded, they anticipate it would create a substantial cost to make non-trivial changes in existing software. Some members of the astrodynamic community have voiced similar concerns regarding legacy software used in the determination of orbital parameters of artificial satellites that again utilizes UTC as a substitute for UT1.[1]
So I'd suggest the FB summary of their objections is pretty much correct.
Reading the report completely, it seems the IAU wants to create a new time standard (not UTC) that drops the leap second and keep UTC as it is:
> It was suggested that a means of transitioning to a uniform time scale could be accomplished by the creation of another time scale that might be called Temps International (TI) to clearly distinguish it from Universal Time.
Even NIST (1) states that leap seconds are added to keep UTC in sync with astronomical time (UT1).
Your point is that astronomical systems don’t use UT1 directly, but instead start with UTC and convert across various other time scales? Could be, but then the purpose of the UTC syncing was lost somewhere. Any idea why?
Navy celestial navigation. Naval almanacs by necessity were based on the rotation of the Earth, so that was the timescale that naval organizations broadcast across the earth to support their ships. Eventually that radio broadcast time morphed into UTC, which was then repurposed as general-purpose civil time.
I was partially motivated to write about leapseconds because I'd spent much of the day trying to construct leapsecond tests for my telescope pointing code. You're absolutely right, leapseconds don't help astronomers.
Casual astronomy will do a one-star alignment: in effect they'll point at a star, find out where it is, and from that effectively compute the UT1-UTC offset (along with correcting other sources of pointing error). This applies equally to electronic and manual pointing.
More serious astronomy will use the IERS Bulletin A data: https://datacenter.iers.org/productMetadata.php?id=6 Without it, UTC would only get you within 0.9 seconds (and you won't know the other polar motion terms), so you might as well just platesolve. If you do have the IERS data, you then have to back out leap seconds.
You note you need constantly updated tables to do that, but it's worse than that: all over the place, systems are getting their UTC replaced with leap-smeared UTC without the operators' knowledge. You can't just back out leapsmear, especially if you don't know you have it.
I'm sure leapseconds sounded like a good idea on paper and when only a small number of especially exotic systems needed to deal with them. But the pervasive computerization since the 1960s has increased the cost of leapseconds by many orders of magnitude.
> But eventually it will build up enough error that it has to be addressed
Yes, in 4000 years or so people can adopt new timezones that are shifted over by an hour. We already have comprehensive support for timezones, and support for them changing (as they're fairly vulnerable to political whim); computers handle them reasonably well, and importantly they're primarily a presentation element, so they don't mess up precise timekeeping. Maybe in 4000 years no one will even care: perhaps everyone will have finally switched to SWATCH INTERNET TIME. But regardless, a timezone addition/change every few thousand years is a much better situation than what we have with leapseconds.
Yes, I fully support this position of Meta's, but their reasoning is not quite right when it comes to astronomers and scientists. Passing around a leap second kernel is a major pain for everyone dealing with GPS, satellite ephemerides, celestial bodies, etc. Leap seconds take a problem which is entirely deterministic and make it dependent on a policy decided by a committee, like timezones.
Correct timekeeping is important whenever you have time-based events. Especially the billing ones.
Looong ago, while working on a mobile telephony billing system, we found out that a telephony system by Siemens (Germany, Europe) was creating billing events off by one hour when processed by a system by Digital (USA, America), because of different DST switch dates.
I always heard that the main reason for the leap second was that the British didn't want the prime meridian moving away from Greenwich (i.e. it is a weird political thing).
The prime meridian has actually shifted hundreds of feet from the original line already. It's not really where they show you at the observatory any more.
Everyone is treating this as some idiosyncratic proposal of Facebook's, but removing leap seconds is a mainstream position. Representatives of the US, China, France, and a majority of other countries were in favor of this when it was discussed at the 2015 ITU meeting [1], though the UK's and Russia's were not. This has been under discussion since at least the 2003 ITU meeting in Turin.
Yes. Leap seconds are just a thoroughly bad idea, from any angle.
Yet, not nearly so bad as Google's approach: no leap, but a 24-hour smear, during which they are out of sync with literally everybody else in the world.
For future leap seconds, Google (including GCP) are planning to use a "standard" smear (https://developers.google.com/time/smear). This is also the same smear used by AWS.
It seems like if the ITU decides to keep the leap second (a bad idea, in my opinion), the large infrastructure providers will just use the same standard smear for their clocks.
At most half a second, in the middle of the smear at midnight when the leap second is applied.
The Facebook smear is asymmetrical, so it starts off 1 second off just after the leap second and subsequently corrects itself.
[ The reason Google and Amazon use a linear smear is because NTP clients try to measure the rate difference between their local clock and the reference clocks; if that is different every time the NTP client queries its servers, it will have trouble locking on and accurately matching the smear curve. You can mitigate this somewhat by fixing a higher NTP query frequency, but that’s a heavy-handed fix for an engineering mistake. ]
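For reference, a sketch of that published 24-hour linear smear (noon to noon around the leap second; the epoch constant corresponds to the 2016-12-31 leap second):

    LEAP_EPOCH = 1483228800  # 2017-01-01T00:00:00Z as a Unix timestamp
    SMEAR_WINDOW = 86400     # 24 hours, centered on the leap second

    def smear_fraction(unix_time: float) -> float:
        """How much of the (positive) leap second the smeared clock has absorbed.
        The smeared clock diverges from true UTC by at most 0.5 s, at midnight,
        when UTC inserts the whole second at once."""
        start = LEAP_EPOCH - SMEAR_WINDOW / 2
        if unix_time <= start:
            return 0.0
        if unix_time >= start + SMEAR_WINDOW:
            return 1.0  # the full second has been absorbed
        return (unix_time - start) / SMEAR_WINDOW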
They are "wrong" technically, but they won't be "wrong" by more than a packet getting routed transatlantic would be. So if I send two packets from my laptop in London - one to AWS in London during the smear and one to another laptop in NYC - and timestamped the packets' arrivals, the timestamps would likely be similar. Yes, "wrong", but if it's a problem then it's a problem you have with the speed of light. The answer is to find a different, reliable method of ordering.
I assume finance and control systems, to start. It might be helpful to have a fallback time-ordering algorithm not dependent upon one monotonic clock, but then you might have a rarely-used fallback for bugs to hide in, I imagine.
Control systems, or any form of embedded safety critical system, use monotonic clocks where calendar time is a complete non-issue.
Ordering of events, on what scale are we talking here? If it’s just within a transactional database there are a multitude of ways to do it. Even distributed DBs have such features without relying on perfect time. If you are looking at a Spanner-style DB you need a lot more guarantees than “I just used the time my cloud provider assigned to my VM”, plus being in sync only matters within your own cluster?
I was thinking of more distributed control systems, particularly where testing for edge cases might be difficult and rare, and rigorous methods (did Lamport solve distributed mutex?) are probably off the radar in terms of culture.
Meta also smears, as mentioned in the article. The article calls smearing "common practice".
I think smearing can help reduce outages. The article mentions that Reddit and Cloudflare had outages caused by leap seconds when they weren't smearing. I think it's a tradeoff between being standard and avoiding outages.
> The AWS Management Console and backend systems will NOT implement the leap second. Instead, we will spread the one extra second over a 24-hour period surrounding the leap second by making each second slightly longer.
I’m really struggling to understand the problem - what does "out of sync" here mean, especially when you are talking about a max of 500ms? NTP clock drift can already be as bad as 500ms, and if you need something tighter, then you may have already gone the GPS route. Either way, I can’t imagine anyone building a system that has less than a 500ms tolerance for any arbitrary computer on the internet. At the very least, special relativity is going to be one of the first roadblocks you face in trying to get perfect universal time sync.
> I’m really struggling to understand the problem - what does "out of sync" here mean, especially when you are talking about a max of 500ms?
From a purely academic perspective, it means from noon preceding a leap second to noon after the leap second there's a difference between UTC and what google (and anyone else who does smears) reports as the time. If you really hated yourself and were trying to use timestamps from conflicting sources to order transactions in a database, for example, this could mess up the order of your transactions which could have all sorts of fun consequences.
Algorithms that involve 'i did this thing at this time, everyone else should wait for that time to have passed before continuing to ensure correct ordering' (ie. Many database systems) will dramatically slow down.
Systems that use atomic clocks and might have been able to do 10,000 transactions per second will suddenly only be able to do 2 transactions per second if they contain a node that doesn't use the same smearing.
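The pattern at issue looks roughly like this (a Spanner-style commit wait; the uncertainty bound here is an assumed constant, where a real system derives it from its clock infrastructure):

    import time

    CLOCK_UNCERTAINTY = 0.007  # seconds: assumed bound on worst-case clock error

    def commit_wait(commit_timestamp: float) -> None:
        # Don't acknowledge until commit_timestamp is definitely in the past
        # on every node; the larger the uncertainty, the lower the commit rate.
        while time.time() < commit_timestamp + CLOCK_UNCERTAINTY:
            time.sleep(0.001)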
Those algorithms depend on their reference time being common across all processing nodes - in fact the only system I know that uses wall clock time for inter node ordering is.... Google Spanner, which runs just fine with their smearing approach.
But _all_ of Google's servers will be off by the same offset, which means that they can continue to use their distributed time-based locking mechanism.
Generally, if you require that high resolution of time, you don't leave your infra.
Seems a weird way to write an algorithm. Easier to rely on database locking and/or the obvious fact that if you have received some data telling you about an event having happened, then it has happened… Curious if there are any real reports where this caused issues?
If you have not heard of problems with time in distributed systems everywhere, maybe read up instead of guessing? There are literally thousands of papers on the topic, more every year. It is not considered solved.
If you have a distributed system that requires strict time synchronization why do you care if Google’s time is wrong? Surely if you need that level of synchronization you have your own implementation with gps clocks.
If you don’t, then you already have to deal with standard drift with NTP anyway, so Google's time smear is the least of your problems.
If you don't have the luxury of managing all your systems within one organization, then time smear is the least of your problems. Day-to-day ntp drift alone will be a bigger problem.
But where are the reports/blog posts from the thousands of companies using these algorithms that ran into performance problems the last few times cloud providers used time smearing? I’ve done some searching and can’t find any, which suggests this is mostly a theoretical issue.
Yes I could come up with an algorithm where this is a problem, but in most cases I’d favor a solution that doesn’t depend on separate systems having clocks perfectly in sync.
> “If we have an offset from solar time, it is not extremely important,” [...] “We are already shifted by one hour in summer compared to winter time. Are we affected because of that?
> Official time would slowly move out of sync with Earth’s rotation, but — given that it would take thousands of years to accumulate a difference that is greater than the kinds of shifts already caused by changing the clocks backwards and forwards for daylight savings time
I think this argument (that since we're syncing to atomic clocks, keeping in sync with the sun isn't so important) should have been included in the post. I stand by my earlier comment that it felt like a hack, but aligning with the SI unit of time + atomic clocks is, I think, far superior.
Facebook should make another trade group so they can post well-thought-out proposals under that group’s name, without the ad hominem distractions of being Facebook.
Their name, Meta, and Zuck’s name are too sullied for good-faith discussions.
Curious as to the reason for the UK's position, given that UK timelords have a strong track record, but this:
> Britain’s argument is largely based on the desire to keep a link between official time and Earth’s rotation, says Peter Whibberley, a metrologist at the National Physical Laboratory in Teddington, UK.
just seems silly. Bit more from Whibberley here [1]:
> There's no agreement internationally. Some countries favour ending leap second because they do cause problems. Some software in particular has great difficulty handling leap seconds. The simplest solution is simply to end them.
> But other countries say it's important to maintain the traditional link between timekeeping and the Earth's rotation and arguing we should keep leap seconds until at least we understand much better the long term consequences of ending them.
> Well, I'm part of the UK's delegation at this grand meeting that would discuss the issue in November of this year. So, it's my job to argue the UK's position but from a personal point of view, I don't have any stronger sympathies one way or the other. They're both very good arguments and the problem is, no compromise position. You had to keep leap seconds or you end them. Whatever happens, it's going to be very interesting.
> The UK government has considered the issue and its theory is we should maintain this traditional link between our timekeeping and the Earth's rotation.
Some random googling turns up a document [2] which says:
> The UK has previously consulted official bodies and agencies with an interest in precision timekeeping. None of these authorities reported significant problems arising from leap seconds, while some scientific institutions reported strong support among their memberships for retaining leap seconds.
With a reference to the article 'A British perspective of the future of Coordinated Universal Time' [3], which is longer, but doesn't really contain any more substantive points in favour of leap seconds. Looking for that also led me to a more detailed presentation on the subject [4]. Hmm.
Can't help but agree with 80% of this post but strongly disagree with the solution. This feels like a hack that punts the problems to the future.
UTC has the leap second cause it's not "real time" and so now we're just gonna never sync up UTC to real time at all? How is that the solution? Either we deal with leap seconds or we need to implement something that can't go backwards and properly models time. Leap seconds seem much simpler...
In the end we didn't get rid of SQL cause of SQL injection. We fixed the frameworks and promoted the solutions. We may simply need to make a push for languages etc. to properly support time, and promote how to do things correctly. It honestly seems easier.
I'd argue that TAI has far more right to the term "real time" than UT1. The Earth's wobbling and halting deceleration should not impact, let alone underly, our definition of time. UTC agrees with this assessment almost always: it tracks TAI except for leap seconds that prevent it from de-syncing too far from UT1.
TAI > UTC > UT1
That said, the hard work to build out a compromise has already been done, so whatever, let's just keep it until political or speed-of-light issues make it awkward to distribute information about leap seconds, at which point dropping them will be an easy and natural solution.
> The Earth's wobbling and halting deceleration should not impact, let alone underly, our definition of time
If you define time in terms of days and years, i.e. in terms of revolutions of the Earth around the Sun and around its own axis, of course its wobbling and deceleration have an impact. If you think they shouldn’t, then measure time differently, for example as seconds since a certain instant.
> If you think it shouldn’t then measure time differently, for example as seconds since a certain instant
Isn’t that what the article is arguing we do? What practical benefit does keeping this historical definition of time give us? Outside of people trying to account for the rotation of the earth precisely who could even notice?
Sounds like it would be massively simpler for the users who need to use time like this to deal with leap seconds rather than require that all software deal with it all the time
It would take thousands of years for UTC-without-leaps to shift by just an hour. Our timezones are often wider than an hour.
If you really need a 'solar-ish' time, you could also define a fixed-function correction for UTC->'solar' and extend those thousands of years by an order of magnitude. (Sadly the SI second is somewhat far off from 1/86400th of a solar day, so a big portion of the correction currently handled by leapseconds is just this predictable part of the difference.)
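A toy version of that fixed-function idea, assuming a constant drift rate (the average implied by 27 leap seconds in 50 years; the real rate wanders, which is the whole catch):

    DRIFT_PER_YEAR = 0.54  # seconds/year, assumed constant for illustration

    def approx_solar_offset(years_since_epoch: float) -> float:
        """Approximate UT1-UTC drift with a linear model instead of leap seconds."""
        return DRIFT_PER_YEAR * years_since_epoch

    print(approx_solar_offset(50.0))  # ~27 s after 50 years, matching history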
> If you think it shouldn’t then measure time differently, for example as seconds since a certain instant
Plenty of stuff would love to do that, but we distribute UTC. To get back to "time from an instant" given UTC you have to know about and correctly and consistently handle leapseconds. It's quite tricky, because when an interface hands you UTC you don't know if the latest leapsecond has been processed in it. It's getting even harder recently because the widespready distributed systems failures caused by leapseconds are causing some people to deploy varrious kinds of "leap smear" which smears out leapseconds over some number of hours. Interfaces that are supposed to return UTC now sometimes return leapsmeared UTC with some unknowable choice of smearing scheme.
Arguably the root issue is that we've based out computer timekeeping hierarchy on UTC instead of TAI (or GPS time or whatever non-leaping thing). Had we based everything on atomic time and handled 'UTC' as a presentation layer thing like timezones are normally handled things would work much better. Unfortunately that ship has really sailed. Fortunately, if leap seconds are just not issued essentially everything keeps working without issue for hundreds if not thousands of years.
The handling of timezones and of leap-seconds are orthogonal problems, arising from distinct causes. Leap-seconds are a response to the desire to synchronise an atomic timescale with the uneven rotation of the Earth; timezones (and DST) are strictly a political problem.
Politicians often don't understand the consequences of timezone changes; they often introduce them without giving enough time for people to update their timezone database - sometimes just a few days. The result is that the new timezone is nothing more than a political gesture, because hardly anyone is using it.
> The Earth's wobbling and halting deceleration should not impact, let alone underly, our definition of time.
Whose definition... the whole relativity of time means this is a problem.
Yea, I agree that we should have a unit of time defined for things near sea level and not moving very fast. But even that becomes problematic over 'long' periods of time, as the Earth is slowing down and will skew from what humans experience, which has been the basis of how we define time until recently.
Even then we're still leaving out the problems of things in space and on other planets.
At the end of the day we're attempting to define time as something exact for all observers, and when you attempt to give an exact definition to something that is not exact problems are going to occur.
It's natural to distinguish between wallclock time and duration time; or, time used to fix events chronologically and time used to figure out how long to do something. The first kind of time is the only plausible use case for leap seconds, because if you insert a leap second into the command "run main motor three seconds" you're going to be in a lot of trouble.
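Most platforms already expose exactly this split; a minimal Python sketch (do_work is a hypothetical stand-in):

    import time

    def do_work() -> None:  # hypothetical workload
        time.sleep(0.1)

    start = time.monotonic()   # duration time: immune to steps, smears, leap seconds
    do_work()
    elapsed = time.monotonic() - start

    happened_at = time.time()  # wallclock time: right for chronology, wrong for
                               # measuring a duration that spans a leap second
    print(elapsed, happened_at)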
UTC with leap seconds already does this. Leap seconds don't go backwards, they just alter the number of seconds in a minute. When the leap second passes, you have a minute with either 59 or 61 seconds in it.
The problem described in the article where "time goes backwards" only exists when you compare two different time sources, which is always risky, whether a leap second is happening or not.
UTC is a count of seconds. Conversion to HMS is dictated to fool with the S field, but everybody suffers. The seconds count is supposed to actually skip.
UTC is not a count of seconds, because the standard representation of UTC does not have enough information to give you the right count. You also need to know the leap second table to convert a pair of UTC times to an accurate interval.
> we need to implement something that can't go backwards and properly models time
There simply is no definition of solar time that can obey this constraint long term, because the rotation of the earth varies, and varies over time in ways we have a limited ability to predict. This is the entire crux of the problem.
It is not the crux of any problem, except for, uniquely, astronomers.
But they have their own solution we need pay exactly zero attention to. Leap seconds in UTC are just as big a nuisance for them as for everyone else. They have TAI, sidereal time, and this dumb bastard UTC with its own hacked up thing that doesn't match their precise alternative.
It's not a problem for astronomers. It's a problem for everyone. It's a small problem for everyone today. And it will remain a small problem for a long time. Eventually it will not be a small problem.
Meanwhile, making sure everyone agrees as to the kind of time they're talking about it hard enough.
Everybody has already settled on UTC. All we need is for UTC not get fucked every 15 to 30 months and break about half the systems that need to communicate with each other.
If they stop announcing leap seconds, everything correctly equipped for leap seconds will still work, and everything else that gets it wrong every time will also work.
And those of us who have to make the stuff that works right adapt to all the crap that doesn't and can't be fixed can do other things, instead.
You seem to have missed that everyone is already on UTC, and will not change to satisfy your sense of esthetics.
Fixing UTC by simply not declaring any new leap seconds eliminates all problems. What used to break on a semi-regular basis stops breaking. Nobody suffers. Nobody pays. No problems surface.
Having read a lot of the comments here, I tend to agree with you.
Leap seconds have not sat well with me ever since I learned about them. Messing with the number of seconds in a certain minute every few years just seemed ... unclean.
So if fellow commenters are right that without leap seconds it would take 6667 years for the time to drift just one hour, then leap seconds are absolutely more trouble than they're worth, and we should drop them this instant and try to come up with a solution for that leap hour in the next six millennia.
If you don't want leap seconds use TAI. If you do want leap seconds use UTC.
This is like people saying that we shouldn't have daylight savings so we should redefine GMT to not have DST. That is not only stupid but breaks all use cases where you care about historical DST.
This is why we have UTC: it's GMT without the DST. People in 1960 could figure this out. Are we so incompetent that our grandfathers were better programmers than we are?
Anything-injection has a well-defined solution: properly escape parameters. Deciding how time should be represented is a completely different issue.
Depends if you define a solved problem as a problem with a known solution (theory) or as a problem people still pay the cost for in the wild (practice).
Database drivers for many programming languages have something like Java's PreparedStatement, which allows you to compile the SQL query together with the code, before the application is run. Whatever input is provided later by the user cannot result in SQL injection, because it is not parsed/compiled at all. So yes, SQL injection is a solved problem; it's up to you whether you know about/use the solution.
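The same idea in Python's stdlib sqlite3, as a concrete example (table and payload invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "Robert'); DROP TABLE users;--"  # classic injection attempt
    # The "?" placeholder binds the input as data; it is never parsed as SQL.
    conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
    print(conn.execute("SELECT name FROM users").fetchall())  # stored as a literal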
The SQL language is not flexible enough to allow preparation of every possible query, so the problem is fundamentally not solved. To take a simple example: query parameters are not supported in DDL or DCL statements, and in DML queries the usage of parameters is limited to values only: even such a simple thing as an ORDER BY clause cannot be controlled by parameter insertion.
"We have solutions for the most common occurrences" is not the same as "the problem is solved".
Because we are still correlating time with the rotation of the Earth. Time is an absolute measure: a second is defined as a precise amount of time, and you can measure the time of an event with that measure. Only historically was the second defined as a fraction of the day, correlated to the speed of the rotation of the Earth; these days we have better ways to define it.
Who said that time needs to be in sync with the rotation of the Earth? Nobody, who cares? It's such a small variation, happening over such a long period, that it's not noticeable in any practical use: it's not as if we take the Sun as a reference for time anymore! We can keep counting time as we want and not be bothered with something as imprecise as the rotation of the Earth. And for the applications that require that sort of precision, and only them, adjust the time accordingly.
I mean, a meter was once defined as one ten-millionth of the distance from the equator to the North Pole along a great circle. It's not as though, if the distance between the North Pole and the equator changed, we would all throw away our meters... it's just that we found a more precise definition of the meter. The same happened with the second (and all the time measures, such as the day): we express them in a new form where it no longer matters what the Earth and the Sun do.
> Who said that time needs to be in sync with rotation of the earth? Nobody, who cares?
Humans sleep, usually at night, and the rotation of the Earth defines night in any location. So regardless of the continual tick of some sort of "universal clock", we need a way to measure daily events that humans can use, one that relies on the sun going up and down at certain times, and thus we need to adjust our local Earth times based on its rotation.
Maybe we can decouple UTC (or whatever) from local times completely, so that a leap second is just a second we add to or remove from the offset for any given location. But we still need to deal with adjusting local clocks to the rotation of the Earth and its orbit around our star, the "Sun".
Sleep is not affected by a shift of a second or so. And I never understood this: it's far simpler to adjust the time you wake up/go to work/go to bed/whatever than to adjust the clocks.
These solutions (including daylight saving time, time zones, etc.) were probably a good fit when clocks were local and the problem of synchronizing clocks all around the world didn't exist. But these days they don't really make a lot of sense. Even time zones: it would be far easier to have a single world clock (UTC) and adapt our schedules to the clock, rather than changing the clock.
Time zones, daylight saving, and leap seconds generate a ton of problems for only a minor convenience...
This seems like a kind of navel-gazing point, however -- without leapseconds it would take on the order of 4000 years to slip an hour, and quite a bit longer before night hours are no longer at night.
We already have a good mechanism for matching local sun time -- timezones.
I don't see anything in your post that justifies one-second-level offsets.
(aside, UT1 is technically defined in terms of earth's orientation with respect to distant quasars, not the sun :) )
A million problems started when system clocks were changed to follow UTC (as opposed to local time) and then UTC was conflated with Unix time - a fixed monotonic reference, which UTC is not!
Though the ship has sailed, I think it would have been much better if computers had been set to follow TAI (atomic clock time, unaffected by leap seconds) rather than UTC. UTC is as variable as local time and should have been treated as such.
If fb wants to - they can (and should) use TAI time for system reference.
This is actually what the Precision Time Protocol (PTP) does. It's the successor to NTP, so it improves on some of NTP's mistakes. The protocol uses TAI, but also sends the TAI-UTC offset so the computer can display times in UTC.
PTP and NTP have completely different scopes: PTP requires end-to-end layer 2 support and hateful choice of hardware, so it can only work within a single network; NTP on the other hand was always designed to work across the internet between different organizations, where the network doesn’t help with timekeeping and the organizations don’t work closely with each other.
Why does the time protocol need to send the UTC offset, rather than have the offset be part of system data files, like with timezones? Wouldn't you need the data anyways to translate historical timestamps to UTC?
I think this is the argument NTP makes for not including the offset. TAI can be just another "timezone", so TZDATA should be used to derive it.
But that's backwards. A Stratum 1 NTP server usually gets its data from GPS, which HAS the offset (GPS runs on atomic time, a fixed 19 seconds behind TAI). But it only outputs UTC, not the offset, making other programs compute it from TZDATA. Why does NTP make it harder for user programs to get data that IT ALREADY HAS? Because philosophically, NTP is married to UTC (even though NTP is mostly for computers!)
And providing this offset would basically dissolve the large body of people (like TFA's authors) who want to CHANGE the definition of UTC, which is a far more drastic proposal.
Those are two different problems that require two different solutions:
1. Displaying current time: for that ideally you need the offset directly from the time server because the system timezone data can be out of date in regards to current time.
2. Displaying historical timestamps: for that you use the system timezone file.
> Someone should fork the NTP protocol to use TAI instead, and go from there (or at least provide tai offset).
There's a draft of the next version of the NTP protocol (NTPv5) at https://www.ietf.org/archive/id/draft-mlichvar-ntp-ntpv5-04.... which not only has the option of working in TAI, but also has explicit support for "leap second smearing". It also has a field to explicitly provide the TAI offset.
This sounds like a sensible solution. If Facebook wants to start using non-UTC time coordination, like TAI, they by all means should try it. They only need to publish their own NTP servers I guess, but that shouldn't be a big problem for an organization of their size.
As far as I can tell, TAI will always be offset from UTC by an integer number of full seconds, and I guess time coordination between large independent systems is mostly useful just when comparing log timestamps (I would guess almost all sensible software already accounts for clock drifts much larger than the current 37-second TAI-UTC offset, right?)
After adopting TAI, Facebook engineers just need to remember that their log timestamps are offset by N seconds from all the others.
The ship has not, in fact, sailed: UTC could abandon leap seconds any time. They just need to announce there won't be any more, for the foreseeable future. Along about 2100 they might announce plans for a correction in 2125 or so.
It seems the problem of adopting TAI for computers is because the NTP protocol does not provide the offset. We should add TAI-UTC offset to the NTP protocol.
For systems that leap seconds actually cause problems on, the solution is simply to use International Atomic Time (TAI) internally, and convert it to UTC when you want to display information to a user.
Every time I see ditching leap seconds come up, they never try to explain why TAI won't work for them, leading me to believe they probably just don't know it exists, nor could they even imagine something like it being invented.
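To make the suggestion concrete, here's a minimal sketch in Go of "TAI internally, UTC only at the display edge". The leap-second table is illustrative and truncated; in practice it would come from tzdata or the IERS bulletins:

    package example

    // Illustrative, truncated leap-second table: the classic unix (UTC)
    // instant at which each new TAI-UTC offset takes effect.
    var leapTable = []struct {
        utcUnix int64 // UTC instant when the offset starts applying
        offset  int64 // TAI - UTC, in seconds, from that instant on
    }{
        {63072000, 10},   // 1972-01-01: initial 10 s offset
        // ... intervening entries elided ...
        {1483228800, 37}, // 2017-01-01: most recent leap second
    }

    // utcFromTAI converts an internal TAI count (seconds since the 1970
    // epoch on the TAI scale) to classic unix/UTC seconds for display.
    func utcFromTAI(tai int64) int64 {
        var offset int64
        for _, e := range leapTable {
            if tai-e.offset >= e.utcUnix { // compare in UTC terms
                offset = e.offset
            }
        }
        return tai - offset
    }

So e.g. utcFromTAI(1483228837) yields 1483228800, i.e. 2017-01-01T00:00:00Z.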
Seriously, I was reading this article wondering the same thing.
Nanoseconds since X is a fairly unambiguous reference (relativity aside).
Use this for any kind of timestamp or recordkeeping. Convert to UTC (or EST, CST, etc.) as necessary for reference to the solar day.
The only reason not to do this is because you have a million systems that are already running on UTC, and you don't want to do a massive leap second erasure to get back to TAI.
> ... the solution is simply to use International Atomic Time (TAI) internally, and convert it to UTC when you want to display information to a user.
From a software development perspective this seems monstrously more complex than just muddling along with the current situation even with leap seconds.
First, consider the question of "what do unix timestamps mean?" The answer is UTC, except they're simply broken during the second where a leap second is added or removed, which they have no way to represent. They represent neither TAI nor UTC perfectly, but they are very space-efficient to store, needing only 8 bytes.
Now, we want to start converting our systems over to TAI, great! To do that we need a format to efficiently store a TAI time in a binary representation. For this exercise let's keep the Jan 1, 1970 epoch, so a call today returns a value equal to unix time plus 37 (the current offset between TAI & UTC). Awesome, now we need a new function for this in every single language we use, and then we migrate all callsites over to it. Try not to miss any places where someone's passing unix time as an int64 instead of, say, a time_t. If you call into libraries that return timestamps, make sure you shim them so you can convert those timestamps from UTC to TAI.
Now, we have the problem of how to store those int64s. We can't store them in place of our current timestamps, they'll be 37 seconds off. So let's add a field to all databases where timestamps are stored to store the TAI version. Additionally, every RPC is going to need to send along both TAI & UTC during the migration process, so change those too. We can't just ignore cases where we need an integer representation of time, either - those have historically been the places where systems break during leap second changes.
I hope this gets at a little bit of why it would be extremely non-trivial to use TAI in place of UTC right now. If you're storing your time as, say, a string representation with the time zone built in, you're right, it's generally not that bad. It's _extremely_ difficult to deal with once you move to a representation of time where the timezone isn't directly encoded in the timestamp.
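To make the shim step concrete, this is the shape of the hypothetical helper you'd need at every boundary with a UTC-returning library, in every language in the stack (taiOffsetAt is assumed to consult a leap-second table):

    // taiFromUnix converts a library-provided classic unix (UTC) timestamp
    // so the rest of the codebase can work in TAI.
    func taiFromUnix(unix int64) int64 {
        return unix + taiOffsetAt(unix) // taiOffsetAt: hypothetical leap-table lookup
    }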
Coordinated Universal Time in French is Temps Universel Coordonee (give or take an accent). So that should be TUC. I don't think there's any language in which UTC stands for anything. It's a fake acronym.
The problem with English is that you have to plan out the whole noun phrase in your head before you start talking. Not only do the adjectives have to come first, they have to be in the right order. With Romance languages you can just say the noun and then keep tacking on adjectives as they occur to you.
>Oddly, languages either have the exact same ordering as English, or the exact opposite as English. And, nouns in various languages fall in some designated position between the string of adjectives -- in English it's at the end, in Romance languages it's somewhere in the middle, such that most adjectives follow the noun, but certain adjectives precede
Well yeah, the adjective order is actually mostly semantic, and tracks something like intensity. Varying adjective order is not normally a grammatical error per se, it's emphatic (though emphasizing the wrong thing is a pragmatic error).
But in Romance languages you have to plan out very complex sentences. I mean, you don't have to, but it's what people do.
The real answer is that deeply learning grammar as a child makes you think in grammatically correct ways to begin with, which means that when you start vocalizing a sentence, it's already queued up in grammatically correct form or very close to it. This is why we should teach children grammar grammar grammar and almost nothing but grammar, using poetry for memorization, reading to expose them to grammatically-correct texts, and writing to drive grammar home. And some 'rithmetic.
I am a native speaker, which means I didn't realise that British English isn't actively taught. We aren't taught grammar in any particular way (apart from "I before E, except after C", which is mostly bollocks, because it depends on the origin of the word -- "their", for example).
Basically we are dipped in the language and we either succeed or, in my case, get dumped in remediation. (I can't spell for shit.)
I didn't realise this until I was learning another language as an adult. They were using terms like "present continuous, reflexives, compound verbs, etc.", none of which I knew the practical meaning of.
Teaching a child to read, again, you just realise that English is basically 5 different languages smeared together, with shit-all rules. Syntax, yeah, we have some, but no native speaker can explain the rules. (We have adjective ordering, but I don't know what it is; I can only tell you if you've got it wrong.)
That one's even odder - that's the compromise between the English version, which would be CUT and the French version which would be TUC ... i.e. none of the above.
TUC is a brand of crackers well-known in France, so maybe that’s why they didn’t insist on the French version. And CUT obviously didn’t make the cut. ;)
There are very few systems that care about second-level time that leapseconds don't cause issues for. They are almost universally mishandled by widespread systems, so much so that we even get multi-million dollar scale internet disruptions due to leapseconds when there hasn't even been a leapsecond.
I doubt anyone talking about the elimination of the leapsecond is unaware of TAI, but TAI is not readily available on general purpose computers (and, increasingly, leap-smeared UTC is being silently substituted for UTC). And for specialized systems, attempts to make pockets of TAI break down when they have to talk to the outside world (and have consistent times with it) and/or due to hardcoded leapsecond tables in software.
>remain at the current level of 27, which we believe will be enough for the next millennium.
I would have loved to read more justification about _why_ Meta thinks we no longer need the leap second beyond calling it a community push. They did a great job of complaining about how hard it is to solve from a technical perspective, and then explained how they solved it. Is the only problem really that Meta doesn't know how to test a negative leap second?
Facebook will be able to test and handle a negative leap second better than most organizations in the world. It's everyone else they are worried about. If the rest of the internet breaks from a negative leap second, it doesn't really matter if facebook's servers all stay up.
You're probably right about the corporation, but some of their component organizations and assets may survive as long or longer, albeit after being resold, merged, acquired a few times. It could happen, you never know.
After all, there are still pieces of the original Bell organization and infrastructure around in active use, just with the names on the business cards and buildings changed. I don't think someone 50 years ago could have predicted which parts would still be around and which long gone.
> This periodic adjustment mainly benefits scientists and astronomers as it allows them to observe celestial bodies using UTC for most purposes. If there were no UTC correction, then adjustments would have to be made to the legacy equipment and software that synchronize to UTC for astronomical observations.
> While the leap second might have been an acceptable solution in 1972, when it made both the scientific community and the telecom industry happy, these days UTC is equally bad for both digital applications and scientists, who often choose TAI or UT1 instead.
The claim is that the benefits accrue primarily to a community whose relative importance is minuscule compared to the broader software world in 2022. The tradeoff was made in 1972, when astronomers etc represented a vastly larger proportion of software.
Actual research astronomer here, and as a (radio) astronomer I would love to get rid of leap seconds. Assuming that UTC = UT1 (the time measure based on Earth's rotation) is not accurate enough for most calculations, so you already need to use UT1 tables/forecasts and leap second tables in your ephemeris calculations; there isn't really much benefit in trying to keep UTC close to UT1. And the reality is that for us they actually make things worse: we stop data acquisition at the telescope over the leap second period, because we timestamp our data in UTC and dealing with the changing interval would be a pain.
There are some good arguments for keeping leap seconds, but I don't think research astronomy is really one (it might be more useful for amateurs), particularly on ~100 year timescales where you don't expect things to slip that much. I think this sentiment is shared by most of my peers, particularly those who actually have to implement data acquisition and analysis pipelines!
The thing is, similar reasoning can also be used to promote getting rid of leap years. Heck, maybe just switch to 12 months of 30 days while we are at it.
If you want to use real-world time, you have to make sure it stays in sync with the real world. Switching to something which is close-but-not-quite-correct will cause even more issues than we are currently having with leap seconds. Can't deal with that? Well, just use the Unix timestamp like the rest of us?
The issue is that all timekeeping is going to be an approximation of our messy Solar System. The only question is, "How accurate is accurate enough?"
Currently our calendar goes off by one day in 3236 years. If the history of calendars is indicative, in about 10,000 more years we may change our calendar. (Or, being multi-planetary by then, maybe we'll consider it a quaint relic of our origin.)
Our clocks predict astronomical time of day to less than a minute for a human lifetime. And both DST and timezones demonstrate that we're happy to live with the clock and Sun disagreeing by an hour or more.
My position is that both are currently good enough for the next couple of thousand years. And we can let our distant descendants sort it out when the time comes.
> (Or, being multi-planetary by then, maybe we'll consider it a quaint relic of our origin.)
It’s more likely that every planet will have its own calendar and you’d have to do the conversions and translations. That’s what scientists do with Mars time. Thanks to relativity, time at point A is not the same as time at point B, and whether they stay in sync for you depends on how you get from A to B.
The calendar point is possible. But the special relativity point, not so much.
While "now" can vary according to the time it takes to get from A to B, all observers can work out what "now is according to the reference frame of the distant stars". On a mere interplanetary scale, this idea is amazingly precise. We might not agree on how much time passed (this has already mattered for GPS satellites), but for all practical purposes "now" is perfectly meaningful.
Yeah, looks like it would work out fine. Since synchronous communication breaks down pretty quickly even on merely interplanetary scales, the remaining use for calendars and timekeeping is basically event planning. For that, differences in reference points and interval duration are not that important, as long as you can compensate for them, making events happen at a time you expect in a place you expect. We're already doing that fine, communicating with our deep space probes.
Yeah, if it wasn't for that pesky requirement to keep in sync with the wall clock, my life would be so so so much easier not having to deal with drop frame timecode. The video industry isn't exactly small, and nobody has just up and decided "meh, it's too hard, so we're going to push everyone else to do what we want". We all put on our big boy&girl pants and do the work.
Of course, drop-frame timecode itself imperfectly compensates for the fractional framerate, and also as a result of the math not really working out there isn't a drop-frame standard for 23.976 fps at all, so sometimes the industry just throws up its hands and says "meh, it's too hard."
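For anyone who hasn't seen it, the standard 29.97 fps drop-frame conversion is a small but fiddly bit of arithmetic -- which is rather the point. A sketch in Go:

    import "fmt"

    // dropFrameTC renders a frame count as 29.97 fps drop-frame timecode.
    // Frame numbers 00 and 01 are skipped at the start of each minute,
    // except every tenth minute: 18 frame numbers dropped per 10 minutes.
    func dropFrameTC(frame int64) string {
        const framesPer10Min = 17982 // 600 s of real time at 29.97 fps
        const framesPerMin = 1798    // a "short" minute: 30*60 - 2
        d := frame / framesPer10Min
        m := frame % framesPer10Min
        if m > 1 {
            frame += 18*d + 2*((m-2)/framesPerMin)
        } else {
            frame += 18 * d
        }
        // After re-inserting the dropped numbers, label as if exactly 30 fps.
        return fmt.Sprintf("%02d:%02d:%02d;%02d", // ';' marks drop-frame
            frame/108000, (frame/1800)%60, (frame/30)%60, frame%30)
    }

For example, frame 17982 (ten minutes of real time) labels as 00:10:00;00, where naive non-drop counting would call it 00:09:59;12.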
It's not that they don't have DF for 23.976 because it's hard; they don't because it wasn't needed. A 23.976 framerate was never broadcast, as it wasn't part of the NTSC standard. In fact, rarely did anyone actually edit at 23.976 unless it was going back to film. They cut the telecined film as 29.97 video with no regard to A-frames or any other methods that would let the edit cleanly go back to the source frame rate. They so didn't care that sometimes the film cadence changes on every single edit. Why? Because nobody needed it, nor could they have imagined the era of internet and digital streaming that could handle any frame rate, to even bother wasting time trying to do it "right".
Well it _is_ in the standards now and you still see just non-drop timecode being used on it, with the resulting noticeable skew from wall time as the duration gets up there.
Again, it is not a broadcast standard, at least in the US. Pretty sure it's not in non-US markets either. Sure, it's a format that modern decoders and monitors can handle, but it's just not a format that people are concerned about it matching wall clock.
The video and music industries have been screwing up a lot of our digital lives with mandatory encryption and anti-hacking/reverse-engineering requirements just to sustain some of their business models. The abstraction layer is different here, but god, the irony.
I can see that, but I disagree that it's enough of an explanation. The blog specifically gives an example of the Earth's angular velocity changing with the melting of ice caps. They even threw in a cute animation of an ice skater to explain it. Last I checked, we're in the midst of global climate change, which is rapidly melting our ice and raising sea levels. I'm supposed to take Meta's word that we'll never need another leap second then? Why is that the case? Instinctively it makes sense to me to have a clock that follows the Earth's rotation. Why does Meta believe this is no longer the case? The only justification I saw was "it's hard", followed by their explanation of how they've solved it already. So what is the problem that's being solved by not counting leap seconds?
So, time is a human construct and Earth doesn't care how we measure it.
It's not that we'll never need another leap second; we could add ten, negative ten, or zero in the next 25 years and Earth won't care. Who will care are humans, who may get a bit annoyed when the sun starts setting in equatorial latitudes at 3PM.
Because they are lazy and are not aware of the prior work.
Decades ago, many people already proposed using only TAI internally in computers instead of UTC, which is not a continuous timescale.
UTC should have always been used only to compute the local times, together with the time zones, only for human interfaces.
There have been for decades libraries for using TAI instead of UTC, and even versions of Linux or *BSD kernels patched to do time keeping in TAI, not in UTC, eliminating all problems with the leap seconds.
Unfortunately the use of TAI has remained a niche, but that was the right solution.
Time may change forwards and backwards, because my computer clock may be fast or slow. The same logic that corrects for leap seconds can be used if my computer is 2 sec fast.
And your code had better handle it -- and so had the OS and the libraries. It really is only hard if you have to support 1000s of novice devs.
And FWIW, every major company does have to support a thousand novice devs, every year, over and over, forever.
Honestly, this issue screams education problem. "Time is a mess" should be taught alongside buffer overflow attacks and networking theory (in fact, networking theory is a great place to teach "time is a mess," because you can roll it in alongside ideas like "simultaneous action is a lie" and "clocks are always wrong anyway").
Because time in the real world uses leap seconds. The time businesses open and close. The times reported in news articles. And so on. What facebook wants is for all of them to stop using leap seconds so facebook’s infrastructure is simpler
The article you linked seems to be arguing pretty strongly against leap hours. I'm not sure we could solve all those problems even in 3000 years. Can we compromise and use leap minutes? That way the problem is a bit more immediate, in 50 years we're sure to have solved it; or we'll all be dead and it will be on someone else!
Instead of leap hours we could just permanently abandon leap anything, and state that the offset between UTC and TAI is fixed at its current 37 seconds from now on, forever. Every few thousand years, when this new UTC has drifted enough from solar time on the prime meridian for people to start noticing and caring, countries can decide to simply change their offset from UTC, which will just be a normal update in the time zone database that doesn't need to be coordinated with anyone else, i.e. something that already happens quite regularly.
This is all quite hypothetical as it’s hard to predict whether anything like our current technological civilization will exist thousands of years from now, but even if it does, a simple mechanism exists to avoid problems (just have each jurisdiction change their local time when they decide they want to).
I can’t see any downsides for literally anyone from this proposal, other than the insignificant downside to the British that they will lose the prestige of being the place that global standard time is based on.
It infuriates me that the BBC World Service insists on announcing the time as "<something> GMT", pronounced in a smug tone of voice. The GMT timescale no longer exists; nobody broadcasts it. What the announcers are broadcasting is the BBC's ignorance. I find it embarrassing and jingoistic.
They've also adopted an idiosyncratic way of pronouncing the time itself: "The time is Four, GMT". Everyone else says "Four O'clock", or "four hundred hours" or "four AM". The Beeb are almost completely immune to complaints; they've outsourced their complaints department, and the contractor's brief is to make sure no complaints reach programme makers.
Because they have to interact with the rest of the world, and billions of lines of computer software and hundreds of thousands of protocols have been written to use UTC instead of, say, using TAI and computing UTC on a presentation basis like timestamps are handled.
I can't imagine people who need accurate timekeeping (like scientists, astronomers and the telecom industry) preferring UTC over TAI. They do however prefer UTC over UT1. UTC was a reasonable compromise in the sense that it's almost TAI, but is close enough to UT1 that you can get all countries on board without much effort. Imagine getting the whole world on board to accept a mysterious device that counts electron transitions, without giving some kind of reassurance that it won't deviate in any relevant way from the timekeeping system they are used to.
Meta has it backwards, because they could've already made another choice. UTC is a representation of the offset. This will always need to be calculated somewhere. Much like how the GPS counts the time since the epoch and broadcasts the offset...it can be used or not by the user. Those times are a calibration adjustment. For UTC, that adjustment is a reference to the current status provided by IERS. That's literally the job of UTC.
Meta can simply stop applying a time offset in their reference, use TAI for forensics, and then have a separately calculated time when they need to display in that representation.
...Which is what happens already. System time is seconds since The Unix Epoch. All of those times are calculated and available, and always will be. They chose one of them, didn't like it, and invented a crummy workaround. They could've logged it in TAI and appended the TAI TZ to all their timestamps.
This is being done to work around poor coding choices they made, instead of making the computing fit reality. Basically "We chose smearing, which is a poopstain of tech debt, so we'll fix it by telling the whole world what time it is." That request is loaded with colossal hubris.
They might as well have the second redefined. The real operators do their thing and leave the squawking to those who want to self-identify as poor coders. Because if they don't want to account for a clock that jumps, then they're kicking the can down the road, and they want everyone else to join them.
Turning it into a Y2K or Y2038 problem is a sad choice of saddling bigger tech debt on the rest of the world.
Calling it mainstream is as much of a narrative as permanent daylight savings time was. The software solutions exist and were deployed (at least throughout Linux and main userspace) by the June 2015 leap.
This is a Facebook problem that they're sloppily handling by pushing it on literally everyone else.
Leapseconds are an everyone problem. You've personally been impacted by disruptions from leapseconds even though you might not be aware of it: coverage often doesn't even mention that the cause of an issue was leap seconds. E.g. https://insidegnss.com/u-s-flights-canceled-as-faa-looks-int...
How many times have you had an issue and resolved it by restarting a system or service without ever truly figuring out the cause?
> You've personally been impacted by disruptions from leapseconds
That linked article doesn't mention leapseconds; it doesn't explain the cause of the disruptions, other than to note that it appears to be related to certain defective ADS-B systems.
The bigger problem is that UTC (and TAI) is defined in a gravity well so it's not going to be very useful in the long run. GPS has to correct its clocks to keep track of what we slow Cesium/Rubidium down to on the surface, and Voyager's clock is going even faster. We clearly want an Earth-centric time standard for wall clocks and that is UTC. The Earth is not a precision time-piece so we will always have to adjust our wall clocks to its rotation. Realistically, we should probably be deriving a time standard where every day has a slightly different length and we record the timestamps of the beginning of each day relative to a universal monotonic clock in a log (with rollups to years, centuries, etc.) that we keep around as long as anyone cares exactly how many Cesium vibrations have happened since $whence.
If we want to actually solve the problem then let's switch to an interstellar time standard in a rest frame relative to the CMBR as far outside of gravity wells as possible and make that the universal monotonically increasing standard. Then computers can run on that time standard and UTC and friends can be derivatives.
I'm not sure it matters. To synchronize, you'll have to make corrections either way, and over sufficiently large scales, signal propagation delays are going to be quite large.
Here my physics knowledge gets a little sketchy, but if we have an accurate clock broadcasting on a known frequency somewhere far away we should be able to at least measure the drift between it and a local clock to arbitrary precision. With ~zero drift we can measure the distance and relative velocity very accurately with round-trip timing and Doppler measurements, and from that measure the clock offset as accurately as we can measure the distance. I think, but could be wrong, that we'd always be measuring the spacetime interval, not the euclidean distance in space, but that is probably what we actually want once we start caring about relativistic timekeeping.
> let's switch to an interstellar time standard in a rest frame relative to the CMBR as far outside of gravity wells as possible
This won't work: since galaxies aren't static, and because the universe expands, the point with the highest gravitational potential ("as far outside of gravity wells as possible") will both move w.r.t. the CMBR and have a gravitational potential that changes over time.
Maybe I'm making a bad assumption that intergalactic space is so flat that pretty much any region of it will do as a reference. Since we can't reach those regions yet we'd still be approximating it with our slower clocks, but it seems like it's at least feasible.
Galactic groups are gravitationally bound enough that they'll stay together through the expansion of the universe. You'd want to find an area that is particularly void of galactic influence, maybe?
The TAI second is defined relative to a particular gravitational potential. Adapting to another known one is just a unit change. (if you don't know then nothing can help you).
Differences in potential already need to be handled on Earth, especially since the NIST labs in Boulder are at about 5400 feet.
Your proposal is to make the time Standard of billions of people not match what they experience, on behalf of a few space probes, none of which will use it.
Meta don't want leap seconds because leap seconds make distributed system cluster synchronization tricky, because that's the only case where an extra second here or there actually matters.
Across all the devices in my house that have a clock in them, the time drift is a little over an hour and it doesn't make the slightest bit of difference to me: internet connected computers being the most accurate.
Between individual computers, thanks to filesystem semantics, drifts of up to 2 seconds are expected and generally considered "identical" - it certainly doesn't matter for file sync applications.
So either we want to build a time standard that can be transformed in a sensible way to all the others which are useful - in which case something which works for interstellar timekeeping would be a sensible step, given that relativistic issues crop up even in Earth orbit, or we're building a time system "for humans" in which case everything we have now is fine, and will by definition be messy because time and timezones change all the time.
That is absolutely not the only case where an extra second here or there matters.
It's just that for distributed systems the OTHER ways of dealing with the problems leapseconds create (e.g. using a local monotonic clock) aren't available. But even where they are available, leapseconds still create issues, because they're extremely difficult to test and are often handled somewhat incorrectly.
UTC without leapseconds would in no way be less "for humans". After 4000 years of drift the effect on the solar noon would be about the same as you get driving to the next state over.
Except as I noted up thread we've already got "UTC without leap seconds" - it's UNIX epoch time which is defined to ignore leap seconds.
Leap seconds are still seconds so even with GPS sync, if all you're doing is tracking 1000ms increments then we have a format already which doesn't have them.
Unless I am misunderstanding what you are writing, you're mistaken about unix time, in practice.
In unix time there are 86400 "seconds" per day since the epoch. On days when there is a leapsecond the timestamp of the leapsecond is given out for two (TAI) seconds.
A result is that unix time stays in sync with UTC and the difference between them doesn't change.
The belief that conventional unix time is somehow leapsecond-free is a common source of leapsecond handling bugs. :( It results from misleading standards text which states that unix time doesn't have leapseconds -- which it technically doesn't: it instead leaves the leap second undefined, and implementations distort it to make unix time match UTC.
If your goal is to measure accurate durations, synchronize distributed systems, or point telescopes-- unix time will not solve any of your leap second issues.
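You can watch the collision happen in Go, which follows the same convention (time.Date documents that out-of-range fields are normalized):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // There is no 23:59:60 in unix time: the leap second normalizes
        // into the next day, so two distinct SI seconds share a timestamp.
        leap := time.Date(2016, time.December, 31, 23, 59, 60, 0, time.UTC)
        next := time.Date(2017, time.January, 1, 0, 0, 0, 0, time.UTC)
        fmt.Println(leap.Unix() == next.Unix()) // true
    }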
Thanks for the clarification, but I think my underlying point still stands: ignore leap seconds - invent a time standard without them, which in practice would probably be Unix-time-sans-leap-things, and calibrate off against that.
It's plain weird to rewrite a world which doesn't actually care about leap seconds for use-cases operating far below normal human reaction or processing times. 1000ms is a value only relevant to high precision applications, which are perfectly fine not being tied to a precise date or solar rotation or anything.
This feels like a programmers without science training issue: all models are models and you choose the one useful for your purpose. It's particularly galling coming from Meta who are large enough to just do this, write the code, and submit a standard for consideration by the tech-industry.
Your "feels" have no influence on any of the firmware in millions of devices that are coded the way they are coded, and cannot be changed, and fail to synchronize with the others that do change in response to a leap second announcement. Those devices do not respond to your sense of fitness. They just fail, and take along everything that has to synchronize with them.
Except everybody is already using UTC, and will not change from UTC to "Unix epoch time" on your say-so. Or to TAI or any other crack-brained system you can think of from your armchair, including all the hardware that cannot be changed without throwing it away and buying new.
All we need to do is simply choose not to announce any more leap seconds. Then all the software in the world that (might have) "got leap seconds wrong", but still has to talk to the other half that (might have) "got leap seconds wrong", never officially breaks -- not this year, not next year, etc.
A lot of people in this thread are criticizing this move, but let me offer an opposite view.
One of the largest electronic health records systems has code that predates the UNIX epoch. Much of the time handling code is custom written to deal with this. However, the code was so poorly written that the system would lose data during the double 1 am window that occurs during daylight savings time shift. Hospitals would just shut off all of their computers during this time to deal with it.
As the article notes, issues with leap seconds have also brought down reddit and cloudflare. Many people in this thread are treating this like some sort of display of incompetence, but if you've ever written code that deeply interacts with time, you'd know how difficult it is to get right. A sign of a good system is one where it is difficult to fuck up.
IMO it is better to guarantee that time always moves forward rather than trying to match computer time to human time.
I don't see how replacing all UTC in software with TAI is more realistic than breaking UTC sync with UT1 (isn't it literally doing the same thing?). The whole point is that going forward, leap seconds are going to get harder to deal with. Especially in the case of a negative leap second, which seems like a more "true" y2k-like scenario.
The difference is that replacing usage of UTC with TAI is a voluntary choice made for each program, but redefining UTC to be a fixed offset relative to TAI, which is effectively just redefining UTC to be TAI, is a forced change on everything everywhere all at once that everybody has to handle because one of their dependencies changed.
It would be like silently changing the start of unix epoch time to 1800 instead of adding a new “Unix time since 1800” and asking people to switch.
Not at all. Everybody using UTC would just not need to deal with leap seconds anymore. A UTC second is the same as a TAI second. It's a no-op for the vast majority of UTC users. UTC will just drift slightly more from UT1.
This change only affects people who need UTC to be close to UT1 and also somehow don't know what UT1 is.
Sure, everybody using UTC when they actually want TAI would be a no-op, but then you irreversibly break everybody who actually wants UTC and assumed that UTC would not change meanings.
The people who would be unaffected by the redefinition can already just trivially switch manually (as we already assumed that just redefining things under them would work), leaving the UTC people alone. There is no good reason to silently break all programs carefully designed to use UTC correctly to fix all of the programs haphazardly written by people who did not know what they were doing and used UTC when they actually wanted TAI. Especially since fixing the wrong use of UTC is so trivial that we assume it can be done with no modification.
‘Programs carefully designed to use UTC’ would only irreversibly break by very slowly becoming out of sync with the rotation of the earth.
A few applications should switch standards, the question is whether solar concerned applications should switch to UT1, or continuity concerned applications should switch to TAI. The former is simpler, easier, cheaper, and only causes unexpected behavior (quite slowly), NOT systematic failure.
>IMO it is better to guarantee that time always moves forward rather than trying to match computer time to human time.
Not sure if you're playing Cunningham's Law or if you don't know this was the line of thought until everything was so far out of touch with reality that 10 days of time never existed and official records were kept with dual dates.
> However, the code was so poorly written that the system would lose data during the double 1 am window that occurs during daylight savings time shift.
> [...]
> Many people in this thread are treating this like some sort of display of incompetence, but if you've ever written code that deeply interacts with time, you'd know how difficult it is to get right.
Your example only speaks for the incompetence argument.
In reality, times and dates are really complicated. Luckily, the engineers at Facebook, Reddit, and Cloudflare are being paid hundreds of thousands of dollars to show off their expertise. Is it that much to ask for them to read into details like leap seconds?
It is too much. I was a Google SRE, and there is an internal meme showing a time series graph jumping backwards during the double 1am at DST. These mistakes happen everywhere and are best avoided by a system that doesn't allow them to happen in the first place.
So advocates of memory safe (or even high level, period) programming languages are just showing off their incompetence in your book?
Would you say to an advocate of C (much less... Rust): Look man, real programmers write in boolean circuits. Programming is hard, sure, but the engineers at Facebook, Reddit, and Cloudflare are being paid hundreds of thousands of dollars to show off their expertise. Is it that much to ask for them to read into details like multiplication circuits?
:)
Leapseconds causing widespread failures isn't a hypothetical, just like buffer overflows aren't. Yet, in theory, with perfectly competent development ...
Yet even with perfect competence leapseconds are still pretty gnarly: they require that systems have a trustworthy and consistent source for the list of leapseconds... and they mean that you fundamentally cannot predict the amount of time between two UTC timestamps when one or more of them is more than 6 months in the future... and no amount of competence can fix that.
> Hospitals would just shut off all of their computers during this time to deal with it.
FWIW, there are many things that deal with leap seconds that way too. Too much risk of ending up in a difficult to fix or silently corrupt state, while coming up from a reboot is highly tested and known to work.
The cost of leapseconds is quite significant.
> but if you've ever written code that deeply interacts with time, you'd know how difficult it is to get right.
Good odds that even if someone has that they got it wrong and don't know-- especially when it comes to leapseconds as they're fairly hard to test esp. with distributed systems and infrequent enough that you may not realize the cause even when you've suffered from an issue.
If one is relying on time of all actors in a distributed system to be perfectly in sync, you already have a bug, leap seconds or not. (unless you are Google Spanner)
For timers within a single system, use monotonic clock of your own cpu.
start := time.Now()            // carries both wall-clock and monotonic readings
// ... do something ...
spent := time.Now().Sub(start) // Sub uses the monotonic readings when both carry them
It's worth noting that the Go time library is specifically designed so that computer clocks running backwards won't cause `spent` to be a negative duration. A monotonic clock that only ticks forward is used for time comparison and subtractions.
Hehe, story time: I was using exactly this logic to detect hibernation/sleep, especially on laptops. I was surprised when it never triggered, so I printed the time, which indeed showed that a long time had passed. So why didn't it trigger? Because IFF both timestamps have a monotonic component (internal, not visible when printed), then the monotonic reading is used. Confused me a lot.
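For anyone who hits the same trap: Round(0) strips the monotonic reading from a time.Time, forcing a wall-clock comparison. A sketch of the workaround, with an assumed 30-second threshold:

    start := time.Now()
    time.Sleep(10 * time.Second)                    // periodic wakeup
    wall := time.Now().Round(0).Sub(start.Round(0)) // wall-clock elapsed
    mono := time.Since(start)                       // monotonic elapsed
    // On Linux, CLOCK_MONOTONIC doesn't advance during suspend, so a large
    // gap between the two readings suggests the machine slept in between.
    if wall-mono > 30*time.Second {
        fmt.Println("likely hibernated/slept")
    }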
It's frustrating that programmers want to redefine civil time just because it is "hard". This article glosses over the real world problems that detaching from UTC will cause.
(You may want to scroll down to "Implementing the plan outlined at Torino".)
If we end leap seconds, it doesn't take long - only until 2028 - until "midnight" is sufficiently far from "the middle of the night" that you will have to consider the legal issues caused by events that happen just before or after 0000 hours.
By 2055, the "minute" displayed on a clock may be incorrect, which again may cause issues with legal timestamps.
And by 2083, sundials are measurably wrong.
All because programmers wanted to save some lines of code.
> It's frustrating that programmers want to redefine civil time just because it is "hard". This article glosses over the real world problems that detaching from UTC will cause.
I agree, but I'm also - sad to say - less than surprised to find engineers at a Big Tech firm taking a high-handed, not to mention narrow and ill-informed, approach over the issue and trying to impose their will on a global scale. My worry here is that, Meta being Meta, they carry quite a lot of influence and may actually gain some traction.
EDIT: I'll add a bit more colour here. At the core of our platform we manage a database containing billions of legacy timestamped records (or events, if you prefer), adding more and more every day. Without even giving it a great deal of thought, I guarantee you that this proposal, should it be implemented, will cause us more problems than it solves and will distract us from making more valuable investments of time and effort that would benefit our business. Sure, we can no doubt fix all these problems, but we've got better things to do. I imagine that many other businesses would be similarly affected and would take a similar view.
I'm kinda impressed by the hubris, really. Usually it's emperors, kings, and big multinational governing bodies that try to screw around with the time standard that ordinary people have to live with. Occasionally strident revolutionaries who've already solved the "overthrow and replace the government" part of their problem and aren't content with just beheading people all day.
Says something about how Facebook sees itself, I guess.
Lots of folks care, what are you talking about? Accountants and lawyers the world over EXTREMELY care about keeping the computer's idea of wall-clock time and your idea of time in sync, and if you're a customer faced with the side-effects of changing the standard after the fact, you probably care as well.
Let's paint a picture based on actual code I've actually seen in the real world. If you ignore the leap second but keep using UTC, then in about 5 years, UTC will differ from the wall clock by about 5 seconds. So if, in some software used for, I don't know, maybe billing customers, someone was calculating day boundaries by doing modulo division of UTC by the number of seconds in a day (I've seen it), then in 5 years we've got a 5 second discrepancy in the number of API calls made by customer X when comparing what the software says to what the customer measured. Customers don't like this, accountants and lawyers REALLY don't like this, and us engineers will have the wonderful experience of telling them all:
> "this code used to be valid until some boneheaded engineers at Facebook convinced a ton of other engineers to break the agreed upon standard about what it means to measure time in this way, and now things that used to work fine need to be patched because we've got a Y2K EVERY DAY!"
Oops, I guess ignoring wall clock time might be something other human people care about after all.
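For concreteness, a sketch of the day-boundary arithmetic being described (not a recommendation):

    // Day boundaries via modulo division, assuming every day is exactly
    // 86,400 s. Once the recorded timescale and the customer's wall clock
    // disagree by N seconds, events within N seconds of midnight land in
    // different billing days on the two sides.
    func dayStart(ts int64) int64 {
        const secondsPerDay = 86400
        return ts - ts%secondsPerDay // "midnight" of the day containing ts
    }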
> I totally don't care about the Earth slowing down.
Neither do I in day to day life. But I do have to care about it when I or members of my team write code, or store and retrieve data to and from a database, or work across multiple timezones, because it can be critically important to unambiguously know whether something happened on one day or the next.
The reality is there aren't any nice, elegant solutions to this problem. Leap seconds aren't a nice solution. Meta's proposal isn't a nice solution. I don't necessarily even think it's worse than leap seconds, but it's certainly not substantially better. The key point is it's a change and one which, in my view, won't deliver enough value for everybody (beyond just Meta) to justify the level of disruption it will certainly cause if implemented.
>Your honor, the nuclear attack on San Francisco happened at 10.59.59 as per UTC-Facebook time and is as such part of WWIII and not a violation of the armistice.
By "UTC-Facebook" time, you of course mean UTC time, the time everybody already uses, and that has no need to be broken every year, two years, or three years, and wouldn't be broken at all if we simply stopped breaking it.
By a foot you of course mean the foot-meter, a measure good enough for everyone and one which will stop breaking metric conversions if we just defined three feet to be a meter.
That’s all it is, they ran into an engineering problem and they’re trying to get the world to bend to their will instead of solving the problem because they think it will be easier. Mark’s arrogance is nauseating.
We have three alternative time systems and a big bag of issues with each of them, but you think the extremely mundane argument that we should prefer one bag represents nauseating arrogance because you think that your favorite bag -- a different one -- is obviously correct? Come on. Do better. Be civil.
FB is not making the mundane argument that we should pick one time system over another. They are literally proposing that the world should redefine UTC to be TAI with a permanent fixed offset, which is functionally equivalent to just using TAI.
That is effectively proposing the deletion of the most commonly used time system of the three primary time systems from existence and forcing everybody and all existing systems that use it to convert to what is effectively TAI.
That is not mundane. Mundane is arguing that everybody should use TAI. Arrogant is arguing that we should force everyone to do it by redefining their dependencies under them.
No, it is a change that breaks everybody using UTC correctly (TAI with an offset to keep in sync with UT1) in order to fix everybody who did not know what they were doing and used UTC when they actually wanted TAI.
If there was a scheme that fixed only the wrong usages, that would be fine. But, it is frankly absurd that we should even consider breaking carefully designed programs correctly using their dependencies to fix programs incorrectly using their dependencies especially when it is trivial for the wrong usages to be fixed manually.
No, UTC is TAI kept in sync with UT1. Changing UTC to TAI with a fixed offset is a fundamental breaking change in what it means. Anybody relying on UTC doing what it is designed and advertised to do -- keep in sync with UT1 -- will be broken. The only people who will not be broken are people using UTC incorrectly as TAI. The only reason this is interesting is that basically everybody uses UTC incorrectly as TAI, but that is not a valid excuse to break the programs using it correctly.
People using the wrong dependency should fix their system to use the right dependency. They should not campaign to steal the name and replace it, that is absurd.
Literally nobody depends on any relationship between UTC and overhead sun angle.
The only people who care or need to do not use UTC. They use TAI, and a separate continuous log of fractional seconds.
UTC has one role, and that is Standard worldwide civil time. Telling people who need Standard civil time to use TAI makes everything strictly worse: not only do you then not match most of the world, but you still have to track irregular, unpredictable corrections to be able to sync with everybody else.
Except that standard civil time cares about the overhead sun angle for some reason, that is why we use the day demarcations of UT1 instead of TAI. If we really decided as a society that we really no longer care, then we should switch standard civil time to TAI and do away with UTC entirely, not calcify it as some arbitrary offset from TAI.
> "cares about the overhead sun angle for some reason"
That is what is proposed to be fixed and that you are arguing against for reasons you don't know or, apparently, care about.
Switching civil time to TAI would break everything, most of which cannot be fixed. Random breakage is the problem. More breakage would be strictly worse.
Well we could introduce negative leap seconds until they align. The problem (UT1 deviating from UTC by more than one second) would be the same as in this proposal.
> Literally nobody depends on any relationship between UTC and overhead sun angle.
That's just... completely incorrect and totally false? Have you ever even worked for a business? Have you ever read how time libraries are actually written?
They are literally built on the exact assumption that 0 means January 1, 1970 and that right now is (number of seconds in a day) x (number of days since Jan 1 1970), plus the seconds elapsed so far today. If we stop adjusting UTC, then by this time next year UTC will be one second out of step with our wall-clock times, and calling `datetime.now().isoformat()` will give us a timestamp that's 1 second off from the user's wall time. At one second past midnight on the 20th of the month, your computer will incorrectly be spitting out timestamps saying it's exactly midnight of the 19th. That's what you might call a major breaking change.
Now expand this reasoning far beyond the scope of time keeping.
Big Tech companies and anti-Big-Tech lobbyists massively oversimplify in their pitches to influential people to deregulate or overregulate certain areas. In both cases they end up making poor decisions for the general case and making the average case worse for everyone except themselves. It's about creating a market where none need exist. Facebook doesn't need to care about time, really. It's not remotely important to their business.
I've built and worked on platforms with sub microsecond measuring requirements and this stuff didn't bother me. This is idle bad money finding work for itself at the expense of everyone else.
Disclosure: I am/was an early investor in facebook in 2012. Mark is turning it all to dirt because he's run out of ideas
If you stop thinking about time being wrong relative to what is officially correct, and instead see this whole exercise as an error-minimization framework, I think it is far easier to make the case for ending leap seconds than for keeping them.
This isn't just about lines of missing code. This is about forcing subterranean or submerged computers to surface. This is about out of sync clocks across information propagation networks across planets. This is about real lives that are ruined because time stamps didn't quite line up, causing delays, deaths, and needless headaches.
It doesn't need to be this way. We could just accept a minute of the clock being off from "true" midnight, which doesn't even make sense to me given that few people are right at the astronomic point where midnight is "true" midnight for their timezone. Heck, China is one big giant timezone so who is this actually for, really? The people that care about sundials? Most people don't even grow their own food.
We're no longer a sun-driven economy. Well coordinated timekeeping across devices that may not always be able to transfer data is far, far more important. If it's sufficiently wrong by the year 3422 then we'll deal with the fifteen minutes of annoyance then. This is a crazy premature optimization.
> Well coordinated timekeeping across devices that may not always be able to transfer data is far, far more important.
How do you have a well coordinated clock without being able to get four bits [1] per year of leap second data? It's hard to keep within one second of a time standard over 6 months or a year without communication.
[1] bit 0: was there a leap second in the most recent period; bit 1: was it positive or negative; bit 2: will there be a leap second at the end of the current period; bit 3: will it be positive or negative. Bikeshed my fictitious encoding if you like, but it's good enough. Use a whole 8 bits, go wild.
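Spelled out, the footnote's fictitious encoding is just four flag bits:

    // The fictitious 4-bit leap-second advisory described above.
    type leapFlags uint8

    const (
        leapRecent      leapFlags = 1 << 0 // leap second in the most recent period?
        leapRecentPos   leapFlags = 1 << 1 // ...and it was positive?
        leapUpcoming    leapFlags = 1 << 2 // leap second at the end of this period?
        leapUpcomingPos leapFlags = 1 << 3 // ...and it will be positive?
    )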
Cesium reference clocks can operate with accuracy around 10^-14 (aka 0.01 parts per trillion). In a year (about 3.15x10^7 seconds), a cesium clock would slip by only a few tenths of a microsecond. That said, the whole "submerged computer must surface" thing is a bit of a red herring argument IMO. What use case would you have for needing to keep time in sync within seconds with the outside world, while being unable to communicate with that world? If you're trying to plan simultaneous delayed action across the world, it would suffice to merely be in sync with each other, leap seconds ignored.
The chip-scale atomic clocks were developed to support precise timekeeping for small devices that can’t communicate. One example is undersea sensor networks, where you want to leave the sensors in place for a year or more, and when you return you can correlate the readings from the sensors because you know they were all ticking at the same rate the whole time.
> It's frustrating that programmers want to redefine civil time just because it is "hard".
Yes. Problems with delay time going negative usually come from not using CLOCK_MONOTONIC for delay time. CLOCK_MONOTONIC is usually just the time since system startup. It comes from QNX (which, being hard real time, had to deal with this first), made it into the POSIX spec around 1996, and is now available on all major OSs. But there's still software that uses time of day where CLOCK_MONOTONIC is needed.
Then there's the smoothing approach. This document describes Facebook's smoothing approach, which has a different smoothing period than the one Google uses.
* Facebook/Meta: "We smear the leap second throughout 17 hours, starting at 00:00:00 UTC based on the time zone data (tzdata) package content." This is puzzling. What does the time zone package have to do with UTC?
* Google: 24-hour linear smear from noon to noon UTC.[1]
* AWS: Follows Google.
* US power grid: Starts at midnight UTC and takes a few hours while all the rotating machinery takes 60 extra turns to catch up.
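For comparison, the arithmetic of a noon-to-noon linear smear is simple to state (a sketch of the offset calculation only; in practice the NTP servers serve the already-smeared time and clients are none the wiser):

    /* Sketch of a 24-hour linear leap-second smear: instead of a step
       at 23:59:60, the clock is slewed by 1/86400 per second from noon
       to noon UTC. 't' is seconds since the smear window opened; a 17 h
       window like Facebook's would just use a different divisor. */
    double smear_offset(double t) {
        const double window = 86400.0;       /* 24 h, noon to noon UTC */
        if (t <= 0.0)     return 0.0;
        if (t >= window)  return 1.0;        /* whole leap second applied */
        return t / window;                   /* linear ramp in between */
    }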
> What does the time zone package have to do with UTC?
The IANA TZ database includes information about leap seconds, and even supports the concept of "right" time zones in which the leap seconds are counted in the Unix timestamps. (Which violates the Unix spec, and may cause problems with code that assumes it can do math like `1 day = 24*60*60`, but on the other hand, things like DST already make that unsafe.)
It is most likely simply the case that they are using the leap second data from the time zone database as a convenient source of this data.
I think that observation just lends further weight to the argument that the relationship between atomic time and universal time is a dynamic and unpredictable thing, which we need to handle correctly rather than pretending it doesn't exist.
That it is dynamic and unpredictable is exactly why we should not force everybody to track it.
Some people, astronomers and orbital-mechanics folks, are obliged to care about sidereal time regardless. Making me deal with it too is a pure tax with exactly zero benefit.
>It's frustrating that programmers want to redefine civil time just because it is "hard". This article glosses over the real world problems that detaching from UTC will cause.
Yes, the actual problem exists, and ignoring/discarding reality (i.e. the "science" in computer science) will just cause further problems. If you and your modern stack of code can't handle the leap second, it's simply not production code.
More to the point: it is not a problem that the nuisance of leap seconds even solves. Leap seconds are supposed to match civil time to astronomical time, but astronomers don't use them. They just make things even more annoying for astronomers, and annoy everyone else, over and over again, for no benefit to anyone.
Astronomy very much relies on the leap seconds. If they ever get abolished it will create lots of headache for all observatories (and hobby astronomers as well), since the telescopes will point more and more incorrectly as UTC drifts away from UT1 (the leap seconds ensure UTC is always within 0.9s of UT1).
To explain a bit further: UT1 tracks the Earth's rotation relative to distant quasars and is thus directly the correct clock/reference to use for pointing telescopes.
However, it doesn't advance at a nice, stable, constant frequency, but at a rate that slowly changes over time (and can shift with strong earthquakes), so we approximate it with UTC, which runs at a nice constant frequency but needs occasional correction to match up with Earth's rotation.
Because UTC already has no other purpose than to be what everybody is already using. The leap seconds in UTC benefit literally nobody. But being a standard is a purpose.
Changing to TAI means you are different from everybody else, and still have to fool with leap seconds to know what everybody else is using. Worst of all worlds.
(Except Google smearing, which is even worse than that.)
That's literally not true, since we astronomers do use UTC as it is intended (since within 0.9s of the correct time is good enough, but being many seconds off isn't anymore for many applications).
The argument that the legacy systems should maybe be updated is already being discussed elsewhere, so no need to rehash that.
Hardly. For actually observing with a telescope UT1 is the correct time scale to start the calculation from, since it's directly linked with Earth's rotation - with some complications that you have to calculate local sidereal time and so forth, but this only involves fixed constants; All TAI derived fixed-offset time scales are not linked with Earth's rotation and thus require constantly updated offsets. For most telescopes approximating it with UTC gives good enough results (pointing accuracy wise), so that's what many observatories do. And many smaller and older observatories operate quite a lot of legacy hardware and software that would need to be updated if the current UTC definition were to be changed.
Well, having worked on legacy systems it’s much easier to keep the existing protocol mostly unchanged than migrate the world to a different protocol. Even if the change is as “simple” as subtracting a constant integer everywhere. Just thinking about all the stored timestamps in all databases gives me a headache…
Because societal official/legal time is based on UTC and not on TAI. So the point is to change societal official/legal time, not just to use different time standard.
> If we end leap seconds, it doesn't take long - only until 2028 - until "midnight" is sufficiently far from "the middle of the night" that you will have to consider the legal issues caused by events that happen just before or after 0000 hours.
I'm not sure what you're getting at here. If we stopped introducing leap seconds, then why would the legal world still care about them?
I can believe that a desperate lawyer would argue the semantic distinction between clock-midnight and solar-midnight, but I have trouble believing that this would amount to anything more than one more dumb nit on a pile of dumb nits that the court has to deal with every day.
They can already argue that though, since solar-midnight is not the same as clock-midnight anyway due to timezones. Really, timezones already create this difference for the majority of people, and to a much larger degree than leap seconds likely ever will.
I'm honestly amazed to see so many people agree with this.
Timestamps are exactly what we define them to be. There is no correct and incorrect.
One option is to have a system with arbitrary, unpredictable leaps to keep it synchronized to within 1 second of mean solar time over Greenwich, England. Every computer system that has to deal with time accurately needs a lookup table of leap seconds that is occasionally amended, with only a couple of months' advance warning.
Another option is to just let the clock run at a constant rate. In this case only astronomers have to keep track of the difference between solar time and clock time (which they already do anyway).
The fact that the difference will increase to an hour after several hundred years is utterly irrelevant. If people in the future care, they can simply adjust the timezone definitions to compensate, since timezones are already adjusted all the time.
When the sun is directly overhead it's meant to be 12:00 - IN THEORY!
However, as timezones are pretty wide, most of the time you'll be at least 15 minutes out. Sometimes you'll be out by as much as 3 hours - and you've probably never even noticed!
Telescopes already have to compensate for this (as well as for summer time).
Leap seconds make a shambles of bookkeeping too. What is "2022-07-17T12:00:00" + (60 x 60 x 24 x 365 x 5) seconds? No one knows! And the answer to that question will change depending on when you calculate it and which updates you installed!
So I say ditch the leap second and let it drift. In a few hundred years we could update our timezones if we _really_ want to (timezone changing is actually pretty common, so code should already be handling this edge-case).
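To be fair, POSIX gives a definite answer to that sum, but only by pretending leap seconds don't exist. A sketch (assuming glibc, whose non-standard timegm() does the Gregorian-only arithmetic):

    /* "2022-07-17T12:00:00Z" + 60*60*24*365*5 seconds, in POSIX time.
       POSIX ignores leap seconds, so this is well defined; in a true
       count of UTC seconds the answer would depend on leap seconds
       not yet announced. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct tm tm = { .tm_year = 2022 - 1900, .tm_mon = 6,
                         .tm_mday = 17, .tm_hour = 12 };
        time_t t = timegm(&tm) + 60L * 60 * 24 * 365 * 5;
        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", gmtime(&t));
        printf("%s\n", buf);  /* 2027-07-16T12:00:00Z: a leap day,
                                 not leap seconds, moved the date */
        return 0;
    }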
> In about 600 years TI will be ahead of UT1 by half an hour, and in about 1000 years the difference will be a full hour.
That's nothing. Time zones alone already create significantly larger errors. Belgrade and Sevilla share a time zone, but the solar meridian ("noon" on a sundial) is 12:44 in Belgrade and 14:30 in Sevilla. Obviously, the same error is present in the astronomical "middle of the night". This does not, in fact, create "legal issues" for Serbs or Spaniards.
In 600-1000 years, around the time that it would actually matter, we're going to have to reform the time system anyway to account for relativistic drift between the surface of the Earth and human settlements elsewhere in the solar system.
There's no need to "detach" from UTC. Just ensure that TAI (which is consistently free of leap seconds) is also supported on an equal status to UTC, for applications where it makes the most sense. Conflating the two would only increase confusion further.
Programmers can already do this if they want to. TAI already exists. But they'd have to still display UTC as civil time to end users and I'm pretty sure they don't want to do that either because it would mean just as much code.
The hard thing about TAI is that it's not properly supported in DBMSs, RFC 3339/ISO 8601, etc., which makes it hard to use. It's actually easier to use MJD represented as a double.
"Make two parallel time systems and allow conversion between one and the other programmatically" reduces spiritedly and unambiguously to "use one time system and care for it programmatically".
By precedent, UTC seems the logical choice for the one time system.
But the whole point of the OP is that UTC has leap seconds, which are hard to manage programmatically - and may even be impossible, wrt. future dates and times. That's literally the one relevant difference between UTC and TAI.
There is no need for UTC to continue inserting leap seconds. When they commit to stopping, everybody can relax: irritation removed.
Telling people to use TAI is telling them to have a different time from everybody else. The whole point of civil time is specifically that other people use it. Using TAI does not free anybody, because anytime you need to interact with outside, you are back in the nightmare.
Exactly. Discontinuing leap seconds is a 99.999999% compatible change.
I've tried to build systems using TAI, and they break down because: at some point you have to interact with something that doesn't use TAI, which fully reintroduces all the leap second issues; a lot of third-party software has its own leap second handling, so the wheels fall off when you update some component and its embedded list of historical leap seconds changes its behavior; and sometimes UTC time is all that's available, and without the leap second data you can't back it out to get TAI.
And with leap smearing, the challenges of backing out to TAI have increased substantially.
We live in a world where civil time moves by an hour 2x a year for no good reason.
You FAR overstate the impact on civil society of failing to change it by a second every so often.
Ironically, even astronomers, whom leap seconds were originally for, don't benefit, because they need to know the Earth's rotation accurately to sub-second levels anyway.
> By 2055, the "minute" displayed on a clock may be incorrect, which again may cause issues with legal timestamps.
I'm not following here. What defines "legal timestamps" in our current system? I'm unaware of any laws in the US that use the actual position of the sun to determine the time.
"Noon" when the sun is at the highest point, can vary over an hour across a timezone.
Way more than one hour. Even without taking China into account, A Coruña in Spain and Kosice in Slovakia are in the same time zone but they are 30 degrees (2 hours) apart.
A birthday is a legal timestamp. A car crash is a legal timestamp. When the time is off by a minute, these events can’t be catalogued correctly any more.
Shifting the timezone by a couple seconds does not prevent or hinder cataloguing events in any way whatsoever, certainly not more than switching to daylight savings time does or the mere existence of timezones, which may easily be half an hour or even more off from the solar time - the offsets we use for time are effectively arbitrary already, and adjusting the arbitrary choice of the offset by some seconds is not a fundamental difference. Event timestamps already map to different days depending on different timezones, you do need to know which timezone your clock is using, of course, but you already need to do that.
For people born just around midnight, especially around new years eve, a few seconds could impact their DOB by a whole year. This could affect everything from university applications to boating licenses to social security.
Some countries have boating license laws that are different depending on whether your DOB year is >= 1980, as an example for this type of "grandfathering cutoff".
You seem to think that what is being asked for is to retroactively remove leap seconds from UTC. That is not the case; all that is being called for is to stop adding more leap seconds.
Both can easily be placed on the same monotonic timescale. That actually makes things simpler: you don't end up with 31/12/1972 23:59:60 and wonder why there's a 60 there...
“Civil time” is also a construction that is flexible in many ways, so an influential group redefining it isn't out of the norm. Note that timezones were introduced for railway purposes, and some countries play around with them a lot.
For “midnight” being far from “the middle of the night”, that’s already a reality for many Chinese living far enough from Beijing, or god forbid regions where “night” doesn’t mean much for half of the year.
For all intents and purposes, if a formal definition of time isn’t practical people come up with their own ways.
> it doesn't take long - only until 2028 - until "midnight" is sufficiently far from "the middle of the night"
Honestly from my perspective, 3am is the middle of the night (night-morning-afternoon-evening starts at 0-6-12-18 for me) and somewhere between 4 and 5 most people are probably asleep and the date change should occur. I can't count how often I've heard people clarify what 'tomorrow' means when the word is spoken after "midnight" but before going to sleep.
But yeah gotta pick something for the date change, it won't be worth the cost of change now. If we do end up ever switching to something like decimal time, this should be on the todo list though.
And I know "midnight" is historically supposed to be about the sun being the furthest from its zenith rather than in the middle between when you go to sleep and get up, however that occurs somewhere around 1am here (01:41 at its extreme, from July 17 till August 5th). If that's not enough to warrant a redefinition, 27 seconds accumulated since we started counting leap seconds are also not enough to warrant an update yet (following Facebook's logic here).
* "Most telescope pointing systems fail" (by 2027) (with 5s deviation from earth rotation). Pointing systems cannot blindly rely on UTC anyway, since (a) even with leap seconds UTC is up to 1 second off earth's rotation, and (b) pointing a telescope depends on where the telescope is on earth, so some offset must be added to UTC by some human.
* Hypothesized legal issues... give me a break.
It would be much less trouble for humanity to deal with this once every 100 years or so.
These "problems" are trivial. The day changes at midnight which is 12:00 AM by the clock. There is no ambiguity. Midnight is not literally the middle of the night. The minute on the clock will be correct by definition, nothing will change. Sundials are already wrong. You'll need to try a lot harder to convince me that this is a bad idea.
All these arguments based on sun position make no sense in a world where people already live in places where the sun literally never sets or never rises for months, and people already live in time zones offset many, many hours from "correct" time. The sky doesn't fall!
I don't see how you run into legal problems. The break from one day to the next still occurs at a well defined time, 23:59:59 + 1 second, or 00:00:00. Midnight isn't the middle of the night (nor is noon exactly at solar zenith), except on 15-degree meridians anyway. What will happen is that over time, those "golden" meridians will shift slightly. The only people who will notice are those using time for celestial navigation. Terrestrial navigation, which is almost entirely done with GPS these days, won't be affected at all (GPS already doesn't use leap seconds). And, yes, sundials will gradually get out of sync and will eventually have to be adjusted to stay right.
> only until 2028 - until "midnight" is sufficiently far from "the middle of the night" that you will have to consider the legal issues caused by events that happen just before or after 0000 hours.
I can't follow your logic here. In any relevant context midnight has a definition, typically UTC midnight adjusted for the applicable timezone. Eliminating leap seconds would make the instant midnight occurs less ambiguous in 2028, because precise timing with leap seconds is strictly harder than without. (And one can independently realize a time that closely follows TAI, but one cannot independently realize UT1 without a VLBI radio telescope array, and one can't realize UTC-TAI without a data feed, because the leap-second decisions are subjective.)
This isn't just a question of 'some lines of code'. Leap seconds cause widespread disruptions even when they don't occur; they cause security vulnerabilities (and slower, less secure systems, because they make synchronization unreliable). People are widely deploying "leap smeared" NTP servers to try to prevent some of the worst synchronization faults, but doing so makes it impractical to back out leap seconds to derive TAI (or a more accurate TT) from the system's UTC, particularly because systems don't know whether they're leap-smeared or not (and different smear sources use different smearing parameters).
Please consider that none of this actually matters if we ditched UTC for TAI. For one, time zones still exist and local solar time is already decoupled from clock time.
Why does ntpd lose the smear on a restart? I would have thought that the current smear could be calculated purely based off current non-smear time, plus the config to say when to smear, which is presumably available upon restart.
Also, why were non-linear smears thought to be desirable? Googling just turns up hand-wavy phrases like "easier on clients".
That was my thought too. Pointing out why NTP smearing might be fragile is a crucial point in any argument against leap seconds, and the reasoning in this post is lacking there (regardless of the conclusion's correctness).
My only guess is that because smearing takes place at Stratum 2, if the network partitions part of the NTP servers downstream (Stratum 3+), they'll have an offset as large as T/(17 x 3600) (T being the partition duration in seconds).
Yet I guess it must be something else, for I cannot see why that wouldn't be tolerable.
More generally, AFAIK the NTP RFC does not include a smearing period, which is why the best practice is to only use smearing in a well-controlled environment rather than on public-facing NTP networks. But why is this not something that can be fixed? I'm not sure.
I can't quite reconcile the FB attitude of "we only hire the best and brightest after making them demonstrate their technical prowess" vs "computers are a bit hard please can everyone change everything to make it easier?"
Why not just agitate for a move to French Revolutionary Decimal Time as well?
This is not a case of "computers are a bit hard"; it's a system that we've imposed upon ourselves that has been repeatedly demonstrated to be unsafe. Because of this, we've ended up with many (many) solutions baked into software that attempt to abstract away the sharp edges of this problem from individual engineers, which leads to inconsistent assumptions about what you need to consider when writing code.
Even with the best and brightest engineers there is a non-trivial probability that someone will make an assumption that is invalid based on their understanding of what can happen (e.g. leap seconds never go negative!) or the library they're using (this ensures monotonic time!), and that could lead to disastrous results. And especially at Meta's scale, that probability is no longer "will someone make this mistake in our code?" but "how many times will people make this mistake in our code?", so systemic solutions that eliminate this as a class of problem an individual can create are something we should consider.
Is UTC as a system inherently unsafe or does it just expose unwary programmers to bugs, the vast majority of which are somewhere between benign and inconvenient in impact?
If you want a system with no leap seconds use TAI. This is not rocket science. I implemented this in my first real job for a broker trader in 2009 who had exposure in Japan and Australia (leap seconds happen at 10/11am there).
If a 21 year old grad can move 200 terabytes of historical data 15 years ago what are the best and brightest at FB doing with their lives?
whether you like it or not, people are using facebook (and google, and amazon, and so on) as critical infrastructure.
there are also all the other companies out there who do not have the platform that meta has that can also hit the same issues. i'm sure some of them have products you wouldn't be so glib about.
>whether you like it or not, people are using facebook (and google, and amazon, and so on) as critical infrastructure.
Only Facebook here is suggesting we should change things, so who is using Facebook as critical infra, and what is your definition of "critical" here?
My exact first thought as well, which I will readily admit comes mostly from my bottomless contempt for the company and its employees. The thing that really needs to be left in the past is Facebook.
A large part of the problem is that in software there is a traditional conflation of (a) time marks to measure elapsed time and (b) events where we want to know what their wall-clock date/time is. Those should in principle be kept separate. One can use a monotonic elapsed-time clock for the former and a calendar/wall-clock based clock for the latter. Conversion between the two shouldn’t be done gratuitously, and have to be done with the awareness that the mapping to future wall-clock dates (and sometimes also to past ones) is subject to change. APIs and data types reifying that distinction would be helpful.
As long as you have timezones that change over time, and e.g. DST changes, you have the complexity anyway. It is all simply an expression of the fact that (a) earth’s rotation and movement around the sun isn’t a steady clockwork with subsecond precision, and (b) timezones and calendars are a subject of political decision-making.
It’s therefore just not possible to easily equate coordinates of civil time with physical time, and the related facilities in software development shouldn’t project an illusion to the contrary.
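One way to reify the elapsed-time vs. wall-clock distinction in an API is to give the two kinds of time distinct types that the compiler refuses to mix; a minimal sketch (the type names are invented, not any particular library's API):

    #include <time.h>

    /* Elapsed-time marks: only the difference between two MonoMarks
       means anything. Backed by a clock that never jumps. */
    typedef struct { struct timespec ts; } MonoMark;

    /* Calendar events: a point in civil time, whose mapping to future
       wall-clock dates is subject to change. */
    typedef struct { struct timespec ts; } WallInstant;

    static MonoMark mono_now(void) {
        MonoMark m; clock_gettime(CLOCK_MONOTONIC, &m.ts); return m;
    }

    static WallInstant wall_now(void) {
        WallInstant w; clock_gettime(CLOCK_REALTIME, &w.ts); return w;
    }

    /* Subtracting two MonoMarks yields a duration; there is
       deliberately no operation mixing a MonoMark with a WallInstant,
       so the gratuitous conversions the parent describes become
       compile errors. */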
Geographic time zones are a completely separate matter. Dragging them in just muddies the water.
The topic here is worldwide standard time, literally the same for everybody, and whether a million programs obliged to make tiny adjustments to it on a random, unpredictable schedule was a good idea.
Anything fiddly and unpredictable a million programs have to get right will instead be got wrong.
This article makes the claim that the calculation of elapsed time is impacted by leap seconds, to the extent that an "elapsed" value computed from two successive clock reads may even come out negative.
This is not correct. Leap seconds are only the concern of how to render time in a human readable way. System time itself always passes one second at a time, and calls to time.now() interspersed with sleep(1) will always go up by one second at a time. It’s just that at some times of year we render that as 23:59:60 and some times we don’t.
But perhaps clock smearing is genuinely changing the way systems measure time, as opposed to smearing how the system renders t=1658818054.0 as hours, minutes and seconds? That seems implausibly incorrect. Lining up the minute hand with the noonday sun on one particular part of the planet should not possibly have any impact on measuring how long it takes to compile my code.
If your clock is perfectly synchronised to UTC, then this is not true. Two calls to time.now() one second apart will sometimes return the same value - the same count of seconds elapsed.
Here's what POSIX has to say [1]:
> 4.16 Seconds Since the Epoch
> A value that approximates the number of seconds that have elapsed since the Epoch. A Coordinated Universal Time name (specified in terms of seconds (tm_sec), minutes (tm_min), hours (tm_hour), days since January 1 of the year (tm_yday), and calendar year minus 1900 (tm_year)) is related to a time represented as seconds since the Epoch, according to the expression below.
> If the year is <1970 or the value is negative, the relationship is undefined. If the year is >=1970 and the value is non-negative, the value is related to a Coordinated Universal Time name according to the C-language expression, where tm_sec, tm_min, tm_hour, tm_yday, and tm_year are all integer types:
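> tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 + (tm_year-70)*31536000 + ((tm_year-69)/4)*86400 - ((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400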
> The relationship between the actual time of day and the current value for seconds since the Epoch is unspecified.
> How any changes to the value of seconds since the Epoch are made to align to a desired relationship with the current actual time is implementation-defined. As represented in seconds since the Epoch, each and every day shall be accounted for by exactly 86400 seconds.
> Note:
> The last three terms of the expression add in a day for each year that follows a leap year starting with the first leap year since the Epoch. The first term adds a day every 4 years starting in 1973, the second subtracts a day back out every 100 years starting in 2001, and the third adds a day back in every 400 years starting in 2001. The divisions in the formula are integer divisions; that is, the remainder is discarded leaving only the integer quotient.
The key point to hang on to is that in POSIX "seconds since the epoch", a day is always 86400 seconds long, and midnight in GMT is always an exact multiple of 86400 - and so the value of "seconds since the epoch" must necessarily be bodged to account for leap seconds. It is not a monotonic, linear measurement of the passage of time.
Another angle on this is to look at any date and time library you fancy. To convert from linear seconds-since-the-epoch time to a human-readable timestamp, they would have to obtain a leap second correction from somewhere, and apply it before doing formatting. There is no date and time library which does this. Because they all expect a nonlinear, UTC-aligned seconds-since-the-epoch.
Fun thought: we have only ever had positive leap seconds so far, due to the slowing of the Earth's rotation. However, we could have a negative leap second. The rotation has just been consistently slowing since we started caring, but it could speed up again in the future. We can predict rotation speed to a certain degree, but not completely. That is why leap seconds are announced only a few months in advance, instead of years.
I don't think we will ever need a negative leap second. We can just wait longer until dispatching the next positive one. The only situation in which we'd need a negative leap second is if Earth's rotation were consistently speeding up over many years. But we can tolerate some wiggle room (as in, several seconds) between UTC and TAI (since it's not in lockstep anyway).
UTC and TAI are already over 30 seconds apart, so not using a negative leap second to keep them aligned isn't really a valid argument. On a geological scale, Earth's rotation is slowing down, but on a decade scale it's still pretty chaotic. We could just as easily be having the reverse conversation. When we need a negative leap second, we may need a few in a row, so just waiting for a positive one to cancel them out doesn't really work. The logic that it will all eventually be a wash, so we shouldn't use them at all, is pretty much the argument the article is making.
The problem is that there are hardware devices and software applications which compute UT1 (the angle of the mean Sun) from UTC and from dUT1 (both of which are transmitted on various communication channels, e.g. by radio stations), and all of those expect dUT1 to be a number between 0 and 1 seconds.
If dUT1 can become either negative or larger than 1, all such hardware and software must be reviewed and possibly upgraded, to no longer make assumptions about dUT1.
Communication protocols may have to be changed, if there is no way to encode a negative dUT1.
> If dUT1 can become either negative or larger than 1,
Negative values should already be accounted for, afaict.
> UTC is maintained via leap seconds, such that DUT1 remains within the range −0.9 s < DUT1 < +0.9 s.
I can't imagine why it would ever be constrained to abs(dut) < 1, as that does not appear to be a hard spec anywhere, it's just the goal. But I could see some really stupid implementations, so maybe.
And god forbid we respond with "that's not a good solution." To restate, here's the problem, quoted from the original article:
> "As an industry, we bump into problems whenever a leap second is introduced."
Their suggested solution is:
> "As engineers at Meta, we are supporting a larger community push to stop the future introduction of leap seconds and remain at the current level of 27, which we believe will be enough for the next millennium."
Most folks are rightly pointing out that there are many other solutions that we could introduce, which wouldn't have the downsides of UTC drifting away from our wall-clock time. Facebook didn't even discuss those solutions though.
An example of another possible solution: programmers (especially programmers at Facebook) should stop using UTC and should instead use TAI (which is essentially UTC without the leap seconds). Indeed, treating UTC as nothing more than a wall-clock display format should trend towards the norm. Even though this is a clear truism, seeing it adopted would be way harder (due to language inertia and habits) compared with just changing the standard (never mind that changing the standard would break a ton of logic built around expectations like "midnight UTC falls on an integer multiple of 86400").
When the problem is “this is too hard for us (but apparently not our peers)”, a valid solution is not “let’s just ignore it, even though our idea would cause an enormous pain in the neck for the rest of the world”.
Why do you think it's not also too hard for their peers? Any company with an interest in time keeping spends tons of money dealing with time zones and leap seconds.
Because I haven’t seen the Google or Amazon proposal to ditch it. Perhaps they’ve written them and I don’t know about them, but I’m not aware of them.
We do lots of things that are hard because doing them right is often more important than doing them easily. Switching from ASCII to UTF-8 was a pain, but we did it. Software upgrades are a pain. Security infrastructure is a pain. Timezones are OMG such a pain. But in all those cases, we collectively said “welp, guess we’ve gotta do it”.
And what Facebook notably didn’t propose was a way to actually make this happen. Who’s going to project manage the global coordinated effort to migrate the planet to Facebook Time? That sounds like much more work than them just fixing their time handling.
It's not just Facebook: apparently in 2015 most countries wanted to drop leap seconds, though some wanted to keep them: "Most countries, including China, the United States and many in Europe, favour scrapping the leap second and basing UTC on the continuous tick of atomic clocks." -- https://www.nature.com/articles/nature.2015.18855
OK, so I do agree with that. It’s worth having a conversation about.
But I don’t feel like this rose to the level of an actual proposal. It was very short for asserting a claim with such wide-reaching implications as “we should start ignoring leap seconds”. As such, I don’t think it calls for an in-depth rebuttal.
Consider:
Proposer: “It’s time to leave Unicode in the past. It requires us to update every part of our system to deal with UTF-8 strings instead of much simpler ASCII, and we’re spending a lot of resources. Because it’s so hard and expensive, we should all use well-tested ASCII code. People who want to interoperate with our system can just rename themselves to use the Latin-1 alphabet.”
Everyone else: “No.”
Yes, it is hard, and people have spent a lot of, ahem, time and money to figure out how to manage this at scale. But there are real-world-tested approaches to dealing with the issue, and I firmly believe it’s better to work out and coordinate on the remaining rough edges than to throw the whole thing away to make a handful of engineers’ jobs easier.
TFA explains that it largely doesn't matter if we just ignore the leap second. It's additional complexity to our timekeeping systems that doesn't buy us much (if any) value. All the super-precise systems which would be impacted by being one second off ignore leap seconds anyways.
If the solution implies de-engineering society from first principles to satisfy a programmer’s desire for regularity, it’s more of a thought experiment than a solution.
The world is messy, life is messy, and so is everything else. If that’s too much for poor programmers, they need to find another job.
> Google, Microsoft, Meta and Amazon launched a public effort Monday to scrap the leap second, an occasional extra tick that keeps clocks in sync with the Earth's actual rotation. US and French timekeeping authorities concur.
> ... The tech giants and two key agencies agree that it's time to ditch the leap second. Those are the US National Institute of Standards and Technology (NIST) and its French equivalent, the Bureau International des Poids et Mesures (BIPM).
Weird, I came to a different conclusion after reading the article. There's already a graceful solution to non-monotonic time, which mitigates most of the problems: smear, don't leap. Only, it's not a universal solution, so various systems are out of sync during the smear. Solution: petition for a standardized smearing strategy. But yeah, leave "leap" seconds in the past.
And, maybe, don't run sub-second benchmarks with a wallclock.
If smearing were adopted as standard, the "seconds," "minutes," and "hours" appearing in timestamps would no longer correspond to literal seconds, minutes, and hours of duration, even in principle. That seems very misleading and bad.
Smearing is absolutely the worst of all possible choices. Instead of one second, you are out of sync with the whole world for all of 24 hours. And you are fooling with things for the whole period.
Of course it was Google who picked the worst of all possible choices.
But, its practical success does in many ways prove that mild inconsistencies between different time systems are...fine, and so a leap-less approach wouldn't cause any issues.
Can we not just fix our systems to run in TAI or GPS time, and convert to UTC (or the user's local timezone) when displaying timestamps, instead of causing civil time to drift off indefinitely? I thought these were the best engineers in the world, go fix the computers then!
That is what everybody ends up doing, in practice.
It is exactly the problem. Why should every software system everywhere have its own complicated, unpredictable, error-prone fudge that benefits literally nobody?
Humans want noon to be when the sun is overhead, and midnight to be the middle of the night. Almost nobody cares about sub-second accuracy or monotonic time. Track it that way internally if you like, but humans want time to correlate to what they see out the window.
This topic keeps coming up and I'm not sure I want to read all the comments just in case someone has written something that wasn't written last time it was discussed.
Personally, I would like to keep leap seconds. The only change I would make is to demand a longer notice period: 18 months instead of 6 months would be good. Presumably that would mean we'd have to tolerate a slightly bigger difference between UTC and UT1 but that seems all right.
It would be stupid and irresponsible to abolish leap seconds without deciding on and implementing an alternative way of keeping time, and therefore the calendar, in phase with the cycle of daylight. There are alternatives (leap minutes, redefining the second, ...) but to me they all seem a lot worse than leap seconds.
Of course, whatever we do in the future, software will still have to handle leap seconds for processing timestamps in the past so any change to the system would mean that most software gets more complex.
Situation: there's a problem, and our current way of coping is something we've done 27 times so far.
Reaction: let's stop using the system we have some practice at doing, and instead open up the question again, but really just have big tech corporations dictate to the world that we need to do something else.
Might be before many people's memory here, but Swatch tried to create an "internet time" in the late 1990s. Nobody uses it because it doesn't solve any problems anybody has, it's just different.
IMHO, leap seconds belong at the timezone layer. I.e. ignore it internally and for things like "seconds since epoch". Adjust at display time.
Timezones are already backed by a database which needs regular updates; including leap seconds there would make sense, since those are also updated in an unpredictable manner.
I have a solution that no one will like, but it's probably the least insane one in practice.
1. We move from using leap seconds to leap minutes.
2. We move leap minutes to timezone offset, because time zones are already a clusterfuck and you can't make them any worse anyway.
3. One-time adjustment to all timezones to get the system started.
Result: Z timezone is equivalent to TAI. When you want to account for planet-related BS, you apply a timezone offset.
You can definitely criticize this, but with a caveat: if you believe that Unix timestamp doesn't have leap seconds or that timezones only come in 1-hour increments you clearly have no idea how computer time really works and should probably read up on that first.
But why do we need leap anything at all? Leap years are useful because the calendar drifts quickly (well, over centuries) enough for seasons shifting to become noticeable. Leap seconds correct for a problem that’s much slower, with already existing much larger errors (time zones anyone?). It’s a science project with no useful application in the real world.
It would probably even be okay to make the corrections only in whole-hour or half-hour steps, to avoid odd timezone offsets. The Brits however won’t like losing the 0-offset GMT.
Maybe it's more acceptable if the public is told they're likely to get it back at some undefined time in the future. (Or is that too soon after they severed EU relations only to immediately try and get those relations back in order?)
This post brings back horrors from when I was trying to figure out from the logs why there was a sudden doubling of requests and orders at 2016-12-31 23:59:59. That is when I learned that Linux does not distinguish the leap second from the second before it. It led me down a rabbit hole of reading up on how a leap second is represented, and I found that there is no representation for it in a Linux timestamp. Fortunately for me, this was in December 2019, long after the fact. I can only imagine the pain SREs had to suffer on 2017's New Year's Day, on top of Cloudflare going down.
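The missing representation is easy to demonstrate (a sketch assuming glibc's timegm(), which normalizes out-of-range fields):

    /* 2016-12-31T23:59:60Z was a real UTC second, but Unix time has no
       slot for it: the leap second aliases the following midnight. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct tm leap = { .tm_year = 116, .tm_mon = 11, .tm_mday = 31,
                           .tm_hour = 23, .tm_min = 59, .tm_sec = 60 };
        struct tm next = { .tm_year = 117, .tm_mon = 0, .tm_mday = 1 };
        printf("%ld %ld\n", (long)timegm(&leap), (long)timegm(&next));
        /* both print 1483228800 */
        return 0;
    }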
If UTC is causing problems with your calculations due to leap seconds, then you should be using Unix time to do the calculations and then translating that to a human-readable format.
Imo society needs three distinct time counting systems.
1. Unix time. Essentially a universal addressing system for the measure of time at an agreed-upon reference point (Earth's surface). This should be the standard for science/computation, with the need for language to describe time dilation when comparing two reference points. Unix time doesn't have leap seconds, or the concept of days.
2. UTC aka civil time. Things such as the Earth's orbit around the Sun and its rotation don't proceed at constant speed and don't divide evenly into each other. UTC deals with this short-term variability via leap days and leap seconds. This is so that every January 6th the Earth is roughly in the same spot relative to the Sun, and every 5pm the Earth is in roughly the same part of its rotation. This is important because these things drift on human-scale timelines. This calendar should be used for daily life, business, etc.
3. A purely astronomic calendar. A calendar that defines time by astronomic events. For example, defining a day as one rotation of Earth and not as a number of times an atom vibrates. This should be used so we can discuss astronomical events such as “A Martian year” or a “Saturn Day” and provide some meaning. This is the basics that should be taught to elementary school children to establish the cultural meaning of a day or a year and to provide some basic learnings of nature.
If we used a unix-time-including-leap-seconds instead then the date/time conversion functions would need to have a little extra smarts (a table of leap seconds in addition to a timezone database) but most of the leap second related problems would not exist.
(We already deal just fine with months having variable numbers of days. Minutes having variable numbers of seconds is obviously awkwardly rare from a testing point of view, but not fundamentally more difficult to get correct.)
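A sketch of that 'little extra smarts': a truncated leap-second table plus an offset lookup (real code would load the full list from tzdata's leap-seconds.list rather than hard-coding it):

    /* Truncated table: Unix timestamp at which each leap second took
       effect, and the cumulative TAI-UTC offset in force from then on. */
    #include <stdio.h>
    #include <time.h>

    static const struct { time_t when; int tai_minus_utc; } leaps[] = {
        { 1341100800, 35 },  /* 2012-07-01 */
        { 1435708800, 36 },  /* 2015-07-01 */
        { 1483228800, 37 },  /* 2017-01-01 */
    };

    static int tai_offset(time_t utc) {
        int off = 34;  /* offset in force before the first entry above */
        for (size_t i = 0; i < sizeof leaps / sizeof leaps[0]; i++)
            if (utc >= leaps[i].when) off = leaps[i].tai_minus_utc;
        return off;
    }

    int main(void) {
        printf("TAI-UTC now: %d s\n", tai_offset(time(NULL)));
        return 0;
    }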
The fundamental difference is that we can predict well in advance how many days will be in a given month, and the rules for calculating that number can be implemented with a short lookup table and a few modulo operations.
The lookup table for past leap seconds, however, is already longer than 12 entries, and there is no way of calculating the length of all future minutes.
On the other hand, the problem may be simpler than keeping track of changes to timezone definitions, since there are hundreds of jurisdictions that can unilaterally change those.
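For contrast, the entire "short lookup table and a few modulo operations" for months fits in a few lines:

    /* Month lengths are fully predictable: a 12-entry table plus the
       Gregorian leap-year rule. Nothing comparable exists for the
       length of future minutes under leap seconds. */
    static int days_in_month(int year, int month /* 1-12 */) {
        static const int len[] = { 31, 28, 31, 30, 31, 30,
                                   31, 31, 30, 31, 30, 31 };
        int leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        return len[month - 1] + (month == 2 ? leap : 0);
    }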
Never gonna happen because there are competing vital interests:
0. Human diurnal local time
1. Universal time similar to 0. but without local offsets - UTC (UT1)
2. Universal monotonic time (within comparable gravitational reference zones) - TAI
These already exist. There is nothing to add or remove; we just need to do things differently, as has already been recommended, rather than sticking with insane traditions for another 25 years wherever possible. UTC (UTn) and GMT are insane for timestamping.
Use TAI essentially everywhere and convert it later to UT1 or local time. If you fail to do so, you risk discontinuities and security vulnerabilities. This is why you should and must use TAI64[N[A]] rather than UTC, because UTC is not monotonic. You can get UTC back out of TAI by downloading leap-second tables and doing the math yourself (UT1 additionally needs the IERS earth-rotation data). It is rare to find UTC-centric code implemented correctly.
This isn't rocket science, it's first principles of recording data that doesn't vary nondeterministically in forward time.
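For reference, a TAI64 label is just the TAI second count offset into the upper half of a 64-bit space, per djb's spec; a sketch (the sample input is an assumed TAI second count, and the @hex rendering follows the tai64n log convention):

    /* TAI64: label 2^62 + N names the TAI second beginning N seconds
       after 1970-01-01 00:00:00 TAI. TAI64N appends nanoseconds. */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t tai64_label(int64_t tai_secs_since_1970) {
        return (UINT64_C(1) << 62) + (uint64_t)tai_secs_since_1970;
    }

    int main(void) {
        /* e.g. a TAI second count obtained elsewhere */
        printf("@%016llx\n",
               (unsigned long long)tai64_label(1483228837));
        return 0;
    }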
I’m reminded of the most excellent write-up on qntm.org about abolishing time zones. If you haven’t read it, and you’re one of those kinds of folks (I was!) this will quickly sober you up :-)
>Leap second events have caused issues across the industry and continue to present many risks. As an industry, we bump into problems whenever a leap second is introduced. And because it’s such a rare event, it devastates the community every time it happens. With a growing demand for clock precision across all industries, the leap second is now causing more damage than good, resulting in disturbances and outages.
Translation: We suck at computers and think you all do too. So let's just bury our head in the sand and make some cash before it all falls down.
I'm being glib, but come on guys. You're trying to tell me that your crummy programming and cash flow is more important than the literal spinning of the Earth.
You can try all you want, but the sun will set when it does, your clock be damned.
I may be a rare person on HN that actually has dealt with these leaps from the 'source'. I used to work for a picosecond-accurate timing company and had to go through the microsecond adjustment due to the March 11th, 2011 earthquake in Japan. The company was a backup site for the NIST clocks. I've even worked with the second at NIST. Yes, that little port at about eye level that the whole US uses as its second. You can't take things out of the room (like asbestos) and you can only be in there for short periods, it's that sensitive and important.
All that said, the Earth is a really crummy clock! But guess what? You can't fix that. It's mother nature's world and we just live on it.
If FB thinks that trying to get everyone to quite literally 'not look up' is going to work in the long term, they are in for even more headaches than they already have. Many, many industries and sectors must use hyper-precise clocks, and they must be aligned with the Earth's rotation. If they are not, then they will drift pseudo-randomly apart, as all clocks do. And then you're screwed when you try to get your own little clock system to talk to other clock systems that have been drifting too. You think you have issues now with imagined negative leap seconds? Wait until you try to define time.
In general, I hate when people doing people things gets replaced by bureaucracy doing bureaucracy things. Structure should support people, not the other way around. That includes time. If people want midnight (on their clocks) to match midnight (in their lives) or whatever, then the structure and code we put to it needs to support that. That includes changing things like time zones and calendars and leap years, and yes, leap seconds too.
Just because we suck at codifying things doesn’t mean we should change the goal to make it easier to codify. If you need a more straightforward clock in your software, use just TAI or something, instead of trying to redefine calendars.
Isn't this why we created the Unix epoch anyway? The number of seconds since an arbitrary event, with no regard for the movement of celestial bodies?
And isn't the Unix epoch derived from some sort of atomic clock standard? I can't remember, but it seems all of our system time should be atomic time; then you can derive UTC or whatever you want from that.
Absolutely nothing prevents Meta and the other big boys from sponsoring a time standard they like for their internal systems, or just using one of the existing ones which doesn't have leap seconds. Recode your APIs to emit that standard; if you still have to use GPS time, then build a box which syncs to GPS and deletes leap seconds (though why bother? You can buy rack-mounted atomic clocks these days which will stay accurate for hundreds of years).
This is, in every way, not a problem they have to cooperate with anyone else about.
EDIT: also, Unix epoch time has never had leap seconds. The standard is defined as UTC exactly at the start of the Unix epoch, ticking at one second per second since then. So from that perspective, and the perspective of every sensible internal clock system in programming, you already don't have a problem.
> As already mentioned, the smearing is a very sensitive moment. If the NTP server is restarted during this period, we will likely end up with either “old” or “new” time, which may propagate to the clients and lead to an outage.
That seems to be a solvable engineering problem. One could make sure the server is up to date on all recent information before taking it back into service. It's a similar problem to making sure that cache servers with invalidated data don't go back into service before their cache is updated, which is table stakes for services like CDNs.
I guess the problem here is public servers that can't run software that understands doing that? I see the challenge for those, but maybe something like an improved version of NTP could fix it?
What if we dropped leap seconds completely, but for display and input used continuous interpolation, with factors published every year, a year in advance (so that mostly-offline devices stay accurate)?
Imaginary civil days would always have exactly 3600×24 seconds. We could always convert any past date easily with a small table. We could convert any future date with reasonable accuracy just based on extrapolation.
No ambiguity, no jumps, we still stay roughly in sync.
We could even just use 10 year blocks so that in the short term, most projects would have the entire future conversion factor timeline known at the start.
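Purely to illustrate the scheme being proposed here (all names and numbers invented): internal time runs uniformly, and display time adds a pre-published offset interpolated linearly across each block:

    /* Sketch of the proposal above: display time = internal time plus
       a published, pre-announced offset interpolated across a block. */
    typedef struct {
        double t_start, t_end;      /* internal-time block boundaries */
        double off_start, off_end;  /* published display offsets at each */
    } Block;

    static double display_offset(const Block *b, double t) {
        double frac = (t - b->t_start) / (b->t_end - b->t_start);
        return b->off_start + frac * (b->off_end - b->off_start);
    }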
The problem is that UTC, despite its name, is not a time (it is an angle rounded to a multiple of 1/21600 right angles).
Only TAI is a time.
Already decades ago, some programmers argued that internally all computers and their software should use only TAI, which is a time and so behaves as expected, while UTC should be used the same way as local times and time zones: only at the interfaces with humans.
If that proposal had been adopted, there would be no discussion now about the leap second.
The only reason for the existence of UTC is to keep dUT1, i.e. UT1 - UTC, under 1 second.
If the leap second is eliminated, then dUT1 will grow over 1 second, so there are other cohorts of hardware devices and software applications that must be updated in order to no longer rely on the assumption that dUT1 is less than 1 second.
Anything that has any relationship with astronomy, e.g. for observations or for navigation, relies on computing UT1 (i.e. the angle between the mean Sun and Earth) from time, so it would have to be updated.
Changing the definition of UTC would make it completely redundant. Instead of giving up on adding leap seconds to UTC, it would be much better to make a final leap of half a minute and have TAI = UTC after the date of the big leap.
Having 2 identical times with an arbitrary offset between them would just add needless complications to all time-related software, forever.
Except, astronomers do not use UTC now. So its sole supposed benefit completely misses the tiny slice of people it is supposed to be for.
UTC has exactly one purpose: to be a common reference for everybody else. Making it not mess up everybody else, every year, is obviously the better choice.
You may stop beating your spouse now, and it will be purely an improvement.
Having made the mistake 27 times already does not justify making it even a single occasion more, never mind forever.
> The problem is that UTC, despite its name, is not a time.
Could you elaborate on what you mean? What is UTC if not a time? Like, datetimes vs time?
Even if Unix time were based on TAI, you'd still need leap seconds in order to resolve the correct local time (which is based off even-minute offsets of UTC). Maybe it's easier to deal with in calculation post hoc vs. smearing on the day of, but that handful of accruing seconds will always be there.
UT1 is an angle, i.e. the longitude of the mean Sun projected on the Earth.
The mean Sun is a fictitious Sun with a motion that is averaged in comparison to the real Sun.
UTC is the UT1 angle rounded to a multiple of 1/21600 of a right angle, i.e. the angle corresponding to 1 second of time for something that completes a circle in 1 day.
Because of this rounding, actually a truncation, UT1 = UTC + dUT1, where dUT1 is between 0 and 1 "seconds".
Even if UT1, dUT1 and UTC are expressed in "seconds", these "seconds" are not the unit of time, but like I have said, such a "second" is 1/21600 of a right angle or pi/43200 radian.
Both UT1 and UTC are angles that approximate the longitude of the projection of the Sun on Earth, with various accuracies.
Because the rotation of the Earth, which causes most of the apparent motion of the Sun, is almost uniform, the angle of rotation is almost proportional to the time, so the angle UTC is almost proportional to the time, i.e. to TAI.
Because of the way the "second" angle used for UT1/UTC is defined, the approximate proportionality becomes an approximate equality of TAI with UTC; but because the equality is only approximate, there is an increasing offset between them.
In ancient times, people did not care much about time, but only about the angle of the Sun, which determined when it was light or dark, hot or cold, enabling or preventing various activities.
In modern times with artificial lighting and heating and with many activities that proceed at predictable rates, time has become much more important than the angle of the Sun.
In any case, in all contexts we must be aware that the angle of the Sun and the time are not the same thing, even if they are almost proportional, i.e. almost equal after a change of the measurement unit.
The angle divided by the rate of angular change sure as heck sounds like units of time to me. The absolute angle isn't time, but the rate of change is steady, and UTC still tracks with that rate of change, with a phase angle we periodically twiddle.
That's like saying a watch or even an atomic clock doesn't measure time, it measures oscillations. It's being pedantic to the point of obtuse.
Eventually we will have to get used to having two clocks, slowly diverging.
It’s inevitable because we don’t want hours of slip on Earth and eventually we will move to other planets which certainly don’t want Earth leap seconds.
There is literally no aspect of society, science, or culture that I’m willing to buy into changing to make things easier for software developers and tech companies (or in fact any companies, but only tech companies seem to be arrogant enough to suggest it). What an entirely backwards way of looking at the world.
If we can’t deal with IT that improperly models the universe, the right answer is not to change the world to be more like the shitty naive model the software implements, it is to be less dependent on the technology agreeing 100% with reality.
But what benefit are leap seconds to "society, science, or culture"? I've never seen one.
If the current drift were to continue, we're talking about a drift of 9 minutes every millennium; less than an hour over all of recorded history (and it most likely won't continue as-is; sooner or later we'll get negative leap seconds).
The current situation doesn’t need to benefit society to oppose changing it for Facebook. Society, science, and culture are not the result of some utilitarian optimisation function, and that is precisely the reason to oppose changing them simply to make some company’s engineering slightly easier (or even a lot easier).
So merely because "Facebook said so" it should now be changed? A curious argument, especially considering many people have made similar arguments for years and the ITU has been considering abolishing it for a decade.
It sounds like you would have been opposed to the introduction of timezones? They made everyone switch away from local solar time, primarily for the benefit of railway timetables.
This creates a fundamental problem. It is now universally agreed that atomic clocks provide a much better measure of time than does the Earth's rotation, yet for human comfort, we would still like time to stay in sync with the Earth's rotation. How to accomplish this?
Can't we just bifurcate? Machine time and human time.
Meta is not arguing against adding the leap second; they just argue that skewing the clock slightly over some hours avoids the 'invalid' time with 61 seconds in a minute and the resulting software problems, while keeping the time reasonably accurate for the general public and computer systems.
I'm almost surprised that, after explaining conservation of angular momentum, they didn't propose a bonkers solution like building a pipeline from the sea up a mountain to create a massive artificial glacier to directly regulate the Earth's angular velocity.
It's bonkers, but so is just asking metrologists to stop being so darned concerned with the accuracy of time measurement because it's causing Facebook to spend money they'd prefer not to spend.
Doing away with leap seconds does not solve the problem. It only transforms it into a different problem. It's addressing the same problem that February 29th speaks to. You can do away with leap seconds, and you can do away with February 29th. But doing so won't change the fact that the length of a year is not a whole number of days, nor will it change the fact that the length of the year is non-stationary.
So what's the alternative? The leap second is capturing an actual skew between uniformly-ticking clocks based off of atomic decay statistics and the rotation of planet Earth. Stop updating them, and we end up with the sun going down for equatorial latitudes at 3PM eventually.
This feels very can-kicky, even if the can can be kicked a thousand years down the road. I don't think more can-kicking is really the best solution.
So their complaint is that one-second skews happen infrequently enough that they are always a headache, and we want to "solve" the problem by... replacing them with an even less frequent time modification that introduces a larger delta?
I've been on the front lines of adapting code for a novel timezone change. It wasn't pretty.
It is not, ever. Doing it every year or two imposes just as much disruption every year or two. A disruption once a century is at least 50 times less disruption.
FWIW, Facebook did not use UTC when I was there. They used Pacific time everywhere, which was even worse. And half of the internal tools would convert to local time but the other half wouldn't. I just kept my work machine in Pacific time even though I'm in Eastern, so that compiling incident timelines (for example) would be less tedious and error-prone.
It’s time for FB to adopt TAI or UT1 and stop moaning.
Wtf is this attitude of modifying the definition of time to fit your engineering needs when there’s a perfectly good alternative quoted right in the article?
I really don’t get it. If you don’t like leap seconds use a definition of time that does not have them. Why modify the only standard with leap seconds and annoy everybody else?
Is there a single change that could ever be proposed that would be considered worth it? Status quo bias is evidently very strong among people. With that in mind, I can only conclude that Mark Zuckerberg is some kind of organizational genius with both his "move fast and break things" and his "move fast with stable infra" lines.
Whoever made that first line chart should have used step interpolation. There is no point in time when a non-integer number of seconds had been added, but the chart gives the impression the adjustment was smeared over the whole year.
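Something like this, presumably. The five dates below are the real most recent insertions; everything else about the chart is just a sketch:

    import matplotlib.pyplot as plt
    from datetime import datetime

    # The five most recent leap-second insertions.
    leap_dates = [datetime(2005, 12, 31), datetime(2008, 12, 31),
                  datetime(2012, 6, 30), datetime(2015, 6, 30),
                  datetime(2016, 12, 31)]
    cumulative = range(1, len(leap_dates) + 1)

    # where="post": the count stays flat until the instant of insertion,
    # then jumps by exactly one -- never a fractional leap second.
    plt.step(leap_dates, cumulative, where="post")
    plt.ylabel("cumulative leap seconds (last five)")
    plt.show()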
"Facebook developers are apparently so frightened of a negative leap second that they think altering the way global timekeeping works is easier than fixing their code"
Engineers whining about having to do work that was designed in the past by someone else and coming up with a "creative" hack which will be a headache for someone in the future.
You wholly miss the point. A million programmers all have to get a fiddly, hard-to-test detail right, unanimously. Both those that do and those that don't periodically fall out of sync with the other half, on a random schedule imposed for no earthly purpose.
Dropping any new leap seconds is choosing not to break a million systems, irregularly, to no purpose. No code needs to change. Things just stop breaking.
The hubris. If the definition of time is modified to match the buggy giant code base I don’t want to touch then I have no work to do. Genius!
It’s akin to saying I want 2+2=5 because my code happens to work that way and it’s hard to fix.
If FB hates leap seconds so much they can switch to TAI or Unix Epoch time. It’s not like UTC was modified to add leap seconds. It’s always been like that. It should not have come to them as a surprise.
Switching to something else puts them out of sync with everybody else who didn't. Same for anybody else. There is no possibility of "switching". Suggesting otherwise only proves you don't understand what this is even about.
"very evidently"?
Where is this evidence that I haven't thought it through that you speak of?
As I hinted, I'm sufficiently experienced with precise timestamping to know that TAI is the proper way there, and that "improving" UTC is not going to help.
Their suggestion is that UTC should essentially be the same as TAI because there is code that makes incorrect assumptions about what UTC is. If you need a monotonic clock, there is already something for that. UTC is for the actual local time on Earth.
It took Facebook/Meta only like ~20 years to figure out they shouldn't rely on time to keep things in sync? /s
If you've delved into distributed systems, I would be surprised if you never hit the case of "hey, 1/k of our computers think they are 1-2 hours behind" (because of NTP issues).
You could simply be working on a single computer and have to deal with time issues. (E.g. Local clock resetting to a prior date and system creating files timestamped earlier than older files.)
I think the leap second is the most minor, not even worth mentioning problem. It is actually to our benefit, reminding us that (perceived) UTC is not a monotonic measure and computer time altogether is not. Computers nowadays adjust their time ~constantly.
That they have to smear their leap second over multiple hours, and that computers having the wrong time by less than a second leads to an outage, honestly troubles me. What you are telling me is that next leap second we should all pray for Meta's NTP servers not to go down???
Also: (if you are an fb engineer) What do you do with negative leap seconds? Speed things up...? (https://www.timeanddate.com/time/negative-leap-second.html) The expectation is for a negative leap second coming up -- as the post does mention.
This is an education problem if you expect time intervals to always be strictly positive (or time to never jump ahead). TL;DR: systems should never use the wall clock to compute a time interval, even if we get rid of leap seconds.
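A minimal illustration of that TL;DR in Python; do_work is a placeholder for whatever you want to time:

    import time

    def do_work():
        time.sleep(0.1)  # stand-in for the operation being timed

    # Wrong: wall-clock arithmetic. An NTP step, a manual clock change,
    # or a leap second can make this interval negative or wildly off.
    t0 = time.time()
    do_work()
    elapsed_wall = time.time() - t0  # can even go backwards

    # Right: the monotonic clock is immune to clock steps and leap seconds.
    t0 = time.monotonic()
    do_work()
    elapsed = time.monotonic() - t0  # guaranteed non-negative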
I'm really confused by the common complaint in this thread about the arrogance of programmers wanting to change time for everyone else to make it simpler for themselves.
No, in today's world time is a programmers' problem. Nobody except programmers (and some very technical software product managers, e.g. astronomers) needs to deal with time issues like these.
Who builds systems that need to be synchronized to the microsecond level or more? Programmers. Who makes calendar software that should support timezones and DST correctly? Programmers. Who runs astronomy calculations that really care about UT1? Programmers working for astronomers.
Every time someone has an issue that requires very delicate handling of time, there is a programmer implementing the solution.
So of course it would be a group of the largest software companies, responsible for software like Outlook and GCal and public clouds, who would get together to say "listen, world, this leap second thing is more trouble than it's worth, how about we just stop fiddling with it?" They're the people responsible for implementing everyone else's time handling requirements. I think we should listen to them.
Another confusing argument here is "let them just move from UTC to TAI, and thus move leap seconds to be a display issue, like timezone best practice".
In a vacuum, this could have been a great solution. But we live in a world where almost all existing software is based on UNIX Epoch Time, which is defined in terms of UTC and is therefore tweakable by the leap second committee. The breakage usually occurs either in that existing software, or in sync issues between it and new software written at Meta or MS or wherever. So moving all their centrally managed systems to TAI, while leaving us programmers of the less hierarchical outside world to keep suffering, wouldn't solve much, and also wouldn't be very nice.
Instead, they're writing a letter saying "dear leap second committee, we are your largest users and our use case is representative of everyone else's, can you please stop fiddling with the standard agreed time, it's causing problems for everyone. You can get the same effect by twiddling with display time (timezones), which we already have to support anyway and is much easier to test."
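For reference, the UTC-to-TAI relationship the "just use TAI" camp points at is a simple table lookup. A sketch with a truncated table (the full table is published by the IERS and currently ends at 37 seconds; the helper name is illustrative):

    from datetime import datetime, timedelta, timezone

    # TAI - UTC after each recent leap second (truncated).
    TAI_MINUS_UTC = [
        (datetime(2012, 7, 1, tzinfo=timezone.utc), 35),
        (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
        (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
    ]

    def utc_to_tai(utc: datetime) -> datetime:
        offset = 34  # value in force before the first entry (2009-2012)
        for effective, seconds in TAI_MINUS_UTC:
            if utc >= effective:
                offset = seconds
        return utc + timedelta(seconds=offset)

The catch, of course, is that this table grows every time the committee acts, which is exactly the distribution problem the letter is complaining about.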
My favorite design principle is the Representable/Valid principle: your representation/model of the world should have one way to represent each valid state, and should not have ways to represent invalid states. Let's get together as a species and refactor our model of time to have one less case of two states sharing the same representation!
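Leap seconds are a textbook violation: during a positive leap second, POSIX-style seconds-in-day arithmetic maps two distinct instants to one number. In Python, calendar.timegm does that naive arithmetic without range-checking the seconds field, so the collision is easy to demonstrate:

    import calendar

    # 2016-12-31T23:59:60Z (the leap second) and 2017-01-01T00:00:00Z are
    # different physical instants, but they collapse to one Unix timestamp.
    leap = calendar.timegm((2016, 12, 31, 23, 59, 60))
    after = calendar.timegm((2017, 1, 1, 0, 0, 0))
    assert leap == after == 1483228800  # two states, one representation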