Computer dates and times

From Helpful
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me)

Broadcasted time synchronization


  • GPS - almost anywhere (not so easily indoor, though), but is a relatively pricy way of getting just time synchronization

Network time (NTP)

Basic timekeeping hardware (e.g. PC internal clocks) has speed inaccuracies that may be on the order of a second per day, which amounts to a few minutes per year.

Not bad, yet there are certainly cases where that matters.

NTP (Network Time Protocol) lets computers synchronize their clocks over a network, by pointing them at a reference server.

The accuracy of NTP varies most with network latency and congestion. Raw RTT is one thing, but it's mainly the variation in latency that matters. A client can detect that conditions are bad by seeing these numbers vary, but can't really correct for it much better than taking an average.

Over the modern internet, you can usually expect it to correct to within a few milliseconds of a reference, and keep it there, which is more than enough for most uses.

There is also SNTP, essentially a simplified client that is easier to implement, seen where you want to sync clocks but high-accuracy timing is not important - e.g. it's more than good enough on embedded devices that have a network connection but no RTC.

The protocol is the same, and SNTP clients often connect to the same NTP servers(verify), but they ignore drift and typically use a simpler method of clock adjustment.

Some terminology

  • Stratum means "how many hops to a very good time reference", because if there is a long chain of servers each adding a few milliseconds of error, that adds up.
Stratum 0 are clock devices, which reference from an atomic clock, GPS, CDMA, WWV, DCF77, or such, but are not themselves networked devices.
Stratum 1 are the networked servers that are directly connected to stratum 0 devices (implicitly with very stable latency). They are the best networked reference you can get.
Stratum 2 are servers that connect to stratum 1 servers
Stratum 3 are servers that connect to stratum 2 servers
...and so on. NTP is an effectively hierarchical system (though same-stratum servers can also connect for other reasons)
The upper limit for stratum is 15.

So we want to find a low-stratum server.
It's gotten simpler and cheaper to run stratum 1 servers, so there are now more of them around.
This also means there are now enough stratum 2 servers that you get pretty precise time without thinking much. Things like the NTP pool project are great for this, because they effectively hand you a shortlist of probably-decent choices, probably near you. That means you can get roughly millisecond sync without much thought (down from maybe a dozen milliseconds for an uninformed choice without such help).

  • delay - networking round-trip time (a.k.a. RTT, ping time) to an NTP server
this tends to be at least 7..10ms, just because of broadband latency
not itself the most important metric of a server, but higher latency tends to mean further away and correlate with higher jitter

  • offset (sometimes 'phase')
the difference between the reference time and your system clock(verify)
...which can only be estimated, as it varies with your networking performance
offset involves measurement and calculation, and is a relatively instantaneous figure - plotted over time it would look jittery, not smooth.
This is why you don't want to indiscriminately adjust to every offset you measure. It's relevant to various behaviour and implementation details, and a major reason behind the slow adjustment. Read the NTP documentation if you want all the hairy details.
  • jitter (sometimes 'dispersion') - variation in received/calculated offset (verify)
which is related to network jitter but also NTP's cleverness
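The offset and delay above come from the four timestamps in a single request/response exchange. A sketch of the standard NTP calculation (the function name is mine):

```python
# Sketch of the standard NTP offset/delay calculation, from the four
# timestamps of one request/response exchange:
#   t0 = client sends request     (client clock)
#   t1 = server receives request  (server clock)
#   t2 = server sends response    (server clock)
#   t3 = client receives response (client clock)

def ntp_offset_delay(t0, t1, t2, t3):
    """Return (offset, delay) in the same unit as the inputs.

    offset: estimated server-minus-client clock difference,
            assuming the network path is symmetric
    delay:  round-trip time minus the server's processing time
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: client 5 ms behind the server, 20 ms RTT, symmetric path:
off, dly = ntp_offset_delay(100.000, 100.015, 100.016, 100.021)
print(off, dly)   # ~0.005 (5 ms offset), ~0.020 (20 ms delay)
```

The symmetric-path assumption is exactly where the jitter mentioned above bites: if the outbound and return latencies differ, that error lands directly in the offset estimate.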

What happens to your computer's clock


Since a lot of programs do timing of some sort, correcting the clock disrupts that timing's accuracy.

For example, stepping refers to simply setting the clock to the new time. That jump is likely to make measured intervals larger than sensible, or even negative, which can cause odd behaviour in things like timeouts.

Stepping may be sensible when your current time is very inaccurate - including hardware that has no RTC, which can still have accurate time by running NTP (and stepping to correct time as part of boot).

Slewing means intentionally tweaking the clock speed a little so that the time will crawl towards the intended reference time within reasonable time.

This is the least disruptive option for programs that do timing: the time is guaranteed to be monotonic (it never jumps back to a past value), and measured intervals during slewing are inaccurate by only a tiny amount.

As such, slewing is preferred whenever it is possible, and ideally ntpd will only ever need to slew (once it is actively correcting).

By default, NTP will refuse to act when the clock is set but is more than about fifteen minutes wrong, because that is assumed to be a strange situation that an admin needs to look at.

Servers with huge offsets will never be selected, and if a selected server's offset becomes huge, ntpd will quit.

When looking at servers with small-to-moderate offset, it will take some time to estimate the quality of the time source.

Once (and while) a server is selected, NTP seems to keep a local-to-server offset in mind for a while (rejecting this time) before doing anything (why?(verify)).

Once it wants to correct, ntpd will either step to it (for offsets >128ms), or slew to it (if smaller).

Ideally, once NTP is running, the offset will never become high enough for a step to be necessary, but it can happen.

You can continuously monitor offset, to get an idea of the accuracy you're getting.

NTP accuracy


Stratum-1 hosts can themselves be within ~10 microseconds of UTC.

Each stratum above, being a network connection, may add on the order of perhaps a millisecond.

Exactly how much depends on a bunch of factors, that you cannot know precisely.

Since stratum 1 servers are relatively scarce, and each server can only serve so many hosts, it's much easier to find stratum 2 and 3 servers.

You can expect your clock to become correct to within a millisecond on average.

For a real example, a server on my home broadband has a ~15ms RTT to its NTP server, but the jitter in that RTT averages to roughly 1ms (less than half that when the dozen people it serves aren't watching video), which allows synchronization to within ~0.3ms or so(verify).

Note that it may take hours to days for the offset to the reference to become as low as it can get - not because we don't know the right time, but because NTP typically tries not to jump the clock back and forth (which might confuse code that does short-term timeouts), instead spreading the adjustment over many tiny ones when it can.

Basic ntpd setup (unices)

(Optional) Make sure your system's timezone is correct. Not really a requirement, but probably helpful for you.

(Optional) Configure the time servers you want to use. See #Servers below. Optional because the default pools are usually fine choices.

Set your system clock to within a few minutes of accurate time OR tell the NTP client the first correction is allowed to be huge

When the time difference is more than about fifteen minutes, NTP clients may refuse outright, figuring it's a problem that a person needs to look at.

Platforms without an RTC (e.g. Raspberry Pi) will typically need to say "the first correction is allowed to be a very large jump" on every boot.

Most other setups can get by with only ever doing very subtle corrections. But if you've just powered a machine on for the first time, you may still want to correct with a big jump, once ever. It should never drift enough to require that again.

You can set date and time manually, but since you've probably just set up NTP to work,
ntpd -g -q
is probably simpler (-g allows big changes, -q makes it quit immediately instead of running as a daemon).

Enable and start the ntpd service, check that it works, and then feel free to forget about it.

The simplest check that it works is probably to query localhost about its peers, using ntpq, e.g.:

watch -n 0.5 ntpq -pn localhost
It may take a few minutes before it shows useful statistics for all peers, and before it starts synchronizing with one (indicated by a * as the first character on that line).

Because ntpd corrects slowly, expect the initial adjustment to take (order of magnitude) half an hour for a few minutes of difference.

If ntpq gives you:

  • localhost: timed out, nothing received
    , then you probably have an overly strict firewall keeping you from connecting to your ntpd (e.g. not trusting localhost, and dropping the packets), though it can also indicate rejection by ntpd itself via its configuration.(verify)
  • ntpq: read: Connection refused
    , this may mean ntpd isn't running (possibly because it quit because you started it with a clock more than 15 minutes off), or that it is not configured to allow you.


Servers

The exchange of time can be done as

  • a client to a listed server (probably the most common)
  • within a network via broadcasts or multicasts
  • between a set of peers that will synchronize with each other (useful for fallback/redundancy)

Assuming you want to pick a server to synchronize with:

You can hand-pick low-stratum servers near you. Or you can be lazy and use the pool servers for almost the same effect - the pool uses DNS tricks to resolve its names to geographically close hosts. Because the pool keeps track of which hosts are actually up, it also means less time worrying about NTP servers that went offline.

In the case of ntpd, having the following is a simple start:
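For example, a minimal ntpd configuration pointing at the public pool (these are the pool project's real hostnames; iburst just speeds up initial synchronization):

```
# /etc/ntp.conf - minimal example using the NTP pool project's servers
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
```

Listing several pool names gives ntpd multiple candidates to choose among, which helps it reject an occasional bad or distant server.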


You can also use specific subdomains to narrow down the options to things that are on the same continent, or in the same country. This may help it find better options sooner.

See pages like the zone listings on pool.ntp.org.

Note that the numbers in names like 0.pool.ntp.org are not strata.

See also


  • RFC 5905, "Network Time Protocol Version 4: Protocol and Algorithms Specification" (2010) (current)
  • RFC 1305, "Network Time Protocol (Version 3) - Specification, Implementation and Analysis" (1992) (now obsoleted)
  • RFC 1119, "Network Time Protocol (Version 2) - Specification and Implementation" (1989) (now obsoleted)
  • RFC 1059, "Network Time Protocol (Version 1) - Specification and Implementation" (1988) (now obsoleted)
  • RFC 958, "Network Time Protocol (NTP)" (1985) (now obsoleted)


Common date formatting

mm/dd versus dd/mm

The tendency to write things like 02/06/2009 is internationally problematic.

A lot of the world would read it as June 2nd, 2009 (dd/mm/yyyy), yet the US, Canada, and a handful of other nations may easily read it as February 6th, 2009 (mm/dd/yyyy).

This is only unambiguous when the day is the 13th or later, so when you don't know the nationality of the writer, roughly 40% of all dates are ambiguous (worse if the year is also 2-digit). So please don't do this.
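That percentage is easy to sanity-check by counting the days where the two readings disagree (treating day == month as harmless, since both readings then give the same date), which lands a bit under 40%:

```python
# Count the days in a (non-leap) year for which dd/mm and mm/dd give
# different dates: ambiguous when day <= 12 and day != month.
import calendar

ambiguous = sum(
    1
    for month in range(1, 13)
    for day in range(1, calendar.monthrange(2009, month)[1] + 1)
    if day <= 12 and day != month
)
print(ambiguous, round(ambiguous / 365, 2))   # 132 days, about 36%
```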

ISO 8601

ISO 8601, often referred to as ISO date format, is one solution to this problem. It uses an ordering that was previously mostly unused, and requires a four-digit year. ISO dates are always formatted YYYY-MM-DD, which makes them both recognizable as this format and unambiguous.

It is fairly widely accepted. For example, Canada uses it on official documents, modern programming languages (e.g. .NET's date functions) support it out of the box, and various people like its clarity.

ISO 8601 allows varied levels of detail:

# dates: year, year and month, or full date:
YYYY                     (e.g. 1997)
YYYY-MM                  (e.g. 1997-07)
YYYY-MM-DD               (e.g. 1997-07-16)
#date&time, with time, to minute, to second, or using second with fraction:
YYYY-MM-DDThh:mmTZD      (e.g. 1997-07-16T19:20+01:00)
YYYY-MM-DDThh:mm:ssTZD   (e.g. 1997-07-16T19:20:30+01:00)
YYYY-MM-DDThh:mm:ss.sTZD (e.g. 1997-07-16T19:20:30.45+01:00)
...and, in fact, more, including a compact 'basic' form without separators, like 19970716T192030Z.


  • The T is a literal T, which signals that a time follows.
    • When displaying these dates, the T is sometimes a space. (Can also be done in storage/communication, if "partners in information interchange" mutually agree)

  • TZD stands for Time Zone Designator, and should be one of:
    • Z ('Zulu') meaning UTC
    • +hh:mm
    • -hh:mm
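As an illustration (not part of the standard itself), Python's datetime can emit and parse this subset of ISO 8601 directly:

```python
from datetime import datetime, timedelta, timezone

# The example timestamp from above, built explicitly:
tzd = timezone(timedelta(hours=1))                # the +01:00 designator
dt = datetime(1997, 7, 16, 19, 20, 30, tzinfo=tzd)

print(dt.isoformat())                             # 1997-07-16T19:20:30+01:00

# Parsing this subset works too (datetime.fromisoformat, Python 3.7+):
roundtrip = datetime.fromisoformat("1997-07-16T19:20:30+01:00")
assert roundtrip == dt
```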

Note that sorting ISO 8601 strings lexically also sorts them chronologically

...except for the timezone information


RFC 3339

A variation of ISO 8601, close enough that you could consider it a restricted profile of ISO 8601.

The idea seems to be simplicity: because it has fewer forms, it's easier to conform to RFC 3339 than to all of ISO 8601 (rather than just its most usual form).

And that's useful for APIs and such.

The largest difference is that it doesn't have the shortened forms; only the fractional seconds are optional.

There are some other subtleties, though, like how it allows a negative sign on a timezone offset of zero (-00:00), where ISO 8601 requires that a zero offset be written with a +.

W3C Date and Time Format, a.k.a. W3C Datetime

A W3 note/overview/profile of the basic and probably most useful parts and uses of ISO 8601, omitting some of the complex details and focusing on just the date-and-time part.

Since it allows most of the shorter forms, this is less strict than RFC3339.

RFC 822/1123

For example:

Thu, 11 Oct 07 12:38:29 GMT 
Thu, 11 Oct 2007 12:38:29 GMT 

RFC 822 first specified the format, and allowed 2-digit and 4-digit years.

RFC 1123 updated this to require the year to always be four-digit.

You can leave off the weekday.

Timezones can be specified in a number of ways:

  • 4-digit offset: +0330, -0100 (preferred format)
  • pre-defined zones: UT (refers to UTC), GMT, EST, EDT, CST, CDT, MST, MDT, PST, PDT
  • Military: Z for 0, and A-Z except for J

In terms of strftime(), assuming you've already converted to GMT:

"%a, %d %b %Y %H:%M:%S GMT"

(Or, if you're insane enough to like two-digit years, "%a, %d %b %y %H:%M:%S GMT")
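Python's standard library can also produce and consume this format without hand-rolling strftime; a small illustration using the example date from above:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

dt = datetime(2007, 10, 11, 12, 38, 29, tzinfo=timezone.utc)

# usegmt=True writes the zone as the literal string "GMT":
print(format_datetime(dt, usegmt=True))   # Thu, 11 Oct 2007 12:38:29 GMT

# Parsing accepts the same family of formats:
assert parsedate_to_datetime("Thu, 11 Oct 2007 12:38:29 GMT") == dt
```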

RFC 2822

Looks equivalent to RFC 1123's format. (verify)

Mostly a documentation thing: RFC 2822 updates 822, but picks up the date definition from 1123.

RFC 850/1036

Looks like (weekday optional):

Sunday, 06 Nov 94 08:49:37 GMT

Defined by RFC 1036 (which obsoletes RFC 850 where the format was originally defined).

While the format sees relatively little use in standards, it is not uncommon to see real-world dates that should be in 822 format but look rather like this instead - in part because this format is largely valid 822 format as well.

In strftime (assuming you've converted to GMT) (verify)

%A, %d %h %y %H:%M:%S GMT

Observed variation:

  • dashes in the date
  • four-digit year


asctime

The C library's asctime() and ctime() output: (verify)

Sat May 20 15:21:51 2000
Thu Feb  3 17:03:55 GMT 1994

Common logfile format

The Common Log Format is used by various webservers (e.g. Apache) and includes a date like:

03/Feb/1994:17:03:55 -0700

In strftime terms:

%d/%b/%Y:%H:%M:%S %z
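For instance, in Python that same format string round-trips through strptime/strftime:

```python
from datetime import datetime

# Parse the common-log-format example above with exactly that format string:
dt = datetime.strptime("03/Feb/1994:17:03:55 -0700", "%d/%b/%Y:%H:%M:%S %z")

print(dt.isoformat())                         # 1994-02-03T17:03:55-07:00
print(dt.strftime("%d/%b/%Y:%H:%M:%S %z"))    # round-trips to the original
```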

More notes

GMT / UTC, daylight savings

Many date formats do not allow for specification of daylight savings details.

If you care about showing times correctly to different people (rather than just logging, where 'what the system saw at the time' is often enough), then you generally want to store times converted to UTC, which daylight saving never applies to - rather than a local zone like UK time, which alternates between GMT (winter) and BST (summer).

You can usually count on a date library to convert correctly to any specific zone and format on the fly. This also makes it easier to keep up with country-specific changes to time rules, because those do happen.
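In code, that pattern is usually: store and compare in UTC, and convert only at the edge when showing a user. A sketch (a fixed +02:00 offset stands in for a real zone here, for illustration):

```python
from datetime import datetime, timedelta, timezone

# Store and compare in UTC...
stored = datetime(2009, 6, 2, 12, 0, 0, tzinfo=timezone.utc)

# ...and convert only when displaying. A fixed offset is used here so the
# example is self-contained; real code would use a tz database zone
# (e.g. zoneinfo.ZoneInfo("Europe/Amsterdam")) so that daylight-saving
# rules are applied for you and stay up to date.
display_zone = timezone(timedelta(hours=2), name="CEST")
print(stored.astimezone(display_zone).isoformat())  # 2009-06-02T14:00:00+02:00
```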

The military reference to Zulu time refers to UTC.

(Technically speaking, Universal Time (UT) covers a number of reference definitions. The most interesting are UT1 and its practical approximation, UTC.)

Used by...

These are primarily notes
It won't be complete in any sense.
It exists to contain fragments of useful information.


HTTP has historically allowed three formats:

RFC 822/1123 style
RFC 850/1036 style
and asctime style.

It seems HTTP/1.1 restricted that to 822/1123 (adding a couple of details of its own), but be prepared to parse/accept 850/1036 and asctime as well.

MIME is also primarily 822/1123, though a stroll through spam will reveal dozens of types of abuse.

.NET seems to use the longest form of ISO8601

Some mentioned standards

  • RFC 3339 'Date and Time on the Internet: Timestamps'

  • RFC 822, 'Standard For The Format Of ARPA Internet Text Messages'
  • RFC 1123, 'Requirements for Internet Hosts - Application and Support'

  • RFC 850, 'Standard for Interchange of USENET Messages'
  • RFC 1036, 'Standard for Interchange of USENET Messages'

Date serialization/storage formats


Human-readable; some are also easily computer-parsed, and some are unambiguous timezone-wise (ISO 8601 is probably best if you want all of that).

See e.g. #Common date formatting.

Unix time

  • Counts whole seconds elapsed since the defined epoch, namely January 1, 1970 (UTC)
    • Initially a signed 32-bit number, which will overflow in 2038 (and extends back to late 1901)
    • modern systems are moving to 64-bit, which gives epoch ± roughly 293 billion years
    • note that storing it in a float means resolution varies with the actual value. The resolution drops below a second well before the value range runs out (read up on how floats store integers). If you must use floats, use 64-bit ones (on the order of a hundred million years before resolution becomes coarser than a second(verify))
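The 2038 figure follows directly from the 32-bit range; a quick check in Python:

```python
from datetime import datetime, timedelta, timezone

# Largest value of a signed 32-bit second counter:
t_max = 2**31 - 1    # 2147483647

print(datetime.fromtimestamp(t_max, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- the "year 2038" overflow moment

# ...and the earliest representable moment (computed with timedelta so it
# also works on platforms whose C time functions reject negative values):
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch - timedelta(seconds=2**31))
# 1901-12-13 20:45:52+00:00
```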


Windows FILETIME is a 64-bit int representing 100-nanosecond steps since 1601-01-01T00:00:00Z

So basically

filetime = (unixtime * 10000000) + 116444736000000000
unixtime = filetime/10000000. - 11644473600.


the constant is the number of 100-nanosecond intervals between 1601-01-01 and 1970-01-01
if you do this with floats, consider the precision limits of 64-bit floats versus int64
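The same arithmetic as functions (names are mine), using integer math where possible to dodge float precision loss:

```python
# FILETIME <-> Unix time conversion. Integer math where possible:
# a 64-bit float has a 53-bit mantissa, which is not enough for full
# 100-nanosecond FILETIME precision.

EPOCH_DIFF_100NS = 116444736000000000  # 100 ns ticks from 1601-01-01 to 1970-01-01

def unix_to_filetime(unix_seconds):
    return int(unix_seconds * 10_000_000) + EPOCH_DIFF_100NS

def filetime_to_unix(filetime):
    return (filetime - EPOCH_DIFF_100NS) / 10_000_000

print(unix_to_filetime(0))                   # 116444736000000000
print(filetime_to_unix(116444736000000000))  # 0.0
```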


  • Microsoft FILETIME
    • a 64-bit value counting 100-nanosecond intervals since January 1, 1601, UTC.

  • time value in UUIDs
    • a 60-bit time value counting 100-nanosecond intervals since 15 October 1582, midnight, UTC (the date of Gregorian reform)
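In e.g. Python, the UUID timestamp can be converted similarly - the constant is the widely used count of 100-nanosecond intervals between the UUID epoch and the Unix epoch, and the helper name is mine:

```python
import uuid
from datetime import datetime

# 100-nanosecond intervals between the UUID epoch (1582-10-15) and the
# Unix epoch (1970-01-01):
UUID_EPOCH_DIFF_100NS = 0x01B21DD213814000   # 122192928000000000

# Sanity-check the constant with plain calendar arithmetic:
seconds = (datetime(1970, 1, 1) - datetime(1582, 10, 15)).total_seconds()
assert int(seconds) * 10_000_000 == UUID_EPOCH_DIFF_100NS

def uuid1_to_unixtime(u):
    """Extract the timestamp of a version-1 UUID as Unix seconds."""
    return (u.time - UUID_EPOCH_DIFF_100NS) / 10_000_000

print(uuid1_to_unixtime(uuid.uuid1()))   # roughly the current Unix time
```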


In strftime, %Z sometimes outputs your current timezone. This may not be what you want.

If you want to output time with a timezone (e.g. for short-term cookies), it's often easiest/laziest to use a "now in GMT" function and hardcode 'GMT' into the string.

Incorrect assumptions about time