Posted by samzenpus on Monday May 15, 2006 @05:35PM
from the ticking-away-the-moments-that-make-up-a-dull-day dept.

Ben Rothke writes "For most people, having their clocks accurate to within a few millionths of a second is excessive. Yet there are plenty of reasons to ensure that clocks on networks and production systems are that accurate. In fact, the need for synchronized time is a practical business and technology decision that is an integral part of an effective network and security architecture. The reality is that an organization's network and security infrastructure is highly dependent on accurate, synchronized time." Read the rest of Ben's review.

Computer Network Time Synchronization

author

David L. Mills

pages

304

publisher

CRC

rating

10

reviewer

Ben Rothke

ISBN

0849358051

summary

Definitive reference on how to deploy and use NTP

From a practical perspective, nearly every activity requires synchronized time to operate at peak levels, from plane departures and sporting events to industrial processes, IP telephony, GPS and much more. Within information technology, everything from directory services and collaboration to authentication, SIM and VoIP requires accurate and synchronized time to work effectively.

Computer Network Time Synchronization: The Network Time Protocol is a valuable book for those that are serious about network time synchronization. David Mills, the author of the book, is one of the pillars of the network time synchronization community, and an original developer of the IETF-based network time protocol (NTP). The book is the summation of his decades of experience and a detailed look at how to use NTP to achieve highly accurate time on your network.

While network time synchronization is indeed crucial to corporate networks, this is only the second book on the topic. Last year saw Expert Network Time Protocol: An Experience in Time with NTP, which is a most capable title. But this book is clearly the indisputable reference on the subject, given its extraordinary depth and breadth. While Expert Network Time Protocol gets into the metaphysics of time, Mills's book takes a much more rationalist and pragmatic approach, which explains its myriad mathematical equations.

Mills is an electrical engineer by training, and a significant part of the book's 15 chapters involves advanced mathematics. But even for those who can't manage such equations, there is enough relevant material to make the book most rewarding.

Chapters 1 and 2 provide an excellent overview of the basics of network timekeeping and of how NTP works. We often take for granted that networked computers have the capability to set their internal clocks. But while the capability is there, the reality is that these clocks are rarely accurate and are subject to many externalities that affect their ability to provide accurate time. The book shows how highly accurate time is easily achievable, often without the need for additional hardware. The goal of the book is to show readers how they can use NTP to synchronize the time on their network hosts to within a few milliseconds.

Chapters 3 through 11 detail the internals of NTP and time synchronization, covering topics such as clock discipline algorithms and clock drivers. For many readers the information may be overkill, but remember that this is not a For Dummies book.

Chapters 13 through 15 ease up on the abstract mathematics and are much more approachable for a newbie to the world of time synchronization. Chapter 13 details the metrology and chronometry of how NTP measures time, as opposed to other timescales.

One of the key differences is the notion of absolute vs. relative time. Relative, or astronomic, time is based on the earth's rotation. Since the earth's rotation is not constant, leap seconds are added to keep UTC (Universal Coordinated Time) synchronized with the astronomical timescale.

So what exactly is this legendary thing called the second? In 1967, the 13th General Conference on Weights and Measures defined the International System unit of time, the second, in terms of atomic time rather than the motion of the Earth. Specifically, a second was defined as the duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of cesium-133 atoms in their ground state undisturbed by external fields.

Since the 17th century, time has for the most part been measured astronomically via the solar day. But in the 1940s, it was established that the earth's rotation is not constant, as the earth is spinning slower than it did years ago.

Part of what NTP provides is coordination to UTC. UTC provides operating systems and applications with a common index to synchronize events and prove that events happened when timestamps say they did. UTC is a 24-hour clock system, and at any given moment UTC is the same no matter where you are located.

For the purist, UTC really stands for Coordinated Universal Time, but both terms are used. Mills somewhat humorously notes that we follow the politically correct convention of expressing international terms in English, and their abbreviations in French.

Chapter 15 concludes the book with a fascinating look at the technical history of NTP. As of mid-2006, NTP has been in use for over 25 years and remains one of the longest-running, if not the longest-running, continuously operating application protocols on the Internet. Currently at version 4.2.1, NTP is a well-developed, stable protocol.

For those that are simply interested in how time synchronization works, or are responsible for time synchronization in their organization, Computer Network Time Synchronization: The Network Time Protocol is the most comprehensive guide available to using NTP.

For those that need an exhaustive tome on all of the minutiae related to NTP and synchronization, this is the source. Short of a vendor and product analysis, the book covers every detail within NTP and is the definitive title on the subject.

Two new books on the subject within a year demonstrate the importance of time synchronization. While this is not likely indicative of a coming flood of books on time synchronization, this book should be considered the last word on the topic."

Seriously... about how many people out there actually need to know NTP to this degree? Anyone have a rough estimate? I can't imagine any one organization would have to dedicate an individual to this sort of thing or would they?

Actually, having set up the NTP servers in our network, I have to say that the Windows version of NTP draws very substantial vacuum. It's not nearly as easy to configure. It can't be queried about what it thinks of the configured time standards, and I'm not exactly sure how they expect you to manage keys.

As long as you don't give a damn about sub-second accuracy (in our SCADA system, we like to stay in sync within 7 milliseconds or less) and as long as you don't care about traceability, then I guess it's better than nothing. However, the NT version of Mills' NTP is free, it is very stable on all versions I've tested it on from NT through 2003 server, and the configuration is exactly the same as most POSIX systems.

Having been there and tried it, I have to say that Microsoft did a piss poor job with their version of NTP. Get the GNU version. It Just Works Better.

Microsoft did not implement NTP. They first needed it to be simplified to "SNTP", which essentially does what they always did: send a query, receive the result, and put the timestamp from that result into the clock. A full NTP implementation includes a PLL that locks the clock to the consecutive incoming timestamps. This filters out jitter and ensures that the system knows about the inaccuracy of the clock oscillator. It uses this information during the intervals between incoming timestamps.

So an NTP-controlled system smoothly advances time, staying as close to real time as possible, while a Microsoft system has a sawtooth pattern and may even step the clock backward when a query happens to be delayed in the network. Don't use SNTP outside of a LAN.
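For the curious, the wire format behind all of this is tiny: an (S)NTP packet is 48 bytes, and the server's transmit timestamp is a 64-bit fixed-point value (32 bits of seconds since 1900 plus a 32-bit binary fraction). Here's a minimal Python sketch of the parsing step an SNTP client performs; the function name and the hand-built test packet are my own for illustration, not code from any NTP distribution.

```python
import struct
from datetime import datetime, timezone

NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 (NTP epoch) to 1970-01-01 (Unix epoch)

def parse_transmit_timestamp(packet: bytes) -> float:
    """Extract the transmit timestamp (bytes 40-47) of a 48-byte NTP packet.

    It is 32 bits of whole seconds since 1900 followed by a 32-bit binary
    fraction; we return it converted to Unix time as float seconds.
    """
    if len(packet) < 48:
        raise ValueError("NTP packet must be at least 48 bytes")
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return (seconds - NTP_EPOCH_OFFSET) + fraction / 2**32

# Hand-built fake server reply whose transmit time is 2006-05-15 00:00:00 UTC.
unix_time = int(datetime(2006, 5, 15, tzinfo=timezone.utc).timestamp())
pkt = bytearray(48)
pkt[0] = (0 << 6) | (3 << 3) | 4  # LI=0, version 3, mode 4 (server reply)
pkt[40:48] = struct.pack("!II", unix_time + NTP_EPOCH_OFFSET, 0)

assert parse_transmit_timestamp(bytes(pkt)) == unix_time
```

An SNTP client essentially stops here and jams that value into the clock; full NTP feeds a series of such timestamps into its clock filter and PLL instead.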

Oh, I see. So what you're saying is that you'd forgo actually knowing how to properly design an NTP system in favor of simply bombarding stratum 2 and 3 servers with queries directly from your individual desktops. I see. That makes sense.

Yes folks, there is a right and a wrong way to set up NTP. Having each of your individual clients poll stratum 2 or 3 (or Allah forbid a stratum 1 server) directly is like configuring each of your clients to poll the Internet's DNS Root Servers [root-servers.org] directly. After all

Seriously... about how many people out there actually need to know NTP to this degree?

Oh, about 10. But how many weird things do you know that not many others would value?

Some people are really, really into keeping time. It's a hobby for them. This book is for that sort of person. Besides, although my company didn't need to hire a person to do nothing but NTP, they certainly needed at least one person on staff with that skillset (hint: Active Directory, Kerberos, "clockskew") to keep everything else working. How fortunate for me that my boss needs the skills that I picked up out of personal curiosity!

I read about NTP via O'Reilly's crab book, I think. Either that, or it was the de facto Systems Administration Handbook, which I keep giving away to people. For keeping a set of servers in sync, generally I just populate /etc/ntp/step-tickers and restart, but it has been made easy over the years.

I understand that I can get a GPS, plug it into my host, and use NTP to distribute the time from there to my local network. I haven't had a GPS to dedicate to that purpose yet, but I found NTP interesting

I'd wager the only thing they need to dedicate someone to doing is going around once a week to everyone's machine and hit "update now" under the Internet Time tab (granted an XP system, naturally). Over the week you might lose a couple seconds, but if you need that much precision, there's a reason we invented cell phones.

You'd be surprised how much you might need that precision. I have a small network here at home..... my source files are based on my server, and I work with them (mounted via nfs) under unix on one machine, and under windows (mounted via smb) on another.

RCS/CVS/make et al. can get SERIOUSLY screwed up if the timestamps are off by even a second.

Mills told me he was rather popular back around the year 2000 ;) (to the point of being called to the White House for a series of meetings about Y2K compliance).

More interestingly, Mills said that he fears a potential DoS against the entire Internet would be to use an NTP hack to advance the clocks on all the caches, thus expiring their contents and causing the root servers to be flooded. This would effectively bring down DNS until the caches could be fixed.

It's useful information if you are going to design time distribution networks, troubleshoot them, or evaluate their performance. NTP's algorithms are much more exposed to the user than those in protocols like TCP. It lets you twist many of the knobs.

Seriously... about how many people out there actually need to know NTP to this degree?

None. In fact, at wr0k, I set up NTP on all boxes in a matter of 20 minutes (without -ever- having used/configured NTP). It's just a matter of reading the man page, changing servers in config files, and... well... starting the NTP server. That's it.

One would have to be pretty thick headed to need a 300 page book to explain it.

Seriously... about how many people out there actually need to know NTP to this degree?

Roughly speaking? Nobody. Most NTP features are designed to allow large-scale sharing of a few expensive precision time devices. Nowadays, anybody who cares that much about accurate timekeeping can afford a GPS or CDMA device. The rest of us can just poll pool.ntp.org occasionally. NTP software is helpful for both strategies, but you'll never need most of it.

Seriously... about how many people out there actually need to know NTP to this degree? Anyone have a rough estimate? I can't imagine any one organization would have to dedicate an individual to this sort of thing or would they?

Anyone writing hard real time distributed applications will need to know NTP this deep, or deeper. So figure, at least a couple of dozen or more individuals in the brokerage sub section of the financial world *alone*.

Seriously... about how many people out there actually need to know NTP to this degree?

A small percentage of computers need to be controlled to the accuracy of NTP's capability, and to the level of knowledge represented in this august book.

For the rest of us there's OpenNTPD [openntpd.org], which is a much simplified and more secure implementation of NTP. If you're happy with a clock that is accurate to two or three hundred milliseconds, check it out.

First, my credentials: I've been working with NTP for more than 10 years, and my personal web server, which you can find via http://www.ntp.org/ [ntp.org] (I won't link directly, to try to avoid the /. effect), has hosted Windows binaries of the official NTP distribution for some years now.

Since the original article didn't mention this, I would like to warn NTP users against ever configuring exactly two servers! The reason is that NTP by design requires a plurality of all sources to agree on what the time is, before it will beli

Measurement and control systems are widely used in traditional test and measurement, industrial automation, communication systems, electrical power systems and many other areas of modern technology. The timing requirements placed on these measurement and control systems are becoming increasingly stringent. Traditionally these systems have been implemented in a centralized architecture, in which the timing constraints are met by careful attention to programming combined with communication technologies with deterministic latency. In recent years an increasing number of such systems utilize a more distributed architecture, and increasingly networking technologies having less stringent timing specifications than the older, more specialized technologies. In particular, Ethernet communications are becoming more common in measurement and control applications. This has led to alternate means for enforcing the timing requirements in such systems.

One such technique is the use of system components that contain real-time clocks, all of which are synchronized to each other within the system. This is very common in the general computing industry. For example, essentially all general-purpose computers contain a clock. These clocks are used to manage distributed file systems, backup and recovery systems and many other similar activities. These computers typically interact via LANs and the Internet. In this environment the most widely used technique for synchronizing the clocks is the Network Time Protocol, NTP, or the related SNTP.

Measurement and control systems have a number of requirements that must be
met by a clock synchronization technology. In particular:

Timing accuracies are often in the sub-microsecond range;

The technology must be available on a range of networks, including Ethernet as well as other technologies found in industrial automation and similar industries;

A minimum of administration is highly desirable;

The technology must be capable of implementation on low-cost, low-end devices;

The required network and computing resources should be minimal.

In contrast to the general computing environment of intranets or the
Internet, measurement and control systems typically are more spatially
localized.

IEEE 1588 addresses the clock synchronization requirements of measurement
and control systems.

As I understand it, IEEE 1588 requires special hardware in order to achieve its full accuracy. My guess would be that on non-special hardware (i.e. a typical PC) it's not going to achieve any better accuracy than other network time protocols.

IEEE 1588 is much more accurate than NTP. Yes, to get greatly increased accuracy, it is helpful to have switches [hirschmann-ac.com] that properly handle 1588 traffic. However, this is not a huge issue in industrial automation, where one has complete control over the hardware. And I'm not sure running 1588 over the commodity Internet would buy you much. But if you really want tight timing, then 1588 is worth a look. The reason to use 1588 over NTP is if you need greater accuracy, like +/- 60 ns. My interest in

In case anyone's interested, one of the reasons that the abbreviation is UTC is that there is a series of Universal Time references: UT0, UT1, etc. Despite officially being "Coordinated Universal Time", it's abbreviated UTC partly to continue the UTx notation.

"UTC" also has the benefit that it fits in with the pattern for the abbreviations of variants of Universal Time. "UT0", "UT1", "UT1R", and others exist, so appending "C" for "coordinated" to the base "UT" is very satisfactory for those who are familiar with the other types of UT.

It's been proven that the Earth is rotating slower than it used to be, and the definition of a second was changed so that the length of a second remains constant. The day, however, remains the same as it always has been: one full rotation of the Earth.
Eventually there will be conflict between the two. If the rotation of the Earth continues to slow, there will be more seconds (and, in turn, more minutes, and then more hours) in a given day.
To that end, I've always wondered what would be more disruptive to the human populace: longer days or longer seconds?

To that end, I've always wondered what would be more disruptive to the human populace: longer days or longer seconds?

Longer seconds. The change in length of a day is extremely gradual ("glacial" is fast by comparison), but as seconds are defined in terms of physical constants, a changing second means that our physics have ripped and we're fixin' to die.

Changing the size of the fundamental units merely changes the value of the various physical constants. Physics works just fine no matter what units you use. However, a redefinition of the physical constants would open the possibility of accidentally mixing old and new units in a calculation (and NASA has shown us what sorts of things can happen in that event).

and the definition of a second was changed so that the length of a second remains constant

This isn't my field of study, but I believe the second is defined as a certain number of oscillations between two hyperfine levels of the cesium-133 atom. This was done in the late sixties to get the definition of the second away from earth rotations and tie it to something more reliable and easy to measure.

Leap seconds are added to the "civil" day to solve that problem, roughly every year at present. And there's more than one type of "day": there's the mean solar day, which in 1820 AD was 86,400 atomic seconds and is now about 2 milliseconds longer.
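The arithmetic behind "roughly every year" checks out. Taking the ~2 ms/day excess quoted above (the true figure wanders from year to year), the drift accumulates to roughly three quarters of a second per year, which is why leap seconds have come about every year or two:

```python
# Back-of-the-envelope leap-second arithmetic. The 2 ms/day figure is the
# approximate excess of the mean solar day over 86,400 SI seconds (per the
# comment above); the real value varies with the earth's mood.
excess_per_day = 0.002                    # seconds of excess per mean solar day
drift_per_year = excess_per_day * 365.25  # ~0.73 s of accumulated drift per year

assert 0.5 < drift_per_year < 1.0  # hence a leap second roughly every year or two
```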

Mills is a prof in my department and was my advisor back when I was an undergrad. He is a very smart guy (A bit of trivia about him - he was asked to consult for the Chinese government on the Great Firewall and turned down the offer for ethical reasons). He also prides himself on the fact that NTP has never had a serious (any?) security issue despite being around damn-near forever. One very neat observation he described during a seminar on NTP was that high CPU load increases CPU heat, and CPU heat increases clock drift. Thus, NTP can, in effect, be used to measure CPU loads remotely. Another thing is, assuming CPU load is constant, it can be used as a thermometer, and in practice he has used it to detect fan failures.

I'm the one who arranged for that seminar to be recorded (I convinced the department to pick up the $400 cost) To my knowledge, three copies were made - I got one (it's sitting on my shelf as I type this), department secretary Barbara Graham got one for the department, and Joseph (a guy in my research group) got the third and final copy. So - how did you end up with one? I am very curious to know.

a second was defined as the duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of cesium-133 atoms in their ground state undisturbed by external fields.

Well of course, I mean, what took them so long? Seriously though, it's things like this that make me ask: what on earth led them to define it like that? It's not 9 million cycles, not 9.5 million, not an obvious number of cycles at all. How did 9,192,631,770 cycles become it? Not 9,192,631,771, that's too long; not 9,192,631,769, that's too short. Only 9,192,631,770 was good enough.

They took the pre-existing definition of a second, measured how many cycles happened in that second, then rounded that to the nearest integer and said "new definition which is only dependent on quantum mechanics".

To maintain the illusion that it didn't change from the previous definition, perhaps (to within the accuracy of measurements at the time)? Of course, the final decimals were probably a bit of a hack, but a serious attempt to make the match as good as possible.

Seriously though it's things like this that make me ask, what on earth lead them to define it like that? Its not 9 million cycles, not 9.5 million, not an obvious number of cycles at all.

Most of the SI units have been through several iterations. At each refinement you try to have a more precise value, whilst changing the absolute value as little as possible.

For example, why do we define an international mile to be exactly 1,609.344 metres, rather than the original 1000 double paces of a Roman legionary? Well, it's pretty hard to find a properly calibrated legionary these days.

Our General raised hell over the fact that our wall clock (which is a set of LED clocks of local time, Zulu time, Baghdad and Kabul) in one conference room was two minutes faster than the wall clock in another conference room. I'm not really sure why so much vitriol was spent over a clock discrepancy (the clocks aren't used to conduct operations with, just to give rough situational awareness of what time it is in different parts of the world) but that day our systems guys learned the importance of synchronization

Big LED displays aren't cheap. Usually they have serial data input, so you can scroll random stuff on them. Anyway, I used to work in support at <company that used to build really fast, big, expensive supercomputers>. Just for the hell of it, a user wanted to hook their $30 Epson dot matrix printer up to their new supercomputer, and we didn't really have a decent cheapo Epson printer driver.

"I just paid $15 million for this damn computer and what do you mean the serial port doesn't work?"

Many thousands of Australian and New Zealand troops died because of an 11-minute uncertainty between clocks in World War I. Naval bombardment of the Turkish Gallipoli peninsula was scheduled overnight, with the troop advance to commence immediately after. However, because of the lack of synchronisation, the naval shelling finished 11 minutes early, allowing plenty of time for the Turkish troops to return to their bunkers and prepare for the oncoming invasion. Wave after wave of invading troops we

Accurate time is very useful in computer security work. For one, it's needed to accurately correlate log file entries from one computer to another in case of a breach, to identify means of access and creating an accurate picture of what happened and when.

Yes, that's true. Having reasonable time sync is also useful in IPsec for timers, certificate validation, etc. But the original post mentioned time accuracy "within a few millionths of a second". Is there any real security need for something near that level of accuracy? In my experience, the vast majority of security applications only need accuracy to within a second, or a tenth of a second.

I suppose distributed IDS systems could use extreme accuracy for piecing together attacks. But

I run the network and phone system in a college, and whilst I appreciate NTP is great, it does have drawbacks.

The biggest problem is keeping computer systems synched to 'real life' systems, such as analogue clocks and college bells. These systems have a mind of their own, and are seemingly set to random times.

A prime example; my computer at work synchs from the web, as do the servers, which in turn means all the Cisco VoIP phones are synched as well. The bells however, are never quite spot on, nor are

I once wrote a bell controller app for the Apple II (using the switchable joystick outputs to close a relay). You could do something similar and run it on a UNIX box hooked up to a cheaply available USB prototyping kit. Tie it across the existing manual trigger switch for the bell system, then set the bell system to "silent" mode.

The problem is not one of keeping yourself in sync with a poorly designed system. The problem is one of the poorly designed system needing to be improved to stay in sync with

No, the point of the very complete book, which is the putative topic, is synchronization by NTP, which predates, and is entirely separate from, HTTP. I know I'm being pedantic but it irks me when folks, particularly technical persons who should know better, blithely refer to "the web" (perpetuating the miscomprehension of the online environment) and thus ignore/devalue the many non "web" parts of it.

For various reasons, I'm trying to synchronize a clock to millisecond accuracy among ~50 Microsoft Windows stations, and it's nearly impossible -- No NTP client for Windows (including AboutTime, 2000's internal client, XP's internal client, and a port of the standard NTP client) appears to be able to keep time reasonably synchronized.

Part of the problem is the Windows kernel counting time in 10 ms or 15 ms ticks (depending on whether or not you use an SMP kernel), which automatically means you can't get better than ~30 ms precision. But it seems so much worse, with every machine drifting up to ~1 second daily unless they are synchronized very frequently; I get somewhat reasonable results synchronizing them every minute.

On Linux and FreeBSD, this is so trivial it's not even funny; my Linux machines manage to keep synchronization to ~0.5 ms over months. Please wake me up when Windows is ready for the enterprise. And, yes, the "enterprise" I work in does need millisecond-precision time-of-day synchronization among machines, as does any place that seriously tries to correlate network events (especially those related to security) collected at different points in the network.

If it's synchronizing on a schedule ("synchronizing them every minute"), you don't have an NTP client, you have an SNTP client. Real NTP doesn't have a concept of a synchronization interval: the clock is either synchronized or it isn't.

I think.

This [meinberg.de] appears to be a port of the real-deal NTP code to Windows. I've never used it, just found it in a few minutes of googling, but it's worth a shot.

NTP does have a "next check" concept, or at least every NTP client I've used does (usually some form of exponential backoff when things look OK). The "synchronizing on a schedule" was the only solution that was somewhat close to being reasonable. No "real" NTP client I used seemed to work. I didn't try this specific build, but I did try one compiled from the same sources, and it had lousy performance.

Thanks for the link, though.

The solution my "enterprise" finally used, btw, is to write a high precision clock

Sounds like you need the book. Seriously, real NTP does more than synchronize the clock periodically; it also determines the difference between the client clock frequency and the reference clock frequency, and the first derivative of the client clock frequency (wander), and uses those values both to determine how often to poll the reference sources and to keep the clock well-synchronized between polls.

Windows doesn't do anything useful. That was the point of my rant. I'll probably buy the book sometime soon, but -- as I indicated -- the problem lies with the Windows kernel rather than with NTP. (And yes, I have researched this extensively).

Wrote a dumbed-down client-server utility, in which one machine tells the other (using a UDP probe, or a previously established TCP connection), "tell me your time up to the millisecond". In a reasonably loaded LAN, you get responses back after ~0.1 ms, which means there is negligible skew due to latency. When you do that between two Linux machines, you get better than 0.1 ms sync. When you do that between Windows machines, you get up to 30 ms, no matter how long you allow those two machines to try to synchronize.
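For reference, the reason short round trips let you mostly ignore latency is the standard four-timestamp offset calculation NTP uses, which cancels the network delay exactly when the outbound and return paths are symmetric. A sketch of that arithmetic (variable names are mine; this is the textbook formula, not code from ntpd or from the parent's utility):

```python
def clock_offset(t1, t2, t3, t4):
    """Four-timestamp NTP arithmetic: t1 = client send, t2 = server receive,
    t3 = server send, t4 = client receive (all in seconds).
    Returns (estimated offset, round-trip delay). The offset estimate is
    exact when outbound and return network delays are equal."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: server clock 50 ms ahead, 10 ms of symmetric delay each way.
offset, delay = clock_offset(10.00, 10.06, 10.06, 10.02)
assert abs(offset - 0.05) < 1e-9
assert abs(delay - 0.02) < 1e-9
```

With asymmetric delay, the error in the offset estimate is bounded by half the round-trip time, which is why a ~0.1 ms round trip bounds the skew so tightly.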

I agree with you in general: the man pages for NTP are quite good. However, there are a few vagaries that I certainly hope this book covers: why you always want at least 3 upstream NTP servers and at least 3 local ones if you're maintaining them (so that the 2 good ones can outvote the confused one), how to gracefully monitor the state of the NTP servers (the Nagios plugins are quite good!), etc.
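The "outvoting the confused one" idea can be illustrated crudely: with three or more candidate offsets, even a simple median discards a single falseticker, whereas with two sources there is no way to tell which one is lying. This toy sketch is a stand-in of my own, not NTP's actual selection/intersection algorithm:

```python
def select_offset(offsets):
    """Pick the median candidate offset so a single 'falseticker' cannot
    drag the clock. A crude stand-in for NTP's real selection algorithm,
    just to show why two sources are ambiguous and three are not."""
    ranked = sorted(offsets)
    return ranked[len(ranked) // 2]

# Two good servers a few ms off, one wildly wrong: the bad one is outvoted.
assert select_offset([0.002, -0.001, 5.0]) == 0.002
```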

to preserve the ultimate precision, about 232 picoseconds. While the ultimate precision is not achievable with ordinary workstations and networks of today, it may be required with future gigahertz CPU clocks and gigabit LANs.
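That 232-picosecond figure is simply the resolution of the 32-bit fractional part of an NTP timestamp, i.e. 2^-32 seconds. Quick check:

```python
# Resolution of the 32-bit fractional field of a 64-bit NTP timestamp.
fraction_resolution = 1 / 2**32           # seconds per unit of the fraction field
picoseconds = fraction_resolution * 1e12  # convert to picoseconds

assert 232 < picoseconds < 233  # ~232.8 ps, the "ultimate precision"
```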

For my computer I am testing an old Heath Most Accurate Clock II* with its RS232 attachment that goes to the serial port on my HP Pavilion. The only problem is the brick-sized power transformer gets very hot because it's supplying two-amp heavy circuits. Use ThinkGeek's Kill A Watt to measure power consumption. Ack, the transformer is hungry. I guess for real use I will eventually peek at the time once a day or so.

*Heath Most Accurate Clock II, synchronizes with WWV at 10 meters.

I think that the network, with all its erratic latency, is not really the best source to use as a timing transport.

Some people have occasionally picked up old cesium clocks from eBay to set the PC's time. Most are from labs and, after purchase, probably gather dust in the garage. http://tycho.usno.navy.mil/cesium.html [navy.mil]

For my wrist, I, like lots of us geeks, use a Casio G-Shock (GW-700a) that updates its time from WWV three times a night. It's more accurate than the clocks at our local public DART train station, which are always four seconds slow.

I also have a great little Nixie clock kit that gets its info not from WWV via radio, but from satellite GPS time. It's the dinky one at the bottom of the page. Looks fantastic though. http://www.amug.org/~jthomas/clockpage.html [amug.org]

On production systems it's much more important that the servers are all close to each other, not so much that they are close to NIST time. So, don't care so much that your servers are stratum 2 or 3, set up a couple of sources and then sync the rest of your boxes to them. I'd rather have all my machines be one second off but the same one second off, than have them all be closer to real time with larger differences between them.

Also, one thing about the time on earth changing that I didn't realize before: damming water is one of the few human activities that has changed the rotation speed of the earth, I've been told, because it collects large masses of water further from the equator.

And if you don't want to buy a GPS, the guy responsible for the NIST time standard at NIST Boulder says that syncing your clock once a day via phone from one of their services is good enough to be considered stratum 1.

One final time note... We used to hold our LUG meetings at NIST. One time during a meeting, their official digital clocks stopped for the better part of a minute, and then ran quickly to catch up.

Moving water away from the equator will, by definition, shift mass nearer the Earth's axis, causing it to spin faster, just like a skater pulling in his arms. However, that is not what is happening. The Earth's rotation might slow down more quickly without such water-damming, but the spin is still slowing down.

Dave Mills used to like to observe what happens after a leap second. Among other things, every generator on the power grid has to make sixty extra turns, which takes about four hours. Some computer clocks used to count the power line (this seems to be rare today) and you could watch, via NTP, the stress in the clock network as the power line clocks disagreed with the WWV clocks, and slowly came into synchronism.

Actually, synchronization is less important than it used to be, because more stuff is buffered. All three US television networks used to be locked together in frame sync to a master clock in New York, so that video sources could be switched without all the TV receivers rolling for a few frames. Now everything goes through frame buffers, so that's not an issue.

Similarly, US telephony used to be locked to a master clock in New Jersey, so that all the T1 lines ran in sync and bit for bit transfer worked. That's not as important as it used to be, with so many different transmission media, some synchronous and some packetized.

A rather confident 007 walks into a bar and takes a seat next to a very attractive woman. He gives her a quick glance, then casually looks at his watch for a moment.
The woman notices this and asks, "Is your date running late?"
"No", he replies, "I am here alone. Q has just given me this state-of-the-art watch and I was just testing it."
The intrigued woman says, "A state-of-the-art watch? What's so special about it?"
"It uses alpha waves to telepathically talk to me," he