Tech primer: Understanding clock jitter and how to improve it

In today’s world of terabit telecommunications systems, advanced chip-to-chip interfaces, and high-bandwidth interconnects, picosecond clocking has been left behind. Advanced clocking now requires sub-picosecond performance as design engineers strive to shave every precious femtosecond from the jitter budget. With design cycles becoming more constrained and time-to-market pressures mounting, this is a task that is much easier said than done.

Even so, there are systematic ways to improve jitter. By carefully selecting the right technology and partnering with component vendors that have the relevant expertise, it is possible to achieve the ultra-low jitter the end system requires without exhausting one’s engineering resources on the clock design. This article discusses some of the major challenges and highlights considerations for making the correct design tradeoffs and component selection.

Background

Ideal systems do not exhibit bit errors because all signals arrive at the precise specified moment and all frequencies are exact. In real systems, signals may arrive slightly sooner or later than expected. This can cause bit errors and must be budgeted for.

Jitter is defined as the deviation of the arrival of a signal from when it is expected to arrive, and phase noise is the presence of signal energy at frequencies other than the carrier. Random jitter has a Gaussian distribution and is specified as an rms value: one standard deviation of the distribution. Because the Gaussian distribution is unbounded in an infinite sample, no communication system can be completely error free. Instead, communication links are rated with a maximum bit error rate (BER), which is typically around 10^-12 for high-speed communication equipment. To achieve a desired BER, one has to account for a number of standard deviations of random noise by using the appropriate value for N (see table 1) in the formula:

Tj = N x Rj + Dj

where Tj is total jitter, Rj is random jitter, and Dj is deterministic jitter.

Table 1: Standard deviations of random noise
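The relationship between BER and N can be sketched numerically. Assuming the standard Gaussian tail model BER = 0.5·erfc(N / (2·√2)), a short script (with hypothetical helper names) recovers the commonly tabulated N ≈ 14.07 for a 10^-12 BER and applies the Tj formula; the 0.5 ps Rj and 10 ps Dj figures below are illustrative, not from any standard:

```python
import math

def n_for_ber(ber: float) -> float:
    """Bisect for the peak-to-peak multiplier N such that the Gaussian
    tail probability 0.5 * erfc(N / (2*sqrt(2))) equals the target BER."""
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = (lo + hi) / 2
        # Tail probability falls as N grows, so raise lo while the
        # probability is still above the target BER.
        if 0.5 * math.erfc(mid / (2 * math.sqrt(2))) > ber:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def total_jitter(rj_rms: float, dj: float, ber: float = 1e-12) -> float:
    """Tj = N x Rj + Dj, with N chosen for the target BER."""
    return n_for_ber(ber) * rj_rms + dj

n = n_for_ber(1e-12)                    # ~14.07 standard deviations
tj = total_jitter(0.5e-12, 10e-12)      # illustrative 0.5 ps rms Rj, 10 ps Dj
```

Note how quickly random jitter dominates the budget: at a 10^-12 BER, every femtosecond of rms Rj costs roughly 14 fs of total jitter.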

In general, most datacom standards such as Ethernet and Fibre Channel specify a maximum tolerable output jitter of around 0.3 unit interval (UI). By contrast, most telecom standards are stricter and require that the total jitter be under 0.1 UI. The table below translates UI requirements into the time domain and shows the Rj required to achieve a 10^-12 BER:
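The UI-to-time conversion itself is simple: one UI is the bit period, 1 / bit rate. As a sketch, take a 10.3125 Gb/s lane (the 10GBASE-R line rate), where one UI is about 97 ps; the 5 ps deterministic-jitter figure below is a hypothetical number for illustration only:

```python
def ui_to_seconds(ui_fraction: float, bit_rate_hz: float) -> float:
    """Convert a jitter budget in unit intervals to seconds.
    One UI is the bit period, 1 / bit_rate."""
    return ui_fraction / bit_rate_hz

# Illustrative 10.3125 Gb/s lane: 1 UI ~ 97 ps.
tj_datacom = ui_to_seconds(0.3, 10.3125e9)   # ~29.1 ps for a 0.3 UI budget
tj_telecom = ui_to_seconds(0.1, 10.3125e9)   # ~9.7 ps for a 0.1 UI budget

# Back out the allowable rms Rj for a 1e-12 BER (N ~ 14.069),
# assuming a hypothetical 5 ps of deterministic jitter.
N = 14.069
dj = 5e-12
rj_rms = (tj_telecom - dj) / N               # ~0.33 ps rms Rj remaining
```

Under these assumptions the telecom lane leaves only a few hundred femtoseconds of rms random jitter for the entire link, clock included.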

Not all clock phase noise transfers to the line output, but the phase-noise bandwidth over which clock jitter is measured often depends heavily, or entirely, on the clock. For this reason, clock selection can make the difference between meeting the output jitter budget and failing it.

AFM vs. PLL

Clock frequencies can be generated using analog frequency multiplier (AFM) topologies or a variety of phase-locked loop (PLL) synthesizers. These frequency generation and translation circuits are necessary because system-on-chip (SoC) devices and other system components often require frequencies that are not already available. Deciding which technology to use depends upon the required jitter performance and frequency flexibility necessary in the end system. It is important to balance flexibility and performance by working with a vendor that has the full toolkit of technologies to help quickly make the proper tradeoffs.

If the lowest phase noise and best jitter performance are necessary, the AFM should be used. Conceptually simple, the AFM contains no feedback loop and can cover any of today’s required jitter performance. Being entirely analog, it adds little noise to the base input frequency as it multiplies, although the basic 20logN phase noise constraint still holds. The AFM multiplies by powers of two: it doubles an input frequency by generating a pulse at each edge of the original signal, and repeating this effectively quadruples the frequency.
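The 20logN constraint mentioned above can be made concrete: even an ideal, noiseless multiplier raises the input phase noise by 20·log10(N) dB, because phase deviations scale with the multiplication factor (and noise power with its square). A minimal sketch of the doubling and quadrupling cases, not tied to any particular part:

```python
import math

def multiplication_penalty_db(n: float) -> float:
    """Phase-noise increase, in dB, from an ideal N-times frequency
    multiplier: phase deviations grow by N, so noise power grows by N**2."""
    return 20 * math.log10(n)

doubled = multiplication_penalty_db(2)      # ~6.02 dB per doubling
quadrupled = multiplication_penalty_db(4)   # ~12.04 dB for two doublings
```

Each doubling therefore costs about 6 dB of phase noise even before any circuit noise is added, which is why starting from the cleanest, highest practical reference frequency matters.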

In theory, there is no limit to how many times the AFM can be cascaded. In practice, each doubling introduces a small amount of duty-cycle distortion and deterministic jitter (see figure 1), and the clock must be reshaped to remove those errors. Hence, the practical limit is a quadrupling of the original input frequency. The figure below shows AFM phase noise.

Figure 1: Plot of phase noise as a function of frequency shows the advantage of using AFM for high-performance applications.