
WHAT YOU SEE IS WHAT YOU GET: We have previously studied the evolution of absolute thermal noise levels and sampling jitter for analog-to-digital converters (ADC). Whereas those two posts analyzed fundamental noise components in isolation, this final post observes the overall noise performance evolution with all noise contributors included. This is the ADC performance you actually get.

Observation of ADC noise floor trends

The ADC survey data spans a very wide range of converter specifications. An SNR of x dB in a 20 kHz bandwidth is not as impressive as achieving the same in a 1 GHz band. Using the relative noise floor nr in dB/Hz derived by (1) allows ADCs with widely different Nyquist bandwidths (BW) to be compared with respect to noise performance.
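Expression (1) is not reproduced here, but a common definition of a bandwidth-normalized noise floor, assumed here purely for illustration, spreads the full-scale-referenced noise power evenly across the bandwidth. A minimal sketch (function name is my own):

```python
import math

def relative_noise_floor(snr_dbfs: float, bw_hz: float) -> float:
    """Relative noise floor nr in dB/Hz, assuming (1) is the usual
    normalization nr = -SNR - 10*log10(BW), with SNR in dBFS and
    the noise spread evenly over the bandwidth bw_hz."""
    return -snr_dbfs - 10 * math.log10(bw_hz)

# A 120 dB SNR audio-band converter (BW = 20 kHz):
print(f"{relative_noise_floor(120.0, 20e3):.1f} dB/Hz")  # -163.0 dB/Hz
```

Under this definition, a 120 dB audio-band converter and a ~71 dB converter with 1 GHz bandwidth land at roughly the same nr, which is exactly why the normalization enables fair comparison across widely different bandwidths.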

Figure 1 shows the evolution of nr for delta-sigma modulators (DSM) and Nyquist ADCs over time. A similar plot, based on less data and not differentiating between DSM and Nyquist ADCs, is found in [1]. From a linear fit of the state-of-the-art data points, it is seen here that the ADC noise floor evolved at an average rate of ~2.2 dB/year until year 2000 for DSM, after which it has remained in saturation. Nyquist ADCs developed at a slower rate of ~1.3 dB/year until 2010. The current state-of-the-art is approximately the same for both: –162 dB/Hz for DSM [2], and –161 dB/Hz for Nyquist [3]. Since the state-of-the-art for Nyquist converters was so recently reported, it cannot be concluded from Fig. 1 alone that the noise floor for Nyquist ADCs is in saturation. A likely explanation for the DSM trend is, however, the lower signal swing, and thus higher relative noise floor, implied by the continuous scaling of semiconductor technology [4],[5]. This is also seen in Fig. 2. Although the absolute noise floor may remain constant in devices [6], the relative noise floor is raised when the signal swing is reduced. New technologies may allow higher bandwidths, but the simultaneous combination of SNR and bandwidth has not improved for a decade due to this inherent dynamic-range limitation of nanometer technology [5]. It is reasonable to assume that Nyquist converters will suffer from this limit at least as much as delta-sigma modulators do. Noise performance normalized to signal bandwidth therefore seems to have reached the physical limits of process technology defined by the available signal swing. Expecting a further reduction in signal swing [9], future ADCs could very well fail to maintain the current state-of-the-art in noise performance.

Conclusion: ADC noise performance trends

Over the last three posts, it was seen that the overall state-of-the-art with respect to absolute noise power, sampling jitter, and relative total noise floor has not improved during the last 5–10 years. It is therefore concluded that all significant aspects of ADC noise performance appear to have reached saturation. This is an expected, yet significant result of the study as it clearly confirms the commonly raised concerns regarding analog design and dynamic range in scaled technologies, e.g., in [4]-[8].

My conclusion is that A/D-converters have already hit the noise floor – at least its softer upper coating.

What do you conclude?

In the next part of the ADC performance evolution series of posts, we will take a look at ADC linearity trends.

Additional remarks

As commented in part 1, the trends of degradation observed below 65 nm in Fig. 2 may be due to a lack of reported attempts, and not necessarily due to physics.

Using a circuit design that allows for large input signal swing can help a lot in improving relative noise floor performance. As an example, the state-of-the-art design by Hurrell et al. [3] reports an 8.2 V peak-to-peak input full-scale range.

JITTER TRENDS: Previously we observed the evolution of absolute thermal noise levels in ADCs. In this second post in a series of three, we will look at sampling time uncertainty, commonly referred to as jitter. The future evolution of A/D-converter jitter performance will have great impact on the development of advanced communications infrastructure, or any other application where you wish to sample at radio frequencies (RF) or beyond. A thorough assessment of past and present jitter evolution trends is therefore a highly valuable reference for system-level strategists, as it gives an indication of what kind of ADC performance to expect in the future.

Observation of ADC jitter trends

Sampling a single-tone input with frequency fin and rms sampling time uncertainty σt in an otherwise ideal system yields a jitter-limited SNR in dBc given by (1).

An ADC whose circuit noise is dominated by sampling jitter has circuit signal-to-noise ratio SNRC ≈ SNRJ (see the definition of SNRC in the previous post). Observing the SNR achieved at a particular input frequency therefore yields a worst-case estimate of sampling jitter, and looking at the SNRC vs. fin progress for the entire body of scientific Nyquist ADC data thus renders a conservative estimate of jitter performance evolution over time. Figure 1 shows the state-of-the-art envelopes for SNRC vs. fin at 1980, 1990, and 2000 compared to present day (~Q1 2012). By using SNRC instead of SNR, the ideal quantization noise component that would falsely add to the jitter estimate is removed. For data points where the effect of jitter has fully kicked in, this actually makes little difference, but since not all papers report performance at such high input frequencies, the use of SNRC instead of SNR still gives a better jitter estimate.
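The standard single-tone jitter relation, assumed here to be what (1) expresses, and its inversion into a worst-case jitter estimate can be sketched as follows (function names are my own):

```python
import math

def snr_jitter_db(f_in_hz: float, sigma_t_s: float) -> float:
    """Jitter-limited SNR in dBc for a single tone, assuming the
    standard relation SNR_J = -20*log10(2*pi*f_in*sigma_t)."""
    return -20 * math.log10(2 * math.pi * f_in_hz * sigma_t_s)

def worst_case_jitter_s(f_in_hz: float, snrc_db: float) -> float:
    """Invert the relation: attribute all circuit noise at f_in to
    jitter, giving a conservative (worst-case) rms jitter estimate
    in seconds."""
    return 10 ** (-snrc_db / 20) / (2 * math.pi * f_in_hz)

# Sanity check: 88 fs rms jitter at a 500 MHz input tone limits the
# SNR to roughly 71 dBc under this model.
print(round(snr_jitter_db(500e6, 88e-15), 1))  # 71.2
```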

An SNRJ vs. fin jitter raster according to (1) has been included as a visual guide in Fig. 1 for jitter values of 0.01, 0.1, 1, and 10 ps. The state-of-the-art in 1980 is defined by only a few designs and is therefore difficult to interpret. By 1990, a much larger number of attempts had been made, and the high-frequency roll-off is almost perfectly aligned with the theoretical SNR limit for 8–10 ps jitter. Garuts et al. achieve 5.5 ps at 1 GHz input with a 4-b, 1 GS/s ADC (9.8 ps without quantization error subtraction) [1]. During the following decade, the lowest jitter estimate was reduced by almost exactly ten times to the 0.53 ps reported by Singer et al. for an IF-sampling 12‑b, 65 MS/s ADC in [2]. A slight slowdown is then observed over the last 11 years, during which the jitter evolution front progressed by less than a factor of ten across most of the frequency range. The current state-of-the-art is defined by the 14‑b, 125 MS/s, RF/IF-sampling ADC reported by Ali [3]. The 88 fs rms jitter estimated at 500 MHz fin is a sixfold improvement over [2]. Looking at Fig. 1, it is evident where the main effort was focused. Whereas the state-of-the-art boundary is almost straight in 1990, there is an obvious “bump” in the 50–200 MHz range by 2000, which has migrated to 100–500 MHz to date, and a secondary bump has recently emerged at 1–2 GHz, defined by data reported in [4]. Although there are other applications matching these frequency ranges, the progress of the evolution front aligns very well with the evolution of requirements for wideband communications infrastructure. The current state-of-the-art boundary is largely defined by publications stating wireless or digital communication as the target application. Hence it is concluded that communications applications have been a main driver of jitter performance over the last two decades.
A point of concern for the communications industry could be that the current state-of-the-art [3] was achieved in 2005, using a 0.35 µm BiCMOS process. Observing the state-of-the-art estimate of σt (at any fin) over time, as shown in Fig. 2, jitter performance appears to be in a state of saturation. A closer inspection of the underlying data set gives further reason to assume saturation of jitter performance: Figure 3 shows the evolution of jitter performance vs. technology scaling. Although it’s possible that jitter eventually starts to improve in deeply scaled nodes, the current trend clearly supports the assumption of jitter performance saturation.

In the third and, for this time, final post on noise performance evolution, we will study the evolution of A/D-converter relative noise floor.

Figure 2. Evolution of worst-case jitter estimate for Nyquist ADCs in scientific publications. The state-of-the-art envelope has been highlighted.

Additional remarks

Note that jitter was analyzed by observing Nyquist ADCs. Bandpass delta-sigma modulators can achieve much better combinations of SNR and input frequency. The current state-of-the-art is a continuous-time bandpass delta-sigma modulator by Luh [5]. A significant part of the sampling jitter in a delta-sigma modulator is suppressed either by the noise transfer function or when selecting a band-of-interest in the subsequent signal processing. An arbitrarily small jitter estimate could thus be achieved simply by choosing to measure the SNR over a lower bandwidth. Delta-sigma and other narrowband ADCs were therefore excluded from the jitter observation, but are included in both the previous and the upcoming noise-evolution posts.

Designs that could influence the state-of-the-art envelope, but possibly suffered from numerical problems in the SNRC estimation (for reasons described in part 1), were handled as follows: (a) All data points for which N–ENOB < 0.05 were filtered out before generating Fig. 1, and (b) The design by van Valburg [6] was left outside the main estimation of state-of-the-art envelope in Fig. 2. No special handling was necessary for Fig. 3.

ADCs using some form of optical or optoelectronic solution may be able to sample with considerably lower timing uncertainty. As yet, such ADCs are quite rare, and mostly implemented in unusual or purely experimental technology. Optoelectronic or all-optical solutions may very well be the way forward if classic electronic sampling saturates at unacceptable jitter levels. The survey here, however, did not cover optically sampled A/D-converters.

HITTING THE NOISE FLOOR: In the quest for high-resolution, high-bandwidth ADCs, it’s got to hurt when you hit the noise floor. Looking at the evolution trends for several noise-related parameters in scientific ADC publications, it certainly looks as if we’ve already touched that floor, or at least are very close to smacking into it. We’ll look at performance trends for absolute and relative noise, as well as sampling jitter, over a series of posts so you can make your own assessment. But before diving into the thermal noise evolution below, I will first serve up some background theory:

Separation of noise sources

Quantization errors, thermal noise, flicker noise, switching transients and sampling jitter all contribute to the noise power observed at the ADC output. In order to enable a deeper analysis of noise performance, it is desirable to separate different contributors from each other. A first step is to separate ideal quantization errors inherent to the algorithm from actual circuit noise.

Quantization vs. circuit noise

The ideal quantization error is often treated as a noise component, although in reality it is a deterministic signal with a large number of harmonics yielding a noise-like spectrum for ADCs with a sufficient number of bits N [1]-[2]. The ideal signal-to-quantization-noise ratio (SNRQ) in dBFS is defined as

Any noise above the ideal quantization noise level is due to the actual circuit implementation, and is referred to as “circuit noise” in this treatment. It originates from both analog and digital circuits, and can be further subdivided as described in the following subsections and upcoming posts. Using (1) for SNRQ, the signal-to-circuit-noise ratio (SNRC) is estimated by noise power subtraction based on reported signal-to-noise ratio (SNR) and resolution N as described by (2).

Since SNR and N were simultaneously reported in only 22% of all publications, SNR was used instead of SNRC when N was not available, and the signal-to-noise-and-distortion ratio (SNDR) was used as a conservative estimate of SNR when the latter was not explicitly reported. Using these conservative approximations, 86% of all reported ADCs could be assigned an SNRC estimate.

Although (2) gives a better view of circuit noise, some caution needs to be exercised when the reported SNR is very close to the ideal quantization noise. In such a case the subtraction in (2) becomes numerically unsound, and small rounding errors can have great impact on the estimated SNRC.
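As an illustration, assuming the familiar SNRQ = 6.02·N + 1.76 dBFS form for (1), the noise power subtraction of (2) can be sketched as below (a sketch with names of my own choosing, not the survey's actual tooling); the guard clause marks exactly where the numerics become unsound:

```python
import math

def snr_q_db(n_bits: int) -> float:
    """Ideal quantization-limited SNR in dBFS, assuming the familiar
    6.02*N + 1.76 approximation for (1)."""
    return 6.02 * n_bits + 1.76

def snr_c_db(snr_db: float, n_bits: int) -> float:
    """Circuit-only SNR estimated by subtracting the ideal quantization
    noise power from the total reported noise power, as in (2)."""
    total_noise = 10 ** (-snr_db / 10)
    quant_noise = 10 ** (-snr_q_db(n_bits) / 10)
    if total_noise <= quant_noise:
        # Reported SNR at or above the ideal limit: subtraction unsound.
        raise ValueError("SNR too close to ideal quantization limit")
    return -10 * math.log10(total_noise - quant_noise)

# A 16-b ADC reporting 78 dB SNR sits ~20 dB below its 98.1 dB ideal
# limit, so the correction is tiny:
print(round(snr_c_db(78.0, 16), 2))  # 78.04
```

As the reported SNR approaches snr_q_db(n_bits), the difference of the two nearly equal powers shrinks toward zero, which is why small rounding errors in reported figures can swing the estimate wildly in that regime.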

Noise sources excluded from this survey

Some well-known noise sources are infeasible to analyze from ADC survey data and will not be included in this treatment. These are:

Flicker noise

Switching noise

Noise due to non-zero DNL errors

The effects of flicker noise [3] can be suppressed by chopper techniques [4] or the use of correlated double sampling [5]-[6]. Very few ADC publications report flicker noise performance explicitly, and in many wide-band applications the flicker noise is not of particular concern. Thus it is not included in this study.

On-chip switching noise results from transients caused by the switching of analog and digital circuits during normal operation. Such interference propagates as transients on power supply lines, through the substrate, or by inductive and capacitive coupling. Its effect can be suppressed using supply separation, decoupling, guard-rings, electromagnetic shielding, and by controlling the slope or timing of dominating transients such as those generated by digital output pins. In a large circuit with many simultaneously switching elements, any deterministic relations between input signal and on-chip state changes are likely to be obscured by the large number of switching nodes. Switching interference is then observed as an increased total noise level, possibly indistinguishable from thermal noise. Switching noise is usually not reported explicitly in ADC publications, and is difficult to estimate from other data typically reported. Therefore it is not treated in this survey.

Due to component mismatch, real ADC implementations have finite (non-zero) differential non-linearity (DNL) errors. Such DNL errors cause a non-uniform distribution of decision levels that increases the quantization error power above that of the ideal quantizer described by (1). The exact amount of additional noise depends non-trivially on the magnitude and distribution of DNL errors across the converter code range, and not only on the worst-case DNL error normally reported. Furthermore, because of the difficulty of defining a “typical” DNL error magnitude that would correctly represent the entire gamut of ADC implementations surveyed, “DNL noise” is not included in the treatment.

Thermal Noise

Unlike non-linear distortion, memory effects, etc., thermal white noise is entirely random from sample to sample and thus impossible to predict or compensate for by calibration; it is therefore a more fundamental limitation of ADC performance. A simplified model of ADC noise refers the noise to a noisy input source resistance Rn while assuming the rest of the signal path to be noiseless [7]. The mean squared thermal noise over Rn in a bandwidth BW = fs/2 is [3],[7]

where k is Boltzmann’s constant (1.38×10⁻²³ J/K) and T is the absolute temperature in kelvin. Assuming a full-scale peak-to-peak input swing of VFS, the equivalent single-tone SNR is

Equation (4) defines a theoretical limit on the achievable SNR for an ADC with a given input swing and impedance level. Assuming fixed values of T and VFS, the theoretical limits for different Rn can be added to SNR or effective-number-of-bits (ENOB) vs. fs plots of reported experimental data as a visual guide, e.g., as done by Walden [7]. In a future Converter Passion post, I actually plan to do something similar with our own survey data. It should be noted, however, that VFS varies between 5 mV [8] and 20 V [4] in the present survey. This corresponds to a 72 dB variation in (4), and simply assuming a fixed VFS therefore gives a very coarse approximation of Rn. Hence the reported values of VFS were also accounted for in this treatment. Compensation was additionally made for the fact that ideal quantization noise does not contribute to the thermal noise, and Rn was therefore estimated from the estimated circuit noise rather than the total noise, as described by (5). As far as the blogger is aware, this compensation for quantization noise and full-scale range (FSR) was not applied in any other ADC survey when estimating thermal noise performance from empirical data.
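Assuming the standard forms vn² = 4kT·Rn·BW for (3) and SNR = 10·log10(((VFS/2)²/2) / (4kT·Rn·fs/2)) for (4), the limit and its inversion into an Rn estimate in the spirit of (5) can be sketched as follows (function names are my own):

```python
import math

K = 1.380649e-23  # Boltzmann's constant, J/K

def thermal_snr_db(vfs_pp: float, rn_ohm: float, fs_hz: float,
                   t_k: float = 300.0) -> float:
    """Thermal-noise-limited single-tone SNR per the assumed (3)-(4):
    full-scale sine power (VFS/2)^2/2 over noise 4*k*T*Rn*(fs/2)."""
    signal = (vfs_pp / 2) ** 2 / 2
    noise = 4 * K * t_k * rn_ohm * (fs_hz / 2)
    return 10 * math.log10(signal / noise)

def rn_from_snrc(vfs_pp: float, snrc_db: float, fs_hz: float,
                 t_k: float = 300.0) -> float:
    """Inversion in the spirit of (5): the equivalent noise resistance
    implied by the estimated circuit SNR."""
    signal = (vfs_pp / 2) ** 2 / 2
    noise = signal / 10 ** (snrc_db / 10)
    return noise / (4 * K * t_k * (fs_hz / 2))
```

Note also that the 5 mV-to-20 V swing range quoted above is a 4000× voltage ratio, i.e. 20·log10(4000) ≈ 72 dB, matching the stated variation in (4).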

Observation of ADC thermal noise trends

The evolution of Rn over time is shown in Fig. 1 for an assumed T = 300 K, and the state-of-the-art envelopes for delta-sigma modulator (DSM) and Nyquist ADCs are highlighted. As explained above, the value of Rn reflects the absolute noise present in the circuit with ideal quantization errors removed. The lowest absolute noise level reported for ∆-∑ modulators to date is equivalent to Rn ≈ 2.5 kΩ for the DC measurement ADC in [9] by Thomsen et al., and the best Nyquist converter is the 250 MS/s, IF-sampling, 16-b pipeline ADC by Ali et al. [10] with Rn ≈ 6.2 kΩ.

Note that the estimation of SNRC in (2) and (5) becomes sensitive to numerical errors when the reported SNR is close to the ideal quantization noise limit. In order not to overestimate the flatness of the saturation/slowdown region, three designs [11]-[13] were treated as potential (but not certain) outliers, possibly created by numerical problems in the estimation of SNRC, and the main state-of-the-art envelope was therefore drawn inside of these designs. As an example, a peak ENOB of ~7.97-b is reported for the 8-b folding ADC design by van Valburg et al. [11]. This corresponds to an ADC with almost nothing but ideal quantization noise, and hence a design with extremely low analog noise levels in comparison with its overall spec. The same applies to the designs by van de Plassche et al. [12] and Garuts et al. [13]. For completeness, the Rn values estimated for these designs have been added to Fig. 1, together with the alternative state-of-the-art bound resulting from their inclusion.

It is further seen from Fig. 1 that absolute noise levels have not improved since 2000 for delta-sigma modulators, and have improved by less than 3× since the 17 kΩ achieved in 1997 [14] for Nyquist ADCs. From what is seen in Fig. 1, it is highly likely that absolute noise power performance is currently in a state of saturation, or significant slowdown, where only moderate improvements are reported over time. Observing how Rn evolves over technology scaling, as shown in Fig. 2, gives further reason to believe in such saturation. The state-of-the-art Rn appears almost independent of technology node, which is in line with [15], where it is suggested that the absolute noise in devices may remain constant over scaling. Similar trends of saturation will be shown for other noise-related performance parameters in two upcoming posts.

In the next post we will examine the evolution of A/D-converter jitter performance.

Additional remarks

Please note that what we believe to be Australia’s first – the 20 MHz bandwidth, LC bandpass delta-sigma ADC in 40 nm CMOS by Harrison et al. [16] – is also one of the ADCs currently defining the state-of-the-art envelope for Rn vs. node scaling. Now, doesn’t that seem like a good way to introduce yourself? I think so.

The apparent increase in Rn below 90 nm is not necessarily due to physics: there are too few designs below 65 nm to assess these nodes. In particular, the effort by Carlton et al. [17] is the only 32 nm ADC so far. The higher Rn in 65 nm may be a temporary setback (as also observed for 130 nm).

It should be understood that the Rn values in this post are estimates of thermal noise under the assumption that the remaining noise is dominated by thermal noise (at low input frequencies) when quantization noise is removed. For the designs with state-of-the-art noise performance, this is a reasonable assumption.

Survey data used in this post has a near-exhaustive coverage of all scientifically published ADCs, but did not include commercial parts. It is sometimes suggested that the inclusion of commercial parts would paint a completely different picture. At least when it comes to noise-related performance, they don’t seem to do that. Although commercial parts may settle to a slightly different (usually better) noise-level, they show a similar slow-down and saturation behavior over time.

The term “Node Geometry” used in Fig. 2 indicates a more liberal inclusion of all survey data points for which a process technology “feature size” was reported. In contrast, the more discriminating “Channel Length” is sometimes used in other posts.