Category Archives: Research

Figure 1. Accumulated publication count for scientifically reported ADC implementations in mainstream IEEE sources. The number of publications equivalent to 20% of total is indicated for reference.

SURVEYS GALORE: If the ten-post A/D-converter survey that just concluded did not fully satisfy your desire for scatter plots and tech trends, this post provides a list of prior ADC survey works as suggested “further reading”. In fact, I’d recommend that everyone serious about data-converter technology trends get hold of these documents. The list also serves as a brief history of the ADC survey field. But first, some thoughts on surveys:

Survey characteristics

When is it a “survey”? — I’m not going to spend too much energy on a stringent definition of a “survey”. My guideline is that a survey should be based on a significant amount of data, and that the visualization, discussion and interpretation of the data constitute the main work. Many scientific papers nowadays include a scatter plot that compares a particular design with 5–20 relevant prior efforts. While it’s a good idea to do so, such papers are not considered surveys in this context. Others use large amounts of empirical data to validate or derive a model, but the focus is then on the model.

What is “a significant amount of data”? — The size of the survey should be related to the total amount of data available at the time of the survey. A survey of 200 papers would have been exhaustive in 1990. Today it represents less than 12% of all scientific publications. The accumulated amount of scientific papers over time is shown in Fig. 1. While this is not the absolute total number of ADC publications, it covers the ADC implementations reported in nearly all journals and conferences central to the A/D-converter field, and shall for simplicity be referred to as the “total” amount here. The number of sources equivalent to 20% of the accumulated total at any given time is also shown in Fig. 1.

So, how much of the total do I need? Well, it depends on what you’re trying to do. When it comes to survey data, I’m a firm believer in “the more the merrier”, but there are tasks that can be done with a fairly small subset, for example getting an idea of the overall trend for one parameter vs. another, or making a quick sanity check.

Small subsets do have some limitations though. For the subset to function as a reasonably generic approximation of the exhaustive set, its data must span roughly the same chunk of parameter space and have a similar distribution of values in all dimensions. This is difficult to achieve unless you take a random sample of the exhaustive set. A smaller set also risks running out of data, for example when dividing it further according to some parameter such as resolution or architecture.

How quickly will a survey become dated? — I really don’t know. I guess it depends on what you wish to study. But we can observe, as in Fig. 2, how the accumulated total at any given time relates to the overall total (here 1708 papers), and what percentage of currently available works was still unpublished at any given time (e.g., at the end year of a particular survey). It is seen that approximately 50% of all currently available papers (~Q1-2012) were published in the last 8–8.5 years, i.e., after 2003, and almost 30% were not yet published in 2007. By the end of 1997, 70% of today’s body of empirical data was still unpublished.

You can use Fig. 2 to assess how old a survey can be before it’s no longer useful for your purpose. Can you make business decisions based on trend estimates where the most recent half of the data set is missing? Probably not. If you can only tolerate missing the most recent 30/20/10%, you need a survey that is less than approximately 4/3/1 years old.
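If you want to play with these numbers yourself, here is a minimal Python sketch of the calculation, assuming a table of cumulative per-year publication counts; the counts below are rough values read off Fig. 1/Fig. 2, not the actual survey database.

```python
# Minimal sketch: how old can a survey be if you can tolerate missing at most
# `max_missing` of the currently available papers? The cumulative counts are
# approximate values read off the figures, not the actual survey database.
cumulative = {1997: 510, 2003: 850, 2007: 1200, 2012: 1708}

def oldest_acceptable_end_year(cumulative, current_year, max_missing):
    """Oldest survey end-year whose missing fraction stays within budget."""
    total_now = cumulative[current_year]
    ok = [year for year, count in cumulative.items()
          if (total_now - count) / total_now <= max_missing]
    return min(ok)  # `ok` always contains current_year itself

print(oldest_acceptable_end_year(cumulative, 2012, max_missing=0.30))  # -> 2007
```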

It is clear that continuously updated surveys, such as Murmann’s or the one used here at Converter Passion, are preferable to single-shot attempts, since they allow trend estimates to be kept current.

Figure 2. Paper “yield”. The fraction of current total already published (blue) and yet to be published (red) at the end of any given year.

Known ADC surveys

Author  | Size | Years     | Type  | What                                                   | Ref
--------|------|-----------|-------|--------------------------------------------------------|-----
Walden  | 100  | ≤ 1994    | Both  | Perf. limits, FOM                                      | [1]
Walden  | 150  | 1978-1997 | Both  | Perf. limits, FOM, jitter, evolution                   | [2]
Merkel  | 150  | 1993-2002 | Both  | Perf. limits, SFDR, power, VDD, scaling, device, arch. | [3]
Le      | 1000 | 1983-2004 | Parts | Perf. limits, jitter, cost, arch., no. chan, N         | [4]
Walden  | 175  | 1978-2007 | Both  | Update++                                               | [5]
Walden  | n/a  | 1978-2008 | Both  | Update                                                 | [6]
Murmann | ~260 | 1997-2008 | Sci   | Perf. limits, VDD, scaling, FOM, evolution             | [7]
Jonsson | 1400 | 1974-2010 | Sci   | Perf., VDD, scaling, FOM, evolution                    | [8]
Jonsson | 1100 | 1976-2010 | Sci   | Perf. & FOM vs. CMOS scaling, evolution                | [9]
Fuiano  | 5540 | 1970-2010 | Sci   | Data converters, research/patent correlation           | [10]
Jonsson | 1400 | 1974-2010 | Sci   | Energy/sample by arch.                                 | [11]
Jonsson | 1500 | 1974-2011 | Sci   | Area eff. by arch.                                     | [12]
Murmann | ~350 | 1997-2012 | Sci   | Online survey data                                     | [13]
Jonsson | 1700 | 1974-2012 | Sci   | Perf., VDD, scaling, jitter, SFDR, FOM, evolution      | [14]

About the surveys

Walden

The “mother of all ADC surveys”, and the most frequently cited of them all, is the pioneering work by Walden [2], where 150 scientific and commercial ADCs were analyzed and performance trends were extracted. An earlier version was published as early as 1994 [1], but this extended work became “The Walden Survey” to most of us. Although the 150 source documents originated from a mix of commercial and experimental designs, the Walden survey had a size equivalent to 30% of all scientific publications available at the time. The methods introduced in [2] are still useful, but Fig. 1 and Fig. 2 suggest that the trends extracted in [2] are unlikely to be valid and applicable today. At the very least, they would have to be confirmed using more recent data. Two updated versions of the survey were published in 2008: one covering 175 ADCs and data until 2007 [5], and one with an unspecified survey size and data until 2008 [6]. It is unclear how the 175 converters included in [5] were selected. Between Walden’s classic survey and 2007, the academic output alone generated another 715 new sources – commercial parts not counted. The increase of only 25 source documents therefore seems surprisingly incremental. Still, some of the results in [5] align very well with Converter Passion data, so apparently it was a carefully chosen subset.

Merkel & Wilson

Merkel and Wilson surveyed 150 commercial and scientific ADCs with specifications suitable for defense space applications [3]. Their data appear to span 1993–2002, and the selection criteria for inclusion in the survey were a sampling rate fs ≥ 1 MS/s and a nominal resolution N ≥ 12 bits. The paper does not reveal the mix between scientific papers and commercial parts, but gathering 150 sources must have been quite an effort by the authors. The total scientific output matching these specs and this time period is no more than 81 papers, and only 59 in the two sources (ISSCC, JSSC) the authors mention as primary. A minimum of 69–91 additional commercial parts must have been included to reach 150 sources. It is therefore assumed that the Merkel & Wilson data set was close to exhaustive for the spec range surveyed, and exhaustive data sets are always applauded here at Converter Passion.

The analysis and discussion are geared towards the stated application and focus on linearity (SFDR), to the extent that noise parameters are not treated at all. Power dissipation, supply voltage, speed, device type, scaling and architecture were observed.

Le, Rondeau, Reed & Bostian

An enormous data set, covering nearly 1000 commercial ADC parts from 1983–2004, was used in the survey by Le, Rondeau, Reed and Bostian [4]. As a comparison, the scientific output from the same years (not included in their survey) is 900 papers. The work is firmly rooted in the Walden tradition, but also considers parameters such as the number of channels per package and cost vs. performance. Additionally, the treatment separates the data by architecture, which adds an interesting extra dimension. Because of the larger volume and time span of the data set, part of the focus is to establish differences between this work and the classic Walden paper. Unfortunately, some exponentially improving parameters were plotted along linear axes, which makes many results from the survey difficult to see or interpret. Nevertheless, the contribution by Le et al. is a gigantic work and a key reference.

Murmann

The survey by Murmann [7] is a significant recent contribution to the analysis of empirical performance data. It covers approximately 260 scientific ADCs reported 1997–2008 at the two conferences VLSI Circuit Symposium and ISSCC. The work analyzes ADC performance trends with a focus on energy per sample and signal-to-noise-and-distortion ratio (SNDR). The impact of process and voltage scaling is considered. If you don’t have this paper already, you should definitely head over to IEEE Xplore and get it right now.

Murmann’s survey has further benefits in that it is continuously updated and the data set is available online [13]. The latter opens up a lot of possibilities for anyone wishing to analyze the data in their own way, and makes the survey a very important contribution to the field. It currently includes around 350 sources.

Fuiano, Cagnazzo & Carbone

A rather different angle is taken in [10], where Fuiano, Cagnazzo and Carbone use survey data to analyze the correlation between scientific literature and patent activity. Compared to more “Waldenesque” surveys, this is a rather different animal. It nevertheless appeals to me as it illustrates an attempt to mine large amounts of survey data for something more unusual than ENOB, fs and FOM.

Jonsson

The ADMS Design data set used here at Converter Passion has also been used in five scientific papers, of which four are “surveys”:

ADC trends and performance evolution over time were analyzed in [8].

The impact of CMOS scaling on ADC performance was empirically analyzed in [9].

ADC architectures were compared with respect to energy efficiency in [11], and with respect to area efficiency in [12].

Other survey-related literature

A few other prior publications that are “survey-ish”, or otherwise use a large set of empirical data for their analysis are listed here:

Vogels and Gielen used a multidimensional regression fit to derive an ADC power dissipation model/FOM based on ≥ 70 empirical data points divided by architecture [15]. A similar approach was recently used by Verhelst and Murmann to analyze power dissipation and area vs. scaling based on Murmann’s data set [16]. A minimal sketch of what such a regression fit can look like is shown after this list.

Sundström, Murmann, and Svensson derived theoretical power dissipation bounds in [17], and used the Murmann set to compare theory with empirical reality.

In [18], it was illustrated how the quality of a figure-of-merit (FOM) can be assessed by testing it against a large set of empirical data.
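Here is the regression sketch referred to above: a minimal Python example in the spirit of [15],[16] that fits log10(P) as a linear function of ENOB and log10(fs). The records and the model form are hypothetical placeholders for illustration, not the actual data or models from those papers.

```python
import numpy as np

# Sketch of a multidimensional regression fit: model log10(P) as a linear
# function of ENOB and log10(fs). Records are hypothetical placeholders.
records = [
    # (ENOB, fs [Hz], P [W])
    (9.5,  50e6,  12e-3),
    (11.2, 10e6,   8e-3),
    (7.8,  500e6, 60e-3),
    (13.1, 1e6,    3e-3),
]

enob   = np.array([r[0] for r in records])
log_fs = np.log10([r[1] for r in records])
log_p  = np.log10([r[2] for r in records])

# Least-squares fit: log10(P) ~ a*ENOB + b*log10(fs) + c
A = np.column_stack([enob, log_fs, np.ones_like(enob)])
(a, b, c), *_ = np.linalg.lstsq(A, log_p, rcond=None)
print(f"log10(P) ~ {a:.3f}*ENOB + {b:.3f}*log10(fs) + {c:.3f}")
```

In practice you would repeat the fit per architecture, as done in [15], to get architecture-specific power models.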

If you feel that I’ve left out any contributions that could have been mentioned in this post, just add a comment below.

Figure 1. Evolution of best reported thermal FOM for delta-sigma modulators (o) and Nyquist ADCs (#). Monotonic state-of-the-art improvement trajectories have been highlighted. Trend fit to state-of-the-art points for DSM [1984–2000] (dotted), and Nyquist [1982–2012] (dashed). Average trend for all designs (dash-dotted) included for comparison.

POWER EFFICIENCY TRENDS (continued): As mentioned in the previous post, a slightly different FOM, sometimes labeled the “Thermal FOM” [1]-[2], has been proposed in order to better compare high-resolution ADCs limited by thermal noise. The thermal FOM, FB1, is expressed as

FB1 = P / (2^(2·ENOB) · fs)   (1)

The thermal FOM considers error power rather than error amplitude (as in the Walden FOM), and therefore the value of FB1 improves by 4× (rather than 2×) for every additional bit of resolution. This matches the theoretical 4× minimum increase in power if ENOB is limited by kT/C-noise [3] and the architecture remains unchanged [4]. It was shown in [5] that, for ENOB ≥ 9, the thermal FOM describes the empirically observed state-of-the-art power-resolution tradeoff better than the Walden FOM.
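To make the 2× vs. 4× per-bit behavior concrete, here is a small Python sketch comparing the two FOMs on hypothetical numbers (not taken from any published design).

```python
# Small sketch comparing the Walden FOM (FA1) and the thermal FOM (FB1) as
# defined above. The numbers are illustrative, not from any published design.
def fom_walden(p_w, fs_hz, enob):
    """FA1 = P / (2^ENOB * fs)."""
    return p_w / (2 ** enob * fs_hz)

def fom_thermal(p_w, fs_hz, enob):
    """FB1 = P / (2^(2*ENOB) * fs)."""
    return p_w / (2 ** (2 * enob) * fs_hz)

p, fs = 10e-3, 100e6          # 10 mW at 100 MS/s (hypothetical design)
for enob in (10.0, 11.0):     # add one bit at constant power and speed
    print(enob, fom_walden(p, fs, enob), fom_thermal(p, fs, enob))
# FA1 improves by 2x per added bit, FB1 by 4x, as stated in the text.
```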

As seen in Fig. 1, there is a significant difference between DSM and Nyquist ADCs with respect to FB1. With the exception of two early 14-b designs [6]-[7], the global state-of-the-art is defined entirely by delta-sigma modulator implementations while Nyquist ADCs lag distinctly behind. A possible explanation could be that the thermal FOM favors converters whose power dissipation is truly limited by thermal noise, and that high-resolution ∆-∑ ADCs are more distinctly driven into the thermal noise limit than their Nyquist counterparts. Another point is that many scientific DSM implementations use an off-chip (i.e., zero power) decimation filter implemented in software. This will give DSM an unfair advantage over Nyquist, although it can hardly be the only explanation for a one order of magnitude FOM difference.

Since the thermal FOM for Nyquist converters has evolved along a rather uneven path, I’ll not make any elaborate interpretations of its shape. The trend (dashed) is simply fitted to all the state-of-the-art points from 1982–2012, revealing an average improvement rate of 2× every two years. The DSM envelope appears to have three main segments, with breakpoints at 1990 and 2000. For simplicity, a single trend was estimated for the envelope until Naiknaware [8], after which the thermal FOM has evolved significantly slower. From Fiedler [9] to Naiknaware, the average improvement rate is 2× every 17 months (1.4 years) – again faster than Moore’s Law [10]-[11] – whereas from year 2000 to the present day [12], the state-of-the-art points fit a more modest 2×/5.5-years slope. Even if the latter is a fit to only four data points, and the exact slopes can be discussed, it is clear from Fig. 2 that the thermal FOM for DSM experienced a distinct slowdown after year 2000. This coincides with the breakpoint where the relative noise floor – approximately the denominator in (1) – also goes into saturation. It can further be noticed that it coincides with the accelerated evolution of FA1. A possible, but perhaps speculative, interpretation is that the ADC community first focused on thermal noise performance and related design optimization, and after hitting the noise floor around year 2000 moved on to focus on power efficiency.
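In case you want to reproduce the kind of improvement rates quoted above, a minimal sketch of the fitting step is shown below: it fits a straight line to log2(FOM) vs. year. The data points are hypothetical placeholders rather than the actual state-of-the-art values.

```python
import numpy as np

# Sketch of how a "2x every N years" rate can be estimated: fit a line to
# log2(FOM) vs. year for the state-of-the-art points (placeholders below).
years = np.array([1982.0, 1988.0, 1995.0, 2002.0, 2010.0])
fom_j = np.array([1e-9, 2e-10, 3e-11, 5e-12, 4e-13])   # lower is better

slope, _ = np.polyfit(years, np.log2(fom_j), 1)
doubling_time = -1.0 / slope        # years per 2x improvement
print(f"State-of-the-art improves 2x every {doubling_time:.1f} years")
```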

If you wish to suggest other explanations, please share them below.

This concludes a series of ten posts on ADC performance and technology trends. If you want to go back and read them all from the beginning, these are the topics and the order in which they were posted:

Figure 1. Evolution of best reported Walden FOM for delta-sigma modulators (o) and Nyquist ADCs (#). Monotonic state-of-the-art improvement trajectories have been highlighted. Trend fit to DSM (dotted), and Nyquist (dashed) state-of-the-art. Average trend for all designs (dash-dotted) included for comparison.

POWER EFFICIENCY TRENDS: A series of blog posts on A/D-converter performance trends would not be complete without an analysis of figure-of-merit (FOM) trends, would it? We will therefore take a look at the two most commonly used FOMs, starting with by far the most popular:

FA1 = P / (2^ENOB · fs)   (1)

where P is the power dissipation, fs is the Nyquist sampling rate, and ENOB is the effective number of bits, defined from the signal-to-noise-and-distortion ratio (SNDR, in dB) as:

ENOB = (SNDR − 1.76) / 6.02   (2)

FA1 is sometimes referred to as the Walden or ISSCC FOM and relates the ADC power dissipation to its performance, represented by sampling rate and conversion error amplitude. The best reported FA1 value each year has been plotted for delta-sigma modulators (DSM) and Nyquist ADCs in Fig. 1. Trajectories for state-of-the-art have been indicated, and trends have been fitted to these state-of-the-art data points. The average improvement trend for all ADCs (2×/2.6 years) is included for comparison.
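The plotted points can be reproduced roughly as follows: compute FA1 from the reported P, fs and SNDR of each design using (1) and (2), then keep the best value per year. The sketch below assumes hypothetical records, not the actual survey data.

```python
# Sketch: compute FA1 for each design and keep the best (lowest) value per
# year, i.e., the points the trends in Fig. 1 are fitted to.
records = [
    # (year, P [W], fs [Hz], SNDR [dB]) - hypothetical placeholders
    (2008, 5.0e-3, 50e6, 62.0),
    (2008, 1.2e-3, 20e6, 58.5),
    (2010, 0.8e-3, 100e6, 60.1),
]

def enob(sndr_db):
    return (sndr_db - 1.76) / 6.02

def fom_a1(p_w, fs_hz, sndr_db):
    return p_w / (2 ** enob(sndr_db) * fs_hz)

best_per_year = {}
for year, p, fs, sndr in records:
    f = fom_a1(p, fs, sndr)
    if f < best_per_year.get(year, float("inf")):
        best_per_year[year] = f

for year in sorted(best_per_year):
    print(year, f"{best_per_year[year]:.2e} J/conversion-step")
```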

By dividing the data into DSM and Nyquist subsets, it is seen that delta-sigma modulators have improved their state-of-the-art FOM at an almost constant rate of 2×/2.5 years throughout the existence of the field – just slightly faster than the overall average. State-of-the-art Nyquist ADCs have followed a steeper and more S-shaped evolution path. Their overall trend fits a 2× improvement every 1.8 years, although it is obvious that the evolution rate has changed significantly over time. A more accurate analysis of Nyquist ADC trends should probably make individual fits to the glory days of the early years, the intermediate slowdown, and the recent acceleration phase. This was done in [1], where evolution was analyzed with DSM and Nyquist data merged. However, for simplicity I’ll just stick to the more conservative overall Nyquist trend. [I wouldn’t want anyone to suggest that I’m producing “subjective” or “highly speculative” trend estimates, would I? 😉 ]

Still, if anyone is curious to know … 🙂 … the state-of-the-art data points fit a 2×/14-months trend between 2000 and 2010. That’s actually faster than Moore’s Law, which is traditionally attributed a 2×/18-months rate [2]-[3]. A new twist on “More than Moore”, perhaps? Even the more conservative overall 2×/21-months trend is close enough to conclude that the state-of-the-art FOM for Nyquist ADCs has developed exponentially in a fashion closely resembling Moore’s Law. And that’s got to be an impressive trend for any analog/mixed-signal circuit performance parameter.

Irrespective of the exact best fit to the data, it should be evident from Fig. 1 that Nyquist ADCs broke away from the overall trend around year 2000, and have since followed a steeper descent in their figures-of-merit. They have also reached further (4.4 fJ) [4] than DSM (35.6 fJ) [5]. The overall trend projects to a 0.2 fJ ADC FOM in 2020. Whether or not that’s possible, we’ll leave for another post. A deeper look at the data also reveals that:

The acceleration in state-of-the-art is almost completely defined by successive-approximation (SAR) ADCs [4], [6]-[11], accompanied by a single cyclic ADC [12]. The superior energy efficiency of the SAR architecture was empirically shown in [13].

A significant part of the acceleration can be explained by the increased tendency to leave out, for example, I/O power dissipation when reporting experimental results – a trend also observed by Bult [14]. The FOM in the graph was intentionally calculated from the on-chip rather than total power dissipation because: (a) ADCs are increasingly used as system-on-chip (SoC) building blocks, which makes the stand-alone I/O power of a prototype irrelevant, and (b) many authors don’t even report the I/O power anymore.

FA1 has a bias towards low-power, medium-resolution designs rather than high-resolution ones, and thus benefits from CMOS technology scaling, as shown in [15],[16]. An analysis of the underlying data shows that, for the best FA1 every year, the trajectories for ENOB and P follow distinct paths towards consistently lower power and medium resolution. You simply gain more in FA1 by lowering power dissipation than by increasing resolution, because (1) does not correctly describe the empirically observed power-resolution tradeoff for ADCs [13],[15]. A quick numeric illustration follows below.
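Here is the numeric illustration promised above, on hypothetical values: halving the power at constant resolution improves FA1 by 2×, whereas adding one bit at the ~4× power cost of a thermal-noise-limited design actually makes FA1 2× worse.

```python
# Quick numeric illustration of the FA1 bias (hypothetical values).
def fom_a1(p_w, fs_hz, enob):
    return p_w / (2 ** enob * fs_hz)

p, fs, enob = 10e-3, 100e6, 10
base = fom_a1(p, fs, enob)

# (a) Halve the power at constant resolution: FA1 improves by 2x.
print(fom_a1(p / 2, fs, enob) / base)        # 0.5

# (b) Add one bit, paying the ~4x power of a thermal-noise-limited design:
#     FA1 gets 2x *worse*, despite the higher resolution.
print(fom_a1(4 * p, fs, enob + 1) / base)    # 2.0
```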

In order to better compare high-resolution ADCs limited by thermal noise, it has therefore been proposed to use a slightly different FOM, sometimes labeled the “Thermal FOM” [17]-[18]:

FB1 = P / (2^(2·ENOB) · fs)

SPEED/RESOLUTION TRENDS: Previous posts analyzed noise and linearity separately. Another common approach is to review overall ADC performance in terms of sampling rate and effective resolution (ENOB). In Fig. 1, the current state-of-the-art at ~Q1-2012 is compared to the envelopes for 1990 and 2000 in order to show the simultaneous evolution of the two parameters throughout the entire parameter space. SNR-only results have been excluded from this plot because ENOB is not fully defined by SNR. Hence, there is no experimental data available before 1980. By 1990 the curve had assumed the expected shape. Between 1990 and 2000 there was a 1–4 bit improvement across the full range of sampling rates. The main advances were in the 200 kS/s to 100 MS/s speed range, corresponding to typical telecommunications specifications – from single-carrier GSM to multi-carrier WCDMA receivers. From year 2000 to the present day, the more significant advances were at 12.5 MS/s [1], from 100–250 MS/s [2]-[3], at 3 GS/s [4], and above 10 GS/s [5]-[7].

The thermal noise limits according to equation (4) in the thermal noise post have been included as a visual guide, using VFS = 1 V, T = 300 K, and Rn = {50, 2000} Ω. Similarly, the theoretical jitter-limited ENOB at fin = fs/2 according to equation (1) in the jitter post has been added for σt = {0.1, 1, 10} ps. The Rn and σt values were deliberately chosen to simplify comparison with a similar plot in Walden’s survey [8] (see also Additional remarks below). Although the jitter limits are preferably observed from SNR vs. fin (as done in the post on jitter trends), the shape of the state-of-the-art envelopes in Fig. 1 clearly indicates the regions where ADC performance is limited by thermal noise and by jitter, respectively. The design by Naiknaware et al. [10] is limited by thermal noise, while those by Poulton et al. [5] and Greshishchev et al. [7] are limited by sampling jitter (and/or metastability [9]). At the boundary between thermal-noise and jitter-limited designs are the ADCs that suffer from both noise sources in equal measure, such as the design by Ali et al. [3]. Designs in this corner put strict demands on simultaneous design for jitter and thermal noise.
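For anyone who wants to recreate the overlay, a small Python sketch of the limit curves is given below. It uses the standard textbook expressions for thermal-noise- and jitter-limited SNR; the exact equations (4) and (1) referenced above may differ in details such as bandwidth conventions, so treat this as an approximation.

```python
import numpy as np

# Sketch of the limit curves overlaid in Fig. 1, using standard textbook
# expressions (conventions may differ slightly from the blog's own equations).
k = 1.380649e-23   # Boltzmann constant [J/K]

def enob_from_snr_db(snr_db):
    return (snr_db - 1.76) / 6.02

def thermal_limited_enob(fs, rn, vfs_pp=1.0, temp=300.0):
    """Full-scale sine vs. 4kT*Rn noise integrated over the Nyquist band fs/2."""
    signal_power = vfs_pp ** 2 / 8.0
    noise_power = 4.0 * k * temp * rn * (fs / 2.0)
    return enob_from_snr_db(10.0 * np.log10(signal_power / noise_power))

def jitter_limited_enob(fin, sigma_t):
    """SNR limited by rms aperture jitter: SNR = -20*log10(2*pi*fin*sigma_t)."""
    return enob_from_snr_db(-20.0 * np.log10(2.0 * np.pi * fin * sigma_t))

fs = np.logspace(4, 10, 7)                  # 10 kS/s ... 10 GS/s
for rn in (50.0, 2000.0):                   # Rn = {50, 2000} ohm
    print("Rn =", rn, np.round(thermal_limited_enob(fs, rn), 1))
for sigma_t in (0.1e-12, 1e-12, 10e-12):    # sigma_t = {0.1, 1, 10} ps
    print("jitter =", sigma_t, np.round(jitter_limited_enob(fs / 2.0, sigma_t), 1))
```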

Additional remarks

It may seem that the state-of-the-art thermal noise according to Fig. 1 is equivalent to less than 2 kΩ for some designs. This would obviously be in contradiction to the 2.5 and 6.2 kΩ state-of-the-art reported for delta-sigma modulator and Nyquist ADCs, respectively. The thermal noise limits in Fig. 1 are only valid for VFS = 1 Vpp, and the apparently better results here are due to a larger full-scale range, e.g., 2.5 V for [3]. The correct noise-resistance estimations are found here.

The corresponding jitter limits in [8] have a 0.5-bit offset because it appears that Walden derives the rms-signal to peak-noise ratio by assuming that the signal is always sampled where the slope is greatest, i.e., at the zero-crossings [9]. In reality, the signal is sampled anywhere along the waveform in all but pathological cases, and therefore the rms slope should be used instead, as was done in this treatment.
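For completeness, here is the short derivation of that 0.5-bit (≈3 dB) offset, written out in LaTeX under the usual single-sinusoid assumption.

```latex
% For a full-scale sinusoid x(t) = A\sin(2\pi f_{in} t), the slope is
%   x'(t) = 2\pi f_{in} A\cos(2\pi f_{in} t).
\[
  \left.\frac{dx}{dt}\right|_{\mathrm{peak}} = 2\pi f_{in}A,
  \qquad
  \left.\frac{dx}{dt}\right|_{\mathrm{rms}} = \frac{2\pi f_{in}A}{\sqrt{2}}
\]
% Using the peak slope instead of the rms slope overestimates the
% jitter-error power by a factor of two, i.e. the SNR limit shifts by
\[
  20\log_{10}\sqrt{2} \approx 3.01~\mathrm{dB} \approx 0.5~\mathrm{bit}.
\]
```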

LINEARITY TRENDS: After observing trends for technology scaling, voltage scaling, and noise, we have arrived at linearity. As far as this blogger is aware, no large survey of A/D-converter linearity evolution has been published to date, except for a recent work by Walden [1], where an “SFDR-bits” vs. fin scatter plot illustrates the state-of-the-art movement between 1999 and 2007 for the outermost corner/edge of the data, based on approximately 175 ADCs. In this post we will instead follow the migration of the entire state-of-the-art envelope for SFDR vs. input frequency (and sampling rate) in order to see how SFDR evolved across multiple frequency ranges between 1990, 2000 and the present (~Q1-2012). We will also slice through the data set and specifically observe SFDR evolution at four very different speed grades of minimum sampling rate. The underlying data is from a survey of 1708 scientific ADC papers published between 1974 and Q1-2012.

ADC linearity trends: SFDR-vs-frequency envelope

While SNR, or an aggregate noise-and-distortion measure such as SNDR, is a sufficient measure of ADC performance for some applications, there are other applications where non-linear distortion is independently specified. Such applications include high-end audio and many wireless communication systems. Wireless communication systems often need to cope with the presence of a strong interferer in the form of a neighboring channel or carrier, while correctly interpreting a weak signal of interest. Without a sufficiently linear signal path, the interfering signal will generate harmonics or intermodulation products that may completely block the in-channel signal. The evolution of ADC linearity is therefore as important as the noise and ENOB evolution.

State-of-the-art envelopes for single-tone spurious-free dynamic range (SFDR) vs. input frequency fin and Nyquist sampling rate fs have been plotted in Fig. 1 and Fig. 2, respectively. The current state-of-the-art at ~Q1-2012 is compared to that of 1990 and 2000 in order to illustrate the evolution over all frequencies. Starting with the 1990 envelopes, linearity vs. fin and fs is evenly distributed along almost straight lines, representing the increasing difficulty of achieving high linearity as the input frequency and sampling rate increase. The first of the two noticeable performance peaks in Fig. 1 coincides with the 20 kHz audio bandwidth, and the second peak is defined by video and instrumentation ADCs in the frequency range 10–100 MHz. It is evident from both plots that most of the progress from 1990 to the current state-of-the-art was achieved in the first decade, 1990–2000, when SFDR vs. fin improved by 20–40 dB across all input frequencies in the 100 kHz to 1 GHz range, and SFDR vs. fs increased by 5–30 dB over the same range of sampling frequencies. Although state-of-the-art linearity has increased by 5–10 dB over many segments of the frequency range during the last 11 years, Figs. 1 and 2 clearly show that there has been a slowdown in the evolution of linearity over a broad range of frequencies and speed grades. One noticeable exception is the 30 dB performance lift in the 100–250 MS/s speed range, which reflects the specifications of the more recent wideband radio base stations (RBS).

It was concluded in a previous post that the evolution of communications standard requirements has been a strong driver for ADC jitter performance. Observing that the strongest push of the linearity envelopes also occurred at frequencies and sampling rates matching the specifications for wideband RBS, e.g., [2]-[5], it is concluded that communications applications have been a key driver for ADC linearity as well. This is also the conclusion of Walden in [1]. Another significant achievement during the last decade has been the improved performance at the high-frequency end of the spectrum, with sampling rates above 4 GS/s and input frequencies beyond 1 GHz [6]-[14]. Again, a similar observation is made in [1].

ADC linearity trends: SFDR by speed grade

Figure 3 shows the evolution of peak SFDR at minimum speed grades of fs ≥ {10k, 1M, 100M, 1G} samples/s. The curves show the monotonically improving upper edge for each subset of survey data. What is included in each subset is defined by the four minimum sampling rate constraints. As in Fig. 1 and Fig. 2, the overall scatter is removed for readability.
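The curves can be generated with a simple running-maximum filter per speed grade. The sketch below assumes hypothetical (year, fs, SFDR) records rather than the actual survey database.

```python
# Sketch of Fig. 3: for each minimum sampling-rate grade, keep the running
# maximum of peak SFDR over time. Records are hypothetical placeholders.
records = [
    # (year, fs [Hz], SFDR [dB])
    (1992, 20e3, 90.0),
    (1996, 2e6, 108.0),
    (2005, 1.2e9, 70.0),
    (2009, 50e3, 105.0),
    (2011, 3e9, 75.0),
]

def sota_envelope(records, fs_min):
    """Monotonically improving upper edge of SFDR vs. year for fs >= fs_min."""
    subset = sorted((year, sfdr) for year, fs, sfdr in records if fs >= fs_min)
    envelope, best = [], float("-inf")
    for year, sfdr in subset:
        if sfdr > best:
            best = sfdr
            envelope.append((year, sfdr))
    return envelope

for fs_min in (10e3, 1e6, 100e6, 1e9):
    print(fs_min, sota_envelope(records, fs_min))
```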

My interpretation of Fig. 3 is as follows:

ADCs with fs ≥ 1 MS/s appear to have saturated at an SFDR of 108 dB [15], and have not improved since 1996.

At very low and very high sampling rates there are no signs of saturation yet. For ADCs with fs ≥ 10 kS/s, there has been almost constant progress of ~9 dB/decade from 1992 [16] to 2009 [17].

ADCs with fs ≥ 1 GS/s are currently evolving at an accelerated rate of ~3 dB/year. If this rate is maintained, gigasample ADCs could go from 75 dB [18] to upwards of 100 dB SFDR by 2020.

The accelerated rate of evolution seen at different times for different speed grades may reflect how research activities migrate to higher and higher sampling rates depending on what applications are in focus. Previously more of a niche product, gigasample ADCs are now becoming a mainstream necessity.

Commercial ADC parts

Although not shown here, the results were also compared with data from 595 commercially released ADC parts. The current state-of-the-art envelopes for both sets align well across most of the speed range, with one significant exception: there are already commercial parts with significantly better SFDR than their scientific counterparts at 2 MS/s and below, e.g., the AD7766 [19] and AD7986 [20]. Commercial ADCs appear to have evolved beyond their experimental siblings within this speed segment in recent years (*).

Another difference is in the paths each subset has followed towards today’s (mostly similar) state-of-the-art. In the GS/s range there is, for example, the MAX104 [21], specifying 69 dB SFDR at fin = 125 MHz almost a decade before the scientific publication by Taft [22], while scientific efforts seem to have been ahead in other frequency ranges (e.g., below 2 MS/s) during earlier years (*).

Linearity trends are therefore more difficult to interpret, and to some degree depend on which products were reported scientifically and which were not. No such dependency could be observed for any of the noise-related parameters analyzed in this series of posts.

(*) Please note that these are only my best guesstimates, as the commercial data set (although large) is not as exhaustive as that for scientific ADCs.

In the next post I plan to review the simultaneous evolution of {ENOB, fs}. Subscribe to the blog, and you won’t miss it.