Tag Archives: figure-of-merit

ADC ENERGY EFFICIENCY EVOLUTION: What are the trends for ADC energy efficiency, and why did the “Walden” figure-of-merit attain almost canonical status when it doesn’t fit current data? Read on, and you’ll know.

Evolution front

Figure 1 shows how the state-of-the-art boundary for energy-per-sample (Es) vs. effective-number-of-bits (ENOB) has progressed over time in 5-year steps from 1983 to 2013. The energy vs. resolution dependencies suggested by the Walden and Thermal figures-of-merit (FOM) are indicated as the Walden and Thermal slopes, respectively.

Snow cone scatter

The most immediately striking feature in Fig. 1 is that the curves are increasingly separated at lower resolutions, and tend to group more closely together as ENOB increases. With the Thermal and Walden slopes overlaid as in Fig. 1, you get a kind of “snow cone scatter plot”. It means that energy efficiency has improved far more for low-resolution ADCs than for high-resolution converters during the 30 years of research covered by Fig. 1. A possible explanation is the fairly low number of attempts reported above 15-b ENOB. Another reason could be that power dissipation at ultra-high resolution almost inevitably becomes limited by thermal-noise constraints, and that the few reported designs were carefully optimized. As an example, the work by Naiknaware et al. [1] is the only scientific ADC reporting an ENOB > 20-b (other works only report static linearity or measures that did not resolve to an SNDR value). Although Naiknaware’s design was reported as early as 2000, it is still on par with today’s noise-limited state-of-the-art. That’s quite impressive!

Slope twist

A second distinct feature in Fig. 1 is that the slope for Es vs. ENOB has changed over time from an almost perfect Walden FOM model (doubling of Es per additional bit) to an almost perfect Thermal FOM model (quadrupling of Es per additional bit).

This explains the great mystery of the Walden FOM and its near-canonical status: Even as late as 2003, the state-of-the-art edge aligned very well with a Walden model. In fact, the Walden model remained true to empirical data all the way to 2007. By 2008, however, the experimental data had started to break away from the Walden slope – something that was also noted by Murmann in his well-known CICC 2008 paper [2] – and by 2013 the experimental data fits more or less perfectly with the thermal-noise model.

The van Elzakker leap

The single most influential contribution to this shift is probably that by van Elzakker et al. [3], as it represented nothing less than a quantum leap in energy efficiency for ADCs in the lower mid-range of resolutions. It gave us a new experimental data point that completely redefined the energy landscape, showing a medium-resolution design pushed all the way to the thermal-noise power limit of 2008. I believe their contribution broke a mental barrier for many regarding how far you can actually go, and what the real energy limit is. Over the last five years, other authors have followed by reporting more experimental data – both filling the gap created by the van Elzakker leap and pushing efficiency even further [4]-[6].

Low-resolution plateau

As always, we have the low-resolution plateau. There is a slight tendency towards a plateau already in the 1998 curve, and by 2003 it was fully visible – although at a much higher Es level than today’s plateau. Figure 1 also shows that there was significant progress at resolutions below 9-b over the 20 years from 1988 to 2008, but almost no movement at all during the last 5 years. The relative share of attempts below 9-b (~35%) has remained the same both before and after 2008, so it should not be due to lack of interest.

Any explanations you might have would be very interesting to hear. Pure speculations are fine too 😉

Hopes and expectations for the future

In coming years I would expect to see progress in the 10–13 bit region, which seems to be a bit underexplored at the moment. We saw an extension in this direction by the most recent state-of-the-art work by Harpe et al. [4]. I hope that future authors will continue to push the resolution for ultra-efficient ADCs. It should be possible to “iron out the wrinkles” on the current state-of-the-art border. It would be particularly nice if we could populate the border with evenly spaced SAR implementations spanning all the way up to the high resolution of commercial SAR ADCs.

I also hope that someone will explore the empty space below the low-resolution plateau. There seem to be a lot of data points missing there that could give us a better understanding of the true energy limits.

More data at ultrahigh resolution – please!

Finally, I want to plead with those of you designing ultrahigh-resolution ADCs to start including traditional dynamic performance measures (at least SNDR), even if the target application you’re imagining doesn’t care about them. If nothing else, it would increase your visibility in my scatter plots, but the main benefit for our science is that we would get more experimental data and a better understanding of the design space for “20-b and beyond”.

ADC ENERGY EFFICIENCY: As a complement to the previous post, this post compares energy vs. resolution for Nyquist ADCs and ∆∑-modulators (DSM).

Class differences

Although it’s good to get an overall view of the landscape first, the previous post didn’t reveal any detail other than the basic shape, and the state-of-the-art border or envelope. We can get additional insight if we divide the data set by A/D-converter class. Every converter has been sorted into one of five classes:

Asynchronous

∆∑-modulator

Nyquist

Narrow-band

Other

Asynchronous means truly asynchronous, and does not include ADCs where the input is synchronously sampled and only the conversion is self-timed or ripple-through. Narrow-band is any converter other than ∆∑ for which the dynamic performance was calculated over a bandwidth lower than fs/2. Other is obviously the catch-all class for anything that didn’t fit in the other four.

Nearly all the data is in the DSM and Nyquist classes, so I have only used those two classes to render the Es vs. ENOB plot in Fig. 1. The global envelope is entirely defined by DSM and Nyquist converters. The envelope corner points [1]-[6] from the previous post are still annotated with first-author names, and a few more that are interesting for this discussion have been added as well [8]-[14].

As you can see from Fig. 1, the two state-of-the-art envelopes have very similar overall form: The energy seems to be limited by thermal noise constraints at higher resolutions, and they both level out to a constant Es, or at least a curve with much less slope at lower resolutions.

The main difference is that the DSM envelope defines the global state-of-the-art at high resolutions, and Nyquist converters define it for low to medium resolutions. The transition point is currently at 12-b ENOB. Power-efficient ∆∑-modulators seem to have a noise-limited energy per sample from the 22-b ENOB reported by Naiknaware [6] down to the 12 bits reported by Shu [14]. Below 12 bits, the envelope quickly shifts to a much weaker dependency on resolution – not unlike the plateau observed in the previous post.

In comparison, the best Nyquist ADCs follow the thermal-noise energy model (or a slightly steeper slope) from the 15-b ENOB reported for SAR ADCs by Leung [7] and Hurrell [8] to the SAR ADCs reported by Harpe [4] and Liou [3] with 10 and 9-b ENOB, respectively. Below 9-b, I consider the envelope to be almost constant, as discussed in the previous post.

I guess you all observed the keyword SAR in the above paragraph, didn’t you? The SAR architecture defines more or less the entire shape of the Nyquist envelope, even if there are additional architectures along the plateau.

Energy bounds for low-resolution DSM

I hope there will soon be a theoretical analysis like [15] and [16] for ∆∑ too (please let me know if there is one already). Until then, we have to resort to empirical data. As briefly discussed in the previous post, it’s interesting to understand why the envelope breaks away from the thermal-noise limit in the way it does, also for DSM. Are we looking at the same matching/min-size limits as suggested in a comment to the previous post? Lack of data? Lack of the “right” attempts? Limited expectations or other psychological barriers?

Since ∆∑-modulators are often viewed as “high-resolution”, I wanted to investigate the possible scarcity of data below the 12-b breakpoint around Shu [14]. Figure 2 shows how the highest ENOB reported in each paper is distributed in the underlying data set. Eyeballing the histogram suggests that as many as 40% of the DSM publications report a peak ENOB < 12-b, so “lack of attempts” can probably not explain why the envelope appears to degrade so quickly.

Fig. 2. Distribution of peak ENOB per scientific paper.

It is beyond the scope of this post to go really deep into the possible reasons for the “plateau-ish” low-resolution region for DSM, but I may return to investigate the composition of experimental data further to see if it can shed some light. For this post I mainly intended to show what the empirical data looks like. I also want to highlight two additional features of the DSM plateau:

Es is 1–2 orders of magnitude higher than for Nyquist converters.

The low-resolution envelope is defined by more unusual circuit implementations: Modulators presented by Daniels [13], Wismar [11] and Kim [10] are all VCO-based, whereas Chen [12] used a passive ∆∑-loop where only the comparator is active.

Why the 10–100X difference to Nyquist converters, then? From what I can see, most of the Nyquist converters that populate the low-energy envelope are the result of going to great lengths to weed out anything that has static current, and anything that switches faster or more often than it has to. Since the very foundation of oversampling is to evaluate the circuit state much more often than the Nyquist sampling rate, I assume it will be difficult to close the gap between these two envelopes.

But that’s me. Perhaps you have some idea how it could be done, or how to prove it isn’t possible?

As always, you are welcome to share your own thoughts and interpretations of the data.

Further reading

If you are curious to see what a more detailed breakdown by architecture would look like, you may find the plot in [17] interesting. Beware, though, that the data used in [17] does not include the most recent 400 or so papers from the last three years. Also, the plot is no masterpiece of readability 😉

ADC ENERGY EFFICIENCY LIMITS (Updated): A significant part of the ADC community seems to be focused on improving the energy efficiency of data converters. Essentially getting the same job done at ever decreasing energy costs. I’m sure that many of you are trying to figure out what is the absolute limit in the power-performance trade-off. Is it only our imagination or will thermal noise or some other law of physics finally stop us from improving the energy efficiency of ADCs? Well, I will not claim to give the full answer to that. What we will do, however, is to take an empirical look at where the field is today, check if the current state-of-the-art boundaries resemble any familiar theory, and observe how the energy efficiency is influenced by certain design choices and parameters. Since that might be a rather hefty undertaking, we will start out slow and let it all evolve over several blog posts.

What is “energy efficiency”?

First of all, we need to define how to measure energy efficiency. Energy efficiency in this context is about the trade-off between the performance you get and the power you burn. Performance typically means the simultaneous combination of speed and resolution. We will use the equivalent Nyquist sampling rate (fs) to measure speed, and the effective-number-of-bits (ENOB) to measure resolution.

The starting point for this treatment will therefore be to look at the energy-per-sample, Es = P/fs, vs. ENOB, as plotted in Fig. 1.
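To make the metric concrete, here is a minimal sketch (my own illustration, not code from the post) of the energy-per-sample calculation Es = P/fs:

```python
# Energy per sample: power normalized by the equivalent Nyquist sampling rate.
def energy_per_sample(power_w, fs_hz):
    """Return Es in joules for a converter burning power_w [W] at fs_hz [S/s]."""
    return power_w / fs_hz

# Example: a hypothetical 1 mW ADC sampling at 1 GS/s spends 1 pJ per sample.
print(energy_per_sample(1e-3, 1e9))  # 1e-12 J = 1 pJ
```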

The ADC energy landscape

Figure 1 gives a helicopter view of the ADC energy landscape. It shows the energy-per-sample vs. effective resolution for nearly all implementations reported scientifically since 1974 and to this date. The current state-of-the-art envelope is highlighted, and key corner points are indicated with first-author names [2]-[7].

The energy slopes

Two lines have been superimposed as visual guides, and to support the discussion: As mentioned in various publications, e.g., [8]-[11], the power dissipation will quadruple for every effective bit of resolution if the ADC power is limited by thermal-noise constraints (e.g., kT/C capacitor sizing), and the architecture is otherwise unchanged [12]. The solid line is therefore

Es = 2^(2 × (ENOB – 9)) pJ

and has been labeled the Thermal slope. It is equivalent to a constant Thermal FOM

FB1 = Es / 2^(2 × ENOB) = P / (2^(2 × ENOB) × fs) = 2^(–18) pJ ≈ 3.8 aJ

The second visual guide (dashed) is

Es = 2^(ENOB – 9) pJ

and was labeled the Walden slope because it corresponds to a doubling of power dissipation for every additional bit of resolution, as suggested by the Walden FOM. The dashed line corresponds to a constant Walden FOM

FA1 = Es / 2^ENOB = P / (2^ENOB × fs) = 2^(–9) pJ ≈ 2 fJ

The term “–9” is arbitrarily chosen in order to get expressions that are easy to remember and that intersect Es = 1 pJ @ ENOB = 9-b, which is a relevant energy point approximately on the state-of-the-art border.
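For readers who want to plot or check the guides themselves, the two slopes and their constant-FOM equivalents can be sketched as follows (my own reconstruction from the definitions above; pJ units throughout):

```python
def es_thermal_pj(enob):
    # Solid line: Es = 2^(2*(ENOB - 9)) pJ -> quadrupling per bit
    return 2.0 ** (2 * (enob - 9))

def es_walden_pj(enob):
    # Dashed line: Es = 2^(ENOB - 9) pJ -> doubling per bit
    return 2.0 ** (enob - 9)

def fom_thermal_pj(es_pj, enob):
    # Thermal FOM: FB1 = Es / 2^(2*ENOB); constant along the solid line
    return es_pj / 2.0 ** (2 * enob)

def fom_walden_pj(es_pj, enob):
    # Walden FOM: FA1 = Es / 2^ENOB; constant along the dashed line
    return es_pj / 2.0 ** enob

# Both guides intersect Es = 1 pJ at ENOB = 9-b:
print(es_thermal_pj(9), es_walden_pj(9))  # 1.0 1.0
```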

The low-resolution plateau

As seen in Fig. 1, energy-per-sample levels out to an almost constant value for low resolutions. This low-resolution plateau is rather puzzling. While the degradation below 3-b may be due to lack of data, there doesn’t appear to be any lack of attempts between 3–8 bits. Intuitively, I would not expect Es to be practically independent of ENOB from 8-b and below. Would you? I would expect it to continue to decrease with resolution, but possibly at a slower rate than the thermal slope dictates.

I do have some thoughts on how this may depend on what ADC specs scientists have chosen to target, but it would be very interesting to hear your thoughts on the plateau.

The empirical data vs. the slopes

The “slopes” in Fig. 1 represent the energy-vs.-performance models suggested by the two figures-of-merit FA1 and FB1. As you can see, the thermal slope aligns very well with the state-of-the-art boundary for ENOB ≥ 9-b. There is a fair amount of randomness in the state-of-the-art progress that causes it to zigzag around the ideal model, but the thermal-noise energy model seems able to predict the overall slope of the curve from 9-b and above. This should be quite uncontroversial, as it is commonly understood that the power dissipation of high-resolution ADCs is limited by thermal-noise constraints. It should, however, be noted that recent works seem to extend this relation even to resolutions as low as 9–10-b ENOB.

Regarding the Walden slope, my interpretation is that it fails to fit the empirical data within any significant range of resolution, except possibly for the roughly 1-b wide region between Liou [4] and Harpe [5], where the overall curve (according to my interpretation) is in a state of transition from thermal-noise limited to approximately constant. There is also a region between 10–15 bits where the slope is almost identical to the Walden model. To the best of my understanding, this is an inevitable effect of zigzagging around the thermal slope: Locally, the curve will alternate between segments with a shallower slope (looking like Walden) and segments with a slope even steeper than predicted by the thermal-noise model.

Since I can’t unambiguously prove the above at this point, and since I know this can be a bit sensitive, I will remain open to the possibility that the Walden energy model could still be valid over some range of resolutions.

Please feel free to share your own thoughts and interpretations of the data.

Update: I clearly forgot to mention the theoretical predictions of SAR ADC energy bounds by Zhang, Svensson, and Alvandpour in [13]. Kind of a SAR version of [10]. Since SAR ADCs dominate large segments of the low-energy scene, the paper is extremely relevant to this post. On top of it, the theoretical predictions align very well with the empirical data in Fig. 1 above. (You can try yourself to overlay the two plots in Photoshop or similar)

Upcoming posts

In a few more posts on this topic, I intend to illustrate how the empirically observed ADC efficiency limits depend on parameters such as sampling rate, process node and year.

Winter seems to be super-glued to Sweden this year, so to illustrate “spring” I had to pick this photo from the archives: Liverleaf (Hepatica Nobilis) in all its glory.

ADC FOM UPDATE: It’s now “post-ISSCC”, which is a more than sufficient reason to update the survey. If you were lucky enough to attend ISSCC this year, you may be familiar with the progress in A/D-converter figure-of-merit (FOM) since the Christmas 2012 Update. If not, I will summarize it here. This update also covers the most recent issues of IEEE Journal of Solid-State Circuits (JSSC), Transactions on Circuits and Systems pt. I and II, and ADC papers from ISOCC 2012. Unfortunately, the 2012 version of A-SSCC doesn’t seem to have made it into IEEE Xplore yet, so the 11 or so ADC papers that were published there will have to wait until the next update. Even without A-SSCC 2012, the survey now includes 4057 experimental data points extracted from 1810 scientific papers published between 1974 and Q1-2013.

ISSCC/Walden FOM

Already from the paper titles in the ISSCC 2013 Advance Program, it was clear that the previous 2.8 fJ world record by Harpe et al. [1] wasn’t going to stand for long. Of the two papers reporting an improved Walden FOM, the 10-b SAR by Liou and Hsieh [2], National Tsing Hua University, Hsinchu, Taiwan, achieves an impressive 2.4 fJ. Nevertheless, Pieter Harpe and coauthors Cantatore and van Roermund from Eindhoven University of Technology, The Netherlands, keep the leader position through their new 10/12-b SAR [3], achieving 2.2 fJ in 12-b mode.

Clearly, both of the above designs are outstanding works. Something I particularly liked about the Harpe ADC was the elegant way they reduced the impact of comparator noise only for the decision(s) where it is really needed (i.e., when the comparator input is weak). Check it out, and enjoy the beauty of it all.

Another highlight is that Harpe et al. were able to set the new FOM world record and simultaneously push ENOB to 10.1 bits. Since the Walden FOM does not correctly model the energy vs. resolution trade-off for thermal noise limited designs, it is more difficult to achieve a good FOM the higher resolution you have. We’ll take a deeper look into that very soon in future posts. For now we can just conclude that the effort represented by their result is therefore even more admirable.
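To see why the Walden FOM penalizes high resolution, consider a design pinned to the thermal-noise energy model; here is a small sketch (my own illustration, with an arbitrary technology constant k):

```python
def walden_fom_of_thermal_limited(enob, k=1.0):
    # For a thermal-noise-limited design, Es scales as k * 2^(2*ENOB), so the
    # Walden FOM = Es / 2^ENOB = k * 2^ENOB still doubles per additional bit.
    es = k * 2.0 ** (2 * enob)
    return es / 2.0 ** enob

# One extra bit costs 2x in Walden FOM, even at the thermal limit:
print(walden_fom_of_thermal_limited(11) / walden_fom_of_thermal_limited(10))  # 2.0
```

This is why holding the Walden record while pushing ENOB past 10 bits is harder than it looks.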

Additional observations

As observed in the Christmas 2012 Update, state-of-the-art Walden FOM is typically reported at lower-than-nominal supply voltages. This is true also for the present update. If you are aiming to win the FOM race, you obviously need to make a really good design in the first place. Then, when you’re measuring, good advice would be to sweep the VDD downwards, accept that the circuit becomes slower and noisier, and simply search for the VDD sweet spot where you get the best FOM to report.
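The recipe above can be sketched in a few lines; note that the sweep data below is entirely made up for illustration (not measurements from any paper):

```python
def walden_fom(p_w, fs_hz, enob):
    # FA1 = P / (2^ENOB * fs), in joules per conversion step
    return p_w / (2.0 ** enob * fs_hz)

# Hypothetical VDD sweep: lower supply -> less power, but slower and noisier.
sweep = [
    # (VDD [V], P [W], fs [S/s], ENOB)
    (1.0, 40e-6, 2.0e6, 9.5),
    (0.8, 20e-6, 2.0e6, 9.4),
    (0.6,  8e-6, 1.0e6, 9.1),
    (0.4,  3e-6, 0.5e6, 8.5),
]

best = min(sweep, key=lambda m: walden_fom(m[1], m[2], m[3]))
print(f"Best FOM at VDD = {best[0]} V")  # the sweet spot sits mid-sweep here
```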

Another striking feature is that sub-10fJ Walden FOM has so far been reported from only a handful of countries, of which The Netherlands and Taiwan currently seem to have the initiative. I will probably focus on this geographical aspect in a separate post, so I’ll just leave you with this teaser for now.

Thermal FOM

As in the previous update, no progress is reported beyond the Thermal FOM of 1.1 aJ reported by Xu [4], but for Nyquist ADCs, the Walden FOM winner above [3] is also the new Thermal FOM winner with a new world record of 2.0 aJ. So, double gold medals for Harpe, Cantatore and van Roermund from Eindhoven University of Technology. Excellent job!

I also want to mention that the design by Liou and Hsieh [2] – the silver medalists in the Walden FOM category above – also weighs in as the third-best Thermal FOM ever reported for Nyquist ADCs.

There are a few more designs now becoming visible on my “sub-10aJ radar”. Of these, I’d like to point out the ring-amp based ADC by Hershberg et al. [5]. First of all, it’s not a SAR. Among low-energy Nyquist ADCs, that’s unusual in itself. Secondly, the authors suggest that ring-amp realization of ADCs could be a way to beat the noise-floor vs. technology-scaling limits predicted, for example, by myself in [6]. And, as much as I like to be right in my predictions, I would still prefer to be wrong and see the ADC field continue to evolve beyond all limits we can see today. So I hope they are right about the ring-amp ADC, and will follow up with more experimental results to establish that once and for all.

Or … that someone else of you has something even better in your drawer.

Upcoming posts

Unless I get too fascinated with the geographic aspects of low-energy ADC research, the plan is to start looking at the energy vs. performance limits from a mostly empirical perspective. I hope to deliver something that is useful for those of you active in this race.

ADC FOM UPDATE: I’ve understood that many Converter Passion readers are the very scientists who advance the state-of-the-art for A/D-converters. You are most certainly keeping a close eye on the progress yourselves. But in case you haven’t had time to scan the output of every major conference and top journal lately, this post will summarize the figure-of-merit (FOM) evolution since the Spring 2012 Update.

ISSCC/Walden FOM

As mentioned in the previous update, the 4.4 fJ reported by van Elzakker et al. at ISSCC 2008 [1] has been an impressively persistent world record for “The FOM”

FA1 = P / (2^ENOB × fs)

It lasted for over four years, until Tai et al. [2] presented a 3.2 fJ SAR ADC at the IEEE Symposium on VLSI Circuits in June 2012. Congratulations to the team from National Taiwan University, Taipei, Taiwan, for their outstanding achievement. With their 0.35 V design, the NTU team were the new FOM champions between June and August 2012.

In September there was ESSCIRC. This year’s ESSCIRC had no less than four ADCs with a sub-10 fJ FOM [3]-[6]. No extra points for guessing the architecture – yes, they are all SAR. Among the four, the 2.8 fJ, 0.7 V, 7–10-b, flexible SAR by Harpe et al. [3] is the new winner. The Eindhoven-based team behind the impressive world-record FOM are from the Holst Centre and Eindhoven University of Technology, The Netherlands. Excellent job, indeed! Many greetings from Converter Passion.

Besides the winners, all designs that achieved an FA1 < 10 fJ since the last update are listed below. Their combinations of {FA1, fs, ENOB, technology, VDDmax} are shown. In addition to all being variations of the SAR architecture, they are also close to the 9-b sweet spot for FA1, as predicted in “The path to a good A/D-converter FOM” and [7]. Except for the 7.07-b design by Yoshioka et al. [6], they are all gathered within a slim 0.5-b interval centered just above 9-b. As explained in [7], this is not by accident.

Another clear trend is to operate the ADC at a lower-than-nominal supply voltage. As you can see from the table, the CMOS nodes range from 180 to 45 nm, but all six are run at low or ultra-low voltage. This is directly beneficial as it reduces the digital switching power. It is also likely to cause the converter to become limited by analog noise, which is pretty much a requirement when you’re aiming for energy-optimal operation.

| FOM [fJ] | Speed [S/s] | ENOB | Node [nm] | VDD [V] | 1st Author | Ref |
|----------|-------------|------|-----------|---------|------------|-----|
| 2.8      | 2M          | 9.31 | 90        | 0.7     | Harpe      | [3] |
| 3.2      | 100k        | 9.06 | 90        | 0.35    | Tai        | [2] |
| 3.9      | 2M          | 9.29 | 65        | 0.7     | Yin        | [4] |
| 4.5      | 1k          | 8.80 | 65        | 0.6     | Zhang      | [5] |
| 6.1      | 1.3M        | 7.07 | 45        | 0.4     | Yoshioka   | [6] |
| 8.0      | 200k        | 9.33 | 180       | 0.6     | Huang      | [8] |
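As a sanity check on how the table columns relate, the power implied by a reported FA1 can be back-calculated as P = FA1 × 2^ENOB × fs. A quick sketch (the resulting power is my own back-of-envelope estimate, not a reported figure):

```python
def implied_power_w(fom_j, enob, fs_hz):
    # Invert FA1 = P / (2^ENOB * fs) to recover the implied power dissipation.
    return fom_j * 2.0 ** enob * fs_hz

# Harpe [3]: 2.8 fJ at 2 MS/s and 9.31-b ENOB -> a few microwatts
p = implied_power_w(2.8e-15, 9.31, 2e6)
print(f"{p * 1e6:.2f} uW")
```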

What’s in the future?

It seems that a larger body of research efforts is now catching up with the rather extreme step taken by van Elzakker et al. The region below 10 fJ is rapidly becoming more densely populated. Within a six-month period we saw two new world records, and with so much focus on this particular performance measure, we are likely to see more. In fact, judging from the titles in the ISSCC 2013 advance program, there are already two designs below 2.8 fJ lining up to be presented there. Perhaps more. Wish I could go there too.

Historically, the state-of-the-art FOM has mainly been reported in JSSC and at ISSCC, with the occasional publication at other conferences. As noted above, we can expect more to come out of ISSCC in the future, but ESSCIRC has clearly raised its profile with respect to ADC FOM in this millennium. Looking at the number of unique publications advancing the state-of-the-art FOM over time sorted by source publication, we get the “market share” of world records for each conference/journal, as shown in Fig. 1. Since the data between 2000 and 2012 consists of only 7 unique FOM advancements, we can’t be too sure about the trends. But it certainly makes ESSCIRC look good, doesn’t it?

Thermal FOM

As discussed in a previous post, the overall evolution of the “Thermal FOM”

FB1 = P / (2^(2 × ENOB) × fs)

over time has slowed down, and as explained in [9] it may not improve much with technology scaling. It is therefore no surprise that the overall FB1 remains unchanged at the 1.1 aJ reported by Xu [10].

Thermal FOM for Nyquist ADCs

New thermal-FOM champions for Nyquist converters are actually the same as the Walden-FOM winners above: First, the design by Tai et al. [2] nudged the previous 6.6 aJ record by Verbruggen et al. [11] down to 6.0 aJ. After a few months, Harpe et al. took the thermal FOM down to 4.4 aJ, which is the current world record.
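Since the two FOMs differ only by a factor 2^ENOB, the thermal-FOM numbers follow directly from the Walden figures and the reported ENOB; a quick check (ENOB values taken from the table above):

```python
def thermal_from_walden(walden_fom_j, enob):
    # FB1 = P / (2^(2*ENOB) * fs) = FA1 / 2^ENOB
    return walden_fom_j / 2.0 ** enob

# Tai et al. [2]: 3.2 fJ at 9.06-b ENOB -> ~6.0 aJ
print(thermal_from_walden(3.2e-15, 9.06))
# Harpe et al. [3]: 2.8 fJ at 9.31-b ENOB -> ~4.4 aJ
print(thermal_from_walden(2.8e-15, 9.31))
```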

Final words

All papers highlighted in this update represent considerable efforts and significant achievements with respect to energy efficiency. It was a joy reading them, and it will be exciting to see how far this evolution will take us.

To the blog readers that celebrate Christmas, I wish you a Merry one – to the rest, a Joyful Season. To all of us, a Happy New Year!