Spread-spectrum clocking to reduce EMI: clever or cheat?

This common technique can help you meet a spec or solve a problem, but has worrisome implications as well.

Most designers are familiar with the technique of using a spread-spectrum clock to reduce apparent EMI/RFI emissions. By deliberately dithering the system clock, the radiated energy is spread across the spectrum and thus its peaks are reduced, which allows the product to meet regulatory or industry specifications. The technique is now well-established, and vendors offer clock ICs with adjustable spread widths and rates, as well as advanced pseudorandom spread algorithms.
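The peak-reduction effect is easy to see in a quick simulation. The sketch below is a minimal numpy model, not a vendor's implementation: the 25 MHz clock, ±1% triangular center-spread, and 30 kHz modulation rate are illustrative assumptions, though they are in the range typical of spread-spectrum clock ICs.

```python
import numpy as np

fs = 1e9                         # simulation sample rate, 1 GHz (assumed)
f0 = 25e6                        # nominal clock frequency, 25 MHz (assumed)
t = np.arange(0, 2e-4, 1 / fs)   # 200 us observation window

# Fixed clock: ideal square wave at f0
fixed = np.sign(np.sin(2 * np.pi * f0 * t))

# Spread clock: +/-1% "center-spread" triangular frequency modulation
mod_rate = 30e3                               # 30 kHz modulation rate (assumed)
tri = 2 * np.abs(2 * ((mod_rate * t) % 1) - 1) - 1   # triangle wave in [-1, 1]
inst_f = f0 * (1 + 0.01 * tri)                # instantaneous frequency
phase = 2 * np.pi * np.cumsum(inst_f) / fs    # integrate frequency to get phase
spread = np.sign(np.sin(phase))

def peak_db(x):
    """Peak spectral magnitude in dB of a Hann-windowed signal."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return 20 * np.log10(X.max())

# The total energy is roughly the same; only the peak drops.
reduction = peak_db(fixed) - peak_db(spread)
print(f"peak spectral reduction from spreading: {reduction:.1f} dB")
```

Because the energy is smeared over the deviation range rather than concentrated in one spectral line, the spread clock's peak falls by several dB even though the radiated energy is essentially unchanged, which is exactly the point the critics below raise.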

But when discussing the use of spread-spectrum clocking to meet EMI/RFI requirements, I find that engineers are divided on the approach. Some feel it is a legitimate, very-low-cost tool in the designer's kit that helps the design meet market requirements. Others feel it is a shortcut cheat, too often used instead of proper EMI design methods. Still others feel it should be used only after all other conventional steps have been taken: shielding, grounding, layout changes, and ferrite beads, to cite a few.

One argument against using spread spectrum is that you are likely just turning your design "problem" into someone else's. As you spread the energy, yes, you may meet a specification, but you also introduce the likelihood of unexpected problems when your spread energy mixes with as-yet unknown or undefined energy in other nearby or connected systems, each with its own frequencies and amplitudes. In short: "hey, I met the spec; after that, it's your problem, not mine!"

As with most engineering decisions, there is no single right or best answer. What makes sense depends on your project's priorities, budget, constraints, market forces, and the balance among all the tradeoffs that every design encompasses. Perhaps in an ideal world, the design would first be made as EMI-robust as possible, and spread spectrum would then be added for a little extra insurance, but only if needed.

[Ironically, using spread spectrum runs directly counter to another engineering imperative, central to test equipment and many data links: a clock that is as perfect and jitter-free as possible. That's one of the many contradictions that make engineers uncomfortable with the technique.]

Have you ever used spread spectrum to reduce EMI and meet a spec? What's your view on spread spectrum as an EMI reduction technique?

•it's a great idea, go ahead and use it right away;

•use it only after everything else that should be done has been done, and you are still stuck;

•use it only after you have already met the spec, just to buy a little extra margin;

•or don’t use it at all, since the unforeseen consequences to the overall system and broader application are too risky?

One kind of sneaky question: if receiver noise is in fact a problem for measurements taken in accordance with test directives, why do those directives still specify an unrealistic measurement bandwidth? It seems the EMI test suite should use a bandwidth comparable to that of the receivers expected to operate in the frequency band being tested. If this were properly implemented, there would be no question of sneaky interference from "certified" or "verified" devices.

I think spread spectrum clocking is a terrible thing to do in a system designed to interoperate with other components. When integrating such a system, I have seen configurations that were unstable with spread spectrum clocking in use. So, for "open" platforms I think it is a terrible method. Even in a closed system, it feels like a cheat, but at least the full system will have been tested together and, presumably, at least function properly.

If you just want to pass a regulatory hurdle, I would make it my fix of last resort. However, the last time I saw someone use this was in a '90s automotive design. Everyone loved it until the platform folks did an FM radio listening test: when you hit the "seek next higher station" button, the radio would stop at an "empty" station and growl at you. I was the poor schmuck who was tasked with re-designing it.
It is a blatant method of "fooling" the quasi-peak detector and the spectrum analyzer's sampling response time. We'd all better hope that CISPR 14 & 16 don't address this one day.

In GPS & ESM circles this is called noise jamming. In the former, spread noise is tolerable because of the distance to the satellites, but it created the near-far problem when local augmentation pseudolites were proposed: their relative proximity makes processing gain much less effective. If all and sundry went for SS (cellular services, GPS/Galileo (GNSS), EMI measures, TV broadcasting, etc.), the resulting rise in antenna noise temperature, and therefore in the noise floor, would begin to defeat the original attributes of SS. Too much of a good thing?

WKetel, you have a point: in some cases, spread spectrum is a means to dodge a regulation. But in an environment full of emitters, there has to be some level of interference that you, as the receiver, simply have to live with. Now, if spreading your emissions widely enough pushes the potentially interfering emissions far enough down, that may be a case for accepting the (admittedly) shady technique. At least until the requirements boys get a bit more clever and address SS techniques specifically.

Spread spectrum is another "cheap trick" to get around a regulation. It does reduce some kinds of interference, but if a switching supply or a microcontroller is radiating noise into my audio system, spreading the clock does not make my problem any smaller; it just makes the source a bit harder to locate. Noise at a constant frequency can be rejected by a tuned filter, but spread-spectrum noise would need to be bypass-filtered or band-reject-filtered, both of which are more complex than a single-frequency rejection filter.
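The filtering point above can be illustrated numerically. The sketch below is a numpy-only model, with the 1 MHz interferer, ±50 kHz spread, and ±2 kHz notch width all chosen as illustrative assumptions: a narrow spectral notch removes nearly all of a fixed-frequency tone, but leaves most of a spread tone's energy untouched.

```python
import numpy as np

fs = 10e6                        # sample rate, 10 MHz (assumed)
n = 1 << 17                      # ~13 ms of samples
t = np.arange(n) / fs

# Constant-frequency interferer at 1 MHz
tone = np.sin(2 * np.pi * 1e6 * t)

# Spread interferer: 1 MHz swept +/-50 kHz by a 1 kHz triangle wave
tri = 2 * np.abs(2 * ((1e3 * t) % 1) - 1) - 1
inst_f = 1e6 + 50e3 * tri
spread = np.sin(2 * np.pi * np.cumsum(inst_f) / fs)

def residual_after_notch(x, f_center=1e6, half_width=2e3):
    """Zero all spectral bins within +/-half_width of f_center (an idealized
    notch filter); return the fraction of signal power that survives."""
    X = np.fft.rfft(x * np.hanning(len(x)))     # window to limit leakage
    f = np.fft.rfftfreq(len(x), 1 / fs)
    keep = np.abs(f - f_center) > half_width
    return np.sum(np.abs(X[keep]) ** 2) / np.sum(np.abs(X) ** 2)

print(residual_after_notch(tone))    # near zero: the notch kills the tone
print(residual_after_notch(spread))  # most of the spread energy survives
```

The narrow notch removes the fixed tone almost completely, while the spread interferer's energy mostly falls outside the notch, so rejecting it would require a filter as wide as the spread itself.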

Hey, from the shortwave listener's point of view, spread spectrum is cheating. It allows a device to pass EMI tests but, since it does not reduce the overall energy emitted, it adds to the basic noise level.
It is what makes life hard, like "light pollution" making the astronomers' lives hard.

The main reason for tight EMI emission standards is RF interference, and that only makes sense in the context of a given frequency. The fundamental issue isn't that your widget creates too much total emitted energy (unless your widget is very high power); but rather that the energy is concentrated at one frequency due to "flaws" in your product. These "flaws" are a side-effect of fast, synchronous designs and high-speed external connections. Spreading the energy just counteracts a side-effect of synchronous design.