Extending Debugging Resources

Are we using the debugging resources available in so many MCUs in the most effective manner?

In the olden days, processors just didn't have any on-board debugging features. This was a boon for tool makers, and kept me fed for many years as my company made in-circuit emulators. Early ICEs offered little functionality beyond a handful of instruction breakpoints, but over time they sprouted a wealth of features like data breakpoints, trace, profiling, and much more. The 80s and 90s were halcyon days for the ICE industry.

That business is all but gone. Sure, pockets still exist. Microchip's REAL ICE and a handful of other products still keep a flicker of life in the emulator business. But very high bus speeds, tiny all-but-unprobeable packages, and the staggering array of on-chip debugging features hollowed out that industry.

Multiple hardware break- and watch-points are now common on-chip, as is trace and much more. In the ARM market, of course, vendors are free to pick and choose (for a fee) from a variety of debug modules, or to have none at all.

How many of these resources do you typically use at a time when debugging? I bet the answer is generally no more than a few.

I'd like the IDE vendors to offer a mode that automatically enables these resources to capture common problems. For instance, wouldn’t it be nice if the tools always monitored stacks? Or watched large data structures for buffer overruns? Or captured null pointer dereferences by watching for accesses through location zero?

As a not-very-sophisticated user of the native debugging features of ARM chips, I'm confused by the plethora of facilities across different chip generations. I know standard debug cells are beginning to appear, but it's still tricky to know what's actually there. FWIW, to preserve my sanity I try to stick to the standard GCC/GDB toolchain for Cortex chips from M0 to A8, and as far as I know there is no way to 'introspect' exactly which capabilities are present on the platform I'm working with at the moment.

This is a problem with ARM in general, and not just for debug: consider the system peripherals. On Intel, PCI resources self-identify, but on ARM you just have to read the SRM and write down the memory addresses. As a result, the abstraction has to be handled by software, e.g. device trees on Linux, and the result is quite cryptic. The same goes for debugging, but AFAIK there is no standard to describe it the way device trees describe the I/O peripherals.