Can't it just be on me not to make such stupid errors in the first place?

Yes, but we all make mistakes, or forget to allow for boundary conditions or out-of-range values.

My example cannot be detected at compile time, of course, but simple defensive coding is not about supervisory code; it's just applying sanity tests to incoming data. Here's an example from the framework I'm currently working on:

VERIFY_OBJECT(s, OBJID_STRING); // OK, the struct is good, we can use it with reasonable confidence

The function requires a pointer to a "string" structure, and most code will simply take it for granted that "s" does in fact point to the right place. The VERIFY_OBJECT macro tests for two special bytes in the structure; if they don't conform to a particular format, this is considered a fatal error: it is reported and the processor enters a while(1) loop.
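A minimal sketch of how such a guard could look. The struct layout, the magic values, and the reporting are assumptions for illustration, not the framework's actual code:

```c
#include <stdint.h>
#include <stdio.h>

#define OBJ_MAGIC    0xA5   /* assumed fixed guard byte */
#define OBJID_STRING 0x53   /* assumed ID byte for a "string" object */

typedef struct {
    uint8_t  magic;         /* must always be OBJ_MAGIC */
    uint8_t  objid;         /* identifies the object type */
    char    *data;
    uint32_t len;
} string_t;

/* Returns nonzero if the object header looks sane. */
static int object_ok(const void *p, uint8_t objid)
{
    const uint8_t *b = (const uint8_t *)p;
    return p != NULL && b[0] == OBJ_MAGIC && b[1] == objid;
}

/* Fatal on failure: report, then halt the processor. */
#define VERIFY_OBJECT(p, id)                                    \
    do {                                                        \
        if (!object_ok((p), (id))) {                            \
            printf("fatal: bad object at %p\n", (void *)(p));   \
            for (;;) { /* halt: while(1) */ }                   \
        }                                                       \
    } while (0)
```

A function taking a string_t* would then start with VERIFY_OBJECT(s, OBJID_STRING); and only proceed if the header bytes check out.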

This does add overhead, that's for sure, and I see it as a good reason for using the new Due: the overhead will be easily absorbed by the extra speed and memory.

This function is mostly error-checking code; an equivalent Arduino function would probably just write into some registers without question, and you'd spend 3 hours wondering why the serial port doesn't work because you "know" that "location == 2".

If you hang out here long enough, you will become one of us and reliability won't matter. I speak from experience; my standards are slipping.

For most programmers, reliability is handled by taking out the debug checks when the software is released, so the code is smaller and runs faster.

Here is a quote from long ago:

Quote

Turning off dynamic semantic checks once you've finished debugging your program is like taking off your seat belt once you've left the driveway.

This quote is wrong; modern programmers don't believe in seat belts or airbags. We are at the level of automobiles 60 years ago: there are no seat belts.

Modern programmers don't diagnose and fix bugs; they turn Bohrbugs into Heisenbugs. Supposedly, one way to make life better for users is to convert Bohrbugs into Heisenbugs.

Quote

This is good enough because Bohrbugs are showstoppers for users: every time the user does the same thing, he or she will encounter the same bug. With Heisenbugs, on the other hand, the bugs often go away when you run the program again. This is a perfect match for the way users already behave on the Web. If they go to a Web page and it fails to respond, they just click "refresh" and that usually solves the problem.

Here is a definition:

Quote

Jim Gray drew a distinction between two kinds of bugs. The first kind are bugs that behave predictably and repeatedly--that is, they occur every time the program encounters the same inputs and goes through the same sequence of steps. These are Bohrbugs, named for the Bohr atom, by analogy with the classical atomic model where electrons circle around the nucleus in planetary-like orbits. Bohrbugs are great when debugging a program, since they are easier to reproduce and find their root causes.

The second kind of bug is the Heisenbug, named for Heisenberg's Uncertainty Principle and meant to connote the inherent uncertainty in quantum mechanics, which are unpredictable and cannot be reliably reproduced. The most common Heisenbugs these days are concurrency errors (a.k.a. race conditions), which depend on the order and timing of scheduling events to appear. Heisenbugs are also often sensitive to the observer effect; attempts to find the bug by inserting debugging code or running in a debugger often disrupt the sequence of events that led to the bug, making it go away.

Quote

If you hang out here long enough, you will become one of us and reliability won't matter.

I hope not.

Quote

I speak from experience, my standards are slipping.

Code I suggest on this forum never has any real tests in it, because a) I haven't got the time to do it and b) most people asking for help wouldn't understand anyway, and I suspect the experienced guys here do the same. But don't let it creep into your own code and libraries.

As I said above, I think there will be no excuses when we start writing code for the Due; it has the resources to include more defensive code.

Interesting document; I just read it. Did you know it was written by me? My full name is James Robert Gray, AKA Jim Gray.

From that document:

Quote

Availability is doing the right thing within the specified response time. Reliability is not doing the wrong thing.

I aim for reliability; I'm content to stop a system rather than continue with bogus information. Availability implies redundancy, which is out of the scope of what I'm doing, I think.

Quote

Make each module fail-fast -- either it does the right thing or stops.

By that definition my code is "fail-fast": it's better to stop a system entirely, and right away, than to let things propagate and probably get worse.

Quote

Processes are made fail-fast by defensive programming. They check all their inputs, intermediate results, outputs and data structures as a matter of course. If any error is detected, they signal a failure and stop. In the terminology of [Cristian], fail-fast software has small fault detection latency.

As I am trying to do.
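In C, that style of checking all inputs at function entry might look like the following. The REQUIRE macro, the function names, and the baud-rate limits are illustrative assumptions, not code from the framework:

```c
#include <stdio.h>

/* Illustrative fail-fast policy: report the failed check, then stop. */
#define REQUIRE(cond)                                       \
    do {                                                    \
        if (!(cond)) {                                      \
            printf("fatal: check failed: %s\n", #cond);     \
            for (;;) { /* halt rather than continue */ }    \
        }                                                   \
    } while (0)

/* The predicate is separate so the policy can be tested without halting. */
static int baud_ok(long baud)
{
    return baud >= 300 && baud <= 115200;   /* assumed supported range */
}

static void serial_begin(long baud)
{
    REQUIRE(baud_ok(baud));   /* validate the input before touching hardware */
    /* ... configure the UART here ... */
}
```

The fault-detection latency is as small as it can be: a bad argument never reaches the hardware-setup code at all.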

There have been a few threads asking if Arduino can be used in a commercial environment and my answer has always been "No, not in my opinion" (talking from the software point of view here). I haven't looked for a while now but IIRC things like digitalWrite() blindly index into arrays.

The framework I'm writing (examples above are from it) attempts to address these sorts of issues but it's as much an academic project as anything as to be honest nobody seems to care about such things.


I find most of the things you say to be very interesting, unfortunately, the code you posted above is well above my understanding at this point.

So you do that in digitalRead, digitalWrite, analogWrite, digitalPinToTimer, digitalPinToBitMask, digitalPinToPort, and some equivalent in portOutputRegister, portModeRegister, portInputRegister, and add code in most of the above to check for an error on each return, until the error checking is larger than the code that actually does something to a pin. (Those are all potentially user-accessible APIs, so they all need to do error checking, right?) And then you'd wonder why everyone is bypassing your code and using direct port I/O without putting the error checking back.

Strong error checking seems like a good idea, but when you combine it with other modern programming practices that are also good ideas, you end up executing the same checks "many" times...

Pascal and Ada would have the compiler do this for you invisibly, of course, assuming that you create proper data types (i.e., tell the compiler that a "pin" is supposed to be from 1 to 19...)

(and that's without the issue of trying to figure out what SYS_ERROR is supposed to do on a deeply embedded system.)

(I guess in C++, the proper course of action is to make everything that's a pin be an object/class, and overload assignment/creation to do the range check so that the other methods don't need to? Except that you lose garbage collection of unused methods, leading to a different version of bloat?)

Quote

digitalPinToTimer, digitalPinToBitMask, digitalPinToPort, and some equivalent in portOutputRegister, portModeRegister, portInputRegister,

No, I don't have these functions/macros yet and may never. I don't really need a semi-random mapping of pins to ports and bits: the pins are mapped in port-then-bit order, so pin 0 is PORT0:0, pin 32 is PORT1:0, etc. (this is on an ARM). Less flexible, but the mapping is a simple bit of arithmetic and no lookup arrays are used. Here are two of the macros that get the port and the bit of a logical pin:

#define pinPort(pin) ((pin) / 32)
#define pinPos(pin)  ((pin) % 32)

Therefore, apart from the initial test to see if the pin number is too large, no further tests are needed. Good or bad, this is a design decision I've made to keep the pin mapping orthogonal; it's also less flexible, I admit, but so far I haven't seen a reason to do otherwise. Mind you, I'm not saying I've got all this right yet; it's very easy to overlook blindingly obvious issues when working alone.
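For example, a digitalWrite-style call then needs only that one boundary test. The port count and the register model below are invented for illustration, and the macros are repeated so the sketch is self-contained:

```c
#include <stdint.h>

#define NUM_PORTS 4                 /* assumed: four 32-bit ports */

#define pinPort(pin) ((pin) / 32)   /* which port a logical pin lives on */
#define pinPos(pin)  ((pin) % 32)   /* which bit within that port */

static uint32_t port_set[NUM_PORTS];   /* stand-in for the ports' SET registers */

int pin_write_high(int pin)
{
    if (pin < 0 || pin >= NUM_PORTS * 32)   /* the initial (and only) test */
        return -1;
    /* Plain arithmetic, no lookup tables: write the bit's SET register. */
    port_set[pinPort(pin)] = 1u << pinPos(pin);
    return 0;
}
```

Because the mapping is pure arithmetic, validating the pin number once validates the port index and the bit position at the same time.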

In general the tests are at the API interface; once an API function is happy, any lower-level code does not check for errors. Such low-level code is not public, although as I'm using C and not C++ I can't really enforce that.

There are occasions where the same test is performed a few times during the course of a single API call, but this is (I think) only for setup functions such as pinMode that are generally called once, in non-time-critical code.

My goal is to make a hobbyist-level system that's more robust than usual. Some things will make it slower than a naked API, but as I've said, I think this is an advantage of using an ARM: you can have more features and still be faster than an AVR. As always, if you need to bypass stuff to get a 10MHz square wave out of a pin, then you do what you gotta do.

Quote

figure out what SYS_ERROR is supposed to do on a deeply embedded system

Thanks for the reply. I am looking at the pins with a digital scope. I can try your test, but I see no transitions in the pin signals. Consequently, I ended up making my own library "SW_SPI.h" and it seems to be working fine, but I was hoping to take more advantage of your fast digital pin functions. This may be OK anyway, as I am sending my SPI over an RS485 hardware protocol, over 10 meters, so I may not be able to go full-bore speed anyway. Thanks,