Information architect Adam Greenfield recently wrote an essay about "ubiquitous computing" and what system designers need to think about in order for such technology to be seen as useful and acceptable rather than oppressive and unwanted. As I did with Dan Bricklin's essay on designing software to last for centuries, I found that the general rules Greenfield articulates have broader application beyond computer code. His five principles for "designing useful, humane instantiations of ubicomp [ubiquitous computing]" can be read more broadly as good rules for designing humane systems of all sorts -- technologies and techniques that integrate seamlessly into human life rather than running roughshod over it:

Default to harmlessness. Systems fail; when they do, they can fail gracefully or catastrophically. A failing system should not itself make problems worse.

Be self-disclosing. Systems should be transparent. The way they achieve their goals should be clear, as should their inputs and results.

Be conservative of face. Don't humiliate the user. Don't make her or him feel stupid. Don't draw unnecessary attention to the user in public.

Be conservative of time. Don't make tasks more complicated than they need to be. Don't make people waste their time.

Be deniable. Systems should allow users to opt out, whether to use another system or to refuse participation entirely.

It will come as little surprise that voting systems, particularly electronic voting systems, seem to me to be prime examples of violations of nearly all of these principles (it's unclear to what degree electronic voting systems violate rule #3, but I wouldn't put it past them). I'm sure we could easily come up with a variety of other systems (traveler security, for example) that break these rules, too. I'd hazard a guess that these rules are more likely to be broken than observed. Still, they make for a good set of guidelines for people who are trying to fix things.