I had a frustrating morning with Ubuntu Dapper and my new printer. I get up early, and "morning" in this case means that "lp /etc/hosts" didn't work until after 11:45 AM.

First, "what he said" about bad interfaces, bad documentation, and how not to tweak an upstream software package. This was a significant part of how I wasted my morning. I encountered a web page like the one in his screenshot, I read the text "These web pages allow you to monitor your printers and jobs as well as perform system administration tasks," I hit the Administration tab, and it was all downhill from there.

Second, O'Reilly didn't cover themselves with glory either, unfortunately. I was using their Ubuntu Hacks book, and as I understand it in retrospect, much of my confusion was caused by following their instructions to rename the printer device from the autogenerated name like "/dev/usblp0" to something permanent like "/dev/usblp-epson-c88+" (so that one's printer configuration won't get mangled when you plug in a new USB device and the numbers change). A sound idea in principle, perhaps, but apparently untested in practice (?!), because I find that CUPS can't handle it (and thus the following Ubuntu Hacks instructions end up failing mysteriously). It's also implemented as a disturbingly flaky recipe ("chmod 777" on an "/etc/" file without explanation, and setting "NAME{all_partitions}==..." even for printers which, AFAIK, don't have partitions). RELEVANCE TO ADVOGATO TRUST METRIC IDEAS ALERT: in a random net HOWTO, that recipe flakiness would've caused me to search for another source of guidance; I could have saved myself a lot of grief had I applied the same skepticism to the book. :-(

Third, besides what he said (pipitas' gentle chiding like "Dapper maintainers have crippled the CUPS 1.2 web interface"), my own tentative observation is that the Debian or gnome maintainers' replacement for the CUPS 1.2 web interface that they crippled seems to be a distinctly less mature piece of software. In particular, once I did finally get to the CUPS 1.2 web interface, I found that CUPS gives explanatory text for what fields like "Name", "Description", and "Location" mean. The Debian/gnome GUI dialog provides the same fields, but only the single-word names with no documentation (no explanatory text alongside the entry boxes as in CUPS, and no Help or About button on the dialog boxes either). This, I can report from this morning's experience, is a very unhelpful UI design decision: I ended up telling the system that the printer's "Location" was "/dev/usblp-epson-c88+", i.e., the filesystem location that I had created as per the misinstructions published by O'Reilly. After all, by the time I had double-checked all the earlier dialog boxes that the GUI system had presented (no documentation there either, though admittedly they were remarkably shiny, with nice pictures of printers and all), there seemed to be no other place for the "/dev/usblp-epson-c88+" information to go. (As above, it seems that the intended route for that information was not to go but to come: it is to be autodetected by some means which works for "/dev/usblp[0-9]" but not for the "/dev/usblp-epson-c88+" name I had set up as per the O'Reilly instructions.) It's a ton of work to get internal documentation right, and it's only human not to have it all right --- but it might also be wise to count the cost of replacing an NIH system which seems to do a good job of it with a homegrown system which seems to do an only-human job of it.

Now, I haven't done that much to write keeping-simple-stuff-simple text, so I'm not sure I have any constructive suggestions for O'Reilly, other than that they might want to do something, at least should it happen that their revenues suffer from folks mysteriously becoming hypercautious about using their books. But for the programmers I'd like to suggest, besides the usual good idea of documenting your interface, also the good idea of not hiding underlying documentation. (And, as per pipitas yet again, not radically changing the functionality of the upstream software while leaving its documentation (and implicit documentation, like interface layout) describing functionality which no longer exists and which inexplicably breaks when the user attempts to use it.)

Finally and incidentally, I would also observe that for a decade or so I have joined in the chorus of mockery of MSWindows for the misdesign theme where an administrator must hike over to the machine being administered. Now designers along the Debian/gnome/whatever chain of responsibility seem to think this is a feature, not a bug. That is, in what seems to be slavish imitation of MSWindows design, they have arranged things so that the only well-supported way of doing printer administration is from a GUI tool which can only be invoked from the gnome window manager running on that machine. (Or, possibly, the GUI tool could be invoked from the command line, and thus remoted through X? But I don't know how one would do that, because my usual trick for finding command line names for the gnome tools by looking at their Help/About text doesn't work: as mentioned above, the shiny printer admin tool seems to be unmarred by redundant Help/About buttons.)

I participated in the 2004 ICFP contest (in the Lightning Division only, as the 1-man "team" Busman Holiday Club), and enjoyed it, and did tolerably well, which might give me a reason to rationalize a fondness for programming contests :-) so take my opinions with a grain of salt.

While of course it is true that "fast programming isn't the same as good programming" as Vanier complains first, I doubt that that is all that big an issue. Playing Chess or Go under tournament conditions is not the same as really deeply understanding the game, but in practice there is a pretty strong correlation between the two.

To my mind, a bigger issue is that small programs are not the same as big programs, and it's hard to address or even express big-program challenges in a short time. Much of what we do in programming follows from big-program difficulties which vanish in small programs. I think several things that Vanier complains about follow from this. For example, Vanier sounds quite justified in criticizing a programmer who refused to see the problems of coupling the UI with the program logic; his programming contest skills are unlikely to save him in large programming projects. Like Vanier, I find small problems to be unsatisfyingly far away from the ordinary challenges of programming, perhaps like having a contest in Chess pawn endgames instead of Chess. Still, small-problem skills are important, not so easy to learn, and not universally distributed even among graduates of good CS programs, so a contest based on them doesn't seem pointless. (Also... It seems a little odd that Vanier says he likes obfuscated C programs, for the programs are required to be truly tiny, and I haven't noticed programming-in-the-large skills on display in any that I looked at. And, for that matter, Vanier even says he likes the ICFP contest too. De gustibus et de programmibus non disputandum est. :-)

Beyond the lack of programming-in-the-large, what I find most unsatisfying about programming contests is that they are naturally organized as surprise challenges. A surprise challenge format solves various practical problems, but it naturally ends up favoring contestants who happened to be particularly prepared for that challenge. It's a bit like having a "[generic] sports contest" where you don't reveal the rules until the day of the contest. If you think you are testing for pure athletic ability without people having drilled specialized skills, I think you are probably wrong. Your choice of ultimate-frisbee-like rules, badminton-like rules, rowing-like rules, or whatever will in practice tend to favor people who've done similar things in the past. For example, in the ICFP 2004 contest I likely got a significant benefit from having written several stripped-down assemblers when I worked as a programmer in high school; if instead I had taken the C++ job at Mentor instead of my FORTHalike/assembler real-time-control job, then in the ICFP contest I'd've had to make up more of the architecture of my software on the fly, and while it's not a terribly hard architectural problem, in the Lightning Division saving time is worth a lot. The winning Dunkosmiloolump entry also seemed to suggest a strong influence of the team having constructed similar solutions before. This isn't intended as a strong criticism of such contests or contestants --- I am still pretty pleased with myself that I did well, and I was still awed by the Dunkosmiloolump entry and thus the programmers behind it --- but it is a reason not to weight success in such a contest too heavily. It's probably important to be a reasonably good programmer-in-the-small to do well, but I don't think being the best such programmer in the contest is anywhere near a sufficient condition to win. 
Good luck coming in first in a contest like ICFP 2004 if your qualification is that you are stunningly good at programming --- in the field of numerical solutions to PDEs, or wireless telephony voice compression, or graphics rendering engines, or some other field where you have never happened to work on programs close to the ones required. Programming demigod (and DFW neighbor and fellow short person) John Carmack would likely stomp me harder in a problem soluble with preprocessed quadtrees than in a problem soluble with Metropolis Monte Carlo.

Finally, Vanier writes near his conclusion "Therefore, my advice to programmers who want to improve their skill: skip the programming competitions and start a free software/open source project of your own. If you do that, you have a chance of becoming a truly good programmer, not just a glorified code-grinder." Perhaps Vanier and I agree, then: I would not advise spending a lot of effort on programming contests either! My disagreements are with other points in his article, and with his original thesis that "programming competitions are (for the most part) a bad thing." Success in cooking contests might not be a particularly good measure of one's skills for the day to day challenges of feeding many people a variety of tasty food on a limited budget of time and money, but does that make cooking contests for the most part a bad thing?

gobry: Yes, I agree with various criticisms of Common Lisp, including your point about lack of features in standard libraries that are taken for granted in more modern languages. (Other Lispers, and ex-Lispers, agree too. Note, e.g., that Guy Steele was a towering figure in CL standardization before he did that Java thing.) And, if someone had written an article which I basically liked and which bounced that high on Reddit praising CL without mentioning the things that worry me, I might well have written a careful-about-skipping-over-those-gotchas message in response.

I can't think of any convenient links of me cautioning
someone else that way about CL advocacy in particular, but if you really really
search, I'm fairly sure that more than once in a web-searchable place I have said in effect
"whoa, there are good things about C++ [and STL]" when people got too enthusiastic about CL.

I have also been guilty of spontaneously criticizing CL myself from time to time. For example, see the micro-user's-manual of the library I just released, on the hack that one uses in SBCL to set print properties which apply only at debug time, because ANSI provides no way to set them so that they apply only to an appropriate stream like *DEBUG-IO*.

Or for any CL users sufficiently hard-core to be
messing around with the "genesis" phase of SBCL, try
this, venting about^W^Wdocumenting the wretched hack
that I wrote to work around
the fact that we can't portably allocate a single vector
of approximately the same size as the machine's
address space:

;;; KLUDGE: This implementation seems portable enough for our
;;; purposes, since realistically every modern implementation is
;;; likely to support vectors of at least 2^16 elements. But if you're
;;; masochistic enough to read this far into the contortions imposed
;;; on us by ANSI and the Lisp community, for daring to use the
;;; abstraction of a large linearly addressable memory space, which is
;;; after all only directly supported by the underlying hardware of at
;;; least 99% of the general-purpose computers in use today, then you
;;; may be titillated to hear that in fact this code isn't really
;;; portable, because as of sbcl-0.7.4 we need somewhat more than
;;; 16Mbytes to represent a core, and ANSI only guarantees that
;;; ARRAY-DIMENSION-LIMIT is not less than 1024. -- WHN 2002-06-13

I have even been guilty of criticizing the CL code
that I write or maintain myself from time to time.
Check out a copy of SBCL and grep for "KLUDGE" and
"FIXME"; even today, a fair proportion of those are me.

(So while you may suspect I was indulging in CL bigotry or even sneaky anti-Python-the-scripting-language advocacy, I think I was basically being in character as a cranky critic of language and implementation gotchas in general.)

In case anyone is interested in the somewhat obscure problem "how could I generalize the concept of interning and sharing unique DAGs so that I can allow the interned graphs to contain cycles?" or in any other formulation of the graph minimization problem described in
Guido Tack's online notes, feel free to check out
my latest attempt to solve the problem as a Common Lisp prototype program, with an unreviewed preprint describing the basic algorithm.

(The problem comes up in various fields. One example is detecting the equivalence of arbitrarily complex cyclic types in programming languages. Nonincremental minimization algorithms have been known for a long time; I'm trying to do it incrementally.)
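To make the starting point concrete, here is a minimal Python sketch of the classic acyclic case, interning ("hash-consing") DAG nodes bottom-up so that structurally equal subgraphs share one object. All names here (Node, intern_node, _table) are hypothetical illustrations of mine, not taken from the prototype or the preprint; the whole point of the problem above is that this simple scheme breaks down once cycles are allowed, because there is no bottom-up order in which to intern the children first.

```python
class Node:
    """An interned DAG node: a label plus a tuple of interned children."""
    __slots__ = ("label", "children")

    def __init__(self, label, children):
        self.label = label
        self.children = children  # tuple of already-interned Nodes

_table = {}

def intern_node(label, children):
    """Return the unique interned node for (label, children).

    Because children are interned before their parents (bottom-up),
    object identity of each child is a sound equality key, so the
    sharing check is a single O(1) hash lookup.  This is exactly what
    stops working when the graph may contain cycles.
    """
    key = (label, tuple(id(c) for c in children))
    node = _table.get(key)
    if node is None:
        node = Node(label, tuple(children))
        _table[key] = node
    return node

# Structurally equal subgraphs come back as the same object:
a = intern_node("leaf", ())
b = intern_node("leaf", ())
assert a is b
```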

I have some comments on the recent http://www.defmacro.org/ramblings/fp.html
article on functional programming, and I decided I might
as well post them here and send the author a pointer
rather than just putting them in an email.

(You are approaching a skirmish in the language wars! Hit *BACK* now!)

I like functional programming, and I liked the
http://www.defmacro.org/ramblings/fp.html article, but I think he
probably should present more disadvantages up front.

One of the reasons that I prefer Common Lisp to Haskell (and why
others prefer ML variants to Haskell) is that there are things which
are clumsy or slow to express in purely functional form. Okasaki's
_Purely Functional Data Structures_ is a marvellous book well worth
reading for other good reasons, but one smaller benefit of reading it
is that some such things come through there.

One annoying performance issue for me is hash tables; in an imperative language, it's straightforward to use the idiom of a hash index into a modifiable collection to get O(1) lookup. Good luck doing this in FP! And, if you tell people that they should rewrite their BDD packages for reliability and clarity and shareability in Haskell instead of C++, and tolerate the extra O(log2 1E7) performance hit of doing all their cache lookups in nice purely functional search trees instead of ugly imperative hash tables, you will encounter some sales resistance --- though perhaps not as much, in the eventual runout, as if you conceal this performance issue from them and let them discover it for themselves. :-|
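A tiny Python sketch (hypothetical, not from any BDD package) of the contrast: the imperative version mutates one hash table in place, O(1) per step, while the "pure-style" version builds a fresh map on every update. Real FP languages would use a structure-sharing balanced tree with O(log n) updates rather than a full O(n) copy, but the shape of the cost difference is the point.

```python
def fib_imperative(n):
    """Memoized fib with the usual idiom: mutate one hash table."""
    cache = {0: 0, 1: 1}
    for i in range(2, n + 1):
        cache[i] = cache[i - 1] + cache[i - 2]  # in-place O(1) update
    return cache[n]

def fib_pure_style(n):
    """Same computation, but never mutating the map.

    Each step conses up a whole new dict.  A persistent tree map would
    share structure and pay only O(log n) per update, but either way
    the update is no longer the O(1) hash-table poke above.
    """
    cache = {0: 0, 1: 1}
    for i in range(2, n + 1):
        cache = {**cache, i: cache[i - 1] + cache[i - 2]}
    return cache[n]
```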

The issue of making small incremental modifications to large indexed or cross-linked data structures isn't just an algorithmic performance issue; it can also make programs difficult to read and think about. I
have done a lot of work on programs to play the game of Go, where on
each move a player places a stone on one of the 361 points on a board.
At various times I have written complete programs in C++, CL, and
Haskell. In the imperative languages there is some programming hassle
involved in making the changes undoable (so that
you can try a variation,
then backtrack to the starting point to try another),
whereas you get undoability for
free in Haskell; but in Haskell
I found considerably more hassle in
trying to express the small changes without
doing a deep copy of potentially very
large cross-linked data structures.
It seems to me that this is a
fundamental issue, not just a symptom of
my naivete about functional programming.
It might not be a big issue for an Erlang telephone switch,
because I expect
most of the state in such a switch is tied to an
individual call, interacts weakly if at all with the
state of other calls, and goes
*poof* when the call ends. But if you tried to write, say,
a MMORPG server
in a purely functional language, I would expect that the ongoing small
modifications to the
complicated shared global state of the world
would be a source of programming pain.
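The imperative undo idiom from the Go example can be sketched in a few lines of Python: record the minimal delta for each move so it can be reversed, instead of deep-copying the board. Capture logic and everything else interesting about Go is omitted, and the names here are illustrative, not from any of my actual Go programs; the "programming hassle" in a real program is keeping these deltas complete as the board state grows richer.

```python
EMPTY, BLACK, WHITE = 0, 1, 2

class Board:
    """A bare-bones Go board supporting undoable stone placement."""

    def __init__(self, size=19):
        self.size = size
        self.points = [EMPTY] * (size * size)
        self.undo_stack = []  # list of (index, previous_value) deltas

    def play(self, index, color):
        # Record only the delta needed to reverse this change.
        self.undo_stack.append((index, self.points[index]))
        self.points[index] = color

    def undo(self):
        # Backtrack one move without ever copying the whole board.
        index, old = self.undo_stack.pop()
        self.points[index] = old

# Try a variation, then backtrack:
b = Board()
b.play(0, BLACK)
b.play(1, WHITE)
b.undo()
assert b.points[1] == EMPTY and b.points[0] == BLACK
```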

Also, purely functional languages seem to be unusable without laziness
(to create cycles) and the purely functional languages people have not
convinced me, as a casual user, that their handling of laziness is
completely ready for prime time in large hairy systems. The difficulty
of debugging lazy code is a minor issue; the difficulty of bounding
the performance (especially heap usage) of complicated lazy code is
worrisome, a potential showstopper. I would be very nervous about
planning to develop a large Haskell system for something complicated
and not similar to existing FP software (perhaps the MMORPG server
example) without doing some serious investigation into that. I think
it might be easy to end up with a server which failed from heap
exhaustion when lazy-variable placeholders held pointers into futures
which "common sense" shows could never happen, but which aren't
manifestly unreachable and so therefore aren't garbage collected. Both
my intuition and my superficial reading of the mailing lists suggest
that such bugs are not hard to create, and can be hard to test for and
hard to debug. The existence of various large FP systems successfully
"used in anger" is reassuring, but only incompletely so: it's not hard
for me to come up with a story why from the ability of version control
systems and telephone switches and compilers to manage this problem it
does not follow that it's manageable for all systems.
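Python is strict, so it can only offer an analogy to that failure mode, but the shape of the bug can be shown: an unforced thunk closes over a large structure and pins it in the heap, even though "common sense" says the data is done with. All names here are hypothetical illustrations of mine.

```python
import gc
import weakref

class Blob(list):
    """A list subclass, so that we can take a weak reference to it."""

def make_lazy_sum(data):
    # The "thunk": nothing is computed until the closure is called,
    # and until then the closure keeps all of `data` alive.
    return lambda: sum(data)

data = Blob(range(1000))
tracker = weakref.ref(data)
thunk = make_lazy_sum(data)

del data
gc.collect()
# The big structure is not garbage: the unforced thunk still pins it.
assert tracker() is not None

assert thunk() == 499500  # forcing the thunk works fine...

del thunk
gc.collect()
# ...and only once the thunk itself is dropped can the data be freed.
assert tracker() is None
```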

These days I am at least as much an amazed spectator as active participant in free software development. Alert readers may have noticed that I haven't posted to advogato in, um, two years. But, I did just finish
an algorithms paper
(with some early guidance from
pfdietz and with helpful feedback from
crhodes).

Now that I have finally gotten around to getting a website (to have someplace to point
Citeseer to, donchaknow) I might find myself tempted to write something there. Perhaps something totally off-topic for advogato; except I did that already, huh? So perhaps something about
lizards again but this time with pictures? I can brood about it over Thanksgiving, anyway. Then, if it happens, there'll probably be some pointer to it here.

When I come home quietly at night, I usually see some unobtrusive neighbors -- several lizards who hang out over my door. I theorize that they've learned that insects have their navigational systems jammed by the lights and crash land on the walls, and jack-lighted bugs taste just as good as bugs hunted down in more sporting ways.

The lizards are weird things which look like they were bred to live in dim caves, with pale uncamouflaged bodies and outsized dark staring eyes. They're never, ever, out in the daytime, and I wonder whether their seeking out the artificial light is learned behavior which overcomes their nocturnal instincts, or what.

So many questions, actually. What are the feng shui implications of night artificial light door lizards?

Most everything seems to have become complicated. Compiler patches. Debugging problems. Even birthday shopping. (Pondering for 20 minutes or so, I finally decided I knew just the thing, drove to just the store, and found out that it's backordered indefinitely.)

Hopefully this complexity is just a statistical aberration instead of the foothills of a long-term rising trend. Or failing that, I hope I can find a trick to get smarter or sleep less or something. Or, as long as I'm wishing, both would be nice. :-)

A new apartment! Hopefully this time without neighbors spraypainting graffiti on the structures they aren't burning down...

In the medium term, I should now have fewer excuses for not getting s/w things done. Just now, however, I haven't quite got the two steps forward bit yet and in the meantime have taken a step back, as the ongoing confusion of moving has promoted my usual dignified absentmindedness to fullblown ditzy scatterbrainedness.

Meanwhile people keep sending patches for SBCL. As long as I don't manage to mess up not only my own mail spool but also the SourceForge list archives, I should have a good chance of merging them presently. Meanwhile, thanks, guys.

In unrelated news, not only have several #lisp IRC denizens decided to try Go, I've found a strong (master) Chess player IRL who is actively interested in learning, which is an interesting experience for me. (Gosh, he learns tactics fast!)

It looks like my next computer will be a $350 Athlon with 512 Mbytes of memory. I rather wish that instead for $1000 or so I could get either a 64-bit single CPU or -- what would be rather niftier IMHO -- a 64-way hypercube with 4 Mbytes or so of memory at each node. Not that I'd have the s/w for either, of course. Maybe if I wait 'til Christmas?