The current version uses the existence of /etc/xdg to determine that we
are on a Freedesktop.org system; under FreeBSD (and possibly other
systems) this is incorrect. The system XDG config dir is located at /usr/local/etc/xdg, since /etc is reserved for the base system.
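A sketch of a more portable probe (the candidate list here is my assumption; a fuller fix would honour $XDG_CONFIG_DIRS when it is set):

```shell
# Sketch: try each candidate system XDG config dir instead of
# hardcoding /etc/xdg. The candidate list is an assumption; a real
# fix should respect $XDG_CONFIG_DIRS when the user has set it.
found=
for dir in /etc/xdg /usr/local/etc/xdg; do
    if [ -d "$dir" ]; then
        found="$dir"
        break
    fi
done
echo "system XDG config dir: ${found:-not found}"
```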

The other point, that filenames can contain special characters, I was aware of, but I tend to assume that 'my' files won't (unless I know that they do). If I were working on some arbitrary set of files I would have done the job in Perl. (I was going to say that find -print0 and {sort,uniq} -z would work, but apparently my uniq doesn't have a -z option. Weird.) Thanks for the correction, though: it's important to be aware of in general.

A more important bug I was also ignoring is that the length of the list of files may exceed ARG_MAX. Since this is one of Ovid's test directories, I presume that's not actually that unlikely. :)
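For the record, a NUL-delimited pipeline handles both problems at once. This is a sketch assuming GNU coreutils: sort -zu stands in for the uniq -z my system lacks, and xargs -0 batches the argument list so ARG_MAX is never exceeded.

```shell
# Sketch, assuming GNU find/sort/xargs: NUL delimiters survive filenames
# containing spaces and newlines; sort -zu replaces the missing uniq -z;
# xargs -0 splits the file list into batches that fit under ARG_MAX.
tmp=$(mktemp -d)
touch "$tmp/plain" "$tmp/with space" "$tmp/with
newline"
n=$(find "$tmp" -type f -print0 | sort -zu | xargs -0 sh -c 'echo "$#"' sh)
echo "found $n files"
rm -rf "$tmp"
```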

The most important advantages of git over other systems I've used (CVS, svn, svk) are:

it's fast: I can download the complete history of perl in a few minutes and check out any commit in seconds;

it doesn't (ever, in my experience) lose its mind, which all of the others have done for me on several occasions.

I (now) like and use some of the other features, like cheap branches, easy rebasing, and convenient publishing on GitHub, but if I were looking for a reason to persuade someone to switch, those two would be my choice.
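As an illustration of the 'cheap branches' point, a toy repo (the repo location and branch name are arbitrary examples): creating a branch is just writing a small ref file, so it is instant however large the history.

```shell
# Toy demonstration: branch creation in git is a constant-time ref
# update, not a copy of the working tree. Names here are arbitrary.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git branch experiment        # instant, regardless of repository size
git checkout -q experiment
git rev-parse --abbrev-ref HEAD
```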

It is a real error, though, not a bug in the tests. If someone uses IPC::Run in a non-C locale it will fail to work correctly: if anything, tests should be added to catch the bug.

Setting the locale to C somewhere appropriate in IPC::Run itself might be a workaround, but I've no idea how the stringification of $! reacts to changing the locale. I would expect it to be quite system-specific: here (FreeBSD) I can't get perl to give me localized error messages at all (I think the translations may simply not exist, though).

Unfortunately you can't rely on $! still having the right value by the time the exception is caught. Really all those croaks in _read &c. should be throwing proper exception objects which catch the (numeric) value of $!. I would want to switch the whole thing to use autodie, but that's quite a major change.

Still, I don't think that quite solves the problem: if you assert a
fact and your query fails, you probably want to retract that fact.

This feels to me like you want an 'alternative futures' model of
time: that is, when you backtrack, you not only go back to a previous
'time' but also arrange things so that when you go 'forward' again you
are moving forward on a different timeline that diverges from the first
at the point the backtrack went back to. If you change something and
don't backtrack, subsequent computations stay on the same timeline so
they keep any changes that were made.

Of course, this means that you end up storing every state that has
been present in every hypothetical timeline your program has visited,
which may be a problem. If there's no way to retrieve values from
'other' timelines, you could garbage-collect them at the point they
become inaccessible.

I don't think the analogy with pure and applied physicists is correct. Working programmers aren't applied physicists: they're builders, or at most architects. Programming isn't a science, it's a trade, and I think what's missing here is some sort of degree in Programming or Software Engineering, rather than Computer Science (or another, more vocational qualification, depending on the educational system involved: I'm thinking of the sort of course that would once have been taught at English polytechnics). Ideally the top level of this would lead to some sort of professional certification equivalent to that in the other branches of engineering (no doubt with the attendant requirement for vast amounts of tedious paperwork, but hey, that is actually the only way to make things work reliably in the end).

The McProgrammers you mention are the equivalent of ordinary builders who know how to build a wall, or maybe a whole house, but don't have the higher-level knowledge of physics and (more importantly) engineering to design a large and complicated structure. There would be little point in such a person trying to read a paper written by a physicist (pure or applied) and apply it to their day job: the research needs to be filtered through several layers of 'Engineer' before it can be useful to them.

I suspect the root of the problem here is simply that CS as a discipline is so very young. Most of the physics used in (say) construction is several hundred years old, so we've had time to find out which bits are useful and how they apply to practical problems. In CS we keep trying to take pure research done approximately last week and use it in working programs right now.