Autogenerated changelogs are probably more than adequate
for projects with one or two developers, but for larger
projects (say, the Gimp), the extra context that a
hand-written changelog provides can be very useful.

Mainly because hand-written changelogs tend to document
what the author intended to do, whereas CVS logs document
what actually changed. Especially useful in the "did
adrian forget to commit all the new files again?" scenario.

Anyway, it looks like a lot of people have expressed
interest in how we do XML-RPC over (under?) SSL, along
with a lot of speculation. Of course, the answers
are in the source code ;->

Basically, there is an https class hacked into
the python package, based on the M2Crypto openssl
bindings (in the openssl-python package).
Then there are some mods to python-xmlrpc to allow
the use of https and to do CA checking.
This is what the client apps use to do SSL.
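
The M2Crypto-based code in the source is the real answer,
but here is a rough sketch of the same idea using today's
Python standard library (the URL, CA bundle path, and method
name are all made up for illustration):

    import ssl
    import xmlrpc.client

    # The "CA checking" part: verify the server certificate
    # against a CA bundle (path is hypothetical).
    ctx = ssl.create_default_context(cafile="/etc/ssl/certs/ca-bundle.crt")

    # ServerProxy then speaks XML-RPC over HTTPS using that context.
    server = xmlrpc.client.ServerProxy("https://rpc.example.com/XMLRPC",
                                       context=ctx)
    print(server.ping())  # hypothetical method exposed by the server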

As far as the question of "who needs more than 4 gigs of ram"
goes, the answer is apparently "lots of people".

It was a pretty common request while I was working
in tech support. As often as not, it was people looking
for >4 gig support per process, and at least with 2.2,
x86, and linux, that wasn't really an option. But it
wasn't uncommon for someone to want/need to be able
to malloc half a dozen gigs of ram. And not just
malloc it, but use it as well.
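
To illustrate the malloc-it versus use-it distinction, a
minimal sketch (modern Python, arbitrary sizes, and it assumes
a 64-bit machine, which was exactly what was missing back then).
Roughly speaking, an anonymous mapping only reserves address
space; the kernel doesn't commit real pages until you write
to them:

    import mmap

    SIZE = 6 * 1024**3                # ~6 GiB, the size in question
    buf = mmap.mmap(-1, SIZE)         # "malloc": address space reserved,
                                      # but few real pages committed yet
    for off in range(0, SIZE, 4096):  # "use": dirty one byte per page so
        buf[off] = 1                  # the kernel must actually back it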

But very often, people just needed more memory available
because of the huge number of processes running. Web servers
running on big hardware with long-lived cgi/asp/*let
processes were a common theme. I.e., a few thousand perl
processes taking a couple of megs each on a single
machine. Yes, people really do that.
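
The back-of-the-envelope arithmetic (with assumed numbers,
since "a few thousand" and "a couple of megs" are deliberately
vague):

    procs = 3000           # "a few thousand perl processes"
    mb_each = 2            # "a couple of megs each"
    print(procs * mb_each / 1024, "GB")  # ~5.9 GB, already past 4 gigs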

Or perhaps you just need to serve up several thousand
hits a second with a single machine. TUX and
32 gigs of ram to the rescue... Okay, so I don't know
anyone actually doing that in production. But one thing I
learned in tech support is that if there is a limit on
something in linux, someone will run into it.

All his artwork is made from natural materials, in natural
settings, and is usually extremely temporary. The books are
collections of photographs of the pieces. Very, very cool.

I had seen a couple of pics of his work in a few of
the earthworks books I have, but didn't realize everything
he does is that good. I am impressed.

The tiny little pictures on the Smithsonian site do not do
the images justice, but it's about the best URL I could
find. Anyone have a better link, preferably with high-res
pictures and maybe the inscriptions/explanations?

Didn't do anything related to free/open software today.
Though I do find it humorous that one of the other
companies in the same field as my employer seems to be
making the same mistakes we did.

Updated my Linux
System Tuning page a bit. Added some info about
increasing thread limits, shm segment counts and sizes,
plus a bit about benchmarks and system monitoring utils.
That page is almost starting to become useful.
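
As a taste of the sort of knobs the page covers, a sketch
that pokes the classic /proc/sys entries directly (values
are arbitrary examples, not recommendations, and you need
root):

    # Hypothetical helper; same effect as echo-ing into /proc by hand.
    def set_knob(path, value):
        with open(path, "w") as f:
            f.write(str(value) + "\n")

    set_knob("/proc/sys/kernel/threads-max", 4096)      # system-wide thread limit
    set_knob("/proc/sys/kernel/shmmax", 256 * 1024**2)  # max bytes per shm segment
    set_knob("/proc/sys/kernel/shmall", 2 * 1024**2)    # total shm allowed, in pages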

Go buy King Crimson CDs. Go to The
DGM Diaries. An interesting perspective on the
whole "stealing somebody else's music is or is not part of
my human rights" debate.

However, I don't think pointing to King Crimson's "Club"
releases as an alternative business model for musicians
is particularly realistic. King Crimson have an extremely
loyal "cult" following. Not everybody can have half a dozen
top-notch, world-class virtuosos and 30 years of recording
and touring behind them before they can expect to survive.

Today's CD recommendation: "Audio" by Blue Man Group. Nice
CD. Lots of invented instruments, including some detail on
what the instruments look like and how they are constructed.
See About the music for some online info about the CD.

The Scott McCloud books are extremely good. Just recently
purchased and read "Understanding Comics" again. The first
time I read it, I just happened to pick up a friend's copy,
and proceeded to skip two classes to finish it in one
sitting. Good book. "Reinventing Comics" seems to be equally
good. And this from someone who does not read comics or
anime.

So, one thing I learned after doing tech support
for a while is that not many people know how
to tune Linux-based servers. The info out there
on the net is hard to find (though there are
some excellent resources once you find them...).
And people tend to resort to repeating random
bits of info that may or may not be useful.

So, I finally got around to writing up some
of the info I've picked up over the years. Much
of it has come from picking the brains of
ThoseThatMakeThingsFast (hey zab).

It's not complete, there are probably errors in it, it
has a painful lack of real numbers to back up the claims,
and some of it is soon to be outdated. But then, what's new?

And note, most of this is geared towards getting good
benchmark numbers, or helping with the occasional
ExtremeNetworking case you encounter in the real world. A
lot of it isn't really geared for your run-of-the-mill
overpowered, under-bandwidthed webserver serving up
a few thousand measly hits a day.