It started as kite.resnet.cornell.edu, a 486 under the desk in my dorm
room. Early on, it bounced around the DNS -- kite.ithaca.ny.us,
kite.ml.org,
kite.preferred.com
-- before landing on kite.kitenet.net. The hardware has
changed too: from a succession of desktop machines, it eventually turned
into a 2u rack-mount server in the CCCP co-op. And then it went virtual,
and international, spending a brief time in Amsterdam, before relocating to
England and the kvm-hosting co-op.

Through all this change, and no few reinstalls from scratch, it's had a
single distinct personality. This is a multi-user unix system, of the old
school, carefully (and not-so-carefully) configured and administered to
perform a grab-bag of functions. Whatever the users need.

I read the olduse.net hacknews newsgroup, and I
see, in their descriptions of their server in 1984, the prototype of Kite
and all its ilk.

It's consistently had a small group of users, a small subset of my family
and friends. Not quite big enough to really turn into a community, and we
wall and talk less than we once did.

[Intentionally partially broken, being able to read the cgi source code is
half the fun.]

Kite was an early server on the WWW, and
garnered mention in books and
print articles. Not because it did anything important, but because there
were few enough interesting web sites that it slightly stood out.

Many times over these 20 years I've wondered what will be the end of Kite's
story. It seemed like I would either keep running it indefinitely, or
perhaps lose interest. (Or funding -- it's eaten a lot of cash over the
years, especially before the current days of $5/month VPS hosting.) But I
failed to anticipate what seems to really be happening to it. Just as I didn't
fathom, when kite was perched under my desk, that it would one day be some
virtual abstract machine in an unknown computer in another country.

Now it seems that what will happen to Kite is that most of the important
parts of it will split off into a constellation of specialized servers. The
website, including the user sites, has mostly moved to
branchable.com. The DNS server, git server
and other crucial stuff is moving to various VPS instances and containers.
(The exhibit above is just one more automatically deployed, soulless
container..) A large part of Kite has always been about me playing with
bleeding-edge stuff and installing random new toys; that has moved to a
throwaway personal server at cloudatcost.com which might be gone tomorrow
(or might keep running for free for years).

What it seems will be left is a shell box, with IMAP access to a mail
server, and a web server for legacy /~user/ sites, and a few tools that
my users need (including that pine program some of them are still stuck
on.)

Will it be worth calling that Kite?

[ Kite users: This transition needs to be done by December when the current
host is scheduled to be retired. ]

Propellor ensures that a list of properties about a system
is satisfied. But requirements change, and so you might want to revert
a property that had been set up before.

For example, I had a system with a webserver container:

Docker.docked container hostname "webserver"

I don't want a web server there any more. Rather than having a separate
property to stop it, wouldn't it be nice to be able to say:

revert (Docker.docked container hostname "webserver")

I've now gotten this working. The really fun part is, some properties
support reversion, but other properties certainly do not. Maybe the code to
revert them is not worth writing, or maybe the property does something
that cannot be reverted.

For example, Docker.garbageCollected is a property that makes sure there
are no unused docker images wasting disk space. It can't be reverted.
Nor can my personal standardSystem Unstable property, which among other
things upgrades the system to unstable and sets up my home directory..

I found a way to make Propellor statically check if a property can be
reverted at compile time. So revert Docker.garbageCollected will fail
to type check!

The tricky part about implementing this is that the user configures
Propellor with a list of properties. But now there are two distinct
types of properties, revertable ones and non-revertable ones.
And Haskell does not support heterogeneous lists..

My solution to this is a typeclass and some syntactic sugar operators.
To build a list of properties, with individual elements that might be
revertable, and others not:
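Here's a minimal, self-contained sketch of the technique (the types and operators are simplified stand-ins, not Propellor's actual code):

```haskell
-- Simplified stand-in types; Propellor's real ones carry more info.
data Property = Property
    { propertyDesc :: String
    , propertySatisfy :: IO ()
    }

-- A revertable property pairs a property with its undo counterpart.
data RevertableProperty = RevertableProperty Property Property

-- revert only type checks on RevertableProperty, so reverting a
-- non-revertable property is a compile-time error.
revert :: RevertableProperty -> RevertableProperty
revert (RevertableProperty p1 p2) = RevertableProperty p2 p1

-- The typeclass unifies both kinds of property...
class IsProp p where
    toProp :: p -> Property

instance IsProp Property where
    toProp = id

instance IsProp RevertableProperty where
    toProp (RevertableProperty setup _) = setup

-- ...and these operators build a mixed list: & adds any property,
-- ! adds a revertable property with its reversion applied.
(&) :: IsProp p => [Property] -> p -> [Property]
ps & p = ps ++ [toProp p]

(!) :: [Property] -> RevertableProperty -> [Property]
ps ! p = ps & revert p
```

So one host's configuration can mix both kinds, along the lines of `[] & somePlainProperty ! someRevertableProperty`.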

Docker containers are set up using Properties too, just like regular
hosts, but their Properties are run inside the container.
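For example, a container definition might look something like this (a sketch only; the function and property names are illustrative and may not match the real Propellor.Property.Docker API):

```haskell
-- Hypothetical sketch of a container defined by Properties.
webserver :: Docker.Container
webserver = Docker.container "webserver" "debian"
    & Docker.publish "80:80"
    & Docker.volume "/var/www:/var/www"
    & Apt.serviceInstalledRunning "apache2"
```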

That means that, if I change the web server port above, Propellor will
notice the container config is out of date, and stop the container,
commit an image based on it, and quickly use that to bring up a new
container with the new configuration.

If I change the web server to, say, lighttpd, Propellor will run inside
the container, and notice that it needs to install lighttpd to satisfy
the new property, and so will update the container without needing to take
it down.

Adding all this behavior took only 253 lines of code, and none of it
impacts the core of Propellor at all; it's all in
Propellor.Property.Docker. (Well, I did need another hundred lines
to write a daemon that runs inside the container and reads commands
to run over a named pipe... Docker makes running ad-hoc commands inside a
container a PITA.)

So, I think that this vindicates the approach of making the configuration
of Propellor be a list of Properties, which can be constructed by
arbitrarily interesting Haskell code. I didn't design Propellor to support
containers, but it was easy to find a way to express them as shown above.

All puppet manages is running the image and a simple static command
inside it. All the complexities that puppet provides for configuring
servers cannot easily be brought to bear inside the container, and
a large reason for that is, I think, that its configuration file is just
not expressive enough.

Whups, I seem to have built a configuration management system this evening!

Propellor has similar goals to chef or puppet or ansible, but with an approach
much more like slaughter.
Except it's configured by writing Haskell code.

The name is because propellor ensures that a system is configured with the
desired PROPerties, and also because it kind of pulls system configuration
along after it. And you may not want to stand too close.

Disclaimer: I'm not really a sysadmin, except for on the scale of "diffuse
administration of every Debian machine on planet earth or nearby", and so I
don't really understand configuration management. (Well, I did write
debconf, which claims to be the "Debian Configuration Management system"..
But I didn't understand configuration management back then either.)

So, propellor makes some perhaps wacky choices. The least of these
is that it's built from a git repository
that any (theoretical) other users will fork and modify; a cron job can
re-make it from time to time and pull down configuration changes, or
something can be run to push changes.

A really simple configuration for a Tor bridge server
using propellor looks something like this:
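(A sketch only: the module and property names below are illustrative, and may differ from Propellor's real API.)

```haskell
-- Hypothetical config: one host, configured as a Tor bridge.
main :: IO ()
main = defaultMain
    [ host "mybridge.example.com"
        [ Apt.stdSourcesList Unstable
        , Apt.unattendedUpgrades
        , Apt.installed ["etckeeper", "ssh"]
        , Tor.isBridge
        ]
    ]
```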

Since it's just haskell code, it's "easy" to refactor out common
configurations for classes of servers, etc. Or perhaps integrate
reclass? I don't know. I'm happy
with just pure functions and type-safe refactorings of my configs, I
think.

Properties are also written in Haskell of course. This one ensures that
all the packages in a list are installed.
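A self-contained sketch of such a property (using a simplified stand-in for the Property type, not Propellor's actual Apt module code):

```haskell
import Control.Exception (SomeException, catch)
import Control.Monad (filterM, unless)
import System.Process (callProcess, readProcess)

type Package = String

-- Simplified stand-in for Propellor's Property type.
data Property = Property String (IO ())

-- Ensure every package in the list is installed, calling apt-get
-- only for the ones that are missing.
installed :: [Package] -> Property
installed ps = Property ("installed " ++ unwords ps) $ do
    missing <- filterM (fmap not . isInstalled) ps
    unless (null missing) $
        callProcess "apt-get" ("-y" : "install" : missing)

-- Ask dpkg whether a package is already installed.
isInstalled :: Package -> IO Bool
isInstalled p = query `catch` handler
  where
    query = do
        s <- readProcess "dpkg-query" ["-W", "-f=${Status}", p] ""
        return (s == "install ok installed")
    handler :: SomeException -> IO Bool
    handler _ = return False
```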

Here's part of a custom one that I use to check out a user's
home directory from git. Shows how to make a property require
that some other property is satisfied first, and how to test
if a property has already been satisfied.
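Sketched from memory (the combinators `check` and `requires`, the helper `userScriptProperty`, and the repository URL are all assumptions, not the actual code):

```haskell
-- Sketch only: helper names and the repo URL are hypothetical.
hasGitHomeDir :: UserName -> Property
hasGitHomeDir user = check needsClone clone
        `requires` Apt.installed ["git"]
  where
    home = "/home/" ++ user
    -- already satisfied once the home directory is a git checkout
    needsClone = not <$> doesDirectoryExist (home ++ "/.git")
    clone = userScriptProperty user
        [ "git clone git://git.example.com/home-" ++ user ++ " " ++ home ]
```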

I'm about 37% happy with the overall approach to listing properties
and combining properties into larger properties etc. I think that some
unifying insight is missing -- perhaps there should be a Property monad?
But as long as it yields a list of properties, any smarter thing should
be able to be built on top of this.

Propellor is 564 lines of code, including 25 or so built-in properties like
the examples above. It took around 4 hours to build.

I'm pretty sure it was easier to write it than it would have been to look
into ansible and salt and slaughter (and also liw's human-readable configuration
language whose name I've forgotten) in enough detail to pick one, and learn
how its configuration worked, and warp it into something close to how I
wanted this to work.

I think that's interesting.. It's partly about NIH and
I-want-everything-in-Haskell, but it's also about a complicated system
that is a lot of things to a lot of people --
of the kind I see when I look at ansible -- vs the tools and experience
to build just the thing you want without the cruft. Nice to have the latter!

Did you ever want to be able to tag your files, and use the tags to query
and select the files you want? For many sorts of files we use, this is clearly
better than being locked into a single hierarchical view of nested directories.

Semantic file systems
are a way to do things like this. But that's an entire separate filesystem,
often implemented as a FUSE layer. I have never wanted tagging enough to
take on that layer of complications.

A week ago I realized that there was a way to do this without using an
entirely separate filesystem. When we use git, we're used to having
different views of our files, called branches, that we switch between.

What, then, if a git branch were generated from tags and other metadata
that match a query, providing a custom view that meets the user's current
needs and can be incrementally refined?

To build this, I needed a nice way to store metadata in git, and since
git-annex happens to have a very nice way of
using git as a database, I naturally built it on top of
git-annex.

This means that the tags and other metadata are automatically synchronized
between different clones of the repository. Multiple users can be tagging
and setting other metadata at the same time, and their changes will merge
in a consistent way.

I have a long list of things to do to fully integrate
views into git-annex. However, they're already basically usable, and I'm
very pleased with how these dynamic views of the contents of a repository
are working out.

Today's release of git-annex includes views support, so give them a try!

Bitcoin is a piece of software which tries to implement a particular
SFnal future. One in which the world currency is de-centralized,
deflationary, all early bitcoin adopters own their own planetoids, and
all visitors are automatically charged for the air they breathe.

Thing is, the real world is more complicated than that. Assuming Bitcoin
did manage to become an important currency, countries would naturally try
to regulate it. In 30 years, by the time bitcoin mining has slowed right
down, the legal system will be fully caught up to the internet.

Bitcoin tries to make its code the law (as Lessig used to say), but the law
can certainly affect its code.

The law could, for example, require that bitcoin be changed to stop
increasing the difficulty of mining new blocks. Then bitcoin is
suddenly an inflationary currency. This would be a hard fork in the block
chain, but one enforced by financial regulators. Miners would be tracked
down and forced to comply. Some would perhaps go underground and run the
deflationary bitcoin network on Tor hidden services. Lots of possible ways
it could play out.

That's only one scenario, covering one of the many problems with Bitcoin
that make Charlie hate it. So it seems to me that Bitcoin should be a gold
mine for Science Fiction authors, if nothing else..

Sometimes it makes sense to ship a program to linux users in ready-to-run
form that will work no matter what distribution they are using. This is hard.

Often a commercial linux game will bundle up a few of the more problematic
libraries, and ship a dynamic executable that still depends on other system
libraries. These days they're building and shipping entire Debian
derivatives instead, to avoid needing to deal with that.

There have been a few efforts to provide so-called one click install
package systems that AFAIK, have not been widely used. I don't know if they
generally solved the problem.

More modern approaches seem to be things like docker, which move the
application bundle into a containerized environment. I have not looked at
these, but so far it does not seem to have spread widely enough to be a
practical choice if you're wanting to provide something that will work for
a majority of linux users.

So, I'm surprised that I seem to have managed to solve this problem
using nothing more than some ugly shell scripts.

For example, I unpacked the tarball into the Debian-Installer initramfs and
git-annex could run there. I can delete all of /usr and it keeps working!
All it needs is a basic sh, which even busybox provides.

Looks likely that the new armel standalone tarball of git-annex will soon
be working on embedded systems as odd as the Synology NAS, and it's already
been verified to work on Raspbian. (I'm curious if it would work on
Android, but that might be a stretch.)

Currently these tarballs are built for a specific architecture, but there's
no particular reason a single one couldn't combine binaries built for each
supported architecture.

technical details

The main trick is to ship a copy of ld-linux.so, as well as all the glibc
libraries and associated files, and of course every other library and file
the application needs.

Shipping ld-linux.so lets a shell script wrapper be made around each
binary, that runs ld-linux.so and passes it the library directories to
search. This way the binary can be run, bypassing the system's own dynamic
linker (which might not like it) and using the included glibc.

I have to set quite a lot of environment variables, to avoid using any
files from the system and instead use ones from my tarball. One important
one is GCONV_PATH. Note that LD_LIBRARY_PATH does not have to be set,
and this is nice because it allows running a few programs from the host
system, such as its web browser.

worse is better

Of course I'll take a proper distribution package anytime over this.

Still, it seems to work quite well, in all the horrible cases that require
it.

If you use git-annex,
please take a few minutes to answer my questions!

Since this blog post is too short, let me also announce a minor spinoff
project from git-annex that I have recently released.

git-repair is a complement to git fsck
that can fix up arbitrarily damaged git repositories. As well as avoiding
the need to rm -rf a damaged repository and re-clone, using git-repair can
help rescue commits you've made to the damaged repository and not yet
pushed out.

I've been testing git-repair with evil code that damages git repositories
in random ways. It has now successfully repaired tens of thousands of
damaged repositories. In the process, I have found some bugs in git itself.

I've had a new laptop for a couple months. It's a lot larger and faster
than my old Dell Mini 9 netbook, which I
somewhat famously used exclusively
for 5 years.

The new laptop is a Lenovo Yoga 11,
and its screen can bend all the way back and around to under the keyboard,
converting it to a tablet. (Held together with some quite powerful magnets,
it turns out.. when did those become ok to have around computer equipment?)
This is a feature I've always felt laptops should have, especially after
the Mini 9, which couldn't even open out to flat. Although since the
accelerometer is not yet working under Linux, and Linux GUIs are not very
well suited for tablets, I have so far mostly used the feature for a)
reading and b) cleaning that usually hard to reach area around the hinge.

Anyway, this is a laptop that wants to be a tablet, including all the bad
parts, like a hard to remove battery. (20-some torx screws, according to
the service manual.) Yesterday I made the mistake of running down the
battery all day, and when I plugged it in, after a gloomy grey day, there
was not enough oomph in the house's battery bank to recharge a hungry Li-ion
battery.

So I was stuck using my old laptop for several hours, with its battery removed,
and its lack of fan making it a nice warm lump. It felt kind of like driving
around in an old VW bus: everything is awkward and slow, and yet somehow also
charming, and at the end you wonder how you put up with it for so long.

This could have been a blog post about toothbrushes for monkeys, which I
seem to have dreamed about this morning. But then I was vaguely listening
to the FAIFCast in the car, and I came up with
something even more esoteric to write about!

License monads would allow separating parts of code that are under
different licenses, in a rigorous fashion. For example, we might have
two functions in different license monads:

foo :: String -> GPL String
bar :: Char -> BSD Char

Perhaps foo needs to use bar to calculate its value. It can only do so
if there's a way to lift code in the BSD monad into the GPL monad. Which
we can legally write, since the BSD license is upward compatible with the GPL:

liftGPL :: BSD a -> GPL a

On the other hand, there should be no way provided
to lift the GPL monad into the BSD monad. So bar cannot be written using
code from foo, which would violate foo's GPL license.
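Here's a self-contained sketch of how this could look (the wrapper types and runGPL are illustrative; in real use the constructors would be kept unexported, so code couldn't unwrap the monads itself):

```haskell
-- Opaque license wrappers.
newtype GPL a = GPL a
newtype BSD a = BSD a

instance Functor GPL where
    fmap f (GPL a) = GPL (f a)
instance Applicative GPL where
    pure = GPL
    GPL f <*> GPL a = GPL (f a)
instance Monad GPL where
    GPL a >>= f = f a

-- The only bridge between the two monads, in the legal direction only.
liftGPL :: BSD a -> GPL a
liftGPL (BSD a) = GPL a

-- Some BSD-licensed computation.
bar :: Char -> BSD Char
bar = BSD . succ

-- GPL code may reuse BSD code via liftGPL; the reverse cannot be written.
foo :: String -> GPL String
foo = mapM (liftGPL . bar)

-- For demonstration only; exporting this would defeat the point.
runGPL :: GPL a -> a
runGPL (GPL a) = a
```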

Perhaps the reason I am thinking about this is that the other day I found
myself refactoring some code out of the git-annex webapp (which is AGPL)
licensed and into the git-annex assistant (which is GPL licensed). Which
meant I had to relicense that code.

Luckily that was easy to do legally speaking, since I am the only author
of the git-annex webapp so far, and hold the whole copyright to it.
(Actually, it's only around 3 thousand lines of code, and another thousand
of html.)

It also turned out to be easy to do the refactoring, technically speaking
because looking at the code, I realized I had accidentally written it in
the wrong monad; all the functions were in the webapp's Handler monad, but
all of them used liftAnnex to actually do their work in the Annex monad.
If that had not been the case, I would not have been able to refactor the
code, at least not without entirely rewriting it.

It's as if I had accidentally written:

foo :: String -> GPL String
foo = mapM (liftGPL . bar)

Which can be generalized to:

foo :: String -> BSD String
foo = mapM bar

I don't think that license monads can be realistically used in the current
world, because lawyers and math often don't mix well. Lawyers have, after
all, in the past written laws
re-defining pi to be 3.
Still, license monads are an interesting way to think about things for myself.
They capture how my code and licenses are structured and allow me to reason
about it on a more granular level than the licenses of individual files.

(I have a vague feeling someone else may have written about this idea
before? Perhaps I should have been blogging about monkey toothbrushes after
all...)