This is an interesting repository for me for several reasons. Selfishly, I
have always wanted an easy way to download lots of DebConf photos for
offline viewing, and never seem to get around to downloading everything
from gallery.debconf.org when I have bandwidth. I've also wanted an example
repository that shows how git-annex can be used by a large group for
collaboration. Finally, the way this repository is set up with an incoming
queue is fairly unusual.

With 430 files in the repository, totaling over 3.5 gigabytes (which doesn't
include all the talk videos that are #included into it),
and at least 18 people having cloned the repository so far,
the debconf-share repository is well on its way to being a fairly large
git-annex repository.

Just running git annex whereis is interesting; many of the files already
have 8 copies. Some talk videos are more popular than others and you can
see when they're downloaded too. But enough snooping.. ;)

So far people have uploaded mostly photos and talk slides. Other places
exist to store those things in the DebConf infrastructure, but it's nice
to have them all available in one tree. I particularly like today's
addition of chrysn's files
which include the raw photos and hugin files used to produce panoramas,
and then pull those together into a postcard which has all its sources
available.

In my corner of the debconf-share repository, I'm collecting together files
regarding the possibly-historic dpkg-source-git-re-re-redesign process
that would have otherwise been scattered around various places and probably
not all published. This includes an hour-long recording of the main
design session (recorded with permission) made by my laptop's mic, which,
surprisingly, turned out to be pretty listenable. I will probably have more
to say about this process later, once Ian announces dgit.

So, we're still seeing how usage develops. I hope that having this
available during the next DebConf, and other Debian meetings, rather
than only at the end, will further facilitate file sharing and storage.
Especially if a fast clone is available right on the DebConf LAN. ;)

The technical details of how the repository is put together are:

There's a repository on git.debian.org, which piggy-backs on the collab-maint
group, so most Debian people have commit access to it.

git-annex is used to upload files to that repository, as an incoming
queue.

A git-annex-shell annex-content hook is run whenever someone uploads
a file there. It moves all annexed content over to annex.debconf.org
for publication.
This involves some ugly but safe juggling of publicly readable,
restricted-use ssh private keys. It was the hardest piece to get working,
and is only necessary because we don't want to bloat git.debian.org with this
stuff and it's not practical to give everyone logins to
annex.debconf.org.

As an additional guard against accidental bloat, the git.debian.org
repository will refuse to accept uploads when there is less than 5 GB
of free disk space.
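A guard like this can be sketched as a small shell check. This is my own sketch of the idea, not the actual hook: the names `check_space` and the exact refusal message are assumptions, and the real hook's plumbing surely differs.

```shell
# Hypothetical sketch of the free-space guard (not the actual hook):
# succeed only when the given path's filesystem has at least 5 GB free.
check_space() {
    limit_kb=$((5 * 1024 * 1024))                    # 5 GB in 1K blocks
    free_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')  # POSIX df, free column
    [ "$free_kb" -ge "$limit_kb" ]
}

if check_space .; then
    echo "accepting upload: ${free_kb} KB free"
else
    echo "refusing upload: less than 5 GB free" >&2
fi
```

Running a check like this before moving annexed content into place is enough to keep a shared repository from silently filling a disk.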

RichiH used flint and steel to light the bonfire. We carefully fed it up
from those sparks to a blaze. Put on the biggest logs we could find
to make it last. Now I'm sitting on the hill above it watching folks
gathered around. A poem is being read in Hindi, then
translated into English for us. Others have shared songs and poems in a dozen
languages, both classics and their own. Out below, the darkness of
the lake. This is Debian at 20.

At the start of this DebConf, I gave a talk on
"Debian Cosmology".
In that (and a later
"dh_busfactor"
talk) I shared my hopes and my fears.
I was conflicted about giving this talk; I worked on it for weeks,
worried it might not work, or might be depressing. I've had nothing but good
comments about it.

Twenty years is ages in internet time, and technical projects ossify as they age.
The last session I was in this afternoon was a presentation of a new tool,
which I hope & feel has the potential to fundamentally change an important
and suboptimal part of Debian. Then I walked outside to a rainbow over
crystal clear Swiss alps on the horizon. How encouraging, and what a nice
story that will be around some future campfire.

The Fay compiler is a simple way to build fairly
comprehensible javascript code from Haskell source.

It occurs to me that it should be rather easy to modify Fay to emit perl
code rather than javascript. This would allow contributing things like
plugins to various perl programs, without writing perl.

Of course, the same idea could probably be used to compile Haskell to other
languages like python, but perl seems particularly well suited as a second
Fay target, since perl and javascript have quite similar syntax and similar
support for features like closures, which Fay relies on.

I do not have time to work on this idea myself. It would be a good project
for a beginning Haskell programmer. You probably don't even need to fully
understand monads to do it! Essentially, look at
Fay output examples,
translate them from javascript to perl, and then most of the necessary
changes to Fay would probably be in its simple string generation code.
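To make the shape of the work concrete, here's a tiny Haskell function of the kind Fay compiles, paired with a hand-written guess at what a Perl backend might emit. The Perl in the comment is my own sketch, not output from Fay or any real tool.

```haskell
-- A small function of the sort Fay can compile. The Perl below is a
-- hand-written guess at a hypothetical backend's output, not real output.
double :: Int -> Int
double x = x * 2

-- Hypothetical Perl translation; closures map over naturally:
--   my $double = sub { my ($x) = @_; return $x * 2; };
--   my @result = map { $double->($_) } (1, 2, 3);

main :: IO ()
main = print (map double [1, 2, 3])
```

The point is that the target-language fragments a backend emits for functions, closures, and application are small and mechanical; swapping the javascript strings for perl strings is most of the job.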

I will forward any bitcoins sent to the address
149eBtWS6i8cwQdPQJJ8hAGpDuEqNidyTj
to whoever makes this. If it doesn't happen in 1 year, any donations will
be forwarded to the EFF instead.

Woken at 3 am by fil singing
"Join Us Now And Share The Software"
(all verses!) I could not get back to sleep and spent 3 hours thinking up
a new take on the hopelessly blocked dpkg-source v3 (git) format.

A diagram of the new plan, which should meet all ftpmaster requirements,
is posted in Hacklab 1. I am looking for reviewers.

A rather hard to read photo (DebConf needs mandatory whiteboards!)
is available here:

Fil has paid me back in full for his drunken carousing by gifting me a
Rhombus-Tech system on a chip on a PCMCIA card.
I've checked this new computer, which features a modern multicore ARM CPU,
into my wallet.

And that's 10% of what went on today at DebConf for me, and we've not
even gotten to the cheese and wine party tonight.

Well, my git-annex crowdfunding campaign is
halfway to its August 15th conclusion. So far it's raised more than five times
what I hoped it would. I wish I could say I'm like some canny NASA engineer
who intentionally sets low expectations for their Mars rover, but in
both the previous kickstarter and this campaign I've really had no idea how
far it'd go. I'm glad that I'll be working on
git-annex for another year.

I was particularly unsure whether moving off Kickstarter would be successful.
During the git-annex assistant Kickstarter campaign, I saw many small
contributions from people who learned of it due to it being a successfully
funded project, a staff pick, etc. Losing that easy network effect is a
gamble.

So far I've had only half the number of contributors that I got on
Kickstarter. I've basically missed out entirely on the $5 level casual
contributors. On the other hand, my backers have generally been more
generous (and some have been exceedingly generous). And I've avoided
rewards that will cost much money, so I may end up in the same ballpark
funding level in the end!

Bitcoin

I also was curious to experiment with Bitcoin in this campaign.
Partly because Paypal isn't available everywhere internationally, and takes
really obnoxious percentages of transactions (though probably not as bad
as Kickstarter taking its percentage followed by Amazon payments taking its
percentage..) and partly because there seem to be interesting possibilities for
supporting free software with Bitcoin. (Especially if any of the
microtransactions on top of Bitcoin take off.)

So far 5% of backers have used Bitcoin. It's been quite strange to actually
have significant amounts of bitcoins in my wallet.
Wordpress
has had 94 bitcoin payments in the 9 months since it started accepting them.
I've had 47 payments in the two weeks my campaign has run so far. Wow!

Most of the bitcoin payments have come in via Coinbase (a few people have
found my direct
payment address), but of those very few were using bitcoin purchased on
Coinbase. Most are probably transfers of bitcoin they already had, or
perhaps bitcoin purchased on other sites.

The one technical issue I've had with using bitcoin is that Coinbase
has not provided details about who sent most of the donations. Probably
some of them are intentionally anonymous, but I suspect Coinbase's interface
to claim incoming bitcoin transactions failed for some of them.
(If you donated bitcoin and want to actually get a reward, please email me.)

By the way, I'm converting most of the bitcoins back to USD pretty quickly.
I'm not interested in speculating on currency exchange rates with money
that was donated so I can accomplish a particular task.

DIY

I put up the campaign website without any
means in place to handle updating it. This is because I never automate
anything until I've done it at least 10 times by hand. ;) After the first
trickle of donations became a flood, I quickly realized I needed at least
something to keep the numbers straight.

What I whipped up in an hour of coding is a system where I enter incoming
payments into a hledger file and a small haskell
program parses that and writes out various files that are included into the
website. Amusingly the percentage calculation and display code was copied
from git-annex, so part of git-annex is helping run its own fundraising
campaign. The campaign video is itself hosted in a public git-annex
repository, come to think of it.
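A minimal sketch of that kind of tally, assuming a simplified posting format; the real program and ledger file are surely richer than this, and `parseAmount`/`percentOfGoal` are names I made up for illustration:

```haskell
import Data.Maybe (mapMaybe)

-- Parse a dollar amount from a simplified hledger-style posting line,
-- e.g. "    assets:paypal  $25.00". (Assumption: a toy format, not the
-- real ledger file's full syntax.)
parseAmount :: String -> Maybe Double
parseAmount l = case words l of
  [_, '$':n] -> Just (read n)
  _          -> Nothing

-- Percentage of the funding goal reached, for display on the site.
percentOfGoal :: Double -> [String] -> Int
percentOfGoal goal ls =
  round (100 * sum (mapMaybe parseAmount ls) / goal)
```

Write the resulting number into a file the website includes, and the campaign page updates itself on the next site rebuild.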

The rest of the site is built using ikiwiki. Given that it's hosted at
Branchable, this is a high level of dogfooding and DIY. There are certainly
better crowdfunding platforms, but all I miss in this one is automated
transaction entry. And I have total flexibility, double entry accounting,
and a powerful static website generator that handled being on the top of
Hacker News without a sweat. Oh, and some money. What's not to like?

As a Sunday diversion, I wrote
150 lines of code
and turned git-annex into a podcatcher!

I've been using hpodder, a podcatcher written in Haskell. But John Goerzen
hasn't had time to maintain it, and it fell out of Debian a while ago.
John suggested I maintain it, but I have not found the time, and it'd be
another mass of code for me to learn and worry about.

Also, hpodder has some misfeatures common to the "podcatcher" genre:

It has some kind of database of feeds and what files have been downloaded
from them. And this requires an interface around adding feeds, removing
feeds, changing urls, etc.

Because it uses a database, there's no particularly good way to run it on
the same feeds on multiple computers and sync the results.

It doesn't use git annex addurl to register the url where a file came
from, so when I check files in with git-annex after the fact they're
missing that useful metadata and I can't just git annex get them
to re-download them from the podcast.

git-annex's approach avoids all that: there is no database of feeds at all.
Although of course you can check a list of them right into the same git
repository, next to the files it adds.
git-annex already keeps track of urls associated with content, so it reuses
that to know which urls it's already downloaded. So when you're done with a
podcast file and delete it, it won't download it again.

This is a podcatcher that doesn't need to actually download podcast files!
With --fast, it only records the existence of files in git,
so git annex get will download them from the web (or perhaps from
a nearer location that git-annex knows about).

Took just 3 hours to write, and that's including full control over
the filenames it uses (--template='${feedtitle}/${itemtitle}${extension}'),
and automatic resuming of interrupted downloads. Most of what I needed
was already available in git-annex's utility libraries or Hackage.
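That ${var} template expansion is the sort of thing those libraries make short work of. Here's a simplified sketch of the idea, my own code rather than git-annex's actual implementation:

```haskell
import Data.List (stripPrefix)
import Data.Maybe (fromMaybe)

-- Expand ${name} references in a filename template using a lookup
-- list of metadata from the feed. Unknown names are left as-is.
-- (A simplified sketch, not git-annex's actual implementation.)
expand :: [(String, String)] -> String -> String
expand vars = go
  where
    go [] = []
    go s@(c:cs) = case stripPrefix "${" s of
      Just rest ->
        let (name, rest') = break (== '}') rest
        in fromMaybe ("${" ++ name ++ "}") (lookup name vars)
             ++ go (drop 1 rest')
      Nothing -> c : go cs
```

So with feedtitle "Foo", itemtitle "Ep1", and extension ".ogg", the template above expands to "Foo/Ep1.ogg".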

Technically, the only part of this that was hard at all was efficiently
querying the git repository for a list of all known urls. I found a
pretty fast way to do it, but might add a local cache file later on.