I've been building a lot of Docker containers recently, using my
Dockerfile framework. Building Gentoo containers from a
seed stage3 is nice, but debugging iterations are a bit slow when you
have to fetch all the source from the distant mirrors. To work around
this problem, I've written package-cache, which handles on-demand
caching for content from the Gentoo mirrors. Now you can set up a
local distfiles cache as easily as you can already set up an rsync
mirror for the Portage tree. I've added a
net-proxy/package-cache package to my wtk
overlay, and the README (on PyPI) has
instructions on setting this up locally. There are also some
Docker-specific notes in my dockerfile repository.
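
Once the cache is running, point Portage at it in
/etc/portage/make.conf (a sketch — the localhost:8080 address is an
assumption; see the README for the server invocation and its actual
default port):

GENTOO_MIRRORS="http://localhost:8080/ http://distfiles.gentoo.org/"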

Bower is a package manager for user-facing website content
(JavaScript, CSS, media, …). It's less JavaScript-focused than
Jam, so it can't compile optimized JavaScript. That's not a big
loss though, because you can always use the RequireJS
optimizer directly.
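
If you want to try Bower, installation and usage look like this
(jquery is just an example package):

$ npm install -g bower
$ bower install jquery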

Salt is a remote execution and automated deployment system. It's
great for running your own clusters once your website outgrows a
single box. If you get bored of running your own boxes, you can use
salt-cloud to provision minions on someone else's cloud (Amazon
EC2, Linode, …). You can install Salt on Gentoo with:

# USE=git emerge -av app-admin/salt

Usually you'll have one master and a host of minions
running salt-minion daemons that locally execute commands sent from the
master. After setting up BIND so salt (the default master
name) resolves to your development box, you should be
able to run:
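
Something along these lines (a sketch — jdoe is a stand-in for your
personal user, and salt-key -A accepts the minion's pending key):

# rc-service salt-master start
# rc-service salt-minion start
# salt-key -A
# salt '*' test.ping
# mkdir /srv/salt
# chown jdoe: /srv/salt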

This leaves /srv/salt owned by my personal user (instead of root),
because as much as I love Git, I'm not going to run it as root.

Once you've got a state tree in /srv/salt, you can mock-install the
configured state for each node. It's always a good idea to test your
commands before you run them, to make sure they won't do
something wonky.
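
For example (a sketch — test=True reports what would change without
applying anything):

# salt-call --local state.highstate test=True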

Because you don't have a master passing you state, --local calls
require you to have the state stored on your local box (in /srv/salt
by default). It's hard to imagine using Salt without storing state
anywhere ;).

Docker is a tool for managing Linux containers
(LXC). LXC allows you to provision multiple, isolated
process-spaces on the same kernel, which is useful for creating
disposable development/staging/production environments. Docker
provides a sleek frontend for container management. Like Salt
Stack, it uses a daemonized manager with a command-line
client.

After reading through a few docs about IP forwarding, I determined
that my internal development box could safely enable IP forwarding,
which containers use to connect to the outside network. I enabled it
for subsequent reboots:
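
The usual sysctl approach works (sysctl -p also applies the setting
immediately, without waiting for a reboot):

# echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
# sysctl -p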

Bridging

Docker sets up a docker0 bridge between the host's network and the
containers. Docker tries to guess an IP range that does not conflict
with your local network, but it's not omniscient. Until
1558 is fixed, your best bet is to set up your own bridge. If
you already have a docker0 bridge:
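
Take it down and replace it with your own (a sketch — the bridge name
and address range are arbitrary choices, and brctl comes from
net-misc/bridge-utils):

# ip link set docker0 down
# brctl delbr docker0
# brctl addbr bridge0
# ip addr add 192.168.89.1/24 dev bridge0
# ip link set bridge0 up
# docker -d -b bridge0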

Linking

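For example (a sketch with hypothetical image names; older Docker
releases spell these flags -name and -link):

# docker run -d --name redis redis
# docker run -d --name webapp --link redis:db example/webapp
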
This links webapp to redis using the alias db. DB_PORT (and
similar) environment variables are set in the webapp container,
which it can use to connect to the redis container. You can also use
ambassadors to link containers on separate hosts.

eCryptfs is an encrypted filesystem for Linux. You'll need to
have a kernel with the ECRYPT_FS module configured to use eCryptfs.
Once you have the kernel set up, install the userspace tools
(sys-fs/ecryptfs-utils on Gentoo, where you may want to enable
the suid USE flag to allow non-root users to mount their private
directories).

eCryptfs is usually used to maintain encrypted home directories, which
you can set up with ecryptfs-setup-private. I used --noautomount
because I'm not using the PAM module for automounting. Other
than that, just follow the instructions. This sets up a directory
with encrypted data in ~/.Private, which you mount with
ecryptfs-mount-private. Mounting exposes the decrypted filesystem
under ~/Private, which you should use for all of your secret stuff.
If you don't like the ~/Private path, you can tweak
~/.ecryptfs/Private.mnt as you see fit.
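
The whole cycle looks something like this (secrets.gpg is just a
stand-in for your secret stuff):

$ ecryptfs-setup-private --noautomount
$ ecryptfs-mount-private
$ mv secrets.gpg ~/Private/
$ ecryptfs-umount-private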

Jam is a package manager for front-end JavaScript. While you
want to use npm for server-side stuff, Jam is the tool to use
for JavaScript that you'll be sending to your users. Following the
docs (with my already-configured ~/.local prefix):

$ npm install -g jamjs
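
Then pull in front-end packages much as you would with npm (jquery is
just an example):

$ jam install jquery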

Integrating with Django is a bit tricky, especially since Jam doesn't
manage the CSS, images, … that are often associated with JavaScript
libraries. If you need that, you probably want to look at Bower
instead.

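Jam can also compile your installed libraries into a single optimized
file (a sketch — the output path is an arbitrary choice):

$ jam compile static/js/all.min.js
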
This last bit is really cool, and where a less JavaScript-oriented
tool like Bower falls short. Jam uses the
RequireJS optimizer under the hood for the task, so if
you don't use Jam you can always run the optimizer directly.

A while back I posted about Comcast blocking outgoing traffic on
port 25. We've spent some time with
Verizon's DSL service, but after our recent move we're back with
Comcast. Luckily, Comcast now explicitly lists the ports they
block. Nothing I care about, except for port 25 (incoming and
outgoing). For incoming mail, I use Dyn to forward mail to port
587. For outgoing mail, I had been using stunnel
through outgoing.verizon.net for my SMTP connections. Comcast
takes a similar approach, forcing outgoing mail through port
465 on smtp.comcast.net.
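
The relevant stunnel.conf section would look something like this (the
local accept port is an arbitrary choice):

[comcast-smtp]
client = yes
accept = 127.0.0.1:2525
connect = smtp.comcast.net:465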

Node is a server-side JavaScript engine (i.e. it executes
JavaScript without using a browser). This means that JavaScript
developers can now develop tools in their native language, so it's not
a surprise that the Bootstrap folks use Grunt for their build
system. I'm new to the whole Node ecosystem, so here are my notes on
how it works.

Start off by installing npm, the Node package manager. On Gentoo,
that's:

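# emerge -av net-libs/nodejs

npm ships with the Node package itself. Then, from a package's source
directory, run:

$ npm install
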
This looks in the local package.json to extract a list of
dependencies, and installs each of them under
node_modules. Node likes to isolate its packages, so every
dependency for a given package is installed underneath that package.
This leads to some crazy nesting:
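
For example (a hypothetical but typical tree):

node_modules/grunt/
node_modules/grunt/node_modules/async/
node_modules/grunt/node_modules/rimraf/
node_modules/grunt/node_modules/rimraf/node_modules/graceful-fs/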

This last case is easily solved by using relative submodule URLs in
.gitmodules. I've been through the relative-vs.-absolute URL
argument a few times now, so I
thought I'd write up my position for future reference. I prefer the
relative URL in:
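
Something like this, with a hypothetical submodule name:

[submodule "some-submodule"]
	path = some-submodule
	url = ../some-submodule.git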

Users get submodules over their preferred transport (ssh://,
git://, https://, …). Whatever transport you used to clone the
superproject will be recycled when you run git submodule init to set
submodule URLs in your .git/config.

As a special case of the mirror/move situation, there's no need to
tweak .gitmodules in long-term forks. If I set up a local version
of the project and host it on my local box, my lab-mates can clone
my local superproject and use my local submodules without my having
to alter .gitmodules. Reducing trivial differences between forks
makes collaboration on substantive changes more likely.

The only argument I've heard in favor of absolute URLs is Brian
Granger's GitHub workflow:

If a user forks upstream/repo to username/repo and then clones
their fork for local work, relative submodule URLs will not work
until they also fork the submodules into username/.

This workflow needs absolute URLs:
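
A sketch of the resolution (hypothetical paths):

superproject:  git@github.com:username/repo.git
relative URL:  ../submodule.git
resolves to:   git@github.com:username/submodule.git   (missing)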

But relative URLs are fine if you also fork the submodule(s):
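
Same sketch, after forking upstream/submodule to username/submodule:

superproject:  git@github.com:username/repo.git
relative URL:  ../submodule.git
resolves to:   git@github.com:username/submodule.git   (your fork)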

Personally, I only create a public repository (username/repo) after
cloning the central repository (upstream/repo). Several projects I
contribute to (such as Git itself) prefer changes via
send-email, in which case there is no need for contributors to
create public repositories at all. Relative URLs are also fine here:
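
Same sketch again, cloning the central repository directly:

superproject:  git@github.com:upstream/repo.git
relative URL:  ../submodule.git
resolves to:   git@github.com:upstream/submodule.git   (exists)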

Once you understand the trade-offs, picking absolute/relative is just
a political/engineering decision. I don't see any benefit to the
absolute-URL-only repo relationship, so I favor relative URLs. The
IPython folks felt that too many devs already used the
absolute-URL-only relationship, and that the relative-URL benefits
were not worth the cost of retraining those developers.

Over at Software Carpentry, Greg Wilson just posted some
thoughts about a hypothetical open science framework. He uses
Ruby on Rails and similar web frameworks as examples where frameworks
can leverage standards and conventions to take care of most of the
boring boilerplate that has to happen for serving a website. Greg
points out that it would be useful to have a similar open science
framework that small projects could use to get off the ground and
collaborate more easily.

My thesis is about developing an open source framework
for single molecule force
spectroscopy, so this is an avenue
I'm very excited about. However, it's difficult to get this working
for experimental labs with a diversity of underlying hardware. If
different labs have different hardware, it's hard to write a generic
software stack that works for everybody (at least at the lower levels
of the stack). Our lab does analog control and acquisition via an old
National Instruments card. NI no longer sells this card, and
developing Comedi drivers for new cards is too much work for many
to take on pro bono. This means that new labs that want to use my
software can't get started with off-the-shelf components; they'll need
to find a second-hand card or rework the lower layers of my stack to
work with a DAQ card that they can source.

I'd be happy to see an inexpensive, microprocessor-based open hardware
project for synchronized, multi-channel, near-MHz analog I/O to serve
as a standard interface between software and the real world, but
that's not the sort of thing I can whip out over a free weekend
(although I have dipped my toe in the water). I think the
missing component is a client-side version of libusb, to allow
folks to write the firmware for the microprocessor without dealing
with the intricacies of the USB specs. It would also be nice
to have a standard USB protocol for Comedi commands, so a single
driver could interface with commodity DAQ hardware—much like the
current situation for mice, keyboards, webcams, and other approved
USB device classes. Then the software stack could work unchanged on
any hardware, once the firmware supporting the hardware had been
ported to a new microprocessor. There are two existing classes (a
physical interface device class and a test and measurement
class), but I haven't had time to dig through those with an eye
toward Comedi integration yet. So much to do, so little time…