the adventures of Jeremy Ehrhardt and his motley crew of bits

Ideally you’d ship no init script and let downstream packaging figure it out, since there are even more init systems than there are package formats: various versions of SysV init; upstart for old versions of Ubuntu; systemd for new versions of Ubuntu, Debian, and Red Hat; OS X launchd for all those Homebrew users; and if you’re generous, or a masochist, the several variants of BSD rc.d; Solaris SMF; or even Windows Services.

If you don’t have the luxury of being popular enough that distros package for you, or you’re working on in-house stuff, then you might end up writing a SysV init script. It won’t be portable. Even if you’re using daemonize to handle the hard stuff like daemonizing, PID/lock files, and running as a non-root user, there are still little matters like the helper functions you need to kill the process or get its status. Are they in /etc/rc.d/init.d/functions? Are they in /lib/lsb/init-functions? Does your target system actually have either, and if so, what’s in them? Do you need to run Red Hat chkconfig after installing the script? How about Debian update-rc.d? Do you need an INIT INFO block? Does your target system claim to use LSB-compliant init scripts? Which LSB version? Do you really need all those entry points? Are you going to test it on systemd or upstart systems to see if their SysV backwards compatibility actually works?
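For reference, the INIT INFO block in question is a structured comment header at the top of the script. A placeholder sketch (the facility names and runlevels vary by distro and service):

```
### BEGIN INIT INFO
# Provides:          myservice
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon managed by this init script
### END INIT INFO
```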

OK, for modern systems, you can probably get away with just systemd and launchd, but no wonder a lot of new kids think “background service” means “run it in tmux”.

Unpack TeamCity-9.0.3.tar.gz and find the buildAgent directory. This is where all of the agent’s code and config files live. Since TeamCity wasn’t written with the FHS in mind, we’ll put it in /opt/TeamCity/buildAgent.

Make the agent scripts executable

Run chmod +x buildAgent/bin/*.sh. You won’t need to run them on the system you’re using for packaging, but fpm preserves the executable flag when it creates a package.

This fpm invocation tells it the package name and version, that the agent doesn’t care what architecture you’re using but requires Java 6, and that everything should go under /opt/TeamCity, except for the service file, which goes where systemd is expecting it.
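A sketch of that invocation, built up as a string here so each piece is easy to annotate (the dependency name, staging directory, and service-file path are assumptions; adjust for your target distro):

```shell
# Assumed staging layout: ./packaging/opt/TeamCity/... and
# ./packaging/lib/systemd/system/teamcity-build-agent.service
fpm_cmd="fpm -s dir -t deb \
  --name teamcity-build-agent --version 9.0.3 \
  --architecture all --depends java6-runtime \
  -C packaging \
  opt/TeamCity lib/systemd/system/teamcity-build-agent.service"
echo "$fpm_cmd"
```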

You should now have a file named teamcity-build-agent_9.0.3_all.deb.

Create a user

Before installing the package on your target machine, run sudo adduser teamcity --system --group --home /opt/TeamCity, or add something equivalent to your configuration manager if that’s where you manage users. (I’ll be using Puppet to manage users on production machines that will have this package). The --group option creates a teamcity group along with the teamcity user, so that you can then add other users to the teamcity group if they need to look at TeamCity builds or configuration.

Install the package

Copy the file to your target machine and run sudo dpkg -i teamcity-build-agent_9.0.3_all.deb.

Configure the package

Copy /opt/TeamCity/buildAgent/conf/buildAgent.dist.properties to /opt/TeamCity/buildAgent/conf/buildAgent.properties. Edit serverUrl to point it at your TeamCity server. You may also want to set name to something recognizable, or the agent will default to your hostname. If you’re using Puppet, this is a good file to templatize.
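After editing, the relevant lines end up looking something like this (the URL and agent name are placeholders):

```properties
## conf/buildAgent.properties
serverUrl=http://teamcity.example.com:8111
name=build-agent-01
```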

Create writable directories

You’ll need to create the directories named in bin/agent.sh and conf/buildAgent.properties, including at least logs, work, temp, and system, as well as update and backup directories that I’ve only seen in log output. They should be writable by the teamcity user.

Alternatively, you can map them to more FHS-friendly locations in /tmp and /var. logs is used by agent.sh for both logging and a PID file, but that can be changed by setting the LOG_DIR and PID_FILE environment variables in the service file.
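For reference, a minimal sketch of what such a service file can look like, assuming the agent.sh start/stop interface and the FHS-friendly locations just mentioned (every path here, and the Type= choice, is an assumption to verify against your install):

```ini
[Unit]
Description=TeamCity build agent
After=network.target

[Service]
Type=forking
User=teamcity
Environment=LOG_DIR=/var/log/teamcity-agent
Environment=PID_FILE=/var/run/teamcity-agent/buildAgent.pid
ExecStart=/opt/TeamCity/buildAgent/bin/agent.sh start
ExecStop=/opt/TeamCity/buildAgent/bin/agent.sh stop

[Install]
WantedBy=multi-user.target
```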

More extra credit: there’s no easy way to specify file ownership in an fpm-generated .deb package. This is a known issue. This is not a limitation of the .deb format itself, which, broadly speaking, is a pair of tarballs. Solve this problem and your package can create those writable directories itself.

Start the agent

Run sudo systemctl start teamcity-build-agent. You can also run sudo systemctl status teamcity-build-agent to see if it started successfully.

Talking in both directions

You’ll need to make sure the agent can reach the server (outbound on whatever port you used in the serverUrl section of the agent config file), and the server can likewise reach the agent (inbound on the port specified in ownPort and the IP address specified in ownAddress, if you’ve bound your agent to a specific IP). The inbound port is 9090 by default, and again, it’s not encrypted, so you’ll need to use a VPN, SSH tunnel, or equivalent between the server and agent.

At Thumbtack, we’ve started using Amazon Elastic Beanstalk (EB) to deploy web services. We already use Papertrail as a log aggregator for our existing systems, and have installed their syslog forwarding agent on our EB apps using .ebextensions. However, I didn’t have a way to group log output from multiple EB-managed EC2 instances, or distinguish EB environments by name, because the default EC2 hostnames are just prefixed IP addresses. So I decided to use the EB environment name, which is unique across all EC2 regions, and tell Papertrail’s agent to use that instead of a hostname.

EB sets the environment ID and name as the elasticbeanstalk:environment-id and elasticbeanstalk:environment-name EC2 tags on all of the parts of an EB app environment: load balancer, EC2 instances, security groups, etc. Surprisingly, EC2 tags aren’t available to instances through the instance metadata interface, but they are available through the normal AWS API’s DescribeTags call. EB app container instances are based on Amazon Linux and have Python 2.6 and Boto preinstalled, so rather than go through the shell gyrations suggested by StackOverflow, I wrote a Python script (get-eb-env-name.py) that uses Boto to fetch the instance ID and region, then uses those two together with the instance’s IAM role to fetch its own tags, and prints the environment name.

You’ll need to make sure the IAM role used by your EB app environment has permission to describe tags, and possibly to describe instances and instance reservations as well. I’ve included an IAM policy file (EC2-Describe-IAM-policy.json) that you can apply to your IAM role to grant it the permission to describe any EC2 resource.

See my gist:

[gist 08880dc2c74b9c26cb5b /]

There’s an annoying wrinkle around getting the region for an EC2 instance through the instance metadata: you can’t, at least not through a directly documented method. You can get the availability zone (AZ) from http://169.254.169.254/latest/meta-data/placement/availability-zone, which will be something like us-west-2a, and then drop the last letter, giving you a region name like us-west-2. However, Amazon has never documented the relationship between AZ names and region names, so it’s possible this could break in the future.
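If you want the AZ trick anyway, it amounts to a one-liner, with the caveat above that the naming relationship is undocumented:

```python
def region_from_az(az):
    """Drop the trailing zone letter(s): 'us-west-2a' -> 'us-west-2'."""
    return az.rstrip("abcdefghijklmnopqrstuvwxyz")
```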

Since the instance identity document actually has a key named region, I decided to go with that method to find the region for an EC2 instance from inside that instance.
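A sketch of that approach (the function names here are mine, not necessarily what get-eb-env-name.py uses; the DescribeTags call runs under the instance’s IAM role and needs the permissions described earlier):

```python
import json

# Instance identity document endpoint (same metadata service as above).
IDENTITY_URL = "http://169.254.169.254/latest/dynamic/instance-identity/document"

def parse_identity_document(doc_json):
    """The identity document is JSON with 'region' and 'instanceId' keys."""
    doc = json.loads(doc_json)
    return doc["region"], doc["instanceId"]

def eb_environment_name(region, instance_id):
    """Look up this instance's elasticbeanstalk:environment-name tag."""
    import boto.ec2  # preinstalled on Amazon Linux
    conn = boto.ec2.connect_to_region(region)
    tags = conn.get_all_tags(filters={
        "resource-id": instance_id,
        "key": "elasticbeanstalk:environment-name",
    })
    return tags[0].value if tags else None
```

The script fetches IDENTITY_URL, feeds the body through parse_identity_document, and prints the result of eb_environment_name.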

Previously, we’d set an environment variable or customize a config file to tell Papertrail what “hostname” to use. Finding the EB environment name at runtime from EC2 tags means that we don’t have to customize EB app bundles for each target environment, and thus can use the same app bundle across multiple environments; for example, from a CI build server’s test environment to a staging environment, and then to production.

I wrote this review paper on fractal image compression, denoising, and enlargement back in 2008 while I was at Caltech, and thought I’d lost it until discovering a copy in my backups. Fractal compression is a fascinating area, and I went on to prototype a GPGPU-accelerated image enlarger as my capstone graphics project. It actually predates OpenCL, and I hope to come back to it someday for an overhaul, using this paper for reference.

A Review of Fractal Image Compression and Related Algorithms

Introduction

Fractal image compression is a curious diversion from the main road of raster image compression techniques. Fractal compressors do not store pixel values. Rather, they store a fractal, the attractor of which is the image that was encoded. This representation of an image has a number of useful properties.

The property that drove initial development of the field is that images with fractal properties can potentially be encoded in very few bits if a close match for their generating fractal can be found. Fractal-like images are frequently found in nature, so fractal compression promised very high compression ratios for natural scenes such as clouds, landscapes, forests, etc. Another useful property is the scale independence of fractals. Fractal-encoded images may be decoded at larger or smaller sizes than the original without major distortion.

This review covers major developments unique to fractal image compression. Many published algorithms incorporate general data compression techniques such as vector quantization and entropy coding (as many non-fractal compressors do), but the details are best covered by general data compression literature, and have thus been omitted. There is also some recent work hybridizing fractal and wavelet compression, which is best left to a review of wavelet-based methods, and has also been omitted.

The first problem in the field was proving that it was possible to find a fractal representation for any image. This was proved possible with the Collage Theorem, which was followed by the development of an algorithm to generate fractal representations for arbitrary images. The algorithm was slow, so subsequent work looked at improving the speed of the search for fractal representations, and also on improving their quality. The partitioning of the image to be encoded was a common target for improvement, and several of the important schemes are discussed below.

Unfortunately, fractal compression was found to have some problems. Early research did not establish that natural images were actually self-similar to such a degree that fractal compression was the best fit. Additionally, optimal encoding was found to be NP-hard. These difficulties are the reason that fractal codecs are virtually nonexistent in the current graphics universe, having been superseded by other codecs with more general applications and higher speeds.

However, the useful properties of fractal encodings have found some use outside of the performance-sensitive area of compression. Two fractal image-processing algorithms are discussed in the last section; one for image enlargement, and one for noise removal. First, we will discuss the development of fractal image compression, from the initial mathematical development of the field in the late 1980s and the first implementation in the early 1990s, up through the various improvements over the following decades.

The Collage Theorem

Barnsley’s 1985 Collage Theorem [BARNSLEY] p. 89 made fractal image compression possible. It states that, given an iterated function system (IFS, a list of repeatedly applied contractive transforms) and some subset of a complete metric space, in order for the attractor of the IFS to be “close” to the given subset, the union of the images of the subset transformed by the member transforms of the IFS must be “close” to the original subset (for a certain definition of “close”). The Collage Theorem thus establishes a guideline for finding fractal approximations to images: work out transforms that take parts of an image to itself. The attractor of the transforms will then be close to the original image.

Barnsley presents a number of examples of simple fractals for which the reader is encouraged to figure out suitable IFS representations by hand. Barnsley does not, however, elaborate on how to find such an IFS representation for an arbitrary image.

Beginnings of fractal image compression

The first algorithm capable of generating an IFS fractal representation for an arbitrary raster image was proposed by Barnsley’s graduate student Jacquin [JACQUIN1]. Jacquin’s fractal coding scheme is based on non-overlapping square block partitions of the image being encoded. It defines a “distortion” measure based on the L2 distance between the pixels of two image blocks, as well as a number of contractive linear transformations that act on image blocks. The transformations used include geometric transforms from isometries of the square blocks, as well as pixel value transforms that change the overall luminance of the block being transformed.

Jacquin’s algorithm divides the image to be encoded into “domain” blocks of a given size, and classifies the blocks into classes based on their appearance: shade, midrange, simple edge, or mixed edge. Partitioning the blocks in this way reduces the pool of blocks that must be searched in the next step. The algorithm then divides the image into “range” blocks containing a quarter of the number of pixels, and iterates through them. For each range block, it searches other domain blocks in that class, looking for a transformation or transformations that map a domain block to the range block with minimum distortion. (Domain blocks are downsampled to the same number of pixels as range blocks for the distortion calculation.)

Figure 2: The fractal encoding process [JACQUIN2, Fig. 2]. Rᵢ is a range block, D is the domain block pool, and T is the transform pool.
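As a concrete sketch of the matching machinery, with blocks as 2-D lists of pixel values (Jacquin’s actual transform pool is much richer than this):

```python
def downsample(block):
    """Average 2x2 pixel groups, halving each dimension — as when a
    domain block is shrunk to range-block size before comparison."""
    n = len(block)
    return [[(block[2*i][2*j] + block[2*i][2*j+1] +
              block[2*i+1][2*j] + block[2*i+1][2*j+1]) / 4.0
             for j in range(n // 2)] for i in range(n // 2)]

def distortion(a, b):
    """Squared L2 distance between two same-size blocks."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
```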

This process produces a list of self-similarity-producing transforms, from regions of the image to other regions of the image, that constitute an IFS with the original image as the fractal’s attractor. If no transformation can be found for some range block with a measured distortion less than a given threshold, the range block is partitioned into 4 equally sized square children, and the search is repeated with the children as range blocks.

Reconstructing an image from its IFS representation is a much simpler procedure than the above encoding algorithm. The list of transforms is applied some number of times to an arbitrary image. With each iteration, since the original image is the fixed point of the IFS, the starting image is transformed to more closely resemble the original encoded image. The image may be reconstructed at a higher or lower quality by increasing or decreasing the number of iterations.
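The fixed-point behavior is easy to see in miniature. A toy, non-image illustration: iterate any contractive map and any starting value converges to the attractor.

```python
def iterate(transform, x, iterations=50):
    """Apply a contractive transform repeatedly; the result converges
    to the transform's fixed point regardless of the starting x."""
    for _ in range(iterations):
        x = transform(x)
    return x

# x -> 0.5*x + 3 is contractive, with fixed point 6.
halve_and_shift = lambda x: 0.5 * x + 3.0
```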

Jacquin [JACQUIN1] demonstrated encoding and decoding on the standard Lena test image. The paper does not report coding speed or memory usage statistics for the test image, so it is impossible to discern whether its fractal coding scheme is competitive with other image compression schemes of the time. Jacquin published a clarified and expanded version of this encoding procedure two years later [JACQUIN2]. Almost all fractal image compression algorithms are extended versions of this algorithm.

Improvements in partitioning

The simple two-level image partitioning scheme used by Jacquin has been the target for replacement in later work. The motivation in changing the partitioning scheme is reducing the error in the mapping between range and domain blocks, and thus improving the fidelity of the image encoding. Fisher describes two partitioning schemes that improve on Jacquin.

Fisher describes quadtree partitioning [FISHER, Ch. 3] as “a ‘typical’ fractal scheme and a good first step for those wishing to implement their own.” A quadtree is a data structure that subdivides 2D space: each node represents a square area, and nodes may be subdivided into four equally sized square child nodes. Quadtree partitioning is an “adaptive” scheme that produces different partitions for different images, based on their content. Fisher’s quadtree algorithm first subdivides the image to be encoded several times, producing a balanced quadtree several layers deep. The leaf nodes of this initial quadtree are range blocks as in Jacquin [JACQUIN2]. Domain blocks are chosen from nodes that are one level closer to the root than the range blocks, and thus four times the area. All domain blocks are compared with each range block, and if no transformation can be found that maps a domain block to the range block under consideration with less than some threshold level of distortion, the range block is subdivided. It may be recursively subdivided several times until a low-distortion mapping from a domain block is found, or a maximum tree depth is reached.
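The subdivision logic can be sketched as follows, with range blocks as (x, y, size) tuples and match_error standing in for the full domain-block search:

```python
def quadtree_ranges(x, y, size, depth, max_depth, match_error, threshold):
    """Return the list of range blocks (x, y, size) the encoder will use.

    match_error(x, y, size) stands in for the distortion of the best
    domain-to-range transform found for that range block.
    """
    if match_error(x, y, size) <= threshold or depth >= max_depth:
        return [(x, y, size)]
    # No acceptable match: split into four equal quadrants and recurse.
    half = size // 2
    blocks = []
    for dx in (0, half):
        for dy in (0, half):
            blocks += quadtree_ranges(x + dx, y + dy, half, depth + 1,
                                      max_depth, match_error, threshold)
    return blocks
```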

Jacquin’s original partitioning scheme is almost a special case of quadtree partitioning with a minimum depth of n and maximum depth of n+1, where n is however many partitions it takes to subdivide the image into blocks of the desired range block size, and the levels of the tree closer to the root are not actually represented in the scheme’s data structures.

HV partitioning [FISHER, Ch. 6] is similar to quadtree partitioning in that it involves recursive partitioning of rectangular areas. When a quadtree range block is split, it is always divided into four equal areas. HV partitioning has an additional adaptive element: when an HV-tree range block is split, the orientation (horizontal or vertical) and position of the dividing line is chosen to maximize the difference between average pixel values on either side of the line, with a bias applied to prefer splits that do not create very thin sub-rectangles. The goal of HV partitioning is to create a partition of the image that better corresponds to the structure of the image than a quadtree partition, and thus use fewer ranges when encoding the image, improving encoding time and quality.

A drawback of HV partitioning is the greater variety of possible transformations between range and domain blocks, since both are rectangles of essentially arbitrary dimensions, rather than squares with power-of-two dimensions. This increases both search times and the number of bits required to represent a mapping, although the generally lower number of mappings required offsets the latter.

Fisher notes a weakness in both quadtree and HV partitioning: reconstructed images suffer from visible blockiness at all but the highest reconstruction qualities. Fisher’s quadtree partitioning [FISHER, Ch. 3] deals with this by applying a heuristic deblocking filter during image reconstruction. The filter averages pixels on either side of a range block boundary, with pixels on the inside weighted by a factor proportional to the range block’s depth in the quadtree. Fisher describes a similar procedure for blocks in an HV tree [FISHER, Ch. 6]. Blocky artifacts are by no means unique to these two schemes or fractal image compression in general; block-DCT codecs such as JPEG and the MPEG family are also susceptible to blockiness. The JPEG standard does not include a deblocking filter, but some MPEG variants do, such as H.264.

One scheme [DAVOINE] not based on rectangular blocks uses a mesh partition made up of triangles, which is generated by repeated adaptive Delaunay triangulation and triangle splitting. The goal of this scheme is to cover the image in triangles that are as large as possible while maintaining a variance in internal pixel brightness values that is less than some threshold parameter (triangles meeting this condition are “homogeneous”). Thus, regions with internal brightness boundaries are split up. Delaunay triangulation was chosen to minimize the number of thin triangles, and thus reduce numerical problems when the internal pixels of a triangle are read from the image raster.

Initially, the image is covered with a regular grid of vertices. In the splitting phase, the Delaunay triangulation of the vertices is calculated; then, for every triangle that is not homogeneous, a new vertex is added in the barycenter of that triangle. Splitting is repeated until convergence, or until some iteration limit is reached. In the merge phase, the algorithm removes vertices for which all surrounding triangles have similar pixel value mean and variance. Then a final triangulation is performed. This procedure is used to generate both domain and range sets of triangles, with domain triangles permitted a greater variance.

Adaptive triangular partitions generally use fewer blocks than quadtree or HV partitions. An additional advantage of triangular blocks is that inter-block seams do not always line up with pixel boundaries, so there is less visible blockiness in the decoded image. The major disadvantage is that both encoding and decoding require transformations between arbitrarily shaped triangular range and domain blocks; this involves substantially more interpolation than the rectangle-based schemes.

Difficulties of fractal image compression

Since the idea behind fractal image compression is that the image being compressed can be modeled as a fractal, useful fractal image compression requires that the image actually have fractal characteristics so that it can be efficiently modeled that way. Clarke & Linnett [CLARKE] pointed out that while fractal image compression is frequently proposed as a compression scheme for images of nature (such as plants, landscapes, clouds), it is not necessarily the case that those images have the affine self-similarities required for effective compression by existing fractal compression schemes, or indeed, that they have any fractal characteristics at all.

As an example, they observed that there are significant differences between a simple computer-generated fractal fern and a photo of a real fern. A fractal fern is highly idealized and may be represented very efficiently by a fractal model, but a natural fern does not have its perfect regularity, and the self-similarity breaks down at some scales. In particular, the fern leaves resemble the whole fern, but are different structures nonetheless, and can only be approximated by small copies of the whole fern.

Wohlberg & de Jager [WOHLBERG2] examined statistical properties of natural images with regard to fractal image encoding, specifically with regard to the deterministic self-similar fractal representations used by all of the above algorithms. Their conclusions seem to confirm the assertion of Clarke & Linnett: “The form of self-affinity considered here therefore does not appear to represent a more accurate characterization of image statistics than vastly simpler models such as multiresolution autoregressive models.”

It has been proved [MATTHIAS] that finding an optimal fractal encoding for an arbitrary image is NP-hard. Furthermore, it has been proved that algorithms derived from Jacquin’s original Collage Theorem-based algorithm do not generate approximations to optimal encodings. So, at least with currently known encoding methods, there is an unavoidable tradeoff between fast (polynomial-time) encoding and obtaining the highest quality representation that will fit in a given amount of space. This is a huge disadvantage relative to other image compression methods: for example, JPEG’s block DCT coding processes one fixed-size block of pixels at a time, which results in an encoding time that is a linear function of the number of samples in the original image.

Fractal image processing

Authors focused on image compression have treated the iterative reconstruction process for fractal image encodings as a way to control the quality of a reconstructed image. Polidori & Dugelay [POLIDORI] examined the reconstruction process as a procedure for image enlargement.

The most commonly used algorithms for this purpose are variations on polynomial interpolation of samples (the pixels of the original image). The assumption underlying polynomial interpolation is that the samples are from a smooth continuous function. Polidori & Dugelay made a different assumption: that the function that the image was sampled from is fractal in nature, instead of smooth. Then a fractal encoding of that image would be an approximation of that fractal. Since fractals are scale-independent, the image could then be reconstructed at any size from the encoding.

Polidori & Dugelay first examined fractal image encoding and decoding schemes identical to those developed for image compression, based on nonoverlapping partitions of the image. They found that such schemes tended to produce blocky reconstructed images with undesirable artifacts when used to reconstruct images at a larger size than the original image. They then examined the idea of using overlapped domain blocks, which retain redundant information not desirable for compression applications, but useful for image enlargement. They proposed and tested several methods of recombining the overlapped blocks, with some methods yielding enlargements with visual quality comparable to those obtained through classical interpolation.

Another image processing technique that makes use of the scale independence of fractal image encoding is fractal denoising. Additive white Gaussian noise (AWGN) is common in images acquired from noisy sensors or transmitted through noisy communications channels. Images containing this kind of noise can be modeled as a series of samples where each sample is the sum of the value from the noise-free original and a value taken from a Gaussian distribution. Obviously, this model is scale-dependent, and one might expect that fractal image encoding poorly represents AWGN.

In fact, this is the case. Ghazel, Freeman & Vrscay [GHAZEL] noticed that “straightforward fractal-based coding performs rather well as a denoiser”. This motivated the development of their denoising algorithm, which goes a step further than that, and statistically estimates a fractal encoding of the noise-free image from a fractal encoding of an AWGN-contaminated noisy image.

In the first step of the algorithm, the noisy image is examined for areas with nearly uniform pixel values. The differences in value from pixel to pixel in such regions is likely due to noise, so the variance of the AWGN can be estimated from the variance of these regions. The image is then encoded as a series of transformations from domain blocks to range blocks in the usual fashion. (Adaptive quadtree partitioning is used, as it results in the best quality of reconstructed images.) Each transformation in the encoding that affects pixel values (“gray-level” transforms, as opposed to “geometric” transforms) is adjusted using the previously estimated variance according to a simple relation. Finally, the image is reconstructed from the modified fractal encoding.
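The variance-estimation step might look like this in miniature (the flattest-blocks heuristic is my stand-in for however [GHAZEL] actually selects uniform regions):

```python
def variance(values):
    """Population variance of a flat list of pixel values."""
    m = sum(values) / float(len(values))
    return sum((v - m) ** 2 for v in values) / len(values)

def estimate_noise_variance(blocks, keep_fraction=0.1):
    """Average the variance of the flattest keep_fraction of blocks;
    in a noisy image, those blocks' variance is mostly noise variance."""
    per_block = sorted(variance(b) for b in blocks)
    k = max(1, int(len(per_block) * keep_fraction))
    return sum(per_block[:k]) / k
```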

Ghazel, Freeman & Vrscay report that fractal denoising is competitive with or superior to Lee filtering, which is a common locally adaptive linear denoising algorithm. Furthermore, fractal denoising is more likely to outperform Lee filtering as the variance of the AWGN increases, making the fractal method an attractive choice for processing very noisy images.

Conclusion

A review of fractal image compression cannot fail to state this: despite years of improvements, such as the previously discussed partitioning schemes, fractal image compression is not competitive with current block-DCT or wavelet methods. The speed and quality issues noted above have prevented broad use of fractals for storage of compressed images.

However, fractal image processing techniques have found some application: fractal image enlargement was eventually commercialized as a product known as Genuine Fractals [GENUINEFRACTALS] which is well known within the desktop publishing industry as a high-quality image scaler suitable for making large prints from digital photos.

We’ll be using Jordan Sissel’s FPM (“Effing package management!”) utility to turn a packaging directory into an actual .deb package. The packaging directory has the same structure as /: an /etc, a /usr, a /usr/lib, and so forth, and we’ll put build products from mod_auth_openid’s Autotools project into it.

I’ve put the instructions into a gist, which should work on Ubuntu 13.04/13.10 (Ringtail/Salamander) and probably later releases.

Note a minor wrinkle: APXS doesn’t respect the DESTDIR environment variable, so we can’t simply make install into the packaging directory, and instead will have to assemble the contents of the packaging directory by hand.
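A sketch of the by-hand assembly (the module path, Apache layout, and module identifier are assumptions based on a typical Debian Apache setup; the touch fallback just lets the sketch run without a completed build):

```shell
PKG=packaging
mkdir -p "$PKG/usr/lib/apache2/modules" "$PKG/etc/apache2/mods-available"

# Copy the module that apxs/libtool built (placeholder if absent):
cp .libs/mod_auth_openid.so "$PKG/usr/lib/apache2/modules/" 2>/dev/null \
  || touch "$PKG/usr/lib/apache2/modules/mod_auth_openid.so"

# A load snippet so the module can be enabled after install:
printf 'LoadModule authopenid_module /usr/lib/apache2/modules/mod_auth_openid.so\n' \
  > "$PKG/etc/apache2/mods-available/auth_openid.load"

find "$PKG" -type f
```

Point fpm at this directory and it will reproduce the same tree under /.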

I recently had the privilege of presenting a talk at GDC 2014 for KIXEYE: Building Customer Support and Loyalty, in which I talked about how and why KIXEYE built the Monocle customer support system as a web app, the challenges and rewards of building a uniform support API across multiple games, and how designing games with support scenarios in mind yields benefits across the whole product. Pretty exciting to get to show off my whole team’s work. The full talk will be in the GDC Vault later this month, but for now, I’ve uploaded slides:

SCons, the Python-based build system, tries to isolate itself from the user’s environment as much as possible, so that one developer’s weird environment variables don’t lead to an irreproducible build. This is great until you want to use tools from places that SCons doesn’t think are standard, in which case you can make use of its site_scons extension mechanism to make them standard.

In this case, I’m using Homebrew. Homebrew normally installs into /usr/local. I think this default is completely mental, because /usr/local is the Balkans of Mac software packaging: routinely invaded and full of land mines. I’ve installed Homebrew into /opt/homebrew, where it’s safe from having parts overwritten at random by unmanaged package installers. For Make builds, I’ve included these lines at the end of my .zshrc to make Homebrew components available:
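Those lines amount to prepending Homebrew’s directories to the relevant search paths (a reconstruction; my actual dotfiles may differ slightly):

```shell
# Make /opt/homebrew binaries, headers, and libraries visible to builds.
export PATH="/opt/homebrew/bin:/opt/homebrew/sbin:$PATH"
export CFLAGS="-I/opt/homebrew/include${CFLAGS:+ $CFLAGS}"
export LDFLAGS="-L/opt/homebrew/lib${LDFLAGS:+ $LDFLAGS}"
```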

For SCons, I’ve created the file $HOME/.scons/site_scons/site_init.py to modify the Environment object used by SCons subprocesses:
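A sketch of what that file can do (the after_init helper is generic Python; the SCons module path matches SCons 2.x, and the Homebrew prefix is my nonstandard one):

```python
HOMEBREW = "/opt/homebrew"

def after_init(cls, hook):
    """Wrap cls.__init__ so hook(instance) runs after every construction."""
    real_init = cls.__init__
    def wrapped(self, *args, **kwargs):
        real_init(self, *args, **kwargs)
        hook(self)
    cls.__init__ = wrapped

def install_homebrew_paths():
    # Only importable when SCons itself loads site_init.py.
    import SCons.Environment

    def add_paths(env):
        # Make Homebrew binaries visible to SCons subprocesses.
        env.PrependENVPath("PATH", HOMEBREW + "/bin")

    after_init(SCons.Environment.Environment, add_paths)
```

site_init.py would then call install_homebrew_paths() at the top level, so every Environment picks up the extra paths.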

Specifically, my site_init.py decorates the Environment initializer to always add Homebrew to the PATH, and now I have binaries like sdl-config available. (SCons has support for parsing the flags emitted by those config programs.) This should extend to CFLAGS and LDFLAGS, except that they’re most likely strings instead of lists. I’ve since refined the gist to add CFLAGS, CXXFLAGS, and LDFLAGS to the SCons environment.

The end result is that I can assume that any SCons build on my system will have access to libraries and tools installed through Homebrew.

I’ve got a Nokia N900 going spare since I upgraded to a phone for which people actually write apps. Lots of possibilities with the N900. It’s got a shedload of radios, a decent CPU, an IR blaster, and not one but two cameras. The one thing it doesn’t have is any way to plug in more things.

Enter the Raspberry Pi I got at PyCon last week. It comes with lots of GPIO pins, and a USB port where one might plausibly plug in an N900. Clearly, these two devices were meant to be friends. Let’s get them talking.

Prepping the N900

The N900 runs Maemo, a heavily customized Debian fork, which was a fine OS for a hackable device. However, I used mine as an actual phone for several years, and it’s gotten kinda janky, so I’m going to blast it back to factory-fresh and then bring it up to the latest community-developed version of Maemo.

Flash rootfs image

This is where Linux lives. I used Maemo 5 PR 1.3.1, which is the very last official version, postdates the skeiron.org mirror, and can be found on Nokia’s files site by Googling parts of the filename: RX-51_2009SE_21.2011.38-1_PR_COMBINED_MR0_ARM.bin

Install community-maintained Maemo (CSSU)

Go to http://repository.maemo.org/community/community-fremantle.install in Nokia Web to install the Stable variant of CSSU. Add the catalog. Let it install the CSSU Enabler. Click through all the messages. Close the app manager and open the Community SSU app that it just installed. If it complains about HAM (the app manager) still being open, just run it again. Once it’s done, it’ll return you to HAM. Click Update All to install the CSSU Maemo update, which is a 34 MB download and may take a while over WiFi.

Fill the temporal void with fruit salad.

The phone will eventually reboot, and you’ll get an “Operating system updated” banner.

Useful apps

Install “OpenSSH Client and Server” from the Network section in HAM. Set a root password when it asks for one.

Install “rootsh” from the System section in HAM, which gives you the gainroot script (equivalent to su?).

Open an xterm. Become root (sudo gainroot). Edit /etc/passwd using vi or whatever to change the password field of the user account from ! to *, or you won’t be able to log in over SSH using pubkey auth because sshd will think the user account is locked out. (BTW, I learned this from running the OpenSSH server in debug mode with sshd -d, in which it’ll stay attached to the terminal and show useful status messages).

The N900 expects that the USB host will be using IP address 192.168.2.14, so let’s make that happen on system boot and when the N900 is plugged in (auto and allow-hotplug respectively). Edit /etc/network/interfaces to add these lines:
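The stanza looks like this (usb0 is the interface name the USB-Ethernet gadget typically gets on the host side; check dmesg if yours differs):

```
auto usb0
allow-hotplug usb0
iface usb0 inet static
    address 192.168.2.14
    netmask 255.255.255.0
```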

Note, as I just found out, that this only works if you connect the N900 in “PC Suite” or “Charging Only” mode. If you connect the N900 in “Mass Storage” mode, it emulates a hard drive instead to give you access to its microSD reader and internal storage, and dmesg on the host will show a USB mass-storage device rather than a network interface.

The mode setting seems to be sticky: you can get the N900 to go back to being a fake Ethernet adapter by connecting it at least once in PC Suite mode, after which it can be unplugged and replugged as much as you like, even if you leave it in Charging Only by not selecting a mode. However, the mode setting doesn’t persist across reboots. When I power-cycled my N900 while it was connected to the Pi, it didn’t come up as a network device until I manually selected PC Suite mode again.

but sudo lsof -i -n -P | grep LISTEN will tell you what's behind them. -i selects IPv4 and IPv6 sockets, -n turns off hostname translation, and -P turns off port name lookup, because the day /etc/services is correct or relevant will probably be the day the Sun burns out.