why does a/(b/c) = a(c/b)?

My friend Santi asked me why we divide by a fraction by interchanging
the numerator and the denominator and multiplying; that is, why
a/(b/c) = a(c/b). I wasn’t quite sure how to answer, but after
thinking about it, it turns out that there are several deep and
fascinating answers, touching many aspects of the universe of
mathematics. Here are three different answers. Sort of.
Part of the problem is that it’s difficult to say what really counts
as an explanation here, because, as Feynman explained in his famous
BBC video on “Fucking magnets, how do they work?”, an explanation has
to start with things that you already understand to be true. In cases
like this, it’s really easy to fool yourself into thinking that you
have an explanation, when all you really have is circular logic.
Here, let me demonstrate.
An answer based on group theory
-------------------------------
A “group” is a set with an operation that has the four properties of
closure, associativity, identity, and invertibility. Nonzero
fractions, together with multiplication, are a group. It turns out
that the divide-by-multiplying-upside-down thing isn’t limited to
fractions at all; it’s a much more general property that applies to
any group, including bizarre things like permutations under
composition, three-dimensional rotations of polyhedra under
composition, matrices under matrix multiplication, Gaussian integers
under addition, bit strings of some fixed length under XOR, and
integers under multiplication modulo a prime number!
To explain it, I'll first go through what each of those group properties means.
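But before the axioms, a quick brute-force sanity check of the identity itself is easy (this is evidence that it holds, not an explanation of why):

```python
from fractions import Fraction
from itertools import product

# Brute-force check of a/(b/c) == a*(c/b) over a sample of nonzero fractions.
vals = [Fraction(n, d) for n in (1, 2, -3, 5) for d in (1, 4, 7)]
assert all(a / (b / c) == a * (c / b) for a, b, c in product(vals, repeat=3))
```

`Fraction` does exact rational arithmetic, so there's no floating-point fuzz hiding in the comparison.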

summary of recent important developments (ultra-compressed, random)

I wrote this for a friend, but I thought a wider audience might find it
interesting. It's focused almost entirely on technical topics.
- The transition to BaaS is the realization of what we were hoping KnowNow
would be in 2000; we just didn't know how to tell people about it.
- Self-certifying names as used in Bitcoin, BitTorrent, Git, Tor hidden
services, and apparently now Urbit have gone mainstream.
- Fastly has redefined what a CDN is, and might conceivably work to make origin
servers on home networks feasible again. (I may be biased.)
- TimBL's turned to the DRM dark side; see his Reddit interview for evidence.
- Rust and Golang seem to be pretty hot, but Clojure is hotter.
- The transition of marketed energy to photovoltaic started happening in
earnest in 2013 and will be complete in the mid-2020s, which could result in
a bigger (log-scale) increase in marketed energy consumption than the
Industrial Revolution, since the solar energy resource is four orders of
magnitude bigger than current world marketed energy consumption.
- There's an explosion of beta-quality time-series database management systems
coming out of DevOps, the movement (or buzzword) of a couple of years back.
- John McAfee claims to have uncovered a massive operation to supply Hizbullah
agents with fake Belizean passports in order to get them into the US.
- Abrash claims judder-free VR (and presumably AR) is finally about to happen
because of new breakthrough displays. I bet Jeri Ellsworth can tell you if
he's full of shit.
- Tick-to-trade times in HFT have fallen into the single-digit microsecond
range, and HFT people are doing things like userspace NIC drivers in order to
stay in the game. That playing field is due for some serious consolidation
this year.
- The free software movement has died.
- Due to Bitcoin and Snowden, computer security became more than a PR issue.

README for httpdito (a tiny web server written in assembly)

We all have moments where we do things we regret. Sometimes when
we're drunk, or under the influence of bad friends, or lonely. In my
case, I guess it was probably sleep deprivation that led me to think,
"You know what? Writing a web server in assembly language sounds like
a fun thing to do."
It turned out pretty well and revealed some surprising things about
modern Linux systems. You might even find it useful. You probably
didn't expect you could write a useful HTTP server in
1. What httpdito is useful for
2. What httpdito is
3. How to build httpdito
4. How to configure httpdito
5. Performance
6. Security
7. What it tells us about Linux
8. What it suggests about operating system design
9. Are you insane?
What httpdito is useful for
---------------------------
httpdito can serve static files from your filesystem on Linux. It's
probably the easiest way to serve static files, and like any static
web server written in a language faster than Tcl, it's plenty fast for
just about anything. And it's smaller than 2K, so you can copy it
anywhere easily.
It serves up the files in the directory where you run it. It won't

Polynomial-spline FIR kernels by integrating sparse kernels

I think I have a method for reducing the computational expense of a
large and interesting class of FIR filters by an order of magnitude or
so, but I haven't tried it out yet, and it seems like the kind of
thing people would have tried by now, so it probably either won't work
or is already known.
In my case, this questionable insight came out of, among other things,
considering how to write timesheet software using functional reactive
programming, sweating through the night in the Buenos Aires heat wave
and blackouts, and considering whether it's possible to approximate
dense FIR kernels by convolving multiple sparse FIR kernels together.
I've tried to write this with some humor, although I think the result
is basically that I sound insane. Hopefully that provides some
entertainment. Don't take it too seriously.
Background
----------
(This section is basically me regurgitating shit from Wikipedia and
dspguide.com, so feel free to skip it if you know DSP. Or, better
yet, read it and correct me.)
FIR filters are common tools in DSP because they can easily be
designed to get zero-phase high-performance filters: you take the
inverse FFT of your desired frequency response, giving you the impulse
response, which you window to give it compact support without fucking
up the frequency response too much, and that gives you the weights for
your filter. (Not the only method, but a common one.)
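As a concrete sketch of that recipe (the FFT size, cutoff, and tap count here are made-up example values, not anything from the text):

```python
import numpy as np

# Windowed inverse-FFT lowpass design, as described above.
n = 256                    # FFT size
cutoff = 0.1               # desired lowpass cutoff, as a fraction of the sample rate
freqs = np.fft.fftfreq(n)  # bin frequencies in cycles/sample
desired = (np.abs(freqs) <= cutoff).astype(float)  # ideal brick-wall response

impulse = np.real(np.fft.ifft(desired))  # impulse response of the ideal filter
impulse = np.fft.fftshift(impulse)       # rotate so the peak is centered

n_taps = 63                              # compact support: keep only 63 taps
mid = n // 2
taps = impulse[mid - n_taps // 2 : mid + n_taps // 2 + 1]
taps = taps * np.hamming(n_taps)         # window to tame the truncation ripple
taps = taps / taps.sum()                 # normalize DC gain to 1
```

Since `desired` is real and even in frequency, the taps come out symmetric, which is exactly the linear-phase (zero-phase, after accounting for the delay) property mentioned above.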

lattices, powersets, bitstrings, and efficient OLAP

Some thoughts that occurred to me on the bus home about using
bit-twiddling tricks to speed up lattice operations. The idea grew
out of an old complaint: it's a shame that Unix doesn't have
"sub-users" with the same relationship to ordinary users that those
users have to `root`, whose very name suggests a hierarchy of users
that was never actually added to Unix. To
implement such a thing, you'd ideally want to substitute a "user is
equal to or has power over" check for the usual "user is equal to or
is superuser" check, and you'd like it to be efficient enough that it
doesn't slow down the operations it needs to guard.
As it happens, the uid of `root`, 0, has a special bitwise
relationship to other user IDs: it is, in some sense, the AND of all
the other possible user IDs. The AND of any uid with `root`'s uid
will equal `root`'s uid. It's as if each 1 bit in your uid
represented some restriction, and `root` is the uid with no
restrictions. (And `nobody`, traditionally uid -1, is the uid with
all possible restrictions, which is nicely symmetrical. Although in
this case this would mean `nobody` was subject to the whims of any
user at all, which might not actually be what you want, but whatever.)
So you'd substitute the check `(actor_uid & actee_uid) == actor_uid`
for the check `!actor_uid || actor_uid == actee_uid`, which would be
exactly as efficient; but you'd have to have some kind of magical way
to assign uids to regular users so that you didn't accidentally end up
with one regular user being superuser over another one, or over some
sub-user of another one.
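A minimal sketch of that dominance check (the function name is mine):

```python
def has_power_over(actor_uid: int, actee_uid: int) -> bool:
    """True if every restriction bit set in actor_uid is also set in actee_uid.

    This is the `(actor_uid & actee_uid) == actor_uid` check from above;
    `root` (uid 0, no restriction bits) dominates every uid, and every
    uid dominates itself.
    """
    return (actor_uid & actee_uid) == actor_uid
```

For example, `has_power_over(0b0010, 0b0110)` is true (0b0110 carries all of 0b0010's restrictions plus one more, so it's a sub-user), while the reverse check is false.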
Is there such a magical way to assign uids? That's what this essay
explores. I haven't found a universal solution, but I've found some

current Fukushima leaks are not a major catastrophe

The recently-revealed continuing leaks of radioactive contaminants
from Fukushima, although they are 500 times smaller than the initial
release and 100 million times smaller than the natural radioactivity
of the Pacific Ocean, could still be dangerous to local ecology and
human health, but do not represent a global catastrophe.
This is my conclusion as a non-expert in the field summarizing the
publicly available information. It could be wrong.
Details follow.
Different accounts give different amounts of radioactive water leaked
into the Pacific from the Fukushima nuclear plant via continuing
leaks. Some of them give the amount, rather uselessly, in tons, but
the better accounts give the amount of radioactive material in the
water in becquerels. [One TV report][0] says it's a PBq, but that
sounds like it's probably referring to part of the initial, much
larger release; other reports give the amount as 10 to 50 TBq: [More
Fukushima Fallout][2] and [a Japan Times article][4] say 30 TBq,
[Asahi Shimbun's article][7] says 24 TBq, while [National Geographic's
article][5] gives the groundwater concentration of radioactive cesium
in places as around 1 kBq/kg, gives the total release as 0.3 TBq/month,
describes the immediate aftermath of the disaster as a release of
around 10 PBq, and contextualizes it by comparing to the 89 TBq release
of cesium-137 from the Hiroshima bombing.
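To put the quoted figures side by side (numbers taken from the reports cited above; this is a rough order-of-magnitude check, nothing more):

```python
TBq = 1e12
PBq = 1e15

leak = 30 * TBq      # continuing-leak estimate (Japan Times and others)
initial = 10 * PBq   # immediate-aftermath release (National Geographic)

ratio = initial / leak   # a few hundred: the initial release was
                         # hundreds of times larger than the ongoing leak
```

With the leak estimates ranging from 10 to 50 TBq, this ratio lands anywhere from about 200 to 1000, consistent with the "500 times smaller" figure above to within the scatter of the estimates.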
[0]: http://www.youtube.com/watch?v=tSI3Rke8Zp4 "FUKUSHIMA: 1 Quadrillion Bq Radioactive Water
Discharged into the Pacific Ocean since May 2011"
[2]: http://www.blindbatnews.com/2013/08/more-fukushima-fallout-30-trillion-becquerels-of-strontium-floods-pacific-ocean/23050

mechanical computation LUTs (lookup tables) with planar fabrication

I've written previously about heightfields for mechanical computation:
<http://lists.canonical.org/pipermail/kragen-tol/2010-June/000919.html>.
One difficulty mentioned therein is that fabrication technologies
capable of producing individual one-off parts like the
three-dimensional heightfields called for are rather expensive;
drilling a single hole in a hard material can cost tens of cents, and
thousands, if not tens of thousands, of such precise holes will be
needed for a complete computing device.
Planar fabrication techniques such as acid etching, laser cutting,
sawing with a jigsaw or fretsaw or coping saw or piercing saw,
waterjet cutting, oxy-acetylene or plasma cutting, or cutting with a
hot or abrasive wire are dramatically cheaper, but they can't cut only
partway through the material, either with precision or at all. It
would be very convenient to be able to achieve the two-dimensional LUT
effect using only such planar fabrication techniques.
This is possible by using a tapered probe that measures the width of
the hole, rather than its depth; this allows a flat plate containing
holes of various widths to be used instead of the cube containing
round holes of various depths I proposed before. The holes can be
slot-shaped rather than round, since the taper will be cut from a flat
plate. The friction and tensile forces resulting from the taper will
be the limiting factor on the number of bits in the machine; this can
be improved somewhat by making holes shaped like plus signs or
rectangles and using two separate tapers in a plus-sign-cross-section-
shaped probe, and perhaps by using stepped tapers.
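For concreteness, here's the geometry of the width-measuring probe spelled out (my own sketch, not from the original posting): a probe with a linear taper of full angle θ has width 2·d·tan(θ/2) at distance d from its tip, so it seats in a slot of width w at depth w / (2·tan(θ/2)), converting slot width back into the "height" the LUT needs:

```python
import math

def probe_depth(slot_width: float, taper_full_angle_deg: float) -> float:
    """Depth at which a linearly tapered probe seats in a slot of the given width."""
    half_angle = math.radians(taper_full_angle_deg) / 2
    return slot_width / (2 * math.tan(half_angle))
```

With a 90-degree taper, for example, a slot 2 units wide seats the probe at depth 1; shallower taper angles trade more depth range for the friction problems mentioned above.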
Alternatively, of course, you could construct the three-dimensional
heightfield by squeezing together a bunch of parallel plates, like the

3D measurement with noise correlation (from 2006)

I came across this thing I'd written in 2006 on an 8-megabyte SD card
in an old box; I wrote it on a Zaurus or something while Beatrice and
I were traveling around the US. Apparently I independently invented
Kinect.
---
Saw a story recently about a face recognition biometric bullshit
security system. The innovation of this system was that it scanned
the face's 3-D shape quickly without expensive equipment.
Specifically it projected a pattern of light onto the face and
analyzed a single photograph of the results. A photo accompanying the
story made it clear how this works: the pattern of light is a set of
horizontal lines alternating light and dark, and the camera looks down
at the face at an angle, while the lines are projected horizontally.
The lines are still mostly continuous across the face in the image,
but wiggle up and down as the surface moves forward and back.
Of course this is a general-purpose low-cost 3-D scanning technique.
All the lines are the same, so you can't tell which particular line
from the light source corresponds to which squiggle in the image. So
you only get relative displacements. If you wanted absolute position
measurements, you could project a barcode instead of a uniform
pattern. The angle of the squiggles would still give the slope of the
surface, but now their spacing would make them identifiable.
You could carry this further by using a pattern of random 2-D 1-bit
noise instead; then you could correct for horizontal displacements of
the camera as well. If any significant area of the noise correlates

a ferromagnetic-saturation "diode" for radio wave detection?

Ferromagnetic materials have very high magnetic permeability, but
saturate at some point; in some cases, especially for extremely
ferromagnetic materials like the electrical steel used in
transformers, the transition is quite abrupt.
A major problem in the historical development of radio was the
"detector": some way of converting the high-frequency AC signal of the
detected radio wave into a DC or low-frequency signal that could be
used, for example, to activate a solenoid. This was eventually solved
by the development of the vacuum-tube diode and later the
semiconductor junction diode, but before that, there were a number of
Rube Goldberg contraptions, some of which remained in use for a long
time in special circumstances: the "coherer", which sintered metal
particles together with the RF energy and then measured the DC
resistance of the result; the "cat's-whisker detector", a delicate
Schottky diode made with a point contact between a finely-pointed wire
and a crystal of a semiconductor such as galena, iron pyrite,
carborundum, or even the iron oxide on a razor blade of a "foxhole
radio"; Marconi's "magnetic detector", which used the nonlinear
hysteresis behavior of moving iron wire to convert an RF magnetic
field into a tiny DC voltage; and Fessenden's "electrolytic detector",
which used the electrolytic formation of a layer of bubbles on a fine
platinum wire electrode to preferentially impede current in one
direction. Somewhat related is the "mercury-vapor rectifier", which
uses the enormous difference in work function between mercury and
graphite to conduct in only one direction.
It occurs to me that the saturation transition in low-hysteresis
electrical steel could be used to form a detector for frequencies up
to some limit, as follows. You bias the primary winding of an

energy harvesting of EM fields for low-power electronics

Clamp-on ammeters have a ferromagnetic "clamp" that encloses a wire in
a closed magnetic circuit; the magnetic field induced in the
ferromagnetic material is proportional to the total net current flux
enclosed by the clamp, and you can measure this field precisely with a
Hall-effect sensor, thus distinguishing wires carrying a load from
wires that aren't.
You could use the same approach to build a low-power electronic device
that powers itself by leeching energy from the magnetic field around a
current-carrying wire, without needing a direct electrical connection.
This could enhance safety and reliability and ease installation, since
you can "plug in" such a device without making a direct electrical
connection, and if the device shorts out, it won't cause an electrical
fire. Not only could it harvest power without ever coming in contact
with the wire, it could harvest power without even coming near the
wire; it need only enclose the wire with a loop of ferrite without
also enclosing its return path.
Effectively this is a clamp-on transformer, with the "primary winding"
of the transformer having only one turn of wire. If that wire is
normally carrying, say, 100 amps, then the secondary winding of the
transformer with, say, 1000 turns, could draw up to 100 milliamps; but
if the total available voltage to be dropped through that one turn is
240 volts, the secondary winding would need to handle 240kV, which is
difficult. If we limit the secondary winding voltage to 10V, then the
voltage drop on the primary will be an insignificantly small 10mV, and
the total power being transmitted can be up to 10mV * 100 A = 1 watt;
the secondary winding then can draw up to 1 watt / 10 volts = 100 mA
still. That's enough power to allow the use of a very simple power
supply (say, a diode, a small capacitor, and a 7805) and a relatively
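The arithmetic in the paragraph above, spelled out as a sanity check using the example numbers from the text:

```python
primary_current = 100.0    # A flowing in the single-turn "primary" (the mains wire)
turns = 1000               # secondary turns
secondary_voltage = 10.0   # chosen secondary voltage limit

primary_drop = secondary_voltage / turns       # ~10 mV reflected onto the wire
power = primary_drop * primary_current         # ~1 W available to harvest
secondary_current = power / secondary_voltage  # ~100 mA at 10 V
```

Note that limiting the secondary voltage is what keeps the reflected primary drop negligible; the 240 kV figure above is what you'd face if you instead tried to take the full 240 V line voltage across that one turn.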