It’s well known that quaternions can be used to represent rotations in 3- and 4-dimensional space. It seems less well known that they can also be used to represent reflections and, in fact, we can derive the representation of rotations from this in a straightforward way (since all rotations can be constructed as a sequence of reflections).

We shall write quaternions p and q as:

p = x + X
q = y + Y

where x and y are scalars and X and Y are 3-vectors. Now quaternion multiplication can be defined as:

pq = (x+X)(y+Y) = (xy - X.Y) + (xY + yX + XxY)

with brackets in the result indicating the scalar and vector parts (and where X.Y and XxY are the usual dot and cross product). Quaternion multiplication is associative: p(qr) = (pq)r = pqr but not commutative (in fact, pq = qp just when XxY is zero).

We also define conjugation: the conjugate of p, which we write as p' (overbar p̅ is conventional, but I think is harder to read onscreen), is p with the vector part negated:

p' = x - X

Multiplying out with the rule above (the cross product terms cancel) gives:

pq' + qp' = 2(xy + X.Y)

But xy + X.Y is just the dot product of p and q, considered as normal 4-dimensional vectors, so we can write:

pq' + qp' = 2p.q

This nicely relates quaternion algebra directly to the geometry of 4-space and we can now rewrite expressions using dot product purely in terms of quaternion arithmetic (quaternion addition and subtraction and operations with scalars are the same as for ordinary vectors, of course).

So, the usual definition, valid in any dimension, for a reflection in a (hyper)plane with normal n (assumed to be a unit vector) is:

p → p - 2(p.n)n

we can rewrite as:

p → p - (pn' + np')n = p - pn'n - np'n = p - p - np'n = -np'n

ie. reflection of p in n is just -np'n. Note that a 4-dimensional reflection is in a 3-dimensional hyperplane (or perhaps it is clearer to think of the reflection as being along the normal vector), so vectors parallel to the normal are reversed, vectors orthogonal to the normal (ie. in the normal hyperplane) are unchanged.
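Since the algebra above is easy to get wrong, here’s a quick numerical check of the reflection formula (a sketch, representing quaternions as plain (scalar, x, y, z) tuples; qmul follows the multiplication rule given earlier):

```python
# Numerical check of the reflection formula p -> -n p' n.

def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(p):
    a, b, c, d = p
    return (a, -b, -c, -d)

def reflect(p, n):
    # reflection of p along the unit normal n: p -> -n p' n
    s = qmul(qmul(n, qconj(p)), n)
    return tuple(-x for x in s)

def reflect_classic(p, n):
    # p -> p - 2(p.n)n, the usual 4-vector formula
    d = sum(a*b for a, b in zip(p, n))
    return tuple(a - 2*d*b for a, b in zip(p, n))

p = (1.0, 2.0, 3.0, 4.0)
n = (0.5, 0.5, 0.5, 0.5)        # a unit 4-vector
r1 = reflect(p, n)
r2 = reflect_classic(p, n)
assert all(abs(a - b) < 1e-12 for a, b in zip(r1, r2))
```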

Interestingly, the relation pq' + qp' = 2p.q is also true for complex numbers (with the usual complex multiplication and conjugation) and we have exactly the same representation for a 2-dimensional reflection, but we can further simplify using the commutativity of complex multiplication:

z → z - 2(z.n)n = -nz'n = -n²z'
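The complex-number version is easy to check numerically too (a sketch, using Python’s built-in complex type; the 2d dot product of z and n is Re(z n̄)):

```python
# Check z - 2(z.n)n = -n*n*conj(z) for a unit complex n.
import cmath

def reflect_dot(z, n):
    d = (z * n.conjugate()).real      # the 2d dot product z.n
    return z - 2 * d * n

def reflect_cx(z, n):
    return -n * n * z.conjugate()

z = 3 + 4j
n = cmath.exp(1j * 0.7)              # a unit complex number
assert abs(reflect_dot(z, n) - reflect_cx(z, n)) < 1e-12
```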

Returning to quaternions, we can combine two reflections, with normals n and m, say, to get a (simple) rotation:

p → -m(-np'n)'m = mn'pn'm

and geometry tells us that the rotation is through an angle twice that between the planes of reflection.

In 4-space, this rotation can be thought of as in the plane spanned by n and m (ie. vectors in that plane rotate to other vectors in the plane), or around an axis of another plane, orthogonal to the first, in fact the intersection of the two hyperplanes normal to n and m.

A general 4-dimensional rotation needs 4 reflections:

p → j(l(mn'pn'm)'l)'j = jl'mn'pn'ml'j

and we lose any simple relation between the left and right quaternion – in fact p → qpr is a rotation for any unit quaternions q and r (an interesting case arises if q or r is the identity – a Clifford translation).

We can derive expressions for reflections and rotations in 3-space from this, with 3-space points represented as pure quaternions, ie. with zero scalar part, so if n and m are pure, then n' = -n and nm = (mn)'.

A reflection now is:

p → -np'n = npn

and a rotation is:

p → mn'pn'm = mnp(mn)'

and we have the usual definition of a rotation in 3-space:

p → qpq'

Representing rotations as compositions of reflections also has practical uses. Consider the problem of finding a rotation to take a vector p to a vector q (assumed to both be of unit length) – we can do this as two reflections, first reflect p in the (hyper)plane normal to p+q, this aligns p with q, but in the opposite direction, so reflect again in the plane orthogonal to q (it may help to sketch a picture – the construction makes sense in 2 or more dimensions). Using the 3-space formulation above, and the fact that qq = -1 for pure q, we have:

p → rpr' where r = -q(p+q)/|p+q| = (-qp - qq)/|p+q| = (1 - qp)/|p+q|

and since we know that the result is of unit length, we can just use:

r = normalize(1 - qp)

(with a special case when qp = 1, ie when q = -p, when the rotation is not uniquely defined).
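As a quick sanity check, here’s the construction in Python (a sketch; quaternions are (scalar, x, y, z) tuples, p and q are unit 3-vectors embedded as pure quaternions, and the rotation is applied as r p r'):

```python
# The rotation taking unit vector p to unit vector q: r = normalize(1 - qp).
from math import sqrt

def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(p):
    return (p[0], -p[1], -p[2], -p[3])

def normalize(p):
    m = sqrt(sum(x*x for x in p))
    return tuple(x / m for x in p)

def rotation_to(p, q):
    # r = normalize(1 - qp), assuming p != -q
    one = (1.0, 0.0, 0.0, 0.0)
    qp = qmul(q, p)
    return normalize(tuple(a - b for a, b in zip(one, qp)))

p = (0.0, 1.0, 0.0, 0.0)                  # the unit x direction
q = normalize((0.0, 1.0, 2.0, 2.0))
r = rotation_to(p, q)
image = qmul(qmul(r, p), qconj(r))        # r p r'
assert all(abs(a - b) < 1e-12 for a, b in zip(image, q))
```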

Also, if we have two vectors x0,y0 and wish to rotate them to new vectors x1,y1 (assume |x0| = |x1|, |y0| = |y1|, and |x0-y0| = |x1-y1|), first reflect x0 to x1 along x1-x0 (ie. in the plane orthogonal to x1-x0), this also reflects y0 to a new point y2, but we can then reflect y2 to y1 along y2-y1 (this leaves x1 unchanged as it is equidistant from y1 and y2):

n = x1 - x0
y2 = y0 - 2(y0.n/n.n)n
m = y2 - y1
r = normalize(mn)

There is always a unique rotation, but two special cases are: if x0 = x1, then swap x0 with y0 and x1 with y1 (if y0 = y1 as well, there is nothing to do); and if y2 = y1, then reflect along cross(x1,y1) rather than y2-y1.
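The whole construction can be sketched in Python (generic case only; the two special cases above are not handled here, and the test uses a known 120° rotation about (1,1,1) which cyclically permutes the axes):

```python
# Rotate the pair (x0, y0) onto (x1, y1) via two reflections.
from math import sqrt

def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(p):
    return (p[0], -p[1], -p[2], -p[3])

def normalize(p):
    m = sqrt(sum(x*x for x in p))
    return tuple(x / m for x in p)

def pure(v):
    # embed a 3-vector as a pure quaternion
    return (0.0, v[0], v[1], v[2])

def rotate(r, v):
    # v -> r v r'
    return qmul(qmul(r, pure(v)), qconj(r))[1:]

def align(x0, y0, x1, y1):
    n = tuple(a - b for a, b in zip(x1, x0))
    k = 2 * sum(a*b for a, b in zip(y0, n)) / sum(a*a for a in n)
    y2 = tuple(a - k*b for a, b in zip(y0, n))   # y0 reflected along n
    m = tuple(a - b for a, b in zip(y2, y1))
    return normalize(qmul(pure(m), pure(n)))     # r = normalize(mn)

x0, y0 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
x1, y1 = (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
r = align(x0, y0, x1, y1)
assert all(abs(a - b) < 1e-12 for a, b in zip(rotate(r, x0), x1))
assert all(abs(a - b) < 1e-12 for a, b in zip(rotate(r, y0), y1))
```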

A similar construction is also possible in 4-dimensional space: 3 point pairs define a general rotation (the first three reflections map the 3 points, the last reflection, in the hyperplane containing those points, ensures we have a proper rotation).

Much of this is taken from Coxeter’s 1946 paper, “Quaternions and Reflections”, https://www.jstor.org/stable/2304897, which of course goes into much more depth and with much greater rigour.

It’s useful to be able to set up miniature networks on a Linux machine, for development, testing or just for fun. Here we use veth devices and network namespaces to create a small virtual network, connected together with an Open vSwitch instance. I’m using a Raspberry Pi 3 for this – it’s less inconvenient when it goes wrong – but I don’t think anything is Pi specific (and I certainly wouldn’t recommend a Pi for serious routing applications).

A veth device pair is a virtual ethernet cable, packets sent on one end come out the other (and vice versa of course):

I could assign an IP to both ends and try to send traffic through the link, but since the system knows about both ends, the traffic would get sent directly to the destination interface. Instead, I need to hide one end in a network namespace: each namespace has a set of interfaces, routing tables etc. that are private to that namespace. Initially everything is in the global namespace; we can create a new namespace (these are often named after colours) with the ip command:

Note that veth0 is in state LOWERLAYERDOWN because veth1 is now DOWN (as is the local interface in the namespace). We can now assign addresses to veth0 and veth1 and make sure all the interfaces are up:

That’s all we need for the most basic setup. Now we’ll add a second namespace and connect everything together with a switch – we could use a normal Linux bridge for this, but it’s more fun to use Open vSwitch and later use some very basic Openflow commands to set up a learning switch.

For a more complicated setup it’s usually a good idea to enable IP forwarding, so while we remember:

but sadly the source address is still in the 10.0.0.0 subnet and it’s not surprising that the Google DNS server isn’t responding.

Now, part of this exercise is to find out about Open vSwitch and its capabilities and I would hope that they would include setting up simple NAT translation, but I have no idea how to do that right now, so we’ll just use IP tables, so set up NAT and make sure forwarding is enabled:

Earlier, when we created the OVS bridge, it started in “Normal” mode, with a single flow rule that sends every incoming packet out of every interface (except the one that it came in on), so the bridge is acting like a hub. Setting “fail-mode=secure” means there are no default rules, so all packets are dropped.

First, if we have been playing, it’s a good idea to clear the rule table:

ovs-ofctl del-flows ovsbr0

Now set up the learning rules. The idea is that when a packet comes in from a particular MAC address, the switch remembers which interface the packet arrived on, so when it wants to send a packet to that address, it can just send it on the interface recorded earlier. We can do a similar thing with the local interface so we don’t need to configure the rules to handle whatever the local MAC address is (maybe there is a better way to handle the local interface – comments welcome).

The first rule says that when a packet originating locally is received, ie. that is being sent from a local process, add a rule (to table 10) that says that when an incoming packet is received, addressed to the same MAC address, put the value 0xFFFF in the lower 16 bits of register 0. The second does the same for packets received from the other interfaces in the switch, adding a rule that puts the interface number in register 0. Having added a rule, processing continues with table 1.

The first rule sends packets with a broadcast ethernet address directly through to table 2, the second rule goes through table 10 first – the idea being that if the packet is being sent to a known MAC address, table 10 will put the number of the interface in register 0, or 0xffff if it’s the local MAC address, or 0 if the interface hasn’t been learned yet.
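The learning behaviour itself (leaving aside the Openflow encoding) can be sketched in a few lines of Python – a hypothetical simulation, not OVS code:

```python
# A toy learning switch: learn source MAC -> port, flood unknowns.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                 # mac -> port

    def packet_in(self, port, src, dst):
        self.table[src] = port          # learn where src lives
        if dst == "ff:ff:ff:ff:ff:ff" or dst not in self.table:
            return self.ports - {port}  # broadcast or unknown: flood
        return {self.table[dst]}        # known destination: one port

sw = LearningSwitch([1, 2, 3])
assert sw.packet_in(1, "aa", "bb") == {2, 3}     # bb unknown: flood
assert sw.packet_in(2, "bb", "aa") == {1}        # aa was learned above
assert sw.packet_in(1, "aa", "bb") == {2}        # now bb is known too
```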

Finally table 2 just sends packets off to the right place using the register 0 values:

ARM processors support various performance monitoring registers, the most basic being a cycle count register. This is how to make use of it on the Raspberry Pi 3 with its ARM Cortex-A53 processor. The A53 implements the ARMv8 architecture, which can operate in both 64- and 32-bit modes; the Pi 3 uses the 32-bit AArch32 mode, which is more or less backwards compatible with the ARMv7-A architecture, as implemented for example by the Cortex-A7 (used in the early Pi 2’s) and Cortex-A8. I hope I’ve got that right – all these names are confusing.

The performance counters are made available through coprocessor registers and the mrc and mcr instructions, the precise registers used depending on the particular architecture.

By default, use of these instructions is only possible in “privileged” mode, ie. from the kernel, so the first thing we need to do is to enable register access from userspace. This can be done through a simple kernel module that can also set up the cycle counter parameters needed (we could do this from userspace after the kernel module has enabled access, but it’s simpler to do everything at once).

To compile a kernel module, you need a set of header files compatible with the kernel you are running. Fortunately, if you have installed a kernel with the raspberrypi-kernel package, the corresponding headers should be in raspberrypi-kernel-headers – if you have used rpi-update, you may need to do something else to get the right headers, and of course if you have built your own kernel, you should use the headers from there. So:

Looks like we can count a single cycle and since the Pi 3 has a 1.2GHz clock the loop time looks about right (the clock seems to be scaled if the processor is idle so we don’t necessarily get a full 1.2 billion cycles per second – for example, if we replace the loop above with a sleep).

Instead of using a triangulated mesh, we can display a surface in 3d by simply generating a set of random points on the surface and displaying them as a sort of particle system. Let’s do this with the famous Clebsch cubic surface: the points are stored as homogeneous coordinates, to display we multiply by a quaternion to rotate in projective space before doing the usual perspective projection to Euclidean space.

The Clebsch surface is the set of points (x0,x1,x2,x3,x4) (in projective 4-space) that satisfy the equations:

x0 + x1 + x2 + x3 + x4 = 0
x0³ + x1³ + x2³ + x3³ + x4³ = 0

To simplify things, we can eliminate x0 (= -(x1 + x2 + x3 + x4)) and rename a little, to get the single cubic equation:

(x + y + z + w)³ = x³ + y³ + z³ + w³ [***]

defining a surface in projective 3-space, with the familiar 4-element homogeneous coordinates.

Since coordinates are homogeneous, we can just consider the cases of w = 1 and w = 0 (plane at infinity), but for w = 0, it turns out the solutions are some of the 27 lines which we shall later draw separately, so for now just consider the case w = 1 for which we have:

(x + y + z + 1)³ = x³ + y³ + z³ + 1

and given values for x and y, we can solve for z easily – the cubes drop out and we just have a quadratic equation that can be solved in the usual way:

3Az² + 3A²z + A³ - B = 0 where A = x + y + 1, B = x³ + y³ + 1

We can now generate points on the surface by randomly choosing x and y and solving for z to give a set of homogeneous points (x,y,z,w) satisfying [***] and we can get further solutions by permuting the coordinates. We don’t need all permutations since some of the coordinates are arbitrary, and points that are multiples of each other are equivalent. The random points themselves are generated by this Javascript function, that generates points between -Infinity and +Infinity, but clustered around the origin.
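Here’s a sketch of that procedure in Python (the post’s Javascript point generator isn’t shown here, so random_coord below uses a tangent transform as one plausible way of getting unbounded values clustered around the origin):

```python
# Generate random points on the Clebsch surface by solving the
# quadratic 3Az^2 + 3A^2 z + (A^3 - B) = 0 for z given random x, y.
import random
from math import sqrt, tan, pi

def random_coord():
    # clustered around the origin, with unbounded tails
    return tan(pi * (random.random() - 0.5))

def surface_points(n):
    points = []
    while len(points) < n:
        x, y = random_coord(), random_coord()
        A, B = x + y + 1, x**3 + y**3 + 1
        if A == 0:
            continue
        disc = 9*A**4 - 12*A*(A**3 - B)   # b^2 - 4ac
        if disc < 0:
            continue
        for s in (1, -1):                 # both roots of the quadratic
            z = (-3*A**2 + s*sqrt(disc)) / (6*A)
            points.append((x, y, z, 1.0))
    return points

# check each point satisfies [***], up to floating point error
for x, y, z, w in surface_points(100):
    scale = max(1.0, abs(x) + abs(y) + abs(z))**3
    assert abs((x+y+z+w)**3 - (x**3 + y**3 + z**3 + w**3)) < 1e-6 * scale
```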

The Clebsch surface of course is famous for its 27 lines, so we draw these in as well, also as a random selection of points rather than a solid line. 15 lines consist of points of the form (a,-a,b,-b,0) and permutations – since we are working in 4-space, this becomes 12 lines of form (a,-a,b,0) and three of form (a,-a,b,-b). These 15 lines are drawn in white and can be seen to intersect in 10 Eckardt points where 3 lines meet (though it’s hard to find a projection where all 10 are simultaneously visible). The other 12 lines are of the form (a,b,-(φa+b),-(a+φb)) where φ is the golden ratio, 1.618…, and can be seen to form Schläfli’s “Double Six” configuration – each magenta or cyan line intersects with exactly 5 other lines, all of the opposite color.

All that remains is to project into 3-space – as usual we divide by the w-coordinate, but to get different projections, before doing this we rotate in projective space by multiplying by a quaternion & then varying the quaternion varies the projection. (Quaternion (d,-a,-b,-c) puts plane (a,b,c,d) at infinity – or alternatively, rotates (a,b,c,d) to (0,0,0,1) – it is well known that quaternions can be used to represent rotations in 3-space, but they also work for 4-space (with 3-space as a special case) – a 4-space rotation is uniquely represented (up to sign) by x -> pxq where p and q are unit quaternions). Here we multiply each point by a single quaternion to give an isoclinic or Clifford rotation – every point is rotated by the same angle.

We are using Three.js, which doesn’t seem to accept 4d points in geometries – we could write our own vertex shader to do the rotation and projection on the GPU, but for now, we do it on the CPU; updating the point positions is reasonably fast with the Three.js BufferGeometry. Actually displaying the points is simple with the THREE.Points object – we use a simple disc texture to make things a little more interesting, and attempt to color the points according to the permutations used to generate them.

The mouse and arrow keys control the camera position, square brackets move through different display modes, space toggles the rotation.

An excellent reference giving details of the construction of the surface (and good French practice) is:

A number n, with an even number of digits, is excellent if it can be split into two halves, a and b, such that b² - a² = n. Let 2k be the number of digits, then we want n = aA + b = b² - a² where A = 10^k.

Rearranging and completing the square (multiply through by 4) gives (2a + A)² - (2b - 1)² = A² - 1, so writing X = 2a + A, Y = 2b - 1 and N = A² - 1 = 10^(2k) - 1, we have (X - Y)(X + Y) = N.

So, every 2k digit excellent number gives rise to divisors i,j of N, with i = X - Y, j = X + Y, ij = N and i <= j

This process can be reversed: if i is a divisor of N, with j = N/i and i <= j, we have X = (j+i)/2, Y = (j-i)/2, then a = (X-A)/2 and b = (Y+1)/2. If all the divisions by 2 are exact (and in this case they are – N is odd, so i and j are too, also writing i = 2i'+1 and j = 2j'+1, we can show that i' and j' must have different parities) then we have a potentially excellent number – all we need to check is that a has exactly k digits and that b has at most k (otherwise a and b are not the upper and lower halves of a 2k digit number).

Now we have a nice algorithm: find all divisors i of N = 10^(2k) - 1, with i <= sqrt(N), find a and b as above and check if they are in the appropriate range; if so, we have an excellent number, and it should be clear that all excellent numbers can be generated in this way.
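For small k this is easy to try directly (a sketch, using the linear divisor scan that suffices for small N):

```python
# Find all 2k-digit excellent numbers by scanning divisors of 10^(2k)-1.

def excellent(k):
    A = 10**k
    N = A*A - 1                       # 10^(2k) - 1
    results = []
    i = 1
    while i*i <= N:
        if N % i == 0:
            j = N // i
            X, Y = (j + i) // 2, (j - i) // 2   # i, j are both odd
            a, b = (X - A) // 2, (Y + 1) // 2   # exact, by the parity argument
            if A // 10 <= a < A and b < A:      # a has exactly k digits, b at most k
                results.append(a * A + b)
        i += 1
    return sorted(results)

assert excellent(1) == [48]           # 8^2 - 4^2 = 48
assert excellent(2) == [3468]         # 68^2 - 34^2 = 3468
```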

For small N, we can find all divisors just by a linear scan, but for larger N something better is needed: given a prime factorization we can generate all possible combinations of the factors to get the divisors, so now all we need to do is factorize 10^(2k) - 1. This of course is a hard problem but we can use, for example, Python’s primefac library, and give it some help by observing that 10^(2k) - 1 = (10^k - 1)(10^k + 1). The factorization is harder for some values of k, particularly if k is prime, but we can always have a look at:

if we run into trouble. My Pi 2 gets stuck at k = 71 where 11, 290249, 241573142393627673576957439049, 45994811347886846310221728895223034301839 and 31321069464181068355415209323405389541706979493156189716729115659 are the factors needed, so it’s not surprising it is struggling. Also, the number of divisors to check is approximately 2^(n-1) where n is the number of prime factors, of which, for example, 10^90 - 1 has 35, so just generating all potential 180 digit numbers will take a while.

So, after all that, here’s some code. Using Python generators keeps the memory usage down – we can process each divisor as it is constructed, (though it does mean that results for a particular size don’t come out in order) – after running for around 24 hours on a Pi 2, we are up to 180 digits and around 2000000 numbers but top reports less than 1% of memory in use.

The idea is to use some fairly straightforward vector geometry to generate uniform polyhedra and their derivatives, using the kaleidoscopic construction to generate Schwarz triangles that tile the sphere. We use spherical trilinear coordinates within each Schwarz triangle to determine polyhedron vertices (with the trilinear coordinates being converted to barycentric for actually generating the points). Vertices for snub polyhedra are found by iterative approximation.

We also can use Schwarz triangles to apply other symmetry operations to the basic polyhedra to generate compound polyhedra, including all of the uniform compounds enumerated by John Skilling (as well as many others).

There are some other features including construction of dual figures, final stellations, inversions, subdividing polyhedra faces using a Sierpinski construction, as well as various colouring effects, exploding faces etc..

This is nice, but we can see that it depends on the fact that in Javascript a function automatically converts to a string that is its own source code; also, it would be nice to get rid of the explicit function binding.

Using an idea from Lisp (and this is surely inspired by the (λx.xx)(λx.xx) of the lambda calculus), we can use an anonymous function:

This works nicely, with no function binding needed (we are just using the definition of f to get going here), but we are still using implicit conversion from functions to strings. Let’s try explicitly quoting the second occurrence of s:

That isn’t well-formed Javascript though: the single quotes in the stringified version of the function haven’t been escaped, so this won’t compile. We need to somehow insert the quotes into the string, but without getting into an endless regress with extra layers of quotes (this problem really only exists because opening and closing quotes are the same – if quoting nested in the same way that bracketing does, it would all be much easier).

(Note also that while we are using implicit function to string conversion to construct our Quine, the Quine itself doesn’t use that feature).

Now there is no quotation at all in the function body, but this doesn’t quite work as we need to pass in the character parameters in the main function call, and for this we need to pass in the comma and escape characters as well:

Not minimal in terms of length, but fairly minimal in terms of features required – no library functions, no fancy string formatting directives, no name binding apart from function application, no language specific tricks, no character set dependencies; we haven’t even made use of there being two ways to quote strings in Javascript.

Since we aren’t being very specific to Javascript, it’s easy to adapt the solution to other languages:
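For example, here’s a Python version – though note it leans on Python’s %r formatting, one of the features the construction above was avoiding, so it’s an adaptation of the idea rather than a direct translation:

```python
# Build a Python quine by splicing a template into itself, then check
# that executing the program prints exactly its own text.
import io
import contextlib

template = 's = %r\nprint(s %% s, end="")'
program = template % template           # splice the template into itself

out = io.StringIO()
with contextlib.redirect_stdout(out):
    exec(program)                       # running the program...
assert out.getvalue() == program        # ...prints exactly itself
```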

Here ρ is an environment, mapping variables to values, and note that in the rule for a λ expression, the λ in the right hand side is defining a function in the domain of values, whereas the left hand side λ is just a linguistic construct. We could decorate every expression with a type, but that would get untidy. There will be other rules for specific operations on whatever actual datatypes are around, but this gives the underlying functional basis on which everything else depends.

We can see that ℰ⟦e⟧ρ is just a value in some semantic domain, which contains, presumably, some basic types and functions between values, and the type of ℰ is something like:

ℰ: Exp[A] → Env → A

where Exp[A] is set of expressions of type A (I’m not going to be rigorous about any of this, I’m assuming we have some type system where this sort of thing makes sense, and also I’m not going to worry about the difference between a syntactic type and a semantic type) and Env is the type of environments.

Just for fun, let’s make a distinction (not that there really is one here) between “ordinary” values and “semantic” values, with M[A] being the semantic value with underlying value type A (imagine an ML or Haskell style type constructor M, with a value constructor, also called M, though often we’ll ignore the distinction between the underlying type and the constructed type).

Now ℰ has type:

ℰ: Exp[A] → Env → M[A]

and the underlying value of a function of type A → B is now A → M[B].

We can also rewrite our semantic equations and take a little time persuading ourselves this is equivalent to the definitions above:

and we can see that we can just plug these definitions into our generic semantic equations above and get something equivalent to the specific state semantics.
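As a concrete (and hypothetical) rendering in Python: take M[A] to be State → (A, State), with unit and bind threading the state through, plus get and put as the state-specific primitives:

```python
# A state monad sketch: M[A] = State -> (A, State).

def unit(a):
    return lambda s: (a, s)

def bind(m, f):
    def m2(s):
        a, s1 = m(s)            # run m, then pass the new state to f(a)
        return f(a)(s1)
    return m2

# primitive operations on the state (here the state is just an integer)
def get():
    return lambda s: (s, s)

def put(s1):
    return lambda s: (None, s1)

# increment the state and return the old value
prog = bind(get(), lambda x: bind(put(x + 1), lambda _: unit(x)))
assert prog(41) == (41, 42)
```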

So, a monad is just a typed semantic domain together with the operations necessary to specify the standard functional constructions over that domain. Which sort of makes sense, but it’s nice to see it just drop out of the equations (and of course it’s nice to see that a standard denotational semantics for something like state does actually correspond quite closely with the monadic semantics).

None of this is new, in fact the use of monads to provide a uniform framework for program semantics goes right back to Eugenio Moggi’s original work in the 1980s (which was then taken up in functional programming where elements of the semantic domain itself are modelled as normal data objects).

After the topical excitement of the last couple of posts, let’s look at an all-time great – Leslie Lamport’s Bakery Algorithm (and of course this is still topical; Lamport is the most recent winner of the Turing Award).

The problem is mutual exclusion without mutual exclusion primitives. Usually, it’s described in the context of a shared memory system (and that is what we will implement here), but it works equally well in a message-passing system with only local state (each thread or process only needs to write to its own part of the store).

For further details, and Lamport’s later thoughts see http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html#bakery: “For a couple of years after my discovery of the bakery algorithm, everything I learned about concurrency came from studying it.” – and since Lamport understands more about concurrency than just about anyone on the planet, it’s maybe worth spending some time looking at it ourselves.

I’m not going to attempt to prove the algorithm correct, I’ll leave that to Lamport, but the crucial idea seems to me to be that a thread reading a particular value from another thread is a synchronization signal from that thread – here, reading a false value for the entering variable is a signal that the other thread isn’t in the process of deciding on its own number, therefore it is safe for the reading process to proceed.

Implementing on a real multiprocessor system, we find that use of memory barriers or synchronization primitives is essential – the algorithm requires that reads and writes are serialized in the sense that once a value is written, other processes won’t see an earlier value (or earlier values of other variables). This doesn’t conflict with what Lamport says about not requiring low-level atomicity – we can allow reads and writes to happen simultaneously, with the possibility of a read returning a bogus value – and in fact we can simulate this in the program by writing a random value just before a process selects its real ticket number, but once a write has completed, all processes should see the new value.
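Here’s a sketch of the algorithm in Python threads (relying on CPython’s GIL to provide the serialized reads and writes discussed above – real shared-memory C code would need the barriers; the busy-waits are pure spin loops, so we shorten the thread switch interval to keep things moving):

```python
# Lamport's bakery algorithm with Python threads, protecting a counter.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often, to speed up the spins

N = 3
entering = [False] * N
number = [0] * N

def lock(i):
    entering[i] = True
    number[i] = 1 + max(number)           # take the next ticket
    entering[i] = False
    for j in range(N):
        while entering[j]:
            pass                          # wait while j is choosing a ticket
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                          # wait for threads with earlier tickets

def unlock(i):
    number[i] = 0

counter = 0

def worker(i):
    global counter
    for _ in range(300):
        lock(i)
        counter += 1                      # the critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == N * 300
```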

Another essential feature is the volatile flag – as many have pointed out, this isn’t enough by itself for correct thread synchronization, but for shared memory systems it prevents the compiler from making invalid assumptions about the consistency of reads from shared variables.

A final point – correctness requires that ticket numbers can increase without bound; this is hard to arrange in practice, so we just assert if they grow too large (this rarely happens in reality, unless we get carried away with our randomization).