Tuesday, 23 March 2010

Transitive constituent homomorphism

Last time, we looked at what it means to be a strong generating set (SGS) for a permutation group, and how to find one. We saw that with an SGS we can easily calculate the number of elements in the group, and test whether an arbitrary permutation is a member of the group.

This time, I want to start showing how strong generating sets can be used as the basis for further algorithms to investigate the structure of groups. In a previous post, I explained about normal subgroups and quotient groups. Over the next couple of posts, I want to show how, for groups having the appropriate form, we can find some of the easier normal subgroups and quotient groups. Specifically, in this post I want to look at intransitive groups, and in the next at imprimitive groups.

Many of the groups we have looked at so far - such as the symmetries of the pentagon, cube, and so on - have been "transitive" - meaning that given any two points, there is a group element which takes one to the other. (Or to put it another way, all the points "look the same".)

However, it is easy to find groups that are not transitive on their points. For example, if we take any graph which is not regular (that is, it has vertices of different valencies), then the symmetry group is not transitive (because no symmetry can take a vertex of one valency to a vertex of another valency).

In order to demonstrate the code that I want to look at this week, I'm going to need an example - so here are the generators of a group G:

> :load Math.Algebra.Group.Subquotients
> let gs = map p [ [[1,2,3]], [[4,5,6]], [[1,2],[4,5]] ]

We can think of this group as the symmetry group of some kind of plastic puzzle, as shown below. The puzzle consists of a blue and a red triangle. Valid moves consist of rotating the blue triangle, rotating the red triangle, or performing a double flip, which swaps a pair of blue points and at the same time a pair of red points.

Notice that the group is not transitive: there is no move that takes a blue point to a red point, or vice versa.

So we have a group G, acting on a set X. If G is not transitive on X, then we can write X as a disjoint union of subsets A, B, C, ..., such that G is transitive on each of the subsets. In our example, we can take A = [1,2,3], B = [4,5,6], and we see that G is transitive on A, and on B. A and B are called the transitive constituents (or the orbits) of G.

It's fairly straightforward to write Haskell code to find the orbits of a permutation group. This allows us to find:

> orbits gs
[[1,2,3],[4,5,6]]

[The implementation is left as an exercise.]
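For readers who want to try the exercise, the orbit computation is essentially a graph search: starting from a point, keep applying the generators until no new points appear. Here is one possible sketch, using a throwaway association-list representation of permutations rather than the library's type (so that it stands alone); orbitOf and orbitsOn are made-up names, not the library's:

```haskell
import Data.List (nub, sort, (\\))

-- a throwaway representation, just so the sketch stands alone:
-- a permutation as a list of (point, image) pairs; unlisted points are fixed
type Perm = [(Int, Int)]

-- apply a permutation to a point
(.^) :: Int -> Perm -> Int
x .^ g = maybe x id (lookup x g)

-- the orbit of a point: keep applying generators until no new points appear
orbitOf :: [Perm] -> Int -> [Int]
orbitOf gs x = go [x] [x]
  where
    go acc []     = sort acc
    go acc (y:ys) =
        let new = nub [z | g <- gs, let z = y .^ g, z `notElem` acc]
        in go (acc ++ new) (ys ++ new)

-- the orbits of a group on a point set: peel off one orbit at a time
orbitsOn :: [Perm] -> [Int] -> [[Int]]
orbitsOn _  []     = []
orbitsOn gs (x:xs) = o : orbitsOn gs (xs \\ o)
  where o = orbitOf gs x
```

Applied to the three generators of our example group, this returns [[1,2,3],[4,5,6]], matching the library's orbits function.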

Okay, so what's so interesting about intransitive groups? Well, an intransitive group nearly always has a non-trivial proper normal subgroup, so we can usually "factor" it into smaller groups. How?

Well, take one of the transitive constituents. In our example, let's take [1,2,3]. Then we can consider the restriction of the action of the group to just this constituent. For our generators, the restriction gives us:

[[1,2,3]] -> [[1,2,3]]
[[4,5,6]] -> []
[[1,2],[4,5]] -> [[1,2]]

It might look as if the first generator got taken to itself - but it is important to realise that these are elements of different groups. This would have been clearer if we had written out the permutations in row notation:

1 2 3 4 5 6      1 2 3
2 3 1 4 5 6  ->  2 3 1

Or if we had included singleton cycles in the cycle notation:

[[1,2,3],[4],[5],[6]] -> [[1,2,3]]

So this restriction of the action is a map from our group - a subgroup of Sym([1..6]) - to another group - a subgroup of Sym([1..3]).

If we call this map f, then it should be obvious that f has the following properties:
- f(1) = 1 (where 1 here means the identity element in the respective groups, not the point 1)
- f(g*h) = f(g)*f(h) (consider their action on a point x)
- f(g^-1) = f(g)^-1

Hence f is a homomorphism - a function between groups that preserves the group structure.
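We can spot-check these properties on our example group. The sketch below (again using a bare stand-in representation, not the library's types) restricts each generator to the constituent [1,2,3] and verifies f(g*h) = f(g)*f(h) pointwise; mult, restrictTo and sameOn are hypothetical helpers:

```haskell
-- bare-bones stand-ins, just so the check is self-contained
-- (the library's p, fromPairs etc. are not used here)
type Perm = [(Int, Int)]

-- apply a permutation to a point (unlisted points are fixed)
(.^) :: Int -> Perm -> Int
x .^ g = maybe x id (lookup x g)

-- g*h means "apply g, then h": x .^ (g*h) == (x .^ g) .^ h
mult :: [Int] -> Perm -> Perm -> Perm
mult domain g h = [(x, (x .^ g) .^ h) | x <- domain]

-- f: restrict a permutation to a transitive constituent
restrictTo :: [Int] -> Perm -> Perm
restrictTo as g = [(x, x .^ g) | x <- as]

-- permutations of a constituent are equal iff they agree at every point
sameOn :: [Int] -> Perm -> Perm -> Bool
sameOn as g h = and [x .^ g == x .^ h | x <- as]
```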

The kernel of a homomorphism is the set {g <- G | f(g) = 1}. The kernel is always a normal subgroup:
- It's obviously a subgroup. (Why?)
- f(h^-1 g h) = f(h)^-1 f(g) f(h) (by the homomorphism properties).
- So if f(g) = 1, then f(h^-1 g h) = 1.
- In other words, if g is in the kernel, then so is h^-1 g h - which is just the condition for the kernel to be a normal subgroup.

The image of a homomorphism is the set {f(g) | g <- G}. This is naturally isomorphic to the quotient group G/K, where K is the kernel: for k <- K, we have f(gk) = f(g) f(k) = f(g) (since f(k) = 1), so elements of the image are in one-to-one correspondence with cosets of the kernel, and it is easy to check that the group operations correspond too.

Hence any homomorphism whose kernel K is non-trivial and proper induces a "factorisation" of the group into smaller groups K and G/K - the kernel and the image.
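For a group as small as our example we can check the counting consequence |G| = |K| * |G/K| by brute force, enumerating the whole group by closure under multiplication (a sanity check only - nothing like this is needed once we have an SGS). Here |G| = 18, the kernel fixing [1,2,3] has order 3, and the image on [1,2,3] is all of Sym([1,2,3]), order 6:

```haskell
import Data.List (nub)

-- represent a permutation of [1..n] by its row of images
type Row = [Int]

applyR :: Row -> Int -> Int
applyR r x = r !! (x - 1)

-- g*h = apply g, then h
multR :: Row -> Row -> Row
multR g h = [applyR h (applyR g x) | x <- [1 .. length g]]

-- close a generating set under multiplication
-- (hopelessly inefficient in general, but fine for a group of order 18)
closure :: [Row] -> [Row]
closure gens = go [[1 .. length (head gens)]]
  where
    go elts =
        let fresh = nub [p | e <- elts, g <- gens
                           , let p = multR e g, p `notElem` elts]
        in if null fresh then elts else go (elts ++ fresh)
```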

We would like a Haskell function that can find the kernel and image of a transitive constituent homomorphism for us, like so:

> transitiveConstituentHomomorphism gs [1,2,3]
([[[4,5,6]]],[[[1,2,3]],[[1,2]]])

So how do we do that? Well, the basic idea is that we will convert all our permutations on X into permutations on Either A B, where A is the transitive constituent we are restricting to, and B is the rest. In our group, we have X = [1..6], A = [1,2,3], B = [4,5,6], so for example we will convert [[1,2],[4,5]] into [[Left 1, Left 2], [Right 4, Right 5]].
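This relabelling step can be sketched as follows; toEither is a hypothetical helper, not the library's code:

```haskell
-- relabel the points of a permutation (in association-list form) as
-- Left or Right, according to whether they lie in the constituent
type Perm a = [(a, a)]

toEither :: [Int] -> Perm Int -> Perm (Either Int Int)
toEither as g = [(tag x, tag y) | (x, y) <- g]
  where
    -- because the constituent is an orbit, a point and its image
    -- always receive the same tag
    tag z = if z `elem` as then Left z else Right z
```

Since the constituent is an orbit, the Left and Right parts never mix, which is what makes the splitting in the next step possible.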

Next, we construct a strong generating set for the new group. Since the SGS algorithm looks for point stabilisers /in order/, it will first find point stabilisers for all the Left points, and then for all the Right points. So our SGS will split into two parts:
- A first part consisting of elements that move some of the Left points. If we restrict this part to its action on the Left points, we will have an SGS for the image.
- A second part consisting of elements that move only the Right points. This part is an SGS for the kernel.

[Note that as our SGS algorithm uses randomness, we might get a different set of generators each time we call it.]

Okay, so we know what to do. Time for some code. We need first of all a few utility functions for packing and unpacking our group elements into the Left / Right parts of the Either type.

isLeft (Left _) = True
isLeft (Right _) = False

isRight (Right _) = True
isRight (Left _) = False

unRight = fromPairs . map (\(Right a, Right b) -> (a,b)) . toPairs
-- note that this is partial - it assumes the permutation moves only Right points, as elements of the kernel do

restrictLeft g = fromPairs [(a,b) | (Left a, Left b) <- toPairs g]
-- note that this is doing a filter - taking only the left part of the action - and a map, unLefting

A more substantial example is the symmetry group of the Rubik cube, rubikCube. This group too is intransitive, and it's obvious really why: the odd numbered points are corner faces, and the even numbered points are edge faces - and there is no valid move that can take a corner face to an edge face, or vice versa. So we can define:

[cornerFaces,edgeFaces] = orbits rubikCube

We then have two different ways to split the group, depending which transitive constituent we take:

(kerCornerFaces,imCornerFaces) = transitiveConstituentHomomorphism rubikCube cornerFaces
(kerEdgeFaces,imEdgeFaces) = transitiveConstituentHomomorphism rubikCube edgeFaces

Why are they not equal? It is because in fact we can't operate on corners and edges totally independently. The following examples show that we can't swap a pair of corners without also swapping a pair of edges, and vice versa:

> isMemberSGS rubikSGS (p [[1,3],[17,41],[19,23],[2,6],[18,44]])
True
> isMemberSGS rubikSGS (p [[1,3],[17,41],[19,23]])
False
> isMemberSGS rubikSGS (p [[2,6],[18,44]])
False

(This is similar to the way that in our original example, we couldn't swap a pair of blue points without also swapping a pair of red points.)

That's it for now. Next time, I'll show how we can factor the corner and edge groups still further, to arrive at a fairly complete understanding of the structure of the Rubik cube group.