Now, sequences that are type A are used all the time, most explicitly in cryptography. If I run an RC4 cipher with a random key I can generate megabytes of data that will pass type A tests. However, you can certainly code RC4 in less than a megabyte on a reasonable UTM. And even if you couldn't do it in less than a meg, you can just generate more RC4 data. Thus my type A random sequence is not very type B.
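To make that concrete, here's a minimal sketch of an RC4 keystream generator (the standard KSA/PRGA construction). The whole program is a few hundred bytes, yet it can emit as many statistically random-looking bytes as you like:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA): permute S based on the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit n keystream bytes
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# The same short program plus a short key deterministically regenerates
# the whole "random" stream -- which is exactly why it fails type B.
stream = rc4_keystream(b"some random key", 1024 * 1024)
```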

Now, sequences that aren't type A random almost certainly aren't type B. I don't have anything like a proof for this yet, but it seems sensible. However, I don't think you can ever prove that you have the shortest program that generates a given sequence without brute-forcing all programs up to that length. I think the proof for that is in this paper, but I need to reread it as I haven't looked at it for ages.

So, given the set of all sequences of length n, some will be type A random and some of those will be type B random (both of those lines are fuzzy, but I don't think that matters right now). Assuming you can find the shortest program that generates each of those sequences, you can reinterpret each of those programs as a number, and so give each member of the set a number (and thus an order). Some sequences will have smaller numbers than others, and if we generated every possible program in length order we would expect to see outputs in roughly the same order as our ordered set.
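A toy sketch of that ordering, just to pin the idea down. Here "programs" are stand-in bitstrings enumerated in shortlex order (length first, then lexicographically), so each one doubles as its own index; a real version would run each string on an actual UTM, which I'm not attempting here:

```python
from itertools import product

def programs_in_length_order(max_len: int):
    # Enumerate all bitstrings shortest-first; the enumeration position
    # is the "number" we assign to each program, so shorter program
    # means smaller number.
    index = 0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield index, "".join(bits)
            index += 1

for i, prog in programs_in_length_order(2):
    print(i, prog)
```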

Now, this is where Wolfram's sequences come in. Mathematica's random function actually runs a rule 30 CA and uses the values of the center column as random data. This gives type A randomness (so he says). However, what he's calling complexity in ANKOS is type A randomness, and 'his' (pompous twit) 'complexity generating' CAs are simply low-numbered members of the set of sequences.
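The rule 30 trick is tiny to write down, which is rather the point: a program this short generating the stream is exactly why the output can't be type B random. A sketch, starting from the usual single live cell (the update is left XOR (center OR right)):

```python
def rule30_center_column(steps: int) -> list[int]:
    # Wide enough that the pattern never reaches the edges
    width = 2 * steps + 1
    row = [0] * width
    row[steps] = 1  # single live cell in the middle
    bits = [row[steps]]
    for _ in range(steps - 1):
        new = [0] * width
        for i in range(1, width - 1):
            # Rule 30: new cell = left XOR (center OR right)
            new[i] = row[i - 1] ^ (row[i] | row[i + 1])
        row = new
        bits.append(row[steps])
    return bits

print(rule30_center_column(16))
```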

Like I said, that was just me writing stuff down to see what I'm thinking.