There is this programming/mathematical problem I have; it is the problem of generating a sequence of $n$ unique random numbers, where each number is an element of the set $S = \{0, 1, 2, \cdots, N-1\}$. (Like permuting the numbers, but without using memory.)

Recently I more or less solved the problem on my own, and I've transcribed the result below, without the use of formal math, just programming.

Now, it seems that this problem has already been solved in the past by constructing an LFSR (linear feedback shift register), by Galois or some other mathematician. (I actually found out about LFSRs first, and then built my "kind of" solution, because I couldn't understand the LFSR Wikipedia article and didn't want to just copy-and-paste the source code.)

My problem is that I don't understand a word of formal mathematics, and I would like to learn how to solve this problem and compare my solution with what an LFSR produces, so my question is:

Can you look at what I did below and somehow translate it into formal math? Then I can compare what I did with what the formal mathematicians were doing, and why they were doing it.

Remember, I only really understand programming terms, e.g., I only understand memory and its addressing, memory states, strings of zeros and ones, etc. I don't understand primitive polynomials, field theory, and other mathy words.

Thanks very much in advance, if you can help me I promise a beer next time I see any one of you.

Here is the code I produced. (You can run it in your browser. I can't guarantee that it's correct, but I believe the idea should be close enough, and I lack any other idea of how to solve it.)

"(Like permuting the numbers, but without using memory.)" That can't be done. The pigeon-hole principle suffices to show that if you want to be able to generate any possible permutation then you need to start with internal state capable of representing $n!$, and Stirling's formula tells you that that's $\Theta(n \log n)$ bits.
– Peter Taylor, Apr 19 '11 at 20:43

3 Answers

What you're trying to do (as far as I can tell) is generate a permutation. You do this by starting with the identity permutation, swapping element $n-1$ with a random element $t \leq n-1$, then swapping element $n-2$ with a random element $t \leq n-2$, and so on down to element $1$ (this is the Fisher-Yates shuffle). Then you output a random cyclic shift of your permutation; there seems to be no reason to do this, since the permutation is already random.
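The procedure described above can be written down in a few lines. A minimal sketch in Python (the function name is mine, chosen for illustration; the OP's original code is not reproduced here):

```python
import random

def fisher_yates(n):
    """Return a uniformly random permutation of 0..n-1.

    Working from i = n-1 down to 1, swap position i with a
    randomly chosen position t <= i (possibly i itself).
    """
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        t = random.randint(0, i)          # 0 <= t <= i, inclusive
        perm[i], perm[t] = perm[t], perm[i]
    return perm
```

Calling `fisher_yates(10)` returns some reordering of `0..9`; the final cyclic shift mentioned in the question is omitted, since, as noted, the permutation is already random.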

There's no connection to LFSRs. LFSRs are used to generate "random looking" bits, although if all you do is take an LFSR, the output bits won't look random if you look closely enough.

LFSRs were not invented by Galois. His name is attached to this business because there's a connection between LFSRs and Galois fields, which are named in his honor.

Your solution is correct. The proof is simple. Every number among $0,\ldots,N-1$ has an equal chance of being last. You pick one number and fix it. Then every other number has an equal chance of being second to last, under the condition that the last number is fixed. You pick one more number and fix it, and so on. At every step, the remaining numbers have an equal chance of being at position $k$ given that all numbers at greater positions are fixed. To prove this fact, note that for each number in the $k$th position there are exactly $(k-1)!$ permutations of the remaining numbers. The numbers of permutations are equal $\rightarrow$ the chances are equal.
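The equal-chances claim can also be checked empirically: shuffle three elements many times and verify that all $3! = 6$ orderings come up at roughly equal rates. A rough sketch (the shuffle is re-implemented inline so the snippet is self-contained):

```python
import random
from collections import Counter

def shuffle3():
    """Fisher-Yates on [0, 1, 2]: swap position i with a random t <= i."""
    perm = [0, 1, 2]
    for i in range(2, 0, -1):
        t = random.randint(0, i)
        perm[i], perm[t] = perm[t], perm[i]
    return tuple(perm)

counts = Counter(shuffle3() for _ in range(60000))
# all 6 permutations appear, each close to 60000 / 6 = 10000 times
```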

Contrary to the other comments, LFSRs can be used to generate a pseudo-random permutation of elements, which may be good enough for some applications. A maximal LFSR will pass through every possible state (except the state in which the register is all zeros), so by looking at the state, rather than the output bits, and interpreting it as a binary number (not necessarily two's complement!), you can permute an arbitrary number of items in a single, repeatable, pseudo-random way. If you need variety, you could permute the order of the bits when reading the register (i.e., essentially apply a straight P-box to the register contents).
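As a concrete illustration of this point, here is a 4-bit Galois LFSR. The taps (mask `0xC`, corresponding to the primitive polynomial $x^4 + x^3 + 1$) are a standard textbook choice, not something from the answer itself. Reading the *state* rather than the output bit yields every value 1 through 15 exactly once before the cycle repeats:

```python
def lfsr_states(mask=0xC, seed=1):
    """Walk a Galois LFSR (right-shift form) through one full cycle
    and return the list of states visited, starting from `seed`.

    With mask 0xC (polynomial x^4 + x^3 + 1, which is primitive),
    the 4-bit register visits all 15 nonzero states before repeating.
    """
    s = seed
    states = []
    while True:
        states.append(s)
        lsb = s & 1      # output bit
        s >>= 1          # shift right
        if lsb:
            s ^= mask    # feed the output bit back into the taps
        if s == seed:    # back to the start: one full period
            return states
```

So `lfsr_states()` is a repeatable pseudo-random ordering of 15 items (subtract 1 from each state to get a permutation of 0..14). Note that the all-zeros state is unreachable, which is why a $k$-bit register covers $2^k - 1$ items rather than $2^k$.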

Again, perhaps a bit inadequate from a mathematical point of view, but possibly useful in practice.