There is a point of misunderstanding, though: I am aiming for a sample of random sequences. Each sequence would be five to ten characters, but the sample would comprise a few million such sequences. Thus, if my sample size is ten million strings, each string is ten characters, and there are roughly a million valid UTF-8 characters, then each character would appear in the sample an average of 100 times. It is a statistical approach: each item in the sample covers just a tiny portion of all possible values, but the whole sample includes all possible values multiple times. I tend to be a bit thorough when testing code I am not familiar with (my code for computing eigensystems of general matrices was tested on 100 million randomly generated matrices, with not one failure BTW).
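For concreteness, here is a minimal sketch of that sampling approach. The helper name `random_string` is mine, and I am assuming code points are drawn uniformly from 0..0x10FFFF with the UTF-16 surrogate range skipped, since those are not valid UTF-8 characters:

```perl
use strict;
use warnings;

# Hypothetical helper: one random string of 5 to 10 code points.
sub random_string {
    my $len = 5 + int rand 6;          # length 5..10
    my @cp;
    while (@cp < $len) {
        my $cp = int rand 0x110000;    # 0 .. 0x10FFFF
        next if $cp >= 0xD800 && $cp <= 0xDFFF;   # skip surrogates
        push @cp, $cp;
    }
    return pack 'U*', @cp;             # pack all code points into one string
}

my $s = random_string();
printf "%d code points\n", length $s;  # length counts characters, not bytes
```

Scaled up to ten million calls, that gives the sample described above; for a quick smoke test a handful of calls is enough.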

(pack('U', $int) is pretty much equivalent to chr($int), except it also guarantees the output is encoded as UTF-8, and, best of all, with the star (pack('U*', @ints)) it can pack multiple characters at once without an explicit join.)
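A small sketch of that equivalence, using two arbitrary code points as an example:

```perl
use strict;
use warnings;

my @ints = (0x263A, 0x263B);           # two arbitrary code points

# chr() per character, joined explicitly:
my $via_chr  = join '', map { chr } @ints;

# pack with the star does it in one call, no join needed:
my $via_pack = pack 'U*', @ints;

print $via_chr eq $via_pack ? "equal\n" : "different\n";   # prints "equal"
```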

My point was: when printing it out (through a handle with a UTF-8 encoding layer), Perl will write it out as valid UTF-8. Don't worry about that.

And in case you want no duplicates, you can simply redraw any character that collides with one already picked. With so many characters to choose from, that is virtually guaranteed to be faster than shuffling the whole array (with the Fisher-Yates shuffle) and then picking the first 10 code points. Or you can make a custom version of Fisher-Yates that stops shuffling after 10 iterations.
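Both ideas can be sketched briefly. The helper names `pick_distinct` and `partial_shuffle` are mine, and the pool sizes are placeholders:

```perl
use strict;
use warnings;

# Redraw-on-collision: draw $n distinct values in 0 .. $max-1.
# With a huge $max relative to $n, collisions are vanishingly rare.
sub pick_distinct {
    my ($n, $max) = @_;
    my %seen;
    $seen{ int rand $max } = 1 while keys %seen < $n;
    return keys %seen;
}

# Partial Fisher-Yates: shuffle only the first $n slots of @$pool,
# then take those slots; the rest of the array is never touched.
sub partial_shuffle {
    my ($n, $pool) = @_;
    for my $i (0 .. $n - 1) {
        my $j = $i + int rand(@$pool - $i);   # random index in [$i, $#$pool]
        @$pool[$i, $j] = @$pool[$j, $i];      # swap via array slice
    }
    return @$pool[0 .. $n - 1];
}

my @ten = pick_distinct(10, 0x110000);
my @pool = (0 .. 99);
my @also_ten = partial_shuffle(10, \@pool);
```

The partial shuffle does O($n$) work instead of O(pool size), which is the whole point of stopping after 10 iterations.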