Taking the suggestion from Kernighan and Pike's The Practice of Programming, I wrote another version of their Markov Chain program (chapter 3) that allows for different length prefixes. It works best with shorter prefixes, as they are more likely to occur in the text than longer ones.

To improve on this, you may have to rethink your data structures. I've had good results by weighting the random choice of the next token based on its frequency of occurrence after the prior n tokens (where n == 2 in your code).

One approach would be to extend your data structure along the lines of
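One possible shape for such a structure (a sketch only, not the original poster's code, and in Python rather than the thread's Perl) maps each n-token prefix to a count of every token seen after it:

```python
from collections import defaultdict

def build_chain(words, n=2):
    """Map each n-word prefix to a count of every word that follows it."""
    chain = defaultdict(lambda: defaultdict(int))
    for i in range(len(words) - n):
        prefix = tuple(words[i:i + n])
        suffix = words[i + n]
        chain[prefix][suffix] += 1
    return chain

words = "He didn't come in a plane. He didn't come in a Jeep.".split()
chain = build_chain(words)
# chain[("in", "a")] holds {"plane.": 1, "Jeep.": 1}
```

Keeping counts rather than a flat list of suffixes is what makes the frequency-weighted choice described above possible.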

Thanks, I like the idea of using frequency counts very much. I turned to The Perl Cookbook for some help, and the results are below. Note that the default value for $maxwords is much lower for this version to prevent the program from taking a very long time to complete, as the subroutine weighted_suffix() often needs to loop many times when there is a low number of suffixes for a given prefix.
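The weighted_suffix() code referred to above isn't shown here, but the looping it describes can be avoided: once the counts are in hand, a single weighted draw suffices. A Python sketch of that idea (the function name simply echoes the post; the Perl original may differ):

```python
import random

def weighted_suffix(counts):
    """Pick one suffix, weighted by how often it followed the prefix.

    counts: dict mapping suffix word -> occurrence count.
    """
    suffixes = list(counts)
    weights = [counts[s] for s in suffixes]
    return random.choices(suffixes, weights=weights, k=1)[0]

# With counts {"Jeep.": 1, "pouch": 1, "plane.": 1}, each suffix comes
# back about a third of the time.
```

Because the draw is done in one step, the running time no longer depends on how few suffixes a given prefix has.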

There are some good examples of Markov chain code floating around on here that break up the text by letter rather than word. In those cases I've seen the best results with 4|1 partitioning. Sifting through it is sometimes tedious, but some really funny stuff pops up once in a while. The letter-by-letter mode has a creepy ability to create new words that seem to make sense.
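The 4|1 partitioning described above (a four-character prefix emitting one character at a time) might be sketched like this; the sketch is in Python, and the seed handling and lengths are assumptions:

```python
import random
from collections import defaultdict

def letter_babble(text, prefix_len=4, length=200):
    """Character-level Markov chain: prefix_len chars predict the next char."""
    follows = defaultdict(list)
    for i in range(len(text) - prefix_len):
        follows[text[i:i + prefix_len]].append(text[i + prefix_len])
    prefix = text[:prefix_len]          # seed with the text's opening chars
    out = [prefix]
    for _ in range(length):
        choices = follows.get(prefix)
        if not choices:                 # prefix never seen mid-text: stop
            break
        ch = random.choice(choices)
        out.append(ch)
        prefix = prefix[1:] + ch        # slide the window one character
    return "".join(out)
```

Because the window slides one character at a time, invented-but-plausible words fall out naturally, which is the "creepy ability" the post describes.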

I'm currently implementing one (looking at as few others as possible) that uses just words - boring, and it makes many mistakes.

But I'm planning on rewriting it shortly after it's done, this time with digraphs (two-character pairs, including punctuation and spaces, although probably not line breaks), after learning how useful those can be for old-school cryptography - the digraphs should theoretically be better than trigraphs or single letters.

Just a thought.
(My current problems lie less in the above theory or programming - all very easy - and more in the way I strip the text and where I get it from... trying several boards, as well as doing it in an amusing way in newsgroups.)

I need some help with a Markov algorithm for the following question:
The Markov chain algorithm will allow you to write a program that analyzes your current publication's texts and generates random text that uses phrases in a manner similar to the input text.

You ask how this works and she explains:

Find some body of text (in our case, text files) that you want to imitate.
For every pair of words that occurs in the text, keep track of each word that can follow that pair of words. So, for every pair of words, you would know a) which words followed that pair of words and b) with what probability those words might follow the pair. (See examples below.)

Using the information gathered in the previous step, start with a pair of two consecutive words ($word_one and $word_two) that occur in the text, print those two words, then randomly choose the next word ($next_word) according to the probability that it would follow those two words. Print that word. Now use the second word ($word_two) and the new word ($next_word) as your two consecutive words and repeat this process until you have generated the amount of text you want or hit a word pair that has no next word.
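The two steps above can be sketched as follows. The assignment is in Perl ($word_one, $word_two, $next_word), but the same shape in Python, with w1/w2/nxt standing in for those variables; starting from the text's first pair is an assumption:

```python
import random
from collections import defaultdict

def build_pairs(words):
    """For every consecutive word pair, count each word that follows it."""
    pairs = defaultdict(lambda: defaultdict(int))
    for w1, w2, nxt in zip(words, words[1:], words[2:]):
        pairs[(w1, w2)][nxt] += 1
    return pairs

def babble(words, maxwords=50):
    pairs = build_pairs(words)
    w1, w2 = words[0], words[1]          # a starting pair from the text
    out = [w1, w2]
    while len(out) < maxwords:
        counts = pairs.get((w1, w2))
        if counts is None:               # pair with no next word: stop
            break
        nxt = random.choices(list(counts), weights=list(counts.values()))[0]
        out.append(nxt)
        w1, w2 = w2, nxt                 # slide the pair forward
    return " ".join(out)
```

random.choices with per-suffix counts as weights gives exactly the "according to the probability" behavior the step describes.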

Let us look at an example from The New Testament According to Dr. Seuss:

He didn't come in a plane.
He didn't come in a Jeep.
He didn't come in a pouch
Of a high jumping Voveep.

If we analyze the word pairs, we see the following pairs of words in the text:

He didn't come [3, 100.0%]
Jeep. He didn't [1, 100.0%]
Of a high [1, 100.0%]
a Jeep. He [1, 100.0%]
a high jumping [1, 100.0%]
a plane. He [1, 100.0%]
a pouch Of [1, 100.0%]
come in a [3, 100.0%]
didn't come in [3, 100.0%]
high jumping Voveep. [1, 100.0%]
in a Jeep. [1, 33.3%] pouch [1, 33.3%] plane. [1, 33.3%]
plane. He didn't [1, 100.0%]
pouch Of a [1, 100.0%]

We can see that the word pair He didn't occurred three times, each time followed by the word come (at 100% probability). And the word pair in a occurred three times, followed by either Jeep., pouch, or plane. (each of these with a 33.3% probability).
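The [count, percent] figures in the table above could be computed along these lines (a sketch; the exact layout the assignment expects is a guess):

```python
from collections import defaultdict

def show_pairs(words):
    """Render each word pair with its next words, counts, and percentages."""
    pairs = defaultdict(lambda: defaultdict(int))
    for w1, w2, nxt in zip(words, words[1:], words[2:]):
        pairs[(w1, w2)][nxt] += 1
    lines = []
    for (w1, w2), counts in sorted(pairs.items()):     # alphabetical by pair
        total = sum(counts.values())
        cells = " ".join(
            f"{nxt} [{n}, {100 * n / total:.1f}%]"
            for nxt, n in sorted(counts.items(), key=lambda kv: -kv[1])
        )
        lines.append(f"{w1} {w2} {cells}")
    return "\n".join(lines)
```

Sorting the pairs as tuples gives the ASCII ordering seen in the example (capitalized pairs before lowercase ones), and sorting next words by descending count matches the "then by decreasing frequency" requirement.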

Your task is to write a program called babble that will read text from <> and apply the Markov chain algorithm to generate random text that reads like the input text.

Your program will also take three options (you are advised to use Getopt::Long qw(GetOptions) but you may use other methods if you insist):

--words (the total number of words to generate)
--paragraphs (the number of words per paragraph)
--show_pairs (show the word pairs and frequencies as in the example above, sorted alphabetically by word pair, then by decreasing frequency for the next words). If --show_pairs is given as an option, your program should not do any babbling; it should just output the table and exit.

You are advised to implement --show_pairs first. This will require designing a data structure to store the "word pair" to "next word" mappings (when you hear "map", you might think "hash" or "hashref") and then writing a subroutine to load/build this data structure from the input text. Don't worry about capitalization and punctuation -- you can treat anything that's not whitespace as word characters (i.e., @words = split() is a perfectly acceptable construct to use to get your words). Once you have --show_pairs working, you should be able to reproduce the table from the example above.
