> It's interesting that the current issue of Bible Review
> (hardly a conservative magazine)--the one I got in the mail last
> week (it may not be on the newsstands yet)--has an article about
> much the same thing with respect to the Torah, and how some
> respected scientists in Israel, I believe, did a scientifically
> and statistically valid series of studies proving this phenomenon
> in 1988, but their results were ignored because no one wanted to
> admit the compelling proof they were offering that the Torah's
> source was not human. The renegade "scholar," archeologist and
> former Baptist minister but now adherent of the 2-covenant theory
> and a strong proponent of rabbinic interpretation of the
> scriptures--Vendyl Jones from here in Arlington, Texas--publishes
> a newsletter, and his most recent issue deals with this same
> subject of the "Torah codes"--though his discussion is much more
> vehement and passionate.

I have not seen the article you referenced, but the topic is one
that has come up periodically on the bible-related lists to which
I subscribe.

The following analysis indicates that the finding of "hidden words"
is more a natural phenomenon of the properties of the alphabet and
vocabulary of the Hebrew language than a supernatural occurrence.

> The thesis is that set patterns of words, or repetitions
> of significant words, are encoded in the text of the New
> Testament. I'd have to read the book to remember much more than
> that.

I have not seen any similar claims for the New Testament
except what you posted, but you did not state whether the
author was making claims for the Greek or Authorized version.

There have been several questions concerning computer searches that
use letter-skipping algorithms to look for special meanings in the
Hebrew scriptures, or for some indication that these texts are indeed
divine because they contain so many other messages now retrievable
by computer.

This topic appeared in May 1992 on the AIBI-L discussion list,
and I posted the following analysis, which indicates that the
attributes of the Hebrew language, together with elementary
probability and combinatorics, would lead to the results found.

-----------------------------------------------------------------

Let me posit a method of evaluating these "skipping letter"
techniques based upon the nature of the data being studied.

Suppose, first of all, that there is a language composed of one-
character words. Every word in the language is one character.
Chinese is an example of such a language. In one sense, the
written Chinese language is composed of an alphabet with 30,000
different characters in it.

Suppose, again, that a biblical text in such a language were to
be examined with a computer using the letter-skipping algorithm.
The first valid question is, "Why?" Why would someone think that
taking sentences made up of words, and eliminating every N-th word
(remember, every letter is a word) would have any meaning, at all?
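For concreteness, the letter-skipping procedure under discussion can be
sketched in a few lines of Python; the sample verse and skip value are my
own illustration, not taken from any of the studies:

```python
# A minimal sketch of the letter-skipping idea: strip the text down to
# its letters, then read off every N-th one starting at some offset.
def skip_letters(text, start, n):
    """Return the letters at positions start, start+n, start+2n, ..."""
    letters = [c for c in text if c.isalpha()]  # drop spaces and punctuation
    return "".join(letters[start::n])

sample = "in the beginning god created the heavens and the earth"
print(skip_letters(sample, 0, 5))  # → "ibnoahvne" ("noah" appears by accident)
```

Even this toy example turns up a familiar name by pure chance, which is
precisely the point of the analysis that follows.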

If someone presupposes verbal inerrancy, it would be a ridiculous
thing to remove some of the inerrant words, unless, of course, the
complete text itself said that messages could be understood by
removing every N-th word.

Someone presupposing human authorship full of discrepancies would
be thoroughly ridiculed for making such a study, since the resulting
sentences, whether they made sense or nonsense, could not be taken
as representing the original (unless the original said to do it).

So the question remains, "Why?", when a one-character-per-word
language is considered, and no rationale has yet been offered for
undertaking such a study.

Next, consider a language whose words consist of two characters each.
Chinese can also be considered as an example, using the "dictionary
order" of strokes as a means of dividing the pictographic words
into two components each. Again, even with this simplified
representation of the words, there are thousands of "letters"
of which words are composed, and taking random ones out would
seem an exercise in futility.

When we get to a language of three letter words, we get to an area
of optimum expectations. I say this because an alphabet of only
20 letters can be used to generate a language of 20*20*20=8000 words.
This is an "optimum" because 8000 words constitute a pretty complete
vocabulary. For example, Strong's Concordance indexes 8674 Hebrew
words in the bible, and Hebrew is an example of a language most of
whose words consist of three letters. The Hebrew alphabet will
allow 22*22*22 = 10648 three letter combinations.
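The counts in the preceding paragraph are easy to verify; a minimal
sketch, using the alphabet sizes and word length quoted above:

```python
# Number of distinct word-shapes of a given length over a given alphabet.
def possible_words(alphabet_size, word_length):
    return alphabet_size ** word_length

print(possible_words(20, 3))  # 8000 from a 20-letter alphabet
print(possible_words(22, 3))  # 10648 from the 22-letter Hebrew alphabet
```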

Continuing with the postulation, a language of 4-letter words would
need only 10 letters in its alphabet to generate a 10,000-word
vocabulary, and even fewer letters are needed if words can be longer
than 4 letters. On the other hand, if the alphabet size is fixed,
and word length can grow beyond four letters, there are
many possible letter combinations that are not words in the
language. In English, for example, with a fixed alphabet of
26 letters, there are 26*26*26*26*26*26 = approx 309 million
possible six-letter words.

The probability that a random selection of six characters is a word
in the basic English language (8000 words) is 8000 / 309 million =
approx .000026, i.e., very small. On the other hand, the probability
that a random selection of three characters in Hebrew is a word is
8000 / 10648, approx .75, or very high. This is what I meant by
saying languages with three-letter words yield optimum expectations.
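The two probabilities can be recomputed directly; the vocabulary size
of 8000 words is the figure assumed in the text:

```python
# Chance that a random string of the given length is a word, assuming a
# vocabulary of vocab_size words among alphabet_size ** word_length strings.
def word_probability(vocab_size, alphabet_size, word_length):
    return vocab_size / alphabet_size ** word_length

print(f"{word_probability(8000, 26, 6):.6f}")  # 0.000026 (English, 6 letters)
print(f"{word_probability(8000, 22, 3):.2f}")  # 0.75 (Hebrew, 3 letters)
```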

If, in addition, you remove the vowel constraints of the spoken
language, and work only with the consonants, the probability
of a match is even greater. Consider the vowelized combinations
of the consonants H-T-L-R, with any of the five vowels filling
each of the two vowel positions. [The word "hitler" was chosen
because the original paper used that example.]

The 25 possible vowelized combinations become one consonant-only word,
HTLR. Removing vowels thus reduces the search by a factor of 25, and
the probability is increased even more when the target language has
two T sounds (as Hebrew does, with tet and tav), so that HTLR can be
spelled in more than one way.
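The factor of 25 can be checked by enumerating the spellings; the vowel
set and the two slot positions are the assumptions stated above:

```python
from itertools import product

VOWELS = "aeiou"

def consonant_skeleton(word):
    """Strip the vowels, leaving only the consonant pattern."""
    return "".join(c for c in word if c not in VOWELS)

# All ways of filling the two vowel slots in h_tl_r:
spellings = {f"h{v1}tl{v2}r" for v1, v2 in product(VOWELS, repeat=2)}
skeletons = {consonant_skeleton(w) for w in spellings}

print(len(spellings))  # 25 distinct vowelized forms
print(skeletons)       # {'htlr'}: all collapse to one consonant-only word
```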

As an alphabetic, written language, Hebrew lends itself to
combinatorial studies, especially since the domain of study is
entirely fixed. Laying aside the textual variants for a moment,
the entire corpus of biblical languages is fixed, and will never
change. New words like "television," "radar," and "laser" do
not occur, and will not occur in the biblical languages, even
though modern Hebrew and Greek may invent words for these inventions.

My point is that a language with three-letter roots is unique in
its ability to be studied exhaustively with computers, and to
generate valid combinations of words by "skipping letter" algorithms.
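A small experiment along these lines, with a consonant string and target
word I invented for the purpose, shows how readily a short target turns
up at some starting point and skip distance:

```python
# Brute-force search: does `target` appear when reading every `skip`-th
# letter from some starting position?  Text and target are invented.
def find_skips(text, target):
    hits = []
    for skip in range(1, len(text)):
        for start in range(len(text)):
            if text[start::skip][:len(target)] == target:
                hits.append((start, skip))
    return hits

print(find_skips("btmkzsrdnplgh", "bkr"))  # [(0, 3)]
```

The search space (every start crossed with every skip) grows quickly with
text length, which is why even unlikely targets are eventually found
somewhere in a long enough text.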

My question is, "Why spend the effort?" The target sentences are
invented by the investigators and found by the computer skipping-
letter algorithms.