Detection of Bible Code

This article describes, in theory, the process of automating the detection of steganographic content hidden within the plain text of the Holy Bible. It also deals briefly with the concepts of steganalysis, probability, and the normal distribution, and how they all relate to Bible Code.

Introduction

Despite what many believe, Bible Code is not encryption, because the content we discover is not encrypted. It is better defined as linguistic steganography, which describes any intelligent message hidden within a text-based host.

In terms of Bible Code, our text-based host is the Holy Bible and our hidden intelligent message is our theory based on statistical evidence. In the following sections, we will briefly describe the concept of linguistic steganalysis and how it can be applied to Bible Code.

Linguistic Steganalysis

Linguistic steganalysis is a new concept in Bible Code research that we are slowly developing and integrating into the DivineCoders Bible Code software. What we are building is a means to detect Bible Code in the Holy Bible using an automated, programmatic process.

Steganography has been used to hide information in plain sight for thousands of years and is still used today in many electronic forms, such as the digital watermark. There are several ways to detect steganographic content, but not all of them apply to Bible Code.

This article focuses on linguistic steganography, where we are expected to detect the presence of hidden messages within a data set for which there is no original to compare against. This limits us to a few of its known detection methods: statistical, frequency, and logical analysis. However, we have determined that character frequency in the Holy Bible shows no real deviation from what is known as relative frequency, which rules out frequency analysis and leaves us with our two remaining viable detection methods.

Determining What’s Known

The process of Bible Code steganalysis begins with what is known. For this purpose we used a random data set containing more than 1 million terms, which we consolidated to a little more than 86,000 unique terms. This list is used as a reference in a brute-force algorithm designed to retrieve all possibilities at equidistant skips between 1 and 1,500, at each index between n and n+1000, and at lengths between 3 and 10 characters.

So, as our automated process analyzes the text of the Holy Bible, it focuses on blocks of text rather than the whole, quickly separating that which is known to be a valid English word from that which is unknown and assumed to be nonsensical gibberish.
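The search described above can be sketched as an equidistant-letter-sequence (ELS) scan against a word list. The following is a minimal illustration, not the DivineCoders implementation; the function name, the toy text, and the tiny word list are hypothetical, while the skip and length bounds mirror the parameters mentioned above.

```python
# Minimal sketch of an equidistant-letter-sequence (ELS) brute-force search,
# assuming a cleaned text (letters only, no spaces) and a set of known words.
# Names here are illustrative, not the DivineCoders API.

def els_candidates(text, wordlist, max_skip=1500, min_len=3, max_len=10):
    """Yield (word, start, skip) for every known word found at an
    equidistant skip within the given block of text."""
    n = len(text)
    for skip in range(1, max_skip + 1):
        for start in range(n):
            seq = text[start::skip][:max_len]
            for length in range(min_len, len(seq) + 1):
                candidate = seq[:length]
                if candidate in wordlist:
                    yield (candidate, start, skip)

# Example on a toy block of text: C-A-T and D-O-G appear at a skip of 2.
block = "xcxaxtxdxoxg"
words = {"cat", "dog"}
hits = set(els_candidates(block, words, max_skip=4))
```

Scanning block by block, as described above, keeps the cost manageable: the work grows with block size times the skip range, rather than with the length of the entire text.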

At the end of our initial evaluation of each block of text, we then perform a statistical t-test to help us automatically determine the significance of each possible candidate.

Statistical T-Test

Our t-test allows us to determine the significance of discoveries by comparing our probability value with the mean value calculated from a large population of samples, from which we derive the significance level used in the comparison. To understand this concept, one should reference a Galton board and observe the law of large numbers in action.
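One way to read the comparison above is as a one-sample t statistic: how many standard errors a candidate's score sits from the population mean. This is an assumption about what the test looks like, not the DivineCoders implementation, and the scores below are invented for illustration.

```python
import statistics
from math import sqrt

# Hypothetical sketch of the significance comparison: a candidate's score
# is measured against the mean of a large sample population using a
# one-sample t statistic, t = (x - mean) / (s / sqrt(n)).

def t_statistic(candidate_score, population_scores):
    mean = statistics.mean(population_scores)
    sd = statistics.stdev(population_scores)   # sample standard deviation
    n = len(population_scores)
    return (candidate_score - mean) / (sd / sqrt(n))

# Toy example: a tight population of scores and one unusually high candidate.
population = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50, 0.47, 0.53]
t = t_statistic(0.90, population)   # far above the mean of 0.50
```

A candidate whose t value falls far out in the tail of the distribution is the kind of outcome the article treats as significant; values near zero are what a random population would be expected to produce.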

To help imagine this process, think of a fair coin, and take for instance the encoded word STATISTICS found in the book of Isaiah, which has 10 characters. Discovering a word of that size is like flipping a coin and having it land on the same side 10 times in a row.
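The coin-flip analogy is easy to put into numbers. As a rough sketch (an assumption about how the analogy maps onto probabilities, not the article's exact model), a chosen side coming up 10 times in a row has probability 0.5 to the 10th power; if either side counts, the probability doubles.

```python
# Probability sketch for the coin-flip analogy above.

p_specific = 0.5 ** 10        # one chosen side, 10 flips in a row: 1/1024
p_same_side = 2 * 0.5 ** 10   # either side, all 10 flips matching: 1/512

print(p_specific)   # 0.0009765625
print(p_same_side)  # 0.001953125
```

Either way, the event sits well below one chance in five hundred, which is why a 10-character find is treated as a candidate worth testing rather than dismissed as routine.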

If we were to view a Galton board, we would immediately notice that such outcomes exist only in the outermost columns of the board, which suggests significance, whereas the columns located at the center of the device reflect what is expected under the normal distribution.
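The Galton board intuition can be checked directly: a board with 10 rows of pegs follows a binomial distribution, whose central bins dominate and whose outermost bins are rare. The sketch below assumes this standard binomial model of the board; it is an illustration of the intuition, not part of the article's software.

```python
from math import comb

# A ball passing 10 rows of pegs lands in bin k with probability
# C(10, k) / 2**10 -- the binomial distribution that approximates
# the normal curve described above.

n = 10
pmf = [comb(n, k) / 2 ** n for k in range(n + 1)]

center = pmf[5]      # the most likely, central bin
outermost = pmf[0]   # the rarest, outermost bin (same for pmf[10])
```

The center bin captures roughly a quarter of all balls, while each outermost bin captures fewer than one in a thousand, matching the article's point that only the extreme columns suggest significance.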

Therefore, our statistical t-test allows us to separate that which is the product of purpose from that which is thought to be random, and it is because of this method, combined with the brute-force search, that we are able to automate the steganalysis process in Bible Code.

Logical Analysis

Considering that we have defined Bible Code as steganography, and that our hypothesis expects an intelligent message to be encoded within the plain text of the Holy Bible, the final step in this process is, and should be, a logical analysis performed manually.

Here is where we begin to focus on the block of text defined by the index and length of each candidate. Our goal is to locate additional terms that score high on the t-test and that also correlate logically, until we form an intelligent message in adherence to our standards.

Conclusion

Automated detection is the future of Bible Code, and we at DivineCoders are working hard to bring this technology to your browser and PC. To keep up with our latest projects and research, please visit our Facebook page and click “like” to receive our weekly updates.
