Wednesday, November 12, 2008

“Readers will find this book unkind to open theism” (9). To what extent? “We have [in open theism], then, a fundamentally different god, not merely a different version of God” (230). Such are the opening and closing remarks of a devastating analysis of open theism. Bruce Ware, Senior Associate Dean of the School of Theology at The Southern Baptist Theological Seminary, is the author of God’s Lesser Glory: The Diminished God of Open Theism. The book is organized in three main sections: (1) What is it? (2) What’s wrong with it? (3) What difference does it make? There is one question asked and answered on virtually every page of this book. It is the very question concluding a series of questions presented in the first chapter: “is such a God the God of the Bible?” (18). The answer returns again and again as an emphatic “No.”

The introductory chapter provides the motivation for the reader to be concerned about the issue. Ware details some of the influence open theism has had in the Baptist General Conference as evidence that this issue is not confined within the walls of academia, but is in fact finding refuge and strength in congregations around the country. Not all churches have opened their gates to this teaching, as exemplified by the Southern Baptist Convention. Publishers such as InterVarsity Press and Baker Books, along with Christianity Today, have provided a platform from which open theism has reached the Christian church at large. Though this chapter is focused on understanding the importance of the issue, the analysis of open theism throughout the book solidifies this theme.

Theologies usually have some measure of foundation in previously established doctrines, and Arminianism turns out to be the foundation of open theism. Ware explains how Arminianism in its traditional expression is seen as faulty in the eyes of open theists because it does not allow for true libertarian freedom. Therefore open theism begins with the Arminian foundation of God’s love, human freedom, and genuine worship, and seeks to be consistent in its understanding of the relationship between God’s omniscience and man’s free will. The reconciliation of God’s omniscience with man’s freedom is the root issue in open theism. Ware explains, from the open theist perspective, why they perceive traditional models of understanding as faulty and their own model as viable.

The “perceived” viability of open theism is presented in chapter three as Ware brings forth the primary tenets that open theism attempts to derive from Scripture. The first major principle is that God’s openness allows for real relationship between God and people. Because God learns and can be surprised by human actions, he can have a real relationship and not one based on absolute foreknowledge. Second, because God does not know the future, everything he does involves some measure of risk. God risked rejection when he created the world for the sake of relationship, and he lost the bet, so to speak. Third, because God cannot always know the outcome of his decisions, it is not uncommon for him to repent, as in the decision to flood the world. Fourth and perhaps most important, because God does not know the future and allows people to have libertarian freedom, he is not to blame when tragedy strikes. Here in summary form are the primary issues which Ware addresses in detail throughout the rest of the book.

If one could conclusively demonstrate the fallacy of open theism’s rejection of the doctrine of God’s exhaustive foreknowledge, the debate would come to an abrupt end. This is exactly what Ware has accomplished in chapter four. Walking through the biblical texts used by open theists, Ware clearly shows how the “straightforward” readings by open theists are invalid, either by immediate context or by other texts which directly relate to the issue. Careful attention to the whole counsel of God reveals how open theism must ignore or deny one text in order to affirm a certain understanding of another. At times Ware appeals to logical conclusions drawn from the narrow, straightforward interpretation to show how one cannot maintain such ideas without denying other explicit teachings about God.

Contrary to open theist ideas, Scripture has a great deal to say about God’s exhaustive foreknowledge regarding the breadth and the depth, the extent and the content of what God knows. Many of these texts are treated in chapter five as Ware unleashes the doctrine of God’s foreknowledge. Though space does not allow extensive commentary on each passage, Ware shines the light of the glory of God as demonstrated in his foreknowledge. Open theists claim that these texts often refer to a specific situation, or are limited instances of foreknowledge, and that it goes too far to attribute exhaustive foreknowledge. The former claim is certainly true; however, Ware demonstrates that for God to foreknow and control the one, two, and many situations and prophecies laid out in Scripture would require exhaustive foreknowledge, because of the infinite number of variables which could alter the future.

Ware demonstrates how open theism, in attacking God’s foreknowledge, indirectly attacks God’s wisdom as well. Chapter six gives a clear and thorough argument against this attack on the only wise God.

The effect of all this in our daily lives is devastating. It actually creates a different kind of Christian: one who cannot pray with confidence, need not ask God for guidance, and does indeed have reason to blame God for tragedy. Chapters seven through nine are dedicated to these consequences, and one cannot help but grieve for Christians under open theist pastors.

Reading this book has two effects for the Christian who believes strongly in the deity of God. First, it creates anger and frustration over a doctrine which makes God in man’s image. Second, it causes us in the depth of our soul to worship the God of Romans 11:33-36.

Friday, November 07, 2008

The act of translating a text from one language to another has been a necessity since the Tower of Babel, when God confused the language of the people (Gen. 11:1-9). Though translation has been done throughout history, formal theories of translation—and the resulting debates—have only been in existence for the last century. Until Gutenberg’s invention of the printing press in 1456, only 33 of the world’s approximately 6,170 languages had a translation of Scripture. At the end of the 20th century, over 2,000 languages, or 80% of the world, had part or all of the Bible available to them. The increase from 33 to over 2,000 did not occur gradually. In fact, even 400 years after the printing press, still only 67 languages had some portion of Scripture. The 19th and 20th centuries saw a dramatic increase in missionaries and organizations committed to translating Scripture, which naturally demonstrated the need for a standard methodology of translation.

There are two primary translation theories which continue to be the center of increasingly fervent debate. Though terminology has varied over the years, it appears the dust has begun to settle regarding what to call the theories: Formal Equivalence and Functional Equivalence. Much time and effort has been spent on arguing for each position, and I have found it difficult to get a brief synopsis of each side with its proposed arguments. This paper is an effort to allow both sides to make their arguments without analytical comment. The purpose is not to come to a conclusion on which method is superior, but rather to have a fundamental understanding of the arguments for each position.

Key Terms and Definitions

Many specialized areas of study have their own set of terms and vocabulary that must be kept in mind in order to navigate the field. Misunderstood definitions often spark needless arguments; therefore, where a definition undergirds one side of the debate, I will present the definition from that viewpoint.

Formal Equivalence. Leland Ryken has written the most recent and somewhat controversial book in this debate. He takes his stand on the formal side of the debate and defines it as follows: “a theory of translation that favors reproducing the form or language of the original text, and not just its meaning. In its stricter form, this theory of translation espouses reproducing even the syntax and word order of the original; the formulas word for word translation and verbal equivalence often imply this stricter definition of the concept.” By mentioning the “stricter form,” Ryken hints at the reality that there are varying levels of formal equivalence. The strictest end would be a word-for-word lexical translation which makes no other changes to the text. Such a translation has not been made for distribution, but its closest cousin would be the American Standard Version (ASV), widely considered the most literal translation available. The other end of the formal spectrum would most likely be the New Revised Standard Version (NRSV).

Functional Equivalence. Originally this view was called Dynamic Equivalence but more recent works attempt to leave the word “Dynamic” behind in favor of “Functional.” Eugene Nida is the undisputed father and proponent of the Functional Equivalent method. He defines this method as follows: “[it] consists in reproducing in the receptor language the closest natural equivalent of the source-language message, first in terms of meaning and secondly in terms of style.” Nida immediately states that such a definition requires “careful evaluation of several seemingly contradictory elements.”

Source and Receptor Language. Source or native language is simply the original language from which the translation is based. In the case of Bible translation, it refers to Greek, Hebrew, and Aramaic. Conversely, receptor language is the language which receives the translation.

Transparent Text. This term is used in two different ways, one by each method. In the context of functional translation, it indicates that the message of the original text is transparent to the receiving reader. In the context of formal translation, it indicates that the translation is transparent to the original text. In the former, the message is clearly seen; in the latter, the original form is clearly seen. The two definitions are not mutually exclusive, but rather emphasize different aspects of the translation.

Formal Equivalence Method

The practice of translating Scripture according to what we now call formal equivalence has been the general practice of translators through the centuries. Advocates of functional translation methods are quick to point out passages where historical translations veer from the original, yet it is clear that ancient translations are primarily formal in nature. English translations in particular have historically leaned toward a formal translation. Clearly the evidence demonstrates that an unofficial standard of formal equivalence has been the practice of the church. With that historical background, let us now look at the various arguments put forth by advocates of this position.

Arguments for Formal Equivalence

Retaining the words. Words matter. Words are the fundamental units of language. Meaning and ideas are derived from words and depend on the words in their individual meanings combined with syntax and grammar. Therefore the most basic and objective method of transferring meaning is to retain the translated words. Leland Ryken makes the argument that “there is no such thing as a disembodied thought… when we change words, we change meaning.”

Minimal interpretation. Robert Thomas acknowledges that translation does include a degree of interpretation, but it must be avoided “as much as possible by transferring directly from the surface structure of the source language to the surface structure of the receptor language.” Ryken refers to the needed interpretation as “linguistic interpretation” as opposed to “thematic interpretation.” The former seeks to find the receptor language words which best convey the source language word meanings as opposed to finding a new way of expressing the meaning of the section with or without the same words. The more interpretation that is done in translation, the more the translation becomes a commentary.

Original meaning. Along similar lines to the previous argument that interpretation should be kept to a minimum is the idea that the translation should convey the meaning of the original text, not the translator’s interpretation of it. The main goal is to give readers the transparent text of the original so that they can come to their own conclusions when there is a difficulty in the text. This comes to the forefront most in ambiguous passages. It is the job of the reader, not the translator, to determine the original meaning of the ambiguity.

Leave it to the reader. According to the formal translation method, if the reader does not understand elements of the original (idioms, theological words, symbols, history, etc.), they should be willing to study or use an aid rather than have a translation that requires no effort. Since it is nearly impossible to know the background of the typical reader, it is best to make the translation transparent to the original and challenge the reader to do the work of interpretation. This not only increases the abilities of the reader, but it also causes them to think carefully about Scripture as opposed to reading quickly when everything is easy to understand.

Objective limits. Perhaps the greatest desire of formal translation advocates is to hide the translator and make the translation transparent so that the original shines through. Formal equivalence places great emphasis on the objectivity with which translation is done, so that multiple translators can come up with essentially the same translation. Side-by-side comparisons of multiple formal translations show minimal differences, which often come down to linguistic interpretation.

Centrality of the text. When it comes to Scripture, as set apart from other books, preserving the original text as much as possible should be the focus of translation. The further a translation strays from the original text, the less it can be trusted and ethically deemed the Word of God.

Functional Equivalence Method

The proliferation of missionaries around the world in the last two centuries, and the subsequent need for translations in primitive languages, has brought problems to the fore that had not been dealt with on a major scale in the past. These problems can be understood best in the form of questions. How do you translate “Lamb of God” when a tribe has no concept of sheep? Is it legitimate to invent words in a language which has no corresponding word for “justification”? Do you maintain a literal translation of an idiom when that same idiom has a completely different (and undesirable) meaning? In response to these and other questions, Eugene Nida, in conjunction with Wycliffe Bible Translators and other organizations, developed the functional equivalence method of translation. The functional method of translation elevates meaning over form and reader over author.

Arguments for Functional Equivalence Method

Meaning is everything. “Translating must aim primarily at ‘reproducing the message.’ To do anything else is essentially false to one’s task as a translator.” Mark Strauss states it more bluntly: “Every translation must change what is said (in Hebrew and Greek) to capture what is meant.” No two languages have a one-to-one correspondence to any significant degree. Therefore, in order to maintain meaning, it is necessary to leave the source-language form behind and find a form in the target language which will carry the same meaning.

Provoke a response. Scripture was not written to convey facts and truths with no impact on how we live. Therefore translation should seek to evoke in the modern reader the same response that the original hearer experienced. In other words, the translation should have the same impact from the outset.

Simple, not complex. God has given us his revelation for us to understand. It is the translator’s responsibility to translate it in such a way that people can easily understand without aid. In addition, there are settings where study aids are not available, such as an oral reading. The translation must relieve the text of ambiguities and of statements or forms which can be understood in more than one way, so that people can hear and respond to the Word. Nida puts it this way: “If we assume that the writers of the Bible expected to be understood, we should also assume that they intended one meaning and not several, unless an intentional ambiguity is linguistically 'marked.'”

Respect the language. Those who work on the front lines of Bible translation on the mission field are keenly aware that every language is unique. Each language has its own “word-building capacities, unique patterns of phrase order, techniques for linking clauses into sentences, markers of discourse, and special discourse types of poetry, proverbs, and song…” A functional translation takes this into account and seeks to form a translation as though it were native to the receptor. Readers of the translation should not feel like they are reading a translation.

Respect the originals. Hebrew and Greek are languages like every other language. They suffer from the same limitations, ambiguities, and cultural influences. We must not treat them as though they were divine languages to be preserved for eternity. Rather, we must recognize that God communicated in the language of the people in the original writings, and as faithful translators we must translate into the modern language of the people, with the receptor’s own idioms, grammar, vocabulary, and syntax.

Priorities. When we commence translating God’s Word, we must set priorities as to who the target readers are. Whether the readers are scholars or children will make a significant difference in the vocabulary and structure used. For wide-distribution translations, certain priorities must be kept in mind. Non-Christians should have priority over Christians. There are two simple reasons for this: (1) intelligibility allows Scripture to be an instrument of evangelism, and (2) it prevents Scripture from becoming obscure “high church” language. The second priority is a target age of 25-30, as opposed to children or older adults. The reason is that older adults, accustomed to a generational language, understand terms and phrases which have gone out of use for decades, while children have a limited vocabulary and are not able to recognize literary features very well. People ages 25-30 have the established language skills and the current vocabulary which generally bridge the gap between all ages. Finally, there are times when the language of women needs priority over that of men. While this would not apply in the United States as much, foreign countries often have a “work-place vocabulary” that women are unaccustomed to. In these cultures where women remain at home, it follows that they teach the children. Therefore the women must understand the translation best in order to instruct the children.

I have attempted to present the primary arguments put forth by each side of the debate. In my research I have discovered issues which need to be addressed further in order to think more critically about the question. One unanswered question is whether the principles for translating into an established language with a tradition of Bible translations should be the same as those for translating into a language which has no previous translation and perhaps no literary tradition at all. Another question is whether translating Scripture should follow different principles than translating other forms of literature, particularly since proponents on both sides believe in plenary inspiration.

There is one issue on which all parties agree: translating and spreading Scripture is a high calling, and we must apply all diligence in the process. Whether it is supplying a Gospel to a native in Mongolia who has only the Quran, or The Gideons supplying Bibles to hotel chains across the world, the Word must get out by the hands of faithful men.