Archive for October, 2012

There are two competing hypotheses to explain the fine-tuning (FT) of the nomic behavior we observe, which permits the existence of intelligent life: the hypothesis that there is a fine-tuner responsible for causing that nomic behavior, and the non-fine-tuner hypothesis, i.e., anything but a fine-tuner.

The two competing hypotheses rise and fall on the same crux. One of the problems with falsifying ~FT is that whatever falsifier is presented, it could always be attributed to the idea of cosmic Darwinism, the evolution of the multiverse itself. Paul Davies criticizes the multiverse because he believes it could never be falsified, simply because many worlds can be used to explain anything. Science would become redundant, and the regularities of nature would need no further investigation, because they could simply be explained as a selection effect needed to keep us alive and observing.[1]

When considering the range of possible explanations for the origin of information in the universe/multiverse, the options must meet two conditions: causal efficacy and specificity. The first condition states that the origin of information must be causal: information does not arbitrarily pop in and out of existence but requires a source. The second condition states that the origin must sufficiently explain the specificity of the information; it must provide more than mere Shannon information.

Consider a computer as an example of information relay (a phenomenal entity). The computer can be used as a channel, a receiver, and a source of information. However, to say that the information in the computer no longer needs an explanation for its origin would run into the problem of information displacement: from where did the information in the computer come? The answer would inevitably be a software engineer or a programmer. Undirected material processes have not demonstrated the capacity to generate significant amounts of specified information. Information can be changed via materialistic means: the computer can change the initial coding from the programmer and introduce noise on the sending and receiving ends.
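The distinction between mere Shannon information and specified information can be illustrated with a short sketch (my own illustration, not from the original posts). Shannon's measure depends only on symbol frequencies, so a meaningful line of code and a scrambled string with the same characters carry identical Shannon information, even though only one is functionally specified:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

meaningful = "print('hello world')"   # functionally specified
scrambled = "l)rnohl(dt'lo'e wrip"    # same characters, shuffled

# Both strings have exactly the same Shannon entropy,
# because the measure is blind to arrangement and meaning.
print(shannon_entropy(meaningful))
print(shannon_entropy(scrambled))
```

The point of the sketch is that quantifying information à la Shannon cannot, by itself, account for specificity; that is the extra explanatory burden the second condition above is meant to capture.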

The argument should be understood to have the best explanatory scope and power by demonstrating that a being capable of intelligent design is a more probable conclusion than its alternatives. The teleological argument may be formulated as follows:

1) The fine-tuning of the universe is due to physical necessity, chance, or design.
2) It is not due to physical necessity or chance.
3) Therefore, it is due to design.

Premise (1) should be uncontroversial. The options are not strictly limited to these three, but the range between necessity and chance seems to cover the spectrum of possibilities. According to physical necessity, the constants and quantities must have the values they do; there was no chance, or little chance, of the universe’s not being life-permitting.

In this post I’ll be responding to R.A. Fumerton’s “Inferential Justification and Empiricism” in The Journal of Philosophy 73/17 (1976).

In this paper Fumerton argues for the empiricist’s version of foundationalism. He draws important distinctions between the senses in which one may be inferentially justified. His argument is matched against another argument, which proceeds from observations about what we do and do not infer. His primary contention is that one can never have a noninferentially justified belief in a physical-object proposition: one must always justify one’s beliefs in propositions about the physical world by appealing to other beliefs or basic beliefs, a thesis with which I disagree.

I will be faithful to the definition of knowledge as justified true belief. The task of concern in this paper is to examine the coherence of inferential reasoning in a foundationalist system. A problem with inference to the best explanation (IBE) is its potential to create an infinite regress. With inferential reasoning, an attempt to justify a belief in proposition P may appeal to another proposition (or set of propositions) E, and, either explicitly or implicitly, to a third proposition: that E confirms or makes P probable. The challenge of inferential justification targets one of two propositions:

God created both us and our world in such a way that there is a certain fit or match between the world and our cognitive faculties. This is the adequation of the intellect to reality (adaequatio intellectus ad rem). The main premise of adaequatio intellectus ad rem is that there is an onto-relationship between our cognitive or intellectual faculties and reality that enables us to know something about the world, God, and ourselves.[1] This immanent rationality inherent to reality is not God, but it does cry aloud for God, if only because the immanent rationality in nature does not provide us with any explanation of itself.[2]

In reality all entities are ontologically connected or interrelated within the field in which they are found. If this is true, then relation is the most significant thing to know about an object. Thus, to know entities as they actually are is to know them in their relational “webs.” Thomas Torrance termed these onto-relations, a term that points to the entity or reality being what it is as a result of its constitutive relations.[3]

The methodology of the epistemological realist concerns propositions which are a posteriori, or “thinking after,” the objective disclosure of reality. Thus, epistemology follows from ontology. False thinking or methodology (particularly in scientific knowledge) has brought about a failure to recognize the intelligibility actually present in nature and the kinship of the human knowing capacity to the objective rationality to be known.[4]

The University of Oklahoma philosopher Linda Zagzebski is a leading epistemologist in the field of virtue epistemology. Virtue epistemology is an attempt to unify epistemology with virtue ethics; Zagzebski takes the Aristotelian approach, combining it with historical and cognitive psychology in its approach to knowledge. Zagzebski is a virtue responsibilist, an internalist model that includes traits such as open-mindedness and concern in epistemic endeavors. She rejects Quine’s naturalized epistemology, though she still permits empiricism as a means of belief formation.

Zagzebski argues for a direction-of-analysis thesis: a unified, neo-Aristotelian account of the intellectual and moral virtues. She suggests a pure virtue theory that makes the concept of a right act derivative from the concept of a virtue, or from some inner state of a person that is a component of virtue. This is a point both about conceptual priority and about moral ontology. In a pure virtue theory the concept of a right act is defined in terms of the concept of a virtue or a component of virtue such as motivation; the property of rightness is something that emerges from the inner traits of persons.[1] The entire epistemological task is thus approached with this ethical theory overlaying each epistemic consideration.

The following is the abstract and a link to the paper written by Thomas Talbott.

I argue that, contrary to the opinion of Wes Morriston, William Rowe, and others, a supremely perfect God, if one should exist, would be the freest of all beings and would represent the clearest example of what it means to act freely. I suggest further that, if we regard human freedom as a reflection of God’s ideal freedom, we can avoid some of the pitfalls in both the standard libertarian and the standard compatibilist accounts of free will.

My purpose in this paper is to set forth a theory of agency that makes no appeal to mysterious notions of agent causation. But lest I be misunderstood at the very outset, I should perhaps clarify the point that my emphasis here is on the term “mysterious” and not on the expression “agent causation.” I shall begin with what seems to me the best possible example of agent causation: the sense in which a supremely perfect God, if one should exist, would initiate or originate his own actions. I shall not, however, simply adopt without modification the standard understanding of agent causation, assuming there to be such an understanding.

In December 1994, I was in the middle of writing my philosophy dissertation for the University of Illinois at Chicago while also working on a Master of Divinity degree at Princeton Theological Seminary. Visiting my parents in the Tucson area for the Christmas break, I was pondering what title to put on my dissertation. The dissertation focused on small-probability events used in chance-elimination arguments. Although the dissertation addressed some long-standing questions in the foundations of statistical reasoning, I also had my eye on bigger fish. Two years earlier, in the summer of 1992, I had spent several weeks with Stephen Meyer and Paul Nelson in Cambridge, England, to explore how to revive design as a scientific concept, using it to elucidate biological origins as well as to refute the dominant materialistic understanding of evolution (i.e., neo-Darwinism).

Such a project, if it were to be successful, clearly could not merely give a facelift to existing design arguments for the existence of God. Indeed, any designer that would be the conclusion of such statistical reasoning would have to be far more generic than any God of ethical monotheism. At the same time, the actual logic for dealing with small probabilities seemed less to directly implicate a designing intelligence than to sweep the field clear of chance alternatives. The underlying logic was therefore not a direct argument for design but an indirect, circumstantial argument that implicated design by eliminating what it was not.