Epistemic logic gets its start with the recognition that expressions like ‘knows that’ or ‘believes that’ have systematic properties that are amenable to formal study.

However, he also notes the problem of logical omniscience implied in the K axiom:

A particularly malignant philosophical problem for epistemic logic is related to closure properties. Axiom K can, under certain circumstances, be generalized to a closure property for an agent's knowledge which is implausibly strong — logical omniscience:

Whenever an agent c knows all of the formulas in a set Γ and A follows logically from Γ, then c also knows A.
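In standard modal notation (writing K_c for "agent c knows that"), the axiom and the closure property it generalizes to can be sketched as:

```latex
% Axiom K: knowledge distributes over implication
\textbf{K:}\quad K_c(A \to B) \to (K_c A \to K_c B)

% With the necessitation rule (from \vdash A infer \vdash K_c A),
% repeated application of K yields full closure under consequence:
\text{if } \Gamma \models A \text{ and } K_c\gamma \text{ for every } \gamma \in \Gamma,
\text{ then } K_c A.
```

By itself K only licenses single modus-ponens steps inside the scope of K_c; it is the combination with necessitation, in the standard possible-worlds semantics, that produces the implausibly strong closure.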

He mentions logicians such as Hintikka and Rantala who appear to offer ways around this problem.

Are there logicians who claim there is no way around the problem of logical omniscience and reject the view that expressions such as 'knows that' or 'believes that' are amenable to formal study?

1 Answer

Logical omniscience was always only a technical problem, arising from the formalization of epistemic logic in terms of possible worlds. Since classical possible worlds are supposed to be consistent and deductively closed, they must include all the consequences along with their premises, and nothing contradicting the premises. So if we describe the acquisition of knowledge as the elimination of uncertainty, by ruling out possible worlds incompatible with it (Hintikka), then we must "know" all the logical consequences of what we know. Hintikka called this the "scandal of deduction": it is absurd that we must know whether the Riemann hypothesis is true just because we know the axioms of set theory.
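A sketch of why the possible-worlds apparatus forces this closure, in standard Kripke semantics (with R the agent's accessibility relation):

```latex
% Knowledge as truth in all accessible worlds:
w \models K_c A \iff v \models A \text{ for every } v \text{ with } wRv.

% Closure follows immediately:
% if w \models K_c\gamma for every \gamma \in \Gamma, then each accessible v
% satisfies all of \Gamma; since \Gamma \models A and each v is a classical
% (consistent, deductively closed) world, v \models A, hence w \models K_c A.
```

Nothing about the agent's reasoning powers enters this derivation; the closure is built into what counts as a "world", which is why replacing classical worlds with impossible or open ones dissolves the problem.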

Hintikka's own solution was to stratify consequences according to their "depth", the number of extra quantifier layers introduced to derive them, and to describe "deep" consequences as inaccessible. New quantifier layers correspond to new objects introduced in proofs but not mentioned in the initial premises; a simple example is the auxiliary constructions needed to derive geometric theorems. Euclid's demonstrations are by no means trivial.

This turned out to be too simplistic for two reasons: Hintikka's depth does not fully capture the complexity of deriving consequences, and he introduced an arbitrary cut-off on what is accessible. The first problem was solved by introducing a second depth, roughly the number of layers of additional assumptions admitted and discharged in deriving the consequences, see D'Agostino's Philosophy of Mathematical Information. The difficulty of deriving some Boolean tautologies from the axioms, even though there are no quantifiers, illustrates this depth. The second problem was solved by introducing vague accessibility. A full technical solution in terms of classically impossible (open) possible worlds is outlined e.g. in Jago's Logical Information and Epistemic Space. To summarize, logical omniscience is no longer considered a problem.

There is, however, a much bigger obstacle to the analysis of knowledge, at least along the traditional lines of Plato's justified true belief: the Gettier problem of epistemic luck. It is easy to give examples where the justification, while reasonable, has little to do with the truth of the belief, so the "knowledge" becomes a lucky accident (as with subjects in the Matrix believing they have limbs). Although some technical workarounds are known (see SEP's Reliabilist Epistemology), it is widely believed that the underlying problem is intractable. Zagzebski, in The Inescapability of Gettier Problems, suggests that knowledge is unanalyzable, and Fodor, in Concepts: Where Cognitive Science Went Wrong (freely available), even argues that few interesting concepts are analyzable in the reductive sense. A two-part chapter there is titled The Demise of Definitions.

But to say that a concept is unanalyzable is not to say that it is not amenable to formal study. Fodor himself is well known for his formal study of meaning. Euclid's lines and points, or the sets and elements of set theory, are also unanalyzable. As we have known since Hilbert, basic notions can only be defined implicitly, in terms of their interrelations. Williamson, a modal arch-formalist, is also a proponent of "knowledge first", knowledge as a basic notion. There is a good overview of various approaches in SEP's Analysis of Knowledge.

Short, in Peirce's Theory of Signs, outlines a non-possible-worlds approach to modality and knowledge based on Peirce's "vague descriptions", but it is not very developed. The later Wittgenstein moved away from positivist formal approaches to knowledge and meaning toward language games in the Philosophical Investigations. For the most radical theses one would have to go to the continental anti-formalization thinkers; Heidegger especially comes to mind. Somewhere in between are proponents of semiotics and hermeneutics, who favor more informal analyses in terms of meaning, understanding and interpretation over formal epistemology; classic names are Dilthey and Husserl. See also What are the differences between philosophies presupposing one Logic versus many logics? on the older, non-formal, sense of "logic".

@Eliran The problem with logical omniscience asked about in the OP is not whether rationality requires it, which is what the linked paper addresses, but rather that it was forced by the technical apparatus of possible worlds, regardless of what rationality requires. And that is no longer a problem.
– Conifold, Jan 3 at 23:14