A variable binding term operator (vbto) is an operator that binds variables of formulas to form terms. Examples of vbtos are the description operator ι, Hilbert's ε-symbol, the classifier { : }, and Russell's abstraction operator x̂₁x̂₂…x̂ₙF. It is usual to introduce vbtos by contextual definition, though their treatment in first- and higher-order languages as new primitive symbols, added to them, is more convenient, especially from the semantic point of view. A semantic approach to vbtos in classical logic is contained in da Costa 1980. In this note we outline how vbtos can be handled in some paraconsistent and paracomplete logics, precisely the logics of da Costa 1974 and da Costa and Marconi 1986; but our methods apply equally well to several other non-classical systems. Although in some particular cases, for instance that of the description operator in certain first-order languages, the contextual treatment of vbtos seems reasonable, we limit ourselves here to the semantic approach to these operators. In order to develop our semantic analysis, we combine the methods of da Costa 1980 with those of Alves 1984; in the latter paper, a paraconsistent model theory is studied relative to da Costa's predicate calculi with equality Cₙ⁼, 1 ≤ n ≤ ω.

We motivate and introduce a new method of abduction, Matrix Abduction, and apply it to modelling the use of non-deductive inferences in the Talmud, such as Analogy and the rule of Argumentum A Fortiori. Given a matrix with entries in {0,1}, we allow for one or more blank squares, say $a_{i,j} = {?}$. The method allows us to decide whether to declare $a_{i,j} = 0$, $a_{i,j} = 1$, or to leave $a_{i,j} = {?}$ undecided. This algorithmic method is then applied to modelling several legal and practical reasoning situations, including the Talmudic rule of Kal-Vachomer. We add an Appendix showing that this new rule of Matrix Abduction, arising from the Talmud, can also be applied to the analysis of paradoxes in voting and judgement aggregation. In fact we have here a general method for executing non-deductive inferences.
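The Kal-Vachomer (a fortiori) flavour of such reasoning can be illustrated with a toy sketch: if one case is at least as strong as another in every known attribute, and the weaker case has a property, the stronger case inherits it (and dually for 0). This is a hypothetical simplification for illustration only, not the authors' Matrix Abduction algorithm; the function names and the dominance heuristic are assumptions.

```python
# Toy a-fortiori completion of a 0/1 matrix with blanks (None).
# Illustrative sketch only -- NOT the Matrix Abduction algorithm of the paper.

def dominates(row_a, row_b, skip):
    """row_a >= row_b on every column (except `skip`) where both are known."""
    return all(
        a >= b
        for c, (a, b) in enumerate(zip(row_a, row_b))
        if c != skip and a is not None and b is not None
    )

def infer(matrix, i, j):
    """Decide a blank entry matrix[i][j]: return 1, 0, or None (undecided)."""
    # A stronger case inherits a property (1) from a dominated case.
    force_one = any(
        row[j] == 1 and dominates(matrix[i], row, skip=j)
        for k, row in enumerate(matrix) if k != i
    )
    # A weaker case inherits the absence of a property (0) from a dominating case.
    force_zero = any(
        row[j] == 0 and dominates(row, matrix[i], skip=j)
        for k, row in enumerate(matrix) if k != i
    )
    if force_one and not force_zero:
        return 1
    if force_zero and not force_one:
        return 0
    return None  # conflicting or no evidence: leave undecided
```

For example, with rows [1, 1, ?] and [1, 0, 1], the first case dominates the second on the known attributes, so the blank is filled with 1; with no comparable row, the blank stays undecided.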

A forcing poset of size $2^{2^{\aleph_1}}$ which adds no new reals is described and shown to provide a $\Delta^2_2$-definable well-order of the reals. The encoding of this well-order is obtained by playing with products of Aronszajn trees: some products are special while others are Suslin trees. The paper also deals with the Magidor–Malitz logic: it is consistent that this logic is highly noncompact.

Abramsky, S., Domain theory in logical form, Annals of Pure and Applied Logic 51 (1991) 1–77. The mathematical framework of Stone duality is used to synthesise a number of hitherto separate developments in theoretical computer science:
• Domain theory, the mathematical theory of computation introduced by Scott as a foundation for denotational semantics;
• The theory of concurrency and systems behaviour developed by Milner and Hennessy, based on operational semantics;
• Logics of programs.
Stone duality provides a junction between semantics and logics. Moreover, the underlying logic is geometric, which can be computationally interpreted as the logic of observable properties, i.e. properties which can be determined to hold of a process on the basis of a finite amount of information about its execution. These ideas lead to the following programme. A metalanguage is introduced, comprising:
• types = universes of discourse for various computational situations;
• terms = programs = syntactic intensions for models or points.
A standard denotational interpretation of the metalanguage is given, assigning domains to types and domain elements to terms. The metalanguage is also given a logical interpretation, in which types are interpreted as propositional theories and terms are interpreted via a program logic, which axiomatises the properties they satisfy. The two interpretations are related by showing that they are Stone duals of each other. Hence semantics and logic are guaranteed to be in harmony with each other, and in fact each determines the other up to isomorphism. This opens the way to a whole range of applications. Given a denotational description of a computational situation in our metalanguage, we can turn the handle to obtain a logic for that situation.

We study increasing F-sequences, where F is a dilator: an increasing F-sequence is a sequence (indexed by ordinal numbers) of ordinal numbers, starting with 0 and terminating at the first step x where F(x) is reached (at every step x + 1 we use the same process as in decreasing F-sequences, cf. [2], but with "+1" instead of "−1"). By induction on dilators, we prove that every increasing F-sequence terminates, and moreover we can determine, for every dilator F, the point where the increasing F-sequence terminates. We apply these results to inverse Goodstein sequences, i.e. increasing (1 + Id)^(ω)-sequences. We show that the theorem "every inverse Goodstein sequence terminates" (a combinatorial theorem about ordinal numbers) is not provable in ID₁. For a general presentation of the results stated in this paper, see [1]. We use notions and results concerning the category ON (ordinal numbers), dilators and bilators, summarized in [2, pp. 25–31].
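Increasing F-sequences require the full machinery of dilators, but the classical (decreasing) Goodstein process that the paper inverts can be sketched concretely. The sketch below is the standard hereditary-base construction, included only as a companion illustration; it is not taken from the paper.

```python
# The classical Goodstein process: write n in hereditary base-b notation,
# replace every occurrence of b by b+1, subtract 1, and repeat with b+1.
# Its termination for all n is the famous example of a true combinatorial
# statement unprovable in Peano arithmetic.

def bump(n, b):
    """Rewrite n in hereditary base-b notation, then replace b by b + 1."""
    if n == 0:
        return 0
    result, e = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            # exponents are themselves rewritten hereditarily
            result += d * (b + 1) ** bump(e, b)
        e += 1
    return result

def goodstein(n):
    """The Goodstein sequence from n (safe only for small seeds!)."""
    seq, b = [n], 2
    while n:
        n = bump(n, b) - 1
        b += 1
        seq.append(n)
    return seq
```

For instance, the sequence from 3 is 3, 3, 3, 2, 1, 0; already the seed 4 produces astronomically long (though still finite) sequences.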

Although the theory of definability had many important antecedents—such as the descriptive set theory initiated by the French semi-intuitionists in the early 1900s—the main ideas were first laid out in precise mathematical terms by Alfred Tarski beginning in 1929. We review here the basic notions of languages, explicit definability, and grammatical complexity, and emphasize common themes in the theories of definability for four important languages underlying, respectively, descriptive set theory, recursive function theory, classical pure logic, and finite-universe logic. We review the history of previous studies of the similarities and differences in the theories of definability of the first three of these four languages. A seminal role leading toward unification of the theories has been played by the separation principles introduced by Nikolai Luzin in 1927. Emphasizing analogies and driving toward further unification embracing finite-universe logic, we concentrate on a simple example: the first and second separation principles for existential-universal first-order sentences. Using this as a test case for the fundamental problem of how to "finitize" arguments in classical pure logic to the finite-universe case, we are led to the analogous negative solution by using the theory of certain special graphs: a graph is (m, n, p, q)-special, for positive integers m, n, p, q, iff it is bipartite with m red points and n blue points and for every p-tuple of red points there is a blue point to which they are all connected. As an aside we introduce for further study a natural "Ramseyesque" increasing sequence A of positive integers, where each term of A is the least positive integer n for which a special graph with the corresponding parameters exists.
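The stated special-graph property is directly checkable for small graphs. The sketch below tests only the condition quoted above (every p-tuple of red points has a common blue neighbour); how the parameter q enters the full definition is not recoverable from this summary, so it is omitted here.

```python
from itertools import combinations

def is_special(m, n, p, edges):
    """Check the property quoted in the abstract: in a bipartite graph with
    red points 0..m-1 and blue points 0..n-1 (edges = set of (red, blue)
    pairs), every p-tuple of red points has a common blue neighbour.
    The role of the fourth parameter q is not recoverable from the
    abstract, so it is left out of this sketch."""
    return all(
        any(all((r, b) in edges for r in reds) for b in range(n))
        for reds in combinations(range(m), p)
    )
```

For example, three red points all joined to a single blue point satisfy the condition for p = 2; deleting one of the three edges breaks it.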

Positivists identify science with certainty and, in the name of the utter rationality of science, deny that it rests on speculative presuppositions. The Logical Positivists took a step further and tried to show that such presuppositions are really no presuppositions at all but rather poorly worded sentences. Rules of sentence formation, however, rest on presuppositions about the nature of language. This makes us unable to determine the status of mathematics, which is these days particularly irksome, since this question is now (since Abraham Robinson) one that mathematicians cannot ignore. Since mathematics is the paradigm of a logical discourse, logic must offer a system adequate enough to serve mathematics. This fact makes it difficult to avoid making question-begging moves in both mathematics and logic. We must therefore view the rationality of logic as partial and hope it is stepwise improvable. The theory of rationality thus turns out to be the major presupposition of logic, and one which has ample metaphysical background to it. The very supposition, basic to all logic, that language is divisible into form and content is under suspicion; mathematics perhaps belongs to neither.

We study the logic of strategic ability of coalitions of agents with bounded memory by introducing Alternating-time Temporal Logic with Bounded Memory (ATLBM), a variant of Alternating-time Temporal Logic (ATL). ATLBM accounts for two main consequences of the assumption that agents have bounded memory. First, an agent can only remember a strategy that specifies actions in a bounded number of different circumstances. While the ATL-formula ⟨⟨C⟩⟩□φ means that coalition C has a joint strategy which will make φ true forever, the corresponding ATLBM-formula means that C has a joint strategy which for each agent in C specifies what to do in no more than n different circumstances and which will make φ true forever. Second, an agent has bounded recall: a strategy can only take the last m states of the system into account. We use the logic to study the interaction between strategic ability, bounded number of decisions, bounded recall and incomplete information. We discuss the logical properties and expressiveness of ATLBM, and its relationship to ATL. We show that ATLBM can express properties of strategic ability under bounded memory which cannot be expressed in ATL.
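The two restrictions can be pictured as a strategy that is a lookup table over the last m observed states, defined in at most n distinct circumstances. This is only a toy illustration under assumed names (`BoundedRecallStrategy`, `observe`, `act`), not the ATLBM semantics itself.

```python
from collections import deque

class BoundedRecallStrategy:
    """Toy model of a bounded-memory strategy: the action chosen depends
    only on the last m observed states, and the strategy is specified in
    at most n distinct circumstances (histories).  Hypothetical names;
    not the formalism of the paper."""

    def __init__(self, m, n, table):
        if len(table) > n:
            raise ValueError("strategy specifies more than n circumstances")
        self.table = table               # maps m-tuples of states to actions
        self.memory = deque(maxlen=m)    # bounded recall: only last m states

    def observe(self, state):
        """Record a newly observed state, forgetting anything older than m."""
        self.memory.append(state)

    def act(self, default="idle"):
        """Act on the remembered window; unspecified circumstances default."""
        return self.table.get(tuple(self.memory), default)
```

With m = 2 and the single circumstance ("s0", "s1") mapped to "go", the agent acts "go" right after seeing s0 then s1, but falls back to the default once s1 is pushed out of its two-state window.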

We show that the modal µ-calculus over GL collapses to the modal fragment, by showing that the fixpoint formula is reached after two iterations, and answer a question posed by van Benthem in [4]. Further, we introduce the modal µ̃-calculus by allowing fixpoint constructors for any formula in which the fixpoint variable appears guarded but not necessarily positively, and show that this calculus over GL collapses to the modal fragment too. The latter result allows us a new proof of the de Jongh–Sambin theorem and provides a simple algorithm to construct the fixpoint formula.

Over recent years, various semantics have been proposed for dealing with updates in the setting of logic programs. The availability of different semantics naturally raises the question of which are most adequate to model updates. A systematic approach to this question is to identify general principles against which such semantics can be evaluated. In this paper we motivate and introduce one new such principle, the refined extension principle. This principle is complied with by the stable model semantics for (single) logic programs. It turns out that none of the existing semantics for logic program updates, even though they generalise the stable model semantics, complies with this principle. For this reason, we define a refinement of the dynamic stable model semantics for Dynamic Logic Programs that complies with the principle.
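The baseline against which the principle is measured, the stable model semantics for a single ground normal logic program, can be sketched via the standard Gelfond-Lifschitz reduct. The code below is a generic illustration of that semantics, not of the refined extension principle or of the dynamic semantics defined in the paper.

```python
def is_stable_model(program, m):
    """Check whether the set of atoms m is a stable model of a ground
    normal logic program.  Rules are (head, positive_body, negative_body)
    triples of atom names.  Standard Gelfond-Lifschitz construction."""
    m = frozenset(m)
    # Reduct: delete rules whose negative body meets m, then drop the
    # remaining negative literals, leaving a definite program.
    reduct = [(h, pos) for h, pos, neg in program if not (set(neg) & m)]
    # Least model of the definite reduct by naive fixpoint iteration.
    least, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if h not in least and set(pos) <= least:
                least.add(h)
                changed = True
    # m is stable iff it reproduces itself as the reduct's least model.
    return least == set(m)
```

For the classic two-rule program p :- not q and q :- not p, both {p} and {q} are stable models, while {p, q} and the empty set are not.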