This research takes the first steps towards constructing a simultaneously intuitionistic and paraconsistent theory of probability based on the Logics of Formal Inconsistency (LFIs) and the Logics of Formal Undeterminedness (LFUs), while examining its role as a theory of evidence. We argue that LFIs very naturally encode an extension of the notion of probability able to express probabilistic reasoning under an excess of information (contradictions), while LFUs encode extensions of the notion of probability able to express probabilistic reasoning under a lack of information (incompleteness). A theory of probability is built upon the paraconsistent and paracomplete Logic of Evidence and Truth called LETj, a Logic of Formal Inconsistency and Undeterminedness (LFIU). LETj is designed to express the notions of conclusive and non-conclusive evidence, as well as preservation of evidence; it is also able to recover classical logic for propositions whose truth-values have been conclusively established. In this way, it can also express the notion of preservation of truth. By defining appropriate notions of conditional probability and updating via a new version of Bayes’ Theorem for conditionalization, we examine how this novel notion of probability is able to account for the notions of potential evidence versus veridical evidence proposed by Peter Achinstein (The Book of Evidence, 2001):

(1) e is potential evidence that h if and only if:
(a) e is true,
(b) e does not entail h,
(c) p(h/e) > k,
(d) p(there is an explanatory connection between h and e/h&e) > k.

(2) e is veridical evidence that h if and only if: e is potential evidence that h, h is true, and there is an explanatory connection between the truth of h and e.
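
For reference, the classical conditionalization that the paraconsistent version is meant to generalize can be stated as follows (this is the standard textbook formulation, not the LETj-specific version developed in the paper):

$$ p(h \mid e) \;=\; \frac{p(e \mid h)\, p(h)}{p(e)}, \qquad p(e) > 0, $$

where updating by conditionalization sets the new unconditional probability of $h$ to $p(h \mid e)$ upon learning $e$.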

Further challenges are to examine the notions of potential and veridical evidence with background information, to show that our new Bayesian conditionalizations make empirical sense, and to show how such a paraconsistent and paracomplete notion of Bayesianism responds to the critics of Bayesianism, all Herculean tasks. Bold betting may lead to spectacular ruin, but shy betting produces no more than conservative performance.

We will present a way to expand non-transitive theories in order to recover cautious versions of Cut. We will show how to do this for the logic ST in a direct way: we will obtain the logic ST^{\circ} (ST with a consistency operator), and show that it is non-trivial, as well as sound and complete with respect to a disjunctive three-sided sequent system called LSC^{\circ}. The way to do this for ST+ (ST with a transparent truth predicate) is not as straightforward. In order to accomplish our goal, we need to change the self-referential procedure from a strong one to a weak one. We will show that the resulting theory, ST∗, is not only non-trivial, but also sound and complete with respect to the proof system LSC∗.
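
As background on ST itself, here is a minimal sketch of strict–tolerant validity over strong Kleene valuations (the standard definition of ST, not the sequent systems of the talk): an inference is ST-valid when every valuation making all premises strictly true (value 1) makes the conclusion at least tolerantly true (value ≥ 1/2). Over this purely propositional language ST coincides with classical logic; Cut only fails once a transparent truth predicate is added.

```python
from itertools import product

# Strong Kleene connectives over the values {0, 0.5, 1}
def neg(v): return 1 - v
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

def st_valid(premises, conclusion, atoms):
    """Premises and conclusion are functions from a valuation (dict) to a value.
    ST-validity: strict (value 1) premises force a tolerant (>= 1/2) conclusion."""
    for vals in product((0, 0.5, 1), repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) == 1 for p in premises):   # all premises strictly true
            if conclusion(v) < 0.5:            # conclusion not even tolerantly true
                return False
    return True

# p |- p v q is ST-valid
print(st_valid([lambda v: v['p']], lambda v: disj(v['p'], v['q']), ['p', 'q']))
# Explosion p, ~p |- q holds vacuously (no valuation makes both premises strict)
print(st_valid([lambda v: v['p'], lambda v: neg(v['p'])], lambda v: v['q'], ['p', 'q']))
# q |- p fails, as expected
print(st_valid([lambda v: v['q']], lambda v: v['p'], ['p', 'q']))
```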

Many-valued logics are logics that have more than two truth-values. These logics are known for rejecting the Principle of Excluded Middle and, in a certain way, the Principle of Bivalence, which combines the former principle with the Principle of Non-Contradiction. In the 1920s, Jan Lukasiewicz started a pioneering treatment of this kind of logic. His first three-valued system is called L3; its third value, 1/2, is called possible or indeterminate. This system was intended to give an interpretation to alethic modalities. In the 1930s, however, he generalized this system to n values, where n is a finite cardinal, and later to infinitely many values. Thus, prima facie, it seems somewhat easy to multiply truth-values in order to overcome bivalence. But at the metatheoretical level, according to matrix semantics, there is a distinction between designated values and non-designated values. In a few words, a matrix is a structure comprising a set of elements, which we call truth-values, a set of truth-functions, and a subset of the set of truth-values, called the designated values. Thus, bivalence reappears at the metatheoretical level: we can multiply truth-values as we want, but the logic remains bivalent. An interesting way to see the bivalent aspect of many-valued logics (Lukasiewicz logics, for example) is to build society semantics for them. Society semantics were introduced by Carnielli & Lima-Marques (1999) to deal with contradictory situations in a non-trivial way. A society is composed of rational agents that reason according to a given logic. There are two kinds of societies: open societies and closed societies. Roughly speaking, an open society S accepts an atomic formula p if, and only if, at least one agent accepts it; a closed society S accepts an atomic formula p if, and only if, every agent accepts it.
In both cases the acceptance of complex formulas follows from a homomorphism, i.e., the function which extends the valuations of the atomic formulas. Carnielli & Lima-Marques (1999) showed that if each agent in a society reasons according to classical logic and the society is open, then the logic of the society is P1, the paraconsistent logic formulated by Sette in (Sette, 1973). A logic L is paraconsistent if from ‘A’ and ‘not-A’ we cannot deduce every formula. And if each agent reasons according to classical logic and the society is closed, then the logic of the society is I1, the paracomplete logic formulated by Sette & Carnielli in (Sette & Carnielli, 1995). A logic L is paracomplete if ‘A or not-A’ is not valid. An interesting feature of this semantics is that it recovers the bivalent aspect of a given logic, because from the point of view of each agent, a formula is either accepted or not accepted, not both. Modifying the definition of society, specifically the homomorphism extending the atomic formulas, Marcos (2000) gave a constructive method proving that L3 can also be described as the logic of closed societies. We intend to build a society semantics for L4, the four-valued Lukasiewicz logic, in the following way: if the logic of the agents is L3, then the logic of the society is L4. However, we cannot apply the same method as Marcos (2000), because there the acceptance of formulas is restricted to atomic formulas. We can overcome this by applying a generalization of societies developed by Coniglio & Fernández (2003), called Society Semantics with Complex Base.
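
The atomic clauses above can be illustrated with a minimal sketch (only the atomic level is modelled; the homomorphic extension to complex formulas is omitted, and the function names are ours):

```python
# Agents are classical valuations: dicts mapping atoms to True/False.
agents = [{'p': True}, {'p': False}]

def open_accepts_atom(atom):          # open society: at least one agent accepts
    return any(a[atom] for a in agents)

def open_accepts_neg_atom(atom):      # at least one agent accepts the negation
    return any(not a[atom] for a in agents)

def closed_accepts_atom(atom):        # closed society: every agent accepts
    return all(a[atom] for a in agents)

def closed_accepts_neg_atom(atom):
    return all(not a[atom] for a in agents)

# Open society: both p and not-p are accepted (paraconsistent behaviour)
print(open_accepts_atom('p'), open_accepts_neg_atom('p'))      # True True
# Closed society: neither p nor not-p is accepted (paracomplete behaviour)
print(closed_accepts_atom('p'), closed_accepts_neg_atom('p'))  # False False
```

Each individual agent remains bivalent (it accepts or rejects each atom, never both); the non-classical behaviour emerges only at the level of the society.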

In this talk, we propose two new approaches for dealing with semantic and vague vocabulary, based on Interval Type-2 Fuzzy Logic: T2 and T2*. Firstly, we show that these theories have several advantages over the Lukasiewicz approach. Secondly, we prove a limitative result for T2: the theory is omega-inconsistent. Finally, we show that T2* avoids the usual proof of omega-inconsistency, making it our best theory.
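
The basic idea behind interval type-2 truth degrees can be sketched as follows (a generic illustration of interval-valued min/max/negation operations, not the specific theories T2 and T2*): each formula receives an interval of degrees rather than a single degree, modelling uncertainty about the membership degree itself.

```python
def i_not(x):
    """Interval negation: complement and swap the endpoints."""
    lo, hi = x
    return (1 - hi, 1 - lo)

def i_and(x, y):
    """Interval conjunction: pointwise min on the endpoints."""
    return (min(x[0], y[0]), min(x[1], y[1]))

def i_or(x, y):
    """Interval disjunction: pointwise max on the endpoints."""
    return (max(x[0], y[0]), max(x[1], y[1]))

tall = (0.5, 0.75)   # a borderline predication: the degree itself is uncertain
print(i_not(tall))                    # (0.25, 0.5)
print(i_and(tall, (0.25, 1.0)))       # (0.25, 0.75)
print(i_or(tall, (0.25, 1.0)))        # (0.5, 1.0)
```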

Some logics allow for a semantics with an intuitive appeal independently of the corresponding deductive system. This is the case, for example, of the truth-tables for classical logic and the possible-worlds semantics for alethic modal logic. These semantics do provide an insight into these logics because it really seems that the semantic clauses ‘make sense’ independently of the inference rules and/or axioms. Indeed, it is a relevant matter to show that the semantics and the deductive system are two different ways of determining the same set of valid inferences. On the other hand, some semantics for non-classical logics do not seem to have any intuitive appeal independently of the deductive system. Non-functional bivalued semantics (and the respective quasi-matrices) have been presented for intuitionistic and several paraconsistent propositional logics. We propose that the values 0 and 1 attributed to formulas in such semantics are better seen as labels that intend to be faithful to the axioms and/or inference rules. Accordingly, bivalued semantics should be seen as mathematical tools capable of representing the deductive systems in such a way that some technical results may be proved. (Joint work with Walter Carnielli)

In this paper I explore a game theoretic analysis of interactions in which the agents’ payoffs are interdependent, and hence cannot be fixed. I begin by making an analogy with semantic ungroundedness and paradoxicality, and then I show that the phenomenon prevents us from selecting a unique matrix. Next I suggest a way to solve the problem so as to model the situation successfully. I argue that we obtain a second order coordination game for subjective probabilities, in which agents try to settle on a single matrix. The first order game, on the other hand, becomes a pretense game that superimposes on the initial interaction. The situation enables us to draw some morals on the concepts of common knowledge and common belief.

Juliana Bueno-Soler (CLE and FT–Unicamp): “Portraits of a logic and their algebraic soul”

I intend to compare new algebraizations (in Blok and Pigozzi’s sense [BP89]) for the three-valued logics P^1 (paraconsistent) in [Set73] and I^1 (paracomplete) in [SC95], obtained through alternative presentations of such logics. On the one hand, it is proved that the algebraizations for the standard presentations of P^1 and I^1 are defined over different quasi-varieties, as should be expected. On the other hand, however, by changing the portraits (standard presentations) of P^1, now dubbed \bar{P^1}, and of I^1, now dubbed \bar{I^1}, both logics turn out to be algebraized over the same quasi-variety. Which algebraic machinery reflects, then, the genuine nature of a logic? This result extends the surprising fact, found by Blok and Pigozzi, that the Lukasiewicz 3-valued logic L3 and the 3-valued paraconsistent logic J3, completely different in their nature, motivations and philosophical meaning, are algebraizable with respect to the same variety of 3-valued Wajsberg algebras (via appropriate translations). These apparently surprising results illustrate the fact that the algebraic portrait of a logic, or the “algebraic counterpart” of a logic, depends critically on the way the logic is defined. The algebra of a logic would be more a picture of the actual state of the logical system than a revelation of its deep identity. This is witnessed in our examples by the fact that equivalent classes of algebras may, or may not, be naturally associated with distinct logical systems, depending upon the formal attire the logic is ‘wearing’. But this does not testify against algebraization: we should not expect that the algebraic counterpart of a logical system would reveal its philosophical motivations.

In this paper, I discuss the relation between logic and rationality. I develop (formally and conceptually) a rational requirement which can respond to the classic objections by Harman (1986). On the one hand, the requirement pays attention to the relevance of the premises and the conclusion, which is formally expressed by the notion of weak relative closure. The requirement also takes care of the complexity of the inferences. This notion of complexity is formally represented by a partially ordered scale of the difficulty of inferences, which is weaker than Jago’s (2009) notion of complexity as number of steps.

In the article “On the Way to a Wider Model Theory”, by Walter Carnielli, Marcelo E. Coniglio, Rodrigo Podiacki and Tarcísio Rodrigues, the grounds for a paraconsistent model theory are laid. This work is the first step of a project taking their ideas further. Since the truth-value of a formula does not determine the truth-value of its negation, one fails to have control over the truth-value of each formula by controlling the truth-values up to the basic degree of complexity. For this reason, the classical account of morphism is not suitable for paraconsistent models. This work is an attempt to provide an account of morphism suitable for paraconsistent environments.

In this paper we discuss the extent to which LFIs and logics whose consequence relation enjoys some aboutness-preserving property can be combined. This investigation focuses on weak logics, sometimes called infectious logics, that are obtained as a generalization of the semantic behavior of logics of nonsense (e.g. Bochvar, Hallden, etc.). Starting from four basic logics, more than a dozen combined systems are obtained, and it is shown that supplementing them with classical negations allows us to obtain proper LFIs and LFUs. Finally, we discuss whether these LFIs may or may not be treated as “conceptivist” frameworks.

Marcelo Coniglio (CLE and IFCH–Unicamp): “Towards an algebraic theory of non-algebraizable logics: the case of mbC”

As is well known, several logics in the hierarchy of the so-called Logics of Formal Inconsistency (in short LFIs, see [4, 3]) cannot be semantically characterized by a single finite matrix. Moreover, they lie outside the scope of the usual techniques of algebraization of logics, such as Blok and Pigozzi’s method. Several alternative semantical tools were introduced in the literature in order to deal with such systems: (non-truth-functional) bivaluations, possible-translations semantics, and non-deterministic matrices (or Nmatrices), thereby obtaining decision procedures for these logics. However, the problem of finding an algebraic counterpart for this kind of logic, in a sense to be determined, remains open. A semantics of Nmatrices based on a special kind of multialgebra called swap structures was proposed in [1, 2], which generalizes the well-known characterization results of LFIs by means of finite Nmatrices due to Avron and his collaborators. Additionally, it was proved in [5] that the swap structures semantics allows a Soundness and Completeness theorem by means of a very natural generalization of the well-known Lindenbaum-Tarski process. In this talk some preliminary results towards the possibility of defining an algebraic theory of swap structures semantics will be shown, by adapting concepts of universal algebra to multialgebras in a suitable way. As a first step, we will concentrate our efforts on the algebraic theory of SWmbC, the class of swap structures for the logic mbC (which is the weakest system in the hierarchy of LFIs). Multialgebras (a.k.a. hyperalgebras) have been extensively studied in the literature; however, the generalization of even basic concepts such as homomorphisms, subalgebras and congruences is far from obvious, and several different alternatives have been proposed in the literature.
In particular, some results related to Birkhoff’s theorem have already been obtained, but the proofs are either incomplete or the notions on which they are based are not suitable for our purposes. After considering very natural notions of homomorphism, subalgebras, products and congruences (see [6]), it is proved that the class SWmbC is closed under sub-swap-structures and products, but not under homomorphic images; hence it is not a variety in the usual sense. Nevertheless, it is possible to give a representation theorem for SWmbC analogous to the use of Birkhoff’s theorem in traditional algebraic logic. As a consequence, the class SWmbC is generated by the structure with five elements, which is constructed over the 2-element Boolean algebra. This structure is precisely Avron’s 5-valued characteristic Nmatrix for mbC. Finally, it is proved that, under the present approach, the classes of swap structures for the axiomatic extensions of mbC found in [2] are subclasses of SWmbC, obtained by requiring that their elements satisfy precisely the additional axioms. This allows a modular treatment of the algebraic theory of swap structures, as happens in the traditional algebraic setting.

An indispensability argument is an argument to the effect that we ought to be committed to the existence of mathematical/theoretical objects, given that they are indispensable to our best scientific theories. Indispensability arguments have been fairly influential in contemporary debates in the philosophy of mathematics and the philosophy of science. In ‘The Ontological Commitments of Inconsistent Theories’, Mark Colyvan presents a few versions of the argument that are specifically concerned with inconsistent or contradictory theories and whose purported effect is to show that we ought to be committed to the existence of inconsistent objects. In this talk I shall sketch a line of response to Colyvan’s arguments, being mainly concerned with the indispensability of inconsistent mathematical objects. My response will draw on a relatively recent nominalistic interpretation of mathematics put forward by Jody Azzouni in his Deflating Existential Consequence: A Case for Nominalism. I will also tentatively propose a logical framework that seems to be well suited for the regimentation of inconsistent applied mathematical theories, when these are interpreted along the lines being advanced.

In the present paper we show that an analogue of the Fraïssé-Hintikka theorem for classical logic holds in QmbC. First, we define partial isomorphisms between models of QmbC. Then, we prove that there is a partial isomorphism of length k between models of QmbC if and only if both models agree on every formula $\phi$ with a prefix of quantifiers of length $\leq k$ such that every quantifier occurring in $\phi$ outside the scope of the prefix of quantifiers is in the scope of a negation or a $\circ$-operator. Our central motivation is to be able, with this result in hand, to classify inconsistent formulas in the language of QmbC, a result that we have not yet achieved.

It is usually accepted that negation is a contradictory-forming operator and that two statements are contradictories if and only if it is logically impossible for both to be true and logically impossible for both to be false. These two premises have been used by Hartley Slater [Slater, 1995] to argue that paraconsistent negation is not a “real” negation, because a sentence and its paraconsistent negation can be true together. In this talk we claim that a counterpart of Slater’s argument can be directed against the negation operator of classical logic. Carnap’s discovery that there are models of classical propositional logic with non-standard or non-normal interpretations of the connectives will be used to build such an argument. We ponder the consequences of these arguments for the claims that paraconsistent negations are not genuine negations and that the negation of classical logic is a contradictory-forming operator.

In 1934, F. Marty introduced the notion of hypergroups, a generalization of the concept of group based on multioperations (see [6]). Multioperations are operations which can return, for a given input, a set of values instead of a single value. Marty’s paper motivated many later works, thus originating the concept of multialgebras (a.k.a. hyperalgebras): algebraic structures equipped with at least one multioperation. A. Avron and I. Lev introduced in 2001 the notion of non-deterministic matrices, or Nmatrices (see [1]), which are multialgebras together with a nonempty subset of the domain (the set of designated truth-values). Using this tool, which generalizes the usual notion of logical matrix, it is possible to semantically characterize several systems in the hierarchy of Logics of Formal Inconsistency (LFIs). Besides not being characterizable by a single finite logical matrix, these systems do not have non-trivial logical congruences, and so the standard semantical analyses, such as categorial or algebraic semantics, cannot be applied to them. W. Carnielli and M. Coniglio proposed in [2] and [3, Chapter 6] a semantics formed by classes of Nmatrices of a special kind called swap structures, which allows one to characterize several LFIs. Anticipating Avron and Lev’s ideas, Y. Ivlev proposed in 1988 a family of non-normal modal logics characterized by 4-valued Nmatrices (see [5]). This work was revisited and generalized in [4]. In this talk we present a semantics of swap structures for one of the systems proposed by Ivlev. In order to prove completeness, the concept of a Lindenbaum-Tarski swap structure, together with the canonical valuation over it, is introduced. This construction can be adapted in order to characterize several non-normal modal logics studied in [4] which have a finite characteristic Nmatrix.
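
How an Nmatrix valuation differs from an ordinary matrix valuation can be illustrated with a minimal sketch (the multioperation below is a toy example of our own, not one of Ivlev’s tables): where an ordinary matrix assigns each compound a single value, an Nmatrix lets each occurrence independently pick any member of the output set.

```python
# Toy Nmatrix over {0, 1}: negating 1 yields 0, but negating 0
# non-deterministically yields either 0 or 1.
NEG = {0: {0, 1}, 1: {0}}

def values_of_double_negation(atom_value):
    """All legal values of ~~p given a value for p: each application of ~
    independently chooses any member of the corresponding output set."""
    results = set()
    for v1 in NEG[atom_value]:    # value chosen for ~p
        for v2 in NEG[v1]:        # value chosen for ~~p
            results.add(v2)
    return results

# Even when p has value 1, ~~p can legally receive either value:
print(values_of_double_negation(1))  # {0, 1}
print(values_of_double_negation(0))  # {0, 1}
```

This non-determinism is what breaks truth-functionality, and it is also why such systems lack non-trivial logical congruences.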

In this talk I will present two systems of AGM-like paraconsistent belief revision, both defined over Logics of Formal Inconsistency (LFIs), motivated by the possibility of defining a formal consistency operator within these logics. The AGMo system is strongly based on this operator and internalizes the notion of formal consistency in its explicit constructions and postulates. Alternatively, the AGMp system uses the AGM-compliance of LFIs and thus assumes a wider notion of paraconsistency, not necessarily related to the notion of formal consistency.
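
For classical AGM background, the contraction operation that the paraconsistent systems refine can be sketched by brute force over a toy propositional language (a sketch of remainder sets and full meet contraction; entailment is classical and computed by truth tables, and all names are ours):

```python
from itertools import product, combinations

ATOMS = ['p', 'q']

def entails(beliefs, formula):
    """Classical entailment, checked by brute force over all valuations."""
    for vals in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(b(v) for b in beliefs) and not formula(v):
            return False
    return True

def remainders(K, A):
    """Remainder set K ⊥ A: maximal subsets of K (as tuples) not entailing A."""
    cands = [c for r in range(len(K), -1, -1) for c in combinations(K, r)
             if not entails(list(c), A)]
    return [c for c in cands
            if not any(set(c) < set(d) for d in cands)]

p = lambda v: v['p']
q = lambda v: v['q']
K = [p, q]
# Contracting K by p: the unique remainder keeps q and gives up p
print(remainders(K, p) == [(q,)])  # True
```

Full meet contraction then intersects all remainders; AGM’s partial meet variants intersect only the remainders picked out by a selection function.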

We have two main goals in this talk. The first one is to show that the definition given in Zardini (2011) for the multiplicative quantifiers is a good one, in the sense that the quantifiers behave exactly as one would expect them to. The second one is to argue that, unfortunately, these quantifiers bring about self-referential paradoxes that make no use of the structural rules of contraction.

Walter Carnielli (CLE and IFCH–Unicamp): “Possible-Translations Semantics: A tool to combine logics, and to give them meaning”

It is a demanding challenge to find semantics for non-standard logics which can be simultaneously strongly adequate and intuitively acceptable. The idea of Possible-Translations Semantics (PTS’s) is to serve as a unifying framework, not only in giving meaning to such logics, but also as a handy tool to combine them (cf. [1], [2]). Contrary to some traditional philosophical views, contradictions are precious in informal reasoning. It is a task of logic to offer a suitable formal model for the perfectly licit act of reasoning under contradictions, and paraconsistent logic accomplishes this. In particular, the wide family of Logics of Formal Inconsistency (LFI’s) (cf. [3]) achieves this in a remarkably natural and elegant way. The PTS’s were devised in 1990 (cf. [5]) in order to offer a palatable interpretation for some non-classical logics, and are well adapted to LFI’s. Improved recent expositions appear in [3], [4] and [6]. This tutorial intends to be an introduction to PTS’s, their methods and the relevant literature.