▼ We explore aspects of continuity as they manifest in two separate settings, metric model theory (continuous logic) and enriched categories, and interpret the former into the latter. One application of continuous logic is in proving that certain convergence results in analysis are in fact uniform across choices of parameters: Avigad and Iovino outline a general method for deducing from a given convergence theorem that the convergence is uniform in a "metastable" sense. While convenient, this method imposes strict requirements on the kinds of theorems allowed: in particular, any functions occurring in the theorem must be uniformly continuous. Aiming to apply the Avigad-Iovino approach to a broader class of examples, we construct a variant of continuous logic that can handle discontinuous functions in its domain of discourse. This logic weakens the usual continuity requirements for functions, but compensates by introducing a notion of "linear structure" that mimics, e.g., the vector space structure of Banach spaces. We use this logic to apply the Avigad-Iovino method to specific convergence results from functional analysis involving discontinuous functions, and obtain uniform metastable convergence in those examples. This is the project of the first part of this thesis.
The second part of the thesis continues this study of continuity from a different angle, starting from Lawvere's observation that enriching a category over R, with the appropriate monoidal structure, turns that category into a metric space. Lawvere even muses on the notion of an "R-valued logic", but does not make the connection to continuous logic (primarily because continuous logic did not yet exist). We introduce the structure needed to support notions of "uniform continuity" and "continuous subobjects" in an enriched-categorical setting, and use this to give an interpretation of continuous logic into a certain category of R-enriched categories.

▼ The aim of this dissertation is to outline and defend the view here dubbed “anti-foundational categorical structuralism” (henceforth AFCS). The program put forth is intended to provide an answer to the question “what is mathematics?”. The answer here on offer adopts the structuralist view of mathematics, in that mathematics is taken to be “the science of structure” expressed in the language of category theory, which is argued to accurately capture the notion of a “structural property”. In characterizing mathematical theorems as both conditional and schematic in form, the program is forced to give up claims to securing the truth of its theorems, as well as to give up a semantics which involves reference to special, distinguished “mathematical objects”, or which involves quantification over a fixed domain of such objects. One who wishes—contrary to the AFCS view—to inject mathematics with a “standard” semantics, and to provide a secure epistemic foundation for the theorems of mathematics, in short, one who wishes for a foundation for mathematics, will surely find this view lacking. However, I argue that a satisfactory development of the structuralist view, couched in the language of category theory, accurately represents our best understanding of the content of mathematical theorems and thereby obviates the need for any foundational program.

▼ We consider the problem of checking whether an organization conforms to a body of regulation. Conformance is studied in a runtime verification setting: the regulation is translated to a logic, from which we synthesize monitors. The monitors are evaluated as the state of an organization evolves over time, raising an alarm if a violation is detected. An important challenge to this approach comes from the fact that regulations are commonly expressed in natural language, and the translation to logic is difficult. Our goal is to assist in this translation by (a) the design of logics that let us formalize regulation one sentence at a time, and (b) the use of natural language processing as an aid in the sentential translation.
There are many features that a logic needs in order to accommodate a sentential translation of regulation. We study two such features, motivated by a case study. First, statements in regulation refer to others for conditions or exceptions. Second, sentences in regulation convey legal concepts, e.g., obligation and permission. Obligations and permissions can be nested to convey concepts such as rights. We motivate and design a logic to accommodate these two features of regulatory texts. The common theme is the importance of the notion of "saying" in such constructs.
We begin by extending linear temporal logic to allow statements to refer to others. Inter-sentential references are expressed via the use of a predicate, called "says", whose interpretation is determined by inferences from laws. The "says" predicate offers a unified analysis of various kinds of inter-sentential references, e.g., priorities of exceptions over rules, and references to definitions or list items.
We then augment the logic with obligation and permission, by considering problems in access control and conformance. Saying and permission are combined using an axiom that permits a principal to speak on behalf of another. The combination yields benefits to both applications. For access control, we overcome the problematic interactions between hand-off and classical reasoning. For conformance, we obtain a characterization of legal power by nesting saying with obligation and permission. A useful fragment of the logic has a polynomial time decision procedure.
Finally, we turn to the use of natural language processing to translate a sentence to logic. We study one component of the translation in a supervised learning setting. Linguistic theories have argued for a level of logical form as a prelude to translating a sentence into logic. Logical form encodes a resolution of scope ambiguities. We define a restricted kind of logical form, called abstract syntax trees (ASTs), based on the logic developed. Guidelines for annotating ASTs are formulated, using a case study of the Food and Drug Administration's Code of Federal Regulations.
We describe experiments on a modest-sized corpus, of about 200 sentences, annotated with ASTs. The main step in computing ASTs is the ordering or ranking of operators. We adapt a learning model for ranking to…

▼ Hilbert’s choice operators τ and ε, when added to intuitionistic logic, strengthen it. In the presence of certain extensionality axioms they produce classical logic, while in the presence of weaker decidability conditions for terms they produce various superintuitionistic intermediate logics. In this thesis, I argue that there are important philosophical lessons to be learned from these results. To make the case, I begin with a historical discussion situating the development of Hilbert’s operators in relation to his evolving program in the foundations of mathematics and in relation to philosophical motivations leading to the development of intuitionistic logic. This sets the stage for a brief description of the relevant part of Dummett’s program to recast debates in metaphysics, and in particular disputes about realism and anti-realism, as closely intertwined with issues in philosophical logic, with the acceptance of classical logic for a domain reflecting a commitment to realism for that domain. Then I review extant results about what is provable and what is not when one adds ε to intuitionistic logic, largely due to Bell and DeVidi, and I give several new proofs of intermediate logics from intuitionistic logic + ε without identity. With all this in hand, I turn to a discussion of the philosophical significance of choice operators.
Among the conclusions I defend are that these results provide a finer-grained basis for Dummett’s contention that commitment to classically valid but intuitionistically invalid principles reflects metaphysical commitments, by showing those principles to be derivable from certain existence assumptions; that Dummett’s framework is improved by these results, as they show that questions of realism and anti-realism are not an “all or nothing” matter, but that there are plausibly metaphysical stances between the poles of anti-realism (corresponding to acceptance just of intuitionistic logic) and realism (corresponding to acceptance of classical logic), because different sorts of ontological assumptions yield intermediate rather than classical logic; and that these intermediate positions between classical and intuitionistic logic link up in interesting ways with our intuitions about issues of objectivity and reality, and do so usefully by linking to questions around intriguing everyday concepts such as “is smart,” which I suggest involve a number of distinct dimensions which might themselves be objective, but because of their multivalent structure are themselves intermediate between being objective and not. Finally, I discuss the implications of these results for ongoing debates about the status of arbitrary and ideal objects in the foundations of logic, showing among other things that much of the discussion is flawed because it does not recognize the degree to which the claims being made depend on the presumption that one is working with a very strong (i.e., classical) logic.

▼ This thesis examines two approaches to Galois correspondences in formal logic. A standard result of classical first-order model theory is the observation that models of L-theories with a weak form of elimination of imaginaries admit a correspondence between their substructures and the automorphism groups defined on them. This work applies the resultant framework to explore the practical consequences of a model-theoretic Galois theory with respect to certain first-order L-theories. The framework is also used to motivate an examination of its underlying model-theoretic foundations. The model-theoretic Galois theory of pure fields and valued fields is compared to the algebraic Galois theory of pure and valued fields to point out differences that may hold between them. The framework of this logical Galois correspondence is also applied to the theory of pseudoexponentiation to obtain a sketch of the Galois theory of exponential fields, where the fixed substructure of the complex pseudoexponential field B is an exponential field with the field Qrab as its algebraic subfield. This work obtains a partial exponential analogue to the Kronecker-Weber theorem by describing the pure field-theoretic abelian extensions of Qrab, expanding upon work in the twelfth of Hilbert’s problems. This result is then used to determine some of the model-theoretic abelian extensions of the fixed substructure of B. This work also incorporates the principles required of this model-theoretic framework in order to develop a model theory over substructural logics which is capable of expressing this Galois correspondence. A formal semantics is developed for quantified predicate substructural logics based on algebraic models for their propositional or nonquantified fragments. This semantics is then used to develop substructural forms of standard results in classical first-order model theory.
This work then uses this substructural model theory to demonstrate the Galois correspondence that substructural first-order theories can carry in certain situations.

▼ In what is supposed to have been a radical break with neo-Hegelian idealism, Bertrand Russell, alongside G.E. Moore, advocated the analysis of propositions by their decomposition into constituent concepts and relations. Russell regarded this as a breakthrough for the analysis of the propositions of mathematics. However, it would seem that the decompositional-analytic approach is singularly unhelpful as a technique for the clarification of the concepts of mathematics. The aim of this thesis will be to clarify Russell’s early conception of the analysis of mathematical propositions and concepts in the light of the philosophical doctrines to which his conception of analysis answered, and the demands imposed by existing mathematics on Russell’s logicist program. Chapter 1 is concerned with the conception of analysis which emerged, rather gradually, out of Russell’s break with idealism and with the philosophical commitments thereby entrenched. Chapter 2 is concerned with Russell’s considered treatment of the significance of relations for analysis and the overturning of his “doctrine of internal relations” in his work on Leibniz. Chapter 3 is concerned with Russell’s discovery of Peano and the manner in which it informed the conception of analysis underlying Russell’s articulation of logicism for arithmetic and geometry in PoM. Chapter 4 is concerned with the philosophical and logical differences between Russell’s and Frege’s approaches to logical analysis in the logicist definition of number. Chapter 5 is concerned with connecting Russell’s attempt to secure a theory of denoting, crucial to mathematical definition, to his decompositional conception of the analysis of propositions.

▼ We consider the minimal possible sizes of both maximal comparable and maximal incomparable subsets of Boolean algebras. Comparability is given upper and lower bounds for familiar quotients of powerset algebras. The main upper bound is proved using a construction reminiscent of the construction of the reals from Dedekind cuts. Incomparability is placed in relation to the types of dense sets occurring, resulting in several upper bounds. Specifically, the existence of a countable dense set implies the existence of a countable maximal incomparable set, the latter being constructed using a game. A weaker result is proved for uncountable density with the aid of the diamond principle, leaving open the question of whether the bound holds in ZFC.
Advisors/Committee Members: Donald Monk, Keith Kearnes, Natasha Dobrinen, Agnes Szendrei, Peter Mayr.

▼ This dissertation examines aspects of the interplay between computing and scientific practice. The appropriate foundational framework for such an endeavour is real computability rather than classical computability theory, because the physical sciences, engineering, and applied mathematics mostly employ functions defined on continuous domains. But, contrary to the case of computation over the natural numbers, there is no universally accepted framework for real computation; rather, there are two incompatible approaches, computable analysis and the BSS model, both claiming to formalise algorithmic computation and to offer foundations for scientific computing.
The dissertation consists of three parts. In the first part, we examine what notion of 'algorithmic computation' underlies each approach and how it is respectively formalised. It is argued that the very existence of the two rival frameworks indicates that 'algorithm' is not one unique concept in mathematics, but it is used in more than one way. We test this hypothesis for consistency with mathematical practice as well as with key foundational works that aim to define the term. As a result, new connections between certain subfields of mathematics and computer science are drawn, and a distinction between 'algorithms' and 'effective procedures' is proposed.
In the second part, we focus on the second goal of the two rival approaches to real computation; namely, to provide foundations for scientific computing. We examine both frameworks in detail: what idealisations they employ, and how they relate to the floating-point arithmetic systems used in real computers. We explore the limitations and advantages of both frameworks, and answer questions about which one is preferable for computational modelling and which one for addressing general computability issues.
In the third part, analog computing and its relation to analogue (physical) modelling in science are investigated. Based on some paradigmatic cases of the former, a certain view about the nature of computation is defended, and the indispensable role of representation in it is emphasized and accounted for. We also propose a novel account of the distinction between analog and digital computation and, based on it, we compare analog computational modelling to physical modelling. It is concluded that the two practices, despite their apparent similarities, are orthogonal.

▼ This thesis concerns embeddings and self-embeddings of foundational structures in both set theory and category theory.
The first part of the work on models of set theory consists in establishing a refined version of Friedman's theorem on the existence of embeddings between countable non-standard models of a fragment of ZF, and an analogue of a theorem of Gaifman to the effect that certain countable models of set theory can be elementarily end-extended to a model with many automorphisms whose sets of fixed points equal the original model. The second part of the work on set theory consists in combining these two results into a technical machinery, yielding several results about non-standard models of set theory relating such notions as self-embeddings, their sets of fixed points, strong rank-cuts, and set theories of different strengths.
The work in foundational category theory consists in the formulation of a novel algebraic set theory which is proved to be equiconsistent to New Foundations (NF), and which can be modulated to correspond to intuitionistic or classical NF, with or without atoms. A key axiom of this theory expresses that its structures have an endofunctor with natural properties.

This work is a study of the origins of Mathematical Logic and the limits of its applicability to the formal development of Mathematics. First, Dedekind's arithmetical theory is presented, which was the first theory to provide a precise definition of the natural numbers and to prove, on that basis, all the facts commonly known about them. Peano's axiomatization of Arithmetic is also presented, which in a sense simplified Dedekind's theory. Then, Frege's Begriffsschrift is presented, the formal language from which modern Logic originated, and in it Frege's basic definitions concerning the notion of number are represented. Afterwards, a summary of important topics on the foundations of Mathematics from the first three decades of the twentieth century is presented, beginning with the paradoxes in Set Theory and ending with Hilbert's formalist doctrine. At last, Gödel's incompleteness theorems and Turing's concept of computability are presented in general terms; these provided precise answers to the two most important points of Hilbert's program, to wit, a direct proof of consistency for Arithmetic and the decision problem, respectively.
Keywords:
1. Mathematical Logic
2. Foundations of Mathematics
3. Gödel's incompleteness theorems

Farias, P. M. S. (2007). A study about the origins of Mathematical Logic and the limits of its applicability to the formalization of Mathematics. (Masters Thesis). Universidade Federal do Ceará. Retrieved from http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=1516

▼ The purpose of this dissertation is to provide a proper treatment for two groups of logical paradoxes: semantic paradoxes and set-theoretic paradoxes. My main thesis is that the two different groups of paradoxes need different kinds of solution. Based on the analysis of the diagonal method and truth-gap theory, I propose a functional-deflationary interpretation for semantic notions such as ‘heterological’, ‘true’, ‘denote’, and ‘define’, and argue that the contradictions in semantic paradoxes are due to a misunderstanding of the non-representational nature of these semantic notions. Thus, they all can be solved by clarifying the relevant confusion: the liar sentence and the heterological sentence do not have truth values, and phrases generating paradoxes of definability (such as that in Berry’s paradox) do not denote an object. I also argue against three other leading approaches to the semantic paradoxes: the Tarskian hierarchy, contextualism, and the paraconsistent approach. I show that they fail to meet one or more criteria for a satisfactory solution to the semantic paradoxes. For the set-theoretic paradoxes, I argue that the criterion for a successful solution in the realm of set theory is mathematical usefulness. Since the standard solution, i.e. the axiomatic solution, meets this requirement, it should be accepted as a successful solution to the set-theoretic paradoxes.

▼ I argue for neutral free logic: a logic wherein sentences containing non-referring terms do not have a truth value. The primary support for this conclusion comes by way of criticism of the alternatives. If every sentence of the form `a = a' is a logical truth, and is consequently knowable a priori, then it will follow absurdly that `a exists' is knowable a priori. There are several alternatives for avoiding this intolerable conclusion, and I argue that, with the exception of neutral free logic, which holds that `a = a' can lack a truth value, their successes are not sufficient to outweigh their shortcomings.
One option is to reject the closure of a priori knowability. However, there are no plausible counterexamples to a carefully stated closure principle. Another option is to try to avoid the conclusion by rejecting the validity of `something is a, so a exists.' However, this response, in its strongest form, relies on an implausible ambiguity in the quantifiers `all' and `some.' One could avoid the intolerable conclusion if `a = a' does not imply that `something is a.' There are two main possibilities for this approach: positive free logic and supervaluational logic. The former absurdly abandons one of the most obviously valid argument forms in the history of the study of logic. The latter, despite its technical sophistication and apparent utility, mischaracterizes truth. Furthermore, I argue that the intuitions that recommend supervaluational semantics can be explained by appeal only to resources available to the neutral free logician. The intolerable conclusion might also be avoided by maintaining that `a = a' is false rather than truth-valueless (or neutral); such a logical system is a negative free logic. Its primary support comes from the principle of bivalence. I argue that bivalence and its syntactic relative, the law of excluded middle, are not well justified. The only remaining alternative for avoiding the intolerable conclusion is neutral free logic. There are several possible varieties of neutral free logic that, very roughly stated, vary with respect to how permissive they are of true statements containing non-referring names. I argue for the least permissive variety, and offer criticism of the intuitions that suggest the more permissive stances.
Advisors/Committee Members: Michael McKinsey.


▼ Hoare and He's Unifying Theories of Programming (UTP) provides a rich model of programs as relational predicates. This theory is intended to provide a single framework in which any programming paradigm, language, and feature can be modelled, compared, and contrasted. The UTP already has models for several programming formalisms, such as imperative programming, higher-order programming (e.g. programming with procedures), several styles of concurrent programming (or reactive systems), class-based object-orientation, and transaction processing. We believe that the UTP ought to be able to represent all significant computer programming language formalisms in order for it to be considered a unifying theory. One gap in the UTP work is that of object-based object-orientation, such as that presented in Abadi and Cardelli's untyped object calculi (sigma-calculi). These sigma-calculi provide a prominent formalism of object-based object-oriented (OO) programs, which models programs as objects. We address this gap within this dissertation by presenting an embedding of an Abadi-Cardelli-style object calculus in the UTP. More formally, the thesis that this dissertation argues is that it is possible to provide an object-based object orientation to the UTP, with value- and reference-based objects, and a fully abstract model of references. We have made three contributions to our area of study: first, to extend the UTP with a notion of object-based object orientation, in contrast with the existing class-based models; second, to provide an alternative model of pointers (references) for the UTP that supports both value-based compound values (e.g. objects) and references (pointers), in contrast to existing UTP models with pointers that have reference-based compound values; and third, to model an Abadi-Cardelli notion of an object in the UTP, and thus demonstrate that it can unify this style of object formalism.

▼ Logical deduction and abstraction from detail are fundamental, yet distinct aspects of reasoning about programs. This dissertation shows that the combination of logic and abstract interpretation enables a unified and simple treatment of several theoretical and practical topics which encompass the model theory of temporal logics, the analysis of satisfiability solvers, and the construction of Craig interpolants. In each case, the combination of logic and abstract interpretation leads to more general results, simpler proofs, and a unification of ideas from seemingly disparate fields. The first contribution of this dissertation is a framework for combining temporal logics and abstraction. Chapter 3 introduces trace algebras, a new lattice-based semantics for linear and branching time logics. A new representation theorem shows that trace algebras precisely capture the standard trace-based semantics of temporal logics. We prove additional representation theorems to show how structures that have been independently discovered in static program analysis, model checking, and algebraic modal logic, can be derived from trace algebras by abstract interpretation. The second contribution of this dissertation is a framework for proving when two lattice-based algebras satisfy the same logical properties. Chapter 5 introduces functions called subsumption and bisubsumption and shows that these functions characterise logical equivalence of two algebras. We also characterise subsumption and bisubsumption using fixed points and finitary logics. We prove a representation theorem and apply it to derive the transition system analogues of subsumption and bisubsumption. These analogues strictly generalise the well studied notions of simulation and bisimulation. Our fixed point characterisations also provide a technique to construct property preserving abstractions. 
The third contribution of this dissertation is abstract satisfaction, an abstract interpretation framework for the design and analysis of satisfiability procedures. We show that formula satisfiability has several different fixed point characterisations, and that satisfiability procedures can be understood as abstract interpreters. Our main result is that the propagation routine in modern SAT solvers is a greatest fixed point computation involving abstract transformers, and that clause learning is an abstract transformer for a form of negation. The final contribution of this dissertation is an abstract interpretation based analysis of algorithms for constructing Craig interpolants. We identify and analyse a lattice of interpolant constructions. Our main result is that existing algorithms are two of three optimal abstractions of this lattice. A second new result we derive in this framework is that the lattice of interpolation algorithms can be ordered by logical strength, so that there is a strongest and a weakest possible construction.

▼ A suitable subcategory of affine Azumaya algebras is defined and a functor from this category to the category of Zariski structures is constructed. The rudiments of a theory of presheaves of topological structures are developed and applied to construct examples of structures at a generic parameter. The category of equivariant algebras is defined and a first-order theory is associated to each object. For those theories satisfying a certain technical condition, uncountable categoricity and quantifier elimination results are established. Models are shown to be Zariski structures and a functor from the category of equivariant algebras to Zariski structures is constructed. The two functors obtained in the thesis are shown to agree on a nontrivial class of algebras.

▼ We compute the partial isomorphism rank, in the sense of Scott and Karp, of a pair of ordinal structures using an Ehrenfeucht-Fraïssé game. A complete formula is proven by induction for any two ordinals written in Cantor normal form.
Advisors/Committee Members: Jackson, Stephen C., Gao, Su, Mauldin, R. Daniel.

▼ We present dual variants of two algebraic constructions of certain classes of residuated lattices: the Galatos-Raftery construction of Sugihara monoids and their bounded expansions, and the Aguzzoli-Flaminio-Ugolini quadruples construction of srDL-algebras. Our dual presentation of these constructions is facilitated by both new algebraic results and new duality-theoretic tools. On the algebraic front, we provide a complete description of implications among nontrivial distribution properties in the context of lattice-ordered structures equipped with a residuated binary operation. We also offer some new results about forbidden configurations in lattices endowed with an order-reversing involution. On the duality-theoretic front, we present new results on extended Priestley duality in which the ternary relation dualizing a residuated multiplication may be viewed as the graph of a partial function. We also present a new Esakia-like duality for Sugihara monoids in the spirit of Dunn's binary Kripke-style semantics for the relevance logic R-mingle.
Advisors/Committee Members: Nikolaos Galatos, Ph.D..

▼ The goal of reverse mathematics is to study the implication and non-implication relationships between theorems. These relationships have their own internal logic, allowing some implications and non-implications to be derived directly from others. The goal of this thesis is to characterize this logic in order to capture the relationships between specific mathematical theorems. The results of our study are a finite set of rules for this logic, which we call S-logic, and the corresponding soundness and completeness theorems. We also compare S-logic with modal logic and strict implication logic. In addition, we explain two applications of S-logic in topology and second order arithmetic.

▼ For each Turing machine T, we construct an algebra A'(T) such that the variety generated by A'(T) has definable principal subcongruences if and only if T halts, thus proving that the property of having definable principal subcongruences is undecidable. Using this, we present another proof that A. Tarski's finite basis problem is undecidable.
Advisors/Committee Members: Keith Kearnes, Agnes Szendrei, Don Monk, Markus Pflaum, Ross Willard.

▼ Effective definedness checking is crucial for an implementation of a logic with undefinedness. The objective of the MathScheme project is to develop a new approach to mechanized mathematics that seeks to combine the capabilities of computer algebra systems and computer theorem proving systems. Chiron, the underlying logic of MathScheme, is a logic with undefinedness. It is therefore important for the MathScheme project to automate, to the greatest extent possible, the process of checking the definedness of Chiron expressions. This thesis provides an overview of information useful for checking the definedness of Chiron expressions and presents the design and implementation of an AND/OR tree-based approach for automated definedness checking, based on ideas from artificial intelligence. The theorems for definedness checking are outlined first; then a three-valued AND/OR tree is presented; finally, the algorithm for reducing Chiron definedness problems using AND/OR trees is illustrated. An implementation of the definedness checking system, based on these theorems and this algorithm, is provided. The ultimate goal of this system is to provide a powerful mechanism that automatically reduces a definedness problem to simpler definedness problems that can be easily, or perhaps automatically, checked.
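The three-valued AND/OR tree at the heart of this approach can be illustrated with a toy evaluator. The encoding below is hypothetical and far simpler than Chiron's, but it shows how an open (Unknown) subgoal propagates through AND and OR nodes in a Kleene-style three-valued scheme.

```python
# Toy three-valued AND/OR tree evaluation (Kleene-style); the node
# encoding here is illustrative, not Chiron's actual representation.
TRUE, FALSE, UNKNOWN = "T", "F", "U"

def evaluate(node):
    """node is a leaf value or a tuple ('AND' | 'OR', [children])."""
    if node in (TRUE, FALSE, UNKNOWN):
        return node
    op, children = node
    results = [evaluate(c) for c in children]
    if op == "AND":
        if FALSE in results:
            return FALSE                     # one failing subgoal refutes
        if all(r == TRUE for r in results):
            return TRUE
        return UNKNOWN                       # still-open subgoals remain
    if op == "OR":
        if TRUE in results:
            return TRUE                      # one proved subgoal suffices
        if all(r == FALSE for r in results):
            return FALSE
        return UNKNOWN

# An AND goal with one proved branch and one open branch stays open:
print(evaluate(("AND", [TRUE, ("OR", [FALSE, UNKNOWN])])))
```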

▼ This sentence is whatever truth is not. The subject of this master’s thesis is the power, influence, and solvability of the liar paradox. The paradox arises when a standard conception of truth and standard rules of inference are applied to sentences such as the first sentence of this abstract. The liar has been a powerful problem of philosophy for thousands of years, from its ancient origin (examined in Chapter One) to a particularly intensive period in the twentieth century featuring many ingenious but ultimately unsuccessful solutions from brilliant logicians, mathematicians and philosophers (examined in Chapter Two, Chapter Three, and Chapter Four). Most of these solutions were unsuccessful because of a recurring problem known as the liar’s revenge: whatever truth is not includes, as it turns out, not just falsity, but also meaninglessness, ungroundedness, gappiness, and so on. The aim of this master’s thesis is to show that we need not consign ourselves to the admission that the liar is, and always will be, a paradox, and thus unsolvable. Rather, I argue that the liar is solvable; I propose and defend a novel solution, which is examined in detail in the latter half of Chapter Two and throughout Chapter Three. The alternative solution I examine and endorse (in Chapter Four) is not my own, owing its origin and energetic support to Graham Priest. I argue, however, for a more qualified version of Priest’s solution. I show that, even if we accept a very select few true contradictions, it should not be assumed that inconsistency inevitably spreads through other sets of sentences used to describe everyday phenomena such as motion, change, and vague predicates in the empirical world.

▼ This thesis is a model-theoretic study of exponential differential equations in the context of differential algebra. I define the theory of a set of differential equations and give an axiomatization for the theory of the exponential differential equations of split semiabelian varieties. In particular, this includes the theory of the equations satisfied by the usual complex exponential function and the Weierstrass ℘-functions. The theory consists of a description of the algebraic structure on the solution sets together with necessary and sufficient conditions for a system of equations to have solutions. These conditions are stated in terms of a dimension theory; their necessity generalizes Ax’s differential field version of Schanuel’s conjecture and their sufficiency generalizes recent work of Crampin. They are shown to apply to the solving of systems of equations in holomorphic functions away from singularities, as well as in the abstract setting. The theory can also be obtained by means of a Hrushovski-style amalgamation construction, and I give a category-theoretic account of the method. Restricting to the usual exponential differential equation, I show that a “blurring” of Zilber’s pseudo-exponentiation satisfies the same theory. I conjecture that this theory also holds for a suitable blurring of the complex exponential maps and partially resolve the question, proving the necessity but not the sufficiency of the aforementioned conditions. As an algebraic application, I prove a weak form of Zilber’s conjecture on intersections with subgroups (known as CIT) for semiabelian varieties. This in turn is used to show that the necessary and sufficient conditions are expressible in the appropriate first order language.
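For readers unfamiliar with the result being generalized, Ax's differential-field version of Schanuel's conjecture can be stated as follows (one common formulation; the notation is mine, not the thesis's):

```latex
% Ax's theorem (differential Schanuel). Let (K, D) be a differential
% field with constant field C, and let y_1, ..., y_n, z_1, ..., z_n in K
% satisfy D z_i = z_i D y_i (so each z_i behaves like exp(y_i)).
% If y_1, ..., y_n are Q-linearly independent modulo C, then
\[
  \operatorname{td}_C \, C(y_1, \dots, y_n, z_1, \dots, z_n) \;\ge\; n + 1 .
\]
```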

▼ This thesis investigates the connections between henselian valuations and absolute Galois groups. There are fundamental links between these: On one hand, the absolute Galois group of a field often encodes information about (henselian) valuations on that field. On the other, in many cases a henselian valuation imposes a certain structure on an absolute Galois group which makes it easier to study. We are particularly interested in the question of when a field admits a non-trivial parameter-free definable henselian valuation. By a result of Prestel and Ziegler, this does not hold for every henselian valued field. However, improving a result by Koenigsmann, we show that there is a non-trivial parameter-free definable valuation on every henselian valued field. This allows us to give a range of conditions under which a henselian field does indeed admit a non-trivial parameter-free definable henselian valuation. Most of these conditions are in fact of a Galois-theoretic nature. Throughout the thesis, we discuss a number of applications of our results. These include fields elementarily characterized by their absolute Galois group, model complete henselian fields and henselian NIP fields of positive characteristic, as well as PAC and hilbertian fields.

▼ In this work, I will provide an introduction to the differential analyzer, a machine designed to solve differential equations through a process called mechanical integration. I will give a brief historical account of differential analyzers of the past, and discuss the Marshall University Differential Analyzer Project. The goal of this work is to provide an analysis of solutions of systems of differential equations using a differential analyzer. In particular, we are interested in the points at which these systems are in equilibrium and the behavior of solutions that start away from equilibrium. After giving a description of linear systems of autonomous differential equations and the traditional analytical method for finding a solution, I will run some of these systems on the differential analyzer. With this visual approach, we look at the behavior of the rates of change to study the equilibrium solutions. I want to give a mechanical description of the relationship between the equations of the system.

▼ In this thesis we present a framework for automatic formal analysis of competitive stochastic systems, such as sensor networks, decentralised resource management schemes or distributed user-centric environments. We model such systems as stochastic multi-player games, which are turn-based models where an action in each state is chosen by one of the players or according to a probability distribution. The specifications, such as “sensors 1 and 2 can collaborate to detect the target with probability 1, no matter what other sensors in the network do” or “the controller can ensure that the energy used is less than 75 mJ, and the algorithm terminates with probability at least 0.5”, are provided as temporal logic formulae. We introduce a branching-time temporal logic rPATL and its multi-objective extension to specify such probabilistic and reward-based properties of stochastic multi-player games. We also provide algorithms for these logics that can either verify such properties against the model, providing a yes/no answer, or perform strategy synthesis by constructing the strategy for the players that satisfies the specification. We conduct a detailed complexity analysis of the model checking problem for rPATL and its multi-objective extension and provide efficient algorithms for verification and strategy synthesis. We also implement the proposed techniques in the PRISM-games tool and apply them to the analysis of several case studies of competitive stochastic systems.
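For a sense of what verifying such a reachability property involves, here is a standard value-iteration sketch for turn-based stochastic games (a textbook method, not PRISM-games' actual implementation; all names and the example game are illustrative):

```python
# Value iteration for the maximal reachability probability in a
# turn-based stochastic game: coalition ('max') states maximise over
# their actions, opponent ('min') states minimise. Illustrative only.

def reach_prob(states, owner, actions, target, iters=1000):
    """
    states:  iterable of state names
    owner:   state -> 'max' or 'min' (which side moves there)
    actions: state -> list of distributions [{successor: prob, ...}, ...]
    target:  set of goal states
    """
    v = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        for s in states:
            if s in target or not actions[s]:
                continue
            vals = [sum(p * v[t] for t, p in dist.items())
                    for dist in actions[s]]
            v[s] = max(vals) if owner[s] == 'max' else min(vals)
    return v

# Tiny game: from s0 the coalition picks a fair coin flip to the goal,
# or a sure transition to a losing sink.
v = reach_prob(
    states=['s0', 'goal', 'sink'],
    owner={'s0': 'max', 'goal': 'max', 'sink': 'max'},
    actions={'s0': [{'goal': 0.5, 'sink': 0.5}, {'sink': 1.0}],
             'goal': [], 'sink': [{'sink': 1.0}]},
    target={'goal'})
print(v['s0'])  # best choice is the coin flip, value 0.5
```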

▼ This thesis studies the categorical formalisation of quantum computing, through the prism of type theory, in a three-tier process. The first stage of our investigation involves the creation of the dagger lambda calculus, a lambda calculus for dagger compact categories. Our second contribution lifts the expressive power of the dagger lambda calculus, to that of a quantum programming language, by adding classical control in the form of complementary classical structures and dualisers. Finally, our third contribution demonstrates how our lambda calculus can be applied to various well known problems in quantum computation: Quantum Key Distribution, the quantum Fourier transform, and the teleportation protocol.

▼ Residuated lattices, although originally considered in the realm of algebra as a general setting for studying ideals in ring theory, were later shown to form algebraic models for substructural logics. The latter are non-classical logics that include intuitionistic, relevance, many-valued, and linear logic, among others. Most of the important examples of substructural logics are obtained by adding structural rules to the basic logical calculus FL. We denote by RL^n_m the varieties of knotted residuated lattices. Examples of these knotted rules include integrality and contraction. The extension of FL by the rules corresponding to these two equations is equivalent to Gentzen’s original system LJ for intuitionism. Apart from applications to logic and to abstract ring theory, residuated lattices are connected to mathematical linguistics, computer science, and quantum mechanics, among other areas. Even though the connections to other disciplines are abundant, the current document is of purely algebraic nature.
Results in [17] establish the finite model property (FMP) for the implicational fragment of FLe extended by some knotted rules. The finite embeddability property (FEP) is known to hold for commutative RL^n_m (xy = yx); the strong finite model property follows for the corresponding logics. Recent results by Horčík show that the word problem is undecidable for the varieties RL^n_m when 1 ≤ n < m or 2 ≤ m < n. Therefore these varieties do not have the FEP. We refer the reader to [16] for details on how this is connected to the Burnside problems in group theory and to the regularity of languages in automata theory.
In the present document, using purely algebraic methods, we prove the FEP for subvarieties of RL^n_m and DRL^n_m that satisfy properties weaker than commutativity. The proof uses the theory of residuated frames introduced by Galatos and Jipsen.
In Chapter 1, we present the basic definitions and constructions that will be used throughout the document. We point the reader to Section 1.4, where we list the varieties for which the FEP is known to hold or fail.
Chapter 2 presents a proof of the FEP for subvarieties of RL^n_m that satisfy the identity xyx = x²y. The proof of this case relies on finding the free object over the class of pomonoids that satisfy the previous equality and x^n ≤ x^m.
Chapter 3 focuses on the study of the noncommutative equation that we use to define the varieties studied in the following two chapters. This equation arises as a natural generalization of the basic equation xyx = x²y.
Chapter 4 presents the FEP for DRL^n_m. In the general case, the free object in the class is fairly complicated, so we identify instead an object outside the class, which is both free and structured enough to allow us to prove the result. In the last section, we extend our result to cover some other subvarieties of knotted residuated lattices. These subvarieties include the cyclic, cyclic-involutive, and…
Advisors/Committee Members: Nikolaos Galatos, Ph.D..

▼ Description logics (DLs) are knowledge representation formalisms with well-understood model-theoretic semantics and computational properties. The DL SROIQ provides the logical underpinning for the semantic web language OWL 2, which is quickly becoming the standard for knowledge representation on the web. A central component of most DL applications is an efficient and scalable reasoner, which provides services such as consistency testing and classification. Despite major advances in DL reasoning algorithms over the last decade, however, ontologies are still encountered in practice that cannot be handled by existing DL reasoners. We present a novel reasoning calculus for the description logic SROIQ which addresses two of the major sources of inefficiency present in the tableau-based reasoning calculi used in state-of-the-art reasoners: unnecessary nondeterminism and unnecessarily large model sizes. Further, we describe a new approach to classification which exploits partial information about the subsumption relation between concept names to reduce both the number of individual subsumption tests performed and the cost of working with large ontologies; our algorithm is applicable to the general problem of deducing a quasi-ordering from a sequence of binary comparisons. We also present techniques for extracting partial information about the subsumption relation from the models generated by constructive DL reasoning methods, such as our hypertableau calculus. Empirical results from a prototypical implementation demonstrate substantial performance improvements compared to existing algorithms and implementations.