In the Lambek calculus of order 2 we allow only sequents in which the depth of nesting of implications is limited to 2. We prove that the decision problem of provability in the calculus can be solved in time polynomial in the length of the sequent. A normal form for proofs of second-order sequents is defined. It is shown that for every proof there is a normal form proof with the same axioms. With this normal form we can give an algorithm that decides provability of sequents in polynomial time.

We survey recent advances on the interface between computability theory and algorithmic randomness, with special attention to measures of relative complexity. We focus on (weak) reducibilities that measure (a) the initial segment complexity of reals and (b) the power of reals to compress strings when they are used as oracles. The results are put into context and several connections are made with various central issues in modern algorithmic randomness and computability.

An unknown process is generating a sequence of symbols, drawn from an alphabet, A. Given an initial segment of the sequence, how can one predict the next symbol? Ray Solomonoff’s theory of inductive reasoning rests on the idea that a useful estimate of a sequence’s true probability of being outputted by the unknown process is provided by its algorithmic probability (its probability of being outputted by a species of probabilistic Turing machine). However, algorithmic probability is a “semimeasure”: i.e., the sum, over all x∈A, of the conditional algorithmic probabilities of the next symbol being x may be less than 1. Prevailing wisdom has it that algorithmic probability must be normalized, to eradicate this semimeasure property, before it can yield acceptable probability estimates. This paper argues, to the contrary, that the semimeasure property contributes substantially to the power and scope of an algorithmic-probability-based theory of induction, and that normalization is unnecessary.
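For readers unfamiliar with the property at issue, the semimeasure condition can be written as follows (the symbol M for the universal semimeasure is standard notation assumed here, not the abstract's own):

\[
\sum_{x \in A} M(sx) \;\le\; M(s),
\qquad\text{equivalently}\qquad
\sum_{x \in A} M(x \mid s) \;\le\; 1,
\]

and the normalization the paper argues against rescales the conditionals to sum to exactly 1, e.g. \( M_{\mathrm{norm}}(x \mid s) = M(sx) / \sum_{y \in A} M(sy) \).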

A primary goal of quantum computer science is to find an explanation for the fact that quantum computers are more powerful than classical computers. In this paper I argue that to answer this question is to compare algorithmic processes of various kinds and to describe the possibility spaces associated with these processes. By doing this, we explain how it is possible for one process to outperform its rival. Further, in this and similar examples little is gained in subsequently asking a how-actually question. Once one has explained how-possibly, there is little left to do.

This article explores the connection between objective chance and the randomness of a sequence of outcomes. Discussion is focussed around the claim that something happens by chance iff it is random. This claim is subject to many objections. Attempts to save it by providing alternative theories of chance and randomness, involving indeterminism, unpredictability, and reductionism about chance, are canvassed. The article is largely expository, with particular attention being paid to the details of algorithmic randomness, a topic relatively unfamiliar to philosophers.

The essays that constitute this dissertation explore three strategies for understanding the role of modality in philosophical accounts of propensities, randomness, and causation. In Chapter 1, I discuss how the following essays are to be considered as illuminating the prospects for these strategies, which I call reductive essentialism, subjectivism and pragmatism. The discussion is framed within a survey of approaches to modality more broadly construed. In Chapter 2, I argue that any broadly dispositional analysis of probability as a physical property will either fail to give an adequate explication of probability, or else will fail to provide an explication that can be gainfully employed elsewhere. The diversity and number of arguments suggest that there is little prospect of any successful analysis along these lines. The concept of randomness has been unjustly neglected in recent philosophical literature, and when philosophers have thought about it, they have usually acquiesced in views about the concept that are fundamentally flawed. In Chapter 3 I try to redress this. After indicating the ways in which the existing accounts are flawed, I propose that randomness is to be understood as a special case of the epistemic concept of the unpredictability of a process. This proposal arguably captures the intuitive desiderata for the concept of randomness; at least it should suggest that the commonly accepted accounts cannot be the whole story and more philosophical attention needs to be paid. Russell famously argued that causation should be dispensed with. He gave two explicit arguments for this conclusion, which can be defused if we loosen the ties between causation and determinism. In Chapter 4, I define a concept of causation which meets Russell's conditions but does not reduce to triviality. Unfortunately, a further serious problem is implicit beneath the details of Russell's arguments, which I call the causal exclusion problem. Meeting this problem involves deploying a pragmatic account of the nature and function of modal concepts. Russell's scruples about causation can be accommodated, even as we partially legitimise the pervasiveness of causal explanations in folk and scientific practice.

Combining physics, mathematics and computer science, quantum computing has developed in the past two decades from a visionary idea to one of the most fascinating areas of quantum mechanics. The recent excitement in this lively and speculative domain of research was triggered by Peter Shor (1994), who showed how a quantum algorithm could exponentially "speed up" classical computation and factor large numbers into primes much more rapidly (at least in terms of the number of computational steps involved) than any known classical algorithm. Shor's algorithm was soon followed by several other algorithms that aimed to solve combinatorial and algebraic problems, and in the last few years theoretical study of quantum systems serving as computational devices has achieved tremendous progress. Common belief has it that the implementation of Shor's algorithm on a large-scale quantum computer would have devastating consequences for current cryptography protocols, which rely on the premiss that all known classical worst-case algorithms for factoring take time exponential in the length of their input (see, e.g., Preskill 2005). Consequently, experimentalists around the world are engaged in a tremendous effort to tackle the technological difficulties that stand in the way of realizing such a large-scale quantum computer. But regardless of whether these technological problems can be overcome (Unruh 1995, Ekert and Jozsa 1996, Haroche and Raimond 1996), it is noteworthy that no proof yet exists of the general superiority of quantum computers over their classical counterparts.
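For orientation, the asymptotic comparison usually cited in this context (standard figures, not drawn from the abstract itself) is roughly

\[
T_{\mathrm{Shor}}(N) = O\!\left((\log N)^{3}\right)
\qquad\text{vs.}\qquad
T_{\mathrm{GNFS}}(N) = \exp\!\left(\left((64/9)^{1/3} + o(1)\right)(\ln N)^{1/3}(\ln\ln N)^{2/3}\right),
\]

where N is the integer to be factored and GNFS (the general number field sieve) is the best known classical factoring algorithm.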

We propose a new interpretation of objective deterministic chances in statistical physics based on physical computational complexity. This notion applies to a single physical system (be it an experimental set-up in the lab, or a subsystem of the universe), and quantifies (1) the difficulty of realizing one physical state given another, (2) the 'distance' (in terms of physical resources) from one physical state to another, and (3) the size of the set of time-complexity functions that are compatible with the physical resources required to reach one physical state from another.

"This makes his book especially valuable." -- Yuri Gurevich, Professor of Computer Science, University of Michigan. Computability and complexity theory should be of central concern to practitioners as well as theorists.

Most standard results on structure identification in first-order theories depend upon the correctness and completeness (in the limit) of the data provided to the learner. These assumptions are essential for the reliability of inductive methods and for their limiting success (convergence to the truth). The paper investigates inductive inference from (possibly) incorrect and incomplete data. It is shown that such methods can be reliable not in the sense of truth approximation, but in the sense that the methods converge to "empirically adequate" theories, i.e., theories that are consistent with all data (past and future) and complete with respect to a given complexity class of L-sentences. Adequate theories of bounded complexity can be inferred uniformly and effectively by polynomial-time learning algorithms. Adequate theories of unbounded complexity can be inferred pointwise by less efficient methods.

A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different from humans. In this paper we approach this problem in the following way: we take a number of well-known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
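The equation referred to is, in Legg and Hutter's published formulation (reproduced here as a sketch; the notation is theirs rather than the abstract's),

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},
\]

where E is a class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected total reward earned by agent \pi interacting with \mu.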

We present the simplest solution ever to 'the hardest logic puzzle ever'. We then modify the puzzle to make it even harder and give a simple solution to the modified puzzle. The final sections investigate exploding god-heads and a two-question solution to the original puzzle.

Understanding inductive reasoning is a problem that has engaged mankind for thousands of years. This problem is relevant to a wide range of fields and is integral to the philosophy of science. It has been tackled by many great minds ranging from philosophers to scientists to mathematicians, and more recently computer scientists. In this article we argue the case for Solomonoff Induction, a formal inductive framework which combines algorithmic information theory with the Bayesian framework. Although it achieves excellent theoretical results and is based on solid philosophical foundations, the technical knowledge required to understand this framework has caused it to remain largely unknown and unappreciated in the wider scientific community. The main contribution of this article is to convey Solomonoff induction and its related concepts in a generally accessible form with the aim of bridging this current technical gap. In the process we examine the major historical contributions that have led to the formulation of Solomonoff Induction as well as criticisms of Solomonoff and induction in general. In particular we examine how Solomonoff induction addresses many issues that have plagued other inductive systems, such as the black ravens paradox and the confirmation problem, and compare this approach with other recent approaches.
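In symbols, the framework rests on the universal prior and its use as a Bayesian predictor (standard notation, added here only for orientation):

\[
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|},
\qquad
M(x_{n+1} \mid x_{1:n}) \;=\; \frac{M(x_{1:n} x_{n+1})}{M(x_{1:n})},
\]

where U is a universal monotone machine, the sum ranges over minimal programs p whose output begins with x, and |p| is the length of p in bits.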

"The Hardest Logic Puzzle Ever" was first described by the late George Boolos in the Spring 1996 issue of the Harvard Review of Philosophy. Although not dissimilar in appearance from many other simpler puzzles involving gods (or tribesmen) who always tell the truth or always lie, this puzzle has several features that make the solution far from trivial. This paper examines the puzzle and describes a simpler solution than that originally proposed by Boolos.

We propose a test based on the theory of algorithmic complexity and an experimental evaluation of Levin's universal distribution to identify evidence in support of or in contravention of the claim that the world is algorithmic in nature. To this end, statistical comparisons are undertaken of the frequency distributions of data from physical sources--repositories of information such as images, data stored in a hard drive, computer programs and DNA sequences--and the output frequency distributions generated by purely algorithmic means--by running abstract computing devices such as Turing machines, cellular automata and Post Tag systems. Statistical correlations were found and their significance measured.
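A minimal sketch of the kind of comparison described, with toy stand-in data and scipy's spearmanr (the actual study uses real corpora and enumerations of small machines):

    # Toy illustration of the comparison described above: build frequency distributions
    # of short substrings from two sources and measure their rank correlation.
    # The two bit strings below are seeded stand-ins, not the paper's actual data.
    import random
    from collections import Counter
    from scipy.stats import spearmanr

    def ngram_freqs(bits, n=4):
        """Frequencies of the length-n substrings of a 0/1 string."""
        return Counter(bits[i:i + n] for i in range(len(bits) - n + 1))

    random.seed(1)
    physical = ngram_freqs("".join(random.choice("01") for _ in range(4000)))     # stand-in for images/DNA/files
    random.seed(2)
    algorithmic = ngram_freqs("".join(random.choice("01") for _ in range(4000)))  # stand-in for machine outputs

    # Rank-correlate the two distributions over their common support.
    common = sorted(set(physical) & set(algorithmic))
    rho, p = spearmanr([physical[s] for s in common], [algorithmic[s] for s in common])
    print(f"Spearman rho = {rho:.3f} over {len(common)} short strings (p = {p:.3g})")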

This is a presentation about joint work between Hector Zenil and Jean-Paul Delahaye. Zenil presents Experimental Algorithmic Theory as Algorithmic Information Theory and NKS, put together in a mixer. Algorithmic Complexity Theory defines the algorithmic complexity k(s) as the length of the shortest program that produces s. But since finding this shortest program is in general an undecidable problem, the only way to approach k(s) is to use compression algorithms. He shows how to use the Compress function in Mathematica to give an idea of the compressibility of various sequences. However, the idea of applying a compression algorithm breaks down for very short sequences. This is true not only for the Compress function, but also for any other compression algorithm. Zenil's approach is to construct a metric of algorithmic complexity for short sequences from scratch. He defines the algorithmic probability as the probability that an arbitrary program produces a sequence. The basic idea is to run a whole class of computational devices, such as Turing machines or cellular automata, and compute the distributions of the sequences they generate. Zenil presents a comparison of frequency distributions of sequences generated by 2-state 3-color Turing machines and 2-color radius-1 cellular automata. He also compares these distributions to distributions found in data from the real world, and finds not only that there is correlation across different systems, but also that the distributions are rather stable, and that the difference between the distributions in abstract systems and real-world data can be attributed to noise. In his paper Zenil elaborates on the nature of the noise he has encountered. Zenil conjectures that the correlation distances between different systems decrease with a larger number of steps, and converge in the infinite-limit case.
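A small sketch of the compression-based estimate mentioned above, with Python's zlib standing in for Mathematica's Compress (an assumed substitution; any lossless compressor gives an upper bound on k(s)):

    # Compressed size as a crude upper bound on algorithmic complexity k(s).
    # zlib is used here in place of Mathematica's Compress (an assumption).
    import random
    import zlib

    def compressed_size(s):
        """Byte length of the zlib-compressed encoding of s."""
        return len(zlib.compress(s.encode("ascii"), 9))

    random.seed(0)
    samples = {
        "periodic": "01" * 500,
        "constant": "0" * 1000,
        "random":   "".join(random.choice("01") for _ in range(1000)),
    }
    for name, s in samples.items():
        print(name, len(s), "->", compressed_size(s))

    # The method collapses for very short strings: compressor overhead swamps any signal,
    # which is why the talk turns to output-frequency distributions of small machines.
    print("short strings:", compressed_size("0110"), compressed_size("0000"))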

The book is intended to explain the larger and intuitive concept of randomness by means of computation, particularly through algorithmic complexity and recursion theory. It also includes the transcriptions (by A. German) of two panel discussions on the topics: Is The Universe Random?, held at the University of Vermont in 2007; and What is Computation? (How) Does Nature Compute?, held at Indiana University Bloomington in 2008. The book is intended for the general public, undergraduate and graduate students in math, computer science, physics and other sciences, but also for philosophers of science and researchers.

By applying the concepts of Kolmogorov-Chaitin complexity and Turing's uncomputability, drawn from computability theory and algorithmic information theory, to the irreducible and incomputable randomness of quantum mechanics, the paper presents a novel argument for the existence of God. Concepts of ‘transintelligence’ and ‘transcausality’ are introduced, and from them it is posited that our universe must be epistemologically and ontologically an open universe. The proposed idea also proffers a new perspective on the nonlocal nature and the infamous wave-function-collapse problem of quantum mechanics.