Contributed Papers

Regular submissions

Abstract:We study Turing computable embeddings for various classes of linear orders. The concept of a Turing computable embedding (or $tc$-embedding for short) was developed by Calvert, Cummins, Knight, and Miller as an effective counterpart for Borel embeddings. We focus on $tc$-embeddings for classes equipped with computable infinitary $\Sigma_{\alpha}$ equivalence, denoted by $\sim^c_{\alpha}$. In this paper, we isolate a natural subclass of linear orders, denoted by $WMB$, such that $(WMB,\cong)$ is not universal under $tc$-embeddings, but for any computable ordinal $\alpha \geq 5$, $(WMB, \sim^c_{\alpha})$ is universal under $tc$-embeddings. Informally speaking, $WMB$ is not $tc$-universal, but it becomes $tc$-universal if one imposes some natural restrictions on the effective complexity of the syntax. We also give a complete syntactic characterization for classes $(K,\cong)$ that are Turing computably embeddable into some specific classes $(\mathcal{C},\cong)$ of well-orders. This extends a similar result of Knight, Miller, and Vanden Boom for the class of all finite linear orders $\mathcal{C}_{fin}$.

Zenon Sadowski – Total nondeterministic Turing machines and a p-optimal proof system for SAT

Abstract:We show that the open problem of the existence of a p-optimal proof system for SAT can be characterized in terms of total nondeterministic Turing machines. We prove that there exists a p-optimal proof system for SAT if and only if there exists a proof system h for SAT such that for any total nondeterministic Turing machine working in polynomial time its totality is provable with short proofs in h and these proofs can be efficiently constructed. We prove that the standard proof system for SAT (a satisfying truth assignment is a proof of a satisfiable propositional formula) is p-optimal if and only if for any total nondeterministic Turing machine working in polynomial time its totality is provable with short proofs in the standard proof system for SAT and these proofs can be efficiently constructed. Additionally we show that the problem of the existence of an optimal proof system for TAUT can be characterized in terms of pairs of nondeterministic Turing machines which are disjoint (do not accept the same strings). We prove that there exists an optimal proof system for TAUT if and only if there exists a proof system f for TAUT such that for any pair of disjoint nondeterministic Turing machines working in polynomial time their disjointness is provable in f with short proofs.
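For intuition, the standard proof system for SAT mentioned in the abstract can be sketched as a verifier that checks a candidate satisfying assignment against a CNF formula. This is a minimal illustrative sketch with our own encoding choices (DIMACS-style signed integers for literals), not code from the paper:

```python
def verify_sat_proof(cnf, assignment):
    """Standard proof system for SAT: a proof of a satisfiable CNF
    formula is simply a satisfying truth assignment.
    cnf: list of clauses; each clause is a list of nonzero ints
    (k means variable k, -k its negation).
    assignment: dict mapping variable -> bool."""
    def lit_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    # every clause must contain at least one true literal
    return all(any(lit_true(lit) for lit in clause) for clause in cnf)

# (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
print(verify_sat_proof(cnf, {1: True, 2: True, 3: False}))    # True: a valid proof
print(verify_sat_proof(cnf, {1: False, 2: False, 3: False}))  # False: not a proof
```

Note that such a proof has length linear in the number of variables and is verifiable in polynomial time, which is what makes it a proof system in the Cook-Reckhow sense.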

Abstract:We prove that there exists a properly 2-c.e. Turing degree d which cannot be a degree of categoricity of a rigid structure.

Iosif Petrakis – McShane-Whitney pairs

Abstract:We present a constructive version of the classical McShane-Whitney theorem on the extendability of real-valued Lipschitz functions defined on a subset of a metric space. Through the introduced notion of a McShane-Whitney pair we study some abstract properties of this extension theorem, showing how the behaviour of a Lipschitz function defined on the subspace of the pair affects its McShane-Whitney extensions on the space of the pair. As a consequence, a Lipschitz version of the theory around the Hahn-Banach theorem is formed. We work within Bishop’s informal system of constructive mathematics BISH.

Stefan Arnold and Jacobo Toran – A Deterministic Algorithm for Testing the Equivalence of Read-once Branching Programs with Small Discrepancy

Abstract:The problem of testing whether two given read-once branching programs are equivalent is a well-known problem in the class BPP that is not known to be solvable in deterministic polynomial time. The standard probabilistic algorithm reduces the problem to an instance of Polynomial Identity Testing and then applies the Schwartz-Zippel Lemma to test the equivalence. This method needs $O(n\log n)$ random bits, where $n$ is the number of variables in the branching programs. We provide a new method for testing the equivalence of read-once branching programs that uses $O(\log n +\log |D|)$ random bits, where $D$ is the set of assignments for which the two branching programs compute different results. This means $O(n)$ random bits in the worst case, and a deterministic polynomial time algorithm when the discrepancy set $D$ has at most polynomial size.

We also show that the equivalence test can be extended to the more powerful model of deterministic, decomposable negation normal forms (d-DNNFs).
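The Schwartz-Zippel step mentioned above can be illustrated in isolation: two black-box polynomials of total degree at most $d$ over a field of size $q$ agree on a uniformly random point with probability at most $d/q$ unless they are identical. The following generic sketch (not the paper's algorithm; names and parameters are ours) tests identity by random evaluation:

```python
import random

def probably_equal(p, q, n_vars, degree, trials=20, field=10**9 + 7):
    """Randomized polynomial identity test (Schwartz-Zippel).
    p, q: black-box polynomials over the integers mod `field`,
    called as p(xs) on a list of n_vars values.
    If p != q as polynomials of total degree <= degree, a single
    trial detects the difference with probability >= 1 - degree/field."""
    for _ in range(trials):
        xs = [random.randrange(field) for _ in range(n_vars)]
        if p(xs) % field != q(xs) % field:
            return False  # witnessed a difference: definitely unequal
    return True  # equal with high probability

# (x + y)^2 versus x^2 + 2xy + y^2: identical as polynomials
p = lambda v: (v[0] + v[1]) ** 2
q = lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2
print(probably_equal(p, q, n_vars=2, degree=2))   # True
```

Drawing each of the $n$ coordinates independently costs $O(n \log q)$ random bits, which is the budget the abstract's new method improves upon.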

Abstract:A homomorphism from a graph $G$ to a graph $H$ is a vertex mapping $f$ from the vertex set of $G$ to the vertex set of $H$ such that there is an edge between vertices $f(u)$ and $f(v)$ of $H$ whenever there is an edge between vertices $u$ and $v$ of $G$. The $H$-Colouring problem is to decide whether or not a graph $G$ allows a homomorphism to a fixed graph $H$. We continue a study on a variant of this problem, namely the Surjective $H$-Colouring problem, which requires the homomorphism to be vertex-surjective. We build upon previous results and show that Surjective $H$-Colouring is NP-complete for every connected graph $H$ that has exactly two vertices with a self-loop, as long as these two vertices are not adjacent. As a result, we can classify the computational complexity of Surjective $H$-Colouring for every graph $H$ on at most four vertices.
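The definitions above can be made concrete with an exponential brute-force search, shown here purely for illustration (the encoding of loops and the function names are our own):

```python
from itertools import product

def surjective_homomorphisms(g_vertices, g_edges, h_vertices, h_edges):
    """Enumerate vertex-surjective homomorphisms from G to H.
    Edges are frozensets of endpoints (undirected); a self-loop on v
    is frozenset({v}).  Brute force over all |V(H)|^|V(G)| mappings."""
    h_edge_set = set(h_edges)
    found = []
    for values in product(h_vertices, repeat=len(g_vertices)):
        if set(values) != set(h_vertices):
            continue  # not vertex-surjective
        f = dict(zip(g_vertices, values))
        # every edge of G must map onto an edge (or loop) of H
        if all(frozenset({f[u] for u in e}) in h_edge_set for e in g_edges):
            found.append(f)
    return found

# G = path a-b-c, H = a single edge 0-1 without loops:
# the surjective homomorphisms are exactly the proper 2-colourings
g_edges = [frozenset({'a', 'b'}), frozenset({'b', 'c'})]
maps = surjective_homomorphisms(['a', 'b', 'c'], g_edges,
                                [0, 1], [frozenset({0, 1})])
print(len(maps))  # 2
```

Self-loops in $H$ matter because a loop on $w$ lets an edge of $G$ collapse onto the single vertex $w$; that is why the placement of the two looped vertices drives the NP-completeness result above.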

Abstract:Highly repetitive strings are increasingly being amassed by genome sequencing and genetic variation experiments, as well as by storing all versions of human-generated files. We explore the practical advantages of combining data structures based on the run-length encoded BWT of a string, with data structures based on the boundaries of the Lempel-Ziv 77 factors, or on the compact directed acyclic word graph, taking advantage of multiple classes of repetitive substrings whose sizes all grow sublinearly in the length of a highly-repetitive string. Our algorithms provide competitive tradeoffs on significant datasets.
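For intuition about one of the measures above: the Lempel-Ziv 77 factorization splits a string greedily into factors that already occur at an earlier position, and on highly repetitive strings the number of factors grows slowly. A naive quadratic-time sketch (for illustration only; the paper's data structures are far more refined):

```python
def lz77_factors(s):
    """Greedy LZ77 factorization with self-references: each factor is
    the longest prefix of the remaining suffix that also occurs
    starting at an earlier position, or a single fresh character.
    Naive implementation, for illustration only."""
    i, factors = 0, []
    while i < len(s):
        length = 0
        # s[i:i+length+1] starts at some position < i iff it lies
        # entirely inside the prefix s[:i+length]
        while i + length < len(s) and s[i:i + length + 1] in s[:i + length]:
            length += 1
        factors.append(s[i:i + max(length, 1)])
        i += max(length, 1)
    return factors

print(lz77_factors("abababab"))  # ['a', 'b', 'ababab']: 3 factors for (ab)^4
```

Note how $(ab)^n$ always has exactly three factors regardless of $n$: this sublinear growth on repetitive input is what the combined structures in the abstract exploit.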

Abstract:We study infinite time Turing machines that attain a special state at a given class of ordinals during the computation. We prove results about sets that can be recognized by these machines. For instance, the recognizable sets of natural numbers with respect to the cardinal-detecting infinite time Turing machines introduced by Habic in 2013 are contained in a certain countable level of the constructible hierarchy, and the recognizable sets of natural numbers with respect to finitely many ordinal parameters are constructible.

Lorenzo Carlucci, Leszek Aleksander Kolodziejczyk, Francesco Lepore and Konrad Zdanowski – New bounds on the strength of some restrictions of Hindman’s Theorem

Abstract:We prove some results on upper and lower bounds on the effective content and logical strength of a variety of natural restrictions of Hindman’s Finite Sums Theorem. For example, we show that Hindman’s Theorem for sums of length at most 2 and 4 colors implies $\ACA_0$. An emerging {\em leitmotiv} is that the known lower bounds for Hindman’s Theorem and for its restriction to sums of at most 2 elements are already valid for a number of restricted versions which have simple proofs and better computability- and proof-theoretic upper bounds than the known upper bound for the full version of the theorem. We highlight the role of a sparsity-like condition on the solution set, which we call apartness.

Victor Selivanov – Extending Wadge Theory to k-Partitions

Abstract:We extend some results about Wadge degrees of Borel subsets of Baire space to finite partitions of Baire space. A typical new result is the characterization up to isomorphism of the Wadge degrees of k-partitions with $\mathbf{\Delta}^0_3$-components.

Abstract:Physical universality of a cellular automaton was defined by Janzing in 2010 as the ability to implement an arbitrary transformation of spatial patterns. In 2014, Schaeffer gave a construction of a two-dimensional physically universal cellular automaton. We construct a one-dimensional version of the automaton and a reversibly universal automaton.

Abstract: In the framework of computable topology, we propose an approach how to develop higher effective descriptive set theory. We introduce a wide class $\mathbb{K}$ of effective $T_0$-spaces admitting Borel point recovering. For this class we propose the notion of a $(\alpha,m)$-retractive morphism that give a great opportunity to extend classical results from EDST to the class $\mathbb{K}$. We illustrate this by several examples.

Abstract:Substrate cycles in metabolic networks play a role in various forms of homeostatic regulation, ranging from thermogenesis to the buffering and redistribution of steady-state populations of metabolites. While the general problem of enumerating these cycles is #P-hard, it is unclear if this result holds for realistic networks where e.g. pathological vertex degree distributions or minors may not exist. We attempt to address this gap by showing that the problem of counting directed substrate cycles (#DirectedCycle) remains #P-complete (implying #P-hardness for enumeration) for any superclass of cubic 3-connected bipartite planar digraphs, and at the limit where all reactions are reversible, that the problem of counting undirected substrate cycles (#UndirectedCycle) is #P-complete for any superclass of cubic 3-connected bipartite planar graphs where the problem of counting Hamiltonian cycles is #P-complete. Lastly, we show that unless NP=RP, no FPRAS can exist for either counting problem whenever the Hamiltonian cycle decision problem is NP-complete.

Abstract:We consider complexity theory for Koepke machines, or Ordinal Turing Machines (OTMs), and define the OTM analogue of the satisfiability problem, $\infty$-SAT. We show that $\infty$-SAT is in $\infty$-NP and is $\infty$-NP-hard, but not OTM decidable.

Hugo Nobrega and Arno Pauly – Game characterizations and lower cones in the Weihrauch degrees

Abstract:We introduce generalized Wadge games and show that each lower cone in the Weihrauch degrees is characterized by such a game. These generalized Wadge games subsume the original Wadge games, the eraser and backtrack games, as well as the variants of Semmes’ tree games. In particular, we propose that the lower cones in the Weihrauch degrees are the answer to Andretta’s question on which classes of functions admit game characterizations. We then discuss some applications of such generalized Wadge games.

Abstract:In this paper we use Koepke-Seyfferth’s ordinal Turing Machines to generalise the notion of type two computability to $2^\kappa$. Then we start the study of the computational properties of $R_\kappa$, a real closed field extension of the real line of cardinality $2^\kappa$, defined by the first author using surreal numbers and proposed as the candidate for generalising real analysis. In particular we introduce representations of $R_\kappa$ under which the field operations are computable. Finally we show that this framework is suitable for generalising the classical Weihrauch hierarchy. In particular we start the study of the computational strength of the generalised version of the Intermediate Value Theorem.

Thierry Monteil – A universal oracle for signal machines

Abstract:We construct two universal oracles for signal machines, one via the binary expansion of irrational numbers, another via their continued fraction expansion, settling a conjecture of Durand-Lose (CiE 2013). The latter is optimal in the number of speeds and irrational parameters involved in the construction (three and one respectively).

Gleb Novikov – Randomness deficiencies

Abstract:The notion of an infinite random sequence was introduced by Martin-Löf using an algorithmic approach. He also defined the so-called randomness deficiency function, which shows how close random sequences are to being non-random (in some natural sense). Other deficiency functions can be obtained from the Levin-Schnorr theorem, which describes randomness in terms of Kolmogorov complexity. The difference between all of these functions is bounded by a logarithmic term. In this paper we show that the difference between some deficiencies can be as large as possible.

Abstract:We consider clockable ordinals for Infinite Time Turing Machines (ITTMs), i.e., halting times of ITTMs on the empty input. It is well-known that, in contrast to the writable ordinals, the set of clockable ordinals has `gaps’. In this paper, we show several results on gaps, mainly related to the admissible ordinals they may properly contain. We prove that any writable ordinal can occur as the order type of the sequence of admissible ordinals in such a gap. We give precise information on their ending points. We also investigate higher rank ordinals (recursively inaccessible, etc). Moreover, we show that those gaps can have any reasonably effective length (in the sense of ITTMs) compared to their starting point.

Abstract:The decidability of the finiteness problem for automaton groups is a well-studied open question on Mealy automata. We connect this question of algebraic nature to the periodicity problem of one-way cellular automata, a dynamical question known to be undecidable in the general case. We provide a first undecidability result on the dynamics of one-way permutive cellular automata, arguing in favor of the undecidability of the finiteness problem.

Abstract:In 1962, Hungarian mathematician Tibor Radó introduced the busy beaver competition for Turing machines: in a class of machines, find one which halts after the greatest number of steps when started on the empty input. In this paper, we generalize the busy beaver competition to the infinite time Turing machines (ITTMs) introduced by Hamkins and Lewis in 2000. We introduce two busy beaver functions on ITTMs and show both theoretical and experimental results on these functions. We give in particular a comprehensive study, with champions for the busy beaver competition, of the classes of ITTMs with one or two states (in addition to the halting and limit states). The computation power of ITTMs is humongous and thus makes the experimental study of this generalization of Radó’s competition and functions a daunting challenge. We end this paper with a characterization of the power of those machines when the use of the tape is limited in various ways.
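For comparison with the ITTM generalization above, the classical finite busy beaver competition can be simulated directly. The machine below is the well-known 2-state, 2-symbol champion, which halts after 6 steps with 4 ones on the tape (the simulator's interface is our own sketch):

```python
def run_turing_machine(delta, start='A', halt='H', max_steps=10**6):
    """Simulate a one-tape Turing machine started on the all-zero tape.
    delta maps (state, symbol) -> (write, move, next_state),
    with move in {-1, +1}.  Returns (steps, number of ones) on halt."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != halt:
        if steps >= max_steps:
            raise RuntimeError("no halt within step budget")
        write, move, state = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

# Rado's 2-state busy beaver champion: 6 steps, 4 ones
champion = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}
print(run_turing_machine(champion))  # (6, 4)
```

For ITTMs no such direct simulation is possible on a computer, since the machines run through transfinite limit stages; this is part of what makes the experimental study in the paper a daunting challenge.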

Neil Lutz and Donald Stull – Dimension Spectra of Lines

Abstract:This paper investigates the (effective Hausdorff) dimension spectra of lines in the Euclidean plane. Given any line $L$ with slope $a$ and vertical intercept $b$, we show that if the effective Hausdorff dimension $dim(a, b)$ is equal to the effective packing dimension $Dim(a, b)$, then the dimension spectrum of $L$, $\{dim(x, ax + b) : x \in \mathbb{R}\}$, contains a unit interval.

Abstract:The forbidding and enforcing paradigm was introduced by Ehrenfeucht and Rozenberg as a way to define families of languages based on two sets of boundary conditions. In this paper, an fe-system defines a single language. We investigate fe-systems where both the forbidding and enforcing sets are finite, and show that the languages that they define are regular. We prove that the class of languages defined by finite fe-systems lies strictly between the class of strictly locally testable languages and the class of locally testable languages.

Informal presentations

Abstract:A set $A\subseteq \omega$ is \emph{cototal} under enumeration reducibility if $A\le_e \overline{A}$, that is, if the complement of $A$ is total under enumeration reducibility. We show that the $e$-degrees of cototal sets characterize the $e$-degrees of maximal anti-chain complements, the $e$-degrees of enumeration-pointed trees on $\omega^{<\omega}$, and the $e$-degrees of languages of minimal subshifts on $2^\omega$.

As a consequence, we obtain a complete characterization of the Turing degree spectra of nontrivial minimal subshifts: they are the enumeration cones of cototal sets. We also obtain an application to computable structure theory, showing that the enumeration cones of cototal sets characterize those structure spectra which are Turing-upward closures of $F_\sigma$ sets of reals.

Abstract:In programming languages, “name binding” is about the association of identifiers or “names” with “values”, passive or active data-objects. In imperative languages, identifiers are names of states, i.e. associations of locations with data. The data of the locations is mutable, and operations in expressions like “X=X+2” are state transformations. In mathematics, names appear in equations like “5-X=2”. The connection between names and values is thought of as immutable, an idea which functional programming languages subscribe to. In the praxis of programming, both kinds of binding are needed. For that reason, modern programming languages have “namespaces” which manage the reference of identifiers to data explicitly and rely on the separation of identity and state. In CLOJURE, for example, an identifier refers to a series of shared immutable data on a time-axis (point-in-time values). “Change” corresponds to function application, which moves the state forward. In 1969, Dana Scott used topological ideas to obtain a mathematical model for functions as operations in the lambda-calculus. The major contribution of Scott is the “inverse limit construction”, a generic method by which value-spaces can be constructed that are isomorphic to their function- (and other) spaces. This construction is here adapted to name-binding. The inverse limit construction does not work on sets but requires domains, complete partial orders (cpo’s) with a least element. Of two objects a,b with a>b in a cpo, a carries the same information as b but with additional detail. The inverse limit construction builds omega-chains of increasing embedded domains, commencing from an initial domain. To use the inverse limit construction as a method which delivers consistent name-binding and global information storage, the initial domain is interpreted as a namespace of potential names which are not yet used or mentioned.
For any a,b of the initial domain, a#b means that it is still undecided whether a>b or b>a will hold in the future. That missing information comes from an input stream of partial orders. Each stream element is processed by a finite automaton whose operations are derived from the axioms and theorems of domain theory. The automaton activates names in the namespace and assigns values to (some of) them in a new domain (state) which it generates. The automaton terminates for all finite input, and the construction of a chain of embedded domains relative to a stream is confluent. Name-binding by inverse limit construction is monotone and accumulative. Due to the symmetry between domain objects and relations on many levels, information from the stream is stored redundantly in each domain and globally in the chain as a whole. When information is locally lost, it can often be reconstructed or salvaged. The principles of continuous name binding are demonstrated “hands on” with the help of the implemented automaton and displayed graphically via chains of embedded domains.

Abstract:We introduce Monte Carlo computability as a probabilistic concept of computability on infinite objects and we prove that Monte Carlo computable functions are closed under composition. We can mutually separate the classes of multi-valued functions which are non-deterministically computable, Las Vegas computable and Monte Carlo computable, respectively. These separations can also be witnessed by natural examples of computational problems. As a specific problem which is Monte Carlo computable but neither Las Vegas computable nor non-deterministically computable, we study the problem of sorting infinite sequences that was recently introduced by Neumann and Pauly. Their results allow us to provide some conclusions on the relation of algebraic models to Monte Carlo computability.

We identify conditions, satisfied by dyadic rationals, that generically include the compact ‘hyper’ spaces of (i) nonempty closed subsets of X w.r.t. Hausdorff metric, and of (ii) equicontinuous functions on X.

The Cartesian closure thus obtained allows us to assert the computability of (iii) Fréchet distances between curves and between loops, as well as of (iv) constrained/Shape Optimization.

Abstract:In 1971 B. Cooper proved that there exists a 2-c.e. Turing degree which does not contain a c.e. set. Hence, he showed that the second level of the Ershov hierarchy is proper. In this paper we investigate proper levels of some extensions of the Ershov hierarchy to higher levels of the arithmetical hierarchy. Thus we contribute to the theory of $\Delta^0_3$-degrees by extending Cooper’s theorem to some levels in the fine hierarchy within $\Delta^0_3$-sets.

Abstract:The main result of this paper is the decidability of the membership problem for 2×2 nonsingular integer matrices. Namely, we will construct the first algorithm that for any nonsingular 2×2 integer matrices M_1,…,M_n and M decides whether M belongs to the semigroup generated by {M_1,…,M_n}.

Our algorithm relies on a translation of numerical problems on matrices into combinatorial problems on words. It also makes use of some algebraic properties of well-known subgroups of GL(2,Z) and various new techniques and constructions that help to convert matrix equations into the emptiness problem for intersections of regular languages.
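To illustrate the problem statement (not the paper's algorithm, which is a genuine decision procedure): a naive search enumerates products of generators by increasing length and checks for the target. Without a length bound such a search need not terminate on non-members, which is exactly why decidability is nontrivial. Names and the bound are our own:

```python
def in_semigroup_bounded(generators, target, max_length):
    """Search for `target` among products of at most `max_length`
    generators (2x2 integer matrices as tuples of tuples).
    Only a semi-decision up to the bound, for illustration."""
    def mul(a, b):
        return tuple(
            tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
            for i in range(2)
        )
    frontier = set(generators)  # all products of the current length
    for _ in range(max_length - 1):
        if target in frontier:
            return True
        frontier = {mul(m, g) for m in frontier for g in generators}
    return target in frontier

shear = ((1, 1), (0, 1))
print(in_semigroup_bounded([shear], ((1, 3), (0, 1)), max_length=5))  # True: shear^3
print(in_semigroup_bounded([shear], ((2, 0), (0, 2)), max_length=5))  # False
```

The exact-integer arithmetic matters: matrix entries can grow exponentially in the product length, so floating point would quickly become unreliable.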

Patrick Sibelius – The Axiom of Choice and Peano Arithmetic do not match

Abstract:Using a modification of the semantic tableau method [1] that generates a tree structure with paths of increasing finite term structures (a sort of Herbrand [2] structures in the making), we show that Peano arithmetic (PA) (being deductively incomplete in the sense proven by Gödel (GI1) and yet deductively consistent as proven by Gentzen (GC)) does not have any standard model with the set $N$ of all natural numbers as its domain. The result is unexpected and thus, of course, in need of comments.

Firstly, the claim that $PA$ is indeed true in the natural model $\mathcal{N}$ of elementary arithmetic is not strictly provable (cf. Gödel’s second incompleteness theorem (GI2)), but it is supported by arguments that rely on higher-order principles, notably the Axiom of Choice (AC). Using AC one can argue that countably infinite paths of a semantic tableau can be treated so as to determine complete consistent theories in the formal language in question, which in their turn determine countable models of the theories. Secondly, since such an argument is not first-order, it is (according to Hilbert’s Thesis and GI2) either open to various (intuitive) interpretations or else it is inconsistent. The disproof presented here of the claim that PA is true in $\mathcal{N}$ boils down to the following: PA together with first-order deductive calculi (here treated using the modified semantic tableaux) is not consistent with the interpretation of AC that is used in the argument to introduce the complete extension and $\mathcal{N}$ at the limit of some specific path in the semantic tableau.

From the point of view of computing and computer science, standard set-theoretical model theory is neither natural nor feasible, because of its abundant explicit use of AC for forming actually infinite structures and limits. Computers cannot handle actually infinite structures, and they cannot in general construct limits (exactly) as the number of steps tends to infinity. Yet computing and computers, as well as all recursive definitions in the deductive calculus, allow for ever ongoing (i.e. potentially infinite) progressions and procedures. It is a central idea of set theory that one can represent (and identify) potentially infinite procedures always by their limits, viz. some specific actually infinite structures. The history of mathematics reveals that the idea is indeed not self-evident. Originally limit constructions were used for the purpose of handling approximations and averages, not for constructing new objects as limits (cf. Bolzano’s, Cauchy’s and Weierstrass’ $\epsilon$-$\delta$ calculus). By adopting set theory with AC, the latter purpose of creating new objects overshadowed and finally expunged the original purpose of limit constructions. However, if one regards AC in a more discriminative manner, then the original purpose of handling means and approximations can be extracted and put to use again. This is done here within a formal semantics with ever increasing semantic tableaux. Then it becomes evident that one can make a consistent use of the idea of the natural model of arithmetic only if one recognizes it as a sort of average structure, or as a structure in which the concepts of average and approximation are represented, too. Clearly, this makes the structure different from a model in the model-theoretic sense. Almost a century ago Thoralf Skolem [1] seems to have been contemplating a similar (reversed) conception as a consequence of his discovery that finiteness and infinite cardinals were not first-order concepts.

Abstract:We introduce a generalization of the Pascal triangle based on binomial coefficients of finite words. These coefficients count the number of times a word appears as a subsequence of another finite word. Similarly to the Sierpiński gasket, which can be built as the limit set, for the Hausdorff distance, of a convergent sequence of normalized compact blocks extracted from the Pascal triangle modulo 2, we describe and study the first properties of the subset of [0,1]x[0,1] associated with this extended Pascal triangle modulo a prime p.
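The binomial coefficient of finite words mentioned above, often written $\binom{u}{v}$, counts the occurrences of $v$ as a (scattered) subsequence of $u$ and reduces to the usual binomial coefficient on unary words. It is computed by a standard dynamic program (a sketch with our own naming):

```python
def word_binomial(u, v):
    """Number of occurrences of v as a subsequence of u.
    dp[j] = number of ways to match the first j letters of v
    within the prefix of u processed so far."""
    dp = [1] + [0] * len(v)
    for letter in u:
        # traverse v right to left so each letter of u
        # contributes at most once to each match
        for j in range(len(v) - 1, -1, -1):
            if v[j] == letter:
                dp[j + 1] += dp[j]
    return dp[len(v)]

print(word_binomial("abab", "ab"))  # 3 occurrences of "ab" as a subsequence
print(word_binomial("aaaa", "aa"))  # 6 == C(4, 2): the unary case
```

Reducing these coefficients modulo a prime $p$ yields the entries of the extended Pascal triangle that the abstract studies.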

Abstract:We consider the degree spectrum of a structure from the point of view of enumeration reducibility and $\omega$-enumeration reducibility. We will give an overview of several structural properties of degree spectra and their co-spectra, such as a minimal pair theorem and the existence of quasi-minimal degrees for degree spectra, and obtain as corollaries some fundamental theorems about enumeration degrees. We will show that every countable ideal of enumeration degrees is a co-spectrum of a structure, and that if a degree spectrum has a countable base then it has a least enumeration degree. Next we investigate the $\omega$-enumeration co-spectra and show that not every countable ideal of $\omega$-enumeration degrees is an $\omega$-co-spectrum of a structure.

Abstract:We apply the theory of average case complexity for real functions recently introduced by Schröder, Steinberg and Ziegler to the famous N-body problem and show for the special case of the planar circular restricted three body problem that it can be computed in polynomial time on average.

Abstract:We argue that Turing’s analysis of computation in his seminal 1936 paper, particularly of type (ii), where Turing machines are connected to formal proofs in arithmetic, can be considered a completeness result very similar to Gödel’s completeness theorem for FOL. In both cases we get what we call {\bf mechanical completeness} for a mechanical/formal notion given by “rules” vis a vis a “semantic” notion of consequence/natural computation. But in both cases mechanical completeness holds “by design”, viz. it has a price: the language for which it is proved is diluted in its resources of expression. We suggest that both Gödel’s 1930 FOL completeness (for deducibility vis a vis all FO set models) and Turing completeness dilute the full natural notions in the manner of Henkin for higher order logic by means of his {\bf general models}. Nonstandard models are introduced, less is expressed, the validities are minimized, and now mechanical completeness becomes possible. This is a common analysis of the second order logic case and some modal logic cases (reducible to the second order case). We argue for a universal Skolem (1922)-Henkin key to mechanical completeness: introduce nonstandard models that are, expressibility-wise, “the least common denominator”; now mechanical rules can enumerate all the trimmed validities. In a nutshell: mechanical completeness is the tip of an iceberg, and the iceberg is an expressive incompleteness result.

We use as a tool Kripke’s proof of the Church-Turing thesis from Hilbert’s thesis, which states that the steps of any mathematical argument can be given in a language based on first order logic.

In his proof Kripke uses Hilbert’s thesis to make every step of deduction in mathematical computations correspond to logical consequence in FOL, and then uses Gödel’s completeness theorem and Turing’s analysis of type (ii) to prove the Church-Turing thesis. The relativization to Hilbert’s thesis is the analogue of the Skolem-Henkin introduction of general models to “kill” expressibility. We conjecture that vis a vis full semantic expressibility no mechanical completeness may be given.

The above leads us to discuss the role of definability in judgments of the adequacy of mechanical deductions and Turing computations vis a vis reasoning in informal mathematics, where full expressibility is available.

Richard Whyman – On the Computation and Complexity of Arbitrary Physical Computers

Abstract: Over the years, various forms of physical computation whose method of computation is clearly distinct from that of the classic Turing machine model have been proposed and indeed utilised; quantum computation is one of the most notable of these.

Various methods have already been proposed for describing physical computers in general. However, when it comes to describing efficiency, attention has typically focused on deriving complexity measures that are specific to particular physical systems and are dependent on their discrete or indiscrete context.

In this talk I shall present a new method for describing physical computation that avoids the time-dependent sequential algorithm and instead looks to the first-order theory of a given physical system for describing its decision capabilities. Computability-wise this new method suggests, as expected, that physical computers have the same decision capacity as a Turing machine. However complexity-wise, our method suggests that a physical computer given access to polynomial resources can, by going beyond the sequential algorithm, decide problems in $NP \cap co-NP$ rather than $P$.

Since factoring and the discrete logarithm problem lie in $NP \cap co-NP$ our method potentially explains where the additional power of quantum computation is derived.

Abstract:In this paper we deal with stability languages of networks of Watson-Crick D0L systems (NWD0L systems). We define a word of an NWD0L system to be stable at a node if it never leaves the node and is able to generate only itself. The strong stability language of an NWD0L system consists of those words which are stable in every output node; the weak stability language is the set of words which are stable in at least one output node but not in all of them. It is shown that each word of a recursively enumerable language is an element of the strong stability language of an NWD0L system with two output nodes, and every word of the complement language is an element of the weak stability language of the same NWD0L system.

Shilpa Garg and Tobias Moemke – A PTAS for Gapless MEC

Abstract:We consider the problem Minimum Error Correction (MEC). A MEC instance is an $n \times m$ matrix $M$ with entries from $\{0,1,-\}$. Feasible solutions are composed of two binary $m$-bit strings, together with an assignment of each row $M_i$ of $M$ to one of the two strings. The objective is to minimize the number of mismatches (errors) where the row has a value that differs from the assigned solution string.
The optimization goal is thus to find a feasible solution $(\sigma,\sigma’)$ that minimizes this number of mismatches.

The symbol “$-$” is a wild card that matches both $0$ and $1$. A MEC instance is called gapless, if in each row of $M$ all entries $0$, $1$ are consecutive.
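Under this definition, the cost of a candidate pair of solution strings is easy to evaluate: each row is best assigned to whichever string it mismatches less. An illustrative cost function with our own naming (not the paper's algorithm, which must search over the solution strings themselves):

```python
def mec_cost(matrix, s0, s1):
    """Minimum Error Correction cost of the solution strings (s0, s1):
    each row is assigned to the string it mismatches least;
    '-' is a wild card matching both 0 and 1."""
    def mismatches(row, s):
        return sum(1 for r, c in zip(row, s) if r != '-' and r != c)
    return sum(min(mismatches(row, s0), mismatches(row, s1)) for row in matrix)

# A gapless instance: in each row the 0/1 entries are consecutive
matrix = ["0110", "1101", "010-"]
print(mec_cost(matrix, "0110", "1101"))  # 1: only the last row has an error
```

The hardness lies entirely in choosing $(\sigma, \sigma')$; once they are fixed, the optimal row assignment decomposes row by row as above.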

Gapless-MEC is a relevant problem in computational biology, and it is closely related to the segmentation problems that were introduced by [Kleinberg–Papadimitriou–Raghavan STOC’98] in the context of data mining.

In computational biology, different sequencing machines deliver the DNA sequences in fragments (consecutive $0$s and $1$s); the goal is to reconstruct two DNA solution strings known as haplotypes. Determining these haplotypes is important for population genetic analyses of admixture, migration and selection, but also to study allele-specific gene regulation, compound heterozygosity and their roles in human disease.

Without restrictions, it is known to be $\mathsf{UG}$-hard to compute an $O(1)$-approximate solution to MEC. For both MEC and Gapless-MEC, the best approximation algorithm previous to our result had a logarithmic performance guarantee. Building on work of [Li–Ma–Wang J.Comput.Syst.Sci’02], [Jiao–Xu–Li CPM’04] presented a deterministic PTAS for Binary-MEC. Notably, the minimization version of MEC is considerably harder to approximate than its maximization counterpart. For H2C, [Alon–Sudakov J.Algo.’99] obtained a polynomial time approximation scheme (PTAS). These results address different variants of MEC, but none of them yields a polynomial time approximation scheme (PTAS) for the minimization version of Gapless-MEC.

We settle the approximation status of sub-interval free instances of Gapless-MEC by providing a polynomial time approximation scheme (PTAS). The sub-interval free property ensures that no interval of a string $s$ is a proper subinterval of a string $s'$. The key to this result is to generate a set of partition points $\{p_0, \ldots, p_j\}$ such that each string of the Gapless-MEC instance crosses exactly one partition point $p_j$. For each $p_j$, we provide local constraints that allow, in principle, a decomposition of the problem into independent subproblems. For each subproblem, we compute a feasible assignment of rows that is within a factor of $(1+\epsilon)$ of $OPT$, for sufficiently small $\epsilon > 0$. We further embed these subproblems in a dynamic program that moves in a left-to-right manner over the partition points $p_j$. In the DP, we enumerate all possible assignments of rows over two consecutive partition points simultaneously, which allows the algorithm to find the correct solution in polynomial running time. We show that the dynamic program is a PTAS for sub-interval free instances of Gapless-MEC.
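The sub-interval free property itself is easy to check. The following sketch (our own helper names; rows encoded as in the MEC definition above) tests whether any row's interval of non-wild-card entries is properly contained in another's:

```python
# Hedged sketch: checking the sub-interval free property of a gapless
# MEC instance. Each row's non-'-' entries occupy one interval [l, r];
# the instance is sub-interval free if no such interval is a proper
# subinterval of another. Helper names are ours, for illustration.

def interval(row):
    idx = [i for i, c in enumerate(row) if c != '-']
    return (idx[0], idx[-1]) if idx else None

def sub_interval_free(M):
    ivs = [iv for iv in map(interval, M) if iv is not None]
    # [l1, r1] is a *proper* subinterval of [l2, r2] if it is contained
    # and at least one endpoint is strictly inside.
    return not any((l2 < l1 and r1 <= r2) or (l2 <= l1 and r1 < r2)
                   for l1, r1 in ivs for l2, r2 in ivs)

print(sub_interval_free(["01--", "-10-", "--01"]))  # → True (staircase layout)
print(sub_interval_free(["011-", "-10-"]))          # → False ([1,2] inside [0,2])
```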

Abstract:Fokina, Semukhin, and Turetsky [3] proposed the following generalization of the degree spectra of structures. Let $E$ be an equivalence relation. The $E$-{\it spectrum} of a structure $\mathcal{S}$ with universe $\omega$ is
$$\text{DgSp}_E(\mathcal{S}) = \{\deg(\mathcal{A}) : \mathcal{A} \mathrel{E} \mathcal{S}\},$$
where $\deg(\mathcal{A})$ denotes the Turing degree of (the atomic diagram of) $\mathcal{A}$.

Then the classical degree spectrum of $\mathcal{S}$ is its $\cong$-spectrum, while its theory spectrum (as defined by Andrews and Miller [1]) is its $\equiv$-spectrum. In [3], the focus is on degree spectra obtained from $\Sigma_n$-equivalence.

We study degree spectra of bi-embeddability (denoted by $\text{DgSp}_\approx(\mathcal{A})$). We show that any bi-embeddability spectrum is either a singleton or upwards closed, and that no bi-embeddability spectrum can be the union of two cones of Turing degrees.

Montalbán [4] proved that any hyperarithmetical linear order $\mathcal{L}$ is bi-embeddable with a computable one, thus showing, in our terminology, that ${\bf 0} \in \text{DgSp}_\approx(\mathcal{L})$. We show that the same holds for any equivalence structure. Next, we begin to investigate graphs consisting of finite components (the same class of structures has been studied by Csima, Khoussainov, and Liu [2] with respect to categoricity).

We conclude by comparing $\cong$-spectra and $\approx$-spectra in the following way. A structure $\mathcal{B}$ is a {\it bi-embeddability basis} for a structure $\mathcal{A}$ if $\mathcal{A}$ and $\mathcal{B}$ are bi-embeddable and $\text{DgSp}_\approx(\mathcal{A})=\text{DgSp}_{\cong}(\mathcal{B})$. We prove that there are structures with no bi-embeddability basis, and we provide a sufficient structural condition for their existence.

Kentaro Sato – A study on complexity of winning strategy

Abstract:The recursion-theoretic complexity of winning strategies for games of complexity below $\Delta^0_2$ will be considered within a reverse-mathematical framework.

Abstract:A new approach for obtaining analytical expressions of solutions is developed for complex systems of ordinary differential equations. In this method an approximation is represented as a segment of a series in the powers of the step size $\tau$ of the argument. This segment approximates a numerical solution based on a finite difference scheme (the basic difference scheme), which in turn approximates the exact solution of the differential equation. To guarantee that the segment of the power series converges to the numerical solution we introduce a digit-shifting procedure: in each layer we redistribute the values of the coefficients of the power series so that they do not exceed the inverse value of the time step. This results in frequent value changes in the coefficients with higher orders of $\tau$. The method is, in fact, constructed to take into account the operations of a real computer, so it is called the method of computer analogy.

The solution of each equation in the system is represented as a sum of two parts, a deterministic part and a pseudo-random one. Both parts are segments of the power series in powers of $\tau$. In the deterministic part we retain the terms up to the order of accuracy of the basic difference scheme. The values from the pseudo-random part are added to the deterministic part in each layer. In order to exclude intermediate computations we obtain a theoretical expression for the probability of the values transferred from the pseudo-random part and replace them with their expected values.

For the well-known Lorenz system a series of the third degree is required (if one uses a basic finite difference scheme of first-order accuracy). Already for the series truncated at the first power, the method provides an analytical expression for the approximation which nevertheless reproduces the complicated behavior of a solution of strange-attractor type. The approximation also provides quantitative information (with the appropriate evaluation). The analogous first analytical approximation for the Van der Pol oscillator demonstrates a limit cycle with a scale comparable to that of the exact solution, and the characteristics of the cycles are expressed analytically. This approximation (as a "linear part" of the solution) is the basis for the next-order approximations. We are now constructing the second and further approximations, searching for analytical expressions of such solutions. Perspectives of the computer analogy method for constructing analytical or semi-analytical approximations of solutions of other systems of nonlinear equations are also discussed.
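For concreteness, a first-order basic difference scheme for the Lorenz system, of the kind the method's power series approximates, can be sketched as a plain Euler iteration. The parameter values and step size below are the classical illustrative choices, not taken from the paper, and the power-series machinery of the method itself is not reproduced here:

```python
# Illustrative sketch: a first-order (Euler) basic difference scheme for
# the Lorenz system. The method of computer analogy approximates the
# numerical solution of such a scheme by a segment of a power series in
# the step size tau; here only the underlying scheme is shown.
# Parameter values are the classical ones, chosen for illustration.

def lorenz_euler(x, y, z, tau, steps, sigma=10.0, rho=28.0, beta=8.0/3.0):
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + tau * dx, y + tau * dy, z + tau * dz
    return x, y, z

# Integrate from (1, 1, 1) with step tau = 1e-3 up to t = 10.
print(lorenz_euler(1.0, 1.0, 1.0, 1e-3, 10000))
```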

Abstract:This talk relates two classical concepts of the modal $\mu$-calculus ($L_\mu$) literature — the relationship between $L_\mu$ and parity games, and the $L_\mu$ alternation hierarchy — to argue that the former is even more robust than previously shown. It considers whether the descriptive complexity of $L_\mu$ model-checking games, previously known to depend on the syntactic complexity of a formula, depends in fact on its semantic complexity.

The index of an $L_\mu$ formula counts the alternations between $\mu$ and $\nu$ and is a way of measuring formula complexity. Indeed, model-checking an $L_\mu$ formula of syntactic index $I$ reduces to solving parity games of the same index; conversely, a formula of index $I$ suffices to describe the winning regions of parity games of index $I$. This talk argues that this relationship extends to account for the semantic complexity of formulas: if the winning regions of the model-checking games generated by a formula $\Psi$ can be described by a formula $\Phi$ — written $\Phi$ interprets $\Psi$ — then $\Psi$ is equivalent to a formula of the same index as $\Phi$. It then considers the converse problem: if a formula $\Psi$ is equivalent to some formula of index $I$, is it then interpreted by some formula of index $I$? This is shown to be the case for the alternation class $\mathit{ML}$, and for the classes $\Pi^\mu_1$ and $\Sigma^\mu_2$ as long as the input is in disjunctive form, and finally for any disjunctive alternation class, as long as the input formula is in co-disjunctive form.
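As a small illustration of the fixpoint semantics underlying such model-checking games, the single-alternation formula $\mu X.\,(p \vee \Diamond X)$ ("a $p$-state is reachable") can be evaluated by Knaster-Tarski iteration. The transition system and labelling below are a toy example of our own:

```python
# Toy illustration (invented example): evaluating mu X.(p \/ <>X) on a
# finite transition system by least-fixpoint (Knaster-Tarski) iteration.
# The fixpoint collects exactly the states from which a p-state is reachable.

def lfp_reach(states, edges, p):
    """Least fixpoint of X -> p  union  pre(X)."""
    X = set()
    while True:
        newX = p | {s for s in states if any(t in X for t in edges.get(s, ()))}
        if newX == X:
            return X
        X = newX

states = {0, 1, 2, 3}
edges = {0: [1], 1: [2], 3: [3]}   # state 3 only loops on itself
p = {2}                            # the p-labelled states
print(sorted(lfp_reach(states, edges, p)))  # → [0, 1, 2]
```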

This connects two important concepts of $L_\mu$ theory: the index problem and the relationship between $L_\mu$ and parity games. It shows that the latter can be extended in a natural way to account for the former: the descriptive complexity of the model-checking games of $\Psi$ is an upper bound on the semantic complexity of $\Psi$. It leaves open the question of whether a formula of semantic index $I$ always has model-checking games of descriptive complexity $I$, but shows that this holds at least for the alternation classes currently known to be decidable, and then some. The $L_\mu$–parity game relationship is even more robust than previously thought.

The practical consequence of this insight is that some formulas can be simplified by analysing the model-checking games they generate. On the theoretical side, the study of the descriptive complexity of the model-checking games is a novel approach to the $L_\mu$-index problem. Although the three decision procedures for $\mathit{ML}$, $\Pi^{\mu}_1$ and $\Sigma^{\mu}_2$ each use a completely different strategy, they can all be reformulated as ways to describe the winning regions of formulas semantically in the target class. Extending the interpretation theorems of this paper seems likely to lead to characterizations of higher $L_\mu$ alternation classes, providing a potential stepping stone towards further decidability results.

Abstract:We introduce the concept of effective exact dimension for a wide class of metric spaces that are not required to have a computable measure.

Effective dimension was defined by Lutz in (Lutz 2003) for Cantor space and has also been extended to Euclidean space and more recently to general metric spaces (Mayordomo 2016). Exact effective dimension was then considered for Cantor space, first in the case of resource bounds adapted to complexity classes (Hitchcock, Lutz, Mayordomo 2004) and more recently in the constructive and computable cases (Staiger 2016).

We present here the concept of constructive exact dimension for general metric spaces and characterize it in terms of Kolmogorov complexity, using the extension of Kolmogorov complexity to arbitrary metric spaces introduced in (Mayordomo 2016), which defines the Kolmogorov complexity of a point at a given precision. Applications and further research directions are indicated.