Tools

We obtain new results on the precise average-case analysis of the main quantities that arise in algorithms of a broad Euclidean type. We develop a general framework for the analysis of such algorithms, in which the average-case complexity of an algorithm is related to the analytic behaviour, in the complex plane, of the set of elementary transformations determined by the algorithm. The methods rely on properties of transfer operators suitably adapted from dynamical systems theory, and provide a unifying framework for the analysis of the main parameters (digits and continuants) that arise in an entire class of gcd-like algorithms. We carry out a general transfer from the continuous case (Continued Fraction Algorithms) to the discrete case (Euclidean Algorithms), in which Ergodic Theorems are replaced by Tauberian Theorems.
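For intuition about the two parameters the abstract names, here is a minimal Python sketch (ours, not from the paper): it runs Euclid's algorithm and records the digits (partial quotients) of each division step together with the continuants, which obey the recurrence K_i = q_i K_{i-1} + K_{i-2}.

```python
def euclid_trace(u, v):
    """Run Euclid's algorithm on u >= v > 0, recording the quotient
    ('digit') of each division step and the continuants K_i, which
    satisfy K_i = q_i * K_{i-1} + K_{i-2} with K_{-1} = 0, K_0 = 1."""
    digits, continuants = [], []
    k_prev, k = 0, 1          # K_{-1}, K_0
    while v:
        q, r = divmod(u, v)
        digits.append(q)
        k_prev, k = k, q * k + k_prev
        continuants.append(k)
        u, v = v, r
    return digits, continuants
```

For example, `euclid_trace(13, 8)` returns the digits `[1, 1, 1, 1, 2]` and the continuants `[1, 2, 3, 5, 13]`; the number of digits is exactly the number of division steps whose average the paper analyses.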

We propose a notion of interval object in a category with finite products, providing a universal property for closed and bounded real line segments. The universal property gives rise to an analogue of primitive recursion for defining computable functions on the interval. We use this to define basic arithmetic operations and to verify equations between them. We test the notion in categories of interest.

There are known algorithms based on continued fractions for comparing fractions and for determining the sign of 2×2 determinants. The analysis of such extremely simple algorithms leads to an incursion into a surprisingly wide variety of domains. We take the reader on a light tour of dynamical systems (symbolic dynamics), number theory (continued fractions), special functions (multiple zeta values), functional analysis (transfer operators), numerical analysis (series acceleration), and complex analysis (the Riemann hypothesis). These domains all eventually contribute to a detailed characterization of the complexity of comparison and sorting algorithms, either on average or in probability.
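A continued-fraction comparison of the kind alluded to can be sketched as follows; this is an illustrative Python version with our own names and conventions, not the paper's code. Both fractions are expanded by the Euclidean algorithm, and digits are compared in alternating order (the order of the tails reverses at each level of the fraction); a terminated expansion behaves like an infinite next digit.

```python
import itertools

def cf_digits(p, q):
    """Yield the continued fraction digits of p/q (p, q positive integers)."""
    while q:
        a, r = divmod(p, q)
        yield a
        p, q = q, r

def cf_compare(p1, q1, p2, q2):
    """Return -1, 0 or 1 as p1/q1 is <, = or > p2/q2.
    Digits at even depth compare normally, those at odd depth in
    reverse; a terminated expansion acts as an infinite digit."""
    INF = float('inf')
    for depth, (a, b) in enumerate(itertools.zip_longest(
            cf_digits(p1, q1), cf_digits(p2, q2), fillvalue=INF)):
        if a != b:
            sign = 1 if a > b else -1
            return sign if depth % 2 == 0 else -sign
    return 0
```

The cost of the comparison is the number of common digits, which is exactly the kind of quantity whose distribution the cited analysis characterizes.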

Several methods to perform exact computations on real numbers have been proposed in the literature. In some of these methods real numbers are represented by infinite (lazy) strings of digits. It is a well-known fact that, when this approach is taken, the standard digit notation cannot be used: new forms of digit notation are necessary. The usual solution to this representation problem consists in adding new digits to the notation, quite often negative digits. In this article we present an alternative solution. It consists in using non-natural numbers as "base", that is, in using a positional digit notation where the ratio between the weights of two consecutive digits is not necessarily a natural number, as in the standard case, but can be a rational or even an irrational number. We discuss in full detail one particular example of this form of notation: namely the one having two digits (0 and 1) and the golden ratio as base. This choice is motivated by the pleasing properties enjoyed by the golden ratio notation. In particular, the algorithms for the arithmetic operations are quite simple when this notation is used.
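A small Python sketch (ours, not the paper's) shows what such a notation looks like: digit strings over {0, 1} are evaluated with the golden ratio as the base, and the identity phi^2 = phi + 1 makes distinct strings denote the same number, which is the redundancy the arithmetic algorithms exploit.

```python
PHI = (1 + 5 ** 0.5) / 2   # the golden ratio, used here as the base

def phinary_value(digits):
    """Evaluate a 0/1 digit string such as '100.01' in base PHI."""
    int_part, _, frac_part = digits.partition('.')
    value = 0.0
    for k, d in enumerate(reversed(int_part)):
        value += int(d) * PHI ** k        # weight PHI**k left of the point
    for k, d in enumerate(frac_part, start=1):
        value += int(d) * PHI ** -k       # weight PHI**-k right of the point
    return value

# Since PHI**2 == PHI + 1, the strings '011' and '100' denote the
# same number; e.g. '100.01' denotes PHI**2 + PHI**-2 = 3 exactly.
```

For instance, `phinary_value('100.01')` evaluates (up to floating-point rounding) to 3.0, and `phinary_value('011')` equals `phinary_value('100')`.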

In this paper we present the Coq formalisation of the QArith library, which is an implementation of rational numbers as binary sequences for both lazy and strict computation. We use the representation also known as the Stern-Brocot representation for rational numbers. This formalisation uses advanced machinery of the Coq theorem prover and applies recent developments in formalising general recursive functions. This formalisation highlights the rôle of type theory both as a tool to verify hand-written programs and as a tool to generate verified programs.
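For intuition about the representation (the QArith library itself is in Coq, so this Python sketch is only illustrative), a positive rational's Stern-Brocot encoding is the path of left/right moves to it in the Stern-Brocot tree, computable with the subtractive Euclidean algorithm:

```python
def stern_brocot(p, q):
    """Encode the positive rational p/q as its path of 'L'/'R' moves
    in the Stern-Brocot tree, via the subtractive Euclidean algorithm.
    The loop stops when p == q, i.e. at the node representing p/q."""
    path = []
    while p != q:
        if p > q:
            path.append('R')   # p/q > 1: descend into the right subtree
            p -= q
        else:
            path.append('L')   # p/q < 1: descend into the left subtree
            q -= p
    return ''.join(path)
```

For example, `stern_brocot(3, 2)` gives `'RL'` and `stern_brocot(2, 5)` gives `'LLR'`; the root 1/1 is the empty word. The binary sequence is finite for rationals, which is what makes both lazy and strict computation over it possible.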

We discuss the use of the lazy evaluation scheme as a coding tool in some algebraic manipulations. We show, on several examples, how to process infinite power series and other open-ended data structures with co-recursive algorithms, which greatly simplify the coding of recurrence relations and the solving of equations in the power series domain. The important point is not the "infinite" length of the data, but the fact that the algorithms use open recursion, so the user never thinks about truncation.

1 Introduction

This article develops some applications of functional lazy evaluation schemes to symbolic calculus. Neither the idea of non-strict semantics, nor its application to generating infinite, open structures such as power series, is new; see for example [1, 2], some books on functional programming ([3, 4]), etc. Lazy evaluation (or call by need) is a protocol which delays the evaluation of the arguments of a function: while evaluating f(x), the code for f is entered, ...
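The article's examples use a lazy functional language, but the co-recursive style can be imitated in Python with generators. This sketch (ours, under that translation) defines the coefficients of exp(x) through the equation y = 1 + integral(y): the generator reads back its own already-produced output, and no truncation order is ever fixed in advance.

```python
import itertools
from fractions import Fraction

def exp_series():
    """Coefficients of exp(x), defined co-recursively via y = 1 + integral(y).
    Integration divides the (n-1)-th coefficient by n, so each coefficient
    only depends on earlier ones, which makes the self-reference well-founded."""
    produced = []                          # the part of the stream emitted so far
    def body():
        yield Fraction(1)                  # constant of integration, y(0) = 1
        n = 1
        while True:
            yield produced[n - 1] / n      # n-th coefficient of integral(y)
            n += 1
    for coeff in body():
        produced.append(coeff)
        yield coeff
```

Taking `list(itertools.islice(exp_series(), 5))` yields the exact rationals 1, 1, 1/2, 1/6, 1/24; the consumer decides how many terms to demand, never the definition.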

Computers manipulate approximations of real numbers, called floating-point numbers. The calculations they make are accurate enough for most applications. Unfortunately, in some (catastrophic) situations the floating-point operations lose so much precision that they quickly become irrelevant. In this article, we review some of the problems one can encounter, focussing on the IEEE 754-1985 standard. We give a (sketch of a) semantics of its basic operations, then abstract them (in the sense of abstract interpretation) to extract information about the possible loss of precision. The expected application is abstract debugging of software ranging from simple on-board systems (which increasingly use off-the-shelf micro-processors with floating-point units) to scientific codes. The abstract analysis is demonstrated on simple examples and compared with related work.
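Two textbook instances of the precision loss discussed (standard examples, not taken from the article) can be reproduced in a few lines of Python, whose floats are IEEE 754 double precision:

```python
import math

# Absorption: 1.0 is smaller than one ulp of 1e16, so adding it changes
# nothing at all.
x = 1e16
print((x + 1.0) - x)                 # 0.0, not 1.0

# Representation error: neither 0.1 nor 0.2 has a finite binary expansion,
# so their rounded sum differs from the rounded 0.3.
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead
```

An abstract interpretation of the kind the article describes aims to flag such operations statically, before the program ever runs on concrete inputs.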

We extend the framework for exact real arithmetic using linear fractional transformations from the non-negative numbers to the extended real line. We then present an extension of PCF with a real type which introduces an eventually breadth-first strategy for lazy evaluation of exact real numbers. In this language, we present the constant redundant if (rif) for defining functions by cases, which, in contrast to parallel if (pif), overcomes the problem of the undecidability of comparison of real numbers in finite time. We use the upper space of the one-point compactification of the real line to develop a denotational semantics for the lazy evaluation of real programs. Finally, two adequacy results are proved: one for programs containing rif and one for those not containing it. Our adequacy results in particular provide a proof of correctness of algorithms for computing single-valued elementary functions.
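The linear fractional transformations at the core of this framework can be illustrated with a small Python sketch (ours, simplified to exact rationals): an LFT is a 4-tuple (a, b, c, d) acting as x -> (ax + b)/(cx + d), and composing two LFTs is the same as multiplying their 2x2 matrices. Iterating x -> 1/(1 + x) evaluates the continued fraction [0; 1, 1, 1, ...], i.e. 1/phi.

```python
from fractions import Fraction

def apply_lft(m, x):
    """Apply the LFT x -> (a*x + b)/(c*x + d); x should be a Fraction."""
    a, b, c, d = m
    return (a * x + b) / (c * x + d)

def compose(m1, m2):
    """Compose two LFTs: identical to multiplying their 2x2 matrices."""
    a1, b1, c1, d1 = m1
    a2, b2, c2, d2 = m2
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)

# x -> 1/(1 + x) as an LFT; composing it with itself n times and applying
# the result to 1 evaluates the continued fraction [0; 1, 1, ..., 1].
m = (0, 1, 1, 1)
power = (1, 0, 0, 1)            # the identity LFT
for _ in range(20):
    power = compose(power, m)
x = apply_lft(power, Fraction(1))
print(float(x))                  # close to 1/phi = 0.6180...
```

In the exact-real setting, such matrix products are consumed lazily, digit by digit; this sketch only shows the algebra of the transformations themselves.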