Abstract

Block structure is a fundamental mechanism for expressing the scope of variables in a computation. Scope can be expressed either explicitly with formal parameters or implicitly with free variables. Making scope explicit is current practice in functional programming: it is called "lambda-lifting." Our thesis addresses the transformations between explicit and implicit scope. We show lambda-dropping to be a useful transformation that can be applied to clarify the structure of programs and to increase the efficiency of recursive functions. In addition, we demonstrate that lambda-dropping is of practical use as a back-end in a partial evaluator.

1 Preface

This document describes most of the work that was done in conjunction with the author's "speciale" (master's thesis). One of the primary subjects of this work is a program transformation called "lambda-dropping." It was first conceived by my thesis advisor, Olivier Dan...

Citations

...ate data needed to invoke procedures and to store the local variables of procedures. This works perfectly because the invocations of and returns from first-order procedures follow a stack discipline [19]. The data needed by a procedure must be available when it is invoked. The free variables are stored in the stack in the data area of the enclosing block. If the procedure associated with the enclosin...

...minate: run p 〈s, d〉 = run ps 〈 , d〉 Operations may have been removed from ps by performing them during the specialization process, so ps can run faster than p. In fact, S is Kleene's S-m-n function [31]. This function is computable and thus it can be implemented: the result is what is called a partial evaluator, denoted PE. Partial evaluation: how run PE 〈p, 〈s, 〉〉 = S(p, 〈s, 〉) Specializing a progr...
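The S-m-n equation can be sketched in Python (illustrative names; this closure-based S captures only the behavioral equation run ps 〈 , d〉 = run p 〈s, d〉, not the operation-removing specialization that a real partial evaluator performs):

```python
def smn(p, s):
    """S(p, s): return the residual program p_s with the static input s frozen in."""
    def p_s(d):
        return p(s, d)      # same result as running p on both inputs
    return p_s

def power(n, x):            # p: a two-input source program
    return x ** n

square = smn(power, 2)      # p_s: power specialized to n = 2
```

Note that here nothing is actually precomputed; the sketch only witnesses that S is computable.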

...yvariant specializer can produce many specialized versions of a source function [8]. 7.3 Lambda-Dropping Our compelling motivation to sort out lambda-lifting and lambda-dropping is partial evaluation [28]. Both program transformations are highly useful within the field of partial evaluation. [Chapter 7, Partial Evaluation] As analyzed by Malmkjær and Ørbæk [34], polyvariant specialization of higher-...

...ituations, and usually vice versa. Appel presents no fewer than six alternatives to flat or deep closures, remarking that "Clearly, there is an infinite variety of closure-representation strategies." [2]. Most of these representations try to combine the best of flat closures with the best of deep closures. With flat closures free variables can be ...
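The flat/deep distinction can be sketched as follows (a toy model with dict environments, not Appel's actual representations): a flat closure copies each free variable into its own record, while a deep (linked) closure keeps a pointer to the enclosing environment frame.

```python
def make_flat_closure(code, env, free_vars):
    # copy only the bindings the body actually needs
    return {"code": code, "env": {v: env[v] for v in free_vars}}

def make_deep_closure(code, env):
    # share the whole enclosing frame via a link
    return {"code": code, "env": env}

def apply_closure(clo, arg):
    return clo["code"](clo["env"], arg)

outer_env = {"a": 10, "b": 20, "unused": 99}
body = lambda env, x: env["a"] + env["b"] + x
flat = make_flat_closure(body, outer_env, ["a", "b"])
deep = make_deep_closure(body, outer_env)
```

Both closures compute the same result; they differ in what they keep alive — the flat closure drops `unused`, the deep one retains the whole frame.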

...ve equations. In each instance, they are defined by the functions of a program. Peyton Jones describes how these "super-combinators" can be used to efficiently compile and execute functional programs [38]. In this work, the translation to supercombinators is called lambda-lifting. To avoid confusion, we refer to Johnsson's algorithm as "lambda-lifting," and to the supercombinator translation algorithm...
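A hand-worked example of the supercombinator idea, sketched in Python (illustrative functions, not Johnsson's algorithm itself): the nested function's free variable becomes an extra parameter, turning it into a closed, top-level recursive equation.

```python
def sum_to_blocked(n):
    def go(i, acc):          # free variable: n
        return acc if i > n else go(i + 1, acc + i)
    return go(1, 0)

# after lifting: go is closed (a "supercombinator"), n passed explicitly
def go_lifted(n, i, acc):
    return acc if i > n else go_lifted(n, i + 1, acc + i)

def sum_to_lifted(n):
    return go_lifted(n, 1, 0)
```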

...re that is passed as an argument. We have taken the alternative approach of removing from the flow graph the definition nodes for any function that is passed as an argument. A control-flow analysis [41] can be used to limit the set of procedures that must be considered in an application of a higher-order procedure, making the former alternative of introducing extra edges a feasible solution since th...

...ntended as an introduction to partial evaluation for someone not familiar with this field of study. It is taken with a few modifications from Consel and Danvy's "Tutorial Notes on Partial Evaluation" [14]. Partial evaluation is a source-to-source program transformation technique for specializing programs with respect to parts of their input. In essence, partial evaluation removes layers of interpretat...
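As a minimal sketch of specialization actually removing work (our own toy example, not from the tutorial notes): a specializer for a power function that, given the static exponent, emits residual Python source with the exponent loop fully unrolled, so the residual program performs no exponent tests at run time.

```python
def specialize_power(n):
    """Emit a residual one-argument function computing x**n by unrolled multiplication."""
    if n == 0:
        return eval("lambda x: 1")
    body = "x"
    for _ in range(n - 1):
        body = f"x * ({body})"      # unroll one multiplication per step
    return eval(f"lambda x: {body}")

power_3 = specialize_power(3)       # residual: lambda x: x * (x * (x))
```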

...rograms are not lambda-lifted and program points are specialized with respect to higher-order values exhibit the problem. Of these, there are few: Lambda-Mix [23] and type-directed partial evaluation [15] are monovariant; Schism [12] and Similix [7] lambda-lift before binding-time analysis; Pell-Mell [33] lambda-lifts after binding-time analysis; ML-Mix [6] does not specialize with respect to higher-or...

...ameter passing semantics, namely call by name, call by value, call by reference, or call by need.¹ This is a self-modifying model. This is basically the approach used in the Glasgow Haskell Compiler [29]. Call by need parameter passing semantics can be implemented in many ways. Chapter 2 (On Block Structure and Trees), 2.1 Introduction: This chapter presents two independent topics both of which are relat...
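Call by need can be sketched with memoized thunks (an illustrative Python model; the "self-modifying" aspect shows up as the thunk overwriting its suspended computation after the first force):

```python
class Thunk:
    """A suspended computation that is run at most once."""
    def __init__(self, compute):
        self.compute = compute
        self.forced = False
        self.value = None

    def force(self):
        if not self.forced:
            self.value = self.compute()   # run the suspension once
            self.forced = True
            self.compute = None           # "self-modify": drop the code
        return self.value

calls = []
t = Thunk(lambda: calls.append("eval") or 42)
```

Forcing `t` repeatedly returns the cached 42; the side effect records that the body ran exactly once.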

...loating phase presented in Figure 3.2. Only the first program transformation step is of interest here. The correctness of the second transformation step follows from Landin's correspondence principle [32]: (let x = e1 in e2) = (λx.e2) e1 The second step is simultaneously applied to all procedure definitions of the program. In a single macro step all function definitions are made global. This is equiva...
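The correspondence principle can be checked directly; a minimal Python rendering of the two sides of the equation:

```python
def let_version():
    x = 3 + 4                          # let x = e1
    return x * x                       # in e2

def lambda_version():
    return (lambda x: x * x)(3 + 4)    # (λx.e2) e1
```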

...ional cost to be determined for several actual models of computation, such as functional languages and imperative languages. Chapter 3 (Lambda-Lifting), 3.1 Introduction: We consider Johnsson's algorithm [4, 26, 27]. Johnsson's target is the G-machine, which can run recursive equations efficiently. Lambda-lifting transforms a block-structured program into a set of recursive equations. These recursive equations c...

... but with a focus on lambda-dropping. Chapter 7 outlines the main application of lambda-dropping: partial evaluation. 1.2 Block Structure The origins of block structure can be traced back to ALGOL 60 [5, 36], known as "The Father of Modern Programming Languages." This was the first language with a syntax formally defined in BNF. It inspired many features found in most modern programming languages, such a...

...de range of applications, the most prominent of which is partial evaluation. Appendix A (CPS Transformation by Fold): This appendix shows a simple implementation of the single-pass CPS transformation of [16], without tail-call optimization. The second, simplifying pass of other CPS transformations is eliminated by recognizing static redexes and reducing them during the transformation, using higher-order ...
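The one-pass idea can be sketched for a tiny lambda-calculus (our own illustrative AST, fresh-name scheme, and helper names, not the implementation in [16]): the transformer takes a meta-level continuation `k`, so the administrative redexes it stands for are reduced at transformation time rather than left for a second pass.

```python
from itertools import count

fresh = count()
def gensym(base):
    return f"{base}{next(fresh)}"

# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)

def cps(term, k):
    """Transform term in non-tail position; k is a meta-level continuation."""
    tag = term[0]
    if tag == "var":
        return k(term)                       # trivial term: no redex emitted
    if tag == "lam":
        _, x, body = term
        c = gensym("k")                      # the lambda gets an explicit continuation
        return k(("lam", x, ("lam", c, cps_tail(body, ("var", c)))))
    _, f, a = term
    def with_f(fv):
        def with_a(av):
            v = gensym("v")                  # name for the serious call's result
            return ("app", ("app", fv, av), ("lam", v, k(("var", v))))
        return cps(a, with_a)
    return cps(f, with_f)

def cps_tail(term, c):
    """Transform term in tail position, with syntactic continuation c."""
    tag = term[0]
    if tag in ("var", "lam"):
        return cps(term, lambda v: ("app", c, v))
    _, f, a = term
    return cps(f, lambda fv:
               cps(a, lambda av: ("app", ("app", fv, av), c)))
```

For example, transforming the identity λx.x yields λx.λk0. k0 x directly, with no administrative beta-redexes in the output.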

...ncy analysis, since each set of functions that share types is reduced in size. Mycroft provides a more detailed discussion of the type-checking problems that may crop up without a dependency analysis [35]. Assuming the lambda-lifted version of the DFA program (see Figure 3.4) was extended with a main expression calling the function r with appropriate arguments, Peyton Jones’s dependency analysis could...

...ism [12] and Similix [7] lambda-lift before binding-time analysis; Pell-Mell [33] lambda-lifts after binding-time analysis; ML-Mix [6] does not specialize with respect to higher-order values; and Fuse [40] does not allow upwards funargs. 7.5 Summary The lambda-dropping algorithm can be fitted as a back-end to a partial evaluator. This allows a polyvariant specializer for higher-order programs to produc...

...of control, the frames on the stack must be preserved whenever the continuation is saved. There are numerous strategies for accomplishing this efficiently, such as the segmented stacks of Hieb et al. [25]. Formal parameters and temporary values are stored on the stack. Operations that read these from the stack are tagged local. Operations that write to the stack are tagged save. We assume that a globa...

...eclarations only, the lambda-lifting step is not necessary. The lambda-dropping algorithm then works in two stages. It handles other binding constructs (such as let or the letType construct of Schism [12]), but we have made no attempt to drop parameters that are bound to variables defined by these constructs (see also Section 5.3.6). 1. Block sinking. Each reference to a function introduces restrictio...
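A hand-worked sketch of the two stages in Python (illustrative, not the algorithm itself): starting from a lifted recursive equation, the helper is sunk into its only caller (block sinking) and its `n` parameter is dropped (parameter dropping), becoming a free variable bound by the enclosing formal.

```python
def go_lifted(n, i, acc):            # lifted form: n is passed on every call
    return acc if i > n else go_lifted(n, i + 1, acc + i)

def sum_to_dropped(n):
    def go(i, acc):                  # sunk and dropped: n is now a free variable
        return acc if i > n else go(i + 1, acc + i)
    return go(1, 0)
```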

...n. This appears to be similar to parameter dropping, and is discussed in Section 5.5.3. 5.5.1 Let floating Peyton Jones et al. describe a number of transformations all classified as "let-floating" [37]. These transformations move let-block value bindings either inwards into let blocks or outwards outside procedure declarations. A number of additional transformations serve to fine-tune the let-block...
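One direction of let-floating, sketched in Python (an analogous illustration only; Peyton Jones et al. work on Haskell let bindings): a binding that does not depend on the lambda's parameter is floated outwards, so the work it names is done once instead of on every call.

```python
def make_scaler_unfloated(xs):
    # the sum is recomputed on every call of the returned function
    return lambda x: x * sum(xs)

def make_scaler_floated(xs):
    total = sum(xs)          # binding floated out of the lambda
    return lambda x: x * total
```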

...function is never returned past the scope of a specialization point that it refers to. Thus, recursive equations offer a convenient format for a partial evaluator. Similix and Schism, for example [7, 12], lambda-lift source programs before specialization and produce residual programs in the form of recursive equations. (Diagram: source block-structured program → lambda-lifting → recursive equations → partia...

...ogram of Figure 7.3, using mere unfolding and no memoization). The first Futamura projection Let us consider a while-loop language as is traditional in partial evaluation and semantics-based compiling [13]. Figure 7.4 displays a source program with several while loops. Specializing the corresponding definitional interpreter (not shown here) using Schism with respect to this source program yields the re...
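A toy while-loop interpreter of the kind such presentations use (our own micro-language, not the thesis's): specializing an interpreter like `run` with respect to a fixed source program — the first Futamura projection — yields a residual program equivalent to compiled code.

```python
def run(prog, store):
    """prog: list of ("assign", var, fn) | ("while", cond_fn, body_prog)."""
    for stmt in prog:
        if stmt[0] == "assign":
            _, var, fn = stmt
            store[var] = fn(store)
        else:
            _, cond, body = stmt
            while cond(store):
                run(body, store)
    return store

# x := 5; r := 1; while x > 0 { r := r * x; x := x - 1 }
factorial_prog = [
    ("assign", "x", lambda s: 5),
    ("assign", "r", lambda s: 1),
    ("while", lambda s: s["x"] > 0, [
        ("assign", "r", lambda s: s["r"] * s["x"]),
        ("assign", "x", lambda s: s["x"] - 1),
    ]),
]
```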

...ix [23] and type-directed partial evaluation [15] are monovariant; Schism [12] and Similix [7] lambda-lift before binding-time analysis; Pell-Mell [33] lambda-lifts after binding-time analysis; ML-Mix [6] does not specialize with respect to higher-order values; and Fuse [40] does not allow upwards funargs. 7.5 Summary The lambda-dropping algorithm can be fitted as a back-end to a partial evaluator. Th...

...esidualizing calls. A monovariant specializer produces at most one specialized function for every source function. A polyvariant specializer can produce many specialized versions of a source function [8]. 7.3 Lambda-Dropping Our compelling motivation to sort out lambda-lifting and lambda-dropping is partial evaluation [28]. Both program transformations are highly useful within the field of partial ev...

...s passed to the traversal function and returned in each invocation. Access to the flow of control can be implemented in a pure language by transforming the program to Continuation-Passing Style (CPS; [20] provides an introduction to CPS). After CPS transformation, the current continuation (the flow of control) can be explicitly manipulated. The result of evaluating an expression is a computation (the ...
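A small illustration of the idea (not the thesis's code): a direct-style factorial and its CPS counterpart, where the continuation `k` reifies "the rest of the computation" and can be passed around and invoked explicitly.

```python
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

def fact_cps(n, k):
    # k receives the result instead of it being returned up the stack
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))
```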

...woBit uses lambda-lifting to improve the register allocation algorithm [10]. Throughout this section we present a number of program transformations. We use a syntax similar to that of rewrite systems [39]. A square bracket after an expression is used to denote the free variables of this expression. A lambda-bound variable must appear within the brackets of an expression for it to occur in the expressi...

...d present a proof of correctness for the incremental lambda-lifting process used in the optimizing Scheme compiler TwoBit [24]. TwoBit uses lambda-lifting to improve the register allocation algorithm [10]. Throughout this section we present a number of program transformations. We use a syntax similar to that of rewrite systems [39]. A square bracket after an expression is used to denote the free varia...

... values that are computed as the result of a request and the values that are used to provide auxiliary features. The notation and some of the naming conventions used in this section are due to Wadler [43]. 5.6 Summary Lambda-dropping is achieved by let-floating of function definitions into other function definitions and eta-reduction. The lambda-dropping algorithm starts by sinking the functions of t...

...6.8 Related Work This section contains short descriptions of and references to work related to lambda-dropping. It is based on part of the corresponding outline found in Danvy and Schultz's paper [18]. 6.8.1 Continuation-based programming Shivers optimizes a tail-recursive function by "promoting" its CPS counterpart from being a function to being a continuation [41]. For example, consider the func...

... lambda-dropping is partial evaluation [28]. Both program transformations are highly useful within the field of partial evaluation. As analyzed by Malmkjær and Ørbæk [34], polyvariant specialization of higher-order, block-structured programs faces a problem similar to Lisp's "upward funarg." An upward funarg is a closure that is returned beyond the point of definition...
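An upward funarg, sketched in Python (illustrative names): the returned closure outlives the activation in which it was defined, so the binding of its free variable cannot live in a stack frame.

```python
def make_adder(n):
    def add(x):      # closure over n
        return n + x
    return add       # escapes upwards, past the scope that defined n

add5 = make_adder(5)  # make_adder's activation is gone, yet n survives
```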

... collector can compress the accessible (live) data of the program by copying the live data into a separate heap area. A traditional copying garbage collector uses a heap separated into two semispaces [9]. At any time only one of the semispaces is active. Data is always allocated in the active semispace. When the active semispace becomes full, the live data is marked and copied into the other semispa...
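A toy two-semispace collector, sketched in Python (an illustrative dict-based heap, not a real implementation): objects reachable from the roots are copied into the other semispace, a forwarding table prevents copying anything twice, and garbage is simply left behind.

```python
def copy_collect(from_space, roots):
    """from_space: dict addr -> (value, list_of_child_addrs)."""
    to_space, forward = {}, {}

    def copy(addr):
        if addr in forward:                   # already moved: use forwarding address
            return forward[addr]
        value, children = from_space[addr]
        new_addr = len(to_space)
        forward[addr] = new_addr              # record before scanning (handles cycles)
        to_space[new_addr] = (value, [])      # reserve the slot
        to_space[new_addr] = (value, [copy(c) for c in children])
        return new_addr

    new_roots = [copy(r) for r in roots]
    return to_space, new_roots

heap = {0: ("a", [1]), 1: ("b", []), 2: ("dead", [])}
```

Collecting `heap` with root 0 copies only the two live objects; the unreachable `"dead"` cell is not carried over.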

... A complete lambda-lifter for higher-order programs (i.e. a standard lambda-dropper) was implemented in Scheme as part of the work of the author's MS. The implementation handles the language of Schism [11] and has been fitted as a back-end for Schism, as described in Chapter 7. The current implementation does not generate maximally localized block structure, although it does come very close. In the sec...

... a term expressed using K and S is usually longer and thus takes more steps to fully reduce. The combinators K and S can be expressed using a single combinator only. We refer the interested reader to [22]. The IKBCS translation has been treated in many different texts using different notations. The presentation in this section is due to Goldberg [22], with some minor modifications. 3.5.3 Supercombina...
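K and S written out in curried Python, together with the classic check that S K K behaves as the identity combinator:

```python
K = lambda x: lambda y: x                 # K x y = x
S = lambda f: lambda g: lambda x: f(x)(g(x))   # S f g x = f x (g x)

I = S(K)(K)   # S K K x = K x (K x) = x
```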

...fy the storage properties of the program that is transformed. Sullivan and Wand present a proof of correctness for the incremental lambda-lifting process used in the optimizing Scheme compiler TwoBit [24]. TwoBit uses lambda-lifting to improve the register allocation algorithm [10]. Throughout this section we present a number of program transformations. We use a syntax similar to that of rewrite syste...