Invited speakers

The focus of Franz's research and development efforts over the past
few years has been storing billions of objects in AllegroCache and
billions of RDF triples in AllegroGraph. In the process we had to
invent and create a lot of new and interesting technology in Lisp. We
implemented new types of B+ trees, efficient marshalling, resourcing,
new types of hash tables, efficient memory copies, new
data-type-dependent sorts, cluster computing, and better support for
memory-mapped files. A number of Common Lisp function implementations
were improved to be almost completely cons-free. In the first part of my
presentation I will discuss the scalability issues with AllegroGraph,
and give an overview of the new technology and techniques that we had
to add to make these products successful. In part two, I will argue
that some of this technology should lead the way as part of a new
standard for Common Lisp. For those interested, Duane Rettig will be
holding a tutorial session complementary to my presentation, giving an
in-depth treatment of some of these new techniques.
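
To give a flavor of one of these techniques, here is a toy sketch of
resourcing (object pooling) in portable Common Lisp; it illustrates
the general idea only and is not Franz's implementation:

    ;; A pool of reusable fixed-size byte buffers.  Hot paths cons a
    ;; new buffer only when the pool is empty; releasing a buffer
    ;; returns it to the pool for reuse.
    (defvar *buffer-pool* '())
    (defconstant +buffer-size+ 4096)

    (defun allocate-buffer ()
      (or (pop *buffer-pool*)
          (make-array +buffer-size+ :element-type '(unsigned-byte 8))))

    (defun free-buffer (buffer)
      (push buffer *buffer-pool*)
      (values))

    ;; Bracketing macro guaranteeing the buffer returns to the pool.
    (defmacro with-buffer ((var) &body body)
      `(let ((,var (allocate-buffer)))
         (unwind-protect (progn ,@body)
           (free-buffer ,var))))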

Jans Aasman started out as an experimental and cognitive
psychologist. He earned his Ph.D. in cognitive science with a detailed
model of car-driver behavior using Lisp and Soar. He spent most of his
professional life in telecommunications research, specializing in
intelligent user interfaces and applied artificial intelligence
projects. From 1995 to 2004 he was also a part-time professor in the
Industrial Design department of the Technical University of
Delft. Jans joined Franz Inc. in 2004, and is currently its Director
of Engineering. His current interests are applications of graph
databases and social network analysis.

Garbage collection (GC) is a key component of almost all modern
programming languages. The advent of conventional object-oriented
languages supported by managed run-times (e.g. Java, C# and even
Managed C++) has brought GC into the mainstream and, as memory manager
performance is critical for many large applications, brought GC to the
attention of programmers outside its traditional functional
programming language community. In this talk, I shall start by
reviewing how GC got to where it is today, why it is desirable, and
what performance you might reasonably expect, and I shall outline the
directions in which GC research is moving. In particular, I'll look at
some of the challenges facing modern GC, in contexts ranging from GC
for high-performance, multiprocessor systems to GC for real-time
systems and limited devices, from better integrating with its
operating environment to supporting specific applications. I shall
speculate wildly on future directions for research.

Richard Jones is Deputy Director and Senior Lecturer in the
department of Computer Science at the University of Kent. He was made
a Distinguished Scientist of the ACM in 2006, and an Honorary Fellow
of the University of Glasgow in July 2005, the only computer scientist
to have received this accolade. He received IBM Faculty Awards in
2003, 2004 and 2005.

Since it was first implemented in 1994, CL-HTTP has continued to
mature as a production Web server and gain new features. This talk
will give an overview of the major components, including the server,
markup generation tools, client, caching proxy, and Web walker, as
well as the bundled tools. It will also describe new capabilities,
including SSL (server, client, proxy, Web walker) and generation of
HTML 4.01 as well as XHTML 1.0. The various Common Lisp ports of
CL-HTTP will be
reviewed with special attention to the production quality
implementations. Some history and future directions will also be
discussed. For further details, see: http://www.cl-http.org:8001/

John C. Mallery has been affiliated with the MIT Artificial
Intelligence Laboratory and its successor, the Computer Science &
Artificial Intelligence Laboratory, since 1981. With a research focus
on computational politics, John Mallery has worked at MIT in the areas
of natural language understanding and machine learning since 1980, and
more recently, biologically-grounded cognitive architectures and
computer security from the levels of hardware up through network
infrastructure. As a Research Scientist at the MIT AI Lab during the
1990s, he was the principal architect of the White House Electronic
Publications System (1992-2000), which showcased numerous Internet
firsts (inter alia, HTTP 1.1, fragment-aware URNs). Before this, he
developed some early systems for online politics in 1992, including
systems for multi-protocol hierarchical adaptive surveys (1992, 1994,
1996) and wide-area collaboration (the 1994 Vice President's Open
Meeting on the National Performance Review).

Building a Commercial OWL Reasoner with Lisp
Ralf Moeller, Hamburg University of Technology (Germany)

This paper first describes the functionality of the RacerPro OWL
reasoner. Then, in order to demonstrate the power of Lisp technology
through available software components, it describes the architecture
of RacerPro in terms of its main components, as well as its use of
specific Common Lisp concepts, such as closures for backtracking in
the core reasoner, advise for RacerPro-specific tracing facilities,
and compiler-specific tail-call optimizations for saving stack
space. Next, the core data structures used in the
implementation are investigated to suggest possibilities for
extensions to Common Lisp that could help in making Lisp the first
choice for reasoner implementation. The conclusion evaluates current
deployment possibilities for commercial Lisp-based applications.
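
To hint at what closures for backtracking look like in practice, here
is a minimal, generic sketch of the technique (our illustration, not
RacerPro's code): each choice point hands the consumer a closure that
resumes the search with the remaining alternatives.

    ;; SUCCEED receives a candidate plus a closure that backtracks
    ;; into the remaining alternatives; FAIL is called when no
    ;; alternative is left.
    (defun try-each (alternatives succeed fail)
      (if (null alternatives)
          (funcall fail)
          (funcall succeed
                   (first alternatives)
                   (lambda ()
                     (try-each (rest alternatives) succeed fail)))))

    ;; Find an x in (1 2 3 4) with (> x 2), backtracking on failure.
    (try-each '(1 2 3 4)
              (lambda (x backtrack)
                (if (> x 2) x (funcall backtrack)))
              (lambda () :no-solution))
    ;; => 3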

Ralf Moeller has been Professor of Computer Science at the Hamburg
University of Technology (TUHH) since 2003. From 2001 until 2003 he was
Professor at the University of Applied Sciences in Wedel/Hamburg. In
1996, Ralf Moeller received the degree of Dr. rer. nat. from the
University of Hamburg, and in 2001 he successfully submitted his
Habilitation thesis, also at the University of Hamburg. His research
interests include
software technology for distributed systems as well as the application
and theory of conceptual modeling and knowledge representation
languages. His research goals encompass the development of practical
inference algorithms for embedding description logic systems into
software engineering and web technology. Together with Volker Haarslev
(Concordia Univ. Montreal) he is the principal architect of the
description logic reasoner Racer, which is being used as a core engine
for building ontology development tools, and for agent systems of the
semantic web by many research groups around the world. Dr. Moeller was
the co-organizer of several international workshops on description
logics and is the author of numerous workshop and conference papers as
well as several book and journal contributions in this research area.
From 2001 to 2004 he was the co-project leader of a DFG project for
developing description logic inference systems supporting Aboxes and
spatial applications. Since 2005 he has led the TUHH group of the
EC-funded project TONES, which investigates logical formalisms for
ontology development and usage. In 2006 he became the leader of the
TUHH group of the EC-funded project BOEMIE, which investigates
ontology learning from the interpretation of multimedia documents.

This presentation is about some experiments conducted at UPMC since
2000 in the undergraduate CS department. An introductory course based
on Scheme was created, offering facilities for homework and
stand-alone offline study. I will describe the "Web continuations"
that were invented for that occasion. Many of these innovations were
later ported to other languages and environments in order to grade
students' programs en masse.

PhD in 1978. Technologist for the French Ministry of Defence through
1986, researcher at LIX (Laboratoire d'Informatique de l'École
Polytechnique), leader of the ICSLA project at INRIA-Rocquencourt, and
convener of the ISO Lisp standardization process. Full-time teaching
at UPMC from 1996, then head of the undergraduate CS department from
2000 to 2006. Now at LIP6 (Laboratoire d'Informatique de Paris 6;
"Paris 6" is the former name of UPMC). Author of several books about
languages full of parentheses.

Hop is a language dedicated to programming reactive and dynamic
applications for the web, such as web agendas, web galleries, web mail
clients, etc. In this presentation, we highlight the linguistic
novelties introduced by Hop and its execution environment by
describing Hop's user libraries, its extensions to the HTML-based
standards, and its execution platform, the Hop web broker. There will
be several live demonstrations during the presentation.

Manuel Serrano is a Senior Scientist at Inria
Sophia-Antipolis. Involved with Lisp and Scheme since the early 1990s
(first at Inria Rocquencourt), he has worked on optimizing compilers
for Scheme, and in 1994 he received his PhD. His thesis, titled
"Toward a portable and efficient compilation of functional
languages," describes a process that initially compiled Scheme to C
code (Bigloo). Maintaining and developing Bigloo has been an
important part of Dr. Serrano's research activities. At its beginning
this compiler accepted source code in Scheme R4RS extended with
modules; but new extensions quickly followed (such as a CLOS-like
object layer and multi-threading support). Recent versions provide
additional APIs demanded by many modern applications (networking,
multimedia facilities, XML support, etc.). In 2001 and 2002, two new
back-ends were added to Bigloo: one compiling to the JVM, the other to
the CLR. While a professor at the
University of Nice in southern France, he developed Bee, which
attempts to provide a richer development environment for Scheme by
taking advantage of the language's advanced features. It also provides
a symbolic debugger, a memory debugger, a performance profiler, a
memory profiler, indexing facilities, and so on, and has been
described in several research papers. For the last couple of years,
his research has focused on the development of applications for the
Web 2.0, particularly through the creation of a new programming
language, Hop, which unsurprisingly rests on top of Scheme. Hop is
meant for programming applications such as web agendas, web galleries,
web mail clients, etc. Its first version was released in June 2006.

It's All about Being Right: Lessons from the R6RS Process
Michael Sperber, University of Tuebingen (Germany)

In the Revised Reports on Scheme up to R5RS, the language could
only be changed by unanimous consent. It has been widely believed
that any language changes made in this way would clearly be the right
thing. Arguably, this process reached its limits with the
Revised5 Report on Scheme: Crucial language additions such
as modules, records and exceptions had little chance of reaching
unanimous consent, no matter what the specific design. While the
editors of the Revised6 Report no longer follow this rule,
standardization is still driven by a strong desire to do the right
thing. Continuing the tradition of Lisp culture, reaching this goal
has been difficult and elusive, as the participants hold different and
strongly opinionated ideas about what the right thing is. In the
talk, I will review the R6RS process, and attempt to show that R6RS is
indeed the right thing for Scheme.

Mike Sperber is an independent software developer who does
computer science research as a hobby. He received his master's and
doctorate degrees at the University of Tübingen. His Ph.D. thesis
described a professional lighting design and control system written in
Scheme. His first book on programming in C on the Atari ST appeared in
1987; his most recent book "Die Macht der Abstraktion"
("The Force of Abstraction"), an introduction to programming
using Scheme, appeared in December 2006. Mike has been active in the
Scheme community since his undergraduate days in the early 90s. He
co-founded the Scheme Request for Implementation (SRFI) process in
1999, and is a co-maintainer of the Scheme 48 implementation of
Scheme. He joined the R6RS process in 2002, and is the Project Editor
of the Revised6 Report on the Algorithmic Language
Scheme.

This presentation will cover several themes connected with
Lisp. There will be a part about history, a part about semantic
equivalences between pieces of Lisp code, and so on.

Herbert Stoyan studied Mathematics at the Technical
University Dresden, receiving his PhD in 1970. He joined the AI group
of Egbert Lehmann at Robotron and learned Lisp. When a computer was
available - but no Lisp 1.5 manual - he used the 1964 book by Berkeley
and Bobrow to implement Lisp. This system, with a compiler added in
1972, was the basis for all AI work in East Germany. In 1977 he
began his studies in the history of Lisp, and published a book about
the concepts and history of Lisp in 1979. In 1981 he moved to Western
Germany and started a career as a university teacher. By 1986 he had
become Professor of Information Sciences at the University of Konstanz
(Constance), in 1989 Professor of Interdisciplinary Studies in
Darmstadt, and in 1990 Professor of Artificial Intelligence at the
University of Erlangen. Beginning in 1992 he focused on knowledge
acquisition. His group in Erlangen developed several assistant systems
for knowledge acquisition and used these in projects for knowledge
based systems and knowledge management systems in industrial
applications. With the development of the WWW he created a historical
site for people of nobility, and wrote, in Scheme, a translator from
a heraldic language into PostScript. He wrote a two-volume book about
Programming in Artificial Intelligence which contains a substantial
amount of Lisp code. The book describes several programming languages
with diverse execution models (problem solvers, provers, pattern
matchers, relational algebra handlers, object-based systems, etc.).

Our problem is to build a maximally efficient Bayesian classifier
when each parameter has a different cost and provides a different
amount of information toward the solution. This is an extremely
computationally expensive problem. We accept a sub-optimal, although
demonstrably good, solution based on Shannon's definition of
Information and Uncertainty. Our solution scales up well and provides
powerful diagnostics with no extra work. Our program has been used in
Military Command and Control and in developing maintenance diagnostics
for a very complex military electronic system. The elements of the
solution to the problem are naturally computed recursively, so a Lisp
implementation is an effective approach.
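
A standard way to make this trade-off precise (our reading of a
Shannon-based criterion, not necessarily the authors' exact
formulation) is to select, at each step, the parameter offering the
greatest expected reduction in uncertainty per unit cost:

    H(C) = -\sum_j p(c_j) \log_2 p(c_j)

    X^{*} = \arg\max_i \frac{H(C) - H(C \mid X_i)}{\mathrm{cost}(X_i)}

Here H(C) is the entropy of the class variable and H(C | X_i) the
conditional entropy after observing parameter X_i; these conditional
expectations unfold recursively, which is what makes a Lisp
implementation natural.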

A Metaobject Protocol for CLforJava
Jay Cotton, College of Charleston (USA)
Jerry Boetje, College of Charleston (USA)

CLforJava is a new implementation of Common Lisp that intertwines
its architecture and operation with Java. The authors describe a new
architecture for a CLOS MOP that supports transparent, bi-directional
access between Lisp and Java. This access requires no special
techniques or syntactic mechanisms on the part of the programmer,
whether working in Java or in Lisp. The core of the new MOP is a data
structure
that melds the fundamental structures of Java instances (N-tuples) and
CLOS instances (2-tuples) in such a way that the respective object
systems can interact without cumbersome translations. Methods from
their respective object systems can interact freely. We discuss
certain aspects of the respective MOPs that prevent a complete
integration and replacement of one system by the other.

SRI's FREEDIUS (Free Image Understanding System) provides a
foundation for image and video understanding in a geospatial
framework. FREEDIUS is written primarily in Common Lisp, using C and
C++ for specialized operations and for incorporating third-party
libraries.
FREEDIUS supports general image processing operations, sensor
modeling, and hierarchies of coordinate transformations and
projections. GUI support is provided by foreign function interfaces
to Tcl/Tk and OpenGL, allowing fast rendering of images, geometry, and
video.

FREEDIUS has been applied to image-based site modeling and
visualization, as well as video event analysis. The expressiveness of
CLOS permits FREEDIUS to render and inspect objects conditionally
depending on data source and visualization mode. In particular, video
event analysis is aided by both spatial and temporal views of objects
such as tracks derived from monocular or stereo sensors, on both fixed
and mobile platforms. Sensors are rigorously modeled so that tracks
are precisely registered in space and time. This allows tracks from
multiple sources to be visualized in a common reference frame. Site
models provide additional geometric context for understanding track
events. Spatial and timeline views allow tracks to be browsed and
manipulated by location and time.

This demonstration will illustrate transformation hierarchies, site
model representations, track representations and visualizations, and
video event extraction. The use of CLOS for management and
visualization of different track classes, visualization frameworks,
and data sources will also be illustrated.

Constraint solving has become an established approach for the
handling of complex combinatorial and scheduling problems. We present
a constraint solver framework that enables the interchange of most
solver aspects through its extensive modular design. Here, we
especially focus on the search protocol design.

This paper introduces Liskell, a new syntax for Haskell. Liskell
belongs to the Lisp family of programming languages when
judged by its syntax, but is mostly Haskell when it comes to language
semantics.

We argue that meta-programming in Haskell has not found widespread
adoption because of the disparity between the abstract syntax tree and
its visual appearance in source code form. Liskell uses an extremely
minimalistic parse tree and shifts syntactic classification of parse
tree parts to a later compiler stage to give parse tree transformers
the opportunity to rewrite the parse trees being compiled. These
transformers can be user supplied and loaded dynamically into the
compiler to extend the language.

This paper introduces the Liskell syntax and serves as a first draft
of a language definition. We conclude the paper with a demonstration
of meta-programming capabilities ranging from quasiquotation to an
embedded version of Prolog. We implement Liskell as a syntax frontend
for the Glasgow Haskell Compiler. The implementation is publicly
available from http://clemens.endorphin.org/liskell

Knowledge Base for Elementary Geometry On Ontology
Hongguang Fu, University of Electronic Science and Technology of China (China)
Xiuqin Zhong, Chinese Academy of Sciences (China)
Wenyuan Wu, University of Western Ontario (Canada)

This paper illustrates how to use an ontology to design and implement
a knowledge base for elementary geometry. First, we introduce the
idea, applications, and prospects of ontologies, and then show the
process of building the geometry ontology. Finally, we describe the
architecture of our knowledge base, which is based on the geometric
ontology. In addition, we present several techniques, such as RDF
triples, heuristic forward and backward reasoning, combination rules,
numerical testing, and reasoning strategies, by which the system can
generate relatively optimal, traditional, and readable proofs of
geometry theorems and can realize the reuse and sharing of domain
knowledge.

Computational Tools for the Analysis of Spatial Patterns of Gene Expression in Common Lisp
Cyrus Harmon, Department of Molecular and Cell Biology, University of California, Berkeley (USA)

Scientific and technological advances in biotechnology have led to
the determination of the sequence of the genomes of a growing number
of organisms. Tools such as DNA microarrays offer the ability to
determine the levels of expression of substantially all of the genes
in a given genome in a single experiment, yet are limited to providing
the levels of expression across an ensemble of cells, such as a
particular organ or the entire organism. In order to understand the
developmental roles of these genes, it is highly desirable to know
not just whether a given gene is expressed, but in which tissues and
at which times it is expressed. Using high-throughput laboratory
techniques we have constructed an atlas of spatial patterns of gene
expression in Drosophila melanogaster imaginal discs. Building such an
atlas requires a diverse array of computational techniques and,
ideally, the atlas would be presented to the users not just in a
static, browsable form, but in a computable form, along with tools to
analyze the data contained in the atlas. To this end, we have
developed a number of algorithms, tools and libraries, implemented in
Common Lisp, to address problems such as matrix and image data
representation, computer vision tasks, knowledge representation of the
genes in the genome and maps of their global and spatial patterns of
gene expression, and to query and analyze these data sets.

While many implementations of Common Lisp provide graphical
toolkits, Common Lisp as a whole lacks a standard way of creating
graphical user interfaces. This is related to the fact that there is
also no portable Common Lisp foreign function interface. To work
around this limitation, the Lisp bindings to the Tk graphics toolkit
use Lisp streams to talk to a Tcl/Tk subprocess. As a consequence, LTk
is a highly portable solution to the GUI problem, allowing LTk based
programs to run under nine different Lisp implementations and the
multitude of operating systems supported by Tk. LTk wraps all Tk
widgets in CLOS objects, so creating and interacting with widgets is
fully integrated into Lisp. Even creating new widgets by inheriting
from an existing widget is supported.
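
To give a flavor of the interface, here is the canonical LTk "hello
world" (a sketch assuming the LTk system has been loaded; symbol
names as exported by the LTK package):

    ;; WITH-LTK starts the Tcl/Tk subprocess and runs the event loop;
    ;; a widget is an ordinary CLOS object made with MAKE-INSTANCE.
    (defun hello ()
      (ltk:with-ltk ()
        (let ((b (make-instance 'ltk:button
                                :master nil
                                :text "Press Me"
                                :command (lambda ()
                                           (format t "Hello World!~%")))))
          (ltk:pack b))))   ; Tk geometry management, wrapped in Lisp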

The LTk-remote package extends LTk with the ability to use sockets
for communication with the Tcl/Tk process, and is thus network
transparent. To run an LTk-remote program, only a very small Tcl
script is needed on the client machine. As the widgets are fully rendered by
the Tcl clients, this mechanism is also highly efficient with respect
to the necessary network bandwidth.

This paper describes the implementation of LTk and shows the CLOS
interface to the Tk widgets.

Some program concerns cannot be cleanly modularized, and their
implementation leads to code that is both hard to understand and
maintain. In this paper we consider extending an e-commerce
application, written in CLOS, with two such crosscutting
concerns. Though most of the time Common Lisp's macro facilities and
CLOS method combinations can be used to modularize crosscuts, we
discuss the use of a more declarative solution when crosscuts depend
on the execution history. For this purpose we give an overview of
HALO, a novel pointcut language based on logic meta programming and
temporal logic, which allows one to reason about program execution and
(past) program state.

Experience with SC: Transformation-based Implementation of Various Extensions to C
Tasuku Hiraishi, Graduate School of Informatics, Kyoto University (Japan)
Masahiro Yasugi, Graduate School of Informatics, Kyoto University (Japan)
Taiichi Yuasa, Graduate School of Informatics, Kyoto University (Japan)

We have proposed the SC language system, which facilitates language
extensions by translation into C. In this paper, we present our
experience with the SC language system and discuss its design,
implementation, applications and improvements. In particular, we
present the improvement to the design of transformation rules for
implementing translations, which includes the feature to extend an
existing transformation phase (rule-set). This enables us to implement
many transformation rule-sets only by describing the difference, and
helps us to use commonly-used rule-sets as part of the entire
transformation. We also show several actual examples of extensions to
C: garbage collection, multithreading and load balancing.

List comprehension is a succinct syntactic form for describing lists
in functional languages. It uses nested generators (i.e., iterators)
and filters (i.e., Boolean expressions). The former generate lists,
whereas the latter restrict the contents of these lists. List
comprehension translation is commonly based on Wadler's rules, which
emit code based on recursive functions. For many Common Lisp
implementations, this code often results either in stack overflow or
in inefficient execution.

We present a very simple technique for compiling list comprehensions
into the Loop Facility of Common Lisp, resulting in efficient code
that avoids stack overflow. Our translation code is also very short,
and the emitted code stays very close to the user-specified list
comprehension.
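
As an illustration (our example, not one from the paper), a
comprehension with two generators and one filter, such as the
Haskell-style [(x,y) | x <- xs, y <- ys, x < y], maps naturally onto
nested LOOP forms that iterate rather than recurse:

    ;; Each generator becomes a LOOP; the filter becomes a WHEN
    ;; clause.  NCONC splices the inner collections into the result.
    (defun pairs-below (xs ys)
      (loop for x in xs
            nconc (loop for y in ys
                        when (< x y)
                          collect (cons x y))))

    (pairs-below '(1 2 3) '(2 3))
    ;; => ((1 . 2) (1 . 3) (2 . 3))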

We also present a translation into more fundamental constructs of
Common Lisp that often results in even more efficient code, although
this translation is more complex than using the Loop Facility.

The author has used the first translation technique for compiling a
new language called BioVelo, which is part of bioinformatics software
called Pathway Tools.

Modern software requirements are more diverse than before and can
only be fulfilled in a timely fashion through the extensive use of
libraries. As a result, modern programmers spend a significant
fraction of their time
just mixing and matching these libraries. Programming language success
becomes, then, more dependent on the quality and broadness of the
accompanying libraries than on the language intrinsic
characteristics.

In spite of its recognized qualities, Common Lisp lags behind other
languages with regard to the quality and availability of its
libraries. We argue that the best solution to overcome this problem is
to automatically translate to Common Lisp the best libraries that are
available in other languages. In this paper, we specifically address
the translation of Java libraries using the Jnil translator tool and
we provide a detailed explanation of the problems found and the
lessons learned during the translation of a large Java library.

Although many problems remain to be solved, the experiment proved
the feasibility of the translation process and significantly increased
our confidence in the future of Common Lisp.

We present a domain specific language for manipulation of binary
data, or structured byte sequences, as they appear in everyday
applications such as networking or graphics file manipulation. Our DSL
is implemented as an extension of the Dylan language, making use of
the macro facility. Dylan macros, unlike Common Lisp macros, are
implemented as pattern matches and substitutions on the parse tree,
and we will show the strengths and limits of this approach for the
given problem. http://www.networknightvision.com/

XMLisp unites S-expressions with XML into X-expressions that unify
the notions of data sharing with computation. Using a combination of
the Meta Object Protocol (MOP), readers and printers, X-expressions
uniquely integrate XML at a language, not API level, into Lisp in a
way that could not be done with other programming languages. CLOS
objects can be directly serialized into XML. XML expressions can be
evaluated in listeners, complete or sub-elements of XML can be
evaluated in regular Lisp editors such as EMACS, and XML can even be
compiled using the Common Lisp compiler. Existing tools such as
inspectors will print XML expressions and allow users to interactively
explore complex XML structures. Because of this type of integration,
XML becomes much more tangible to developers enabling the incremental
development style Lisp programmers have become accustomed to. This
article describes XMLisp in the context of the AgentCubes simulation
and game-authoring tool. AgentCubes is the 3D version of the
AgentSheets system, which is the world's most widely distributed
Lisp-based educational simulation and game-authoring tool.
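
To suggest the flavor of this integration, here is a sketch in the
style of the XMLisp distribution (the XML-SERIALIZER base class and
the attribute-to-slot mapping shown are assumptions on our part, not
quotations from the paper):

    ;; Assuming XMLisp is loaded: subclasses of XML-SERIALIZER map
    ;; XML attributes to CLOS slots via the MOP, in both directions.
    (defclass sky (xml-serializer)
      ((color :accessor color :initform "blue" :initarg :color)))

    ;; Typed at a listener, the X-expression
    ;;   <sky color="red"/>
    ;; reads as a SKY instance, and printing that instance writes the
    ;; same XML element back out.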

Extensible Sequences in Common Lisp
Christophe Rhodes, Goldsmiths, University of London (England)

Common Lisp is often touted as the programmable programming
language, yet it sometimes places large barriers in the way, with the
best of intentions. One of those barriers is a limit to the user's
ability to extend certain core language constructs, such as defining
subclasses of built-in classes that remain usable with standard
functions, even where this would be achievable with minimal
penalties. We introduce the notion of user-extensible sequences,
describing a protocol which implementations of such classes should
follow. We show examples of their use, and discuss the issues observed
in providing support for this protocol in a Common Lisp, including
ensuring that there is no performance impact from its inclusion.
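
To sketch what such a protocol looks like in use (the symbol names
below follow SBCL's support for the protocol and are assumptions, not
quotations from the paper):

    ;; A user-defined class becomes a sequence by subclassing
    ;; SEQUENCE and implementing a handful of generic functions.
    (defclass ring (standard-object sequence)
      ((items :initarg :items :accessor ring-items)))  ; holds a vector

    (defmethod sb-sequence:length ((s ring))
      (length (ring-items s)))

    (defmethod sb-sequence:elt ((s ring) index)
      (aref (ring-items s) index))

    (defmethod (setf sb-sequence:elt) (new-value (s ring) index)
      (setf (aref (ring-items s) index) new-value))

    ;; Standard functions such as FIND and POSITION now work on RING
    ;; instances through the protocol's default methods.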

LispWorks is an integrated development environment for ANSI Common
Lisp. The feature-rich environment and its unique cross-platform
graphical toolkit make LispWorks the ideal tool for Common Lisp
application development and deployment.

This demonstration will show development of an application for
managing tenanted pieces of land. The application uses the CLOS-based
CAPI graphical toolkit and connects to a database using Common
SQL.

The demonstration will show how LispWorks makes the Lisp
development process easier with its intuitive, integrated
environment. The tools shown will include:

  - Editor
  - Source-level Stepper
  - Debugger
  - Object Inspector
  - Class Browser

and much more.

By the end we will have a multi-platform graphical application with
a common code base.

We describe ESA (for Emacs-Style Application), a library for
writing applications with an Emacs look-and-feel within the Common
Lisp Interface Manager. The ESA library takes advantage of the layered
design of CLIM to provide a command loop that uses Emacs-style
multi-keystroke command invocation. ESA supplies other functionality
for writing such applications, such as a minibuffer for invoking
extended commands and supplying command arguments, Emacs-style
keyboard macros and numeric arguments, file and buffer management, and
more. ESA is currently used in two major CLIM applications: the
Climacs text editor (and the Drei text gadget integrated with the
McCLIM implementation), and the Gsharp score editor. This paper
describes the features provided by ESA, gives some detail about their
implementation, and suggests avenues for further work.

Research on social networks in various contexts requires processing
of graphs representing social relations in specialized ways. This
paper presents an overview of CL-SNA, a Lisp application being
developed for social network analysis. CL-SNA aims to provide a
convenient and flexible interface for social network researchers and
to be easy to extend. Despite its infancy in terms of the coverage of
social network metrics implemented, it does offer integration with a
common visualization framework and utilities to import or export data
between common social network representation formats used in the
research. As such, it provides a framework which is extensible with
new analysis procedures.

This paper introduces DAUTI, a network traffic simulator,
applies it to a simulation research problem, and demonstrates
the ease-of-use and applicability of functional languages
to network simulation. DAUTI is an open-source, fully configurable,
rapidly evolving, and efficient Scheme program that can be used to
simulate cluster load and job delay on a network of any size, and to
provide basic statistics and analysis. In this paper, we use DAUTI to
compute the
relative performance of load distribution algorithms on a
two-stage pipeline with heterogeneous cluster sizes. Specific
applications include web servers, and industrial, research
and financial batch processing. The choice of the Scheme language (a
relative of Lisp) presented numerous opportunities to produce compact
and efficient code quickly. This choice of language introduced no
obstacles.

Dynamic ADTs: a "don't ask, don't tell" policy for data abstraction
Geoff Wozniak, University of Western Ontario (Canada)
Mark Daley, University of Western Ontario (Canada)
Stephen Watt, University of Western Ontario (Canada)

We outline an approach to abstract data types (ADTs) that
allows an object of the type specified by the ADT to take
on one of many possible representations. A dynamic abstract
data type (DADT) is dual to dynamic algorithm selection
and facilitates profiling of data in conjunction with
the profiling of code. It also permits a programmer to delay
or ignore details pertaining to data representation and
enhance the efficiency of some algorithms by changing representations
at run time without writing code extraneous to
the algorithm itself. Additionally, we demonstrate that run-time
optimization of data objects is possible and allows for acceptable
performance compared to traditional ADTs. An
implementation is presented in Common Lisp.
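
As a toy illustration of the idea (ours, not the authors'
implementation): a set whose representation is hidden from clients
and silently switches from a list to a hash table as it grows.

    (defclass dadt-set ()
      ((rep  :initform '()   :accessor rep)    ; current representation
       (kind :initform :list :accessor kind))) ; :list or :hash

    (defun dadt-add (set item)
      (ecase (kind set)
        (:list (push item (rep set))
               ;; switch representations once membership tests get costly
               (when (> (length (rep set)) 64)
                 (let ((h (make-hash-table :test #'equal)))
                   (dolist (x (rep set)) (setf (gethash x h) t))
                   (setf (rep set) h
                         (kind set) :hash))))
        (:hash (setf (gethash item (rep set)) t)))
      set)

    (defun dadt-member-p (set item)
      (ecase (kind set)
        (:list (and (member item (rep set) :test #'equal) t))
        (:hash (gethash item (rep set)))))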