Selected Publications

Note:
The documents distributed by this server have been provided by the
contributing authors as a means to ensure timely dissemination of
scholarly and technical work on a noncommercial basis. Copyright and
all rights therein are maintained by the authors or by other copyright
holders, notwithstanding that they have offered their works here
electronically. It is understood that all persons copying this
information will adhere to the terms and constraints invoked by each
author's copyright. These works may not be reposted without the
explicit permission of the copyright holder.

For a fuller list of publications, including those in Japanese (we write
quite a few of them, mostly with my students and colleagues), please visit
here.

Alimujiang Yasen and Kazunori Ueda,
Name Binding is Easy with Hypergraphs.
To appear in IEICE Transactions on Information and Systems,
Vol.E101-D, No.4, April 2018.

We develop a technique for representing variable names and name
binding which is a mechanism of associating a name with an entity in
many formal systems including logic, programming languages and
mathematics. The idea is to use a general form of graph links (or
edges) called hyperlinks to represent variables, graph nodes
as constructors of the formal systems, and a graph type called
hlground to define substitutions.
Our technique is based on simple notions of graph theory in which
graph types ensure correct substitutions and keep bound variables
distinct. We encode strong reduction of the untyped
lambda-calculus to introduce our technique. Then we encode a more
complex formal system called System F<:, a
polymorphic lambda-calculus with subtyping that has been one of the
important theoretical foundations of functional programming languages.
The advantage of our technique is that the representation of terms,
definition of substitutions, and implementation of formal systems are
all straightforward. We formalized the graph type hlground,
proved that it ensures correct substitutions in the
lambda-calculus, and implemented hlground in HyperLMNtal,
a modeling language based on hypergraph rewriting. We conducted
experiments to test this technique. With it, one can implement formal
systems simply by following the steps of their definitions as
described in papers.
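
As a rough illustration of the representation (a sketch in Python
rather than the paper's HyperLMNtal code, with all names invented
here): a variable is one hyperlink object shared by its binder and
every occurrence, so alpha-renaming and substitution never have to
compare variable names textually.

```python
import itertools

_fresh = itertools.count()

class HLink:
    """A hyperlink: one shared endpoint set standing for a variable."""
    def __init__(self):
        self.id = next(_fresh)

class Node:
    """A constructor node of the encoded formal system (lam, app, ...)."""
    def __init__(self, tag, *args):
        self.tag, self.args = tag, args

def lam(body_fn):
    x = HLink()            # binder and all occurrences share this hyperlink
    return Node('lam', x, body_fn(x))

def app(f, a):
    return Node('app', f, a)

# \x. x x : both occurrences of x are the *same* hyperlink object, so the
# term carries its binding structure without any textual variable names.
term = lam(lambda x: app(x, x))
```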

Yutaro Tsunekawa, Taichi Tomioka and Kazunori Ueda:
Implementation of LMNtal Model Checkers: a Metaprogramming Approach.
To appear in the Journal of Object Technology, 2017.

LMNtal is a modeling language based on hierarchical graph rewriting,
and its implementation SLIM features state space search and an LTL
model checker. Several variations and extensions of SLIM have
been developed, and all of them achieve their functionalities by
modifying SLIM, which is written in C. If a model checker is
implemented in the modeling language itself, it should be easy to
develop prototypes of various model checkers without changing the base
implementation of the model checker. This approach is called
metaprogramming, and it has been practiced extensively in the Lisp and
Prolog communities.
In this paper, we design a framework for implementing extensible model
checkers. First, we define first-class rewrite rules to extend a
modeling language. Second, we design an API to operate on the states
of programs. These features enable programmers to handle state
transition graphs as first-class objects and implement diverse
variants of a model checker without changing SLIM. We demonstrate it
by implementing an LTL model checker and its variant and a CTL model
checker. Furthermore, we show how easy it is to extend these model
checkers in our framework by extending the CTL model checker to handle
fairness constraints. The overhead of metainterpretation is around an
order of magnitude or less. All these results demonstrate the
viability of the resulting framework based on metainterpreters that
handle explicit state space in a flexible manner.
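
To convey the flavor of the approach, here is a minimal Python sketch
(not SLIM's actual API; state_space, succs, and check_EF are names made
up here): once the transition relation and the state graph are
first-class values, a new checker is just another function over them.

```python
from collections import deque

def state_space(initial, succs):
    """Explicit-state search: map each reachable state to its successors."""
    graph, todo = {}, deque([initial])
    while todo:
        s = todo.popleft()
        if s in graph:
            continue
        graph[s] = list(succs(s))   # the rule set is ordinary data: call it
        todo.extend(graph[s])
    return graph

def check_EF(graph, prop):
    """CTL 'EF prop': some reachable state satisfies prop."""
    return any(prop(s) for s in graph)

g = state_space(0, lambda n: [(n + 1) % 8, (n * 3) % 8])
print(check_EF(g, lambda n: n == 5))   # True: state 5 is reachable
```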

We developed a technique for modeling formal systems involving name
binding in a modeling language based on hypergraph rewriting. A
hypergraph consists of graph nodes, edges with two endpoints and edges
with multiple endpoints. The idea is that hypergraphs allow us to
represent terms containing bindings and that our notion of a graph
type keeps bound variables distinct throughout rewriting steps.
We previously encoded the untyped lambda-calculus and the evaluation
and type checking of System F_sub, but the encoding of System F_sub
type inference requires a unification algorithm. We studied and
successfully implemented a unification algorithm modulo
alpha-equivalence for hypergraphs representing untyped lambda-terms.
The unification algorithm turned out to be similar to nominal
unification despite the fact that our approach and the nominal approach
to name binding are very different. However, some basic properties of
our framework are easier to establish than their counterparts in nominal
unification. We believe this indicates that hypergraphs provide a
nice framework for encoding formal systems involving binders and
unification modulo alpha-equivalence.

The technical goal of the Fifth Generation Computer Systems (FGCS)
project (1982-1993) was to develop Parallel Inference technologies,
namely systematized technologies for realizing knowledge information
processing on top of parallel computer architecture. The Logic
Programming paradigm was adopted as the central working hypothesis of
the project. At the same time, building a large-scale Parallel
Inference Machine (PIM) meant developing a novel form of
general-purpose computing technologies that are powerful enough to
express various parallel algorithms and to describe a full operating
system of PIM. Accordingly, the research goal of the Kernel Language
was set to designing a concurrent and parallel programming language
under the working hypothesis of Logic Programming. The aim of this
article is to describe the design process of the Kernel Language (KL1)
in the context of related programming models in the 1980s, the
essence of Concurrent Logic Programming and Constraint-Based
Concurrency, and how the technologies we developed in those days
evolved after their conception.

In scalable storage systems, there are two kinds of
methods for data redundancy: mirroring and parity. Each has its
pros and cons. Mirroring creates a large amount of redundant
data, resulting in less usable space. Write performance degrades
proportionally to the redundancy level due to an increase in
communication. Parity-based methods partition data into
multiple pieces, add parity information, and distribute the pieces
of data and parity information. Parity-based methods are not
often used with memory class media that are faster than the
network, because distributing the data across servers results in
low read performance. This research aims to establish an
efficient data protection method that can be applied to fast,
memory class media. We propose a new parity-based method
called Multi-stage Erasure Coding (MEC), which creates two
different erasure codes: one at the data transmission source
server, and the other at the destination server. We show that our
method reduces the space required to achieve redundancy while
achieving high performance by making the amount of write
communication independent of the redundancy level. We built a
prototype program using MEC on a commodity cluster server.
We show that compared with conventional parity-based methods
with redundancy level 2, read I/O throughput is over one order of
magnitude higher thanks to local reads and that write I/O
throughput is almost the same due to network bottleneck.
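
For readers unfamiliar with parity-based redundancy, here is the
single-stage baseline that MEC refines into two stages (a toy XOR
sketch of my own; production systems use Reed-Solomon codes rather
than plain XOR):

```python
def xor_parity(chunks: list[bytes]) -> bytes:
    """Parity of equal-length data chunks: any one lost chunk can be
    rebuilt as the XOR of the parity with the surviving chunks."""
    parity = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            parity[i] ^= b
    return bytes(parity)

data = [b'abcd', b'efgh', b'ijkl']
p = xor_parity(data)
# recover chunk 1 from the other chunks plus the parity:
assert xor_parity([data[0], data[2], p]) == data[1]
```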

LMNtal is a modeling language based on hierarchical graph
rewriting, and its implementation SLIM features an LTL
model checker. Several variations and extensions of the
SLIM model checker have been developed, and all of them
achieve their functionalities by modifying SLIM. If a model
checker is implemented in the modeling language itself,
it should be easy to develop prototypes of various model
checkers without changing the implementation of the model
checker. This approach is called metaprogramming, and it has
long been practiced in Lisp and Prolog.
In this paper, we define first-class rewrite rules to enable
metaprogramming in LMNtal. In addition, we design APIs
to operate states of LMNtal programs. These features allow
us to implement diverse variants of a model checker without
changing SLIM. We demonstrate it by implementing an LTL
model checker and a CTL model checker in LMNtal. The
resulting framework should be sufficient to develop
metacircular interpreters that treat program states flexibly.

Hybrid systems are dynamical systems that include both continuous and
discrete changes. Some hybrid systems involve a large or infinite
number of discrete changes within an infinitesimal-width region of
phase space. Systems with sliding mode are typical examples of such
hybrid systems. It is difficult to analyze such hybrid systems
through ordinary numerical simulation, since the time required for
simulation increases in proportion to the number of discrete changes.
In this paper, we propose a method to symbolically analyze such models
involving numerous discrete changes by detecting loops and checking
loop invariants of the model's behavior. The method handles
parameterized hybrid systems and checks inclusion of parameterized
states, focusing on the values of a switching function that dominates
the dynamics of sliding mode. We implemented the main part of the
method in our symbolic hybrid system simulator HyLaGI, and conducted
analysis of example models.

The purpose of this research is to develop a highly
reliable simulator of hybrid systems, i.e., systems involving both
discrete change and continuous evolution. In particular, we aim
at rigorous simulation of parametrized hybrid systems, which
enables not only the analysis of a model's possible behavior but also
the design of parameters that realize desired properties. Simulators
with interval arithmetic can reliably compute a reachable set
of states, but preserving the dependency of uncertain quantities
in models is still challenging.
In this paper, we discuss a simulation method that is based on
symbolic computation and cooperates with the interval Newton
method and affine arithmetic, which is able to preserve first-order
dependency of uncertain quantities. We implemented the
algorithm on the symbolic simulator we have been developing and
evaluated the performance of the method with example models.

In recent data centers, large-scale storage systems storing big data
comprise thousands of large-capacity drives. Our goal is to establish
a method for building highly reliable storage systems using more than
a thousand low-cost large-capacity drives. Some large-scale storage
systems protect data by erasure coding to prevent data loss. As the
redundancy level of erasure coding is increased, the probability of
data loss will decrease, but increased write traffic during normal
operation and additional storage for coding will be incurred. We
therefore need to achieve high reliability at the lowest possible
redundancy level. There are two concerns regarding reliability in
large-scale storage systems: (i) as the number of drives increases,
systems are more subject to multiple drive failures and (ii)
distributing stripes among many drives can speed up the rebuild time
but increase the risk of data loss due to multiple drive failures. If
data loss occurs due to multiple drive failures, it affects many users
of the storage system. These concerns were not addressed in prior
quantitative reliability studies based on realistic settings. In this
work, we analyze the reliability of large-scale storage systems with
distributed stripes, focusing on an effective rebuild method which we
call Dynamic Refuging. Dynamic Refuging rebuilds failed blocks from
those with the lowest redundancy and strategically selects blocks to
read for repairing lost data. We modeled the dynamic change in the
amount of storage at each redundancy level caused by multiple drive
failures, and performed reliability analysis with Monte Carlo
simulation using realistic drive failure characteristics. We presented
a failure impact model and a method for localizing failures. When
stripes with redundancy level 3 were sufficiently distributed and
rebuilt by Dynamic Refuging, the proposed technique turned out to
scale well, and the probability of data loss decreased by two orders
of magnitude for systems with a thousand drives compared to normal
RAID. The appropriate distribution level could localize the failure.
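
The scheduling core of the idea can be pictured as a priority queue
over stripes keyed by remaining redundancy (an illustrative Python
sketch of my own; the Stripe fields are invented, not the paper's data
model):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Stripe:
    remaining_redundancy: int              # 0 = next failure loses data
    stripe_id: int = field(compare=False)

def rebuild_order(stripes):
    """Yield stripes most endangered first, as Dynamic Refuging does."""
    heap = list(stripes)
    heapq.heapify(heap)                    # min-heap on remaining redundancy
    while heap:
        yield heapq.heappop(heap)

for s in rebuild_order([Stripe(2, 0), Stripe(0, 1), Stripe(1, 2)]):
    print(s.stripe_id)                     # prints 1, 2, 0
```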

Substitution in the lambda-calculus is a subtle operation. In a formal
description, Barendregt's variable convention is assumed to avoid
variable capture, but such an assumption is not well suited for
implementation on computers. We introduce graph representation and
manipulation of lambda-terms, in which bound and free variables are
encoded by using hyperlinks with different attributes. A graph type
called hlground is generalized to identify the scope of a term n in
the substitution m[x := n], which enables bound variables and free
variables to behave suitably during the substitution. Our
representation of lambda-terms is readable, and the definition of
substitution in this technique is free from any side conditions on the
freeness and freshness of variables.
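
For contrast, here is what a textual capture-avoiding substitution
m[x := n] looks like (a standard Python sketch, not taken from the
paper), including exactly the freshness side condition that the
hyperlink-based formulation renders unnecessary:

```python
import itertools

_fresh = (f'v{i}' for i in itertools.count())

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'app':
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}            # t == ('lam', x, body)

def subst(m, x, n):
    """m[x := n] with on-the-fly renaming to avoid variable capture."""
    tag = m[0]
    if tag == 'var':
        return n if m[1] == x else m
    if tag == 'app':
        return ('app', subst(m[1], x, n), subst(m[2], x, n))
    y, body = m[1], m[2]
    if y == x:
        return m                               # x is shadowed: stop
    if y in free_vars(n):                      # would capture: rename binder
        z = next(_fresh)
        body, y = subst(body, y, ('var', z)), z
    return ('lam', y, subst(body, x, n))
```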

In graph-based model checking, systems are modeled with graph
structures which are highly expressive and feature a symmetry
reduction mechanism. However, it involves frequent isomorphism
checking of graphs generated in the course of model checking.
Canonical labeling of graphs, which gives a unique representation to
isomorphic graphs, is expected to be an effective method to check
isomorphism among many graphs efficiently. It is therefore important
to efficiently compute canonical forms of graphs in graph rewriting
systems. For this purpose, we propose an optimization technique for
McKay's canonical labeling algorithm that reuses information of graph
structures that is not changed by rewriting. To enable reuse, we
reformulated McKay's algorithm to clarify what substructures of graphs
are utilized in its execution, and designed an algorithm for
successive graph canonicalization that organizes graph information in
such a way that recomputation may be minimized. We evaluated the
performance of the proposed algorithm, and found that it achieved
sublinear time complexity with respect to the number of vertices for
many cases of re-canonicalization.
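
As a baseline for what "canonical form" means here (not McKay's
algorithm itself): brute-force canonicalization takes the minimum
encoding over all vertex relabelings, which McKay's refinement
procedure computes without enumerating all n! permutations. A toy
Python sketch, usable only on tiny graphs:

```python
from itertools import permutations

def canonical_form(n, edges):
    """Minimum adjacency encoding over all relabelings of vertices 0..n-1.
    edges: a set of (u, v) pairs read as undirected edges."""
    best = None
    for p in permutations(range(n)):
        relabeled = {frozenset((p[u], p[v])) for u, v in edges}
        key = tuple(sorted(tuple(sorted(e)) for e in relabeled))
        if best is None or key < best:
            best = key
    return best

# two labelings of the 3-vertex path graph get the same canonical form:
assert canonical_form(3, {(0, 1), (1, 2)}) == \
       canonical_form(3, {(1, 0), (0, 2)})
```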

A modeling language for hybrid systems HydLa and its implementation
HyLaGI are described. HydLa is a constraint-based language that can
handle uncertainties of models smoothly. HyLaGI calculates
trajectories by symbolic formula manipulation to exclude errors
resulting from floating-point arithmetic. HyLaGI features a
nondeterministic simulation algorithm, so it can calculate all possible
qualitatively different trajectories of models with uncertainties.

In recent data centers, large-scale storage systems storing big data
comprise thousands of large-capacity drives. Our goal is to establish a
method for building highly reliable storage systems using more than a
thousand low-cost large-capacity drives. Some large-scale storage
systems protect data by erasure coding to prevent data loss. As the
redundancy level of erasure coding is increased, the probability of
data loss will decrease, but increased write traffic during normal
operation and additional storage for coding will be incurred. We
therefore need to achieve high reliability at the lowest possible
redundancy level. There are two concerns regarding reliability in
large-scale storage systems: (i) as the number of drives increases,
systems are more subject to multiple drive failures and (ii)
distributing stripes among many drives can speed up the rebuild time
but increase the risk of data loss due to multiple drive failures.
These concerns were not addressed in prior quantitative reliability
studies based on realistic settings. In this work, we analyze the
reliability of large-scale storage systems with distributed stripes,
focusing on an effective rebuild method which we call Dynamic Refuging.
Dynamic Refuging rebuilds failed storage areas from those with the
lowest redundancy and strategically selects blocks to read for
repairing lost data. We modeled the dynamically changing amount of
storage at each redundancy level due to multiple drive failures, and
performed reliability analysis with Monte Carlo simulation using
realistic drive failure characteristics. When stripes with redundancy
level 3 were sufficiently distributed and rebuilt by Dynamic Refuging,
we found that the probability of data loss decreased by two orders of
magnitude for systems with 384 or more drives compared to normal RAID.
This technique turned out to scale well, and a system with 1536
inexpensive drives attained lower data loss probability than RAID 6
with 16 enterprise-class drives.

Ryo Yasuda, Taketo Yoshida, and Kazunori Ueda:
Reduction of the Number of States and the Acceleration of LMNtal
Parallel Model Checking.
Trans. Japanese Society for Artificial Intelligence,
Vol.29, No.1 (2014), pp.182-187.

SLIM is an LMNtal runtime. LMNtal is a programming and modeling
language based on hierarchical graph rewriting. SLIM features
automata-based LTL model checking, which is one of the methods to solve
accepting cycle search problems. The parallel search algorithms OWCTY
and MAP used by SLIM generate a large number of states for problems
having accepting cycles. Moreover, their performance seriously
degrades for particular problems. We propose a new algorithm
that combines MAP and Nested DFS to remove states for problems
including accepting cycles. We experimented with the algorithm and
confirmed improvements both in performance and scalability.
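
The Nested DFS component can be sketched compactly (a textbook
version in Python, independent of SLIM; graph maps each state to its
successor list):

```python
def has_accepting_cycle(graph, init, accepting):
    """Detect a reachable cycle through an accepting state."""
    blue, red = set(), set()

    def dfs_blue(s):
        blue.add(s)
        for t in graph[s]:
            if t not in blue and dfs_blue(t):
                return True
        # in post-order: if s is accepting, search for a cycle through it
        return s in accepting and dfs_red(s, s)

    def dfs_red(s, seed):
        for t in graph[s]:
            if t == seed:
                return True          # closed an accepting cycle
            if t not in red:
                red.add(t)
                if dfs_red(t, seed):
                    return True
        return False

    return dfs_blue(init)

g = {0: [1], 1: [2], 2: [1]}
print(has_accepting_cycle(g, 0, accepting={1}))   # True: 1 -> 2 -> 1
```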

Hybrid systems are dynamical systems with continuous changes of states
and discrete changes of states and governing equations. Hybrid systems
are applicable to diverse fields including physics.
HydLa is a declarative language for describing hybrid systems using
constraints, and we have developed a symbolic simulator of HydLa named
Hyrose.
Hyrose is our first publicly available implementation of HydLa.
Features of Hyrose include the simulation of systems with parameters,
search based on automatic case analysis and bounded model checking.
In this paper, we first describe the architecture and features of Hyrose,
as well as semantical issues that had to be clarified to implement Hyrose.
Then, in order to demonstrate the power of symbolic execution, we show
several examples of the analysis of systems whose behavior
exhibits quantitative and qualitative changes depending on parameters.

A grand challenge in computing is to establish a substrate
computational model that encompasses diverse forms of non-sequential
computation. This paper demonstrates how a hypergraph rewriting framework
nicely integrates various forms and ingredients of concurrent computation
and how simple static analyses help the understanding and
optimization of programs. Hypergraph rewriting treats processes and
messages in a unified manner, and treats message sending and parameter
passing as symmetric reaction between two entities. Specifically, we
show how fine-grained strong reduction of the lambda-calculus can be concisely
encoded into hypergraph rewriting with a small set of primitive
operations.

We have been working on the design and implementation of HydLa, a
high-level modeling language for hybrid systems. Its objectives and
features include a constraint-based (as opposed to automata-based)
formalism, proper handling of uncertainties, and nondeterministic
execution. The talk will describe an overview of the language with
live demonstration of our prototype implementation.

LMNtal is a language model based
on hierarchical graph rewriting that uses point-to-point links to
represent connectivity and membranes to represent hierarchy. LMNtal
was designed to be a substrate language of various computational
models, especially those addressing concurrency, mobility and multiset
rewriting.
Although point-to-point links and membranes could be used together to
represent multipoint connectivity, our experiences with LMNtal told us
that hyperlinks would be an important and useful extension to
the language.
We have accordingly expanded LMNtal into a hierarchical hypergraph
rewriting language model, HyperLMNtal. HyperLMNtal enabled
concise description of computational models involving flexible and
diverse forms of references between data; in particular, it enabled
efficient encoding of a constraint processing language CHR in terms of
both performance and computational complexity.
This paper describes the design and implementation of HyperLMNtal as a
case study of language evolution.

Model checking is a verification technique based on the exhaustive
search of erroneous behaviors of state transition systems, which is
attracting growing interest. The LMNtal language allows us to
represent state transition systems using hierarchical graph rewriting,
and its runtime system, SLIM, has recently evolved into an
explicit-state model checker. Hierarchical graphs are a powerful data
structure that features a built-in symmetry reduction mechanism that
plays a key role in state space search. However, model checking is
still prone to state space explosion and has called for time- and
space-efficient methods for state management. To address this problem
and make our model checker more useful, we have extended the SLIM
runtime further to a parallel model checker to run on shared-memory
multiprocessors and developed various optimization techniques of state
management. These two enhancements drastically improved the scale of
problems the LMNtal model checker can handle as well as its
performance.

This paper presents a bounded model checking (BMC) tool called
hydlogic for hybrid systems. It translates a reachability problem of a
nonlinear hybrid system into a predicate logic formula involving
arithmetic constraints, and checks the satisfiability of the formula
based on a satisfiability modulo theories (SMT) method. We tightly
integrate (i) an incremental SAT solver to enumerate the possible sets
of constraints and (ii) an interval-based solver for hybrid constraint
systems (HCSs) to solve the constraints described in the formulas. The
HCS solver verifies the occurrence of a discrete change by enclosing
continuous states that may cause the discrete change by a set of
boxes. We adopt the existence of a unique solution in the
boxes computed by the HCS solver as (i) a proof of the reachability of
a model, and (ii) a guide in the over-approximation refinement
procedure. Our hydlogic implementation successfully handled several
examples including those with nonlinear constraints.
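
The integration can be sketched as a lazy SMT-style loop (a schematic
Python sketch under assumed interfaces: sat_next_assignment and
hcs_consistent stand in for the incremental SAT solver and the
interval-based HCS solver, and are not hydlogic's real API):

```python
def bmc_loop(formula, sat_next_assignment, hcs_consistent):
    """Enumerate candidate discrete paths; accept the first one whose
    arithmetic constraints have a certified (interval) solution."""
    blocked = []                    # learned clauses blocking failed paths
    while True:
        path = sat_next_assignment(formula, blocked)
        if path is None:
            return None             # unreachable within the given bound
        ok, reason = hcs_consistent(path)
        if ok:
            return path             # reachability witness found
        blocked.append(reason)      # refine: block this constraint set
```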

Hybrid systems are dynamical systems with continuous changes of states
and discrete changes of states and governing equations. HydLa is a
constraint-based modeling language for hybrid systems, and its
implementation aims for system verification by means of simulation
with guaranteed accuracy. However, how to simulate HydLa programs with
initial values given as ranges or with symbolic parameters has not
been clarified. This paper gives a nondeterministic execution
algorithm that correctly handles discrete and continuous changes in
HydLa programs.

Automata-based LTL model checking is one of the methods
to solve accepting cycle search problems. Model checking is prone to
state-space explosion, and we may expect that parallel processing
would be a promising approach. However, the optimal sequential
algorithm is P-complete and is difficult to parallelize. Alternative
parallel algorithms have been proposed, and OWCTY reversed is one of
them. OWCTY reversed is known to be a stable and fast algorithm for
models without bugs, but it does not use the characteristics of the
automata used in LTL model checking. We propose a new algorithm named
SCCOWCTY that exploits the SCCs (strongly connected components) of
property automata. The algorithm removes states that are judged not to
form accepting cycles faster than OWCTY reversed does. We experimented
with and compared the two algorithms using DiVinE, and confirmed
improvements both in performance and scalability.

Hybrid systems are dynamical systems with continuous evolution of
states and discrete evolution of states and governing equations. We
have worked on the design and implementation of HydLa, a
constraint-based modeling language for hybrid systems, with a view to
the proper handling of uncertainties and the integration of simulation
and verification. HydLa's constraint hierarchies facilitate the
description of constraints with adequate strength, but its semantical
foundations are not obvious due to the interaction of various language
constructs. This paper gives the declarative semantics of HydLa and
discusses its properties and consequences by means of examples.

LMNtal was designed and implemented as a unifying computational model
based on hierarchical graph rewriting. In pursuit of its application
to the fields of verification and search, the system has recently
evolved into a model checker that employs LMNtal as a modeling
language, and we have been accumulating experiences with modeling and
verification. This paper describes LMNtalEditor, an integrated
development environment (IDE) featuring both state space search and
visualization of verification, and gives various examples of model
description, verification and visualization. LMNtalEditor features
the visualization and browsing of state space and counterexamples, the
searching of states of interest, and so on. We have successfully run
diverse examples taken from the fields including concurrency and AI
search, and found that the IDE plays an essential role in
understanding the models and counterexamples and thus greatly eases
the task of verification. Furthermore, through the encoding of models
written in Promela, MSR and Coloured Petri Nets into LMNtal, we have
extended the applicability of LMNtal to the field of verification.

An approach for reliable modeling, simulation and verification of
hybrid systems is interval arithmetic, which guarantees that the
resulting intervals do not exceed a specified size. Interval-based
computation on hybrid systems is often difficult, especially when the
systems are described by nonlinear ordinary differential equations
(ODEs) and nonlinear algebraic equations. We formulate the problem of
detecting a discrete change in hybrid systems as a hybrid constraint
system (HCS), consisting of a flow constraint on trajectories
(i.e. continuous functions over time) and a guard constraint on states
causing discrete changes. We also implemented a technique for solving
HCSs by coordinating (i) interval-based solving of nonlinear ODEs, and
(ii) a constraint programming technique for reducing interval
enclosures of solutions. The proposed technique reliably solves HCSs
with nonlinear constraints. Our technique employs the interval Newton
method to accelerate the reduction of interval enclosures, while
guaranteeing that the enclosure contains a solution.
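
The interval Newton contraction at the core of such techniques can be
sketched in one dimension (a Python sketch with plain float endpoints;
a real implementation must round outwards and handle derivative
enclosures containing zero by splitting):

```python
def interval_newton_step(f, df_interval, lo, hi):
    """One step of N([x]) = m - f(m)/f'([x]), intersected with [lo, hi].
    df_interval(lo, hi) returns an enclosure (dlo, dhi) of f' on [lo, hi]."""
    m = (lo + hi) / 2.0
    dlo, dhi = df_interval(lo, hi)
    if dlo <= 0.0 <= dhi:
        raise ValueError('derivative enclosure contains 0: split instead')
    q = [f(m) / dlo, f(m) / dhi]
    nlo, nhi = m - max(q), m - min(q)
    return max(lo, nlo), min(hi, nhi)      # intersect with the old interval

# tighten an enclosure of sqrt(2), the root of x^2 - 2 on [1, 2]:
lo, hi = 1.0, 2.0
for _ in range(5):
    lo, hi = interval_newton_step(lambda x: x * x - 2.0,
                                  lambda a, b: (2 * a, 2 * b), lo, hi)
print(lo, hi)                              # a narrow interval around 1.41421...
```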

We have designed and implemented LMNtal (pronounced "elemental"), a
language based on hierarchical graph rewriting that allows us to
encode diverse
computational models involving concurrency, mobility and multiset rewriting.
Towards its novel applications, the system has recently
evolved into a model checker that employs LMNtal as the
modeling language and PLTL as the specification language. The
strengths of our LMNtal model checker are its powerful data
structure, highly nondeterministic computation it can express, and
virtually no discrepancy between programming and modeling languages.
Models expressed in Promela, MSR, and Coloured Petri Nets can be
easily encoded into LMNtal.
The visualizer of the LMNtal IDE turned out to be extremely useful in
understanding models by state space browsing.
The LMNtal IDE has been used to run and visualize
diverse examples taken from the fields of model checking, concurrency
and AI search.

Parallelizing modern SAT solvers for clusters such as Beowulf is an
important challenge both in terms of performance scalability and
stability. This paper describes a SAT Solver c-sat, a
parallelization of MiniSat using MPI. It employs a layered
master-worker architecture, where the masters handle lemma exchange,
deletion of redundant lemmas and the dynamic partitioning of search
trees, while the workers do search using different decision heuristics
and random number seeds. With careful tuning, c-sat showed
good speedup over MiniSat with reasonably small communication overhead
on various clusters. On an eight-node cluster with two Dual-Core
Opterons on each node (32 PEs),
c-sat ran at least 23 times faster than MiniSat using 31 PEs
(geometric mean; at least 31 times for satisfiable problems) for 189
large-scale problems from SAT Competition and two SAT-Races.

LMNtal (pronounced "elemental")
is a simple language model based on hierarchical graph rewriting that
uses logical variables to represent connectivity and membranes to
represent hierarchy. LMNtal is an outcome of the attempt to unify
constraint-based concurrency and Constraint Handling Rules (CHR), the
two notable extensions to concurrent logic programming. LMNtal is
intended to be a substrate language of various computational models,
especially those addressing concurrency, mobility and multiset
rewriting.
Although the principal objective of LMNtal was to provide a
unifying computational model, it is of interest to equip
the formalism with a precise logical interpretation.
In this paper, we show that it is possible
to give LMNtal a simple logical interpretation based on
intuitionistic linear logic and a flattening technique.
This enables us to call LMNtal a hierarchical,
concurrent linear logic language.

Hybrid systems are systems consisting of discrete changes and
continuous changes over time. Problems in various fields such as
physics can be modeled as hybrid systems. In a simulation of a hybrid
system, it is difficult to obtain a rigorous solution of the model
because of computation errors in the handling of continuous
states. Our proposed method integrates interval-based techniques for
initial value problems for ordinary differential equations and
consistency techniques for nonlinear problems. The method obtains
complete interval enclosures of hybrid trajectories efficiently with a
given accuracy. We implemented the method and evaluated its
effectiveness. Using the method, we can simulate hybrid systems
reliably while guaranteeing accuracy.

Fine-grained reformulation of the lambda calculus is expected to solve
several difficulties with the notion of substitution: its definition,
implementation and cost properties. However, previous attempts
including those using explicit substitutions and those using
Interaction Nets were not ideally simple when it came to the encoding
of the pure (as opposed to weak) lambda calculus. This paper presents
a novel, fine-grained, and highly asynchronous encoding of the pure
lambda calculus using LMNtal, a hierarchical graph rewriting language,
and discusses its properties. The major strength of the
encoding is that it is significantly simpler than previous encodings,
making it promising as an alternative formulation, rather than just
the encoding, of the pure lambda calculus. The membrane construct of
LMNtal plays an essential role in encoding colored tokens
and operations on them. The encoding has been
tested using the publicly available LMNtal implementation.

LMNtal is a language model based on hierarchical graph rewriting that
allows the representation and the manipulation of connectivity
expressed using links and hierarchy expressed using membranes. The
model allows concise description of programs involving dynamic data
structures and multiset processing. Since LMNtal features
fine-grained concurrency whose atomic operation is the application of
a rewrite rule, how to implement it correctly and efficiently is far
from obvious. This motivated us to implement LMNtal using Java and
establish implementation techniques that are both correct and
reasonably efficient. Our implementation consists of a compiler into
intermediate code, an interpreter of the intermediate code, and a
translator from the intermediate code to Java source code. It also
provides a variety of useful features including foreign-language
interface. The cruxes of the implementation techniques include the
handling of links that cross membranes and the asynchronous execution
of rewrite rules belonging to different membranes. This paper
describes the design and implementation of our publicly available
LMNtal system, focusing on technical issues raised in the course of
the development. Those issues include the representation of
hierarchical graph structures, intermediate code, and safe
asynchronous execution of multiple tasks.

LMNtal is a simple language model based on hierarchical graph
rewriting that uses logical variables to represent connectivity and
membranes to represent hierarchy. The major goals of LMNtal have been
to unify various computational models addressing multiset rewriting,
concurrency and mobility, and at the same time to provide a practical
programming language based on hierarchical graph rewriting and
demonstrate its versatility. The purpose of this paper is to present
the aspects of LMNtal as a full-fledged programming language and
demonstrate its expressive power using a number of examples. We have
designed important language features such as arithmetics, rule
application control, modularity and foreign-language interface and
integrated them into the hierarchical graph rewriting model. To
demonstrate the expressive power of the language, we have successfully
encoded diverse related computational models including the
lambda-calculus, the pi-calculus, the ambient calculus and CHR,
and run them on our LMNtal system.

Towards a unifying model of concurrency, we have designed and
implemented LMNtal (pronounced "elemental"),
a model and language based on hierarchical graph
rewriting that uses logical variables to represent
connectivity and membranes to represent hierarchy.
Diverse computational models
including the pi-calculus and the lambda-calculus
have been encoded into LMNtal
and tested on our LMNtal system. This paper describes the
encoding of the ambient calculus with running examples.
The technical crux is the distributed management of names
in the presence of locality and mobility.
We have designed and implemented a
self-adjusting management technique of names in which the name
management and mobility operations are performed concurrently.

Hybrid cc is a constraint-based programming language aimed at simple
modeling and rigorous simulation of hybrid systems. The existing
implementation of Hybrid cc uses intervals to handle partially
specified models. Although the implementation solves nonlinear
equations and ordinary differential equations (ODEs) based on interval
arithmetic, discrete changes are not properly computed with
intervals. We propose a method for enclosing every solution of a
hybrid system with a set of boxes. Our method computes continuous
state changes by solving ODEs with initial time and values expressed
by intervals. Then the method detects and encloses every state that
will cause a discrete change. We devise algorithms for pruning and
splitting intervals to obtain tight enclosures. The experimental
results show the efficiency of the algorithms.

LMNtal (pronounced "elemental") is a simple language model
based on hierarchical graph rewriting that uses logical variables to represent
connectivity and membranes to represent hierarchy. LMNtal is an
outcome of the attempt to unify constraint-based concurrency and Constraint
Handling Rules (CHR), the two notable extensions to concurrent
logic programming. LMNtal is intended to be a substrate language of
various computational models, especially those addressing concurrency,
mobility and multiset rewriting. Another important goal of LMNtal has
been to put hierarchical graph rewriting into practice and demonstrate
its versatility by designing and implementing a full-fledged, monolithic
programming language. In this paper, we demonstrate the practical aspects
of LMNtal using a number of examples taken from diverse areas
of computer science. Also, we discuss the relationship between LMNtal
and CHR, which exhibit both commonalities and differences in various
respects.

LMNtal (pronounced "elemental") is a simple language model based on
graph rewriting that uses logical variables to represent links and
membranes to represent hierarchies. The two major goals of LMNtal are
(i) to unify various computational models based on multiset rewriting
and (ii) to serve as the basis of a truly general-purpose language
covering various platforms ranging from wide-area to embedded
computation. Another important contribution of the model is that it
greatly facilitates programming with dynamic data structures.

This paper discusses the construction of a
meta-interpreter of Flat GHC, one of the simplest and
earliest concurrent constraint languages.
Meta-interpretation has a long history in logic programming, and has
been applied extensively to building programming systems, adding
functionalities, modifying operational semantics and evaluation
strategies, and so on.
Our objective, in contrast, is to design the pair of
(i) a representation of programs suitable for code mobility and
(ii) a pure interpreter (or virtual machine)
of the represented code,
bearing networked applications of concurrent constraint programming
in mind. This is more challenging
than it might seem; indeed,
meta-interpreters of many programming languages
achieved their objectives by adding small
primitives into the languages and exploiting their functionalities.
A meta-interpreter in a pure, simple concurrent language is
useful because it is fully amenable to theoretical support including
partial evaluation.
After a number of trials and errors, we have arrived at treecode,
a ground-term representation of Flat GHC programs that can be easily
interpreted, transmitted over the network, and converted back to the
original syntax. The paper describes how the interpreter
works, where the subtleties lie, and what its design implies. It also
describes how the interpreter, given the treecode of a program, is
partially evaluated to the original program by the unfold/fold
transformation system for Flat GHC.

We have implemented Kima, an automated error correction system for
concurrent logic programs. Kima corrects near-misses such as wrong
variable occurrences in the absence of explicit declarations of program
properties.
Strong moding/typing and constraint-based analysis are turning out to
play fundamental roles in debugging concurrent logic programs as well as
in establishing the consistency of communication protocols and data
types. Mode/type analysis of Moded Flat GHC is a constraint
satisfaction problem with many simple mode/type constraints, and can be
solved efficiently. We proposed a simple and efficient technique which,
given a non-well-moded/typed program, diagnoses the "reasons" of
inconsistency by finding minimal inconsistent subsets of mode/type
constraints. Since each constraint keeps track of the symbol occurrence
in the program, a minimal subset also tells possible sources of program
errors.
Kima realizes automated correction by replacing symbol occurrences
around the possible sources and recalculating modes and types of
the rewritten programs systematically. As long as bugs are near-misses,
Kima proposes a rather small number of alternatives that include an
intended program. Search space is kept small because the minimal subset
confines possible sources of errors in advance. This paper presents the
basic algorithm and various optimization techniques implemented in Kima,
and then discusses its effectiveness based on quantitative experiments.
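
The diagnosis core, finding one minimal inconsistent subset of
constraints, can be sketched with a standard deletion-based algorithm
(a Python sketch; "consistent" stands in for the mode/type constraint
solver and is a hypothetical parameter):

```python
def minimal_inconsistent_subset(constraints, consistent):
    """Shrink an inconsistent set until removing any element fixes it."""
    assert not consistent(constraints)
    core = list(constraints)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not consistent(trial):
            core = trial        # core[i] was not needed for inconsistency
        else:
            i += 1              # core[i] is essential: keep it, move on
    return core
```

Each constraint that survives points at a symbol occurrence that is a
candidate error site, which is what keeps the correction search space
small.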

Constraint-based concurrency, also known as the cc
(concurrent
constraint) formalism, is a simple framework of concurrency that
features (i) asynchronous message passing, (ii) polyadicity and data
structuring mechanisms, (iii) channel mobility, and (iv) nonstrictness
(or computing with partial information). Needless to say, all these
features originate from the use of constraints and logical variables for
interprocess communication and data representation.
Another feature of constraint-based concurrency is its remarkable
stability; all the above features were available essentially in its
present form by the mid-1980s in concurrent logic programming languages.
Another well-studied framework of concurrency is name-based
concurrency represented by the family of pi-calculi, in which
names represent both channels and tokens conveyed by channels.
Some variants of the original pi-calculus featured asynchronous
communication, and some limited the use of names in pursuit of nicer
semantical properties. These modifications are more or less related
to the constructs of constraint-based concurrency.
Integration of constraint-based and name-based concurrency can be
found in proposals of calculi such as the gamma-calculus, the
rho-calculus and the Fusion calculus, all of which incorporate
constraints or name equations into name-based concurrency.
This tutorial takes a different, analytical approach in
relating the two formalisms; we compare the roles of logical
variables and names rather than trying to integrate one into the
other. Although the comparison under their original, untyped
setting is
not easy, once appropriate type systems are incorporated into both
camps, name-based communication and constraint-based communication
exhibit more affinities.
The examples of such type systems are linear types for the
pi-calculus and the mode/linearity/capability systems for Guarded
Horn Clauses (GHC). Both are concerned with the polarity and
multiplicity of communication, and prescribe the ways in which
communication channels can be used. They help in-depth understanding
of the phenomena occurring in the course of computation.
The view of constraint-based concurrency that the tutorial intends to
provide will complement the usual, abstract view based on ask
and tell operations on a constraint store. For instance, it
reveals the highly local nature of a constraint store (both at
linguistic and implementation levels) which is often understood to be
a global, shared entity. It also brings resource-consciousness into
the logical setting.
This is expected to be a step towards the deployment of
cc languages as a common platform of
non-sequential computation including parallel, distributed, and
embedded computing.

The use of types to deal with access capabilities of program
entities is becoming increasingly popular.
In concurrent logic programming, the first attempt was made in
Moded Flat GHC in 1990, which gave polarity structures (modes) to
every variable occurrence and every predicate argument. Strong
moding turned out to play fundamental roles in programming,
implementation and the in-depth understanding of constraint-based
concurrent computation.
The moding principle guarantees that each variable is written
only once and encourages capability-conscious programming.
Furthermore, it gives less generic modes to programs that discard
or duplicate data, thus providing the view of "data as
resources." A simple linearity system built upon the mode system
distinguishes variables read only once from those read possibly
many times, enabling compile-time garbage collection. Compared to
linear types studied in other programming paradigms, the primary
issue in constraint-based concurrency has been to deal with
logical variables and highly non-strict data structures they
induce.
In this paper, we put our resource-consciousness one step
forward and consider a class of `ecological' programs which
recycle or return all the resources given to them while allowing
concurrent reading of data structures via controlled aliasing.
This completely recyclic subset forces us to think more about
resources, but the resulting programs enjoy high symmetry which we
believe has more than aesthetic implications to our programming
practice in general.
The type system supporting recyclic concurrent programming
gives a [-1,+1] capability to each occurrence of variable and
function symbols (constructors), where positive/negative values
mean read/write capabilities, respectively, and fractions mean
non-exclusive read/write paths. The capabilities are intended to
be statically checked or reconstructed so that one can tell the
polarity and exclusiveness of each piece of information handled by
concurrent processes. The capability type system refines and
integrates the mode system and the linearity system for Moded Flat
GHC. Its arithmetic formulation contributes to the simplicity.
The execution of a recyclic program proceeds so that every
variable has zero-sum capability and the resources (i.e.,
constructors weighted by their capabilities) a process absorbs
match the resources it emits. Constructors accessed by a process
with an exclusive read capability can be reused for other
purposes.
The first half of this paper is devoted to a tutorial
introduction to constraint-based concurrency in the hope that it
will encourage cross-fertilization of different concurrency
formalisms.
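
The arithmetic flavor of the capability system can be illustrated
with a toy zero-sum check over one variable's occurrences (a Python
sketch; the occurrence lists are invented examples, and the real
system assigns and reconstructs capabilities statically):

```python
from fractions import Fraction as F

def well_capabilitied(occurrences):
    """occurrences: capability of each textual occurrence of one variable;
    positive = read, negative = write, fractions = shared read paths."""
    return (all(-1 <= c <= 1 for c in occurrences)
            and sum(occurrences) == 0)

# one writer and two non-exclusive readers sharing the read capability:
assert well_capabilitied([F(-1), F(1, 2), F(1, 2)])
# a second writer would break the zero-sum balance:
assert not well_capabilitied([F(-1), F(-1), F(1)])
```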

Automatic memory management and the hiding of the notion of pointers
are the prominent features of symbolic processing languages.
They make programming easy and guarantee the safety of memory references.
For the memory management of linked data structures, copying garbage collection
is most widely used because of its simplicity and desirable properties.
However, if certain properties about runtime storage allocation and the
behavior of pointers can be obtained
by static analysis, a compiler may be able to generate
object code closer to that of procedural programs. In the fields of
parallel, distributed and real-time computation, it is highly
desirable to be able to identify data structures in a program
that can be managed without using garbage collection.
To this end, this paper proposes a framework of linearity analysis for
a concurrent logic language Moded Flat GHC, and proves its basic
property. The purpose of linearity analysis is to distinguish
between fragments of data structures that may be referenced by two or
more pointers and those that cannot be referenced by two or more
pointers. Data structures with only one reader are amenable to
compile-time garbage collection or local reuse. The proposed
framework of linearity analysis is constraint-based and involves both
equality and implicational constraints. It has
been implemented as part of klint v2, a static analyzer for KL1
programs.

Concurrent logic/constraint programming is a simple and elegant
formalism of concurrency that can potentially address a lot of important
future applications including parallel, distributed, and intelligent
systems. Its basic concept has been extremely stable and has allowed
efficient implementations. However, its uniqueness makes this paradigm
rather difficult to appreciate.
Many people consider concurrent logic/constraint programming to have
rather little to do with the rest of logic programming. There is
certainly a fundamental difference in the view of computation, but
careful study of the differences will lead to the understanding and
the enhancing of the whole logic programming paradigm by an analytic
approach.
As a model of concurrency, concurrent logic/constraint programming has
its own challenges to share with other formalisms of concurrency
as well. They are: (1) a counterpart of lambda-calculus in the
field of concurrency, (2) a common platform for various
non-sequential forms of computing, and (3) type systems that cover
both logical and physical aspects of computation.

We study how constraint-based static analysis can be applied to the
automated and systematic debugging of program errors.
Strong moding and constraint-based mode analysis are turning out to play
fundamental roles in debugging concurrent logic/constraint programs as
well as in establishing the consistency of communication protocols and
in optimization. Mode analysis of Moded Flat GHC is a constraint
satisfaction problem with many simple mode constraints, and can be
solved efficiently by unification over feature graphs. We have proposed
a simple and efficient technique which, given a non-well-moded program,
diagnoses the "reasons" of inconsistency by finding minimal
inconsistent subsets of mode constraints. Since each constraint keeps
track of the symbol occurrence in the program that imposed the
constraint, a minimal subset also tells possible sources of program
errors. The technique is quite general and can be used with other
constraint-based frameworks such as strong typing.
Based on the above idea, we study the possibility of
automated debugging in the absence of mode/type declarations. The
mode constraints are usually imposed redundantly, and the
constraints that are considered correct can be used for correcting
wrong symbol occurrences found by the diagnosis. As long as bugs
are near-misses, the automated debugger can propose a rather small
number of alternatives that include the intended program. Search space
is kept small because constraints effectively prune many irrelevant
alternatives. The paper demonstrates the technique by way of
examples.

The KLIC system has achieved both high portability and extensibility by
employing C as an intermediate language and featuring generic objects
that allow users to define new classes of data. It is also efficient
for an untyped and unmoded language with fine-grained concurrency, but its
flexibility incurs runtime overhead that could be reduced by static
analysis. This paper studies how constraint-based static analysis and
abstract interpretation can be used to reduce dynamic data checking and
to optimize loops.
We applied the proposed technique to the optimization of floating-point
numbers and their arrays. The optimized KL1 programs turned out to be only
34%-70% slower than the comparable C programs.

Strong moding and constraint-based mode analysis are expected to play
fundamental roles in debugging concurrent logic/constraint programs
as well as in establishing the consistency of
communication protocols and in optimization.
Mode analysis of Moded Flat
GHC is a constraint satisfaction problem with many simple mode
constraints, and can be solved efficiently by unification over feature
graphs. In practice, however, it is important to be able to analyze
non-well-moded programs (programs whose mode constraints are
inconsistent) and present plausible "reasons" of inconsistency
to the programmers in the absence of mode declarations.
This paper discusses the application of strong moding to systematic
and efficient static program debugging. The basic idea, which turned
out to work well at least for small programs, is to find a minimal
inconsistent subset from an inconsistent set of mode constraints and
indicate the symbol occurrences in the program text that imposed
those constraints. A bug can be pinpointed better by finding more
than one overlapping minimal subset. These ideas can be readily extended
to finding multiple bugs at once. For large programs,
stratification of predicates narrows search space and produces more
intuitive explanations. Stratification plays a fundamental role in
introducing mode polymorphism as well.
Our experiments show that the
sizes of minimal subsets are small enough for bug location and do not
depend on the program size, which means that diagnosis can be done in
almost linear time.

Strong moding is turning out to play fundamental roles in concurrent
logic programming (or in general, concurrent constraint programming) as
strong typing does but in different respects. "Principal modes" can
most naturally be
represented as feature graphs and can be formed by unification. We
built a mode analyzer, implementing mode graphs and their operations
by means of concurrent processes and streams (rather than records
and pointers).
This is a non-trivial
programming experience with complicated process structures and has
provided us with several insights into the relationship between
programming with dynamic data structures and programming with dynamic
process structures.
The mode analyzer was then applied to the analyzer itself to study the
characteristics of the mode constraints it imposed and of the form of
large mode graphs.
Finally, we show how our framework based on principal
moding can be extended to deal with (1) random-access data structures,
(2) mode polymorphism, (3) higher-order constructs, and (4) various
non-Herbrand constraint systems.

This paper briefly reviews concurrent logic programming
and the I/O mode system designed for the concurrent logic language Flat
GHC. The mode system plays fundamental roles both in programming
and implementation in almost the same way as type systems do but in
different respects. It provides us with the information on how data are
generated and consumed and thus the view of "data as resources". It
statically detects bugs resulting from ill-formed dataflow and advocates
the "programming as wiring" paradigm. Well-modedness guarantees the
safety of unification, the basic operation in concurrent logic
programming. Information on the numbers of access paths to data can be
obtained by slightly extending the framework, which can be used for
compile-time garbage collection and the destructive update of structures.

Concurrent logic languages have been used mainly for the (parallel)
processing of rather irregular symbolic applications.
However, since concurrent logic languages are essentially
general-purpose, they should be applicable to problems with
regular structures and their data-parallel processing as well. This
paper studies
the possibility of massively parallel processing in concurrent logic
programming, focusing on arrays and their data-parallel processing.

Concurrent processes can be used both for programming
computation and for programming storage. Previous implementations of
Flat GHC, however, have been tuned for computation-intensive programs,
and perform poorly for storage-intensive programs (such as programs
implementing reconfigurable data structures using processes and streams)
and demand-driven programs.
This paper proposes an optimization technique for programs in
which processes are almost always suspended. The technique compiles
unification for data transfer into message passing. Instead of
reducing the number of process switching operations, the technique
optimizes the cost of each process switching operation and reduces the
number of cons operations for data buffering.

The technique is based on a mode system which is powerful enough to analyze
bidirectional communication and streams of streams. The mode system is
based on mode constraints imposed by individual clauses
rather than on global dataflow analysis, enabling separate analysis of
program modules.
The introduction of a mode system into Flat GHC effectively subsets Flat
GHC; the resulting language is called Moded Flat GHC.
Moded Flat GHC programs enjoy two important properties under certain
conditions:
(1) reduction of a goal clause retains the well-modedness of the clause,
and (2) when execution terminates, all the variables in an initial goal
clause will be bound to ground terms.
Practically,
the computational complexity of all-at-once mode analysis can be made
almost linear with respect to the size n of the program and the complexity
of the data structures used, and the complexity of separate
analysis is higher only by O(log n) time. Mode analysis provides
useful information for debugging as well.

Benchmark results show that the proposed technique substantially improves the
performance of storage-intensive programs and demand-driven programs
compared with a conventional native-code implementation. It also improves
the performance of some computation-intensive programs.
We expect that the proposed technique will expand the
application areas of concurrent logic languages.
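
A minimal example of the demand-driven style the technique targets
(hypothetical; the benchmark programs are larger):

    % A demand-driven producer: the goal stays suspended until a next/1
    % request arrives, so each activation is one short message-handling
    % step rather than a long computation.
    counter([next(X)|S], N) :- true | X = N, N1 := N + 1, counter(S, N1).
    counter([],          _) :- true | true.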

We proposed in [Ueda and Morita 1990] a new, message-oriented
implementation technique for Moded Flat GHC that compiled unification
for data transfer into message passing. The technique was based on
constraint-based program analysis, and
significantly improved the performance of programs that used goals
and streams to implement reconfigurable data structures. In this paper
we discuss how the technique can be parallelized. We focus on
a method for shared-memory multiprocessors, called the
shared-goal method, though a
different method could be used for distributed-memory multiprocessors.
Unlike in other parallel implementations of concurrent logic languages,
which we call process-oriented, the unit of parallel execution is
not an individual goal but a chain of message sends caused successively
by an initial message send. Parallelism comes from the existence of
different chains of message sends that can be executed independently or
in a pipelined manner. Mutual exclusion based on busy waiting and on
message buffering controls access to individual, shared goals. Typical
goals allow last-send optimization, the message-oriented
counterpart of last-call optimization. We are building an experimental
implementation on Sequent Symmetry. In spite of the simple scheduling
currently adopted, preliminary evaluation shows good parallel speedup
and good absolute performance for concurrent operations on binary
process trees.
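
To see what a chain of message sends looks like at the source level,
consider a binary tree built from processes (an illustrative Flat GHC
sketch; keys are assumed distinct):

    % A search message travels down the tree as a chain of sends from
    % node to subtree; independent searches can proceed in parallel or
    % in a pipelined manner.
    node([search(K,V)|S], K0, V0, L, R) :- K =:= K0 |
        V = V0, node(S, K0, V0, L, R).
    node([search(K,V)|S], K0, V0, L, R) :- K < K0 |
        L = [search(K,V)|L1], node(S, K0, V0, L1, R).
    node([search(K,V)|S], K0, V0, L, R) :- K > K0 |
        R = [search(K,V)|R1], node(S, K0, V0, L, R1).
    node([],              _,  _, L, R) :- true | L = [], R = [].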

[pdf]
Kazunori Ueda, The Fifth Generation Project: Personal Perspectives.
Commun. ACM, Vol.36, No.3, pp.65-76, March 1993.
(The paper is available from the ACM Digital Library as a joint article with
Kazuhiro Fuchi, Robert Kowalski, Koichi Furukawa, Ken Kahn, Takashi Chikayama and Evan Tick under the title "Launching the New Era".)

This article reviews the design process of KL1, the kernel language for
the Parallel Inference Machine in the Fifth Generation Computer Systems
(FGCS) project. The FGCS project chose logic programming as its central
principle, but the shift to concurrent logic programming started very
early in the R&D of the kernel language, which provoked many
discussions and many criticisms. The author proposed a concurrent logic
language GHC in December 1984 as a basis of KL1. GHC was recognized as
a stable platform with a number of justifications, and the design of KL1
started to converge.
The article focuses on the history of the design of KL1, and also
presents personal principles behind the language design and perspectives
on the future of GHC and KL1.

[ps]
Kazunori Ueda,
Designing a Concurrent Programming Language.
In Proceedings of an International Conference organized by the IPSJ to
Commemorate the 30th Anniversary (InfoJapan'90), Information Processing
Society of Japan, October 1990, pp.87-94.

This paper reviews the design and the evolution of a
concurrent programming language Guarded Horn Clauses (GHC). GHC was
born from a study of parallelism in logic programming, but turned out to
be a simple and flexible concurrent programming language with a number
of nice properties. We give both an abstract view of GHC computation
based on the notion of transactions and a concrete view, namely an
operational semantics, based on reductions. Also, we discuss in detail
the properties of GHC such as its atomic operations, which have much to
do with the design of GHC.

Transformation rules for (Flat) GHC
programs are presented, which refine the previous rules proposed by one of
the authors (Furukawa et al. 1987). The rules are based on
unfolding/folding and are novel in that they are stated in terms of
idempotent substitutions with preferred directions of bindings. They are
more general than the old rules in that no mode system is assumed and that
the rule of folding is included. The presentation of the rules suggests
that idempotent substitutions with preferred directions of bindings are an
appropriate tool for modeling information in (concurrent) logic
programming. A semantic model is given which associates with a multiset
of goals the set of possible finite sequences of transactions (via
substitutions) on that multiset.
A transformation preserves the set of transaction sequences that are made
without the risk of the failure of unification.
The model also deals with anomalous behavior such as the failure of
unification and deadlock, so it can be shown with the same model that the
transformation cannot introduce those anomalies.
Applications of the
transformation technique include the fusion of communicating parallel
processes. Some familiarity with GHC is assumed.
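
As an illustration of process fusion (our own before/after sketch, not
an example from the paper), unfold/fold transformation can eliminate an
intermediate stream:

    % Before: producer and consumer communicate via stream Xs.
    main(Ys)       :- true | nats(0, Xs), sq(Xs, Ys).
    nats(N, Xs)    :- true | Xs = [N|Xs1], N1 := N + 1, nats(N1, Xs1).
    sq([X|Xs], Ys) :- true | Y := X * X, Ys = [Y|Ys1], sq(Xs, Ys1).

    % After fusion: the two processes and the stream Xs are merged away.
    main(Ys)       :- true | natsq(0, Ys).
    natsq(N, Ys)   :- true | Y := N * N, Ys = [Y|Ys1], N1 := N + 1, natsq(N1, Ys1).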

A Concurrent Prolog compiler, whose target language is (sequential)
Prolog, was implemented in Prolog. The object program obtained can
further be compiled into machine code by a Prolog compiler. Due to
the similarity among the source, target and implementation languages,
the compiler and the runtime support were small and very rapidly
developed. Benchmark tests showed that (twice-)compiled Concurrent
Prolog programs ran 2.7 to 4.4 times faster than comparable Prolog
programs running on the interpreter of the same Prolog system, and
2.7 to 5.3 times slower than those running on its compiler. After these
experiments, the Concurrent Prolog compiler was modified to obtain a
compiler of the new parallel logic programming language, GHC (Guarded
Horn Clauses), and almost the same efficiency was achieved. These
compilers will serve for practice of parallel logic programming.

A set of Horn clauses augmented with a `guard' mechanism is shown to be a simple and yet powerful parallel logic programming language.
(The first paper on the concurrent logic programming language GHC (Guarded Horn Clauses).)
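
The flavor of the language can be seen from the standard GHC definition
of nondeterministic stream merging, which appears throughout the GHC
literature:

    % merge/3 suspends (by head matching) until one of its input
    % streams is instantiated; the choice between the first two clauses
    % is the only nondeterminism.
    merge([X|Xs], Ys, Zs) :- true | Zs = [X|Zs1], merge(Xs, Ys, Zs1).
    merge(Xs, [Y|Ys], Zs) :- true | Zs = [Y|Zs1], merge(Xs, Ys, Zs1).
    merge([],     Ys, Zs) :- true | Zs = Ys.
    merge(Xs,     [], Zs) :- true | Zs = Xs.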

The Concurrent Prolog predicate for merging n input streams is
investigated, and a compilation technique for getting its efficient
code is presented. Using the technique, data are transferred with a
delay independent of n. Furthermore, it is shown that the addition
and the removal of an input stream can be done within an average time
of O(1). The predicate for distributing data on an input stream to n
output streams can also be realized as efficiently as n-ary merge.
The compilation technique for the distribute predicate can further be
applied to the implementation of mutable arrays that allow
constant-time accessing and updating. Although the efficiency stated
above could be achieved by a sophisticated compiler, the code should
be provided directly by the system to avoid both the bulk of the
source programs and the time required to compile them.
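
A source-level sketch of the distribute predicate for n = 2
(hypothetical; the paper concerns efficient object code for the general
n-ary case rather than this naive source form):

    % Route each message m(I, X) on the input stream to the I-th of the
    % two output streams; closing the input closes both outputs.
    dist([m(1,X)|S], O1, O2) :- true | O1 = [X|O1a], dist(S, O1a, O2).
    dist([m(2,X)|S], O1, O2) :- true | O2 = [X|O2a], dist(S, O1, O2a).
    dist([],         O1, O2) :- true | O1 = [], O2 = [].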

This thesis introduces the programming language Guarded Horn Clauses
which is abbreviated to GHC. Guarded Horn Clauses was born from the
examination of existing logic programming languages and logic programming
in general, with special attention paid to parallelism.

The main feature of GHC is its extreme simplicity compared with
other parallel programming languages. GHC is a restriction of a
resolution-based theorem prover for Horn-clause sentences. The restriction
has two aspects: One is the restriction on data flow caused by
unification, and the other is the introduction of choice nondeterminism.
The former is essential for a general-purpose language and it also provides
GHC with a synchronization primitive. The latter is required
by the intended applications which include a system interacting with the
outside world. What is characteristic of GHC is that all the
restrictions are imposed through the semantics given to the sole
additional syntactic construct, the guard.

Although Guarded Horn Clauses can be classified into the family of
logic programming languages, it has a close relationship to other formalisms
including dataflow languages, Communicating Sequential Processes, and
functional languages for multiprocessing. Except for the lack of
higher-order facilities, GHC can be viewed as a generalization of these
frameworks. The simplicity and generality of GHC will make it suitable for
a standard not only of parallel logic programming languages but of parallel
programming languages. Moreover, it is simple enough to be regarded as a
computation model as well as a programming language.

Attention has always been paid to the possibility of efficient
implementation during the design stage of GHC. We showed that stream
merging and distribution, which are expected to be heavily used, can be
implemented with the same time complexity as many-to-one communication
in procedural languages. Furthermore, we made
available an efficient compiler-based implementation of a subset of GHC on
top of Prolog.

GHC deliberately abandons completeness as a theorem prover; this is a
design decision, not a compromise. Nevertheless, it can be used for efficient
implementation of exhaustive solution search for Horn-clause programs. We
showed how to automatically compile a Horn-clause program for exhaustive
search into a GHC program.
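
For instance, all solutions of the Horn-clause goal append(Xs, Ys, Zs)
for a known Zs can be enumerated by a deterministic GHC program (a
hand-written sketch in the spirit of the compilation, not actual
compiler output):

    % splits/2 returns, on stream Ss, every pair s(Xs, Ys) such that
    % append(Xs, Ys, Zs) holds; no backtracking is involved.
    splits(Zs, Ss)          :- true | Ss = [s([], Zs)|Ss1], splits1(Zs, [], Ss1).
    splits1([],      _, Ss) :- true | Ss = [].
    splits1([Z|Zs], Xs, Ss) :- true |
        app(Xs, [Z], Xs1), Ss = [s(Xs1, Zs)|Ss1], splits1(Zs, Xs1, Ss1).
    app([],     Ys, Zs) :- true | Zs = Ys.
    app([X|Xs], Ys, Zs) :- true | Zs = [X|Zs1], app(Xs, Ys, Zs1).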

We have been working on the design and implementation of HydLa, a
high-level modeling language for hybrid systems. Its objectives and
features include a constraint-based (as opposed to automata-based)
formalism, proper handling of uncertainties, and nondeterministic
execution. The talk will describe an overview of the language with
live demonstration of our prototype implementation.

We propose LMNtal, a simple language model based on the
rewriting of hierarchical graphs that use logical variables to represent
links. The two major goals of the model are (i) to unify various
computational models based on multiset rewriting and (ii) to serve as the
basis of a truly general-purpose language covering various platforms
ranging from wide-area to embedded computation. Another important
contribution of the model is that it greatly facilitates programming
with dynamic data structures.

This paper informally introduces a programming language Guarded
Horn Clauses (GHC) through program examples. GHC is a parallel
programming language devised from investigation of the basic framework
and practice of logic programming. It has introduced the guard
construct with simple semantics into logic programming to express
interacting processes. A GHC program naturally expresses parallelism
inherent in the original problem. The simple and uniform framework of
GHC should be far easier to understand than the constructs of
conventional parallel programming languages. We explain GHC in
comparison with Prolog, the best-known logic programming language.
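
As a taste of the difference (an illustrative sketch): in GHC, head
matching never binds caller-side variables, so a goal whose input has
not yet arrived simply suspends, giving dataflow synchronization for
free.

    % sum/3 suspends on an unbound first argument instead of proceeding;
    % the same clauses in Prolog would unify the head with the unbound
    % argument and go ahead.
    sum([],     A, S) :- true | S = A.
    sum([X|Xs], A, S) :- true | A1 := A + X, sum(Xs, A1, S).

A caller can therefore spawn sum(Xs, 0, S) before the producer of Xs
has emitted anything.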