by David Gelernter, David Kaminsky
- Sixth ACM International Conference on Supercomputing, 1992

In this paper we present a new system for making use of the cycles routinely wasted in local area networks. The Piranha system harnesses these cycles to run explicitly parallel programs. Programs written for Piranha are specializations of Linda master/worker programs [5]. We have used Piranha to run a number of production applications. We present a description of the Piranha prototype, briefly explain the Piranha programming methodology, and explore different types of Piranha algorithms. This work was supported by the National Science Foundation under grant number CCR-8657615 and by NASA under grant number NGT-50719.

1 Introduction

As local area networks spanning large numbers of powerful workstations become commonplace, researchers have come to realize that at most sites many nodes are idle much of the time. Ideally there would be some way to recapture some of these lost cycles, which grow increasingly formidable in the aggregate as workstations grow more powerful. In the Piranha model...
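The Linda master/worker pattern that Piranha programs specialize can be sketched as follows, with a thread-safe queue standing in for the shared tuple space. All names here are illustrative; this is not Piranha's API.

```python
import queue
import threading

tasks = queue.Queue()    # holds ("task", i) tuples
results = queue.Queue()  # holds ("result", value) tuples

def worker():
    # A worker repeatedly withdraws a task tuple, computes, and deposits
    # a result tuple -- the Linda in()/out() cycle.
    while True:
        tag, i = tasks.get()
        if tag == "stop":
            return
        results.put(("result", i * i))

def master(n):
    for i in range(n):
        tasks.put(("task", i))                      # out("task", i)
    return sum(results.get()[1] for _ in range(n))  # in("result", ?x)

workers = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
for w in workers:
    w.start()
total = master(10)  # sum of squares 0..9
for _ in workers:
    tasks.put(("stop", None))
print(total)  # 285
```

Because workers only interact through the shared bag of task tuples, the number of workers can vary without changing the program's logic, which is exactly the property Piranha exploits.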

Under "adaptive parallelism," the set of processors executing a parallel program may grow or shrink as the program runs. Potential gains include the capacity to run a parallel program on the idle workstations in a conventional LAN---processors join the computation when they become idle, and withdraw when their owners need them---and to manage the nodes of a dedicated multiprocessor efficiently. Experience to date with our Piranha system for adaptive parallelism suggests that these possibilities can be achieved in practice on real applications at comparatively modest cost. Keywords: parallelism, networks, multiprocessors, adaptive parallelism, programming techniques, Linda, Piranha.

1 Introduction

Most work on parallelism is "static": it assumes that programs are distributed over processor sets that remain fixed throughout the computation. If a program starts out on 64 processors, it runs on exactly 64 until completion, and specifically on the same 64. "Adaptive parallelism" (AP) abo...
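The join-and-withdraw behavior described above can be illustrated with a toy task bag: workers join while their node is idle and retreat when the owner returns, yet the computation still completes. Names are illustrative only, not part of any real system.

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()
for i in range(20):
    tasks.put(i)

def worker(retreat):
    # Run while the node is idle; stop taking new tasks once asked to retreat.
    while not retreat.is_set():
        try:
            i = tasks.get_nowait()
        except queue.Empty:
            return
        results.put(i * i)

retreats = [threading.Event(), threading.Event()]
workers = [threading.Thread(target=worker, args=(r,)) for r in retreats]
for w in workers:
    w.start()
retreats[0].set()  # node 0's owner returns: that worker withdraws
for w in workers:
    w.join()

done = []
while not results.empty():
    done.append(results.get())
print(len(done), sum(done))  # all 20 tasks finish despite the retreat
```

Since each task taken from the bag is finished before the retreat check, withdrawal of one worker only shifts work onto the remaining workers rather than losing it.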

Little-JIL, a language for programming coordination in processes, is an executable, high-level language with a formal (yet graphical) syntax and rigorously defined operational semantics. The central abstraction in Little-JIL is the “step,” which is the focal point for coordination, providing a scoping mechanism for control, data, and exception flow and for agent and resource assignment. Steps are organized into a static hierarchy, but can have a highly dynamic execution structure, including the possibility of recursion and concurrency. Little-JIL is based on two main hypotheses. The first is that coordination structure is separable from other process language issues. Little-JIL provides rich control structures while relying on separate systems for resource, artifact, and agenda management. The second hypothesis is that processes are executed by agents that know how to perform their tasks but benefit from coordination support. Accordingly, each Little-JIL step has an execution agent (human or automated) that is responsible for performing the work of the step. This approach has proven effective in supporting the clear and concise expression of agent coordination for a wide variety of software, workflow, and other processes.

... as defined by Carriero and Gelernter is “the process of building programs by gluing together active pieces” and is a vehicle for building programs that “can include both human and software processes” [1]. From this perspective, it can be seen that coordination is a logically central aspect of process semantics. As with Linda [1], in Little-JIL we have separated coordination from such other language s...

Portability, efficiency, and ease of coding are all important considerations in choosing the programming model for a scalable parallel application. The message-passing programming model is widely used because of its portability, yet some applications are too complex to code in it while also trying to maintain a balanced computation load and avoid redundant computations. The shared-memory programming model simplifies coding, but it is not portable and often provides little control over interprocessor data transfer costs. This paper describes a new approach, called Global Arrays (GA), that combines the better features of both other models, leading to both simple coding and efficient execution. The key concept of GA is that it provides a portable interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, with no need for explicit cooperation by other processes. We have implemented GA libraries on a variety of...
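The key GA concept above — one-sided get/put access to logical blocks of a distributed matrix — can be sketched in a single process as follows. The class and method names are illustrative stand-ins, not the Global Arrays library's actual API.

```python
class GlobalArray:
    """Toy stand-in for a logically shared, physically distributed matrix."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # In the real system this storage is spread across processes.
        self.data = [[0.0] * cols for _ in range(rows)]

    def put(self, r0, r1, c0, c1, block):
        # One-sided write of the logical block [r0:r1) x [c0:c1).
        for i in range(r0, r1):
            for j in range(c0, c1):
                self.data[i][j] = block[i - r0][j - c0]

    def get(self, r0, r1, c0, c1):
        # One-sided read; no cooperation needed from the owning process.
        return [[self.data[i][j] for j in range(c0, c1)]
                for i in range(r0, r1)]

ga = GlobalArray(4, 4)
ga.put(1, 3, 1, 3, [[1.0, 2.0], [3.0, 4.0]])
print(ga.get(2, 3, 2, 3))  # [[4.0]]
```

The point of the interface is that callers address the matrix by logical indices, while the library maps those indices onto whichever process physically holds each block.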

In this paper we analyze mobile code issues from the perspective of object-oriented systems in which thread migration is not supported. This means that both an object's code and data can be transmitted from one place to another, but not the current execution state (if any) associated with the object. This is the case with the Java language, which is often used on the World Wide Web for developing applets, small applications downloaded on the fly and executed on the client machine. While this mechanism is quite useful for enhancing HTML documents with sound and animation, we think that this technology can give its best in the field of distributed-cooperative work, both in the perspective of Internet and Intranet connectivity. Java is indeed a concurrent, multithreaded language, but it offers little help for distributed programming. Thus, we introduce Jada, a coordination toolkit for Java where coordination among either concurrent threads or distributed Java objects is achiev...

...iver reads the message from the object space (and in the synchronous case puts an ack object in the object space). Many other concurrency problems can also be easily solved using shared object spaces [CG90]. We are using Jada as the coordination kernel for implementing more complex Internet languages and architectures. In particular, we are developing on top of Jada the Shade/Java agent based coordination l...

We are designing, implementing, deploying, and operating a secure measurement platform capable of performing various types of Internet infrastructure measurements and assessments. We integrate state-of-the-art measurement and analysis capabilities to try to build a coherent view of Internet topology. In September 2007 we began to use this novel architecture to support ongoing global Internet topology measurement and mapping, and are now gathering the largest set of IP topology data for use by academic researchers. We are using the best available techniques for IP topology mapping, and are developing some new techniques, as well as supporting software for data analysis, topology generation, and interactive visualization of resulting large annotated graphs. This paper presents our current results, next steps, and future goals.

...ward a common task. To enable coordination, Ark employs a new implementation, called Marinda, of the tuple-space coordination model first introduced by D. Gelernter in his Linda coordination language [17, 14]. A tuple space is a distributed shared memory combined with a small number of easy-to-use operations. The tuple space stores tuples, which are arrays of simple values (strings and numbers), and clien...
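The small set of tuple-space operations mentioned above can be sketched in-memory: out() stores a tuple, rd() reads a matching tuple, and in() reads and removes it, with None acting as a wildcard in templates. This is an illustrative sketch, not Marinda's actual API.

```python
import threading

class TupleSpace:
    def __init__(self):
        self.tuples = []
        self.cond = threading.Condition()

    def out(self, tup):
        # Deposit a tuple and wake any blocked readers.
        with self.cond:
            self.tuples.append(tup)
            self.cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def _find(self, template, remove):
        # Block until some tuple matches the template.
        with self.cond:
            while True:
                for tup in self.tuples:
                    if self._match(template, tup):
                        if remove:
                            self.tuples.remove(tup)
                        return tup
                self.cond.wait()

    def rd(self, template):
        return self._find(template, remove=False)

    def in_(self, template):  # "in" is a Python keyword
        return self._find(template, remove=True)

ts = TupleSpace()
ts.out(("temp", "sensor-3", 21.5))
print(ts.rd(("temp", "sensor-3", None)))  # ('temp', 'sensor-3', 21.5)
```

Associative matching on templates, rather than addressing a particular process, is what makes the model a distributed shared memory rather than message passing.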

Experience has shown us that object-oriented technology alone is not enough to guarantee that the systems we develop will be flexible and adaptable. Even "well-designed" object-oriented software may be difficult to understand and adapt to new requirements. We propose a conceptual framework that will help yield more flexible object-oriented systems by encouraging explicit separation of computational and compositional elements. We distinguish between components that adhere to an architectural style, scripts that specify compositions, and glue that may be needed to adapt components' interfaces and contracts. We also discuss a prototype of an experimental composition language called PICCOLA that attempts to combine proven ideas from scripting languages, coordination models and languages, glue techniques, and architectural specification.

1 Introduction

The last decade has shown that object-oriented technology alone is not enough to cope with the rapidly changing requirements of ...

In the last few years the use of distributed structured shared memory paradigms for coordination between parallel processes has become common. One of the best-known implementations of this paradigm is the shared tuple space model (as used in Linda). In this paper we describe a new set of primitives for fully distributed coordination of processes and agents using tuple spaces, called the Bonita primitives. The Linda primitives provide synchronous access to tuple spaces, whereas the Bonita primitives provide asynchronous access to tuple spaces. The proposed primitives are able to mimic the Linda primitives, therefore providing the ease of use and expressibility of Linda together with a number of advantages for the coordination of agents or processes in distributed environments. The primitives allow user processes to perform computation concurrently with tuple space accesses, and provide new coordination constructs which lead to more efficient programs. In this paper we present the ...
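The asynchronous access style described above can be sketched as a split-phase pair: a dispatch call issues the tuple-space request and returns a handle immediately, and an obtain call blocks only when the result is actually needed, so computation overlaps the access. The names below are illustrative, not the actual Bonita primitives.

```python
import queue
import threading

space = queue.Queue()  # stands in for a remote tuple space

def dispatch_in(tag):
    # Issue a non-blocking "in" request; a helper thread waits for a match.
    handle = queue.Queue(maxsize=1)
    def fetch():
        while True:
            t, v = space.get()
            if t == tag:
                handle.put((t, v))
                return
            space.put((t, v))  # not a match: put the tuple back
    threading.Thread(target=fetch, daemon=True).start()
    return handle

def obtain(handle):
    return handle.get()  # block here, not at request time

h = dispatch_in("result")    # request issued asynchronously...
local = sum(range(1000))     # ...useful computation overlaps the access
space.put(("result", 42))    # a producer eventually outs the tuple
r = obtain(h)
print(r, local)
```

With synchronous Linda-style in(), the caller would have stalled at the request; here the blocking point moves to obtain(), which is the efficiency argument the abstract makes.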

Many-to-Many Invocation (M2MI) is a new paradigm for building collaborative systems that run in wireless proximal ad hoc networks of fixed and mobile computing devices. M2MI is useful for building a broad range of systems, including multiuser applications (conversations, groupware, multiplayer games); systems involving networked devices (printers, cameras, sensors); and collaborative middleware systems. M2MI provides an object oriented method call abstraction based on broadcasting. An M2MI invocation means "Every object out there that implements this interface, call this method." An M2MI-based application is built by defining one or more interfaces, creating objects that implement those interfaces in all the participating devices, and broadcasting method invocations to all the objects on all the devices. M2MI is layered on top of a new messaging protocol, the Many-to-Many Protocol (M2MP), which broadcasts messages to all nearby devices using the wireless network's inherent broadcast nature instead of routing messages from device to device. M2MI synthesizes remote method invocation proxies dynamically at run time, eliminating the need to compile and deploy proxies ahead of time. As a result, in an M2MI-based system, central servers are not required; network administration is not required; complicated, resource-consuming ad hoc routing protocols are not required; and system development and deployment are simplified.

...pplication that finds printers wherever the user happens to be, or a surveillance application that displays images from nearby video cameras. Collaborative middleware systems like shared tuple spaces [14, 7]. In many such collaborative systems, every device needs to talk to every other device. Every person's chat messages are displayed on every person's device; every person's calendar on every person's d...