The computing power of recent massively parallel supercomputers is rising to the challenge of exploding demands for speed and memory that can be dedicated to a single problem. Still, the difficulty of parallel programming persists, and there is increasing demand for high-level support for building discrete event models that execute on such platforms. We present a parallel DEVS-based (Discrete Event System Specification) simulation environment that can execute on distributed memory multicomputer systems. Underlying the environment is a parallel container class library that hides the details of message passing technology while providing high-level abstractions for hierarchical, modular DEVS models. The objective of the Heterogeneous Container Class Library (HCCL) is to provide convenient object-oriented primitives for utilizing a collection of distributed computing resources to solve large problems and to speed up computations. Implemented via ensemble methods, the parallel container classes provide concurrency and a parallel computing paradigm at a higher level of abstraction, encapsulating the details of the underlying message passing mechanisms. The difficulty of the synchronization problem is reduced by the inherent nature of the ensemble method primitives. The DEVS/containers architecture for parallel simulation was first implemented on a massively parallel platform (the CM-5) using the CMMD message passing library. The subsequent SP2 implementation uses portable MPI, so that the simulation architecture can be mapped to any heterogeneous and distributed computing environment. Observed performance of the C++ implementation on the Thinking Machines CM-5 and IBM SP2 for high-resolution ecosystem models demonstrates that high performance need not be sacrificed in providing high-level abstractions for discrete event modelling.
The study of performance and the exploitation of the natural parallelism in hierarchical discrete event models are also supported by the capability of mapping DEVS models to processors. The closure-under-coupling property and a mail message approach to interprocessor communication enable a user to easily partition and map DEVS models onto parallel platforms. We study how the mapping of DEVS models affects the performance and efficiency of parallel simulation. The results agree with earlier theory, which predicts that optimal mappings are influenced by communication overhead and the communication/computation ratio.


dc.type: text
dc.type: Dissertation-Reproduction (electronic)
dc.subject: Engineering, Electronics and Electrical.
dc.subject: Computer Science.
thesis.degree.name: Ph.D.
thesis.degree.level: doctoral
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.grantor: University of Arizona
dc.contributor.advisor: Zeigler, Bernard P.
dc.identifier.proquest: 9720633
dc.identifier.bibrecord: .b34548695

All Items in UA Campus Repository are protected by copyright, with all rights reserved, unless otherwise indicated.