Model abstraction is a method for reducing the complexity of a simulation model while maintaining the validity of the simulation results with respect to the question that the simulation is being used to address. Frantz identified and categorized a number of abstraction techniques drawn from both traditional simulation modeling and more recent approaches such as qualitative simulation and metamodeling. In this paper, we consider how to make practical use of the abstraction concept. We first look at some examples that illustrate the concepts and applications of abstraction in the context of actual models. We then generalize these examples to discuss a methodology for using abstractions as the basis for model analysis and design.

The dynamic focusing approach (DFA) has been under development for several years. Its intent is to address several of the issues of mixed-level simulations, particularly the aggregation issues. Though the approach requires that the system be modeled within certain constraints, many systems of interest fit well within them. The approach combines a hierarchical representation of knowledge with a stochastic propagation mechanism, providing the capability to move gracefully from coarse granularity to fine granularity under user guidance. Prototype tools have been developed for engineering analysis, combat simulation, and TQM process implementation. This paper gives an overview of the approach and its current status.

As complex models are used in practice, modelers require efficient ways of abstracting their models. Through the use of hierarchy, we are able to simplify and organize a complex system. The problem with hierarchical modeling is that the system components at each level depend on the next-lowest level, so that we are unable to run each level independently. We present a way to augment hierarchical modeling in which abstraction can take place on two fronts: structural and behavioral. Our approach is to use structural abstraction to organize the system hierarchically, and then apply behavioral abstraction to each level to approximate the lower level's behavior so that the level can be executed independently. The proposed abstraction method is semi-automatic and makes it possible to view and analyze complex systems at different levels of abstraction.

A hybrid system consists of continuous systems and discrete event systems that interact with each other. In such a configuration, a continuous system cannot directly communicate with a discrete event system, so a form of interface between the two systems is required for communication. An interface from a continuous system to a discrete event system requires abstraction of the continuous system as a discrete event system. This paper proposes a methodology for abstracting a continuous system as a discrete event system using a neural network. The continuous system is first represented by a timed state transition model, and the model is then mapped into a neural network using the learning capability of the network. With a simple example, this paper describes the abstraction process in detail and discusses application methods of the neural network model. Finally, an application of such abstraction in the design of intelligent control is discussed.
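The abstraction pipeline described above can be sketched in outline. The continuous dynamics (a draining tank), the quantization levels, and the lookup table that stands in for the trained neural network below are all illustrative assumptions, not the paper's actual models:

```python
import numpy as np

# Illustrative continuous system (not from the paper): a draining tank,
# dx/dt = -0.5 x, integrated by forward Euler.
def trajectory(x0=10.0, dt=1e-3, t_end=20.0):
    ts, xs = [0.0], [x0]
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-0.5 * x)
        ts.append(ts[-1] + dt)
        xs.append(x)
    return np.array(ts), np.array(xs)

# Step 1: represent the trajectory as a timed state transition model by
# quantizing the state and recording the dwell time before each transition.
def timed_transitions(ts, xs, levels):
    states = np.digitize(xs, levels)
    out = []
    t_enter, s = ts[0], states[0]
    for t, s_new in zip(ts, states):
        if s_new != s:
            out.append((s, s_new, t - t_enter))
            t_enter, s = t, s_new
    return out

ts, xs = trajectory()
trans = timed_transitions(ts, xs, levels=np.linspace(0.0, 10.0, 11))

# Step 2: in the paper a neural network learns the (state -> next state,
# dwell time) map; here a table of mean dwell times stands in for it.
dwell = {}
for s, s_new, d in trans:
    dwell.setdefault((s, s_new), []).append(d)
des_model = {k: float(np.mean(v)) for k, v in dwell.items()}
```

The resulting (state, next state, mean dwell time) map is exactly the kind of timed state transition model that a neural network could be trained to reproduce and generalize.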

Complex interactions of natural and artificial processes over both time and space call for powerful new modeling methodologies. Approaches based on partial differential equations entail an enormous computational burden that greatly limits their applicability. Deriving DEVS representations from traditional differential equation models through a well-justified process of abstraction can assure relative validity and realism while gaining orders of magnitude of speedup. However, when executed within an optimization loop, distributed models must still be greatly simplified in order to allow within-our-lifetime convergence. To perform such simplification, we propose a multiresolution search strategy utilizing aggregation through parameter morphisms. The idea is that we can successively improve the result of optimization by successively increasing model resolution while narrowing its scope through constraint propagation of parameter values from low-resolution to successively higher-resolution models.
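A minimal sketch of the multiresolution idea, with a cheap stand-in objective (the real setting involves expensive distributed simulations) and a simple interval-narrowing rule standing in for constraint propagation through parameter morphisms:

```python
import numpy as np

# Hypothetical response surface standing in for an expensive distributed
# simulation inside the optimization loop.
def response(p):
    return (p - 0.3) ** 2 + 0.05 * np.sin(20.0 * p)

def multiresolution_search(lo=0.0, hi=1.0, stages=4, points=9):
    """Each stage optimizes at the current resolution, then propagates the
    result downward by narrowing the parameter range for the next, finer
    stage (a toy analogue of constraint propagation across resolutions)."""
    best = lo
    for _ in range(stages):
        grid = np.linspace(lo, hi, points)
        best = grid[int(np.argmin([response(p) for p in grid]))]
        step = (hi - lo) / (points - 1)
        lo, hi = best - step, best + step   # narrowed scope, finer grid
    return best

p_star = multiresolution_search()
```

Because each refined grid is centered on the previous stage's best point, the objective value at the incumbent never worsens as resolution increases.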

Given a physical system, there are experts who have knowledge about how the system operates. In some cases, there also exists quantitative knowledge in the form of deep models. One of the main issues in dealing with these different types of knowledge is: how does one address the difference between the two model types, each of which represents a different level of knowledge about the system? We have devised a method that starts with (1) the expert's knowledge about the system, and (2) a quantitative model that can represent all or some of the behavior of the system. The method then adjusts the knowledge in either the rule-based system or the quantitative system to achieve some degree of consistency between the two representations. By checking and resolving the inconsistencies, we provide a way to obtain better models of systems in general by exploiting knowledge at all levels, whether qualitative or quantitative.

MOOSE (multimodel object oriented simulation environment) is an enabling environment for modeling and simulation, under construction at the University of Florida, based on OOPM (object oriented physical modeling). OOPM extends object-oriented program design with visualization and a definition of system modeling that reinforces the relation of model to program. OOPM is a natural mechanism for modeling large-scale systems, and facilitates effective integration of disparate pieces of code into one simulation. Components of MOOSE are modeler, translator, engine, and scenario: (1) Modeler interacts with the model author via a GUI to capture the model design; (2) Translator is a bridge between model design and model execution, reading Modeler output, building structures representing the model, and emitting C++ (or potentially other) code for the model; (3) Engine is a C++ program, composed of translator output plus runtime support, compiled and linked once, then repeatedly activated for model execution; (4) Scenario is a visualization-enabling GUI which activates and interacts with the engine, and displays engine output in a form meaningful to the user. Dynamic model types supported include finite state machines, functional block models, and equational constraint models; alternatively, model authors may create their own C++ 'code models'; model types may be freely combined; class libraries facilitate reuse. MOOSE emphasizes multimodeling, which glues together models of the same or different types, produced during model refinement and reflecting various abstraction perspectives, to adjust model fidelity during development and during model execution. Underlying multimodeling is the 'block' as the fundamental object. Every model is built from blocks, expressed in a modeling assembly language.
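The multimodeling idea of gluing together models of different types can be sketched as follows; the FSM layer, the two block models, and the switching rule are invented for illustration and are not MOOSE code:

```python
# Sketch of multimodeling: an FSM layer selects which refining "block"
# model is active, so fidelity can be adjusted during execution.
class Block:
    def step(self, x, dt):
        raise NotImplementedError

class CoarseCooling(Block):            # low-fidelity block: linear decay
    def step(self, x, dt):
        return x - 0.5 * dt

class FineCooling(Block):              # higher-fidelity block: exponential decay
    def step(self, x, dt):
        return x - 0.5 * x * dt

class Multimodel:
    """FSM layer glued onto block models: state 'hot' runs the fine model,
    state 'cool' the coarse one; the FSM transitions on the model's state."""
    def __init__(self, x0):
        self.x, self.state = x0, "hot"
        self.blocks = {"hot": FineCooling(), "cool": CoarseCooling()}

    def step(self, dt):
        self.x = self.blocks[self.state].step(self.x, dt)
        self.state = "hot" if self.x > 1.0 else "cool"   # FSM transition

m = Multimodel(x0=10.0)
for _ in range(100):
    m.step(0.05)
```

Each layer is a block with a uniform stepping interface, so model types compose freely, which is the essence of the multimodeling glue the abstract describes.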

Object-oriented methodologies provide robust designs that focus on the essence of the problem to be solved. These methodologies are bottom-up design processes: the process begins with the object model (what), then builds the dynamic model (when), and then the functional model (how). While development of the object model is very straightforward and direct, development of the dynamic and functional models is more circuitous. With this process, moreover, the structure or architecture of the system does not become apparent until the development of the functional model. In addition, many methods, such as the object modeling technique, are difficult to use for large systems where there are more than 20 developers. Systems engineering, on the other hand, is a powerful, functionally-based process that can be used to address very complicated, large, and technically difficult problems. It is a top-down approach that connects an analysis and design loop through the use of functional decomposition. Although some authors criticize functional approaches for incomplete specifications and designs, properly applied they can generate designs that are robust, modular, scalable, and extensible. At first glance it would seem that these orthogonal approaches could be combined, using each to address different elements of the problem. However, some problems arise when one tries to combine the power of systems engineering to partition the problem into manageable systems with the capability of object-oriented modeling and design. This paper addresses that problem and provides a solution.

We present an approach called object behavior specification that combines principles of systems theory with those of object orientation. The approach adds a middle layer between the set-theoretic formalism of DEVS (discrete event system specification) and its implementation in C++ and Java. Historically, the implementation came first and the object behavior specification was abstracted from it. However, once established, the approach may enable enhanced reuse of DEVS implementation designs. We also show the applicability of the approach to improved formalization of animation and dynamic structure implementations.

SHIFT is a programming language with simulation semantics. The main distinguishing features of SHIFT are: (1) it models agents that have continuous-time and discrete event dynamics and provides explicit syntax to specify such behavior; (2) it models systems that consist of a heterogeneous set of interacting agents, where models of the individual agents are known and the goal is the study of the emergent behavior resulting from their interaction; (3) it can simulate a dynamic set of agents; and (4) it can represent changes in the interaction dependencies among the agents. This paper provides an overview of the concepts and constructs of the SHIFT language.

High-speed simulation of concurrent systems requires distributed processing if meaningful results are to be obtained for large systems in a reasonable timeframe. One of the most common methods used for such simulation is parallel discrete event simulation (PDES). A range of PDES simulation kernels have been developed, and much research has been devoted to optimistic execution strategies such as time warp. Unfortunately, in all this effort some fundamental aspects of object-oriented modeling for simulation have received scant attention, in particular the ability of simulation kernels to act on truly generic simulation objects. In this context we define a truly generic object to be one which totally defines its responses to external stimuli, but which has no concept of its place in the interconnected web of objects that comprise the total simulation environment. To address this problem, we propose a new modeling approach based on interacting objects, and an associated simulation kernel architecture. This paper describes the architecture and features of our simulation kernel in detail and demonstrates, using a small example, the benefits of adopting such a modeling approach. The major specific benefits include true object genericity, enhanced scope for object reuse, and enhanced opportunities to use polymorphism.
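The notion of a truly generic object can be illustrated with a toy kernel. The class names and the event-routing scheme below are assumptions made for the sketch, not the kernel described in the paper:

```python
import heapq
import itertools

class GenericObject:
    """A truly generic object: it defines its responses to stimuli but has
    no knowledge of the web of objects it is connected to."""
    def handle(self, value):
        raise NotImplementedError   # return a list of (output value, delay)

class Doubler(GenericObject):
    def handle(self, value):
        return [(value * 2, 1.0)]

class Kernel:
    """The kernel owns the interconnection web and the event queue; because
    routing lives here, any GenericObject can be wired in and reused."""
    def __init__(self):
        self.objects, self.wiring = {}, {}
        self.queue, self.seq = [], itertools.count()
        self.log = []

    def add(self, name, obj):
        self.objects[name] = obj
        self.wiring[name] = []

    def connect(self, src, dst):
        self.wiring[src].append(dst)

    def inject(self, t, name, value):
        heapq.heappush(self.queue, (t, next(self.seq), name, value))

    def run(self, t_end):
        while self.queue and self.queue[0][0] <= t_end:
            t, _, name, value = heapq.heappop(self.queue)
            self.log.append((t, name, value))
            for out, delay in self.objects[name].handle(value):
                for dst in self.wiring[name]:
                    self.inject(t + delay, dst, out)

k = Kernel()
k.add("a", Doubler()); k.add("b", Doubler())
k.connect("a", "b")
k.inject(0.0, "a", 3)
k.run(5.0)
```

Because `Doubler` never names its neighbors, the same class can be rewired into any topology, which is the reuse and polymorphism benefit the abstract claims.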

Simulation of large complex systems for the purpose of evaluating performance and exploring alternatives is a computationally slow process, currently still out of the domain of real-time applications. This paper overviews advances in three directions aimed at overcoming this limitation. First, based on developments in the theory of discrete event systems, concurrent simulation enables the extraction of information from a single simulation that would otherwise require multiple repeated simulations. This effectively provides simulation speedups of possibly orders of magnitude. A second direction attempts to use simulation for the purpose of obtaining a 'metamodel' of the actual system, i.e., an approximate 'surrogate' model which is computationally very fast, yet accurate. We specifically discuss the use of neural networks as metamodeling devices which may be trained through simulation. Finally, hierarchical simulation provides yet another means for speedup, a major challenge being the preservation of fidelity between hierarchical levels. In practice, using the statistical average of a high resolution level simulator output as the input for a lower resolution level causes significant loss of stochastic fidelity. We present an approach in which we cluster the high resolution simulation output into 'path bundles' as the input for the lower resolution level. The paper includes applications of these new directions to areas such as combat simulation and design of C3I systems.
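As an illustration of the metamodeling direction, the sketch below simulates an M/M/1 queue (a stand-in for the slow system of interest) and trains a tiny one-hidden-layer network on the results; the queueing model, network architecture, and training details are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# The 'actual system': an M/M/1 queue whose mean wait is estimated by
# simulation (Lindley's recursion), the slow evaluation we want to replace.
def sim_mean_wait(lam, mu=1.0, n=20000):
    w = total = 0.0
    for _ in range(n):
        a = rng.exponential(1.0 / lam)   # interarrival time
        s = rng.exponential(1.0 / mu)    # service time
        w = max(0.0, w + s - a)          # Lindley recursion
        total += w
    return total / n

# Train the metamodel through simulation at a few arrival rates.
X = np.linspace(0.1, 0.8, 8).reshape(-1, 1)
y = np.array([sim_mean_wait(l) for l in X[:, 0]]).reshape(-1, 1)
ym, ys = y.mean(), y.std()
yn = (y - ym) / ys                       # standardize targets for training

# A tiny one-hidden-layer tanh network fitted by batch gradient descent:
# the fast 'surrogate' that replaces further simulation runs.
W1, b1 = rng.normal(0, 1, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - yn
    gW2, gb2 = h.T @ err / len(X), err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ dh / len(X), dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def metamodel(lam):
    h = np.tanh(np.array([[lam]]) @ W1 + b1)
    return float((h @ W2 + b2) * ys + ym)
```

Once trained, `metamodel` answers what-if questions in microseconds, whereas each call to `sim_mean_wait` requires a full simulation run.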

Monte Carlo simulations are often used as the performance evaluation tool of choice for the analysis and optimization of complex human-made systems such as manufacturing plants, communication networks, logistic-command-control systems, and other civilian/military systems.

Using an event synchronization scheme, we develop a new method for parallel simulation of many discrete event dynamic systems concurrently. Though a few concurrent simulation methods have been developed during the last several years, such as the well-known standard clock method, most of them are largely limited to Markovian systems. The main advantage of our method is its applicability to non-Markovian systems. For Markovian systems, a comparison study of the efficiency of our method and the standard clock method was carried out on the Connection Machine CM-5, a parallel machine with both SIMD (single instruction, multiple data) and MIMD (multiple instruction, multiple data) architectures. The simulation results show that if the event rates of a Markovian system do not differ by much, then the two methods are comparable, with the standard clock method performing better in most cases. For Markovian systems with very different event rates, our method often yields better results. Most importantly, our simulation results also show that our method works as efficiently for non-Markovian systems as for Markovian systems.

There are significant opportunities for the development of parallel/distributed simulation algorithms in the context of parametric studies of discrete event systems. In such studies, simulation of multiple (often a large number of) parametric variants is required in order to, for example, identify significant parameters (factor screening), determine directions for response improvement (gradient estimation), find optimal parameter settings (response optimization), or construct a model of the response (metamodeling). The computational burden in this case is to a large extent due to the large number of alternatives that need to be simulated. An effective strategy in this context is to concurrently simulate a number of parametric variants: the structural similarity of the variants often allows for a significant amount of sharing of the simulation work, and the code for concurrent simulation of the variants can often be implemented in a parallel/distributed environment. In this paper, we describe two methods of parallel/distributed/concurrent simulation, called standard clock (SC) and general shared clock (GSC) simulation. Both rely on an event-reservation approach: in contrast to most discrete-event simulation methods, which are based on event scheduling, in SC and GSC simulation the occurrence instances of all events are reserved on the time axis. These instances may or may not be used. The event-reservation approach frees the clock mechanism of the simulation from needing feedback from the state-update mechanism. Due to this autonomy of the clock mechanism, a single clock can be used to drive a number (possibly large) of variants concurrently and in parallel. The autonomy of the clock mechanism is also the key to the different implementation strategies we adopt. To illustrate, we describe the simulation of parametric versions of wireless communication networks in message-passing and shared-memory environments.
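The event-reservation idea can be sketched for a family of M/M/1 variants differing in service rate; the uniformized clock below reserves one event instant at a time and shares it, fictitious events included, across all variants (an illustrative rendering, not the paper's SC/GSC implementation):

```python
import random

random.seed(1)

def standard_clock(lam, mus, t_end=1000.0):
    """One uniformized clock drives M/M/1 queue-length paths for every
    service-rate variant in `mus`; each reserved event instant is shared,
    and a variant that cannot use it simply sees a fictitious event."""
    big = lam + max(mus)                  # uniformization rate
    n = {mu: 0 for mu in mus}             # queue length per variant
    area = {mu: 0.0 for mu in mus}        # integral of queue length
    t = 0.0
    while t < t_end:
        dt = random.expovariate(big)      # one reserved event instant
        u = random.random()               # one event-type draw, shared
        for mu in mus:
            area[mu] += n[mu] * dt
            if u < lam / big:             # arrival, identical in all variants
                n[mu] += 1
            elif u < (lam + mu) / big and n[mu] > 0:
                n[mu] -= 1                # departure; otherwise fictitious
        t += dt
    return {mu: area[mu] / t for mu in mus}  # time-average queue length

avg = standard_clock(lam=0.5, mus=[0.7, 0.9, 1.2])
```

Because the clock never needs feedback from any variant's state, the same `(dt, u)` stream drives all variants; as a side effect the sample paths are coupled, so the variant with the slowest service rate dominates the others pathwise.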

Recent research has demonstrated that ordinal comparison, i.e., comparing the relative order of performance measures of different designs, is efficient for the comparison and selection of designs for discrete event dynamic systems. This paper is concerned with comparison and selection in concurrent simulation, where a large number of sample paths under different designs are generated efficiently. In particular, the effect of correlation among simulation processes on the convergence of ordinal comparison is investigated. Using the concept of an indicator process, bounds on the rate of convergence of ordinal comparison are obtained. Such bounds imply that perturbation analysis, a powerful technique for performance analysis of discrete event dynamic systems, can be advantageous for ordinal comparison. Simulation examples show that appropriate correlation can significantly accelerate the convergence of ordinal comparison. Furthermore, it is shown that positive quadrant dependence provides guaranteed acceleration of the convergence of ordinal comparison compared with independent simulations.
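The benefit of correlation can be seen in a small sketch: when two designs consume a common random number stream, their sample paths are coupled so that the better design dominates pathwise, and the correct order emerges from a very short run (illustrative code, not the paper's indicator-process analysis):

```python
import math
import random

def mean_wait(lam, mu, stream):
    """Mean wait of an M/M/1 queue via Lindley's recursion, driven by an
    explicit stream of (arrival, service) uniform random numbers."""
    w = total = 0.0
    for ua, us in stream:
        a = -math.log(1.0 - ua) / lam    # interarrival time
        s = -math.log(1.0 - us) / mu     # service time
        w = max(0.0, w + s - a)
        total += w
    return total / len(stream)

random.seed(7)
stream = [(random.random(), random.random()) for _ in range(200)]

# Common random numbers: both designs consume the *same* stream, so their
# sample paths are positively correlated and the slower design's service
# times dominate the faster design's, customer by customer.
w_slow = mean_wait(lam=0.5, mu=0.8, stream=stream)
w_fast = mean_wait(lam=0.5, mu=1.2, stream=stream)
```

With only 200 customers, independent runs could easily order these two designs incorrectly; under common random numbers the ordering holds on every sample path.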

There is a need to design, develop, and test new mobile communication networks for military applications. The hardware cost to outfit a single node may be quite high; much of the cost is in RF hardware, modems, and encryption devices. Replicating such costs over several nodes and adding the cost of maintaining a field site can quickly lead to unacceptable budget levels. One solution to this problem is, in the initial development and testing phase, to develop network communication systems that can operate with either real or simulated transmitters, receivers, modems, etc. This paper describes how we accomplished this task for the development of a high-frequency, data/voice (D/V) mobile network. The underlying, distributed, real-time simulation software evolved from Sim++2. On top of this we built a simulation package to model mobile communication networks. Software for the SubNet Controller (SNC) of the HF D/V network was developed to work with these simulation packages as well as with real RF equipment. The SNC software was tested in a 6-node network in which some of the RF equipment was simulated and some was real. The resultant system provides a testbed for examining the performance of command and control systems that must operate over mobile RF communication systems.

The emergence of new standards in distributed communicating objects (e.g. CORBA, Java) and distributed simulation (e.g. HLA) offers the potential for simulations to be constructed by inter-connecting existing component models. However, past experience has shown that object orientation alone is not sufficient to exploit the opportunities for reuse. This paper proposes some items for inclusion in the requirement for a smart computer assistant which would manage the construction of distributed simulations and actively encourage the reuse of components by automating much of the process.

The construction, execution, and analysis of application-oriented simulations is difficult; the integration, coordinated execution, and after-action review of heterogeneous distributed simulations can be overwhelming. Economy, risk mitigation, and just plain common sense compel us to utilize legacy simulations, but discrepancies in controllability, fidelity, implementation paradigm, algorithms, representations, time management, construction, etc. tend to negate any potential gain. While several generations of interoperability approaches and associated standards have emerged and matured, even they have been limited in their ability to accommodate disparate classes of simulations. Within the permitted scope of this paper, a taxonomy for the most common interoperability issues (portcullises) for distributed simulation is developed. Part of this identification process will consist of establishing contexts and/or prerequisites for the issues, e.g., under what conditions the issues are actually issues at all. As a result, the prioritization will become application dependent. Methods for resolving the issues (battering rams), couched in the form of case studies, are subsequently presented to close the circle. Sources will include industry and government state-of-the-practice, academic state-of-the-art, and our own broad experience. Specific topics to be discussed include application philosophy, the integration of live entities, investigative versus analytical simulation, implications of human-in-the-loop, mixed and/or variable fidelity, heterogeneous time management schemes, current and emerging distributed simulation standards, simulation/exercise management, and control and data distribution. Discussion will focus heavily on examples and experience.

One of the major new technical challenges for distributed simulations is the distribution and presentation of the natural atmosphere-ocean-space environment. The natural terrain environment has been a part of such simulations for a while, but the integration of atmosphere and ocean data and effects is quite new. The DARPA synthetic environments (SE) program has been developing and demonstrating advanced technologies for providing tactically significant atmosphere-ocean data and effects for a range of simulations. A general-purpose data collection, assimilation, management, and distribution system is being developed by the TAOS (Total Atmosphere-Ocean System) project. This system is designed to support the new high level architecture (HLA)/run-time infrastructure (RTI) being developed by the Defense Modeling and Simulation Office (DMSO), as well as existing distributed interactive simulation (DIS) network protocols. This paper describes how synthetic natural environments are being integrated by TAOS to provide an increasingly rich, dynamic synthetic natural environment. Architectural designs and implementations to accommodate a range of simulation applications are discussed. A number of enabling technologies are employed, such as the development of standards for gridded data distribution and the inclusion of derived products and local environmental features within 4-dimensional data grids. The application of TAOS to training, analysis, and engineering simulations for sensor analysis is discussed.

NRL is developing an end-to-end distributed simulation environment in support of several Navy technology development programs. One of the critical needs in this distributed simulation environment is a tactical communications emulator. Current distributed simulation exercises seldom provide a flexible communications system model with realistic fidelity. As a result, we are creating a tactical communications model server architecture which operates in advanced distributed simulations (ADS). We are taking an evolutionary approach that will allow integration of specialized or general commercial off-the-shelf (COTS) communication connectivity models. This paper describes our objectives, the system architecture and design trade-off considerations.

Research in the use of DEVS (discrete event system specification) based representation of large models on massively parallel platforms is summarized here. We show that parallel DEVS representation of large-scale models can achieve several orders of magnitude of speedup. When mapped onto distributed-memory multicomputer systems, additional speedup is obtained. An example of a watershed simulation is presented which has been executed in the DEVS-C++/MPI environment working on the CM-5 and IBM SP2 massively parallel systems.

This paper deals with reusability of DEVS (discrete event system specification) models in a hierarchical model development framework within an object-oriented simulation environment called DEVSim++. The DEVSim++ environment supports model reusability in two dimensions during model development: one form of reuse derives from the hierarchical model construction technology of the DEVS formalism, and the other from the inheritance mechanism of the underlying object-oriented environment. This paper proposes a set of metrics to measure both hierarchical reuse and inheritance reuse of DEVS models developed in DEVSim++. It also suggests a set of guidelines to improve reusability. Empirical measurement of the proposed metrics shows that the guidelines improve reusability of DEVS simulation models in the DEVSim++ environment.

A USAF project has been initiated to enable groupware that currently supports IDEF activity model capture to be extended to support DEVS model construction. The methodology developed for this purpose enables team participants to enter activity data and then be queried for additional data that support DEVS system decomposition, assigning the activities to components and adding in relevant dynamics.

Conventional methodologies constrain models to be static throughout the simulation. However, some systems are better perceived if represented by dynamic structure models. Recent work on modeling methodologies and environments has focused on the representation of discrete event systems that undergo structural changes. We describe the dynamic structure discrete event system specification (DSDEVS) and several other approaches to dynamic structure modeling and simulation. The DSDEVS formalism and its implementation in the DELTA simulation environment provide support for building dynamic structure simulation models in a hierarchical and modular manner. We conclude that general systems theory based formalisms offer sound semantics for dynamic structure modeling and simulation.
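A toy rendering of dynamic structure modeling (not DSDEVS or DELTA code): the model's own state triggers structural transitions while the simulation runs, which a static-structure methodology cannot express:

```python
import random

random.seed(3)

class DynamicNetwork:
    """Illustrative dynamic structure model: a queueing station that adds a
    server under backlog and removes one when idle, changing its own
    structure during the simulation."""
    def __init__(self):
        self.servers = 1
        self.queue = 0
        self.history = []                 # number of servers over time

    def step(self):
        self.queue += random.randint(0, 2)               # arrivals
        self.queue = max(0, self.queue - self.servers)   # service
        if self.queue > 3:                # structural transition: grow
            self.servers += 1
        elif self.queue == 0 and self.servers > 1:
            self.servers -= 1             # structural transition: shrink
        self.history.append(self.servers)

net = DynamicNetwork()
for _ in range(500):
    net.step()
```

The component count recorded in `history` varies over the run: the structure is part of the trajectory, not a fixed parameter.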

Present-day product development endeavors utilize the concurrent engineering philosophy as a logical means for incorporating a variety of viewpoints into the design of products. Since this approach provides no explicit procedural provisions, it is necessary to establish at least a mental coupling with a known design process model. The central feature of all such models is the management and transformation of information. While these models assist in structuring the design process, characterizing the basic flow of operations involved, they provide no guidance facilities. The significance of this feature, and the role it plays in the time required to develop products, is increasing in importance due to the inherent process dynamics, system/component complexities, and competitive forces. The methodology presented in this paper involves the use of a hierarchical system structure, the discrete event system specification (DEVS), and multidimensional state-variable-based metrics. This approach is unique in its capability to quantify designers' actions throughout product development, provide recommendations about subsequent activity selection, and coordinate the distributed activities of designers and/or design teams across all design stages. Conceptual design tool implementation results are used to demonstrate the utility of this technique in improving the incremental decision-making process.

The emergence of the World Wide Web (WWW) as a universal medium of information exchange provides new opportunities for those involved in the modeling and simulation fields. While technologies such as distributed interactive simulation (DIS) and high level architecture (HLA) have provided mechanisms which allow simulations to interact, WWW technology will allow the users of these simulations to interact. This research project is investigating mechanisms that will provide the ability for multiple users to perform analyses on spatial data in a collaborative, distributed environment. These mechanisms will be capable of displaying static spatial data, as well as real time information from data feeds and the output of models and simulations. Such a system will not only allow analysts at the same location to work together, but it will also allow others to work on the project from remote locations via the Internet.

The United States Army is presently developing a new family of high-technology, computer-based training facilities called the Combined Arms Tactical Trainer (CATT). The first of these will be the Close Combat Tactical Trainer (CCTT), where soldiers from armored and mechanized forces at battalion and below will conduct realistic training in manned modules using semi-automated forces (SAF) that operate on a digitized, virtual battlefield. The major scheduling tasks for planning a day's training include selecting training scenarios, scheduling the scenarios throughout the planning horizon, and scheduling training resources for conducting each training scenario. CCTT scheduling is complicated by several factors: multiple scenarios may be scheduled simultaneously, training scenario duration varies by scenario, and the resources for conducting each scenario vary by scenario type and may vary within a scenario type as well. This paper presents a heuristic approach to the CCTT scheduling problem. Scheduling results from the automated scheduling system demonstrate that the heuristic provides 'good' training schedules in a timely manner.

Amherst Systems, Inc., under the sponsorship of Rome Laboratory, has developed a visualization tool for use within both the DIS and the simulation and modeling communities. This product uses the IVIEW2000 software provided by the Air Force's National Air Intelligence Center (NAIC). The capabilities of IVIEW2000, combined with a DIS interface, provide analysts and developers with a free, government-owned visualization tool for the evaluation of DIS exercises. IVIEW2000 is a software package providing a visualization environment for engagement reconstruction used by the simulation, modeling, and analysis community. IVIEW2000 provides many different viewing capabilities, access to player data, and a fully featured control panel with VCR-like functions. It allows analysts to view the entire scenario, or selected portions of it, from a variety of different angles simultaneously, while monitoring various types of player data. Prior to the addition of the DIS interface, IVIEW2000 allowed only scripted scenarios to be played and replayed. With the addition of the DIS interface, IVIEW2000 is now able to read and process DIS PDUs and operate as a real-time visualization tool. It allows an operator to attach to one or multiple DIS entities, and provides bird's-eye views, out-the-window views, data readouts, etc. IVIEW2000, with the DIS interface, operates on a standard SGI Indigo-class machine, making it an affordable alternative to commercial DIS visualization packages.

The spacecraft simulation toolkit (SST) is an advanced software architecture for the modeling and simulation of spacecraft and spacecraft interactions, based upon state-of-the-art techniques, the Khoros visual programming environment, and accurate physical phenomenology. The SST simulates spacecraft systems and subsystems; the user virtually 'builds' a spacecraft by selecting and integrating components, duplicating real-world actions. The models in the SST perform algorithmic simulation of spacecraft functions. System design and simulation databases provide both the knowledge base within which detailed characteristics of the system are described and an efficient means to store simulation results for additional analysis. Representation of the real-world natural environment is provided. Khoros also provides integrated data analysis and software development tools. The SST is being configured to meet a broad range of requirements in design, technology development, systems acquisition, on-orbit operations, and spacecraft operator training.

Monte Carlo based multi-vehicle mission effectiveness analysis is constrained by the computational resources and time required to perform non-trivial studies. Discrete event simulators offer the potential to significantly reduce the amount of processing required to perform such analyses. The issues associated with exploiting discrete event simulation architectures for performance improvement are discussed. Strategies for adaptively recognizing non-critical temporal intervals are presented. Pleiades, an implementation of a DEVS-compliant mission effectiveness simulator, is discussed.

This paper presents an overview of model abstraction methods. Model abstraction methods are techniques that derive simpler conceptual models while maintaining the validity of the simulation results. These methods include variable resolution modeling, combined modeling, multimodeling, and metamodeling. In addition, some taxonomies include approximation, aggregation, linear function interpolation, and lookup tables as model abstraction methods. We discuss these methods in a general framework to assist in understanding the applicability of the various model abstraction methods.

Traditional document construction, while potentially concurrent and occasionally collaborative, is rarely both. As a result, opportunities for compressing time to the first draft and increasing content value cannot be easily leveraged. Further, the absence of a framework that supports collaboration constrains the potential for continuous document evolution (i.e., a living document). These conditions stem not only from a lack of technology application, but equally from document construction culture. The Internet serves as a collection of models in which varying levels of collaboration are supported. For example, newsgroups and e-mail permit dialogues in both broadcast and directed modes, and HTML-formatted files provide a forum for hyperlinking multimedia documentation. These technologies, however, exist in isolation and do not individually provide the services necessary for supporting collaboration during concurrent document construction. While commercially available document-centered frameworks are emerging as viable virtual integration environments, their fairly minimal level of integration with Internet protocols and standards (most notably HTTP and HTML) constrains widespread use within organizational intranets. The lack of native support for HTML mandates deliberate conversion steps for access by common client applications (e.g., WWW browsers). As a result, GRC International, Inc. (GRCI), initiated development of a WWW-based virtual integration environment (VIE) in which documentation contributors, integrators, and reviewers can collaborate using common Internet client applications for user access. The VIE was developed as a collection of services interfaced to an HTTP server via the common gateway interface (CGI). These services, implemented as a single CGI program, dynamically construct the VIE user interface to reflect continuing updates to documentation submissions, reviews, and revisions. Contributions submitted as MIME-encoded e-mail messages may be integrated as new documents or as revisions without requiring the user to understand HTML or the underlying document storage implementation.
