Basic concepts

Entities

A model is a structure created for the purpose of analysing some existing or planned system. It is usually tied to a particular analysis method, such as dynamic or steady-state simulation or model checking. What is inside the model depends strongly on the analysis method. A model is, however, always a unit that can be exported from the Simantics workspace and imported into another workspace.

A model contains one or more configurations. A configuration is a description of the system being modelled. Usually (always?) one of the configurations is the root configuration, describing most aspects of the system, while the other configurations specify deviations from it. A configuration can be parametrized. Multiple configurations are used either to maintain many different but related designs (cases) of the system within the same model, or to parametrize the configuration so that optimization, sensitivity analysis, or a similar method can be applied to the system.
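As an illustration, a deviation configuration can be viewed as an overlay on the root configuration. This is a hypothetical sketch of that idea; the names and the dictionary representation are assumptions for illustration, not how Simantics actually stores configurations:

```python
def effective_configuration(root, deviation):
    """Resolve one case: start from the root configuration and apply
    the deviations (and parameter values) that the case specifies.
    The root configuration itself is left unmodified."""
    resolved = dict(root)
    resolved.update(deviation)
    return resolved
```

A case that only changes the airbag size would then share every other property with the root configuration.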

The main purpose of creating a model of a system is to apply some analysis to it. We call these analyses experiments. An experiment points to a certain configuration but may also contain an additional specification of how the analysis is executed, such as the simulation sequence, the list of subscribed variables, the simulation method used, etc.

Each individual execution of an experiment is a run. What a single run generates depends on the analysis method and the experiment specification. Typical artifacts produced include:

A state is an assignment of values to the properties of the components in the configuration

A history is an assignment of time series to those properties

Additionally, a run can be interactive, so that the current state being simulated can be accessed and even modified during the simulation.

States and histories can also be independent entities in the model that are not produced by experiment runs. They can then be used as inputs to experiments.

Multiple runs can be executed in parallel, some of them on remote machines. One of the runs (states, or histories?) is the active experiment, whose state is visualized in the UI.

Some analysis methods are capable of storing a snapshot of the state of the analysis algorithm. We call these snapshots ICs. An experiment may specify an IC to be used to initialize the analysis. IC and state are slightly overlapping concepts. The main difference between them is that an IC contains the complete state of the analysis algorithm, including internal state not seen by users, in a representation that is optimized for fast initialization of the algorithm. A state, on the other hand, contains only the properties of the components in the configuration; it is optimized for efficient browsing and may be partial (it need not assign a value to every property).
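The entity vocabulary above can be summarized as a small object model. This is a minimal sketch for orientation only; all class and field names here are hypothetical and much simpler than the real Simantics entities:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Configuration:
    """Description of the system being modelled; may be parametrized."""
    name: str
    parameters: dict = field(default_factory=dict)

@dataclass
class State:
    """Assignment of values to component properties; may be partial."""
    values: dict  # property path -> value

@dataclass
class History:
    """Assignment of time series to component properties."""
    series: dict  # property path -> list of (time, value) pairs

@dataclass
class IC:
    """Opaque snapshot of the complete internal algorithm state,
    optimized for fast initialization rather than browsing."""
    blob: bytes

@dataclass
class Experiment:
    """Points to a configuration plus a specification of the analysis."""
    configuration: Configuration
    subscribed: list = field(default_factory=list)  # variables to record
    initial_ic: Optional[IC] = None  # optional IC to initialize from

@dataclass
class Run:
    """One execution of an experiment and the artifacts it produced."""
    experiment: Experiment
    final_state: Optional[State] = None
    history: Optional[History] = None

@dataclass
class Model:
    """Exportable unit holding configurations, experiments and runs."""
    configurations: list = field(default_factory=list)
    experiments: list = field(default_factory=list)
    runs: list = field(default_factory=list)
```

The point of the sketch is the containment and reference structure: a model owns configurations and experiments, an experiment points to one configuration, and each run points back to its experiment while carrying the artifacts it produced.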

Analogy

Consider the crash testing of cars. The configuration describes the car and possibly how the crash test dummy is positioned in it. There may be many different configurations with varying safety equipment, and we may, for example, parametrize the size of the airbag in order to find the size that minimizes head injuries. The experiment describes which configuration is used and how the crash test is executed (for example, the crash speed). It also describes the variables that are measured during the crash. A run is one crash test. Each run produces time series of all the variables that were measured, perhaps a high-speed video of the crash, and the final state of the car and the dummy after the crash.

Operations

We describe here the basic operations involving models and experiments. They are not necessarily the same operations that are presented to the user in the UI, but building blocks of smaller granularity. In particular, we consider starting an experiment an explicit operation, while in the UI this may happen automatically. If the analysis is fast enough, even simulation results can be updated automatically when the user modifies the configuration.

Running an experiment creates a new run by starting the corresponding runtime entities. This involves:

Start the actual analysis algorithm (if a remote server is used, this may include waiting for computational resources to become available)

Initialize the algorithm state. This can be done in many ways:

Write the configuration in a form understood by the algorithm (for example Modelica code)

Load a previously stored IC and synchronize the algorithm state with the current configuration

Initialize the algorithm in a "blank" state and synchronize it with the current configuration

Run the analysis

This phase may be interactive, so that the state of the algorithm can be monitored and mutated

It may be possible to run the synchronization operation during the analysis

Make the results of the analysis available

If the analysis is fast, all these phases happen almost immediately after the experiment is started.
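The phases above can be sketched as a driver routine. This is a hypothetical skeleton under assumed names (`Algorithm`, `run_experiment`), not the actual Simantics implementation; the toy backend just copies its state so the control flow stays visible:

```python
class Algorithm:
    """Toy stand-in for an analysis backend (e.g. a remote solver)."""
    def __init__(self):
        self.state = {}
        self.results = None

    def load_ic(self, ic):
        # Fast initialization from a stored snapshot of internal state.
        self.state = dict(ic)

    def synchronize(self, configuration):
        # Make the algorithm state compatible with the configuration.
        self.state.update(configuration)

    def run(self):
        # The "analysis": here just a copy of the state as results.
        self.results = dict(self.state)

def run_experiment(configuration, ic=None):
    # 1. Start the analysis algorithm (with a remote server this may
    #    include waiting for computational resources).
    algorithm = Algorithm()

    # 2. Initialize the algorithm state: from a stored IC if one is
    #    given, otherwise from a blank state; in both cases synchronize
    #    with the current configuration afterwards.
    if ic is not None:
        algorithm.load_ic(ic)
    algorithm.synchronize(configuration)

    # 3. Run the analysis. An interactive run would also accept
    #    monitoring and mutation requests during this phase.
    algorithm.run()

    # 4. Make the results of the analysis available.
    return algorithm.results
```

Note how initializing from an IC preserves internal algorithm state that the configuration never mentions, while the synchronization step still wins for the properties the configuration does describe.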

Synchronization is the operation of making the current state of an analysis algorithm compatible with a given configuration (and its parameter values, if the configuration is parametrized).
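A minimal way to picture synchronization is as a diff between the configuration and the algorithm's current state. The function below is an illustrative assumption, not the real operation; a real implementation would translate each change into an algorithm-specific update command rather than mutate a dictionary:

```python
def synchronize(algorithm_state, configuration):
    """Compute the updates needed to make `algorithm_state` compatible
    with `configuration`, then apply them. Properties the configuration
    does not mention are left untouched: they may be internal algorithm
    state that the configuration does not describe."""
    updates = {key: value
               for key, value in configuration.items()
               if algorithm_state.get(key) != value}
    algorithm_state.update(updates)
    return updates
```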

Save/load IC

Archive simulation results

Questions

The line between configuration and experiment is not well defined (for example, is the crash speed in the analogy part of the configuration or of the experiment?). Experiments and configurations are probably often tied together. Also, experiments (such as simulation sequences) are parametrizable. Would it be possible to consider experiments as part of the configuration?