MicroMegas (also known as 'mM') is an open source program for DD (Dislocation Dynamics) simulations originally developed at the 'Laboratoire d'Etude des Microstructures', [http://zig.onera.fr/mm_home_page/index.html CNRS-ONERA], France. mM is free software under the terms of the GNU General Public License as published by the [http://www.gnu.org/philosophy/free-sw.html Free Software Foundation]. Discrete dislocation dynamics (DDD) is a numerical tool used to model the plastic behavior of crystalline materials using [http://en.wikipedia.org/wiki/Dislocation the elastic theory of dislocations]. DDD is the computational counterpart to in situ TEM tests. MicroMegas is a legacy simulation code used to study the plasticity of mono-crystalline metals, based on elasticity theory, which models dislocation interactions in an elastic continuum. In crystalline materials, plastic deformation may be explained by (i) twinning, (ii) martensitic transformation and/or (iii) dislocation interactions (see Figure 1).


MicroMegas is used at CAVS for modeling dislocation interactions and reactions in an elastic continuum. The code is used in a hierarchical multiscale framework of the plasticity to obtain information related to the hardening of the material (see for example, the multiscale framework presented in [[Media:Multiscale_Al.pdf | this]] review paper‎). Details of the discrete dislocations model can be found in [[Media:Multiscale_Modeling.pdf‎ | the methodology]]‎ paper and in the references at the bottom of the page.

The discrete dislocation simulation code can be used for HCP, BCC and FCC materials.

== Available versions of microMegas ==

This section includes links to versions of the discrete dislocation dynamics codes. microMegas is commonly used at CAVS to simulate the behavior of dislocations for metals at the microscale.

microMegas can be freely downloaded from the original development site at the [http://zig.onera.fr/mm_home_page/doc/Releases.html French Aerospace Lab]. It can also be downloaded from the CAVS Cyberinfrastructure [[Repository_of_codes| Repository of Codes]] in two versions:

* [[CodeRepository:mM|'''mM ver. 1.0''']] – serial version of mM [original ver. 3.2].
* [https://icme.hpc.msstate.edu/viewvc/CMD%20Codes%20Repository/mMpar/trunk/ '''mMpar ver. 1.0'''] – parallel version of mM [original ver. 3.2], where the force calculation for each segment is performed in parallel using OpenMP threads.

=== Download ===

microMegas is not available as a system-wide installation on the HPC2 systems. To use microMegas, download one of the versions listed above to your local computer or workstation. After downloading, extract the tarball by typing:

<pre>
tar xzf [name_of_the_tarball].tar.gz

or

tar xzf [name_of_the_tarball].tar
</pre>

Go to the directory created by the extraction and follow the instructions in the '''readme''' files provided there to set up microMegas on your system.

=== Setup ===

On this page we describe how to install, configure and run DDD simulations using mM ver. 1.0 (with Intel Compiler optimizations) and mMpar ver. 1.0 (with Intel Compiler optimizations and OpenMP threads). Installation instructions for mM ver. 1.0 and mMpar ver. 1.0 can also be found in the ‘readme’ files provided in each directory and subdirectory of the code.

mM can be run in ''batch mode'', producing data that can be analyzed with conventional graphical display programs (examples of Gnuplot scripts are provided), or in ''interactive mode'', to simply visualize dislocation activity. Herein, we describe how to run the code in batch mode. For instructions on how to run mM in interactive mode, please refer to the ‘readme’ files provided with the code.

Before compiling microMegas on any of the HPC-CAVS computing systems, one needs to select the proper compiler and MPI paths on the system using:

* '''swsetup intel''' - to use the latest Intel Fortran Compiler installed on the system, and

* '''swsetup openmpi-intel-64''' - to use the latest version of the OpenMPI libraries, compiled with the Intel compiler for 64-bit systems.

The general workflow for running discrete dislocation dynamics simulations using microMegas is illustrated in the figure below:

The input files are located in the '''mM/in''' directory. The following files are needed to run the mM simulation.

* [[microMegas Input files:input.dd | '''input.dd''']]

This file defines the three input files needed to run a microMegas simulation. These input files must be declared in the "in" directory; they are read at the beginning of the simulation and follow a simple classification.

* [[microMegas Input files:inputconfinit | '''inputconfinit''']]

In this file one enters the necessary parameters for generating the initial configuration.

* [[microMegas Input files:INPUTCONFIG.CFC | '''INPUTCONFIG.CFC''']]

Initial configuration file.

* [[microMegas Input files:ContCu | '''ContCu''']]

This is the input file with control parameters used for the simulation. For instance, one can select the type of simulation (initial or restart from a previous simulation) via the SIDEJA parameter. One can also select whether cross-slip displacement of dislocations is desired by setting the GLDEV parameter accordingly (‘T’ for enabled, ‘F’ for disabled). The total number of simulation steps is set via the NSTEP parameter. Each simulation time step corresponds to 10^-9 seconds of real time; therefore, for a very small simulation use NSTEP=500, while for a long-running simulation set NSTEP to 10^6 or above. Finally, one can select how often the simulation should save its current state via the KISAUVE, KISTAT, KKIM and KPREDRAW parameters. For more details, check the file that comes with the code.
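For illustration, an excerpt of the parameter listing in this file, reconstructed from an earlier revision of this page (the values shown are examples, not recommendations):

<pre>
0      SIDEJA          Simulation state key: 0 = new or restart with control modification; 1 = simple restart
100    TauINT_LIMITE   Critical stress at which the segments are considered as singular (MPa)
400    KISAUVE         Writing periodicity of the segment configuration and information needed to restart a computation
400    KSTATS          Writing periodicity of simulation results
400    KKIM            Writing periodicity of the trajectory film information
100    KPREDRAW        Periodicity of the graphical interface refresh
0      shift_rotation  Key for the translation and rotation of the simulation box boundary conditions
</pre>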

* [[microMegas Input files:Cu | '''Cu''']]

This is the file containing the material variables. See the existing file for more details.

* [[microMegas Input files:SegCu | '''SegCu''']]

This is the file containing the initial dislocation segment configuration (e.g., the active slip systems, the number of segments, the dimensions of the simulation reference volume box, etc.). See the bottom of the existing file for more details.

* [[microMegas Input files:Segments | '''Segments''']]

This is the file describing the initial number, type and characteristics of the dislocation segments. See the existing file for more details.

microMegas is written in a mix of Fortran 90 and Fortran 95 and consists of 16 source modules, including:

* '''10dynam.F90''' - module where the velocity of each moving segment is calculated

** uses 01constantes, 04varglob, 06debug and 08connec modules

* '''11topolo.f90''' – module containing the procedures used to generate the boundary conditions, to discretize the dislocation lines into segments and to locate the segments before they are eliminated

** uses 04varglob, 06debug, 08connec and microstructure modules

* '''12contact.f90''' – module handling simple displacements, where the interactions between segments are updated in four steps.

To compile microMegas, you need to build a makefile dedicated to the machine on which you want to run the simulation, in the ‘mM/bin’ or ‘mMpar/bin’ directory. Makefiles already exist for many different platforms, so you should be able to create your own without too much effort.

The "'''config'''" file is the part of the "'''makefiles'''" that is the same on all machines.

To create a "'''makefile'''" for a new machine, you must add the corresponding "'''.PHONY'''" definition at the end of config.

Then, you need to build your own "Make_DEFS" file. The latter must contain all the header definitions useful for your new machine. See the following examples:

Make_DEFS.amd -> An AMD Linux platform with gcc and the Intel FORTRAN compilers
Make_DEFS.dec -> A DEC Alpha machine with the native C and FORTRAN compilers
Make_DEFS.g5 -> An Apple G5 machine with gcc and the IBM FORTRAN compilers
Make_DEFS.mac -> An Apple G4 or G3 machine with gcc and the ABSOFT FORTRAN compilers
Make_DEFS.mad -> An AMD cluster
Make_DEFS.madmax -> A cluster of Xeon machines with gcc and the Intel FORTRAN compilers
Make_DEFS.pc -> A simple PC workstation
Make_DEFS.sgi -> An SGI Itanium machine with gcc and the Intel(64) FORTRAN compiler
etc.
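As a rough illustration only, a Make_DEFS file for a new Linux machine with the Intel Fortran compiler might define the compilers and flags along the following lines. The exact variable names expected by the makefiles should be copied from one of the existing Make_DEFS examples above; everything below is an assumption, not the actual file contents:

<pre>
# Make_DEFS.mymachine -- hypothetical example; copy the real variable set
# from an existing Make_DEFS file in mM/bin
CC      = gcc
FC      = ifort
FFLAGS  = -O3
LDFLAGS =
</pre>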

Once you have made your "'''Make_DEFS.machine_type'''", type:

<pre>
make -f config machine_type
</pre>

For instance, for my machine I simply type "'''make -f config mac'''".

At this stage, you should have a "'''makefile'''" file created in the bin directory.

==== Execute the version of microMegas of your choice ====

Depending on the version of microMegas you want to execute, compile it by typing:

* '''make''' or '''make all''' - to compile all the binaries (this does not include the MPI binary)

* '''make mm''' - to compile only the batch version of the simulation

* '''make gmm''' - to compile only the simulation with its graphical interface (interactive mode)

* '''make mm_omp''' - to compile only the batch version for OpenMP parallel threads

* '''make mmp''' - to compile only the batch version for MPI clusters

* '''make cam''' - to compile only the graphical interface (needed to view the simulation film)

* '''make base''' - to compile only the code needed to generate the simulation vectors base

All the input data are defined in the "mM/in" directory. See the README file in this directory for more information.

All the output data are written in the "mM/out" directory. See the README file in this directory for more information.

== Running microMegas ==

A typical Micromegas simulation requires somewhere between 10^6 and 10^9 time steps to gain insight into the plastic deformation range. Simulations with a smaller number of steps will very likely not capture the plastic range of deformation – the region of interest for materials scientists studying plastic deformation. A run of 10,000 steps using the serial version of Micromegas takes 68 hours on average and reaches 0.2% plastic deformation on a Nehalem quad-core Xeon W3570 processor with 6GB of triple-channel 133MHz DDR-3 RAM. Simulations of about 10^9 time steps are needed to reach the desired amount of deformation, that is, a strain as high above 1% as possible.
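To put these step counts in perspective, the quoted figure of 68 hours per 10,000 serial steps can be extrapolated linearly (a rough lower bound, since the cost per step typically grows as the dislocation density increases):

```python
# Rough serial-runtime extrapolation from the 68 hours / 10,000 steps figure above.
hours_per_step = 68 / 10_000

for steps in (10_000, 10**6, 10**9):
    hours = steps * hours_per_step
    print(f"{steps:>13,} steps ~ {hours:,.0f} hours (~{hours / 24 / 365:,.1f} years)")
```

Totals of this magnitude are the motivation for the parallel OpenMP and MPI versions described below.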

To give an idea of the type of simulations that can be conducted with microMegas, we list here the parameters of a representative simulation as selected in the input files, along with the compilation and execution commands. The simulation parameters of a representative microMegas run are:

* 0.5% plastic deformation
* 10x10x10 µm^3 simulation box dimensions
* 10^12 1/m^2 initial dislocation density
* 10 1/s strain rate in multi-slip conditions

''Note'': Multi-slip calculations were performed to evaluate and demonstrate the efficiency of the parallel version of microMegas.

* For '''compression simulations''': loading along the [100] direction
* strain rate of 20 1/s
* temperature of 300 K under periodic boundary conditions
* time step of 10^-9 seconds

''Note'': Screw dislocations were not allowed to cross-slip at any time.
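Under a constant imposed strain rate, these choices fix the number of time steps needed to reach a given deformation: the accumulated strain after N steps is roughly strain_rate × N × dt. A quick sanity check with the values above:

```python
# Steps needed to reach the 0.5% deformation target at the quoted strain rate and time step.
strain_rate = 20.0       # 1/s, from the compression setup above
dt = 1e-9                # s, simulation time step
target_strain = 0.005    # 0.5% deformation

steps_needed = target_strain / (strain_rate * dt)
print(f"steps to reach 0.5% strain: {steps_needed:,.0f}")
```

That is, about 2.5×10^5 steps, consistent with the 10^6 to 10^9 step counts quoted earlier for reaching larger strains.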

==== Serial microMegas (mm) ====

To run '''serial microMegas''' for production simulations from the command line, add the corresponding software modules (compilers, libraries, etc.) to load in your ‘'''.bashrc'''’ file (in your home directory, i.e., /home/<your_username>/). To load the compiler of your choice (here, Intel Fortran), type:

<pre>
swsetup intel
</pre>

Then, in the ‘'''mM/bin'''’ directory, compile only the batch version of the simulation by typing:

<pre>
make -f serial-Makefile clean
make -f serial-Makefile mm
</pre>

Launch the serial version of the simulation from the same directory by typing:

<pre>
mkdir ../production_runs
</pre>

to run the simulation in batch mode, or to record the running time and save the output in a separate directory and files, type:

To run '''parallel microMegas''' for production simulations, add the corresponding software modules (compilers, libraries, visualizers, etc.) to load in your ‘'''.bashrc'''’ file. To load the compiler of your choice (here, Intel Fortran), type:

<pre>
swsetup intel
</pre>

Then, in the ‘'''mMpar/bin'''’ directory, compile only the batch version of the simulation by typing:

<pre>
make -f openMP-Makefile clean
make -f openMP-Makefile mm_omp
</pre>

Before running mm_omp, one needs to configure the target system for executing OpenMP programs. This is done by ensuring that the environment variables used by the chosen compiler and its OpenMP extension are properly set. For a quad-core Linux system running SuSE SLES 10 and the Intel Compiler version 11.1, the following values are recommended:

<pre>
export OMP_NUM_THREADS=4    # adjust to match the number of cores in the compute node of your choice
                            # (e.g., on talon nodes, this can be set to 12)
export KMP_AFFINITY=verbose,respect,granularity=core,scatter
export KMP_LIBRARY=turnaround
export KMP_SETTINGS=1
export KMP_STACKSIZE=512m
export KMP_VERSION=.TRUE.
</pre>

For more details on the values and meaning of these environment variables, please consult the Intel Compiler manual and its OpenMP documentation. Note that these environment variables are specific to the Intel Compiler and its OpenMP implementation; they may differ depending on the compiler of your choice and the specifics of its own OpenMP extension.

Launch the parallel OpenMP version of the simulation from the same directory by typing:

<pre>
mkdir ../production_runs
</pre>

to run the simulation in batch mode, or to record the running time and save the output in a separate directory and files, type:

For the parallel microMegas simulations, also load the MPI libraries, e.g. OpenMPI ver. 1.4.2, by typing:

<pre>
swsetup openmpi-intel-64
</pre>

Note: To avoid compilation or execution errors, please make sure that any additional libraries you select, such as MPI, were compiled with the same compiler you are using. For instance, if you compile the code using Intel compilers, select the MPI library that was also compiled with Intel compilers. Not doing so may cause unpredictable errors during the simulation.

Then, in the ‘'''mM/bin'''’ directory, compile only the batch version of the simulation by typing:

<pre>
make -f openMPI-Makefile clean
make -f openMPI-Makefile mmp
</pre>

Launch the parallel MPI version of the simulation from the same directory by typing:

<pre>
mkdir ../production_runs
</pre>

to run the simulation in batch mode, or to record the running time and save the output in a separate directory and files, type:

The serial code (mm), the OpenMP-based code (mm_omp) and the OpenMPI-based code (mmp) can be launched either locally (as described in the subsection above - Simple batch execution) or remotely. For remote execution on high-performance compute clusters, a PBS (Portable Batch System) script is needed to submit the execution as a job.

Below are three sample PBS scripts one could use to run any of the three versions of microMegas on the talon.hpc.msstate.edu high-performance cluster at HPC2. Each of these scripts can be copied into a file, e.g., mm.pbs.talon, mm_omp.pbs.talon or mmp.pbs.talon. To submit a PBS script to the job queue on talon, first log on to the talon-login node by typing:

<pre>
rlogin talon-login
</pre>

from any HPC2 machine, and then type:

<pre>
qsub mm.pbs.talon or qsub mm_omp.pbs.talon or qsub mmp.pbs.talon
</pre>

''Note'': microMegas is a long running code. To run long simulations please contact the HPC2 administrators to request access to the ''''special'''' queue.
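The three sample scripts themselves are not reproduced on this page. As a placeholder, a minimal sketch of what the OpenMP script (mm_omp.pbs.talon) might look like is given below; the job name, core count, walltime, queue and binary path are all assumptions to be adapted to your account and build:

<pre>
#!/bin/bash
#PBS -N mm_omp                 # job name (assumption)
#PBS -l nodes=1:ppn=12         # one talon node, 12 cores (see the OpenMP settings above)
#PBS -l walltime=96:00:00      # adjust to the expected run length
#PBS -q special                # long-running queue mentioned above (access on request)

cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
export OMP_NUM_THREADS=12      # match ppn above
./mm_omp                       # hypothetical path to the compiled binary
</pre>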

The output files are located in the '''mM/out''' directory. The most important output files are briefly described below. For more details on the content and meaning of each file, please refer to the actual content of these files.

* [[microMegas Output files:BVD.CFC | '''BVD.CFC''']] - the set of reference vectors used in the simulation for a given crystal

* [[microMegas Output files:stat | '''stat''']] - a file where most of the global statistics of the simulation are written

* [[microMegas Output files:travapp | '''travapp''']] - a file containing the evolution of the applied mechanical work (presently do not trust those computations)

* [[microMegas Output files:travint | '''travint''']] - a file containing the evolution of the internal mechanical work (presently do not trust those computations)

== More about microMegas ==

This section also includes links to a ''''Dislocations Generator'''' code, developed by [mailto:Sebastien.Groh@imfd.tu-freiberg.de Sebastien Groh]. To download the code from the Codes Repository at CAVS, click [https://icme.hpc.msstate.edu/viewvc/CMD%20Codes%20Repository/DisloStructures/tags/1.0/ '''here'''] and then click "Download GNU tarball". Information on how to compile and run the code is provided in the ''''readme'''' file.

=== Getting Started Tutorial ===

For a beginner's step-by-step tutorial on how to use and run '''microMegas''' please visit the [[Code:_microMegas | microMegas]] page.

For a more detailed step-by-step tutorial, please download the PDF version [[Media:MicroMegas_manual.pdf |'''TUTORIAL''']].

The '''microMegas input decks''' and a step-by-step Tutorial on how to use them to run discrete dislocation dynamics simulations can be downloaded ('Download GNU tarball') [https://icme.hpc.msstate.edu/viewvc/CMD%20Codes%20Repository/inputDecks/microMegas_decks '''here'''], or can be viewed online by clicking on the name of each of the files on the [[Code:_microMegas| microMegas]] page.

== References ==

Please remember to cite the relevant articles from the list below when publishing results obtained with microMegas:

MicroMegas (also known as 'mM') is an open source program for DD (Dislocation Dynamics) simulations originally developed at the 'Laboratoire d'Etude des Microstructures', CNRS-ONERA, France. mM is a free software under the terms of the GNU General Public License as published by the Free Software Foundation. Discrete dislocation dynamics (DDD) is a numerical tool used to model the plastic behavior of crystalline materials using the elastic theory of dislocations. DDD is the computational counterpart to in site TEM tests. MicroMegas is a legacy simulation code used to study the plasticity of mono-crystalline metals, based on the elasticity theory that models the dislocation interactions into an elastic continuum. In crystalline materials, plastic deformation may be explained by (i) twinning, (ii) martensic transformation or/and (iii) dislocation interactions.

MicroMegas is used at CAVS for modeling dislocation interactions and reactions in an elastic continuum. The code is used in a hierarchical multiscale framework of the plasticity to obtain information related to the hardening of the material (see for example, the multiscale framework presented in this review paper‎). Details of the discrete dislocations model can be found in the methodology‎ paper and in the references at the bottom of the page.

The discrete dislocation simulation code can be used for HCP, BCC and FCC materials.

microMegas is not available in a system-wide implementation on the HPC2 systems. To use microMegas, please choose one of the available versions above to download on your local computer or workstation. After downloading, untar the tarball by typing:

In this page we describe how to install, configure and run DDD simulations using mM ver. 1.0 (with Intel Compiler optimizations) and mMpar ver. 1.0 (with Intel Compiler optimizations and OpenMP threads). Installation instructions for mM ver. 1.0 and mMpar ver. 1.0 can also be found in the ‘readme’ files provided in each directory and subdirectory of the code.
mM can be run in batch mode to get data analyzed with conventional graphical display programs (exemples of Gnuplot scripts are provided) or it can be used in interactive mode to simply visualize dislocations activity. Herein, we describe how to run the code in batch mode. For instructions on how to run mM in interactive mode, please refer to the ‘readme’ files provided with the code.

Before compiling microMegas on any of the HPC-CAVS computing systesm, one needs to route to the proper compiler and MPI path on the system using:

swsetup intel - to use the latest Intel Fortran Compiler installed on the system, and

swsetup openmpi-intel-64 - to use the latest version of the OpenMPI libaries, compiled with the Intel compiler for 64-bit systems.

The general workflow for running discrete dislocation dynamics simulations using microMegas is illustrated in the figure below:

In this file, the three files needed to run a microMegas simulation are defined. These input files must be declared in this directory "in". They are used at the beginning of the simulation and correspond to a simple classification.

This is the input file with control parameters used for the simulation. For instance, one can select the type of simulation (initial or restart from previous simulation) via the parameter SIDEJA. One can also select whether cross-slip displacement of dislocations is desired by setting the GLDEV parameter accordingly (‘T’ for enabled and ‘F’ for disabled). Also, one can set the total number of simulation steps via the NSTEP parameter. Each simulation time step corresponds to 10-9 real time seconds. Therefore, for a very small simulation use NSTEP=500 while for a long running simulation set NSTEP to anything from 106 and above. Finally, one can also select how often should the simulation save the current state of the code, via the KISAUVE, KISTAT, KKIM and KPREDRAW parameters. For more details, check the file that comes with code.

This is the file containing the initial dislocation segments configuration (e.g., the active slip systems, the number of segments, the dimensions of the simulation reference volume box, etc). See the bottom of the existing file for more details.

To compile microMegas, you need to buildup a makefile dedicated to the machine you want to run the simulation, in the ‘mM/bin’ or ‘mMpar/bin’ directory. Solutions already exist for many different platforms; you should be able to do your one without too much effort.

The "config" file is the part of "makefiles" which is the same on all the machines

To create a new machine "makefile" you must add at the end of config the corresponding ".PHONY" definition.

Then, you need to buildup your one "Make_DEFS" file. The latter must contains all the headers useful for your new machine. See the following examples.

Make_DEFS.amd -> An AMD Linux platform with gcc and the Intel FORTRAN compilers
Make_DEFS.dec -> A DEC Alpha machine with the native C and FORTRAN compilers
Make_DEFS.g5 -> An Apple G5 machine with gcc and the IBM FORTRAN compilers
Make_DEFS.mac -> An Apple G4 or G3 Machine with gcc and the ABSOFT FORTRAN compilers
Make_DEFS.mad -> An AMD Cluster
Make_DEFS.madmax -> A cluster of Xeon machines with gcc and the Intel FORTRAN cimpilers
Make_DEFS.pc -> A simple PC workstation
Make_DEFS.sgi -> An SGI Itanium machine with gcc and the Intel(64) FORTRAN Compiler
etc....

Once you have made your "Make_DEFS.machine_type", type:

make -f config machine_type

For instance for my machine I simply type "make -f config mac")

At that stage you should have a "makefile" file created in the bin directory

A typical simulation run in Micromegas requires somewhere between 10^6 to 10^9 time steps to gain more insight about the plastic deformation range. Simulations with a smaller number of steps will very likely not capture the plastic range of deformation – the region of interest for the materials scientists studying plastic deformation. A simulation run over 10,000 steps using serial version of Micromegas requires 68 hours on average and reaches 0.2% of the plastic deformation on a Nehalem quad-core Xeon W3570 processor, with 6GB of triple channel 133MHz DDR-3 RAM. Simulations of about 10^9 time steps are needed to reach the desired percentage of deformation, that is, the strain rate as high over 1% as possible.

To get an idea of the type of simulations that can be conducted with microMegas, we give here the parameters of a representative simulation selected in the input files, the compilation and execution commands. The simulation parameters of a representative microMegas simulation are:

0.5% plastic deformation

10x10x10 µm^3 simulation box dimensions

1012 1/m^2 initial density

10 1/s strain rate in multi-slip conditions

Note: Multi-slip calculations were performed to evaluate and demonstrate the efficiency of the parallel version of microMegas.

To run serial microMegas for production simulations from the command line add the corresponding software modules (compilers, libraries, etc.) to load in your ‘.bashrc’ file (in your home directory, i.e. /home/<your_username>/). To load the compiler of your choice, e.g., my choice is Intel Fortran, type:

swsetup intel

Then, in the ‘mM/bin’ directory, to compile only the batch version of the simulation, type:

make –f serial-Makefile clean
make –f serial-Makefile mm

Launch the serial version of the simulation from the same directory by first typing:

mkdir ../production_runs

to create a directory for the output, then run the simulation in batch mode, optionally recording the running time and saving the output in separate files in that directory.
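The exact launch commands are not preserved on this page; the following is a minimal sketch, assuming the serial binary built above is named 'mm' and is launched from 'mM/bin' (the log file name is also an assumption):

```shell
mkdir -p ../production_runs                  # harmless if the directory already exists
# run the long simulation in the background; nohup keeps it alive after logout
nohup ./mm > ../production_runs/mm_run1.log 2>&1 &
echo "serial run started with PID $!"
```

The 'time' built-in can be wrapped around the command instead if you want the total running time recorded.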

To run parallel microMegas for production simulations, add the corresponding software modules (compilers, libraries, visualisers, etc.) to your ‘.bashrc’ file so they are loaded at login. To load the compiler of your choice, e.g., Intel Fortran, type:

swsetup intel

Then, in the ‘mMpar/bin’ directory, to compile only the batch version of the simulation, type:

make –f openMP-Makefile clean
make –f openMP-Makefile mm_omp

Before running mm_omp, one needs to configure the target system for executing OpenMP programs. This is done by ensuring that the environment variables used by the chosen compiler and its OpenMP extension are properly set. For a quad-core Linux system running SuSE SLES 10, and the Intel Compiler version 11.1, the following values are recommended.

export OMP_NUM_THREADS=4   # adjust to match the number of cores in the compute node of your choice;
                           # e.g., on talon nodes this can be set to 12
export KMP_AFFINITY=verbose,respect,granularity=core,scatter
export KMP_LIBRARY=turnaround
export KMP_SETTINGS=1
export KMP_STACKSIZE=512m
export KMP_VERSION=.TRUE.

For more details on the values and meaning of these environment variables, please consult the Intel Compiler manual and its OpenMP specification. Note that these environment variables are specific to the Intel Compiler and its OpenMP specification, and that they may differ based on the compiler of your choice and the specifics of its own OpenMP extension.
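Before launching mm_omp it is worth a quick sanity check that the intended thread count is actually exported; note that the standard variable read by OpenMP runtimes is OMP_NUM_THREADS (a misspelled variable name is silently ignored):

```shell
# OpenMP runtimes read OMP_NUM_THREADS at program start-up
export OMP_NUM_THREADS=4
echo "OpenMP will use ${OMP_NUM_THREADS} threads"   # prints: OpenMP will use 4 threads
```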

Launch the parallel OpenMP version of the simulation from the same directory by first typing:

mkdir ../production_runs

to create a directory for the output, then run the simulation in batch mode, optionally recording the running time and saving the output in separate files in that directory.
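As with the serial version, the launch command itself is not shown on this page; a sketch, assuming the OpenMP binary built above is named 'mm_omp' (the log file name is an assumption):

```shell
mkdir -p ../production_runs                  # harmless if the directory already exists
export OMP_NUM_THREADS=4                     # match the core count of the node
# run in the background so the long simulation survives logout
nohup ./mm_omp > ../production_runs/mm_omp_run1.log 2>&1 &
echo "OpenMP run started with PID $!"
```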

For the MPI-parallel microMegas simulations, also load the MPI libraries, e.g. OpenMPI ver. 1.4.2, by typing:

swsetup openmpi-intel-64

Note: To avoid compilation or execution errors, please make sure that any additional libraries you select, such as MPI, were built with the same compiler you are using. For instance, if you compile the code using Intel compilers, please select the MPI library that was also built with Intel compilers. Not doing so may cause unpredictable errors during the simulation.

Then in the ‘mM/bin’ directory, to compile only the batch version of the simulation, type:

make –f openMPI-Makefile clean
make –f openMPI-Makefile mmp

Launch the parallel MPI version of the simulation from the same directory by first typing:

mkdir ../production_runs

to create a directory for the output, then run the simulation in batch mode, optionally recording the running time and saving the output in separate files in that directory.
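A sketch of the MPI launch, assuming the binary is named 'mmp' as in the make target above; the process count and log file name are assumptions to adapt to your node:

```shell
mkdir -p ../production_runs                  # harmless if the directory already exists
# -np sets the number of MPI processes; adjust to the cores available
nohup mpirun -np 8 ./mmp > ../production_runs/mmp_run1.log 2>&1 &
echo "MPI run started with PID $!"
```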

The serial code (mm), the OpenMP-based code (mm_omp) and the OpenMPI-based code (mmp) can be launched either locally (as described in the Subsection above - Simple batch execution) or remotely. For remote execution on high performance compute clusters, a PBS (Portable Batch System) script is needed to submit the execution as a job.

Three sample PBS scripts can be used to run microMegas in any of its three versions on the talon.hpc.msstate.edu high-performance cluster at HPC2. Each script can be cut and pasted into a file, e.g., mm.pbs.talon, mm_omp.pbs.talon or mmp.pbs.talon. To submit a PBS script to the jobs queue on talon, first log on to the talon-login node by typing:

rlogin talon-login

from any HPC2 machine, and then type:

qsub mm.pbs.talon or qsub mm_omp.pbs.talon or qsub mmp.pbs.talon

Note: microMegas is a long running code. To run long simulations please contact the HPC2 administrators to request access to the 'special' queue.
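The sample scripts themselves are not reproduced here; the following sketch shows what an OpenMP job script such as mm_omp.pbs.talon might contain. The queue name, walltime, core count and paths are assumptions to adapt to your account:

```shell
#!/bin/bash
#PBS -N mm_omp                  # job name
#PBS -l nodes=1:ppn=12          # one talon node, 12 cores per node
#PBS -l walltime=96:00:00       # microMegas runs are long; request a generous limit
#PBS -q special                 # long-run queue (requires administrator approval)
#PBS -j oe                      # merge stdout and stderr into one file

cd $PBS_O_WORKDIR               # the directory qsub was invoked from
export OMP_NUM_THREADS=12       # match ppn above
./mm_omp > ../production_runs/mm_omp_talon.log 2>&1
```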

The output files are located in the mM/out directory. The most important output files are briefly described below. For more details on the content and meaning of each file, please refer to the files themselves.

BVD.CFC - the set of reference vectors used in the simulation for a given crystal

bigsave.bin - a binary file containing everything needed to re-start a simulation if it is accidentally stopped

film.bin - a binary file where the coordinates of segments are periodically saved to build up a trajectory file

gamma - a file containing the evolution of gamma for all existing slip systems

gammap - a file containing the evolution of the instantaneous gamma dot for all the slip systems

rau - a file containing the evolution of rho, the dislocation density, for all the slip systems

raujonc - a file containing the evolution of the junction density and number for all slip systems

resul - a gnuplot script for plotting various simulation data (run 'gnuplot resul' to see the results)

This section also includes links to a 'Dislocations Generator' code, developed by Sebastien Groh. To download the code from the Codes Repository at CAVS, click here and then click "Download GNU tarball". Information on how to compile and run the code is provided in the 'readme' file.

microMegas is not available as a system-wide installation on the HPC2 systems. To use microMegas, please download one of the available versions above to your local computer or workstation. After downloading, untar the tarball by typing:

tar -xvzf <tarball_name>.tar.gz

For a beginner's step-by-step tutorial on how to use and run microMegas please visit the microMegas page.

For a more detailed step-by-step tutorial, please download the PDF version TUTORIAL.

The microMegas input decks and a step-by-step Tutorial on how to use them to run discrete dislocation dynamics simulations can be downloaded ('Download GNU tarball') here, or can be viewed online by clicking on the name of each of the files on the microMegas page.