
Overview of microMegas

MicroMegas (also known as 'mM') is an open source program for DD (Dislocation Dynamics) simulations originally developed at the 'Laboratoire d'Etude des Microstructures', CNRS-ONERA, France. mM is free software under the terms of the GNU General Public License as published by the Free Software Foundation. Discrete dislocation dynamics (DDD) is a numerical tool used to model the plastic behavior of crystalline materials using the elastic theory of dislocations [1]; it is the computational counterpart to in situ TEM tests. MicroMegas is a legacy simulation code used to study the plasticity of mono-crystalline metals, modeling dislocation interactions within an elastic continuum. In crystalline materials, plastic deformation may be explained by (i) twinning, (ii) martensitic transformation and/or (iii) dislocation interactions (see Figure 1).

MicroMegas is used at CAVS for modeling dislocation interactions and reactions in an elastic continuum. The code is used in a hierarchical multiscale framework of plasticity to obtain information related to the hardening of the material (see, for example, the multiscale framework presented in this review paper). Details of the discrete dislocation model can be found in the methodology paper and in the references at the bottom of the page.

The discrete dislocation simulation code can be used for HCP, BCC and FCC materials.

mMpar ver. 1.0 – parallel version of mM [original ver. 3.2], where the force calculation for each segment is performed in parallel using OpenMP threads.

On this page we describe how to install, configure and run DDD simulations using mM ver. 1.0 (with Intel Compiler optimizations) and mMpar ver. 1.0 (with Intel Compiler optimizations and OpenMP threads). Installation instructions for mM ver. 1.0 and mMpar ver. 1.0 can also be found in the ‘readme’ files provided in each directory and subdirectory of the code.
mM can be run in batch mode to get data analyzed with conventional graphical display programs (examples of Gnuplot scripts are provided) or it can be used in interactive mode to simply visualize dislocation activity. Herein, we describe how to run the code in batch mode. For instructions on how to run mM in interactive mode, please refer to the ‘readme’ files provided with the code.

Before compiling microMegas on any of the HPC-CAVS computing systems, one needs to set the proper compiler and MPI paths on the system using:

swsetup intel - to use the latest Intel Fortran Compiler installed on the system, and

swsetup openmpi-intel-64 - to use the latest version of the OpenMPI libraries, compiled with the Intel compiler for 64-bit systems.

Input files

The input files are located in the mM/in directory. The following files are needed to run the mM simulation.

ContCu

The input file with parameters used for the simulation. For instance, one can select the type of simulation (initial or restart from a previous simulation) via the parameter SIDEJA. One can also select whether cross-slip displacement of dislocations is desired by setting the GLDEV parameter accordingly (‘T’ for enabled and ‘F’ for disabled). Also, one can set the total number of simulation steps via the NSTEP parameter. Each simulation time step corresponds to 10^-9 seconds of real time. Therefore, for a very small simulation use NSTEP=500, while for a long-running simulation set NSTEP to 10^6 or above. Finally, one can select how often the simulation should save its current state via the KISAUVE, KISTAT, KKIM and KPREDRAW parameters. For more details, check the file that comes with the code. An example of the content of this file is given below.
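
As an illustration only, a sketch of ContCu-style entries using the parameters named in this section is given below. The actual file layout, values and comment syntax may differ; check the ContCu file shipped with the code for the authoritative format.

SIDEJA   = 0       type of simulation: new run (set for restart as documented in the file)
GLDEV    = F       cross-slip disabled (‘T’ to enable)
NSTEP    = 500     short test run; use 10^6 or more for production
KISAUVE  = 1000    steps between saves of the restart state
KISTAT   = 100     steps between statistics outputs
KKIM     = 100     steps between trajectory (film) snapshots
KPREDRAW = 10      steps between graphical redraws (interactive mode)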

This is the file containing the initial dislocation configuration (e.g., the active slip systems, the number of segments, the dimensions of the simulation reference volume box, etc). See the bottom of the existing file for more details.

base.f90 – module that reads all the data of the main program, in three groups of files:

materiaux – given material physical properties

control – given simulation parameters

seg3D – regroups the characteristics of the segments given at the beginning of the simulation

uses 01constantes, 02bricamat and 04varglob modules

carto.f90 – to be written

microstructure.F90 – module containing the subroutines used to detect the obstacles, i.e., the subroutines barriere_spherique and barriere_plane; it also prints the segment structure

uses 02bricamat, 03varbase, 04varglob, 06debug and 08connec modules

Compiling microMegas

Compiling the original microMegas code

Compile the simulation

To compile microMegas, you need to build a makefile dedicated to the machine on which you want to run the simulation, in the ‘mM/bin’ or ‘mMpar/bin’ directory. Solutions already exist for many different platforms; you should be able to build your own without too much effort.

The "config" file is the part of "makefiles" which is the same on all the machines

To create a new machine "makefile", you must add the corresponding ".PHONY" definition at the end of "config".

Then, you need to build your own "Make_DEFS" file. The latter must contain all the headers useful for your new machine. See the following examples.

Make_DEFS.amd -> An AMD Linux platform with gcc and the Intel FORTRAN compilers
Make_DEFS.dec -> A DEC Alpha machine with the native C and FORTRAN compilers
Make_DEFS.g5 -> An Apple G5 machine with gcc and the IBM FORTRAN compilers
Make_DEFS.mac -> An Apple G4 or G3 Machine with gcc and the ABSOFT FORTRAN compilers
Make_DEFS.mad -> An AMD Cluster
Make_DEFS.madmax -> A cluster of Xeon machines with gcc and the Intel FORTRAN compilers
Make_DEFS.pc -> A simple PC workstation
Make_DEFS.sgi -> An SGI Itanium machine with gcc and the Intel(64) FORTRAN Compiler
etc....
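
As an illustration, a new Make_DEFS file for a Linux machine with the Intel Fortran compiler might contain entries of the following kind. The exact variable names expected by the microMegas makefiles may differ; treat these as placeholders and compare with the existing Make_DEFS.* files listed above to find the exact set of variables to define.

F90      = ifort        Fortran compiler
F90FLAGS = -O3          Fortran optimization flags
CC       = gcc          C compiler
CFLAGS   = -O2          C optimization flags
LDFLAGS  =              extra linker flags, if any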

Once you have made your "Make_DEFS.machine_type", type:

make -f config machine_type

For instance, for my machine I simply type "make -f config mac".

At that stage you should have a "makefile" file created in the bin directory.

Compile the version of microMegas of your choice

According to the version of microMegas you want to use, type:

make or make all - to compile all the binaries (this does not include the MPI binary)

make mm - to compile only the batch version of the simulation

make gmm - to compile only the simulation with its graphical interface (interactive mode)

make mm_omp - to compile only the batch version for OpenMP parallel threads

make mmp - to compile only the batch version for MPI clusters

make cam - to compile only the graphical interface (needed to see the simulation film)

make base - to compile only the code needed to generate the simulation vectors base

make confinit - to compile only the code needed to generate random initial configurations

make pavage - to compile only the code needed to generate the database needed for the simulation interfaces

make clean - to sweep out all the useless pieces of code

make cleanall - to clean up everything

Run the simulation

To run the simulation, simply type:

mm > screen & - to run the simulation in batch mode

gmm - to run the simulation in interactive mode and with the graphic interface

mm_omp - to run the OpenMP-based simulation in batch mode, assuming all the OpenMP-related environment variables are set (see the next subsection for more details).

Additional tools

cam - The camera code used to view the film of the simulation during and after calculations

confinit - The code used to build up initial configurations

base - The code used to generate, by itself, the base of vectors used in the simulation

pavage - The code used to generate the interface files "b_poly" needed to simulate periodic polycrystals

Where and who is who

All the input data are defined in the directory "mM/in". Take a look at the README file in this directory for more information.

All the output data are written in the directory "mM/out". Take a look at the README file in this directory for more information.

Running microMegas

A typical simulation run in microMegas requires somewhere between 10^6 and 10^9 time steps to gain insight into the plastic deformation range. Simulations with a smaller number of steps will very likely not capture the plastic range of deformation, the region of interest for materials scientists studying plastic deformation. A simulation run over 10,000 steps using the serial version of microMegas requires 68 hours on average and reaches 0.2% plastic deformation on a Nehalem quad-core Xeon W3570 processor with 6GB of triple-channel 133MHz DDR-3 RAM. Simulations of about 10^9 time steps are needed to reach the desired amount of deformation, that is, a strain as far above 1% as possible.

To get an idea of the type of simulations that can be conducted with microMegas, we give here the parameters of a representative simulation, as selected in the input files, together with the compilation and execution commands. The parameters of this representative simulation are:

0.5% plastic deformation

10x10x10 µm^3 simulation box dimensions

10^12 1/m^2 initial density

10 1/s strain rate in multi-slip conditions

Note: Multi-slip calculations were performed to evaluate and demonstrate the efficiency of the parallel version of microMegas.

Simple batch execution

Serial microMegas (mm)

To run serial microMegas for production simulations from the command line, add the corresponding software modules (compilers, libraries, etc.) to load in your ‘.bashrc’ file (in your home directory, i.e., /home/<your_username>/). To load the compiler of your choice (e.g., mine is Intel Fortran), type:

swsetup intel

Then in the ‘mM/bin’ directory, to compile only the batch version of the simulation, type:

make -f serial-Makefile clean
make -f serial-Makefile mm

Launch the serial version of the simulation from the same directory by typing:

mkdir ../production_runs

to run the simulation in batch mode, or to record the running time and save the output in a separate directory and files, type:
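
The original launch command is not reproduced in this copy of the page. One plausible form, reusing the ../production_runs directory created by the mkdir command above (the file names are illustrative, not prescribed by the code), is:

(time ./mm > ../production_runs/screen.out 2>&1) 2> ../production_runs/time.log &

Here 'time' records the running time, and the redirections save the simulation output and the timing report in separate files inside the production_runs directory.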

Parallel microMegas (mm_omp - OpenMP version)

To run parallel microMegas for production simulations, add the corresponding software modules (compilers, libraries, visualisers, etc.) to load in your ‘.bashrc’ file. To load the compiler of your choice (e.g., mine is Intel Fortran), type:

swsetup intel

Then in the ‘mMpar/bin’ directory, to compile only the batch version of the simulation, type:

make -f openMP-Makefile clean

make -f openMP-Makefile mm_omp

Before running mm_omp, one needs to configure the target system for executing OpenMP programs. This is done by ensuring that the environment variables used by the chosen compiler and its OpenMP extension are properly set. For a quad-core Linux system running SuSE SLES 10, and the Intel Compiler version 11.1, the following values are recommended.

export OMP_NUM_THREADS=4

This value can be adjusted to match the number of cores in the compute node of your choice. E.g., on talon nodes, this can be set to 12.

export KMP_AFFINITY=verbose,respect,granularity=core,scatter

export KMP_LIBRARY=turnaround

export KMP_SETTINGS=1

export KMP_STACKSIZE=512m

export KMP_VERSION=.TRUE.

For more details on the values and meaning of these environment variables, please consult the Intel Compiler manual and its OpenMP specification. Note that these environment variables are specific to the Intel Compiler and its OpenMP specification, and that they may differ based on the compiler of your choice and the specifics of its own OpenMP extension.

Launch the parallel OpenMP version of the simulation from the same directory by typing:

mkdir ../production_runs

to run the simulation in batch mode, or to record the running time and save the output in a separate directory and files, type:
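
The original launch command is not reproduced in this copy of the page. A plausible form, assuming the OpenMP environment variables from the previous paragraphs are set and reusing the ../production_runs directory created above (file names are illustrative), is:

./mm_omp > ../production_runs/screen_omp.out 2>&1 &

Prefixing the command with 'time', as for the serial version, records the running time.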

Parallel microMegas (mmp - MPI version)

For the parallel microMegas simulations, also load the MPI libraries, e.g. OpenMPI ver. 1.4.2, by typing:

swsetup openmpi-intel-64

Note: To avoid any compilation or execution errors, please make sure that when selecting any additional libraries, such as MPI, you choose the library version that was compiled with the same compiler you are using. For instance, if you compile the code using Intel compilers, please select the MPI library that was compiled using Intel compilers. Not doing so may cause unpredictable errors during the simulation.

Then in the ‘mM/bin’ directory, to compile only the batch version of the simulation, type:

make -f openMPI-Makefile clean

make -f openMPI-Makefile mmp

Launch the parallel MPI version of the simulation from the same directory by typing:
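
The original launch command is not reproduced in this copy of the page. One plausible form, using a typical mpirun invocation (the process count and file names are illustrative; adjust them to your cluster), is:

mpirun -np 8 ./mmp > ../production_runs/screen_mpi.out 2>&1 &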

PBS batch execution

The serial code (mm), the OpenMP-based code (mm_omp) and the OpenMPI-based code (mmp) can be launched either locally (as described in the Subsection above - Simple batch execution) or remotely. For remote execution on high performance compute clusters, a PBS (Portable Batch System) script is needed to submit the execution as a job.

Below are three sample PBS scripts one could use to run microMegas in any of the three versions on the talon.hpc.msstate.edu high-performance cluster at HPC2. Each of these scripts can be cut and pasted into a file, e.g., mm.pbs.talon or mm_omp.pbs.talon or mmp.pbs.talon. To submit a pbs script to the jobs queue on talon, first log on to the talon-login node, typing:

rlogin talon-login

from any HPC2 machine, and then type:

qsub mm.pbs.talon or qsub mm_omp.pbs.talon or qsub mmp.pbs.talon.

Note: microMegas is a long running code. To run long simulations please contact the HPC2 administrators to request access to the 'special' queue.
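
The three sample PBS scripts themselves are not reproduced in this copy of the page. A minimal sketch of what mm_omp.pbs.talon might contain is given below; the job name, core count and walltime are assumptions for illustration (talon nodes have 12 cores), not values taken from the original scripts.

#!/bin/bash
#PBS -N mm_omp
#PBS -l nodes=1:ppn=12
#PBS -l walltime=96:00:00
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=12
./mm_omp > screen.out 2>&1

The mm and mmp variants would differ mainly in the resource request and in the launch line (e.g., an mpirun invocation for mmp).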

Output files

The output files are located in the mM/out directory. The most important outputs are briefly described below. For more details on the content and meaning of each file, please refer to the actual content of these files.

BVD.CFC - the set of reference vectors used in the simulation for a given crystal

bigsave.bin - a binary file containing everything needed to restart a simulation if it is accidentally stopped

film.bin - a binary file where the coordinates of segments are periodically saved to build up a trajectory file

gamma - a file containing the evolution of gamma for all existing slip systems

gammap - a file containing the evolution of the instantaneous gamma dot for all the slip systems

rau - a file containing the evolution of rho, the dislocation density, for all the slip systems

raujonc - a file containing the evolution of the junction density and number for all slip systems

resul - a Gnuplot script for plotting various simulation data (run 'gnuplot resul' to see the results)