
Monte Carlo Simulations


In this section, we briefly review the Monte Carlo programs used for the simulation of the back-scattered electron distributions for EBSD, ECP, and TKD diffraction modalities.

Monte Carlo Simulations: Background

The forward models used in EMsoft for the simulation of EBSD, ECP, and TKD patterns require knowledge of the energy, depth, and directional distributions of the back-scattered electrons (BSEs) for a given incident electron energy, sample type, and sample orientation.

The underlying Monte Carlo code is based on David Joy's implementation of the Continuous Slowing Down Approximation (due to Bethe); for detailed information we refer to D. Joy, Monte Carlo Modeling for Electron Microscopy and Microanalysis, Oxford University Press, USA, 1995. The electron trajectories are computed on a GPU using OpenCL; a fortran-90 program initializes the simulation, handles communication with the GPU, and creates the HDF5 output file. On a GTX 1080 gaming card, a two-billion electron simulation takes about 4 minutes. A more advanced version of the Monte Carlo code, using a Discrete Losses Approximation, is currently under development.
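
To make the CSDA approach a little more concrete, here is a minimal, schematic Python sketch of a single electron trajectory: the distance to the next elastic scattering event is drawn from the screened Rutherford mean free path, the scattering angle follows the screened Rutherford distribution, and the energy decreases continuously between events according to the Bethe stopping power. The expressions and constants are the commonly published textbook forms, and the nickel target values are assumed for illustration; this is a sketch only, not the actual EMsoft OpenCL kernel.

# Minimal single-electron CSDA sketch (illustrative only; not the EMsoft OpenCL kernel)
import math, random

Z, A, rho = 28, 58.69, 8.91                 # assumed target: nickel (atomic number, atomic weight, density g/cm^3)
J = (9.76 * Z + 58.5 * Z**-0.19) * 1.0e-3   # mean ionization potential [keV]

def elastic_mfp(E):                         # elastic mean free path [cm] for energy E [keV]
    alpha = 3.4e-3 * Z**0.67 / E            # screening parameter
    sigma = 5.21e-21 * (Z / E)**2 * 4.0 * math.pi / (alpha * (1.0 + alpha)) * ((E + 511.0) / (E + 1024.0))**2
    return A / (6.022e23 * rho * sigma)

def scatter_angle(E):                       # polar scattering angle [rad], screened Rutherford distribution
    alpha, R = 3.4e-3 * Z**0.67 / E, random.random()
    return math.acos(1.0 - 2.0 * alpha * R / (1.0 + alpha - R))

def bethe_dEds(E):                          # Bethe stopping power [keV/cm] (continuous slowing down)
    return -78500.0 * rho * Z / (A * E) * math.log(1.166 * E / J)

E, path = 30.0, 0.0                         # incident energy [keV], total path length [cm]
while E > 1.0:                              # follow the electron until it is nearly stopped
    step = -elastic_mfp(E) * math.log(1.0 - random.random())   # free path to the next elastic event
    theta = scatter_angle(E)                # (3D direction and depth bookkeeping omitted in this sketch)
    E += bethe_dEds(E) * step               # continuous energy loss along the step
    path += step
print(E, path)                              # final energy [keV] and total path length [cm]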

Preparation for the Monte Carlo Simulation

Since the main Monte Carlo program employs OpenCL to address a Graphical Processing Unit or GPU, you should make sure that the OpenCL environment is properly installed on your computer. On the command line, enter the following command:

EMOpenCLinfo

This program will query the OpenCL platform and device configurations on your computer and list some information about the device capabilities. On a 2015 MacBook Pro, this program produces the following output:

Number of Platforms: 1
--------
Platform: 1

 Profile: FULL_PROFILE
 Version: OpenCL 1.2 (Apr  4 2017 19:07:42)
 Name: Apple
 Vendor: Apple
 Extensions: cl_APPLE_SetMemObjectDestructor cl_APPLE_ContextLoggingFunctions cl_APPLE_clut cl_APPLE_query_kernel_names cl_APPLE_gl_sharing cl_khr_gl_event


Num CPU Devices:  1
 Device (# 1, CU/MWGS/MWIS/GMS:    8/1024/1024,   1,   1/ 16) -  Intel(R) Core(TM) i7-4980HQ CPU @ 2.80GHz


Num GPU Devices:  2
 Device (# 1, CU/MWGS/MWIS/GMS/MAS:   40/ 512/ 512, 512, 512/  1, 384) -  Iris Pro
 Device (# 2, CU/MWGS/MWIS/GMS/MAS:   10/ 256/ 256, 256, 256/  2, 512) -  AMD Radeon R9 M370X Compute Engine
  
 [CU = Compute Units; MWGS = Maximum Work Group Size; MWIS = Maximum Work Item Sizes (3D); GMS = Global Memory Size (Gb); MAS = Maximum Allocatable Memory Size (Mb)]

--------

There is a single configured OpenCL platform with one CPU device (8 compute units and the stated Global Memory Size of 16 Gb) and two GPU devices: the Iris Pro, which drives the laptop screen, and an AMD Radeon R9 M370X Compute Engine with 10 compute units. On this computer you should use the second GPU device for your computations (platform ID = 1, device ID = 2). Your configuration will likely be different: the CPU may not show up as an OpenCL device, and there may be multiple OpenCL platforms. If no platform is listed, or if the program fails to produce any output, then OpenCL is either not installed or incorrectly installed on your computer; in that case the EMMCOpenCL program described in the next section will not function properly.
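
For the configuration shown above, the corresponding device selection entries in the EMMCOpenCL name list file (described in the next section) would read:

 platid = 1,
 devid = 2,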

Monte Carlo Simulations for EBSD and ECP

The first step in any EBSD/ECP pattern simulation is the Monte Carlo program EMMCOpenCL; this program performs a Monte Carlo simulation using the Bethe Continuous Slowing Down Approximation along with the Rutherford scattering cross section, and produces histograms of the depth, directional, and energy distributions of the back-scattered electrons (BSEs). This version of the program employs a GPU (Graphical Processing Unit) for the main part of the computation; if you do not have a GPU card, then you can run the older EMMC program instead, but be aware that this will be much, much slower than on the GPU.

To set up the simulation parameters, go to the folder where you wish to keep the data and type

EMMCOpenCL -t

The -t option forces the program to create a template file without performing any simulations. The file will have the same name as the program, with extension .template. Rename the file to whatever name you prefer, making sure to use the extension .nml, which stands for name list file, the standard fortran-90 mechanism for I/O of name-value pairs. Edit the file, which contains the following default content:

 &MCCLdata
! only bse1, full or Ivol simulation
 mode = 'full' ! 'bse1' or 'full', 'Ivol',
! name of the crystal structure file
 xtalname = 'undefined',
! for full mode: sample tilt angle from horizontal [degrees]
 sig = 70.0,
! for bse1 mode: start angle
 sigstart = 0.0,
! for bse1 mode: end angle
 sigend = 30.0,
! for bse1 mode: sig step size
 sigstep = 2.0,
! sample tilt angle around RD axis [degrees]
 omega = 0.0,
! number of pixels along x-direction of square projection [odd number!]
 numsx = 501,
! number of incident electrons per thread
 num_el = 10,
! GPU platform ID selector 
 platid = 1,
! GPU device ID selector 
 devid = 1,
! number of work items (depends on GPU card; leave unchanged)
 globalworkgrpsz = 150,
! total number of incident electrons and multiplier (to get more than 2^(31)-1 electrons)
 totnum_el = 2000000000,
 multiplier = 1,
! incident beam energy [keV]
 EkeV = 30.D0,
! minimum energy to consider [keV]
 Ehistmin = 15.D0,
! energy binsize [keV]
 Ebinsize = 1.0D0,
! maximum depth to consider for exit depth statistics [nm]
 depthmax = 100.D0,
! depth step size [nm]
 depthstep = 1.0D0,
! should the user be notified by email or Slack that the program has completed its run?
 Notify = 'Off',
! output data file name; pathname is relative to the EMdatapathname path !!!
 dataname = 'MCoutput.h5'
 /

Note that there are two types of lines in this file: lines starting with ! are comment lines; lines starting with a space contain name-value pairs and they should end with a comma. The comment lines are meant to provide a brief explanation of the meaning of each parameter; they will also state the units, if any, and the valid parameter range when appropriate.

The number of electrons to be used for the simulation deserves a brief explanation. The default value for totnum_el is 2 billion incident electrons, which is roughly equivalent to one-third of a nano-Coulomb. To get very nearly 1 nC of incident electrons, set totnum_el to 2080000000 and multiplier to 3. Experience has shown that the default value is good enough for most simulations, except perhaps for a material with a very low average atomic number. For most simulations, the precise number of incident electrons does not matter all that much. The one situation in which you do need to pay attention to it is when you are simulating EBSD patterns and would like a reasonable estimate of the actual number of electrons hitting the detector in, say, one millisecond; if you wish to include Poisson noise, then the actual intensity numbers matter, since this noise goes as the square root of the intensity.
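
As a quick sanity check of these numbers (using the elementary charge, about 1.602 x 10^-19 C):

# charge bookkeeping for the Monte Carlo electron counts
e = 1.602176634e-19                  # elementary charge [C]
print(1.0e-9 / e)                    # ~6.24e9 electrons in 1 nC, i.e. 3 x 2080000000
print(2000000000 * e / 1.0e-9)       # the default 2e9 electrons correspond to ~0.32 nC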

The output of the program is stored in an HDF5-formatted file dataname; the Monte Carlo histograms are stored in square Lambert projection form in an array of numsx × numsx pixels (numsx must be an odd number). The next series of parameters handles the GPU settings: each GPU work item will attempt to compute the trajectories of num_el electrons before returning data to the calling program. The GPU card can be selected with the platform ID platid and device ID devid; valid ranges for these parameters can be obtained with the EMOpenCLinfo utility program. The globalworkgrpsz parameter is set to a reasonable default value, but on some cards it can be raised to a higher number, e.g., 512 or more; if the BSE yield is zero at the end of a run, then this parameter may have an unsuitable value, and there is no way to determine which values are acceptable on a given GPU other than trial and error. The total number of electrons considered in the simulation is multiplier * totnum_el; the maximum value for totnum_el is 2^(31)-1, but more electrons can be obtained by setting the multiplier parameter to a number larger than 1. Finally, the Notify parameter allows the program to notify the user by email or by a message to the user's Slack channel when the run has completed.

Note that for the ECP modality, the mode parameter must be set to bse1, while for EBSD one should use full.
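
For example, to prepare an ECP run, the only change needed in the name list file shown above is the mode entry (the bse1 angular range is then controlled by the sigstart, sigend, and sigstep parameters instead of sig):

 mode = 'bse1',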

Edit the namelist file (see Example 2), then execute the program as follows:

EMMCOpenCL filename.nml

The EMMCOpenCL program will create an HDF5-formatted output file that serves as one of the inputs for the EBSD master pattern simulation.
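
The layout of this file can be inspected with any generic HDF5 tool (for instance h5dump, or the Python h5py package); the short snippet below simply prints the group and dataset hierarchy, assuming the default output file name MCoutput.h5 used above:

import h5py                                # generic HDF5 reader, not part of EMsoft
with h5py.File('MCoutput.h5', 'r') as f:   # file name taken from the dataname parameter above
    f.visit(print)                         # print every group and dataset name in the file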

Note for low voltage EBSD simulations

The parameters listed above for the energy bin size and for the depth range and depth step size are adequate for conventional SEM voltages between 15 and 30 kV. If a simulation at a lower voltage is needed, in particular below 10 kV, then one must be careful to set the step sizes to smaller values. For instance, for a simulation at 3 kV, one should reduce the Ebinsize parameter to 0.25 keV, and set Ehistmin to 1.5 or 1.0 keV. Since the penetration depth decreases substantially at lower voltages, depthmax should be set to 10 or 20 nm, and depthstep to 0.1 nm; this will make the subsequent master pattern simulation a little slower, but it should produce reliable master pattern results. Using larger values for the step sizes may result in an incorrect computation of the master pattern (in some test cases, negative intensities were found in the master pattern array, rendering the pattern useless).
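
For example, a 3 kV simulation could use the following name list entries, which simply implement the values suggested in the preceding paragraph (all other parameters as in the default file):

 EkeV = 3.D0,
 Ehistmin = 1.0D0,
 Ebinsize = 0.25D0,
 depthmax = 20.D0,
 depthstep = 0.1D0,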

Monte Carlo Simulations for TKD

For the TKD modality, a separate Monte Carlo program, EMMCfoil, is used. The main reason for a separate program is that the geometry for TKD is quite different from that for EBSD and ECP: the scattered electrons emerge from the bottom surface of the sample, and the sample is tilted in the opposite direction, so that the bottom surface faces the detector; the sample tilt angle must hence be a negative quantity. The name list file for the EMMCfoil program contains the following entries:

&MCCLfoildata
! name of the crystal structure file
 xtalname = 'undefined',
! for full mode: sample tilt angle from horizontal [degrees]
 sig = -20.0,
! sample tilt angle around RD axis [degrees]
 omega = 0.0,
! number of pixels along x-direction of square projection [odd number!]
 numsx = 501,
! number of incident electrons per thread
 num_el = 10,
! GPU platform ID selector 
 platid = 1,
! GPU device ID selector 
 devid = 1,
! number of work items (depends on GPU card; leave unchanged)
 globalworkgrpsz = 150,
! total number of incident electrons and multiplier (to get more than 2^(31)-1 electrons)
 totnum_el = 2000000000,
 multiplier = 1,
! incident beam energy [keV]
 EkeV = 30.D0,
! minimum energy to consider [keV]
 Ehistmin = 10.D0,
! energy binsize [keV]
 Ebinsize = 1.0D0,
! max depth [nm] (this is the maximum distance from the bottom foil surface to be considered)
 depthmax = 100.0D0,
! depth step size [nm]
 depthstep = 1.0D0,
! total foil thickness (must be larger than depth)
 thickness = 200.0,
! output data file name; pathname is relative to the EMdatapathname path !!!
 dataname = 'MCoutput.h5'
 /

A second difference is the presence of the thickness parameter, which is the foil thickness; in the TKD mode, most of the electrons originate from close to the exit surface, and the energy distribution will depend on how much material is present above the exit surface. The larger the foil thickness, the more energy the electrons will lose on average before they reach the exit surface; hence, the TKD pattern will have broader Kikuchi bands for a thick sample than for a thin foil. The other parameters listed above are identical to those used in the EMMCOpenCL program.

Complete Examples

  1. Crystal Data Entry Example
  2. EBSD Example
  3. ECP Example
  4. TKD Example
  5. ECCI Example
  6. CBED Example
  7. Dictionary Indexing Example
  8. DItutorial
