
Segfault with vectorization #106

Closed
Tissot11 opened this issue Apr 15, 2019 · 57 comments

@Tissot11

Tissot11 commented Apr 15, 2019

Description

I guess there is a problem with ee collisions. Turning them on causes Smilei to crash with a segmentation fault. I talked with our sysadmin and, after looking into the core files, he suggests that the problem is in the Boris pusher of Smilei. I have fetched the latest version of Smilei via GitHub to avoid the bug reported in #79.

If available, copy-paste faulty code, warnings, errors, etc.

Output of stderr

[lfc182][[13255,1],49][btl_tcp_frag.c:230:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)

Output of stdout

Got 256.

                    _            _
  ___           _  | |        _  \ \   Version : v4.1-334-g67b25fff-master
 / __|  _ __   (_) | |  ___  (_)  | |   
 \__ \ | '  \   _  | | / -_)  _   | |
 |___/ |_|_|_| |_| |_| \___| |_|  | |  
                                 /_/    
 
 

 Reading the simulation parameters
 --------------------------------------------------------------------------------
 HDF5 version 1.8.16
	 Python version 2.7.5
	 Parsing pyinit.py
	 Parsing v4.1-334-g67b25fff-master
	 Parsing pyprofiles.py
	 Parsing Shock_Coll_2D.py
	 Parsing pycontrol.py
	 Check for function preprocess()
	 python preprocess function does not exist
	 Calling python _smilei_check
	 Calling python _prepare_checkpoint_dir
	[WARNING] Resources allocated 3584 underloaded regarding the total number of patches 256
	[WARNING] Change patches distribution to hilbertian
	[WARNING] For collisions, particles have been forced to be sorted per cell
	[WARNING] simulation_time has been redefined from 2387.610417 to 2387.610417 to match timestep.
	[WARNING] Particles cluster width set to : 100
	[WARNING] Particles cluster width set to: 200 for the adaptive vectorization mode
 

 Geometry: 2Dcartesian
 --------------------------------------------------------------------------------
	 Interpolation order : 2
	 Maxwell solver : Yee
	 (Time resolution, Total simulation time) : (23.873241, 2387.610417)
	 (Total number of iterations,   timestep) : (57000, 0.041888)
	            timestep  = 0.377124 * CFL
	 dimension 0 - (Spatial resolution, Grid length) : (6.366198, 502.654825)
	             - (Number of cells,    Cell length)  : (3200, 0.157080)
	             - Electromagnetic boundary conditions: (silver-muller, silver-muller)
                     - Electromagnetic boundary conditions k    : ( [1.00, 0.00] , [-1.00, -0.00] )
	 dimension 1 - (Spatial resolution, Grid length) : (6.37, 37.70)
	             - (Number of cells,    Cell length)  : (240, 0.16)
	             - Electromagnetic boundary conditions: (periodic, periodic)
                     - Electromagnetic boundary conditions k    : ( [0.00, 1.00] , [-0.00, -1.00] )
 

 Vectorization: 
 --------------------------------------------------------------------------------
	 Mode: adaptive
	 Default mode: off
	 Time selection: never
 

 Patch arrangement : 
 --------------------------------------------------------------------------------
 

 Initializing MPI
 --------------------------------------------------------------------------------
	 applied topology for periodic BCs in y-direction
	 MPI_THREAD_MULTIPLE enabled
	 Number of MPI process : 256
	 Number of patches : 
		 dimension 0 - number_of_patches : 16
		 dimension 1 - number_of_patches : 16
	 Patch size :
		 dimension 0 - n_space : 200 cells.
		 dimension 1 - n_space : 15 cells.
	 Dynamic load balancing: never
 

 OpenMP
 --------------------------------------------------------------------------------
	 Number of thread per MPI process : 14
 

 Initializing the restart environment
 --------------------------------------------------------------------------------
 

 Initializing moving window
 --------------------------------------------------------------------------------
 

 Initializing particles & fields
 --------------------------------------------------------------------------------
	 Creating Species : ion
	 Creating Species : eon
	 Laser parameters :
 		Laser #0: separable profile
			omega              : 1
			chirp_profile      : 1D built-in profile `tconstant`
			time envelope      : 1D built-in profile `tconstant`
			space envelope (y) : 1D built-in profile `constant`
			space envelope (z) : 1D built-in profile `constant`
			phase          (y) : 1D built-in profile `constant`
			phase          (z) : 1D built-in profile `constant`
		delay phase      (y) : 0
		delay phase      (z) : 0
	 Parameters for collisions #0 :
		 Collisions between species (1) and (0)
		 Coulomb logarithm: 0.00
		 Debug every 1500 timesteps
	 Parameters for collisions #1 :
		 Intra collisions within species (1)
		 Coulomb logarithm: 0.00
		 Debug every 1500 timesteps
	 Adding particle walls:
		 Nothing to do
 

 Initializing Patches
 --------------------------------------------------------------------------------
	 First patch created
	 All patches created
 

 Creating Diagnostics, antennas, and external fields
 --------------------------------------------------------------------------------
	 Created ParticleBinning diagnostic #0: species eon
		 Axis ekin from 0.02 to 1000 in 4000 steps [LOGSCALE] 
	 Created ParticleBinning diagnostic #1: species ion
		 Axis ekin from 0.02 to 200 in 4000 steps
	 Created ParticleBinning diagnostic #2: species eon
		 Axis x from 0 to 502.655 in 1000 steps
		 Axis px from -150 to 150 in 4000 steps
	 Created ParticleBinning diagnostic #3: species ion
		 Axis x from 0 to 502.655 in 1000 steps
		 Axis px from -1000 to 1000 in 4000 steps
	 Created ParticleBinning diagnostic #4: species ion,eon
		 Axis x from 0 to 502.655 in 4000 steps
		 Axis y from 0 to 37.6991 in 4000 steps
	 Diagnostic Fields #0  :
		 Ex Ey Bz Rho_ion Rho_eon 
	 Done initializing diagnostics, antennas, and external fields
 

 Applying external fields at time t = 0
 --------------------------------------------------------------------------------
 
 
 

 Solving Poisson at time t = 0
 --------------------------------------------------------------------------------
	 Poisson solver converged at iteration: 0, relative err is ctrl = 0.00 x 1e-14
	 Poisson equation solved. Maximum err = 0.00 at i= -1
 Time in Poisson : 0.03
 

 Initializing diagnostics
 --------------------------------------------------------------------------------
 

 Running diags at time t = 0
 --------------------------------------------------------------------------------
 

 Species creation summary
 --------------------------------------------------------------------------------
		 Species 0 (ion) created with 23520000 particles
		 Species 1 (eon) created with 23520000 particles
 

 Patch arrangement : 
 --------------------------------------------------------------------------------
 

 Memory consumption
 --------------------------------------------------------------------------------
	 (Master) Species part = 0 MB
	 Global Species part = 4.381 GB
	 Max Species part = 28 MB
	 (Master) Fields part = 0 MB
	 Global Fields part = 0.117 GB
	 Max Fields part = 0 MB
	 (Master) ParticleBinning0.h5  = 0 MB
	 Global ParticleBinning0.h5 = 0.008 GB
	 Max ParticleBinning0.h5 = 0 MB
	 (Master) ParticleBinning1.h5  = 0 MB
	 Global ParticleBinning1.h5 = 0.008 GB
	 Max ParticleBinning1.h5 = 0 MB
	 (Master) ParticleBinning2.h5  = 30 MB
	 Global ParticleBinning2.h5 = 7.629 GB
	 Max ParticleBinning2.h5 = 30 MB
	 (Master) ParticleBinning3.h5  = 30 MB
	 Global ParticleBinning3.h5 = 7.629 GB
	 Max ParticleBinning3.h5 = 30 MB
	 (Master) ParticleBinning4.h5  = 122 MB
	 Global ParticleBinning4.h5 = 30.518 GB
	 Max ParticleBinning4.h5 = 122 MB
 

 Expected disk usage (approximate)
 --------------------------------------------------------------------------------
	 WARNING: disk usage by non-uniform particles maybe strongly underestimated,
	    especially when particles are created at runtime (ionization, pair generation, etc.)
	 
	 Expected disk usage for diagnostics:
		 File Fields0.h5: 1.09 G
		 File scalars.txt: 30.09 K
		 File ParticleBinning0.h5: 1.18 M
		 File ParticleBinning1.h5: 1.18 M
		 File ParticleBinning2.h5: 1.13 G
		 File ParticleBinning3.h5: 1.13 G
		 File ParticleBinning4.h5: 4.53 G
	 Total disk usage for diagnostics: 7.89 G
	 
 

 Cleaning up python runtime environement
 --------------------------------------------------------------------------------
	 Checking for cleanup() function:
	 python cleanup function does not exist
	 Calling python _keep_python_running() :
		 Closing Python
 

 Time-Loop started: number of time-steps n_time = 57000
 --------------------------------------------------------------------------------
      timestep       sim time   cpu time [s]   (    diff [s] )
    5700/57000     2.3878e+02     1.1716e+04   (  1.1716e+04 )
--------------------------------------------------------------------------
mpirun noticed that process rank 68 with PID 60013 on node lfc176.mpi-hd.mpg.de exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

Steps to reproduce the problem

If relevant, provide a step-by-step guide

Parameters

# ----------------------------------------------------------------------------------------
#                     SIMULATION PARAMETERS FOR THE PIC-CODE SMILEI
# ----------------------------------------------------------------------------------------

import math

c = 299792458
electron_mass = 9.10938356e-31
electron_charge = 1.60217662e-19
lambdar = 1e-6                          # Normalization wavelength
wr = 2*math.pi*c/lambdar                # Normalization frequency


l0 = 2.0*math.pi       # laser wavelength
t0 = l0                # optical cycle
Lsim = [80.*l0,6.*l0]  # length of the simulation
Tsim = 380.*t0          # duration of the simulation
resx = 40.            # nb of cells in one laser wavelength
rest = 150.           # nb of timesteps in one optical cycle

Main(
    geometry = "2Dcartesian",
    
    interpolation_order = 2 ,
    
    cell_length = [l0/resx,l0/resx],
    grid_length  = Lsim,
    
    number_of_patches = [ 16, 16 ],
    
    timestep = t0/rest,
    simulation_time = Tsim,
     
    EM_boundary_conditions = [
        ['silver-muller'],
        ['periodic'],
    ],
    reference_angular_frequency_SI = wr,
    random_seed = smilei_mpi_rank
)


# We build a gaussian laser from scratch instead of using LaserGaussian2D
# The goal is to test the space_time_profile attribute




Laser(
    box_side       = "xmin",
    #omega          = 1.,
    time_envelope  = tconstant(),#tgaussian(start = 0., duration = 120., fwhm = 201., center = 60.), # 
    space_envelope = [ 0., constant(60.) ],
    phase          = [ 0., 0. ],
    delay_phase    = [ 0., 0. ]
)


#LaserPlanar1D(
#    box_side         = "xmin",
#    a0               = 300.,
#    omega            = 1.,
#    polarization_phi = 0.,
#    ellipticity      = 0.,
#    time_envelope    = tgaussian(start = 0., duration = 400.)
#)


Species(
    name = "ion",
    position_initialization = 'regular',
    momentum_initialization = 'mj',
    particles_per_cell = 49,
    mass = 1*1836.0,
    charge = 1.0,   
    number_density = trapezoidal(50.0,xvacuum=6*l0, xslope1=0. ,xplateau=50*l0, xslope2 = 0.),
    temperature = [0.0017],
    boundary_conditions = [
        ["remove", "remove"],
        ["periodic", "periodic"],
    ],
)


Species(
    name = "eon",
    position_initialization = 'regular',
    momentum_initialization = 'mj',
    particles_per_cell = 49,
    mass = 1.0,
    charge = -1.0,
    number_density = trapezoidal(50.0,xvacuum=6*l0 , xslope1=0. ,xplateau=50*l0, xslope2=0. ),
    temperature = [0.0017],
    boundary_conditions = [
        ["remove", "remove"],
        ["periodic", "periodic"],
    ], 

  
)

Collisions(
        species1 = ["eon"],
        species2 = ["ion"],
        debug_every = 1500,
)

Collisions(
    species1 = ["eon"],
    species2 = ["eon"],
    debug_every = 1500,
)


DiagFields(
    every = 1500,
    fields = ['Ex','Ey','Bz','Rho_ion','Rho_eon']
    #fields = ['Ex','Ey','Bz']
)

DiagScalar(every=1500,
)

DiagParticleBinning(

    deposited_quantity = "weight",
    every = 1500,
    time_average = 1,
    species = ["eon"],
    axes = [ ["ekin", 0.02, 1000., 4000., "logscale" ] ],
)

DiagParticleBinning(

    deposited_quantity = "weight",
    every = 1500,
    time_average = 1,
    species = ["ion"],
    axes = [ ["ekin", 0.02, 200., 4000 ] ],
)

DiagParticleBinning(
    deposited_quantity = "weight",
    every = 1500,
    time_average = 1,
    species = ["eon"],
    axes = [ ["x",    0.,    80.*l0,    1000],
             ["px",   -150.,   150.,    4000] ]
)

DiagParticleBinning(
    deposited_quantity = "weight",
    every = 1500,
    time_average = 1,
    species = ["ion"],
    axes = [ ["x",    0.,    80.*l0 ,   1000],
             ["px",   -1000.,   1000.,    4000] ]
)

DiagParticleBinning(
    deposited_quantity = "weight",
    every = 1500,
    time_average = 1,
    species = ["ion","eon"],
    axes = [ ["x",    0.,    80.*l0 ,   4000],
             ["y",    0.,     6.*l0,    4000] ]
)
Tissot11 added the bug label Apr 15, 2019
@mccoys
Contributor

mccoys commented Apr 15, 2019

First remark: you have 256 MPI processes x 14 threads = 3584 threads, which is way more than the number of patches (16 x 16 = 256) that you defined. This means most of your resources will not be used.

That said, your error should not happen. I am not sure collisions are the real problem, but I will investigate.

@mccoys
Contributor

mccoys commented Apr 15, 2019

Actually, this error can be caused by the simulation exceeding the time limit on your job request. Have you checked that ?

@Tissot11
Author

The threads-and-patches issue is a bit unclear to me. What I understand is that the number of patches should be equal to or larger than the number of slots the job is submitted to. On our cluster I submitted the job to 256 slots. I don't know if I can control the number of MPI threads per slot.

I didn't fully understand the second issue. If I submit this job without ee collisions, it takes one or two days to finish. But with ee collisions on, it crashed after 2 hours of running. In the stdout, one can see only one output entry; there should be at least 10 more entries (n_time = 57000) before the simulation finishes. If I have misunderstood something, please let me know.

@mccoys
Contributor

mccoys commented Apr 15, 2019

Concerning the first issue, there is some documentation here: http://www.maisondelasimulation.fr/smilei/parallelization.html
You requested 256 MPI processes. In each process, there can be a certain number of threads (you apparently requested 14). This makes a total of 3584 threads. This number should be smaller than the number of patches, which is 256 in your case. You probably want to request fewer processes. Knowing that we recommend 1 thread = 1 core, you should adjust the number of cores accordingly. Tell us more about your cluster if you need more precise help, although this should be asked of your admin first.

Concerning the second issue, I just suggested that the problem could have been the requested time limit. After what you said, I think this was not the case, so I will investigate. Note that 2 days to run this case is way too much. This is probably due to the first issue.

@Tissot11
Author

Actually I have no control over these 14 threads being requested. I talked with our sysadmin and he said that this is done by Smilei. It would be nice if one could force 1 core = 1 thread in Smilei. We have a cluster where each node has 28 CPU cores, and when running the EPOCH code we just specify the number of slots or CPU cores to run the simulation on. I agree that if the total number of threads is too large, Smilei runs slower, as I noticed a while ago. Normally I try running Smilei on 64 slots/CPUs, so that the total number of threads does not become large. It would be nice if you could explain how to limit the number of MPI threads generated, in the Python script specifying the parameters itself.

@mccoys
Contributor

mccoys commented Apr 15, 2019

There is a problem of vocabulary here. We do not seem to use the same conventions. Let me give you the naming we use:

  • MPI process (sometimes called task): These are sorts of copies of smilei that will be run simultaneously. They have separate memory and communicate using the MPI protocol. They are chosen by the command mpirun. For example, mpirun -np 16 smilei ... will run on 16 processes.
  • OpenMP threads: These are computing instructions that are handled by the OpenMP protocol within each MPI process. They share the memory that this MPI process owns. The number of threads per process is chosen by setting the environment variable OMP_NUM_THREADS.

If you choose Nomp threads per process, and Nmpi MPI processes, then you obtain a total of Nomp x Nmpi threads.
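
For example (purely illustrative numbers, not a recommendation for your machine): 16 processes with 8 threads each give 16 x 8 = 128 threads in total, which stays below the 16 x 16 = 256 patches of your namelist:

export OMP_NUM_THREADS=8            # Nomp: threads per MPI process
mpirun -np 16 smilei my_input.py    # Nmpi = 16  ->  16 x 8 = 128 threads in total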

I don't know what you mean by slot.

Smilei does not control the number of MPI processes and OpenMP threads. You have to provide them as explained above. Usually, it is good practice to choose the number of MPI processes (Nmpi) equal to the number of nodes that you intend to run on, and the number of threads per process (Nomp) equal to the number of cores per node. However, this depends on the machine and on the simulation. You must also verify that the total number of threads is (much) smaller than the number of patches of your simulation. Again, this is detailed here: http://www.maisondelasimulation.fr/smilei/parallelization.html

I don't know how EPOCH works, but it may be that EPOCH does not use this hybrid MPI/OpenMP technique, and would thus be simpler to use but lack some performance.

@Tissot11
Author

Thanks for the detailed answer. By "slot" I meant the number of CPU cores (without hyperthreading). I have now set OMP_NUM_THREADS=1 and it has indeed launched 1 OpenMP thread per core. I hope this doesn't compromise the speed advantage of Smilei? I'm running the same simulation again with 256 MPI processes (slots in our convention) to see if it runs without crashing now. I'll let you know the result.

@mccoys
Contributor

mccoys commented Apr 16, 2019

You misunderstood the option OMP_NUM_THREADS. It sets the number of OpenMP threads per MPI process, NOT PER CORE.

A core is the smallest computing element (it is a piece of hardware). Cores are bundled into nodes: in your case, 28 cores inside one node. Again, a node is a piece of hardware. Do not call a core a CPU core, because it is very misleading: some software use "CPU" for the nodes, not the cores. Furthermore, do not confuse threads and processes, which are not hardware, but software.

You should set export OMP_NUM_THREADS=28, so that you have 28 threads per MPI process, and choose 1 MPI process for each node. For instance, if you run on 10 nodes, you should set 10 MPI processes, which makes a total of 280 threads. This is pretty close to what you are used to doing with EPOCH, apparently. Your commands should look like:

export OMP_NUM_THREADS=28
export OMP_SCHEDULE=dynamic
export OMP_PROC_BIND=TRUE
mpirun -np 10 --bind-to node smilei my_input.py

@Tissot11
Author

Thanks for the quick reply. I agree that the terminology is a bit confusing. I talked with our sysadmin and he says we can't use 10 nodes and OMP_NUM_THREADS=28, as the cluster is never empty. This configuration can only work with the round-robin method of Sun Grid Engine on our cluster, which is a special requirement. Our cluster supports filling up slots (MPI processes) by default. I can try 60 MPI processes and OMP_NUM_THREADS=5. I read the Parallelization basics page to understand how Smilei works. However, I couldn't find any info about the scaling of Smilei for a given problem. Is it faster with a smaller number of MPI processes (10) and a higher number of OpenMP threads (28), as you just suggested? Or does it not matter, and one can choose a higher number of MPI processes (60) and fewer OpenMP threads (5), as I just mentioned?

@Tissot11
Author

OK, I have now found out that choosing a smaller number of MPI processes and a higher number of threads is beneficial, as shown here:
http://www.maisondelasimulation.fr/smilei/highlights.html

I'll see how I should efficiently run Smilei on our cluster.

@Tissot11
Author

But surprisingly, with OMP_NUM_THREADS=1 the code is running and hasn't crashed yet. In fact, it has passed the point where it was crashing earlier with ee collisions. This is puzzling...

@xxirii
Contributor

xxirii commented Apr 16, 2019

What @mccoys said is correct. I just want to add that the last configuration you used (full MPI and 1 OpenMP thread per MPI process) is valid, contrary to what you did at first. It is correct but, in many cases, not the most efficient. This is what Fred was saying. Nonetheless, this configuration is not that bad when the load remains balanced among the different MPI processes during the whole simulation. I would keep your simulation running to the end and, when you have the time, run it again with 60 MPI processes and 5 OpenMP threads to compare.

@Tissot11
Author

Tissot11 commented Apr 17, 2019

When I ran with 50 MPI processes and OMP_NUM_THREADS=5, it crashed after 1 hour 22 minutes without displaying any output (see below). With 256 MPI processes and OMP_NUM_THREADS=1, it took 5 hours 16 minutes to finish. So this is puzzling, and it appears that with ee collisions on, Smilei crashes with a higher OMP_NUM_THREADS. I can try reducing the total number of threads further below the number of patches (256).

------------------
Got 50.
                    _            _
  ___           _  | |        _  \ \   Version : v4.1-334-g67b25fff-master
 / __|  _ __   (_) | |  ___  (_)  | |   
 \__ \ | '  \   _  | | / -_)  _   | |
 |___/ |_|_|_| |_| |_| \___| |_|  | |  
                                 /_/    
 
 

 Reading the simulation parameters
 --------------------------------------------------------------------------------
 HDF5 version 1.8.16
	 Python version 2.7.5
	 Parsing pyinit.py
	 Parsing v4.1-334-g67b25fff-master
	 Parsing pyprofiles.py
	 Parsing Shock_Coll_2D.py
	 Parsing pycontrol.py
	 Check for function preprocess()
	 python preprocess function does not exist
	 Calling python _smilei_check
	 Calling python _prepare_checkpoint_dir
	[WARNING] Change patches distribution to hilbertian
	[WARNING] For collisions, particles have been forced to be sorted per cell
	[WARNING] simulation_time has been redefined from 2387.610417 to 2387.610417 to match timestep.
	[WARNING] Particles cluster width set to : 100
	[WARNING] Particles cluster width set to: 200 for the adaptive vectorization mode
 

 Geometry: 2Dcartesian
 --------------------------------------------------------------------------------
	 Interpolation order : 2
	 Maxwell solver : Yee
	 (Time resolution, Total simulation time) : (23.873241, 2387.610417)
	 (Total number of iterations,   timestep) : (57000, 0.041888)
	            timestep  = 0.377124 * CFL
	 dimension 0 - (Spatial resolution, Grid length) : (6.366198, 502.654825)
	             - (Number of cells,    Cell length)  : (3200, 0.157080)
	             - Electromagnetic boundary conditions: (silver-muller, silver-muller)
                     - Electromagnetic boundary conditions k    : ( [1.00, 0.00] , [-1.00, -0.00] )
	 dimension 1 - (Spatial resolution, Grid length) : (6.37, 37.70)
	             - (Number of cells,    Cell length)  : (240, 0.16)
	             - Electromagnetic boundary conditions: (periodic, periodic)
                     - Electromagnetic boundary conditions k    : ( [0.00, 1.00] , [-0.00, -1.00] )
 

 Vectorization: 
 --------------------------------------------------------------------------------
	 Mode: adaptive
	 Default mode: off
	 Time selection: never
 

 Patch arrangement : 
 --------------------------------------------------------------------------------
 

 Initializing MPI
 --------------------------------------------------------------------------------
	 applied topology for periodic BCs in y-direction
	 MPI_THREAD_MULTIPLE enabled
	 Number of MPI process : 50
	 Number of patches : 
		 dimension 0 - number_of_patches : 16
		 dimension 1 - number_of_patches : 16
	 Patch size :
		 dimension 0 - n_space : 200 cells.
		 dimension 1 - n_space : 15 cells.
	 Dynamic load balancing: never
 

 OpenMP
 --------------------------------------------------------------------------------
	 Number of thread per MPI process : 5
 

 Initializing the restart environment
 --------------------------------------------------------------------------------
 

 Initializing moving window
 --------------------------------------------------------------------------------
 

 Initializing particles & fields
 --------------------------------------------------------------------------------
	 Creating Species : ion
	 Creating Species : eon
	 Laser parameters :
 		Laser #0: separable profile
			omega              : 1
			chirp_profile      : 1D built-in profile `tconstant`
			time envelope      : 1D built-in profile `tconstant`
			space envelope (y) : 1D built-in profile `constant`
			space envelope (z) : 1D built-in profile `constant`
			phase          (y) : 1D built-in profile `constant`
			phase          (z) : 1D built-in profile `constant`
		delay phase      (y) : 0
		delay phase      (z) : 0
	 Parameters for collisions #0 :
		 Collisions between species (1) and (0)
		 Coulomb logarithm: 0.00
		 Debug every 1500 timesteps
	 Parameters for collisions #1 :
		 Intra collisions within species (1)
		 Coulomb logarithm: 0.00
		 Debug every 1500 timesteps
	 Adding particle walls:
		 Nothing to do
 

 Initializing Patches
 --------------------------------------------------------------------------------
	 First patch created
		 Approximately 10% of patches created
		 Approximately 20% of patches created
		 Approximately 30% of patches created
		 Approximately 40% of patches created
	 All patches created
 

 Creating Diagnostics, antennas, and external fields
 --------------------------------------------------------------------------------
	 Created ParticleBinning diagnostic #0: species eon
		 Axis ekin from 0.02 to 1000 in 4000 steps [LOGSCALE] 
	 Created ParticleBinning diagnostic #1: species ion
		 Axis ekin from 0.02 to 200 in 4000 steps
	 Created ParticleBinning diagnostic #2: species eon
		 Axis x from 0 to 502.655 in 1000 steps
		 Axis px from -150 to 150 in 4000 steps
	 Created ParticleBinning diagnostic #3: species ion
		 Axis x from 0 to 502.655 in 1000 steps
		 Axis px from -1000 to 1000 in 4000 steps
	 Created ParticleBinning diagnostic #4: species ion,eon
		 Axis x from 0 to 502.655 in 4000 steps
		 Axis y from 0 to 37.6991 in 4000 steps
	 Diagnostic Fields #0  :
		 Ex Ey Bz Rho_ion Rho_eon 
	 Done initializing diagnostics, antennas, and external fields
 

 Applying external fields at time t = 0
 --------------------------------------------------------------------------------
 
 
 

 Solving Poisson at time t = 0
 --------------------------------------------------------------------------------
	 Poisson solver converged at iteration: 0, relative err is ctrl = 0.00 x 1e-14
	 Poisson equation solved. Maximum err = 0.00 at i= -1
 Time in Poisson : 0.00
 

 Initializing diagnostics
 --------------------------------------------------------------------------------
 

 Running diags at time t = 0
 --------------------------------------------------------------------------------
 

 Species creation summary
 --------------------------------------------------------------------------------
		 Species 0 (ion) created with 23520000 particles
		 Species 1 (eon) created with 23520000 particles
 

 Patch arrangement : 
 --------------------------------------------------------------------------------
 

 Memory consumption
 --------------------------------------------------------------------------------
	 (Master) Species part = 44 MB
	 Global Species part = 4.381 GB
	 Max Species part = 168 MB
	 (Master) Fields part = 2 MB
	 Global Fields part = 0.117 GB
	 Max Fields part = 2 MB
	 (Master) ParticleBinning0.h5  = 0 MB
	 Global ParticleBinning0.h5 = 0.001 GB
	 Max ParticleBinning0.h5 = 0 MB
	 (Master) ParticleBinning1.h5  = 0 MB
	 Global ParticleBinning1.h5 = 0.001 GB
	 Max ParticleBinning1.h5 = 0 MB
	 (Master) ParticleBinning2.h5  = 30 MB
	 Global ParticleBinning2.h5 = 1.490 GB
	 Max ParticleBinning2.h5 = 30 MB
	 (Master) ParticleBinning3.h5  = 30 MB
	 Global ParticleBinning3.h5 = 1.490 GB
	 Max ParticleBinning3.h5 = 30 MB
	 (Master) ParticleBinning4.h5  = 122 MB
	 Global ParticleBinning4.h5 = 5.960 GB
	 Max ParticleBinning4.h5 = 122 MB
 

 Expected disk usage (approximate)
 --------------------------------------------------------------------------------
	 WARNING: disk usage by non-uniform particles maybe strongly underestimated,
	    especially when particles are created at runtime (ionization, pair generation, etc.)
	 
	 Expected disk usage for diagnostics:
		 File Fields0.h5: 1.09 G
		 File scalars.txt: 30.09 K
		 File ParticleBinning0.h5: 1.18 M
		 File ParticleBinning1.h5: 1.18 M
		 File ParticleBinning2.h5: 1.13 G
		 File ParticleBinning3.h5: 1.13 G
		 File ParticleBinning4.h5: 4.53 G
	 Total disk usage for diagnostics: 7.89 G
	 
 

 Cleaning up python runtime environement
 --------------------------------------------------------------------------------
	 Checking for cleanup() function:
	 python cleanup function does not exist
	 Calling python _keep_python_running() :
		 Closing Python
 

 Time-Loop started: number of time-steps n_time = 57000
 --------------------------------------------------------------------------------
      timestep       sim time   cpu time [s]   (    diff [s] )
--------------------------------------------------------------------------
mpirun noticed that process rank 17 with PID 0 on node lfc151 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

@xxirii
Contributor

xxirii commented Apr 17, 2019

It is probably not the cause of the crash, but you should not have the vectorization mode set to adaptive. The default mode is off and you do not specify anything in your namelist, so there is something strange here that is not your mistake. Can you, for the next runs, add the following section to your namelist:

Vectorization(
    mode = "off",
)

@mccoys
Contributor

mccoys commented Apr 17, 2019

@xxirii the vectorization mode is expected. It is due to the fact that collisions require particle sorting.

@xxirii
Contributor

xxirii commented Apr 17, 2019

@mccoys in this case:

Vectorization(
    mode = "on",
)

@mccoys
Contributor

mccoys commented Apr 17, 2019

@Tissot11 Your crash is puzzling, because I have successfully run 12000+ iterations on 2 MPI processes, each with 64 threads (128 threads in total).

There might be an issue with your installation. Do you have more details? Compilers? Libraries?
It might also be a strange memory issue ...

By the way, if your sysadmin tells you that you cannot use whole nodes (you have to share with other users), then Smilei will certainly be less performant, because communications increase. The best performance is always achieved by requesting full nodes. If that is impossible, request smaller portions, but make sure that you obtain as many cores as threads, and that you bind threads to cores.

@Tissot11
Author

I paste below the output of the ldd smilei command. We use GCC 6.1, OpenMPI 2.0.1, HDF5 1.8.16. I have launched a new job with vectorization mode on; let's see if that runs. I'll try to use the round-robin queue to launch a job with 4 MPI processes and 28 threads each.

linux-vdso.so.1 => (0x00007ffdebba7000)
libhdf5.so.10 => /usr/local/Packages/PIC/hdf5/lib/libhdf5.so.10 (0x00007f50694a0000)
libpython2.7.so.1.0 => /lib64/libpython2.7.so.1.0 (0x00007f50690d4000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f5068eb8000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f5068cb4000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f5068ab1000)
libm.so.6 => /lib64/libm.so.6 (0x00007f50687af000)
libmpi.so.20 => /usr/local/Packages/openmpi-2.0.1-gcc62-sl7/lib/libmpi.so.20 (0x00007f50684a9000)
libstdc++.so.6 => /usr/local/Packages/gcc-6.1.0/lib64/libstdc++.so.6 (0x00007f5068114000)
libgomp.so.1 => /usr/local/Packages/gcc-6.1.0/lib64/libgomp.so.1 (0x00007f5067ee7000)
libgcc_s.so.1 => /usr/local/Packages/gcc-6.1.0/lib64/libgcc_s.so.1 (0x00007f5067cd1000)
libc.so.6 => /lib64/libc.so.6 (0x00007f5067904000)
libz.so.1 => /lib64/libz.so.1 (0x00007f50676ee000)
/lib64/ld-linux-x86-64.so.2 (0x00007f50699a2000)
libopen-rte.so.20 => /usr/local/Packages/openmpi-2.0.1-gcc62-sl7/lib/libopen-rte.so.20 (0x00007f5067464000)
libopen-pal.so.20 => /usr/local/Packages/openmpi-2.0.1-gcc62-sl7/lib/libopen-pal.so.20 (0x00007f506716f000)
librt.so.1 => /lib64/librt.so.1 (0x00007f5066f67000)

@Tissot11
Author

Neither the vectorization mode nor launching the job on a machine (occupying all cores) with more than 1 thread per core worked for me. The code always crashed with a segmentation fault. Which versions of OpenMPI, GCC and HDF5 are you using?

@mccoys
Contributor

mccoys commented Apr 17, 2019

In my case: GCC 8.3, OpenMPI 3.1 and HDF5 1.10.

The code is usually also tested with Intel 2018 and HDF5 1.8.16.

@Tissot11
Author

And which version of Python? I need to have the exact same configuration as yours (2 MPI processes, each with 64 threads, 128 threads in total) for this run, to figure out the problem on our cluster and also to try running more threads and fewer MPI processes, as you had suggested.

@mccoys
Contributor

mccoys commented Apr 17, 2019

I use Python 3.7, but you should not try my exact configuration. I ran 64 threads per process on another machine, which is KNL, thus very different from yours.

Your configuration should be fine: I used it recently. The problem is either that it has not been installed properly, or not setup correctly with respect to your hardware.

@Tissot11
Author

Our sysadmin wanted to have the exact configuration that you used, in order to find the problem. He will look again after the Easter holidays. In the meantime, if you have any idea about what might be wrong on our cluster, please let me know. I would very much like to run Smilei in the most efficient manner. Have a nice Easter holiday!

@mccoys
Contributor

mccoys commented Apr 18, 2019

Wait, I finally got the segmentation fault, after 23000 iterations. This will be hard to investigate ...

@Tissot11
Author

Ok. Please let me know if you figure out the reasons...

@mccoys
Contributor

mccoys commented Apr 19, 2019

@Tissot11 By any chance did you manage to get a stack trace from your failed simulation ?

@mccoys
Contributor

mccoys commented Apr 19, 2019

@Tissot11 In fact, we have had several reports, I think, of issues between openmpi and the MPI_THREAD_MULTIPLE option.

Could you try either:

  • To switch to another version of MPI (for instance intelMPI if available)
  • Or compile smilei with the option config=no_mpi_tm

The second option should do the trick. It might be slightly slower in some cases, but not much.
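
For reference, the rebuild would look something like this (a sketch assuming the usual make-based compilation of Smilei; adapt the -j value to your machine):

make clean
make -j 8 config=no_mpi_tm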

Side note: you are using a Fields diagnostic, but for good performance, the 'Probe' diagnostic is MUCH better.
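
As an illustration, a probe covering the whole box could look something like the block below (a sketch only; the parameter names, the resolution chosen here and the available probe fields should be checked against the Probe documentation):

DiagProbe(
    every = 1500,
    origin = [0., 0.],
    corners = [ [80.*l0, 0.], [0., 6.*l0] ],
    number = [3200, 240],
    fields = ['Ex','Ey','Bz','Rho_ion','Rho_eon']
)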

@Tissot11
Author

I don't have a stack trace of the failed simulations. I can ask our sysadmin next week when he is back. We also have MVAPICH2 available on our cluster, if that is recommended. I'll have to ask whether IntelMPI can be arranged. I'll compile and run Smilei with config=no_mpi_tm and let you know how it goes. Do you also think I should take care of the vader issue while running the simulation?

http://www.maisondelasimulation.fr/smilei/run.html

Thanks for the tips on diagnostics! I'm a noob to Smilei.

@mccoys
Contributor

mccoys commented Apr 19, 2019

You could try the vader thing but I thought it didn't apply to openmpi 3. Not sure though.

The best thing to try is no_mpi_tm.
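
For reference, the vader workaround usually mentioned amounts to disabling its single-copy mechanism at run time, along these lines (to be double-checked against the page linked above):

export OMPI_MCA_btl_vader_single_copy_mechanism=none
# or, equivalently, on the command line:
mpirun --mca btl_vader_single_copy_mechanism none -np 256 smilei my_input.py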

@Tissot11
Author

On the suggestion of our sysadmin, I compiled the code in debug mode with no_mpi_tm and ran gdb on the core files. The output looks different now (because of the debug mode). I paste below a section of the backtrace full output. If you have any suggestions for trying a particular command, please let me know.

#0 0x00002b82f488c18c in malloc () from /lib64/libc.so.6
No symbol table info available.
#1 0x00002b82f40bde88 in operator new (sz=320) at ../../../../libstdc++-v3/libsupc++/new_op.cc:50
p =
#2 0x000000000071b3dd in __gnu_cxx::new_allocator<void*>::allocate (this=0x2b830e403440, __n=40)
at /nfs/us1-linux/Local/Packages/gcc-6.2.0/include/c++/6.2.0/ext/new_allocator.h:104
No locals.
#3 0x000000000071a480 in std::allocator_traits<std::allocator<void*> >::allocate (__a=..., __n=40)
at /nfs/us1-linux/Local/Packages/gcc-6.2.0/include/c++/6.2.0/bits/alloc_traits.h:416
No locals.
#4 0x0000000000718f03 in std::_Vector_base<void*, std::allocator<void*> >::_M_allocate (this=0x2b830e403440, __n=40)
at /nfs/us1-linux/Local/Packages/gcc-6.2.0/include/c++/6.2.0/bits/stl_vector.h:170
No locals.
#5 0x0000000000717c99 in std::vector<void*, std::allocator<void*> >::_M_default_append (this=0x2b830e403440, __n=40)
at /nfs/us1-linux/Local/Packages/gcc-6.2.0/include/c++/6.2.0/bits/vector.tcc:557
__len = 40
__old_size = 0
__new_start = 0x28
__new_finish = 0x28
#6 0x0000000000716cca in std::vector<void*, std::allocator<void*> >::resize (this=0x2b830e403440, __new_size=40)
at /nfs/us1-linux/Local/Packages/gcc-6.2.0/include/c++/6.2.0/bits/stl_vector.h:677
No locals.
#7 0x0000000000714aec in backward::StackTraceImpl<backward::system_tag::linux_tag>::load_here (this=0x2b830e403430, depth=40)
at src/Tools/backward.hpp:704
trace_cnt = 48624700
#8 0x0000000000714b7c in backward::StackTraceImpl<backward::system_tag::linux_tag>::load_from (this=0x2b830e403430,
addr=0x2b82f490708a <__mcount_internal+186>, depth=32) at src/Tools/backward.hpp:711
No locals.
#9 0x0000000000716980 in backward::SignalHandling::handleSignal (info=0x2b830e4035f0, _ctx=0x2b830e4034c0)
at src/Tools/backward.hpp:2233
uctx = 0x2b830e4034c0
st = {<backward::StackTraceImplbackward::system_tag::linux_tag> = {backward::StackTraceImplHolder = {<backward::Stac

@mccoys
Contributor

mccoys commented Apr 25, 2019

I have a lead to investigate, but this is not an easy task. I will get back to you as soon as possible

@Tissot11
Author

Ok. Thanks for looking into this!

@mccoys
Contributor

mccoys commented Apr 29, 2019

I have made some progress: the same error occurs without collisions, as long as the following option is set for vectorization.

Vectorization(
    mode = "adaptive",
    initial_mode = "off",
    reconfigure_every = 0
)

Note that this option is automatically set for collisions. This shows there is some memory corruption due to the adaptive mode, not collisions. Now I will check whether this is also present in the full-vecto mode.

@mccoys
Contributor

mccoys commented Apr 29, 2019

The error happens again with vectorization "on", but fewer arrays seem corrupted. Still, nparts is negative in PusherBorisV::operator().

@Tissot11
Author

Thanks for the update! Does the error happen after a few iterations, as in my case, or does it still take a larger number of iterations (>1000) for you?

@mccoys
Contributor

mccoys commented Apr 30, 2019

I significantly changed your input to reproduce the problem in less time. Still, it takes a few hundred iterations and runs for half an hour. We are still investigating.

@Tissot11
Author

OK. Please let me know if you want me to run any tests etc. on our cluster with respect to this problem.

mccoys changed the title from "Electron-electron collisions" to "Segfault with vectorization" May 6, 2019
@Tissot11
Author

I was wondering if you have any update on this issue?

@mccoys
Contributor

mccoys commented May 13, 2019

We are continuing to investigate, but it is a difficult issue, and last week was a holiday for some people here. We will keep you posted.

@Tissot11
Author

Please take your time. I was just curious to ask about it as I plan to run some big simulations soon.

@jderouillat
Contributor

Hi,
Not sure that it will solve your problem, but it solves the problem on a reproducer provided by @mccoys, extracted from your case.
Could you try replacing PusherBorisV by PusherBoris in src/Pusher/PusherFactory.h?
The first class was initially developed for the vectorized algorithm, but the two are now very similar; the main difference has been moved outside of the main method of the class.
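
To make the substitution explicit, it amounts to changing the factory line that instantiates the pusher (a sketch; the surrounding code is abridged and the exact line number may differ between versions):

// src/Pusher/PusherFactory.h, in  static Pusher *create( Params &params, Species *species )
// before:  Push = new PusherBorisV( params, species );
// after:
Push = new PusherBoris( params, species );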
Regards

Julien

@Tissot11
Author

Tissot11 commented Jun 5, 2019

Hi Julien,

Is it line 23 in the PusherFactory.h file? I have replaced it with PusherBoris ?.h. Just to be sure, do I need to compile the code again?

Best regards
Naveen

@jderouillat
Contributor

No, the one on line 59, in the method static Pusher *create( Params &params, Species *species ).
You have to recompile.

@Tissot11
Author

Tissot11 commented Jun 5, 2019

Thanks for the quick response. Actually, I'm getting errors when compiling. Could you please write down the whole line that needs to be changed? My new line 59 now reads as

Push = new PusherBoris ?( params, species );

And I tried with the spacing of the ? after PusherBorisV. Below is the output from compiling.

In file included from src/Species/SpeciesFactory.h:25:0,
from src/Patch/VectorPatch.h:9,
from src/Collisions/CollisionsSingle.cpp:13:
src/Pusher/PusherFactory.h: In static member function 'static Pusher* PusherFactory::create(Params&, Species*)':
src/Pusher/PusherFactory.h:59:32: error: no matching function for call to 'PusherBoris::PusherBoris()'
Push = new PusherBoris ?( params, species );
^~~~~~~~~~~
In file included from src/Pusher/PusherFactory.h:13:0,
from src/Species/SpeciesFactory.h:25,
from src/Patch/VectorPatch.h:9,
from src/Collisions/CollisionsSingle.cpp:13:
src/Pusher/PusherBoris.h:20:5: note: candidate: PusherBoris::PusherBoris(Params&, Species*)
PusherBoris( Params &params, Species *species );
^~~~~~~~~~~
src/Pusher/PusherBoris.h:20:5: note: candidate expects 2 arguments, 0 provided
src/Pusher/PusherBoris.h:16:7: note: candidate: PusherBoris::PusherBoris(const PusherBoris&)
class PusherBoris : public Pusher
^~~~~~~~~~~
src/Pusher/PusherBoris.h:16:7: note: candidate expects 1 argument, 0 provided
In file included from src/Species/SpeciesFactory.h:25:0,
from src/Patch/VectorPatch.h:9,
from src/Collisions/CollisionsSingle.cpp:13:
src/Pusher/PusherFactory.h:59:47: warning: left operand of comma operator has no effect [-Wunused-value]
Push = new PusherBoris ?( params, species );
^~~~~~
src/Pusher/PusherFactory.h:59:64: error: expected ':' before ';' token
Push = new PusherBoris ?( params, species );
^
src/Pusher/PusherFactory.h:59:64: error: expected primary-expression before ';' token
In file included from src/Species/SpeciesFactory.h:25:0,
from src/Patch/VectorPatch.h:9,
from src/Patch/PatchesFactory.h:4,
from src/Checkpoint/Checkpoint.cpp:25:
src/Pusher/PusherFactory.h: In static member function 'static Pusher* PusherFactory::create(Params&, Species*)':
src/Pusher/PusherFactory.h:59:32: error: no matching function for call to 'PusherBoris::PusherBoris()'
Push = new PusherBoris ?( params, species );
^~~~~~~~~~~
In file included from src/Pusher/PusherFactory.h:13:0,
from src/Species/SpeciesFactory.h:25,
from src/Patch/VectorPatch.h:9,
from src/Patch/PatchesFactory.h:4,
from src/Checkpoint/Checkpoint.cpp:25:
src/Pusher/PusherBoris.h:20:5: note: candidate: PusherBoris::PusherBoris(Params&, Species*)
PusherBoris( Params &params, Species *species );
^~~~~~~~~~~
src/Pusher/PusherBoris.h:20:5: note: candidate expects 2 arguments, 0 provided
src/Pusher/PusherBoris.h:16:7: note: candidate: PusherBoris::PusherBoris(const PusherBoris&)
class PusherBoris : public Pusher
^~~~~~~~~~~
src/Pusher/PusherBoris.h:16:7: note: candidate expects 1 argument, 0 provided
In file included from src/Species/SpeciesFactory.h:25:0,
from src/Patch/VectorPatch.h:9,
from src/Patch/PatchesFactory.h:4,
from src/Checkpoint/Checkpoint.cpp:25:
src/Pusher/PusherFactory.h:59:47: warning: left operand of comma operator has no effect [-Wunused-value]
Push = new PusherBoris ?( params, species );
^~~~~~
src/Pusher/PusherFactory.h:59:64: error: expected ':' before ';' token
Push = new PusherBoris ?( params, species );
^
src/Pusher/PusherFactory.h:59:64: error: expected primary-expression before ';' token
In file included from src/Species/SpeciesFactory.h:25:0,
from src/Patch/VectorPatch.h:9,
from src/Collisions/Collisions.cpp:13:
src/Pusher/PusherFactory.h: In static member function 'static Pusher* PusherFactory::create(Params&, Species*)':
src/Pusher/PusherFactory.h:59:32: error: no matching function for call to 'PusherBoris::PusherBoris()'
Push = new PusherBoris ?( params, species );
^~~~~~~~~~~
In file included from src/Pusher/PusherFactory.h:13:0,
from src/Species/SpeciesFactory.h:25,
from src/Patch/VectorPatch.h:9,
from src/Collisions/Collisions.cpp:13:
src/Pusher/PusherBoris.h:20:5: note: candidate: PusherBoris::PusherBoris(Params&, Species*)
PusherBoris( Params &params, Species *species );
^~~~~~~~~~~
src/Pusher/PusherBoris.h:20:5: note: candidate expects 2 arguments, 0 provided
src/Pusher/PusherBoris.h:16:7: note: candidate: PusherBoris::PusherBoris(const PusherBoris&)
class PusherBoris : public Pusher
^~~~~~~~~~~
src/Pusher/PusherBoris.h:16:7: note: candidate expects 1 argument, 0 provided
In file included from src/Species/SpeciesFactory.h:25:0,
from src/Patch/VectorPatch.h:9,
from src/Collisions/Collisions.cpp:13:
src/Pusher/PusherFactory.h:59:47: warning: left operand of comma operator has no effect [-Wunused-value]
Push = new PusherBoris ?( params, species );
^~~~~~
src/Pusher/PusherFactory.h:59:64: error: expected ':' before ';' token
Push = new PusherBoris ?( params, species );
^
src/Pusher/PusherFactory.h:59:64: error: expected primary-expression before ';' token
make: *** [build/src/Collisions/CollisionsSingle.o] Error 1
make: *** Waiting for unfinished jobs....
make: *** [build/src/Collisions/Collisions.o] Error 1
make: *** [build/src/Checkpoint/Checkpoint.o] Error 1

jderouillat added a commit that referenced this issue Jun 5, 2019
@jderouillat
Contributor

Could you test the branch fixtest_106 ?

@Tissot11
Author

Tissot11 commented Jun 6, 2019

Thanks for the fix. I compiled the code and am now running it with 4 MPI processes of 32 threads each (a 4✕32 configuration). The code is running, but it seems this fix has made Smilei slower. The same run in a 256✕1 configuration produced its first output in less than 1000 seconds, while the 4✕32 configuration has yet to produce its first output entry after two hours.

@jderouillat
Contributor

It could indeed slow down the code slightly (depending on the number of particles per cell), but not as much as you are seeing.
I suspect the process and thread placement/affinity.
To check that the proposed workaround has an effect, I suggest you do not modify the context in which you saw the crash, so for now keep the 256 x 1 configuration.
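
To check the placement, one starting point could be to reuse the settings suggested earlier in this thread, adapted to the 4✕32 case (a sketch, to be adapted to your scheduler; --report-bindings makes mpirun print where each rank is actually pinned):

export OMP_NUM_THREADS=32
export OMP_SCHEDULE=dynamic
export OMP_PROC_BIND=true
mpirun -np 4 --bind-to node --report-bindings smilei my_input.py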

jderouillat added a commit that referenced this issue Jun 6, 2019
@Tissot11
Author

Tissot11 commented Jun 6, 2019

Actually, the crash before was occurring for 4✕32-type configurations; 256✕1-type configurations I could run successfully. But Smilei is supposed to be faster in 4✕32-type configurations. I did a quick check, and with the fix Smilei is a bit slower in 256✕1-type configurations (compared to the previous version of Smilei), but still much faster than in 4✕32-type configurations (run with the round-robin queue system on our cluster). But now I see that you have pushed a new fix. I'll fetch it, compile the code again, and let you know if the new fix speeds up the runtime.

@mccoys
Contributor

mccoys commented Jun 6, 2019

The most important aspect is to solve the segfault! Please confirm whether the segfault is gone.

@Tissot11
Author

Tissot11 commented Jun 6, 2019

With the second fix, the segmentation fault also seems to be gone, based on the results so far! The code is also much faster in 4✕32-type configurations compared to the first fix. However, 128✕1-type configurations are still a bit faster than 4✕32-type configurations with this second fix. The speed of 128✕1-type configurations remains almost the same with the first and second fixes.

@Tissot11
Author

Tissot11 commented Jun 6, 2019

Actually, it now seems to run at the same speed in both configurations. The previous post concerned the first output, which was slower with the 4✕32-type configuration. But for the second output, the 4✕32-type configuration seems to be marginally faster than the 128✕1-type configuration. I guess I should wait for the simulation to finish; I'll let you know tomorrow. Do you plan to push the fix to the master branch?

@mccoys
Contributor

mccoys commented Jun 14, 2019

@Tissot11 As this has been merged, I am closing the issue. Don't hesitate to reopen if you see it reappear.

mccoys closed this as completed Jun 14, 2019
@Tissot11
Author

Yeah, thanks for fixing this issue! So far it has been working fine. I'll let you know in case it comes up again!
