This repository has been archived by the owner on Mar 20, 2023. It is now read-only.

Require C++17 #834

Merged
merged 9 commits into master on Jul 6, 2022

Conversation


@olupton olupton commented Jul 1, 2022

Description
Bump the required C++ standard to C++17 and make some changes that were previously blocked by the C++14 requirement.

TODO:

Use specific branches in the CI pipelines:

CI_BRANCHES:NEURON_BRANCH=olupton/c++17,NMODL_BRANCH=master,SPACK_BRANCH=develop

@olupton olupton mentioned this pull request Jul 1, 2022
@bbpbuildbot

This comment was marked as outdated.

@olupton olupton closed this Jul 5, 2022
@olupton olupton reopened this Jul 5, 2022
@olupton olupton marked this pull request as ready for review July 5, 2022 08:19

olupton commented Jul 5, 2022

Failures all came from the same GPU node (ldir01u13), but the GPU tests that passed also ran on the same node.

One distinguishing feature between the GPU tests that passed and the ones that failed is that the passing ones do not have OpenMP host threading enabled. This may or may not be relevant; I can try to reproduce it locally.

Some backtraces from the core dumps in job 283903 (I only checked these 3; there are 7 more, and they'll be deleted in O(24hrs)):

test/reduced_dentate/coreneuron_gpu:

(gdb) bt 30
#0  0x00002aaaaea26387 in raise () from /lib64/libc.so.6
#1  0x00002aaaaea27a78 in abort () from /lib64/libc.so.6
#2  0x00002aaaad68f972 in launchInternal (module=<optimized out>, launchConfig=<optimized out>, async=<optimized out>, streamId=<optimized out>, pLaunchInfo=<optimized out>) at platform_cuda/hxCuda.c:3637
#3  0x00002aaaad68aacb in targetLaunch (module=0x4c30100 <__PGI_CUDA_LOC>, threadModel=0x7ffffffebcf8, hostFunc=0x498e40 <__nv__ZN10coreneuron15nrn_state_ccanlEPNS_9NrnThreadEPNS_9Memb_listEi_F1L491_5()>, streamId=-1364995379, args=0xffffffffffffffff,
    async=<optimized out>) at hxInterface.c:502
#4  launchInternal (module=0x4c30100 <__PGI_CUDA_LOC>, threadModel=0x7ffffffebcf8, hostFunc=0x498e40 <__nv__ZN10coreneuron15nrn_state_ccanlEPNS_9NrnThreadEPNS_9Memb_listEi_F1L491_5()>, args=0xffffffffffffffff, async=<optimized out>, streamId=-1364995379,
    willJoin=<optimized out>) at hxInterface.c:550
#5  0x00002aaaad68a7d3 in hxLaunch (module=0xd646, threadModel=0xd646, hostFunc=0x6, args=0xffffffffffffffff) at hxInterface.c:140
#6  0x00002aaaad66e0da in launchHXTarget (filename=<optimized out>, funcname=<optimized out>, lineno=<optimized out>, module=<optimized out>, deviceId=3, hostFuncPtr=<optimized out>, numArgs=<optimized out>, deviceArgBuffer=0x7ffffffec240, deviceArgBufferSize=80,
    numTeams=0, threadLimit=0, numThreads=0, preferredNumThreads=1240, maxThreadsPerBlock=128, maxBlocks=0, mode=mode_target_teams_distribute_parallel_for, flags=7, sharedMemBytes=0, async=-1) at nvomp_target.c:382
#7  0x00002aaaad66a54b in launchTarget (
    filename=0x7dfaa0 <.F001140646__ZN52_INTERNAL_30_x86_64_corenrn_mod2c_ccanl_cpp_c84493ba10coreneuron6detail24nrn_buildjacobian_threadINS0_54_GLOBAL__N__30_x86_64_corenrn_mod2c_ccanl_cpp_c84493ba23_newton_integrate_ccanlEEEvPNS0_11NewtonSpaceEiPiRKT_PdPSB_iiSB_S7_PNS0_11ThreadDatumEPNS0_9NrnThreadEd> "/gpfs/bbp.cscs.ch/ssd/gitlab_map_jobs/bbpcihpcproj12/P62963/J280445/spack-build/spack-stage-neuron-develop-ttzbnmyxat3koqok7vfmt4wx5p43atgy/spack-build-ttzbnmy/test/nrnivmodl/658865e2ea1a069e909a99753"...,
    funcname=0x7dfbf0 <.F003941205__ZN52_INTERNAL_30_x86_64_corenrn_mod2c_ccanl_cpp_c84493ba10coreneuron6detail24nrn_buildjacobian_threadINS0_54_GLOBAL__N__30_x86_64_corenrn_mod2c_ccanl_cpp_c84493ba23_newton_integrate_ccanlEEEvPNS0_11NewtonSpaceEiPiRKT_PdPSB_iiSB_S7_PNS0_11ThreadDatumEPNS0_9NrnThreadEd> "_ZN10coreneuron15nrn_state_ccanlEPNS_9NrnThreadEPNS_9Memb_listEi", lineno=491, module=0x4c30100 <__PGI_CUDA_LOC>, deviceId=3, host_ptr=0x498e40 <__nv__ZN10coreneuron15nrn_state_ccanlEPNS_9NrnThreadEPNS_9Memb_listEi_F1L491_5()>,
    args_num=10, args_base=0x7ffffffec968, args=0x7ffffffec9b8, arg_sizes=0x7ffffffec900, arg_types=0x7ffffffec8b0, num_teams=0, thread_limit=0, num_threads=0, mode=<optimized out>, flags=7, loop_trip_count=1240, sharedMemBytes=0, globalMemBytes=0, async=-1,
    targetargs_ptr=0x7ffffffec950, targetargs_size=3, ndeps=0, dep_list=0x0) at nvomp_target.c:1085
#8  0x00002aaaad669851 in __nvomp_target (filename=0xd646 <error: Cannot access memory at address 0xd646>, funcname=0xd646 <error: Cannot access memory at address 0xd646>, lineno=6, module=0xffffffffffffffff, device_id_64bit=<optimized out>,
    host_ptr=0x498e40 <__nv__ZN10coreneuron15nrn_state_ccanlEPNS_9NrnThreadEPNS_9Memb_listEi_F1L491_5()>, args_num=<optimized out>, args_base=<optimized out>, args=<optimized out>, arg_sizes=<optimized out>, arg_types=<optimized out>, num_teams=<optimized out>,
    thread_limit=<optimized out>, num_threads=<optimized out>, mode=<optimized out>, flags=<optimized out>, loop_trip_count=<optimized out>, sharedMemBytes=<optimized out>, globalMemBytes=<optimized out>, nowait=<optimized out>, targetargs_ptr=<optimized out>,
    targetargs_size=<optimized out>) at nvomp_target.c:1180
#9  0x0000000000498dfb in coreneuron::nrn_state_ccanl (nt=<optimized out>, ml=<optimized out>) at x86_64/corenrn/mod2c/ccanl.cpp:493
#10 0x00000000007192db in coreneuron::nonvint (_nt=0x7d1f1e8) at ../spack-src/coreneuron/sim/fadvance_core.cpp:247
#11 0x0000000000719d72 in coreneuron::nrn_fixed_step_lastpart (nth=0x7d1f1e8) at ../spack-src/coreneuron/sim/fadvance_core.cpp:385
#12 0x00000000007183be in __nv__ZN10coreneuron19nrn_multithread_jobIRFPvPNS_9NrnThreadEEJEEEvOT_DpOT0__F208L166_1 () at ../spack-src/coreneuron/sim/multicore.hpp:167
#13 0x00002aaaad667975 in targetFuncHostTrampoline (teamArgs=0x7ffffffee2c0, gtid=0x7ffffffecc24, btid=0x7ffffffecc20, argv=0x7ffffffee2e0, argc=1, generic=<optimized out>) at nvomp_team.c:1234
#14 targetFuncHostTrampoline_inline (gtid=0x7ffffffecc24, btid=0x7ffffffecc20, args=0x7ffffffee2c0, argc=1) at nvomp_team.c:1308
#15 targetFuncHostTrampoline_1 (gtid=0x7ffffffecc24, btid=0x7ffffffecc20, args=0x7ffffffee2c0) at nvomp_team.c:1326
#16 0x00002aaaad6982c9 in hxiEmulateHostThreadLaunch (tid=0, hostFunc=0x2aaaad667900 <targetFuncHostTrampoline_1>, args=0x7ffffffee2c0, userData=<optimized out>, cachedUserData=0x7ffffffee1e0, flags=0) at platform_host/hxHostThreads.c:559
#17 0x00002aaaad68a8c2 in launchInternal (module=0x0, threadModel=0x7ffffffee210, hostFunc=0x2aaaad667900 <targetFuncHostTrampoline_1>, args=0x7ffffffee2c0, async=<optimized out>, streamId=0, willJoin=<optimized out>) at hxInterface.c:575
#18 0x00002aaaad68a7d3 in hxLaunch (module=0xd646, threadModel=0xd646, hostFunc=0x6, args=0xffffffffffffffff) at hxInterface.c:140
#19 0x00002aaaad667233 in launchTeam (module=0x0, hxThreadModel=0xd646, funcToLaunch=0x6, funcToLaunchArgs=0xffffffffffffffff) at nvomp_team.c:1004
#20 0x00002aaaad6644d6 in runNewTeam (num_threads=<optimized out>, targetFunc=0x718300 <__nv__ZN10coreneuron19nrn_multithread_jobIRFPvPNS_9NrnThreadEEJEEEvOT_DpOT0__F208L166_1()>, argc=1, argsV1=<optimized out>, argsV2=0x0, argsV3=0x0, targetTaskArgs=0x0,
    targetTask=<optimized out>, disableOffload=<optimized out>) at nvomp_team.c:827
#21 nvompRunNewTeam (num_threads=<optimized out>, targetFunc=0x718300 <__nv__ZN10coreneuron19nrn_multithread_jobIRFPvPNS_9NrnThreadEEJEEEvOT_DpOT0__F208L166_1()>, argc=1, args=<optimized out>) at nvomp_team.c:108
#22 0x00002aaaad67ec8e in __kmpc_fork_call (loc=<optimized out>, argc=1, microtask=0x718300 <__nv__ZN10coreneuron19nrn_multithread_jobIRFPvPNS_9NrnThreadEEJEEEvOT_DpOT0__F208L166_1()>) at kmpc.cpp:123
#23 0x0000000000718295 in coreneuron::nrn_multithread_job (job=<optimized out>) at ../spack-src/coreneuron/sim/multicore.hpp:168
#24 coreneuron::nrn_fixed_step_minimal () at ../spack-src/coreneuron/sim/fadvance_core.cpp:105
#25 0x00000000007184b8 in coreneuron::nrn_fixed_single_steps_minimal (total_sim_steps=<optimized out>, tstop=<optimized out>) at ../spack-src/coreneuron/sim/fadvance_core.cpp:139
#26 0x0000000000662048 in coreneuron::ncs2nrn_integrate (tstop=0) at ../spack-src/coreneuron/network/netcvode.cpp:494
#27 0x0000000000680429 in coreneuron::BBS_netpar_solve (tstop=10) at ../spack-src/coreneuron/network/netpar.cpp:622
#28 0x0000000000510bac in run_solve_core (argc=<optimized out>, argv=<optimized out>) at ../spack-src/coreneuron/apps/main1.cpp:628

test/olfactory-bulb-3d/coreneuron_gpu_online:

(gdb) bt 30
#0  0x00002aaaae997387 in raise () from /lib64/libc.so.6
#1  0x00002aaaae998a78 in abort () from /lib64/libc.so.6
#2  0x00002aaaad600972 in launchInternal (module=<optimized out>, launchConfig=<optimized out>, async=<optimized out>, streamId=<optimized out>, pLaunchInfo=<optimized out>) at platform_cuda/hxCuda.c:3637
#3  0x00002aaaad5fbacb in targetLaunch (module=0x3f23fc0 <__PGI_CUDA_LOC>, threadModel=0x7ffffffece38, hostFunc=0x486e00 <__nv__ZN10coreneuron13nrn_state_ornEPNS_9NrnThreadEPNS_9Memb_listEi_F1L803_13()>, streamId=-1365581107, args=0xffffffffffffffff,
    async=<optimized out>) at hxInterface.c:502
#4  launchInternal (module=0x3f23fc0 <__PGI_CUDA_LOC>, threadModel=0x7ffffffece38, hostFunc=0x486e00 <__nv__ZN10coreneuron13nrn_state_ornEPNS_9NrnThreadEPNS_9Memb_listEi_F1L803_13()>, args=0xffffffffffffffff, async=<optimized out>, streamId=-1365581107,
    willJoin=<optimized out>) at hxInterface.c:550
#5  0x00002aaaad5fb7d3 in hxLaunch (module=0xeeb7, threadModel=0xeeb7, hostFunc=0x6, args=0xffffffffffffffff) at hxInterface.c:140
#6  0x00002aaaad5df0da in launchHXTarget (filename=<optimized out>, funcname=<optimized out>, lineno=<optimized out>, module=<optimized out>, deviceId=3, hostFuncPtr=<optimized out>, numArgs=<optimized out>, deviceArgBuffer=0x7ffffffed380, deviceArgBufferSize=80,
    numTeams=0, threadLimit=0, numThreads=0, preferredNumThreads=15, maxThreadsPerBlock=128, maxBlocks=0, mode=mode_target_teams_distribute_parallel_for, flags=7, sharedMemBytes=0, async=-1) at nvomp_target.c:382
#7  0x00002aaaad5db54b in launchTarget (
    filename=0x7ae5e0 <.F001441067__ZN50_INTERNAL_28_x86_64_corenrn_mod2c_orn_cpp_7815f77312bbcore_writeEPdPiS1_S1_iiS0_S1_PN10coreneuron11ThreadDatumEPNS2_9NrnThreadEd> "/gpfs/bbp.cscs.ch/ssd/gitlab_map_jobs/bbpcihpcproj12/P62963/J280445/spack-build/spack-stage-neuron-develop-ttzbnmyxat3koqok7vfmt4wx5p43atgy/spack-build-ttzbnmy/test/nrnivmodl/14c92ceb412f1b4e766bfda05"...,
    funcname=0x7ae7b0 <.F005442596__ZN50_INTERNAL_28_x86_64_corenrn_mod2c_orn_cpp_7815f77312bbcore_writeEPdPiS1_S1_iiS0_S1_PN10coreneuron11ThreadDatumEPNS2_9NrnThreadEd> "_ZN10coreneuron13nrn_state_ornEPNS_9NrnThreadEPNS_9Memb_listEi", lineno=803,
    module=0x3f23fc0 <__PGI_CUDA_LOC>, deviceId=3, host_ptr=0x486e00 <__nv__ZN10coreneuron13nrn_state_ornEPNS_9NrnThreadEPNS_9Memb_listEi_F1L803_13()>, args_num=10, args_base=0x7ffffffedaa8, args=0x7ffffffedaf8, arg_sizes=0x7ffffffeda40, arg_types=0x7ffffffed9f0,
    num_teams=0, thread_limit=0, num_threads=0, mode=<optimized out>, flags=7, loop_trip_count=15, sharedMemBytes=0, globalMemBytes=0, async=-1, targetargs_ptr=0x7ffffffeda90, targetargs_size=3, ndeps=0, dep_list=0x0) at nvomp_target.c:1085
#8  0x00002aaaad5da851 in __nvomp_target (filename=0xeeb7 <error: Cannot access memory at address 0xeeb7>, funcname=0xeeb7 <error: Cannot access memory at address 0xeeb7>, lineno=6, module=0xffffffffffffffff, device_id_64bit=<optimized out>,
    host_ptr=0x486e00 <__nv__ZN10coreneuron13nrn_state_ornEPNS_9NrnThreadEPNS_9Memb_listEi_F1L803_13()>, args_num=<optimized out>, args_base=<optimized out>, args=<optimized out>, arg_sizes=<optimized out>, arg_types=<optimized out>, num_teams=<optimized out>,
    thread_limit=<optimized out>, num_threads=<optimized out>, mode=<optimized out>, flags=<optimized out>, loop_trip_count=<optimized out>, sharedMemBytes=<optimized out>, globalMemBytes=<optimized out>, nowait=<optimized out>, targetargs_ptr=<optimized out>,
    targetargs_size=<optimized out>) at nvomp_target.c:1180
#9  0x0000000000486dbb in coreneuron::nrn_state_orn (nt=<optimized out>, ml=<optimized out>) at x86_64/corenrn/mod2c/orn.cpp:805
#10 0x00000000006e879b in coreneuron::nonvint (_nt=0x1155ae58) at ../spack-src/coreneuron/sim/fadvance_core.cpp:247
#11 0x00000000006e9232 in coreneuron::nrn_fixed_step_lastpart (nth=0x1155ae58) at ../spack-src/coreneuron/sim/fadvance_core.cpp:385
#12 0x00000000006e787e in __nv__ZN10coreneuron19nrn_multithread_jobIRFPvPNS_9NrnThreadEEJEEEvOT_DpOT0__F208L166_1 () at ../spack-src/coreneuron/sim/multicore.hpp:167
#13 0x00002aaaad5d8975 in targetFuncHostTrampoline (teamArgs=0x7ffffffef400, gtid=0x7ffffffedd64, btid=0x7ffffffedd60, argv=0x7ffffffef420, argc=1, generic=<optimized out>) at nvomp_team.c:1234
#14 targetFuncHostTrampoline_inline (gtid=0x7ffffffedd64, btid=0x7ffffffedd60, args=0x7ffffffef400, argc=1) at nvomp_team.c:1308
#15 targetFuncHostTrampoline_1 (gtid=0x7ffffffedd64, btid=0x7ffffffedd60, args=0x7ffffffef400) at nvomp_team.c:1326
#16 0x00002aaaad6092c9 in hxiEmulateHostThreadLaunch (tid=0, hostFunc=0x2aaaad5d8900 <targetFuncHostTrampoline_1>, args=0x7ffffffef400, userData=<optimized out>, cachedUserData=0x7ffffffef320, flags=0) at platform_host/hxHostThreads.c:559
#17 0x00002aaaad5fb8c2 in launchInternal (module=0x0, threadModel=0x7ffffffef350, hostFunc=0x2aaaad5d8900 <targetFuncHostTrampoline_1>, args=0x7ffffffef400, async=<optimized out>, streamId=0, willJoin=<optimized out>) at hxInterface.c:575
#18 0x00002aaaad5fb7d3 in hxLaunch (module=0xeeb7, threadModel=0xeeb7, hostFunc=0x6, args=0xffffffffffffffff) at hxInterface.c:140
#19 0x00002aaaad5d8233 in launchTeam (module=0x0, hxThreadModel=0xeeb7, funcToLaunch=0x6, funcToLaunchArgs=0xffffffffffffffff) at nvomp_team.c:1004
#20 0x00002aaaad5d54d6 in runNewTeam (num_threads=<optimized out>, targetFunc=0x6e77c0 <__nv__ZN10coreneuron19nrn_multithread_jobIRFPvPNS_9NrnThreadEEJEEEvOT_DpOT0__F208L166_1()>, argc=1, argsV1=<optimized out>, argsV2=0x0, argsV3=0x0, targetTaskArgs=0x0,
    targetTask=<optimized out>, disableOffload=<optimized out>) at nvomp_team.c:827
#21 nvompRunNewTeam (num_threads=<optimized out>, targetFunc=0x6e77c0 <__nv__ZN10coreneuron19nrn_multithread_jobIRFPvPNS_9NrnThreadEEJEEEvOT_DpOT0__F208L166_1()>, argc=1, args=<optimized out>) at nvomp_team.c:108
#22 0x00002aaaad5efc8e in __kmpc_fork_call (loc=<optimized out>, argc=1, microtask=0x6e77c0 <__nv__ZN10coreneuron19nrn_multithread_jobIRFPvPNS_9NrnThreadEEJEEEvOT_DpOT0__F208L166_1()>) at kmpc.cpp:123
#23 0x00000000006e7755 in coreneuron::nrn_multithread_job (job=<optimized out>) at ../spack-src/coreneuron/sim/multicore.hpp:168
#24 coreneuron::nrn_fixed_step_minimal () at ../spack-src/coreneuron/sim/fadvance_core.cpp:105
#25 0x00000000006e7978 in coreneuron::nrn_fixed_single_steps_minimal (total_sim_steps=<optimized out>, tstop=<optimized out>) at ../spack-src/coreneuron/sim/fadvance_core.cpp:139
#26 0x0000000000631508 in coreneuron::ncs2nrn_integrate (tstop=0) at ../spack-src/coreneuron/network/netcvode.cpp:494
#27 0x000000000064f8e9 in coreneuron::BBS_netpar_solve (tstop=50) at ../spack-src/coreneuron/network/netpar.cpp:622
#28 0x00000000004e006c in run_solve_core (argc=<optimized out>, argv=<optimized out>) at ../spack-src/coreneuron/apps/main1.cpp:628
#29 0x0000000000465a4e in corenrn_embedded_run (nthread=<optimized out>, have_gaps=<optimized out>, use_mpi=<optimized out>, use_fast_imem=<optimized out>, mpi_lib=<optimized out>, nrn_arg=<optimized out>)
    at ../../../../../../../software/install_nvhpc-22.3-skylake/coreneuron-develop-3ua57a/share/coreneuron/enginemech.cpp:109

test/testcorenrn_conc/coreneuron_gpu_online:

(gdb) bt 30
#0  0x00002aaaae997387 in raise () from /lib64/libc.so.6
#1  0x00002aaaae998a78 in abort () from /lib64/libc.so.6
#2  0x00002aaaad600972 in launchInternal (module=<optimized out>, launchConfig=<optimized out>, async=<optimized out>, streamId=<optimized out>, pLaunchInfo=<optimized out>) at platform_cuda/hxCuda.c:3637
#3  0x00002aaaad5fbacb in targetLaunch (module=0x3ae5980 <__PGI_CUDA_LOC>, threadModel=0x7ffffffed6d8, hostFunc=0x472580 <__nv__ZN10coreneuron17nrn_state_hhderivEPNS_9NrnThreadEPNS_9Memb_listEi_F1L685_9()>, streamId=-1365581107, args=0xffffffffffffffff,
    async=<optimized out>) at hxInterface.c:502
#4  launchInternal (module=0x3ae5980 <__PGI_CUDA_LOC>, threadModel=0x7ffffffed6d8, hostFunc=0x472580 <__nv__ZN10coreneuron17nrn_state_hhderivEPNS_9NrnThreadEPNS_9Memb_listEi_F1L685_9()>, args=0xffffffffffffffff, async=<optimized out>, streamId=-1365581107,
    willJoin=<optimized out>) at hxInterface.c:550
#5  0x00002aaaad5fb7d3 in hxLaunch (module=0x6dfe, threadModel=0x6dfe, hostFunc=0x6, args=0xffffffffffffffff) at hxInterface.c:140
#6  0x00002aaaad5df0da in launchHXTarget (filename=<optimized out>, funcname=<optimized out>, lineno=<optimized out>, module=<optimized out>, deviceId=0, hostFuncPtr=<optimized out>, numArgs=<optimized out>, deviceArgBuffer=0x7ffffffedc20, deviceArgBufferSize=80,
    numTeams=0, threadLimit=0, numThreads=0, preferredNumThreads=1, maxThreadsPerBlock=128, maxBlocks=0, mode=mode_target_teams_distribute_parallel_for, flags=7, sharedMemBytes=0, async=-1) at nvomp_target.c:382
#7  0x00002aaaad5db54b in launchTarget (
    filename=0x79f720 <.F000841640__ZN54_INTERNAL_32_x86_64_corenrn_mod2c_hhderiv_cpp_636e65f610coreneuron6detail24nrn_buildjacobian_threadINS0_56_GLOBAL__N__32_x86_64_corenrn_mod2c_hhderiv_cpp_636e65f622_newton_states_hhderivEEEvPNS0_11NewtonSpaceEiPiRKT_PdPSB_iiSB_S7_PNS0_11ThreadDatumEPNS0_9NrnThreadEd> "/gpfs/bbp.cscs.ch/ssd/gitlab_map_jobs/bbpcihpcproj12/P62963/J280445/spack-build/spack-stage-neuron-develop-ttzbnmyxat3koqok7vfmt4wx5p43atgy/spack-build-ttzbnmy/test/nrnivmodl/b2ea4e0b95169c3eca66d9e51"...,
    funcname=0x79f8e0 <.F004243029__ZN54_INTERNAL_32_x86_64_corenrn_mod2c_hhderiv_cpp_636e65f610coreneuron6detail24nrn_buildjacobian_threadINS0_56_GLOBAL__N__32_x86_64_corenrn_mod2c_hhderiv_cpp_636e65f622_newton_states_hhderivEEEvPNS0_11NewtonSpaceEiPiRKT_PdPSB_iiSB_S7_PNS0_11ThreadDatumEPNS0_9NrnThreadEd> "_ZN10coreneuron17nrn_state_hhderivEPNS_9NrnThreadEPNS_9Memb_listEi", lineno=685, module=0x3ae5980 <__PGI_CUDA_LOC>, deviceId=0, host_ptr=0x472580 <__nv__ZN10coreneuron17nrn_state_hhderivEPNS_9NrnThreadEPNS_9Memb_listEi_F1L685_9()>,
    args_num=10, args_base=0x7ffffffee348, args=0x7ffffffee398, arg_sizes=0x7ffffffee2e0, arg_types=0x7ffffffee290, num_teams=0, thread_limit=0, num_threads=0, mode=<optimized out>, flags=7, loop_trip_count=1, sharedMemBytes=0, globalMemBytes=0, async=-1,
    targetargs_ptr=0x7ffffffee330, targetargs_size=3, ndeps=0, dep_list=0x0) at nvomp_target.c:1085
#8  0x00002aaaad5da851 in __nvomp_target (filename=0x6dfe <error: Cannot access memory at address 0x6dfe>, funcname=0x6dfe <error: Cannot access memory at address 0x6dfe>, lineno=6, module=0xffffffffffffffff, device_id_64bit=<optimized out>,
    host_ptr=0x472580 <__nv__ZN10coreneuron17nrn_state_hhderivEPNS_9NrnThreadEPNS_9Memb_listEi_F1L685_9()>, args_num=<optimized out>, args_base=<optimized out>, args=<optimized out>, arg_sizes=<optimized out>, arg_types=<optimized out>, num_teams=<optimized out>,
    thread_limit=<optimized out>, num_threads=<optimized out>, mode=<optimized out>, flags=<optimized out>, loop_trip_count=<optimized out>, sharedMemBytes=<optimized out>, globalMemBytes=<optimized out>, nowait=<optimized out>, targetargs_ptr=<optimized out>,
    targetargs_size=<optimized out>) at nvomp_target.c:1180
#9  0x000000000047253b in coreneuron::nrn_state_hhderiv (nt=<optimized out>, ml=<optimized out>) at x86_64/corenrn/mod2c/hhderiv.cpp:687
#10 0x00000000006dc39b in coreneuron::nonvint (_nt=0x9b4adb8) at ../spack-src/coreneuron/sim/fadvance_core.cpp:247
#11 0x00000000006dce32 in coreneuron::nrn_fixed_step_lastpart (nth=0x9b4adb8) at ../spack-src/coreneuron/sim/fadvance_core.cpp:385
#12 0x00000000006dcd29 in coreneuron::nrn_fixed_step_thread (nth=0x9b4adb8) at ../spack-src/coreneuron/sim/fadvance_core.cpp:371
#13 0x00000000006db938 in coreneuron::nrn_fixed_step_group_thread (nth=0x9b4adb8, step_group_max=4000, step_group_begin=<optimized out>, step_group_end=0x7fffffff0574) at ../spack-src/coreneuron/sim/fadvance_core.cpp:187
#14 0x00000000006db8e6 in __nv__ZN10coreneuron19nrn_multithread_jobIRFvPNS_9NrnThreadEiiRiEJS3_S3_S3_EEEvOT_DpOT0__F208L166_2 () at ../spack-src/coreneuron/sim/multicore.hpp:167
#15 0x00002aaaad5d8975 in targetFuncHostTrampoline (teamArgs=0x7ffffffefd80, gtid=0x7ffffffee6e4, btid=0x7ffffffee6e0, argv=0x7ffffffefda0, argc=1, generic=<optimized out>) at nvomp_team.c:1234
#16 targetFuncHostTrampoline_inline (gtid=0x7ffffffee6e4, btid=0x7ffffffee6e0, args=0x7ffffffefd80, argc=1) at nvomp_team.c:1308
#17 targetFuncHostTrampoline_1 (gtid=0x7ffffffee6e4, btid=0x7ffffffee6e0, args=0x7ffffffefd80) at nvomp_team.c:1326
#18 0x00002aaaad6092c9 in hxiEmulateHostThreadLaunch (tid=0, hostFunc=0x2aaaad5d8900 <targetFuncHostTrampoline_1>, args=0x7ffffffefd80, userData=<optimized out>, cachedUserData=0x7ffffffefca0, flags=0) at platform_host/hxHostThreads.c:559
#19 0x00002aaaad5fb8c2 in launchInternal (module=0x0, threadModel=0x7ffffffefcd0, hostFunc=0x2aaaad5d8900 <targetFuncHostTrampoline_1>, args=0x7ffffffefd80, async=<optimized out>, streamId=0, willJoin=<optimized out>) at hxInterface.c:575
#20 0x00002aaaad5fb7d3 in hxLaunch (module=0x6dfe, threadModel=0x6dfe, hostFunc=0x6, args=0xffffffffffffffff) at hxInterface.c:140
#21 0x00002aaaad5d8233 in launchTeam (module=0x0, hxThreadModel=0x6dfe, funcToLaunch=0x6, funcToLaunchArgs=0xffffffffffffffff) at nvomp_team.c:1004
#22 0x00002aaaad5d54d6 in runNewTeam (num_threads=<optimized out>, targetFunc=0x6db800 <__nv__ZN10coreneuron19nrn_multithread_jobIRFvPNS_9NrnThreadEiiRiEJS3_S3_S3_EEEvOT_DpOT0__F208L166_2()>, argc=1, argsV1=<optimized out>, argsV2=0x0, argsV3=0x0, targetTaskArgs=0x0,
    targetTask=<optimized out>, disableOffload=<optimized out>) at nvomp_team.c:827
#23 nvompRunNewTeam (num_threads=<optimized out>, targetFunc=0x6db800 <__nv__ZN10coreneuron19nrn_multithread_jobIRFvPNS_9NrnThreadEiiRiEJS3_S3_S3_EEEvOT_DpOT0__F208L166_2()>, argc=1, args=<optimized out>) at nvomp_team.c:108
#24 0x00002aaaad5efc8e in __kmpc_fork_call (loc=<optimized out>, argc=1, microtask=0x6db800 <__nv__ZN10coreneuron19nrn_multithread_jobIRFvPNS_9NrnThreadEiiRiEJS3_S3_S3_EEEvOT_DpOT0__F208L166_2()>) at kmpc.cpp:123
#25 0x00000000006db745 in coreneuron::nrn_multithread_job (args=<optimized out>, args=<optimized out>, args=<optimized out>) at ../spack-src/coreneuron/sim/multicore.hpp:168
#26 coreneuron::nrn_fixed_step_group_minimal (total_sim_steps=<optimized out>) at ../spack-src/coreneuron/sim/fadvance_core.cpp:157
#27 0x0000000000625101 in coreneuron::ncs2nrn_integrate (tstop=0) at ../spack-src/coreneuron/network/netcvode.cpp:492
#28 0x00000000006434e9 in coreneuron::BBS_netpar_solve (tstop=100) at ../spack-src/coreneuron/network/netpar.cpp:622
#29 0x00000000004d3c6c in run_solve_core (argc=<optimized out>, argv=<optimized out>) at ../spack-src/coreneuron/apps/main1.cpp:628

@olupton olupton merged commit 511613e into master Jul 6, 2022
@olupton olupton deleted the olupton/c++17 branch July 6, 2022 06:33
olupton added a commit to neuronsimulator/nrn that referenced this pull request Jul 6, 2022
alexsavulescu added a commit to neuronsimulator/nrn that referenced this pull request Jul 6, 2022
* flex: require >= 2.6 to avoid the `register` keyword.
* flex/C++ standard documentation and fixes.
* Use the `std::` prefix on cout, endl, istream, ostream, etc.
* Drop gcc5 and gcc6; do not explicitly specify default CMake option values.
* Use newer Python patch versions and target macOS 10.14 instead of 10.9.
* Dockerfile: newer flex.
* Dockerfile_gpu: base on `latest-x86_64`.

* C++17 in submodules.
  * Require C++17 in CoreNEURON too -> BlueBrain/CoreNeuron#834.
  * Require C++17 in InterViews too -> iv#43 & iv#44 ("Bump to C++17").
  * Require C++17 in NMODL too -> BlueBrain/nmodl#889.

Co-authored-by: Alexandru Săvulescu <[email protected]>
pramodk pushed a commit to neuronsimulator/nrn that referenced this pull request Nov 2, 2022
* Bump submodule past BlueBrain/nmodl#889.

CoreNEURON Repo SHA: BlueBrain/CoreNeuron@511613e