Exception when structural plasticity and multiple threads are used #629
Conversation
…abled at the same time
I suggest @heplesser, @jakobj and @sanjayankur31 as potential reviewers.
@sdiazpier I approve this already, but you need to fix the conflict in test_all.py, which arose because I just merged #604.
Nice, apart from my small comment, so 👍
void
nest::SPManager::enable_structural_plasticity()
{
  if ( kernel().vp_manager.get_num_threads() > 1 )
Please add curly braces around the statement.
@@ -180,6 +180,10 @@ nest::VPManager::get_status( DictionaryDatum& d )
void
nest::VPManager::set_num_threads( nest::thread n_threads )
{
  if ( kernel().sp_manager.is_structural_plasticity_enabled()
    && ( n_threads > 1 ) )
    throw KernelException(
(see above)
with self.assertRaises(nest.NESTError):
    nest.SetKernelStatus(
        {
            'resolution': 0.1,
Why do you need to change the resolution here?
@sdiazpier There are some small code-formatting problems, and the test for issue #578 now no longer seems to work. That test requires two threads (it is run as a single MPI process):

nest.SetKernelStatus(
    {
        'resolution': 0.1,
        'total_num_virtual_procs': 2
    }
)

So somehow that test and this PR seem to be in conflict with each other?
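For context, here is a small sketch of the virtual-process arithmetic behind this observation (the numbers are illustrative; it assumes NEST's convention that the number of threads per process equals the total number of virtual processes divided by the number of MPI processes):

# Illustrative sketch: with structural plasticity enabled, this PR
# rejects any setup that yields more than one thread per process.
total_num_virtual_procs = 2

num_mpi_processes = 1  # the test run as a single MPI process
threads_per_process = total_num_virtual_procs // num_mpi_processes
print(threads_per_process)  # 2 -> would now raise a KernelException

num_mpi_processes = 2  # the same test run under "mpiexec -n 2"
threads_per_process = total_num_virtual_procs // num_mpi_processes
print(threads_per_process)  # 1 -> fine with structural plasticity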
…only when MPI is available.
@sdiazpier Fine in principle, but some details need fixing.
call(["mpiexec", "-n", "2", "python", "test_sp/test_issue_578_sp.py"]) | ||
except: | ||
print sys.exc_info()[0] | ||
print "Test call with MPI ended in error" |
print requires () to be Python 3 compatible. And shouldn't this exception propagate so that the test suite knows that the test failed? Currently, a failing MPI run will be registered as a working test.
    from mpi4py import MPI
except ImportError:
    # Test without MPI
    print "Tests without MPI"
print needs () in Python 3.
else:
    # Test with MPI
    mpi_test = 1
    print "Testing with MPI"
print needs () in Python 3.
except ImportError:
    # Test without MPI
    print "Tests without MPI"
    mpi_test = 0
If MPI is not available, post a skipping message.
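One possible form of such a message, in the style of the surrounding code and using the Python 3 compatible print() already requested above (the wording is illustrative):

try:
    from mpi4py import MPI
    mpi_test = 1
except ImportError:
    # MPI is unavailable: post a skipping message instead of failing.
    mpi_test = 0
    print("mpi4py is not available; skipping the MPI tests.")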
# MPI tests
if mpi_test:
    try:
        call(["mpiexec", "-n", "2", "python", "test_sp/test_issue_578_sp.py"])
It would be best to pick up the MPI invocation rule from .nestrc.
…mmand creation from nestrc file
Hi @heplesser, I have addressed most of your comments. I am not very familiar with the format of the .nestrc file, so I parsed the command in the most generic way I could with the knowledge I have of it. I may not be accounting for certain aspects of the format, so if you have a suggestion to make this easier or more robust, please let me know.
Dear @jougs, could you please take a look at this PR and let me know your opinion on handling the .nestrc file to obtain the MPI execution command?
@sdiazpier: thanks for the PR. I've added some suggestions to make handling of the MPI command line more robust. I'll have a look again once you've addressed those.
print ("MPI test command: " + line) | ||
nestrcf.close() | ||
return line | ||
|
There's no need to parse and modify the line from .nestrc
manually here as this is anyway a SLI
function, which gives you the invocation line for a given number of processes, an executable and a script file. I suggest to replace the function signature by
def getMPITestCommand(np, script):
and the function body by something like this:
return nest.sli_func("mpirun", np, "python", script)
This is much safer, as it works for all possible definitions of mpirun
in the .nestrc
file.
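Putting this suggestion together, a sketch of the revised helper and its use (nest.sli_func is the documented PyNEST bridge to SLI; splitting the returned string with shlex before running it is an assumption about how the command would then be executed):

import shlex
from subprocess import check_call

import nest


def getMPITestCommand(np, script):
    # Delegate to the mpirun SLI function defined in ~/.nestrc, which
    # assembles the full invocation line for np processes.
    return nest.sli_func("mpirun", np, "python", script)


command = getMPITestCommand(2, "test_sp/mpitest_issue_578_sp.py")
check_call(shlex.split(command))  # non-zero exit fails the test suite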
command = getMPITestCommand()
command = command.replace(
    " scriptfile", "test_sp/mpitest_issue_578_sp.py")
print("Executing test with command: " + command)
Given the changes to the getMPITestCommand function detailed above, you can replace lines 53-55 by just

command = getMPITestCommand(2, "test_sp/mpitest_issue_578_sp.py")
Hi @jougs, I have made the changes you suggested. Could you please take a look and let me know if anything is missing?
… GSL is not enabled for this test and for the weight recorder test
Dear @jougs, I implemented the changes we talked about and added the check for GSL in the test. Please let me know if anything else is needed.
Many thanks for your changes. This looks very nice now.
This PR addresses issue #290 by throwing a kernel exception when structural plasticity is enabled and multiple threads have been set for the simulation.
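To illustrate, a minimal sketch of the behaviour this PR introduces, as seen from PyNEST (it mirrors the test excerpt above; nest.EnableStructuralPlasticity is the PyNEST wrapper assumed here, and the printed message is illustrative):

import nest

nest.ResetKernel()
nest.EnableStructuralPlasticity()

try:
    # With structural plasticity enabled, asking for more than one
    # thread per process should now raise a kernel exception.
    nest.SetKernelStatus({'total_num_virtual_procs': 2})
except nest.NESTError as err:
    print("Got the expected kernel exception: {}".format(err))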