NEST is running out of space for synapse models #1043
A work-around for Issue nest#1043, as suggested by @heplesser's comment to PR nest#865.
yes, the fact that one can only create a few additional synapse models is certainly an issue. here are my thoughts:
@jakobj: I was assigned to this mainly for mental support, not for the coding ;-) First, let me reply to your comment:
The current split of the bits between rank, thread id and synapse id is heavily geared towards the HPC use case. I thus propose to alter the defaults towards a more John-Doe-friendly split and make it configurable from cmake. A default of 19 (instead of 20) bits for the rank (=524,288) and 9 (instead of 10) bits for the thread id (=512) in favor of 8 bits for the synapse type would probably already solve the problem, while also keeping most HPC users happy. There should also be a clear explanation of the split in the NEST documentation.
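For illustration, here is a minimal sketch of how such a split could be expressed as a packed 64-bit structure. The field names, the spare status bit and the share of bits left over for the lcid are assumptions for the example, not the actual NEST `Target` layout.

```cpp
// Illustrative sketch only: assumed field names, not NEST's real Target class.
// It shows the proposed 19/9/8-bit split packed into a single 64-bit word.
#include <cstdint>

struct TargetSketch
{
  std::uint64_t lcid : 27;      // local connection id (whatever bits remain)
  std::uint64_t rank : 19;      // up to 2^19 = 524,288 MPI ranks (was 20 bits)
  std::uint64_t tid : 9;        // up to 2^9 = 512 threads per rank (was 10 bits)
  std::uint64_t syn_id : 8;     // up to 2^8 = 256 synapse ids (was 6 bits)
  std::uint64_t processed : 1;  // assumed status bit filling the 64-bit word
};

static_assert( sizeof( TargetSketch ) == 8,
  "the split is expected to fit into a single 64-bit word" );
```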
thanks for your support @jougs :) I agree that the current split is suboptimal for the normal (non-HPC) use case and that my previously suggested solution has significant drawbacks. However, giving users full flexibility with respect to the desired split as in your example makes me a bit uneasy, as this complicates reproduction of results among users ("which split did you use?", "hmm, I don't remember, let me check my log files, ..., ughh, can't find them right now, but it was something like XXX -- maybe?") without providing significant usability advantages. How about hardcoding two splits instead, one for the normal use case and one for HPC? It's not that I'm not intrigued by the technical side of your suggestion. ;)
As mentioned in the full-day NEST dev video call, I would advise a cmake option (one setting for HPC, one for laptop users), in agreement with @jakobj's suggestion.
Discussed in the VC on 26 Nov: combine the user-friendly cmake options suggested by @jakobj and @Silmathoron with @jougs' suggestion for full control. To be implemented ASAP and included in NEST 2.16.1.
I've now implemented the cmake option to set the bit sizes of the various target members, see https://github.com/jakobj/nest-simulator/tree/feature/cmake-target-customization. However, this does not seem to solve the problem on its own, since in various other places we also assume maximal/minimal sizes of rank, thread id, syn id and lcid, see for example https://github.com/jakobj/nest-simulator/blob/feature/cmake-target-customization/nestkernel/target_data.h#L41. If a user chooses more than 8 bits for the syn id, this code will break. I will work on cleaning up this mess.
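Until those places are generalized, one option is to reject incompatible splits at compile time. A rough sketch, assuming (hypothetically) that cmake passes the chosen widths as preprocessor definitions; the macro names are made up for illustration and are not existing NEST definitions:

```cpp
// Hypothetical consistency checks; NEST_*_BITS are assumed cmake-provided
// macros, invented here for the sake of the example.
#ifndef NEST_RANK_BITS
#define NEST_RANK_BITS 19
#endif
#ifndef NEST_TID_BITS
#define NEST_TID_BITS 9
#endif
#ifndef NEST_SYN_ID_BITS
#define NEST_SYN_ID_BITS 8
#endif
#ifndef NEST_LCID_BITS
#define NEST_LCID_BITS ( 64 - NEST_RANK_BITS - NEST_TID_BITS - NEST_SYN_ID_BITS - 1 )
#endif

// The whole split (plus one status bit) must fit into the 64-bit target word.
static_assert( NEST_RANK_BITS + NEST_TID_BITS + NEST_SYN_ID_BITS + NEST_LCID_BITS + 1 <= 64,
  "bit split exceeds the 64-bit target word" );

// Code such as target_data.h currently packs the syn id into a fixed-width
// slot, so larger choices would break until that code is generalized.
static_assert( NEST_SYN_ID_BITS <= 8,
  "parts of the kernel still assume at most 8 bits for the syn id" );
```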
Many thanks for working on this and good luck with the cleansing ;-)
yes, the
NEST currently supports only 63 different synapse models. The current master already has 55 synapse models, so only very few additional synapse models can be created with `CopyModel`. Adding more synapse models makes the situation even worse, see #865.

The number of synapse models available by default is as large as 55 because many synapse models exist in multiple versions (plain, `_lbl`, `_hpc`). I believe we need to discuss increasing the space for synapse models to at least 255, and reconsider whether all model variants should be created even if they are never used: `_lbl` is used by PyNN, which then does not use the plain version, and models using `_hpc` variants do not use the plain variant simultaneously.
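To make the arithmetic behind these numbers concrete, here is a small sketch of the capacity calculation; it assumes that one synapse id value is reserved as an invalid marker, which matches the 63-model limit quoted above (2^6 - 1 = 63):

```cpp
// Capacity sketch: number of usable synapse models for a given syn id width,
// assuming one id value is reserved as an "invalid" marker.
#include <cstdint>
#include <iostream>

constexpr std::uint64_t usable_models( unsigned syn_id_bits )
{
  return ( std::uint64_t{ 1 } << syn_id_bits ) - 1;
}

int main()
{
  std::cout << "6 bits: " << usable_models( 6 ) << " models\n"; // 63, today's limit
  std::cout << "8 bits: " << usable_models( 8 ) << " models\n"; // 255, the proposed minimum
  return 0;
}
```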