Implementing Synaptic Delays with Delay Process #267
Conversation
…he outputs, when it needs to be initialized to the size of the inputs
Thank you for your contribution! We will take a look next week.
@kds300 Thanks for your contribution! To get tests to work like all of our other tests, can you copy an empty `__init__.py` into the delay folder? See the dense folder for an example.
Hi @mgkwill, I've added the init file to the delay folder.
Awesome, thanks @kds300. Now that we have the unit tests working in CI, I'll take a closer look at the code and do a formal review in the next day or so.
Dear Kevin (@kds300),
Thank you very much for your contribution, both for coding this important functionality and for providing such extensive unit tests!
The delay is an important feature of Dense. I would therefore suggest implementing the delay behavior and the cyclic buffer directly in the Dense Process (or an inherited DenseDelay Process) rather than in a separate Delay Process.
In addition, the delay feature will also be important for other connection Processes, such as Sparse or Conv. Maybe you see a chance for code reuse? For example, you could factor out your cyclic buffer into a mixin class that the Dense, Sparse, and Conv Processes inherit from in addition to AbstractProcess. That would mean additional work on your side, though, so feel free to focus on the delay for Dense for now.
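A purely illustrative sketch of that mixin structure (the names `CyclicBufferMixin` and `DelayDense` are hypothetical and not part of Lava; the real classes would also inherit from AbstractProcess and define Ports and Vars):

```python
import numpy as np


class CyclicBufferMixin:
    """Hypothetical mixin holding the cyclic delay buffer (sketch only)."""

    def _init_delay_buffer(self, shape: tuple, max_delay: int) -> None:
        # One slot per possible delay value, reused cyclically over time.
        self._delay_buffer = np.zeros(shape + (max_delay + 1,))


class DelayDense(CyclicBufferMixin):
    # In Lava this would read: class DelayDense(CyclicBufferMixin, AbstractProcess)
    def __init__(self, weights: np.ndarray, max_delay: int) -> None:
        self._init_delay_buffer(weights.shape, max_delay)
```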
I would suggest not creating a Var for max_delay. For parameters that never change at runtime, the best practice is to store them in the process as
self.proc_params["variable_name"]
Please see lava.proc.sdn.process for an example of how to use it.
In the ProcessModel, you can then read the value back out, as shown in lava.proc.sdn.models.AbstractSigmaDeltaModel.
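A minimal sketch of that pattern, assuming a v0.4-style API (the `DenseDelay` name, the constructor signatures, and the Var layout are illustrative; the sdn modules referenced above show the authoritative usage):

```python
import numpy as np
from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.process.variable import Var


class DenseDelay(AbstractProcess):
    """Hypothetical delaying Dense process (sketch only, Ports omitted)."""

    def __init__(self, weights: np.ndarray, delays: np.ndarray, **kwargs) -> None:
        super().__init__(**kwargs)
        # max_delay never changes at runtime, so it is stored in proc_params
        # instead of becoming a Var.
        self.proc_params["max_delay"] = int(delays.max())
        self.weights = Var(shape=weights.shape, init=weights)
        self.delays = Var(shape=delays.shape, init=delays)


# In the matching ProcessModel the constant is read back out, roughly as
# AbstractSigmaDeltaModel does (constructor signature assumed):
#
#     def __init__(self, proc_params):
#         super().__init__(proc_params)
#         self.max_delay = self.proc_params["max_delay"]
```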
You may have noticed that we recently released the major new Lava version v0.4.0. Unfortunately, the release includes several breaking changes that also affect your PR. Could you check that your Process is still compatible, in particular concerning variable names in Dense? One major difference is that use_graded_spikes: bool has been replaced by num_message_bits. The message is a bool (-> astype(bool)) only if self.num_message_bits.item() == 0; otherwise the message is an integer.
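A small stand-alone sketch of that convention (the `decode_spikes` helper is hypothetical, not part of Lava; it just mirrors the rule described above):

```python
import numpy as np


def decode_spikes(s_in: np.ndarray, num_message_bits: int) -> np.ndarray:
    """Interpret a received message following the v0.4.0 convention."""
    if num_message_bits == 0:
        # Binary spikes: the payload carries no magnitude information.
        return s_in.astype(bool)
    # Graded spikes: the payload is an integer activation and is used as-is.
    return s_in


# Example: the same raw message decoded in both modes.
raw = np.array([0, 3, 0, 1])
print(decode_spikes(raw, num_message_bits=0))   # [False  True False  True]
print(decode_spikes(raw, num_message_bits=16))  # [0 3 0 1]
```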
In addition, RunCfg should now be fully functional. Thus, you won't need to use a dedicated DelayRunConfig anymore.
If you could provide these changes, I will be pleased to approve your PR and add your code to the repo!
Best,
Philipp
Hello Philipp (@phstratmann), I'll work on these changes and update the PR once they're complete! For the buffer, I'll implement it directly in the Dense process for now, but I could look into creating a mixin class later. I currently store the input spikes in a buffer, but I would appreciate input on whether it is better to buffer the input spikes or the output activations. Thanks!
Dear Kevin (@kds300), I just noticed that your reply included a question. Sorry for the delayed response, and thanks for your patience. To answer your question, let me highlight two features that we would suggest a delaying Dense process have:
Thus, what we would suggest:
I hope my explanation is clear. If not, let me know and we can quickly chat by phone. Once again, thanks a lot for your contribution!
Thanks for pushing this discussion forward. However, I suggest not creating a ring buffer PER SYNAPSE; that would be unnecessarily costly in memory. Instead, extend the existing dendritic accumulator from a 2-step to an N-step ring buffer. Whenever you receive a spike, you can immediately multiply the input activation s_(j) (in the case of graded spikes) with the weights w_(i,j) and accumulate it in a future time bucket of the dendritic accumulator dend_acc_(i,k). Here the delay index k into the dendritic accumulator for each post-synaptic neuron i is the sum of the current time index and the delay corresponding to the synapse with weight w_(i,j), modulo the size of the buffer: mod(t + 1 + d_(i,j), d_max + 1). The +1 accounts for the fact that, even for delay 0, spikes are accumulated no earlier than the next time bucket after the current time step t. One way of implementing this efficiently in vectorized form would be to concatenate the weight matrices corresponding to different delays along the i-dimension:
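A numpy sketch of the N-step accumulator idea, under stated assumptions: it uses a scatter-add (np.add.at) into the ring buffer rather than the concatenated-weight-matrix layout Andreas mentions, and the variable names (weights, delays, dend_acc) are illustrative rather than taken from the Lava Dense implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_pre, num_post, d_max, num_steps = 4, 3, 5, 20

weights = rng.integers(-5, 6, size=(num_post, num_pre))        # w_(i,j)
delays = rng.integers(0, d_max + 1, size=(num_post, num_pre))  # d_(i,j)

# Ring buffer of future dendritic input: one bucket per delay slot.
dend_acc = np.zeros((num_post, d_max + 1))
rows = np.repeat(np.arange(num_post), num_pre)                 # row index per synapse

for t in range(num_steps):
    # Deliver the dendritic input scheduled for this step, then clear the slot
    # so it can be reused when the buffer wraps around (every d_max + 1 steps).
    a_out = dend_acc[:, t % (d_max + 1)].copy()
    dend_acc[:, t % (d_max + 1)] = 0.0
    # a_out would be sent to the post-synaptic neuron process here.

    s = rng.integers(0, 2, size=num_pre)                       # incoming spikes s_(j)

    # Accumulate w_(i,j) * s_(j) into bucket k = mod(t + 1 + d_(i,j), d_max + 1);
    # the +1 means even a zero-delay spike takes effect at the next time step.
    contrib = weights * s
    k = (t + 1 + delays) % (d_max + 1)
    np.add.at(dend_acc, (rows, k.ravel()), contrib.ravel())
```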
Thanks, Andreas! You are absolutely right, that's a substantially more efficient implementation. |
Thanks for the suggestions! I'll rewrite the process based on this discussion. |
Hi @kds300, any progress on this?
…he DenseDelay process model.
This feature was merged with PR #624. Thanks for your help! I am closing this PR now. |
Issue Number: #237
Objective of pull request: Implement synaptic delays in connections between neurons by introducing a new Delay process.
Pull request checklist
Your PR fulfills the following requirements:
- (`flakeheaven lint src/lava tests/`) and (`bandit -r src/lava/.`) pass locally
- (`pytest`) passes locally
Pull request type
Please check your PR type:
What is the current behavior?
What is the new behavior?
Does this introduce a breaking change?
Supplemental information