Implementing Synaptic Delays with Delay Process #267

Closed · wants to merge 26 commits

Conversation

@kds300 (Contributor) commented Jul 8, 2022

Issue Number: #237

Objective of pull request: Implement synaptic delays in connections between neurons by introducing a new Delay process.

Pull request checklist

Your PR fulfills the following requirements:

  • Issue created that explains the change and why it's needed
  • Tests are part of the PR (for bug fixes / features)
  • Docs reviewed and added / updated if needed (for bug fixes / features)
  • PR conforms to Coding Conventions
  • PR applies BSD 3-clause or LGPL2.1+ Licenses to all code files
  • Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
  • Build tests (pytest) pass locally

Pull request type

Please check your PR type:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • Documentation changes
  • Other (please describe):

What is the current behavior?

  • Synaptic delays are currently not supported by the Dense() object. Synaptic delays can be useful in manual SNN algorithm design using LIF neurons.

What is the new behavior?

  • Implements a Delay process that is instantiated with weights and delays as inputs, Delay(weights, delays). This creates a dense connection matrix (similar to the Dense process) with the defined weight and delay for each connection. Each connection waits a number of time steps equal to its delay before passing the spike on; a usage sketch follows below.
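A minimal usage sketch of the proposed process between two LIF populations. The Delay import path and its Dense-like s_in/a_out ports are assumptions based on the description above, not confirmed API:

```python
import numpy as np
from lava.proc.lif.process import LIF
from lava.proc.delay.process import Delay  # hypothetical import path for this PR

# 2x2 connection matrix: weights and per-synapse delays (in time steps).
weights = np.array([[0.0, 1.5],
                    [2.0, 0.0]])
delays = np.array([[0, 3],
                   [1, 0]])

pre = LIF(shape=(2,))
post = LIF(shape=(2,))
delay = Delay(weights=weights, delays=delays)

# Assumed Dense-like ports: spikes enter s_in, delayed activations leave a_out.
pre.s_out.connect(delay.s_in)
delay.a_out.connect(post.a_in)
```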

Does this introduce a breaking change?

  • Yes
  • No

Supplemental information

  • This is only a floating point, CPU implementation of the Delay process.

@awintel (Contributor) commented Jul 9, 2022

Thank you for your contribution! We will take a look next week.

@mgkwill (Contributor) commented Jul 11, 2022

@kds300 Thanks for your contribution!

To get tests to work like all of our others, can you copy an empty __init__.py file from one of the other procs into the tests/lava/proc/delay/ folder?

See dense folder for example:
https://github.com/kds300/lava/tree/main/tests/lava/proc/dense

@mgkwill added the 1-feature (New feature request), 0-needs-review (For all new issues), and area: proc (Issues with something in lava/proc) labels on Jul 11, 2022
@kds300 marked this pull request as ready for review on July 12, 2022 00:57
@kds300 (Contributor, Author) commented Jul 12, 2022

Hi @mgkwill, I've added the init file to the delay folder.

@mgkwill (Contributor) commented Jul 12, 2022

Awesome, thanks @kds300. Now that we have the unit tests working in CI, I'll take a closer look at the code and do a formal review in the next day or so.

@phstratmann (Contributor) left a comment

Dear Kevin (@kds300),

Thank you very much for your contribution! Both for coding this important functionality; and for providing such extensive unit testing!

The delay is an important feature of Dense. Thus, I would suggest that you implement the delay behavior and the cyclic buffer directly in the Dense Process (or an inherited DenseDelay Process), not in a separate Delay Process.

In addition, the delay feature will also be important for other connection Processes, like Sparse or Conv. Maybe you see a chance for code reuse, for example by factoring out your cyclic buffer into a mixin class that the Dense, Sparse, and Conv Processes inherit from in addition to AbstractProcess. But that would cause additional work on your side, so feel free to focus on the delay for Dense for now.

I would suggest not creating a Var for max_delay. For parameters that will never change during runtime, the best practice is to store them in the process as
self.proc_params["variable_name"]
Please see lava.proc.sdn.process for an example of how to use it.
In the ProcessModel, you can then read the value out again, as shown in lava.proc.sdn.models.AbstractSigmaDeltaModel.
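For illustration, a minimal sketch of this pattern; the class, port, and parameter names are assumptions modeled on lava.proc.sdn and are not part of the original review or the PR:

```python
import numpy as np
from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.process.ports.ports import InPort, OutPort
from lava.magma.core.model.py.model import PyLoihiProcessModel


class Delay(AbstractProcess):
    """Dense-like connection Process with per-synapse delays (sketch only;
    weight/delay Vars omitted for brevity)."""

    def __init__(self, weights: np.ndarray, delays: np.ndarray, **kwargs):
        super().__init__(weights=weights, delays=delays, **kwargs)
        self.s_in = InPort(shape=(weights.shape[1],))
        self.a_out = OutPort(shape=(weights.shape[0],))
        # max_delay never changes at runtime, so it is kept in proc_params
        # rather than declared as a Var.
        self.proc_params["max_delay"] = int(np.max(delays))


# In the corresponding ProcessModel (decorators and LavaPyType port
# declarations omitted for brevity), the value is read back out:
class PyDelayModelFloat(PyLoihiProcessModel):
    def __init__(self, proc_params):
        super().__init__(proc_params)
        self.max_delay = proc_params["max_delay"]
```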

You may have noticed that we recently released the major new Lava version v0.4.0. Unfortunately, the release has several breaking changes, also for your PR. Could you check that your Process is still compatible, in particular concerning variable names in Dense? One major difference is that use_graded_spikes: bool has been replaced by num_message_bits. The message is only a bool (-> astype(bool)) if self.num_message_bits.item() == 0; otherwise the message is an integer.
In addition, RunCfg should now be fully functional, so you won't need a dedicated DelayRunConfig anymore.
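For illustration, the num_message_bits convention described above would look roughly like this inside a ProcessModel's run_spk; this is a sketch based on Dense's behavior, not code from this PR:

```python
# Inside the ProcessModel's run_spk() (sketch):
s_in = self.s_in.recv()
if self.num_message_bits.item() == 0:
    # Binary spikes: interpret the incoming message as booleans.
    s_in = s_in.astype(bool)
# Otherwise s_in carries graded (integer) spike payloads.
```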

If you could provide these changes, I will be pleased to approve your PR and add your code to the repo!

Best,

Philipp

@kds300 (Contributor, Author) commented Jul 19, 2022

Hello Philipp (@phstratmann), I'll work on these changes and update the PR once they're complete!

For the buffer, I'll implement it directly in the Dense process for now, but I could look into creating a mixin class later. I currently have the input spikes stored in a buffer, but would appreciate input on whether it is better to buffer the input spikes or the output activations.

Thanks,
Kevin

@phstratmann (Contributor) commented

> I currently have the input spikes stored in a buffer, but would appreciate input on whether it is better to store the input spikes or the output activations in a buffer.

Dear Kevin (@kds300),

I just noticed that your reply included a question. Sorry for the delayed response, and thanks for your patience.

To answer your question, let me highlight two features that we would suggest a delaying Dense process to have:

  1. Within a Dense process, different synapses can have different delays. Thus, we may have a Dense layer with, e.g., 10 synapses, of which 5 have a delay of 1 time step and 5 a delay of 2 time steps.
    This is why you cannot simply buffer a_out values: in this example, the a_out value targeting one neuron may be composed of spikes from synapses with a delay of 1 and from others with a delay of 2.
  2. In later stages, synapses will adapt their weights online. A spike arrives at t_0 = 0 and is forwarded to the downstream neuron at t_1 = delay. Between t_0 and t_1, the weight may have adapted. Still, the spike should induce a change in the downstream neuron according to the weight w(t_0).
    This is why you cannot simply buffer s_in values: the information w(t_0) would be lost once you forward the spike after "delay" time steps.

Thus, what we would suggest:

  1. Create a ring buffer (1D numpy array) for each synapse. For groups of synapses that share a delay, you may use a 2D array.
  2. When a spike arrives at a synapse w_i at t_0, write the value w_i into the delayed field of ring buffer i.
  3. In each time step, you derive the value a_out by summing the values w_i that must now be forwarded to the same neuron.

I hope my explanation is clear. If not, let me know and we can quickly chat by phone.

Once again, thanks a lot for your contribution!

@awintel (Contributor) commented Aug 15, 2022

Thanks for pushing this discussion forward. However, I suggest not creating a ring buffer PER SYNAPSE; this would be unnecessarily costly in memory. Instead, extend the existing dendritic accumulator from a 2-step to an N-step ring buffer.

Whenever you receive a spike, you can immediately multiply the input activation s_(j) (in case of graded spikes) with the weights w_(i,j) and accumulate it in a future time bucket of the dendritic accumulator dend_acc_(i,k). Here the delay index k into the dendritic accumulator for each post-synaptic neuron i is the sum of the current time index and the delay of the synapse with weight w_(i,j), taken modulo the size of the buffer: mod(t + 1 + d_(i,j), d_max+1). The +1 accounts for the fact that even for delay 0, spikes are only accumulated in the next time bucket after the current time step t.

One way of implementing this efficiently in vectorized form would be to concatenate the weight matrices corresponding to the different delays along the i-dimension:
W = np.vstack([w_(i,j, d==0), w_(i,j, d==1), ..., w_(i,j, d==d_max)])
Then you can do a single vector-matrix multiplication and reshape the result:
act = np.reshape(W @ s, dend_acc.shape)
The different columns of this activation matrix can then be added to the dendritic accumulator at the index locations corresponding to the future delays.
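A rough NumPy sketch of the scheme Andreas describes, added for illustration; function and variable names are illustrative, not the PR's actual code:

```python
import numpy as np

def init_delay_buffers(w, d):
    """Build the stacked weight matrix and the dendritic-accumulator ring buffer."""
    d_max = int(d.max())
    n_out, n_in = w.shape
    # One weight matrix per delay value, stacked along the output dimension:
    # row block k contains only the synapses with delay k.
    W = np.vstack([np.where(d == k, w, 0.0) for k in range(d_max + 1)])
    # Ring buffer of dendritic accumulators: one bucket per future time step.
    dend_acc = np.zeros((d_max + 1, n_out))
    return W, dend_acc, d_max

def step(W, dend_acc, d_max, s_in, t):
    """One time step: emit the activations that are due, then buffer new input."""
    n_out = dend_acc.shape[1]
    cur = t % (d_max + 1)
    # Send the bucket whose delay has elapsed and clear it, so it can collect
    # spikes that should arrive d_max + 1 steps from now.
    a_out = dend_acc[cur].copy()
    dend_acc[cur] = 0.0
    # Activations grouped by delay: row k holds this step's contributions
    # from synapses with delay k.
    act = (W @ s_in).reshape(d_max + 1, n_out)
    # Accumulate each delay group into its future bucket: mod(t + 1 + d, d_max + 1).
    buckets = (t + 1 + np.arange(d_max + 1)) % (d_max + 1)
    dend_acc[buckets] += act
    return a_out
```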

@phstratmann (Contributor) commented

Thanks, Andreas! You are absolutely right, that's a substantially more efficient implementation.
Just one clarification for you, Kevin: by "dendritic accumulator", Andreas refers to the a_buff in the Dense PyProcModel.

@kds300 (Contributor, Author) commented Aug 17, 2022

Thanks for the suggestions! I'll rewrite the process based on this discussion.

@mgkwill (Contributor) commented Sep 21, 2022

Hi @kds300 any progress on this?

@kds300 (Contributor, Author) commented Sep 22, 2022

Hi @mgkwill, I had been focusing on other parts of my group's research project, so I haven't gotten much done on this yet. I'll work on the updates discussed above and update the PR.

@mathisrichter linked an issue on Jan 4, 2023 that may be closed by this pull request
@mathisrichter linked an issue on Jan 23, 2023 that may be closed by this pull request
@PhilippPlank mentioned this pull request on Feb 8, 2023
@PhilippPlank removed the 0-needs-review (For all new issues) label on Feb 18, 2023
@PhilippPlank (Contributor) commented

This feature was merged with PR #624. Thanks for your help! I am closing this PR now.
