
Enable get/set on learning rule parameters #622

Merged
weidel-p merged 17 commits into main from dev/update_learning_rule_sim on Feb 17, 2023

Conversation

weidel-p (Contributor)

Issue Number:
#582 #542

Objective of pull request:
This PR enables the use of get/set on the parameters of a learning rule.
To make that work, we create Vars in the LearningDense Process and ProcessModel and initialize them with the values given by the learning-rule object.
Furthermore, we add a function 'on_var_update' to the AbstractProcessModel, which is called whenever a Var is updated. This allows us to run further checks and computations if the updated Var changes the model's behavior.
To enable get/set for 'dw', 'dt', and 'dd', we had to implement a String LavaType; strings are communicated through ports by encoding/decoding them as ASCII.
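As an illustration of the Var-update hook described above, here is a minimal, hypothetical sketch; the class, attribute names, and hook signature are assumptions for this description, not Lava's actual AbstractProcessModel API:

```python
class SketchLearningModel:
    """Hypothetical stand-in for a ProcessModel; not Lava's actual API."""

    def __init__(self, dw: str, x1_tau: float):
        self.dw = dw          # learning-rule string expression, e.g. "x0 * y1"
        self.x1_tau = x1_tau  # pre-trace decay time constant
        self._derive_state()

    def _derive_state(self) -> None:
        # Placeholder for state the model caches from its parameters,
        # e.g. a parsed form of the 'dw' expression or decay factors.
        self._decay = 1.0 / self.x1_tau

    def on_var_update(self) -> None:
        # Called after one of the model's Vars has been written via set();
        # the model can validate the new value and recompute derived
        # state before the next time step.
        if self.x1_tau <= 0:
            raise ValueError("x1_tau must be positive")
        self._derive_state()
```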

Pull request checklist

Your PR fulfills the following requirements:

  • Issue created that explains the change and why it's needed
  • Tests are part of the PR (for bug fixes / features)
  • Docs reviewed and added / updated if needed (for bug fixes / features)
  • PR conforms to Coding Conventions
  • PR applies the BSD-3-Clause or LGPL-2.1+ license to all code files
  • Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
  • Build tests (pytest) pass locally

Pull request type

Please check your PR type:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • Documentation changes
  • Other (please describe):

What is the current behavior?

  • If a user wanted to change the parameters of a learning rule at runtime, the complete Process had to be re-created.

What is the new behavior?

  • The parameters of a learning rule can be updated at runtime by calling get/set (see the usage sketch below).
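A minimal usage sketch of that flow, assuming a LearningDense connected into a small pre/post network (setup omitted) and assuming the pre-trace time constant is exposed as a Var named x1_tau; the exact Var names and shapes may differ:

```python
import numpy as np
from lava.proc.dense.process import LearningDense
from lava.magma.core.learning.learning_rule import STDPLoihi
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi2SimCfg

stdp = STDPLoihi(learning_rate=1.0, A_plus=1.0, A_minus=-1.0,
                 tau_plus=10.0, tau_minus=10.0, t_epoch=4)
dense = LearningDense(weights=np.eye(2), learning_rule=stdp)
# ... connect pre- and post-synaptic populations to 'dense' here ...

dense.run(condition=RunSteps(num_steps=100),
          run_cfg=Loihi2SimCfg(select_tag="floating_pt"))
print(dense.x1_tau.get())            # read a learning-rule parameter at runtime
dense.x1_tau.set(np.array([20.0]))   # update it without re-creating the Process
                                     # (Var name and shape assumed for illustration)
dense.run(condition=RunSteps(num_steps=100),
          run_cfg=Loihi2SimCfg(select_tag="floating_pt"))
dense.stop()
```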

Does this introduce a breaking change?

  • Yes
  • No

Supplemental information

@joyeshmishra (Contributor) left a comment:

Looks good. I believe the runtime change is only at 2 places to encode the str as np.array of int32. Okay with me.
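For reference, the idea is a simple round-trip between a Python string and an int32 array of ASCII codes; the helpers below are an illustrative sketch, not the exact runtime code:

```python
import numpy as np

def encode_str(s: str) -> np.ndarray:
    # One int32 per character, holding its ASCII code point.
    return np.array([ord(c) for c in s], dtype=np.int32)

def decode_str(arr: np.ndarray) -> str:
    # Inverse mapping back to the original string.
    return "".join(chr(int(c)) for c in arr)

encoded = encode_str("x0 * y1")        # e.g. a 'dw' expression
assert decode_str(encoded) == "x0 * y1"
```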

@weidel-p force-pushed the dev/update_learning_rule_sim branch 3 times, most recently from de7128e to 59c02d5 on February 16, 2023, 16:21
@PhilippPlank linked an issue on Feb 17, 2023 that may be closed by this pull request
@weidel-p force-pushed the dev/update_learning_rule_sim branch 4 times, most recently from e397242 to 2c4c8d5 on February 17, 2023, 12:23
@mathisrichter (Contributor) left a comment:

Just had a quick look - good overall, just some minor remarks.

Resolved review threads on:
  • src/lava/magma/core/learning/learning_rule.py
  • src/lava/magma/core/model/py/connection.py
  • src/lava/magma/core/model/py/model.py
  • tests/lava/proc/dense/test_stdp_sim.py
@weidel-p merged commit 8cb6787 into main on Feb 17, 2023
monkin77 pushed a commit to monkin77/thesis-lava that referenced this pull request Jul 12, 2024
* pre-traces and string for floating

* y params and tests

* tests for fixed pt

* minor cleanup

* lint

* adde lr to initial parameters of LearningDense

* rm unused imports

* avoid using learning lif

* temp: just one stdp test

* enable 2f learning rules

* revert neuron.py

* minor change

* lint

* clean up str representation of learning rule

* cleanup

* lint
Development

Successfully merging this pull request may close these issues:

  • Change Loihi Learning Rule during runtime in simulation
5 participants