Initial version of CLP in CPU backend #707
Conversation
All checks have passed
Integration tests are added
Just glanced over it; I will review in more detail after some questions.
- Why did you call it prototype_lif? It seems like a 3F learning LIF neuron. Also, is this usable in general? If yes, we should not hide it inside the clp folder.
- What does nsm mean? The process is called Readout, and I could not figure out what nsm stands for.
- I saw print statements in PyNoveltyDetectorModel. I assume they are leftovers from development?
NoveltyDetector added; lr var removed
Looks good now that all the interface changes we discussed have been incorporated. Thanks!
This reverts commit bde4fa9.
Signed-off-by: bamsumit <[email protected]>
* CLP initial commit: PrototypeLIF, NoveltyDetector, Readout procs/tests
* small linting fix
* Novelty detector upgraded to target next neuron; codacy errors fixed
* integration test; small fixes
* removed duplicate code in PrototypeLIF process; linting fixes
* linting fixes
* Linting and codacy fixes
* remove duplicate test; some more codacy fixes
* PrototypeLIF spikes when it receives a 3rd-factor input
* a test for PrototypeLIF output spike after 3rd-factor input
* Allocation & prototype id tracking is abstracted away from NoveltyDetector
* Allocator process; Readout proc sends allocation trigger if error
* introduce learning rate Var in PrototypeLIF
* updated integration tests; full system test included
* Linting fixes
* Another small linting fix
* PrototypeLIF hard reset capability to enable faster temporal WTA
* allocation mechanism changed; proc interface changes; dense conns added; lr var removed
* small linting fix
* small codacy fix
* prints removed, spelling mistakes fixed
* ignoring one check in an integration test
* Revert "small linting fix" (this reverts commit bde4fa9)
* Fix linting in test_models.py
* Test fix in utils.py
* Fix test of bug fix in utils.py
* Fix utils.py
* Implemented individual threadsafe random call

Signed-off-by: bamsumit <[email protected]>
Co-authored-by: PhilippPlank <[email protected]>
Co-authored-by: Marcus G K Williams <[email protected]>
Co-authored-by: bamsumit <[email protected]>
Issue Number: #706
Objective of pull request:
As a user, I want to be able to use the CLP algorithm in Lava to learn continually from a stream of data on the Loihi chip.
Pull request checklist
Your PR fulfills the following requirements:
- Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
- Build tests (pytest) pass locally
Pull request type
Please check your PR type:
What is the current behavior?
What is the new behavior?
This PR is the first step in the full implementation of CLP. With this version of CLP, three base processes are implemented in the CPU backend and have been tested in terms of the following behaviors:
In addition, several integration tests are written to check that the LearningDense -> PrototypeLIF -> NoveltyDetector integration works properly.
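To illustrate the behavior these processes implement together, here is a minimal, purely conceptual sketch of the prototype-matching and novelty-driven allocation loop. This is not the Lava API and not the code in this PR; the function name, similarity measure, and threshold are all hypothetical simplifications of the PrototypeLIF / NoveltyDetector / Allocator interaction described above.

```python
import numpy as np

def clp_step(x, prototypes, labels, threshold=0.9):
    """One conceptual CLP step (hypothetical simplification):
    match the input against stored prototypes; if none is similar
    enough, signal novelty so the next prototype neuron is allocated."""
    if prototypes:
        # Cosine similarity stands in for PrototypeLIF membrane response.
        sims = [float(np.dot(x, p) / (np.linalg.norm(x) * np.linalg.norm(p)))
                for p in prototypes]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            # A prototype spiked: Readout reports its label, no novelty.
            return labels[best], False
    # No prototype responded: NoveltyDetector triggers allocation
    # of the next neuron, which stores the current input.
    prototypes.append(np.asarray(x, dtype=float).copy())
    labels.append(len(labels))
    return labels[-1], True

prototypes, labels = [], []
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(clp_step(a, prototypes, labels))  # novel input: allocates label 0
print(clp_step(b, prototypes, labels))  # novel input: allocates label 1
print(clp_step(a, prototypes, labels))  # recognized: label 0, no novelty
```

In the actual PR, this loop is realized with spiking processes on the CPU backend: PrototypeLIF neurons compete in a temporal WTA, the NoveltyDetector fires when no prototype responds in time, and the Allocator targets the next unallocated neuron.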
Does this introduce a breaking change?
Supplemental information