
Add netx support for rf.Dense and rf_iz.Dense #110

Merged: 10 commits merged into lava-nc:main on Nov 17, 2022

Conversation

@Michaeljurado42 (Contributor) commented Sep 27, 2022:

Issue Number:

Objective of pull request: Add netx support for rf.Dense and rf_iz.Dense

Pull request checklist

Your PR fulfills the following requirements:

  • Issue created that explains the change and why it's needed
  • Tests are part of the PR (for bug fixes / features)
  • Docs reviewed and added / updated if needed (for bug fixes / features)
  • PR conforms to Coding Conventions
  • PR applies BSD 3-clause or LGPL-2.1+ licenses to all code files
  • Lint (flakeheaven lint src/lava tests/) and (bandit -r src/lava/.) pass locally
  • Build tests (pytest) pass locally

Pull request type

Please check your PR type:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • Documentation changes
  • Other (please describe):

What is the current behavior?

  • Currently we are not able to export slayer-trained rf models into lava.

What is the new behavior?

  • With this change we can export slayer-trained rf models into lava (see the sketch below).
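A minimal sketch of the intended round trip, assuming a slayer network built from rf.Dense / rf_iz.Dense blocks that defines an export_hdf5 method, as in the lava-dl tutorials; the function, variable, and file names here are illustrative, not the PR's code:

```python
# Minimal sketch, assuming `trained_net` is a slayer network (built from
# rf.Dense / rf_iz.Dense blocks) that defines export_hdf5, as in the
# lava-dl tutorials. Names are illustrative.
from lava.lib.dl import netx


def export_and_reload(trained_net, path='rf_network.net'):
    # Export the trained slayer model to the netx hdf5 description.
    trained_net.export_hdf5(path)
    # Reload the description as a lava network; this PR adds the
    # rf.Dense / rf_iz.Dense layer support this step relies on.
    return netx.hdf5.Network(net_config=path)
```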

Does this introduce a breaking change?

  • Yes
  • No

In order for this change to work, this lava PR must be finished and accepted.

Supplemental information

This PR is the result of an ongoing discussion in which I asked whether or not rf neurons would be added to lava.

@Michaeljurado42 (Contributor, Author) commented:

@bamsumit
So far I have implemented netx support (and some tests) for rf.Dense, rf_iz.Dense, rf.Input, and rf_iz.Input. Is this sufficient for a PR, or is it also necessary to implement resonator convolutions in order for it to be accepted? (For my GT lab's use case, resonator convolutions are something we are not using right now.)

@bamsumit (Contributor) left a comment:

Looks good overall. Thanks Michael

@bamsumit (Contributor) commented:

> So far I have implemented netx support (and some tests) for rf.Dense, rf_iz.Dense, rf.Input, and rf_iz.Input. Is this sufficient for a PR, or is it also necessary to implement resonator convolutions in order for it to be accepted? (For my GT lab's use case, resonator convolutions are something we are not using right now.)

Hi @Michaeljurado42, it's fine. Complex convolution can be added when needed.

@bamsumit (Contributor) left a comment:

Oh one thing.

Can you commit the gts/complex_dense numpy files using git-lfs?

@Michaeljurado42 (Contributor, Author) commented Nov 15, 2022:

> Oh one thing.
>
> Can you commit the gts/complex_dense numpy files using git-lfs?

Unfortunately, I do not think I can commit files via git-lfs and push to a fork (see here and here). I tried git-lfs tracking the files in complex_dense, adding, and committing them. However, I failed to push them...

@Michaeljurado42 (Contributor, Author) commented:

> Looks good overall. Thanks Michael

No problem. Before you accept this PR, I want to make sure you are aware that I combined the testing of the rf and rf_iz layers in the TestRFBlocks class. I did this to avoid duplicated test code and data bloat (a rough sketch of the idea follows at the end of this comment).

Also, in order for this code to work, this really small issue needs to be resolved in the main lava codebase.
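To illustrate the combining idea, here is a hypothetical sketch, not the PR's actual code: both neuron types share one ground-truth check, so the test logic is written once. The run_rf_dense_network helper and the exact file names under gts/complex_dense are assumptions.

```python
# Hypothetical sketch of combining rf and rf_iz checks in one class;
# the PR's actual TestRFBlocks may differ.
import unittest

import numpy as np


def run_rf_dense_network(neuron_type):
    # Placeholder for building and running the netx network; the real
    # test would load the exported hdf5 model and collect its output.
    # Here it just reloads the ground truth so the sketch is runnable.
    return np.load(f'gts/complex_dense/{neuron_type}_output.npy')


class TestRFBlocks(unittest.TestCase):
    def check_dense_against_gt(self, neuron_type):
        # One shared ground-truth comparison instead of two
        # near-identical test classes and duplicated fixtures.
        gt = np.load(f'gts/complex_dense/{neuron_type}_output.npy')
        out = run_rf_dense_network(neuron_type)
        self.assertTrue(np.allclose(out, gt))

    def test_rf_dense(self):
        self.check_dense_against_gt('rf')

    def test_rf_iz_dense(self):
        self.check_dense_against_gt('rf_iz')
```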

@bamsumit (Contributor) commented:

@Michaeljurado42, got it. Don't worry about git-lfs for now, then. I'll convert them once it is merged.

Combining the tests is okay; they are already tested in the main lava repo.
As for the unique neuron parameter issue, tag me when you manage to get a fix in.

@Michaeljurado42 (Contributor, Author) commented:

I created a PR here, @bamsumit.

Commit: Use latest lava-nc for CI tests, which includes the rf processes.
@PhilippPlank (Contributor) commented:

@Michaeljurado42 If you are wondering why I changed the poetry.lock file: it pointed to a specific revision of lava which did not yet include your RF processes, which led to CI failures.
In case one adds something to the lava repository that is also needed for a lava-dl PR, the poetry.lock file should be updated to point to the newer revision.

It would just be a "poetry update" on the command line to update the poetry.lock file, if you need it again :) (a short sketch follows below).

CI should pass now.
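For reference, a hedged sketch of that lock-file refresh; the commit message is illustrative and the exact flow may vary with the project setup:

```
# Re-resolve dependencies and rewrite poetry.lock so it picks up the
# newer lava revision, then commit the updated lock file with the PR.
poetry update
git add poetry.lock
git commit -m "Update poetry.lock to pick up the new lava revision"
```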

@PhilippPlank marked this pull request as ready for review on November 17, 2022, 14:43.
@bamsumit (Contributor) commented:

Looks good, @Michaeljurado42. Thanks for the contribution.

@bamsumit merged commit 550de8e into lava-nc:main on Nov 17, 2022.
@Michaeljurado42 deleted the rf_neurons branch on November 17, 2022, 16:20.
@Michaeljurado42 (Contributor, Author) commented:

> @Michaeljurado42 If you are wondering why I changed the poetry.lock file: it pointed to a specific revision of lava which did not yet include your RF processes, which led to CI failures. In case one adds something to the lava repository that is also needed for a lava-dl PR, the poetry.lock file should be updated to point to the newer revision.
>
> It would just be a "poetry update" on the command line to update the poetry.lock file, if you need it again :)
>
> CI should pass now.

Okay, thanks for the information. Might be worth adding it to the developer guide.

@Michaeljurado42 restored the rf_neurons branch on November 18, 2022, 16:08.