Move op methods into dedicated objects and export them to torch namespace #29
Totally agree! That file is indeed getting pretty large. While we wait for that bug to be fixed, I'm thinking we could try these workarounds (unsure if there are any negative implications of either)
Yesterday I spent a bit of time splitting them into files using option 1, as well as the test suite. Everything seems to be working fine. I can upload a draft of my changes later today when I get home. Btw, thanks a lot for openly sharing your thoughts and progress with the Reduction Ops in order to avoid any duplicate work. Will try to do the same.
Ah great, I hadn't thought of 1. as a workaround. If you've already tried that successfully, let's go for it! I tried 2., but ran into some weird circular issues when compiling the package object because the traits were also somewhere under
Yeah, I'm still trying to get into the habit of writing these things down more often. I think it's good exercise for me 😄
Fixes sbrunk#29

Bonus additions:
- Add `torch.multinomial` [RandomSamplingOps]
- Add `torch.randint` [RandomSamplingOps]
- Add `torch.gradient` [PointwiseOps]
- Add `torch.nn.functional.oneHot`
Let's keep this open even though we've split the ops into different files while defining them in the `torch` namespace. The export variant has a few advantages, like being able to put ops into actual objects, which in turn enables Scaladoc macros to work, so I'd still like to try this option when it becomes possible.
If I am not mistaken, it seems like this issue was resolved by #32 🥳
You're right, for some reason 2. (traits and package objects inheriting from them) did work after all. Not sure what caused the issues last time I tried. |
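For reference, the variant that ended up working (option 2: ops defined in traits, inherited by the `torch` namespace) can be sketched roughly like this. This is a toy illustration with `Int` stand-ins for tensors, not the actual storch code, and the op names and signatures are simplified assumptions:

```scala
// Hypothetical sketch of option 2: each op group lives in its own trait
// (imagined as its own file), and the top-level namespace mixes them in.

// torch/ops/PointwiseOps.scala
trait PointwiseOps:
  def abs(x: Int): Int = math.abs(x) // stand-in for a real tensor op
  def clamp(x: Int, min: Int = 0, max: Int = 100): Int =
    math.max(min, math.min(max, x))

// torch/ops/ReductionOps.scala
trait ReductionOps:
  def sum(xs: Seq[Int]): Int = xs.sum

// torch/torch.scala: the `torch` namespace inherits all op traits,
// so callers keep writing torch.abs, torch.sum, etc.
object torch extends PointwiseOps with ReductionOps

@main def demo(): Unit =
  println(torch.abs(-3))           // 3
  println(torch.clamp(150))        // 100, defaults survive inheritance
  println(torch.sum(Seq(1, 2, 3))) // 6
```

Unlike the export variant, default parameters are unaffected here because inherited methods carry their defaults normally.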
`torch.scala` is getting a bit large, and we still need to implement a bunch of ops. So I think it makes sense to move the op methods into smaller units and then define aliases via export clauses in the `torch` namespace/package, like we already do with `torch.special`.

The difference being that we should also do it for methods that are not aliased explicitly in PyTorch (methods that are defined in the `torch` namespace there). A natural way to split them is to follow the ops structure in the PyTorch docs. We already group ops this way in `torch.scala`, so it shouldn't be hard to move things out. The advantage should be more clarity, better maintainability, and smaller compilation units.

Here's an idea for a structure:

- `torch/ops/PointwiseOps.scala`
- `torch/ops/ReductionOps.scala`
- `torch/torch.scala`
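A rough sketch of what that layout could look like with export clauses, using toy `Int` methods in place of real tensor ops (the names and bodies here are illustrative assumptions, not the actual storch sources):

```scala
// torch/ops/PointwiseOps.scala: ops live in a dedicated object,
// which also gives Scaladoc a concrete place to attach docs.
object PointwiseOps:
  /** Computes the absolute value. */
  def abs(x: Int): Int = math.abs(x)

// torch/ops/ReductionOps.scala
object ReductionOps:
  /** Sums all elements. */
  def sum(xs: Seq[Int]): Int = xs.sum

// torch/torch.scala: re-export everything under the `torch` namespace,
// like the existing torch.special aliases.
object torch:
  export PointwiseOps.*
  export ReductionOps.*

@main def demo(): Unit =
  println(torch.abs(-5))           // 5
  println(torch.sum(Seq(1, 2, 3))) // 6
```

Callers keep using `torch.abs` etc. unchanged, while each op group compiles as its own unit.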
I just did a quick feasibility test, and there seems to be an issue with default parameters, which are lost when an export is imported in a different compilation unit. See scala/scala3#17930. I could not find a way around it. Since we're using default parameters extensively, this is a blocker, and I guess we'll have to wait until it is fixed.
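The failing pattern from scala/scala3#17930 boils down to exporting a method that has a default argument. Note the bug only manifests when the exported method is consumed from a separate compilation unit; compiled together in one unit, as in this self-contained sketch (with a toy `sum` standing in for a real op), the default still resolves:

```scala
// Sketch of the pattern behind scala/scala3#17930:
// a default argument on an exported method.
object ReductionOps:
  def sum(xs: Seq[Int], start: Int = 0): Int = xs.foldLeft(start)(_ + _)

object torch:
  export ReductionOps.sum

@main def demo(): Unit =
  // Compiled together with the objects above, the default for
  // `start` works. Compiled in a separate unit from the export,
  // the compiler instead reports a missing argument.
  println(torch.sum(Seq(1, 2, 3))) // 6
```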