Moving more modules to C#. #1259
Conversation
Re-opening to get some builds running. Still a work in progress.
I'm trying to deal with the following. In PyTorch it is allowed to do this:

```python
import torch.nn.utils

x = torch.nn.AvgPool1d(1)
x.register_module("???", torch.nn.Linear(1, 1))
x.to(dtype=torch.complex128)
print(x.get_submodule("???").weight.dtype)  # torch.complex128
```

Although I think nobody will do this... if we want to be consistent with PyTorch, then perhaps at least we should disable this.
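For reference, a rough TorchSharp analogue of the snippet above might look like the sketch below. This is hypothetical code: it assumes `register_module` and the dtype conversion behave as in PyTorch, which is exactly the consistency in question here.

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

// Sketch only: if AvgPool1d's _to() returned `this` without visiting its
// children, the registered Linear would keep its original dtype after the
// conversion, unlike in PyTorch.
var pool = nn.AvgPool1d(1);
var lin = nn.Linear(1, 1);
pool.register_module("lin", lin);
pool.to(ScalarType.ComplexFloat64);
Console.WriteLine(lin.weight!.dtype);
```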
I think that's a great idea, but not sure about…
To be radical, we could make all of them behave that way. I think the only problem is that it would cause a great inconsistency with PyTorch. (However, throwing an exception is also somewhat inconsistent. I'm not really sure where the boundary is.)
You can always throw an exception and then relax the constraint later. I'm adding logic to enforce that submodules must also be parameter-less.
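A minimal sketch of what such enforcement might look like (hypothetical, not the PR's actual code; it assumes `register_module` is overridable, and the class and exception message are made up for illustration):

```csharp
using System;
using System.Linq;
using TorchSharp;
using static TorchSharp.torch;

public abstract class ParamLessSketch : nn.Module
{
    protected ParamLessSketch(string name) : base(name) { }

    // Hypothetical check: reject submodules that own parameters, since this
    // module's _to() is a no-op and would never convert them.
    public override void register_module(string name, nn.Module submodule)
    {
        if (submodule.parameters().Any())
            throw new InvalidOperationException(
                $"Submodule '{name}' has parameters and cannot be attached to a parameter-less module.");
        base.register_module(name, submodule);
    }
}
```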
Pushed these changes to GH.
```diff
@@ -45,13 +65,30 @@ public abstract class ParamLessModule<T1, T2, T3> : nn.Module<T1, T2, T3>
     protected internal override nn.Module _to(DeviceType deviceType, int deviceIndex = -1) => this;

     protected internal override nn.Module _to(ScalarType dtype) => this;

     public override void register_buffer(string name, Tensor tensor, bool persistent = true)
```
Perhaps it is worth adding `sealed` to the override?
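For illustration, the sealed variant suggested here might look like this inside `ParamLessModule` (a sketch; the exception type and message are assumptions, not the PR's actual code):

```csharp
// Sealing the override prevents subclasses from re-enabling buffer
// registration on a parameter-less module.
public sealed override void register_buffer(string name, Tensor tensor, bool persistent = true)
{
    throw new InvalidOperationException(
        $"Cannot register buffer '{name}' on a parameter-less module.");
}
```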
Closing it again, just to avoid a lot of CI builds. Will reopen again later.
@shaltielshmid Do you have a link to the todo list of what is yet to be done? I hope this PR continues.
@shaltielshmid It is not normal for @NiklasGustafsson to be silent for so long without letting us know that he is on a long vacation. I hope you are in communication with him on how best to proceed with this PR.
This PR is opened and closed while the ongoing work is tracked in my personal repo. Once it is ready for release, this PR will be reopened.
In PyTorch, tensor fields are not automatically registered as buffers:

```python
import torch.nn

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a_tensor = torch.zeros([])

print(len(MyModel().state_dict()))
# 0
```

However, in current TorchSharp:

```csharp
using TorchSharp;
using static TorchSharp.torch;

Console.WriteLine(new MyModel().state_dict().Count);
// 1

class MyModel : torch.nn.Module
{
    private Tensor aTensor;

    public MyModel() : base(nameof(MyModel))
    {
        aTensor = torch.zeros([]);
        this.RegisterComponents();
    }
}
```

So should we also modify that?
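For comparison, matching PyTorch's explicit behavior in TorchSharp would mean calling `register_buffer` yourself instead of relying on `RegisterComponents`. A sketch under that assumption (the class name is made up for illustration):

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

Console.WriteLine(new ExplicitModel().state_dict().Count);
// 1 -- but only because register_buffer was called explicitly

class ExplicitModel : torch.nn.Module
{
    private Tensor aTensor;

    public ExplicitModel() : base(nameof(ExplicitModel))
    {
        aTensor = torch.zeros(1);
        // Explicit registration, mirroring PyTorch's self.register_buffer(...);
        // note that RegisterComponents() is deliberately not called, so nothing
        // is auto-registered.
        register_buffer(nameof(aTensor), aTensor);
    }
}
```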
Meanwhile I suppose the call of the submodule's … P.S. The current …
How is this task going? 😊😊😊(☆▽☆)
Ah, yes... I was pulled into other tasks (TorchSharp is a background activity for me), and I only found time for smallish things, while this is a big task. One of the smaller tasks was adding the non_blocking argument to a variety of methods.
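If the argument mirrors PyTorch's, usage would presumably look something like this (hypothetical; the exact TorchSharp overload and parameter position are assumptions):

```csharp
using TorchSharp;
using static TorchSharp.torch;

// Hypothetical usage: a PyTorch-style non_blocking flag on a device move.
var x = torch.rand(2, 2);
var y = x.to(torch.CUDA, non_blocking: true);
```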
@NiklasGustafsson I was also pulled into quite a few other tasks these past few weeks - let me know when you've resolved the merge conflicts, and I'll finish up the last few items on our list?
I can probably set aside some time tomorrow to work on the merge conflicts. With a little bit of luck, it'll be pretty mechanical.
I've taken care of the merge conflicts. I'm going to spend a bit of time on the default device issue that doesn't seem to be completely working, and then I'll do a release.
Instead of creating a native-code module object, this PR starts moving module logic to C#, where reasonable.
It also adds missing properties for various modules -- the two go hand in hand.