RuntimeError when using Recurrent blocks #285

Open
naveedunjum opened this issue Feb 12, 2024 · 2 comments

naveedunjum commented Feb 12, 2024

Describe the bug
When I try to use the cuba Recurrent blocks in my network, I get:
RuntimeError: Output 0 of SelectBackward0 is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
I tried this with various networks, and I also replaced the Dense layer with the Recurrent layer in the XOR regression and Oxford tutorials.
Here is the network I am using, which is the same as the XOR network but with a Recurrent block:

import torch

# lava-dl SLAYER, as imported in the XOR regression tutorial
import lava.lib.dl.slayer as slayer


class Network(torch.nn.Module):
    def __init__(self):
        super(Network, self).__init__()

        # CUBA neuron parameters shared by all blocks
        neuron_params = {
                'threshold'     : 0.1,
                'current_decay' : 1,
                'voltage_decay' : 0.1,
                'requires_grad' : True,
            }

        # Dense -> Recurrent -> Dense stack
        self.blocks = torch.nn.ModuleList([
                slayer.block.cuba.Dense(neuron_params, 100, 256),
                slayer.block.cuba.Recurrent(neuron_params, 256, 256),
                slayer.block.cuba.Dense(neuron_params, 256, 1),
            ])

    def forward(self, spike):
        for block in self.blocks:
            spike = block(spike)
        return spike
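
For reference, this is roughly how the network is driven; the batch size, number of input features, and number of time steps below are only illustrative (following the XOR-style setup with input shape (batch, features, time)):

net = Network()
# illustrative random spike input: batch of 8, 100 features, 200 time steps
spike_in = (torch.rand(8, 100, 200) > 0.9).float()
spike_out = net(spike_in)  # RuntimeError is raised here, during the Recurrent block's forward pass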

To reproduce current behavior
Steps to reproduce the behavior:

  1. Replace the Dense layer with the Recurrent layer in the XOR regression and Oxford tutorials.
  2. I get this error (a minimal sketch of the underlying pattern is shown after this list):
    RuntimeError: Output 0 of SelectBackward0 is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
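
For context, something like the following minimal, self-contained PyTorch sketch (an illustration of the error class only, not of the lava-dl internals) triggers the same kind of failure: a custom autograd Function returns an input as-is, and a selected slice of its output is then written in-place.

import torch

class Identity(torch.autograd.Function):
    # Custom Function that returns its input as-is; autograd then treats the
    # output as a view created inside a custom Function.
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

x = torch.randn(4, 10, requires_grad=True)
y = Identity.apply(x)
y[..., 0] = 0.0  # in-place write on a select view of y -> RuntimeError like the one above

Cloning the Function's output before the in-place write (y = Identity.apply(x).clone()) avoids the error in this sketch, as the message suggests, but I don't know whether that is the appropriate fix inside the Recurrent block.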

Expected behavior
It should work without any problems, since the same network works fine when only Dense layers are used.

Environment (please complete the following information):

  • Device: Mac Air M2
  • OS: macOS
  • Lava version [e.g. 0.6.1]
@naveedunjum naveedunjum added the 1-bug Something isn't working label Feb 12, 2024
@PhilippPlank (Contributor) commented

Thank you for reporting this issue. @bamsumit, could you take a look? :)

@bamsumit (Contributor) commented

@naveedunjum can you check it again with the latest codebase? There was a change pushed recently.
