
prune error: dimension out of range (expected to be in range of [-2, 1], but got 3) #12

Open
hewumars opened this issue Nov 3, 2017 · 17 comments


@hewumars

hewumars commented Nov 3, 2017

I get the error "dimension out of range" from this line:

    values = \
        torch.sum((activation * grad), dim = 0).\
            sum(dim=2).sum(dim=3)[0, :, 0, 0].data

@longriyao

Hi! How did you solve it?

@Kuldeep-Attri

Hi!
I am also getting the same error. Did you solve this?
Thanks!!!

@hewumars
Author

@longriyao @Kuldeep-Attri
It is a PyTorch version problem: this repository was written against PyTorch 0.1.
If you use PyTorch 0.2, add the parameter keepdim=True to the sum calls.
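
A minimal sketch of why this matters, using a dummy 4-D activation tensor of shape (batch, channels, h, w): since PyTorch 0.2, sum() drops the reduced dimension unless keepdim=True, so after two reductions only two dimensions remain and dim=3 is out of range, which is exactly the error above.

    import torch

    x = torch.randn(8, 16, 7, 7)                 # batch x channels x h x w
    y = x.sum(dim=0).sum(dim=2)                   # shapes: (16, 7, 7), then (16, 7)
    # y.sum(dim=3) would now raise: dimension out of range [-2, 1], but got 3

    z = x.sum(dim=0, keepdim=True).sum(dim=2, keepdim=True).sum(dim=3, keepdim=True)
    print(z.size())                               # (1, 16, 1, 1): the [0, :, 0, 0] indexing still works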

@Kuldeep-Attri

Kuldeep-Attri commented Nov 29, 2017

Thanks!!! @hewumars

@RohitKeshari

RohitKeshari commented Jan 14, 2018

Hi @hewumars
Following your suggestion, I modified the line to:

    values = \
        torch.sum((activation * grad), dim = 0, keepdim=True).\
            sum(dim=2, keepdim=True).sum(dim=3, keepdim=True)[0, :, 0, 0].data

Now I get the next error, in prune.py at line 33:

RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous

Before that, I get the following intermediate output:

Accuracy : 0.98454429573
Number of pruning iterations to reduce 67% filters 5
Ranking filters.. 
Layers that will be prunned {0: 1, 2: 5, 5: 6, 7: 6, 10: 21, 12: 16, 14: 21, 17: 64, 19: 64, 21: 60, 24: 62, 26: 79, 28: 107}
Prunning filters.. 

Please suggest how I can fix this problem.

@hewumars
Author

The original code uses Python 2; if you use Python 3, please note the differences between the two versions.

@hewumars
Author

I do not know where your problem is.

@RohitKeshari

RohitKeshari commented Jan 14, 2018

Python 2.7.13
pytorch 0.2.0_4

I am posting the full message I received while running python finetune.py --prune:

/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torchvision-0.1.9-py2.7.egg/torchvision/transforms/transforms.py:155: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torchvision-0.1.9-py2.7.egg/torchvision/transforms/transforms.py:390: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
Accuracy : 0.98454429573
Number of prunning iterations to reduce 67% filters 5
Ranking filters.. 
Layers that will be prunned {0: 1, 2: 5, 5: 6, 7: 6, 10: 21, 12: 16, 14: 21, 17: 64, 19: 64, 21: 60, 24: 62, 26: 79, 28: 107}
Prunning filters.. 
Traceback (most recent call last):
  File "finetune.py", line 270, in <module>
    fine_tuner.prune()
  File "finetune.py", line 228, in prune
    model = prune_vgg16_conv_layer(model, layer_index, filter_index)
  File "/home/iab/Rohit/pytorch/filter_selection/ICLR2017/pytorch-pruning-master/prune.py", line 33, in prune_vgg16_conv_layer
    bias = conv.bias)
  File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 250, in __init__
    False, _pair(0), groups, bias)
  File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 34, in __init__
    if bias:
  File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/autograd/variable.py", line 123, in __bool__
    torch.typename(self.data) + " is ambiguous")
RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous

Is it because of an ambiguous value in the layers that will be pruned?

@hewumars
Author

@RohitKeshari

    # hw modify
    is_bias_present = False
    if conv.bias is not None:
        is_bias_present = True
    new_conv = \
        torch.nn.Conv2d(in_channels = conv.in_channels,
                        out_channels = conv.out_channels - 1,
                        kernel_size = conv.kernel_size,
                        stride = conv.stride,
                        padding = conv.padding,
                        dilation = conv.dilation,
                        groups = conv.groups,
                        bias = is_bias_present)
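
For context: prune.py originally passed bias = conv.bias, i.e. the bias Parameter (a Variable) itself, and Conv2d.__init__ then evaluates if bias:, which is what raises the ambiguous-bool RuntimeError in the traceback above. Passing a plain boolean avoids it; an equivalent, more compact sketch of the same fix (not the repository's exact code) would be:

    # Sketch: derive a bool from whether the original layer has a bias term.
    new_conv = torch.nn.Conv2d(in_channels = conv.in_channels,
                               out_channels = conv.out_channels - 1,
                               kernel_size = conv.kernel_size,
                               stride = conv.stride,
                               padding = conv.padding,
                               dilation = conv.dilation,
                               groups = conv.groups,
                               bias = conv.bias is not None)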

@hewumars
Author

@RohitKeshari Step through the code in a debugger and print the variables; that should reveal the problem, because one of your input parameters is wrong.
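
For example, a generic debugging sketch (not code from the repository): pause just before the Conv2d call in prune_vgg16_conv_layer and inspect the arguments being passed in.

    import pdb

    # Hypothetical: drop into the debugger right before building the replacement layer.
    pdb.set_trace()
    print(type(conv.bias))                       # a Parameter/Variable in PyTorch 0.2, not a bool
    print(conv.in_channels, conv.out_channels)   # sanity-check the sizes being passed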

@RohitKeshari

Thanks @hewumars, in place of conv.bias it should be True (a plain boolean). It works for me now.

@ssxlconch

Hi, I ran python finetune.py --train,
but the output accuracy is always 1.0,
and python finetune.py --prune shows:
Accuracy : 1.0
Number of prunning iterations to reduce 67% filters 5
Can you help me?

@ssxlconch

@RohitKeshari where did you add the modified code?
I added it like this:

    if not next_conv is None:
        next_conv.bias = False
        if conv.bias is not None:
            next_conv.bias = True
        next_new_conv = \
            torch.nn.Conv2d(in_channels = next_conv.in_channels - 1,
                            out_channels = next_conv.out_channels,
                            kernel_size = next_conv.kernel_size,
                            stride = next_conv.stride,
                            padding = next_conv.padding,
                            dilation = next_conv.dilation,
                            groups = next_conv.groups,
                            bias = next_conv.bias)

but it did not solve the problem.

@RohitKeshari

@tearhupo121031 What is your problem? Please post the error message here. Are you using the same dataset? If you are using a different dataset, it might simply be an easy one.

@MrLinNing

Sorry, I have the same problem. I added this:

    next_new_conv = \
        torch.nn.Conv2d(in_channels = next_conv.in_channels - 1,
                        out_channels = next_conv.out_channels,
                        kernel_size = next_conv.kernel_size,
                        stride = next_conv.stride,
                        padding = next_conv.padding,
                        dilation = next_conv.dilation,
                        groups = next_conv.groups,
                        bias = True)

[image attachment]

@CF2220160244

Just delete the last argument (the bias line), so it becomes:

    if not next_conv is None:
        next_conv.bias = False
        if conv.bias is not None:
            next_conv.bias = True
        next_new_conv = \
            torch.nn.Conv2d(in_channels = next_conv.in_channels - 1,
                            out_channels = next_conv.out_channels,
                            kernel_size = next_conv.kernel_size,
                            stride = next_conv.stride,
                            padding = next_conv.padding,
                            dilation = next_conv.dilation,
                            groups = next_conv.groups)
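
Note that torch.nn.Conv2d defaults to bias=True when the bias argument is omitted, which is what the VGG16 convolution layers here use anyway. A variant that mirrors the earlier fix and no longer needs the next_conv.bias = True/False assignments (just a sketch, not the repository's code) would be:

    # Sketch: keep the bias argument but pass a bool, leaving next_conv.bias untouched.
    next_new_conv = \
        torch.nn.Conv2d(in_channels = next_conv.in_channels - 1,
                        out_channels = next_conv.out_channels,
                        kernel_size = next_conv.kernel_size,
                        stride = next_conv.stride,
                        padding = next_conv.padding,
                        dilation = next_conv.dilation,
                        groups = next_conv.groups,
                        bias = next_conv.bias is not None)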

@ghost

ghost commented Nov 30, 2018

@MrLinNing

Python 3:
model.features._modules.items() returns a view object, so you cannot index into it.

Python 2.7:
model.features._modules.items() returns a list, so you can index into it.

You can either use Python 2.7 or wrap it as list(model.features._modules.items()).
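
A minimal sketch of the Python 3 workaround (model and layer_index are placeholders for whatever finetune.py/prune.py are iterating over):

    # Hypothetical illustration: make the module mapping indexable under Python 3.
    modules = list(model.features._modules.items())
    name, layer = modules[layer_index]    # e.g. the conv layer selected for pruning
    print(name, layer)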
