
Sharing model weights for Activity Net #4

Closed
arnavc1712 opened this issue Oct 6, 2020 · 9 comments

@arnavc1712

Hi,
Would it be possible to share the model weights from the ActivityNet training?

@kylemin
Member

kylemin commented Oct 7, 2020

Hi again.
Please refer to the previous issue: link.
Thank you.

@arnavc1712
Author

Hi Kyle,
Thanks for the reply. Yes, but that would require training the model again. Would it be possible to share the model weights after training on ActivityNet?

@kylemin
Member

kylemin commented Oct 8, 2020

They are already uploaded to the same Google Drive link mentioned in that issue!
Thank you.

@arnavc1712
Author

Hi Kyle,
Thank you for the quick reply. I tried loading the weights into the model and also added the 'tfilter' layer as described in the paper. However, the numbers are extremely low. Where could I be going wrong?
IoU threshold:   0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95
val result at 0: 0.01 || 0.87 | 0.06, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.00

@kylemin
Member

kylemin commented Oct 8, 2020

I think your tfilter layer is not defined properly. How did you add the layer? Please share the corresponding lines of your code.

@arnavc1712
Author

Hi Kyle,
I used this in __init__():
self.tfilter = nn.Conv1d(num_class, num_class, kernel_size=13, stride=1, padding=12, dilation=2, bias=False, groups=num_class)
And this in forward():
tcam = F.relu(self.tfilter(tcam.permute([0, 2, 1]))).permute([0, 2, 1])

@kylemin
Member

kylemin commented Oct 11, 2020

Please try it without the ReLU activation; we applied tfilter without one.
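The exchange above can be sketched as a minimal, self-contained check (not the authors' exact code): a depthwise temporal filter with the Conv1d hyperparameters quoted in the thread, applied without ReLU, preserves the (batch, time, num_class) shape of the class-activation sequence. `num_class` and the tensor sizes here are assumed values for illustration.

```python
import torch
import torch.nn as nn

num_class = 20  # assumed value for illustration

# Depthwise (groups=num_class) temporal filter, as quoted in the thread.
# With kernel_size=13 and dilation=2 the effective receptive field is
# 1 + (13 - 1) * 2 = 25, and padding=12 on each side keeps the length fixed.
tfilter = nn.Conv1d(num_class, num_class, kernel_size=13, stride=1,
                    padding=12, dilation=2, bias=False, groups=num_class)

# tcam: (batch, time, num_class), as in the forward() snippet above
tcam = torch.randn(2, 100, num_class)

# Apply on the temporal axis; note: no ReLU around the filter
out = tfilter(tcam.permute(0, 2, 1)).permute(0, 2, 1)
print(out.shape)  # same shape as tcam: torch.Size([2, 100, 20])
```

The shape check is the quickest way to confirm the padding/dilation arithmetic is right before re-running evaluation.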

@memoryjing

Hi Kyle,
Thanks very much for sharing the code. Could you please help me with where to add the filter?

@kylemin
Member

kylemin commented Oct 28, 2020

Hi @memoryjing,
You can add it at the end of the forward function.

...
tcam = (cls_x_r + cls_x_rat * self.omega) * self.mul_r + (cls_x_f + cls_x_fat * self.omega) * self.mul_f
tcam = self.tfilter(tcam.permute(0,2,1)).permute(0,2,1)
return ...
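A hedged, runnable sketch of the end-of-forward combination described in the comment above. The names `cls_x_r`/`cls_x_rat` (RGB branch), `cls_x_f`/`cls_x_fat` (flow branch), `omega`, `mul_r`, and `mul_f` come from the snippet; the tensors and scalar values here are stand-ins, not the repository's actual values.

```python
import torch
import torch.nn as nn

B, T, C = 2, 50, 20  # batch, time, num_class (assumed sizes)

# Stand-in branch scores: raw and attention-weighted, RGB and flow
cls_x_r, cls_x_rat = torch.randn(B, T, C), torch.randn(B, T, C)
cls_x_f, cls_x_fat = torch.randn(B, T, C), torch.randn(B, T, C)
omega, mul_r, mul_f = 0.6, 0.5, 0.5  # placeholder weights

# Depthwise temporal filter with the hyperparameters from the thread
tfilter = nn.Conv1d(C, C, kernel_size=13, stride=1,
                    padding=12, dilation=2, bias=False, groups=C)

# Combine the two streams, then smooth along time (no ReLU)
tcam = (cls_x_r + cls_x_rat * omega) * mul_r + (cls_x_f + cls_x_fat * omega) * mul_f
tcam = tfilter(tcam.permute(0, 2, 1)).permute(0, 2, 1)
print(tcam.shape)  # torch.Size([2, 50, 20])
```

The permute pair is needed because `nn.Conv1d` expects (batch, channels, time), while the class scores are kept as (batch, time, num_class).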

@kylemin kylemin closed this as completed Oct 28, 2020
@MichiganCOG MichiganCOG locked as too heated and limited conversation to collaborators Oct 28, 2020
@kylemin kylemin reopened this Oct 28, 2020
@kylemin kylemin closed this as completed Jan 14, 2021