[Feature] [CodeCamp #63] Add VIG Backbone #1304
Conversation
@MODELS.register_module()
class pyramid_vig(BaseBackbone):
Class names should follow the Google Python style guide (CapWords).
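For instance, a minimal sketch of the rename (the final class name is up to the authors; BaseBackbone is stubbed so the snippet is self-contained):

```python
# Sketch of the rename per the Google Python style guide: class names
# use CapWords, so pyramid_vig would become e.g. PyramidVig.
class BaseBackbone:  # stub; the real base class lives in mmcls
    pass


class PyramidVig(BaseBackbone):  # CapWords instead of pyramid_vig
    pass
```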
'ti': [[2, 2, 6, 2], [48, 96, 240, 384]],
's': [[2, 2, 6, 2], [80, 160, 400, 640]],
'm': [[2, 2, 16, 2], [96, 192, 384, 768]],
'b': [[2, 2, 18, 2], [128, 256, 512, 1024]]
Better to use the full names 'tiny', 'small', etc.
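A sketch of the arch settings keyed by full names (the dict layout is illustrative; the [depths, channels] values are from the snippet above):

```python
# Arch settings keyed by full names instead of 'ti'/'s'/'m'/'b'.
arch_settings = {
    'tiny': [[2, 2, 6, 2], [48, 96, 240, 384]],
    'small': [[2, 2, 6, 2], [80, 160, 400, 640]],
    'medium': [[2, 2, 16, 2], [96, 192, 384, 768]],
    'base': [[2, 2, 18, 2], [128, 256, 512, 1024]],
}
```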
Please change the config file names as well.
}
def __init__(self,
             model_cnf,
model_cnf -> arch
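A minimal sketch of the renamed signature, assuming the backbone resolves an arch string to its stage settings itself (names here are illustrative, not the final mmcls code):

```python
# Hypothetical shape of __init__ after renaming model_cnf to arch:
# a string key is looked up in arch_settings, a pair is used directly.
class PyramidVigSketch:
    arch_settings = {
        'tiny': [[2, 2, 6, 2], [48, 96, 240, 384]],
    }

    def __init__(self, arch='tiny'):
        if isinstance(arch, str):
            depths, channels = self.arch_settings[arch]
        else:  # allow passing a custom [depths, channels] pair
            depths, channels = arch
        self.depths = depths
        self.channels = channels
```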
k,
act,
norm,
bias,
epsilon,
use_stochastic,
conv,
drop_path,
dropout,
n_classes,
norm_eval=False,
frozen_stages=0):
Suggested change:
k=9,
graph_conv_type='mr',
graph_conv_bias=True,
norm_cfg=dict(type='BN'),
act_cfg=dict(type='GELU'),
epsilon=0.2,
use_stochastic=False,
drop_path=0.,
dropout=0.,
norm_eval=False,
frozen_stages=0,
init_cfg=None):
n_classes,
norm_eval=False,
frozen_stages=0):
super(pyramid_vig, self).__init__()
Suggested change:
super().__init__(init_cfg=init_cfg)
import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import build_activation_layer, build_conv_layer
from torch.nn import Sequential as Seq
Please use the Sequential from mmengine, and don't use the alias.
Suggested change:
from mmengine.model import Sequential
torch.zeros(1, channels[0], 224 // 4, 224 // 4))
HW = 224 // 4 * 224 // 4

self.backbone = nn.ModuleList([])
This module is already a backbone; don't use backbone as the name of a sub-module.
self.prediction = Seq(
    build_conv_layer(None, channels[-1], 1024, 1, bias=True),
    nn.BatchNorm2d(1024), build_activation_layer(dict(type=act)),
    nn.Dropout(dropout),
    build_conv_layer(None, 1024, n_classes, 1, bias=True))
This section belongs in the head, not the backbone.
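The division of labour can be sketched like this (plain-Python stand-ins, not the mmcls API):

```python
# The backbone only extracts features; the prediction layers
# (conv + BN + act + dropout + final conv) and num_classes belong
# to a separate head module.
class BackboneSketch:
    def forward(self, x):
        return {'feats': x}  # feature maps only, no class scores


class ClsHeadSketch:
    def __init__(self, num_classes):
        self.num_classes = num_classes  # the head owns the class count

    def forward(self, outputs):
        # stand-in for the prediction layers shown above
        return {'logits': (outputs['feats'], self.num_classes)}
```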
So should I put it in mmcls/models/heads, or use an existing script in mmcls/models/heads?
mmcls/models/backbones/vig.py
Outdated
if use_dilation:
    self.stage_blocks = Sequential(*[
        Sequential(
            Grapher(
                channels,
                num_knn[i],
                min(i // 4 + 1, max_dilation),
                graph_conv_type,
                act_cfg,
                norm_cfg,
                graph_conv_bias,
                use_stochastic,
                epsilon,
                1,
                drop_path=dpr[i],
                relative_pos=relative_pos),
            FFN(channels, channels * 4, act=act_cfg, drop_path=dpr[i]))
        for i in range(self.n_blocks)
    ])
else:
    self.stage_blocks = Sequential(*[
        Sequential(
            Grapher(
                channels,
                num_knn[i],
                1,
                graph_conv_type,
                act_cfg,
                norm_cfg,
                graph_conv_bias,
                use_stochastic,
                epsilon,
                1,
                drop_path=dpr[i],
                relative_pos=relative_pos),
            FFN(channels, channels * 4, act=act_cfg, drop_path=dpr[i]))
        for i in range(self.n_blocks)
    ])
Since the only difference between use_dilation=True and use_dilation=False is the dilation argument, directly use:
Suggested change:
self.stage_blocks = Sequential(*[
    Sequential(
        Grapher(
            in_channels=channels,
            k=num_knn[i],
            dilation=min(i // 4 + 1, max_dilation) if use_dilation else 1,
            conv=graph_conv_type,
            act=act_cfg,
            norm=norm_cfg,
            bias=graph_conv_bias,
            stochastic=use_stochastic,
            epsilon=epsilon,
            drop_path=dpr[i],
            relative_pos=relative_pos),
        FFN(channels, channels * 4, act=act_cfg, drop_path=dpr[i]))
    for i in range(self.n_blocks)
])
And better to use keyword arguments instead of positional arguments, so that it's easier for users to see which value maps to which parameter.
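A toy illustration of the point (grapher here is a stand-in, not the real Grapher signature):

```python
# With positional arguments the reader must memorise the parameter
# order; with keyword arguments every value documents itself.
def grapher(in_channels, k, dilation=1, epsilon=0.2, drop_path=0.0):
    return dict(in_channels=in_channels, k=k, dilation=dilation,
                epsilon=epsilon, drop_path=drop_path)

positional = grapher(96, 9, 1, 0.2, 0.1)       # which one is epsilon?
keyword = grapher(in_channels=96, k=9, dilation=1,
                  epsilon=0.2, drop_path=0.1)  # self-describing
```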
Ok, I will change it.
mmcls/models/backbones/vig.py
Outdated
@@ -92,6 +92,15 @@ def xy_pairwise_distance(x, y):
    x_square = torch.sum(torch.mul(x, x), dim=-1, keepdim=True)
    y_square = torch.sum(torch.mul(y, y), dim=-1, keepdim=True)
    return x_square + xy_inner + y_square.transpose(2, 1)
    return xy_inner
Ignore this line, I will delete it
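For reference, the quantity xy_pairwise_distance computes (once the stray return is removed) is the squared Euclidean distance, assuming xy_inner holds -2·x·yᵀ as in the diff above; a pure-Python sketch:

```python
# ||x_i - y_j||^2 = ||x_i||^2 - 2<x_i, y_j> + ||y_j||^2,
# matching x_square + xy_inner + y_square.transpose(2, 1)
# when xy_inner = -2 * (x @ y^T).
def pairwise_sq_dist(xs, ys):
    def sq_norm(v):
        return sum(c * c for c in v)
    return [[sq_norm(x) - 2 * sum(a * b for a, b in zip(x, y)) + sq_norm(y)
             for y in ys] for x in xs]
```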
Codecov Report
Base: 0.02% // Head: 86.64% // Increases project coverage by +86.61%.

Coverage Diff (dev-1.x vs #1304):
+ Coverage    0.02%    86.64%    +86.61%
  Files         121       163        +42
  Lines        8217     13069      +4852
  Branches     1368      2086       +718
+ Hits            2     11323     +11321
+ Misses       8215      1404      -6811
- Partials        0       342       +342

View full report at Codecov.
Motivation
Add the VIG backbone and the converted weights.
Paper: https://arxiv.org/abs/2206.00272
Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
Modification
Add the model files and config files.
TODO
Test commands:
python tools/test.py configs/vig/vig_tiny_8xb32_in1k.py vig_checkpoint_covert/vig_ti.pth
python tools/test.py configs/vig/vig_small_8xb32_in1k.py vig_checkpoint_covert/vig_s.pth
python tools/test.py configs/vig/vig_base_8xb32_in1k.py vig_checkpoint_covert/vig_b.pth
python tools/test.py configs/vig/pvig_tiny_8xb32_in1k.py vig_checkpoint_covert/pvig_ti.pth
python tools/test.py configs/vig/pvig_small_8xb32_in1k.py vig_checkpoint_covert/pvig_s.pth
python tools/test.py configs/vig/pvig_medium_8xb32_in1k.py vig_checkpoint_covert/pvig_m.pth
python tools/test.py configs/vig/pvig_base_8xb32_in1k.py vig_checkpoint_covert/pvig_b.pth
Tips
The weight conversion script is tools/model_converters/vig_to_mmcls.py