
[Feature] [CodeCamp #63] Add VIG Backbone #1304

Merged: 30 commits into open-mmlab:dev-1.x on Jan 17, 2023

Conversation


@szwlh-c (Contributor) commented on Jan 6, 2023

Motivation

Add the VIG backbone and a script to convert the official pre-trained weights.
Paper: https://arxiv.org/abs/2206.00272
Code: https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch

Modification

Added the model file and configuration files.

TODO

  • Align inference accuracy with the official weights
  • Code review and code refactoring
  • Finish unit tests and docs

Test code

python tools/test.py configs/vig/vig_tiny_8xb32_in1k.py vig_checkpoint_covert/vig_ti.pth
python tools/test.py configs/vig/vig_small_8xb32_in1k.py vig_checkpoint_covert/vig_s.pth
python tools/test.py configs/vig/vig_base_8xb32_in1k.py vig_checkpoint_covert/vig_b.pth

python tools/test.py configs/vig/pvig_tiny_8xb32_in1k.py vig_checkpoint_covert/pvig_ti.pth
python tools/test.py configs/vig/pvig_small_8xb32_in1k.py vig_checkpoint_covert/pvig_s.pth
python tools/test.py configs/vig/pvig_medium_8xb32_in1k.py vig_checkpoint_covert/pvig_m.pth
python tools/test.py configs/vig/pvig_base_8xb32_in1k.py vig_checkpoint_covert/pvig_b.pth

Tips

The weight conversion script is tools/model_converters/vig_to_mmcls.py.
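For reference, a converter of this kind is essentially a state-dict key remapping. A minimal sketch of the pattern (the 'backbone.' prefix and function name are illustrative; the actual mapping lives in tools/model_converters/vig_to_mmcls.py):

import torch

def convert_vig_checkpoint(src_path, dst_path):
    # Load the official checkpoint; some releases wrap the weights
    # in a 'state_dict' entry, others store them at the top level.
    ckpt = torch.load(src_path, map_location='cpu')
    state_dict = ckpt.get('state_dict', ckpt)
    new_state_dict = {}
    for key, value in state_dict.items():
        # Illustrative rename: place the official keys under the
        # 'backbone.' prefix used by MMClassification checkpoints.
        new_state_dict['backbone.' + key] = value
    torch.save({'state_dict': new_state_dict}, dst_path)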

@szwlh-c szwlh-c marked this pull request as ready for review January 6, 2023 13:49
@szwlh-c szwlh-c marked this pull request as draft January 6, 2023 14:20


@MODELS.register_module()
class pyramid_vig(BaseBackbone):
Member: The name of classes should obey the Google Python style.
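Per that guide, class names use CapWords. A minimal sketch of the requested rename (the final class name in the merged code may differ):

from mmcls.models.backbones.base_backbone import BaseBackbone
from mmcls.registry import MODELS

@MODELS.register_module()
class PyramidVig(BaseBackbone):
    """Pyramid ViG backbone, renamed from pyramid_vig to CapWords style."""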

Comment on lines 57 to 60
'ti': [[2, 2, 6, 2], [48, 96, 240, 384]],
's': [[2, 2, 6, 2], [80, 160, 400, 640]],
'm': [[2, 2, 16, 2], [96, 192, 384, 768]],
'b': [[2, 2, 18, 2], [128, 256, 512, 1024]]
Member: Better to use the full names tiny, small, etc.
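For example, the arch settings could key on the full names (a sketch of the same data with renamed keys; the merged structure may differ):

arch_settings = {
    'tiny': [[2, 2, 6, 2], [48, 96, 240, 384]],
    'small': [[2, 2, 6, 2], [80, 160, 400, 640]],
    'medium': [[2, 2, 16, 2], [96, 192, 384, 768]],
    'base': [[2, 2, 18, 2], [128, 256, 512, 1024]],
}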

Member: Please change the config file names as well.

}

def __init__(self,
model_cnf,
Member: model_cnf -> arch

Comment on lines 65 to 76
k,
act,
norm,
bias,
epsilon,
use_stochastic,
conv,
drop_path,
dropout,
n_classes,
norm_eval=False,
frozen_stages=0):
Member:
Suggested change:
-k,
-act,
-norm,
-bias,
-epsilon,
-use_stochastic,
-conv,
-drop_path,
-dropout,
-n_classes,
-norm_eval=False,
-frozen_stages=0):
+k=9,
+graph_conv_type='mr',
+graph_conv_bias=True,
+norm_cfg=dict(type='BN'),
+act_cfg=dict(type='GELU'),
+epsilon=0.2,
+use_stochastic=False,
+drop_path=0.,
+dropout=0.,
+norm_eval=False,
+frozen_stages=0,
+init_cfg=None):

super(pyramid_vig, self).__init__()
Member:
Suggested change:
-super(pyramid_vig, self).__init__()
+super().__init__(init_cfg=init_cfg)

import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import build_activation_layer, build_conv_layer
from torch.nn import Sequential as Seq
Member: Please use the Sequential in mmengine, and don't use the alias.

Suggested change:
-from torch.nn import Sequential as Seq
+from mmengine.model import Sequential
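mmengine's Sequential is a drop-in replacement for torch.nn.Sequential that additionally accepts an init_cfg; for example (an illustrative snippet, not code from this PR):

import torch.nn as nn
from mmengine.model import Sequential

# Behaves like nn.Sequential but supports MMEngine weight initialization.
block = Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    init_cfg=dict(type='Kaiming', layer='Conv2d'))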

torch.zeros(1, channels[0], 224 // 4, 224 // 4))
HW = 224 // 4 * 224 // 4

self.backbone = nn.ModuleList([])
Member: This module is already the backbone; don't use backbone as the name of a sub-module.
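For instance, a stage-oriented name avoids the confusion (hypothetical rename, shown in a minimal shell):

import torch.nn as nn

class PyramidVig(nn.Module):  # illustrative shell only
    def __init__(self):
        super().__init__()
        # 'self.backbone' inside a backbone is ambiguous; name the
        # sub-module after what it holds, e.g. the stacked stages.
        self.stages = nn.ModuleList()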

Comment on lines 125 to 129
self.prediction = Seq(
    build_conv_layer(None, channels[-1], 1024, 1, bias=True),
    nn.BatchNorm2d(1024),
    build_activation_layer(dict(type=act)),
    nn.Dropout(dropout),
    build_conv_layer(None, 1024, n_classes, 1, bias=True))
Member: The section should belong to the head, instead of the backbone.

Contributor Author: So should I put it in mmcls/models/heads, or use an existing script in mmcls/models/heads?
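For reference, a head under mmcls/models/heads wrapping these layers could look roughly like this (a hypothetical sketch, not the reviewer's answer or the merged code; VigClsHead and its parameters are illustrative):

import torch.nn as nn
import torch.nn.functional as F
from mmcls.models.heads import ClsHead
from mmcls.registry import MODELS

@MODELS.register_module()
class VigClsHead(ClsHead):
    """Hypothetical head holding the prediction layers from the backbone."""

    def __init__(self, num_classes, in_channels,
                 hidden_dim=1024, dropout=0., **kwargs):
        super().__init__(**kwargs)
        self.classifier = nn.Sequential(
            nn.Conv2d(in_channels, hidden_dim, 1, bias=True),
            nn.BatchNorm2d(hidden_dim),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Conv2d(hidden_dim, num_classes, 1, bias=True))

    def forward(self, feats):
        # feats is the tuple of backbone outputs; pool the last feature
        # map to 1x1, classify with 1x1 convs, and flatten to (N, classes).
        x = F.adaptive_avg_pool2d(feats[-1], 1)
        return self.classifier(x).flatten(1)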

@szwlh-c szwlh-c marked this pull request as ready for review January 11, 2023 11:32
@szwlh-c szwlh-c requested a review from mzr1996 January 11, 2023 11:34
Comment on lines 534 to 571
if use_dilation:
    self.stage_blocks = Sequential(*[
        Sequential(
            Grapher(
                channels,
                num_knn[i],
                min(i // 4 + 1, max_dilation),
                graph_conv_type,
                act_cfg,
                norm_cfg,
                graph_conv_bias,
                use_stochastic,
                epsilon,
                1,
                drop_path=dpr[i],
                relative_pos=relative_pos),
            FFN(channels, channels * 4, act=act_cfg, drop_path=dpr[i]))
        for i in range(self.n_blocks)
    ])
else:
    self.stage_blocks = Sequential(*[
        Sequential(
            Grapher(
                channels,
                num_knn[i],
                1,
                graph_conv_type,
                act_cfg,
                norm_cfg,
                graph_conv_bias,
                use_stochastic,
                epsilon,
                1,
                drop_path=dpr[i],
                relative_pos=relative_pos),
            FFN(channels, channels * 4, act=act_cfg, drop_path=dpr[i]))
        for i in range(self.n_blocks)
    ])
Member:
Since the only difference between use_dilation=True and use_dilation=False is the dilation argument, directly use:

Suggested change (replacing the whole if/else block above):
self.stage_blocks = Sequential(*[
    Sequential(
        Grapher(
            in_channels=channels,
            k=num_knn[i],
            # Compute the dilation per block inside the comprehension
            # so that `i` is in scope.
            dilation=min(i // 4 + 1, max_dilation) if use_dilation else 1,
            conv=graph_conv_type,
            act=act_cfg,
            norm=norm_cfg,
            bias=graph_conv_bias,
            stochastic=use_stochastic,
            epsilon=epsilon,
            drop_path=dpr[i],
            relative_pos=relative_pos),
        FFN(channels, channels * 4, act=act_cfg, drop_path=dpr[i]))
    for i in range(self.n_blocks)
])

Member: And better to use keyword arguments instead of positional arguments, so that it's easier for users to see which value maps to which parameter.

Contributor Author: Ok, I will change it.

@@ -92,6 +92,15 @@ def xy_pairwise_distance(x, y):
x_square = torch.sum(torch.mul(x, x), dim=-1, keepdim=True)
y_square = torch.sum(torch.mul(y, y), dim=-1, keepdim=True)
return x_square + xy_inner + y_square.transpose(2, 1)
return xy_inner
Contributor Author: Ignore this line, I will delete it.
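After deleting the unreachable line, the helper reduces to the standard expansion ||x - y||^2 = ||x||^2 - 2<x, y> + ||y||^2. A sketch, assuming xy_inner is computed as -2 * x @ y^T earlier in the function, as in the official implementation:

import torch

def xy_pairwise_distance(x, y):
    """Squared pairwise distances between point sets x and y."""
    with torch.no_grad():
        # The -2<x, y> cross term of the expansion.
        xy_inner = -2 * torch.matmul(x, y.transpose(2, 1))
        x_square = torch.sum(torch.mul(x, x), dim=-1, keepdim=True)
        y_square = torch.sum(torch.mul(y, y), dim=-1, keepdim=True)
        return x_square + xy_inner + y_square.transpose(2, 1)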

@szwlh-c szwlh-c requested a review from mzr1996 January 12, 2023 09:32
@szwlh-c szwlh-c changed the title from "Add VIG Backbone" to "[Feature] [CodeCamp #63] Add VIG Backbone" on Jan 16, 2023
codecov bot commented on Jan 17, 2023

Codecov Report

Base: 0.02% // Head: 86.64% // Increases project coverage by +86.61% 🎉

Coverage data is based on head (dbd10cc) compared to base (b8b31e9).
Patch has no changes to coverable lines.

Additional details and impacted files
@@             Coverage Diff              @@
##           dev-1.x    #1304       +/-   ##
============================================
+ Coverage     0.02%   86.64%   +86.61%     
============================================
  Files          121      163       +42     
  Lines         8217    13069     +4852     
  Branches      1368     2086      +718     
============================================
+ Hits             2    11323    +11321     
+ Misses        8215     1404     -6811     
- Partials         0      342      +342     
Flag        Coverage Δ
unittests   86.64% <ø> (+86.61%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmcls/datasets/transforms/compose.py
mmcls/models/backbones/mixmim.py 86.84% <0.00%> (ø)
mmcls/models/backbones/deit3.py 94.52% <0.00%> (ø)
mmcls/models/backbones/davit.py 84.94% <0.00%> (ø)
mmcls/engine/optimizers/adan_t.py 10.60% <0.00%> (ø)
mmcls/engine/hooks/retriever_hooks.py 72.72% <0.00%> (ø)
mmcls/models/retrievers/image2image.py 90.90% <0.00%> (ø)
mmcls/models/classifiers/hugging_face.py 25.64% <0.00%> (ø)
mmcls/models/heads/margin_head.py 89.13% <0.00%> (ø)
mmcls/models/backbones/mobilevit.py 91.15% <0.00%> (ø)
... and 154 more


@mzr1996 merged commit c98dc45 into open-mmlab:dev-1.x on Jan 17, 2023