
[Bug] Layer normalisation not implemented for transformers #1917

Open
teethoe opened this issue Jul 3, 2024 · 1 comment

teethoe commented Jul 3, 2024

Branch

main branch (mmpretrain version)

Describe the bug

I was running inference with the ViTPose model on a custom image, using the code from the MMPose tutorial.
The only changes I made are the pose config and checkpoint, which I set to the following:

pose_config = MMPOSE_ROOT / Path('configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_ViTPose-huge_8xb64-210e_coco-256x192.py')
pose_checkpoint = 'https://download.openmmlab.com/mmpose/v1/pretrained_models/mae_pretrain_vit_huge_20230913.pth'
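
For context, the rest of the script follows the tutorial's top-down inference example; a minimal sketch of that call (assuming MMPose 1.x's mmpose.apis, with 'custom.jpg' as a placeholder path for the custom image):

from mmpose.apis import init_model, inference_topdown

# Build the ViTPose-huge top-down model from the config and checkpoint above
model = init_model(str(pose_config), pose_checkpoint, device='cuda:0')

# Run top-down inference on the custom image (the whole image is used as the bbox by default)
results = inference_topdown(model, 'custom.jpg')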

Environment

/home/ubuntu/miniconda3/envs/openmmlab/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
{'sys.platform': 'linux',
 'Python': '3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0]',
 'CUDA available': True,
 'MUSA available': False,
 'numpy_random_seed': 2147483648,
 'GPU 0': 'NVIDIA A2',
 'CUDA_HOME': '/usr/local/cuda',
 'NVCC': 'Cuda compilation tools, release 12.1, V12.1.66',
 'GCC': 'gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0',
 'PyTorch': '2.3.0',
 'TorchVision': '0.18.0',
 'OpenCV': '4.10.0',
 'MMEngine': '0.10.4',
 'MMCV': '1.7.2',
 'MMPreTrain': '1.2.0+4e28c70'}

Other information

  1. No modifications to the code or config.
  2. The default norm_cfg of the encoder layer in mmpretrain.models.backbones.vision_transformer (and every other transformer backbone) is 'LN', but layer normalisation was never implemented in mmpretrain.models.utils.norm and 'LN' is not registered in the MODELS registry, so building the backbone fails (see the check below).
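
A quick way to confirm the missing registration (a hypothetical check, assuming MMEngine-style registry semantics):

from mmpretrain.registry import MODELS

# Before the fix, 'LN' is not a registered type, so resolving the default
# norm_cfg=dict(type='LN') raises a KeyError when the backbone is built
print(MODELS.get('LN'))  # -> None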
@xiaojieli0903 (Collaborator) commented

Hi, you can add the following code in mmpretrain.models.utils.norm. It works for me.

# Imports, in case they are not already present in the file
import torch.nn as nn
from mmpretrain.registry import MODELS

# Register the standard torch normalisation layers under the short names
# that norm_cfg dicts such as dict(type='LN') refer to
MODELS.register_module('BN', module=nn.BatchNorm2d)
MODELS.register_module('BN1d', module=nn.BatchNorm1d)
MODELS.register_module('BN2d', module=nn.BatchNorm2d)
MODELS.register_module('BN3d', module=nn.BatchNorm3d)
MODELS.register_module('GN', module=nn.GroupNorm)
MODELS.register_module('LN', module=nn.LayerNorm)
MODELS.register_module('IN', module=nn.InstanceNorm2d)
MODELS.register_module('IN1d', module=nn.InstanceNorm1d)
MODELS.register_module('IN2d', module=nn.InstanceNorm2d)
MODELS.register_module('IN3d', module=nn.InstanceNorm3d)
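
With these registrations in place, the default norm_cfg should resolve. A quick sanity check (a sketch assuming MMEngine-style registry semantics, not part of the original reply):

import torch.nn as nn
from mmpretrain.registry import MODELS

# 'LN' now maps to torch's LayerNorm in the MODELS registry
assert MODELS.get('LN') is nn.LayerNorm

# Build an nn.LayerNorm through the registry (1280 is ViT-huge's embedding dim)
ln = MODELS.build(dict(type='LN', normalized_shape=1280, eps=1e-6))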
