I am encountering a runtime error when attempting to train a model with a custom dataset using OpenPCDet. The error seems to be related to a mismatch in the number of input channels during the forward pass.
I followed the custom dataset tutorial provided here and modified the pv_rcnn.yaml file accordingly. Below are the relevant configuration files:
Dataset config:
DATA_PATH: '../data/custom'
_BASE_CONFIG_: tools/cfgs/dataset_configs/custom_dataset.yaml

POINT_CLOUD_RANGE: [-70055.2, -46996.7, -60359.5, 98233.5, 72900.7, 94498.5]

DATA_SPLIT: {
    'train': train,
    'test': val
}

INFO_PATH: {
    'train': [custom_infos_train.pkl],
    'test': [custom_infos_val.pkl],
}

POINT_FEATURE_ENCODING: {
    encoding_type: absolute_coordinates_encoding,
    used_feature_list: ['x', 'y', 'z'],
    src_feature_list: ['x', 'y', 'z'],
}

DATA_AUGMENTOR:
    DISABLE_AUG_LIST: ['placeholder']
    AUG_CONFIG_LIST:
        - NAME: gt_sampling
          USE_ROAD_PLANE: False
          DB_INFO_PATH:
              - custom_dbinfos_train.pkl
          PREPARE: {
              filter_by_min_points: ['Antenne4G:5', 'Antenne5G:5'],
          }
          SAMPLE_GROUPS: ['Antenne4G:20', 'Antenne5G:20']
          NUM_POINT_FEATURES: 3
          DATABASE_WITH_FAKELIDAR: False
          REMOVE_EXTRA_WIDTH: [0.0, 0.0, 0.0]
          LIMIT_WHOLE_SCENE: True

        - NAME: random_world_flip
          ALONG_AXIS_LIST: ['x', 'y']

        - NAME: random_world_rotation
          WORLD_ROT_ANGLE: [-0.78539816, 0.78539816]

        - NAME: random_world_scaling
          WORLD_SCALE_RANGE: [0.95, 1.05]

DATA_PROCESSOR:
    - NAME: mask_points_and_boxes_outside_range
      REMOVE_OUTSIDE_BOXES: True

    - NAME: shuffle_points
      SHUFFLE_ENABLED: {
          'train': True,
          'test': False
      }

    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [244.6, 241.7, 3871.45]
      MAX_POINTS_PER_VOXEL: 5
      MAX_NUMBER_OF_VOXELS: {
          'train': 150000,
          'test': 150000
      }

CLASS_NAMES: ['Antenne4G', 'Antenne5G']

OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 2
    NUM_EPOCHS: 80

    OPTIMIZER: adam_onecycle
    LR: 0.003
    WEIGHT_DECAY: 0.01
    MOMENTUM: 0.9

    MOMS: [0.95, 0.85]
    PCT_START: 0.4
    DIV_FACTOR: 10
    DECAY_STEP_LIST: [35, 45]
    LR_DECAY: 0.1
    LR_CLIP: 0.0000001

    LR_WARMUP: False
    WARMUP_EPOCH: 1

MAP_CLASS_TO_KITTI: {
    'Antenne4G': 'Car',
    'Antenne5G': 'Car'
}
Model configuration (anchor entries):

{
    'class_name': 'Antenne4G',
    'anchor_sizes': [[552.74, 325.86, 3030.90]],
    'anchor_rotations': [0, 1.57],
    'anchor_bottom_heights': [0],
    'align_center': False,
    'feature_map_stride': 8,
    'matched_threshold': 0.55,
    'unmatched_threshold': 0.4
},
{
    'class_name': 'Antenne5G',
    'anchor_sizes': [[545.50, 305.97, 837.39]],
    'anchor_rotations': [0, 1.57],
    'anchor_bottom_heights': [0],
    'align_center': False,
    'feature_map_stride': 8,
    'matched_threshold': 0.5,
    'unmatched_threshold': 0.35
}
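For reference, the settings that (as far as I understand) drive the channel counts inside VoxelSetAbstraction can be printed from the merged config like this. This is just a minimal sketch; the yaml path below is a placeholder for my actual model config file:

```python
# Sketch: print the config values that feed the VoxelSetAbstraction channel counts.
# The cfg file path is a placeholder for the actual custom pv_rcnn.yaml.
from pcdet.config import cfg, cfg_from_yaml_file

cfg_from_yaml_file('tools/cfgs/custom_models/pv_rcnn.yaml', cfg)

print(cfg.DATA_CONFIG.POINT_FEATURE_ENCODING.used_feature_list)  # ['x', 'y', 'z'] -> 3 point features
print(cfg.MODEL.PFE.NUM_KEYPOINTS)
print(cfg.MODEL.PFE.NUM_OUTPUT_FEATURES)
print(cfg.MODEL.PFE.FEATURES_SOURCE)
```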
Error Message: Upon launching the training script, I receive the following error:
File "/home/ubuntu/v-detr/OpenPCDet/tools/train.py", line 233, in
main()
File "/home/ubuntu/v-detr/OpenPCDet/tools/train.py", line 178, in main
train_model(
File "/home/ubuntu/v-detr/OpenPCDet/tools/train_utils/train_utils.py", line 180, in train_model
accumulated_iter = train_one_epoch(
File "/home/ubuntu/v-detr/OpenPCDet/tools/train_utils/train_utils.py", line 56, in train_one_epoch
loss, tb_dict, disp_dict = model_func(model, batch)
File "/home/ubuntu/v-detr/OpenPCDet/tools/../pcdet/models/init.py", line 44, in model_func
ret_dict, tb_dict, disp_dict = model(batch_dict)
File "/home/ubuntu/miniconda3/envs/pcd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/v-detr/OpenPCDet/tools/../pcdet/models/detectors/pv_rcnn.py", line 11, in forward
batch_dict = cur_module(batch_dict)
File "/home/ubuntu/miniconda3/envs/pcd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/v-detr/OpenPCDet/tools/../pcdet/models/backbones_3d/pfe/voxel_set_abstraction.py", line 393, in forward
pooled_features = self.aggregate_keypoint_features_from_one_source(
File "/home/ubuntu/v-detr/OpenPCDet/tools/../pcdet/models/backbones_3d/pfe/voxel_set_abstraction.py", line 325, in aggregate_keypoint_features_from_one_source
pooled_points, pooled_features = aggregate_func(
File "/home/ubuntu/miniconda3/envs/pcd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/v-detr/OpenPCDet/tools/../pcdet/ops/pointnet2/pointnet2_stack/pointnet2_modules.py", line 95, in forward
new_features = self.mlps[k](new_features)  # (1, C, M1 + M2 ..., nsample)
File "/home/ubuntu/miniconda3/envs/pcd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/miniconda3/envs/pcd/lib/python3.9/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/home/ubuntu/miniconda3/envs/pcd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/miniconda3/envs/pcd/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 457, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/ubuntu/miniconda3/envs/pcd/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 453, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 67, 1, 1], expected input[1, 3, 8192, 16] to have 67 channels, but got 3 channels instead
The error indicates that this convolution expects an input with 67 channels, but the tensor it receives has only 3. I suspect a configuration mismatch, most likely in the NUM_OUTPUT_FEATURES or NUM_KEYPOINTS settings, but I have not been able to resolve it.
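Reading the error literally, it appears to be the standard Conv2d input-channel check, which can be reproduced in isolation with the shapes from the message (just an illustration, not OpenPCDet code):

```python
import torch
import torch.nn as nn

# Illustration only: a 1x1 conv built for 67 input channels fed a tensor
# with 3 channels raises exactly this kind of error.
conv = nn.Conv2d(in_channels=67, out_channels=64, kernel_size=1)
x = torch.randn(1, 3, 8192, 16)  # (batch, channels, num_keypoints, nsample)
conv(x)  # RuntimeError: expected input[1, 3, 8192, 16] to have 67 channels, but got 3 channels instead
```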
Any guidance on how to properly configure the model to accept the correct number of input channels would be greatly appreciated!