TypeError: cannot unpack non-iterable NoneType object #40

Open
jiaowanger opened this issue Aug 13, 2022 · 3 comments
@jiaowanger
D:\anaconda3.9\envs\zj\python.exe F:/1chen/DETR/jin/dn/DN-DETR/main.py
Not using distributed mode
[08/13 08:31:54.869]: git:
sha: a59a5de, status: has uncommited changes, branch: main

[08/13 08:31:54.869]: Command: F:/1chen/DETR/jin/dn/DN-DETR/main.py
[08/13 08:31:54.869]: Full config saved to log/r50\config.json
[08/13 08:31:54.869]: world size: 1
[08/13 08:31:54.869]: rank: 0
[08/13 08:31:54.869]: local_rank: 0
[08/13 08:31:54.870]: args: Namespace(amp=False, aux_loss=True, backbone='resnet50', backbone_freeze_keywords=None, batch_norm_type='FrozenBatchNorm2d', batch_size=2, bbox_loss_coef=5, box_noise_scale=0.4, clip_max_norm=0.1, cls_loss_coef=1, coco_panoptic_path=None, coco_path='COCODIR', dataset_file='coco', debug=False, dec_layers=6, dec_n_points=4, device='cuda', dice_loss_coef=1, dilation=False, dim_feedforward=2048, dist_url='env://', distributed=False, drop_lr_now=False, dropout=0.0, enc_layers=6, enc_n_points=4, eos_coef=0.1, epochs=10, eval=False, find_unused_params=False, finetune_ignore=None, fix_size=False, focal_alpha=0.25, frozen_weights=None, giou_loss_coef=2, hidden_dim=256, label_noise_scale=0.2, local_rank=0, lr=0.0001, lr_backbone=1e-05, lr_drop=40, mask_loss_coef=1, masks=False, modelname='dn_dab_deformable_detr', nheads=8, note='', num_feature_levels=4, num_patterns=0, num_queries=300, num_select=300, num_workers=10, output_dir='log/r50', pe_temperatureH=20, pe_temperatureW=20, position_embedding='sine', pre_norm=False, pretrain_model_path=None, random_refpoints_xy=False, rank=0, remove_difficult=False, resume='', return_interm_layers=False, save_checkpoint_interval=10, save_log=False, save_results=False, scalar=5, seed=42, set_cost_bbox=5, set_cost_class=2, set_cost_giou=2, start_epoch=0, transformer_activation='prelu', two_stage=False, use_dn=False, weight_decay=0.0001, world_size=1)

[08/13 08:31:55.431]: number of params:47206754
[08/13 08:31:55.433]: params:
{
"transformer.level_embed": 1024,
"transformer.encoder.layers.0.self_attn.sampling_offsets.weight": 65536,
"transformer.encoder.layers.0.self_attn.sampling_offsets.bias": 256,
"transformer.encoder.layers.0.self_attn.attention_weights.weight": 32768,
"transformer.encoder.layers.0.self_attn.attention_weights.bias": 128,
"transformer.encoder.layers.0.self_attn.value_proj.weight": 65536,
"transformer.encoder.layers.0.self_attn.value_proj.bias": 256,
"transformer.encoder.layers.0.self_attn.output_proj.weight": 65536,
"transformer.encoder.layers.0.self_attn.output_proj.bias": 256,
"transformer.encoder.layers.0.norm1.weight": 256,
"transformer.encoder.layers.0.norm1.bias": 256,
"transformer.encoder.layers.0.linear1.weight": 524288,
"transformer.encoder.layers.0.linear1.bias": 2048,
"transformer.encoder.layers.0.linear2.weight": 524288,
"transformer.encoder.layers.0.linear2.bias": 256,
"transformer.encoder.layers.0.norm2.weight": 256,
"transformer.encoder.layers.0.norm2.bias": 256,
"transformer.encoder.layers.1.self_attn.sampling_offsets.weight": 65536,
"transformer.encoder.layers.1.self_attn.sampling_offsets.bias": 256,
"transformer.encoder.layers.1.self_attn.attention_weights.weight": 32768,
"transformer.encoder.layers.1.self_attn.attention_weights.bias": 128,
"transformer.encoder.layers.1.self_attn.value_proj.weight": 65536,
"transformer.encoder.layers.1.self_attn.value_proj.bias": 256,
"transformer.encoder.layers.1.self_attn.output_proj.weight": 65536,
"transformer.encoder.layers.1.self_attn.output_proj.bias": 256,
"transformer.encoder.layers.1.norm1.weight": 256,
"transformer.encoder.layers.1.norm1.bias": 256,
"transformer.encoder.layers.1.linear1.weight": 524288,
"transformer.encoder.layers.1.linear1.bias": 2048,
"transformer.encoder.layers.1.linear2.weight": 524288,
"transformer.encoder.layers.1.linear2.bias": 256,
"transformer.encoder.layers.1.norm2.weight": 256,
"transformer.encoder.layers.1.norm2.bias": 256,
"transformer.encoder.layers.2.self_attn.sampling_offsets.weight": 65536,
"transformer.encoder.layers.2.self_attn.sampling_offsets.bias": 256,
"transformer.encoder.layers.2.self_attn.attention_weights.weight": 32768,
"transformer.encoder.layers.2.self_attn.attention_weights.bias": 128,
"transformer.encoder.layers.2.self_attn.value_proj.weight": 65536,
"transformer.encoder.layers.2.self_attn.value_proj.bias": 256,
"transformer.encoder.layers.2.self_attn.output_proj.weight": 65536,
"transformer.encoder.layers.2.self_attn.output_proj.bias": 256,
"transformer.encoder.layers.2.norm1.weight": 256,
"transformer.encoder.layers.2.norm1.bias": 256,
"transformer.encoder.layers.2.linear1.weight": 524288,
"transformer.encoder.layers.2.linear1.bias": 2048,
"transformer.encoder.layers.2.linear2.weight": 524288,
"transformer.encoder.layers.2.linear2.bias": 256,
"transformer.encoder.layers.2.norm2.weight": 256,
"transformer.encoder.layers.2.norm2.bias": 256,
"transformer.encoder.layers.3.self_attn.sampling_offsets.weight": 65536,
"transformer.encoder.layers.3.self_attn.sampling_offsets.bias": 256,
"transformer.encoder.layers.3.self_attn.attention_weights.weight": 32768,
"transformer.encoder.layers.3.self_attn.attention_weights.bias": 128,
"transformer.encoder.layers.3.self_attn.value_proj.weight": 65536,
"transformer.encoder.layers.3.self_attn.value_proj.bias": 256,
"transformer.encoder.layers.3.self_attn.output_proj.weight": 65536,
"transformer.encoder.layers.3.self_attn.output_proj.bias": 256,
"transformer.encoder.layers.3.norm1.weight": 256,
"transformer.encoder.layers.3.norm1.bias": 256,
"transformer.encoder.layers.3.linear1.weight": 524288,
"transformer.encoder.layers.3.linear1.bias": 2048,
"transformer.encoder.layers.3.linear2.weight": 524288,
"transformer.encoder.layers.3.linear2.bias": 256,
"transformer.encoder.layers.3.norm2.weight": 256,
"transformer.encoder.layers.3.norm2.bias": 256,
"transformer.encoder.layers.4.self_attn.sampling_offsets.weight": 65536,
"transformer.encoder.layers.4.self_attn.sampling_offsets.bias": 256,
"transformer.encoder.layers.4.self_attn.attention_weights.weight": 32768,
"transformer.encoder.layers.4.self_attn.attention_weights.bias": 128,
"transformer.encoder.layers.4.self_attn.value_proj.weight": 65536,
"transformer.encoder.layers.4.self_attn.value_proj.bias": 256,
"transformer.encoder.layers.4.self_attn.output_proj.weight": 65536,
"transformer.encoder.layers.4.self_attn.output_proj.bias": 256,
"transformer.encoder.layers.4.norm1.weight": 256,
"transformer.encoder.layers.4.norm1.bias": 256,
"transformer.encoder.layers.4.linear1.weight": 524288,
"transformer.encoder.layers.4.linear1.bias": 2048,
"transformer.encoder.layers.4.linear2.weight": 524288,
"transformer.encoder.layers.4.linear2.bias": 256,
"transformer.encoder.layers.4.norm2.weight": 256,
"transformer.encoder.layers.4.norm2.bias": 256,
"transformer.encoder.layers.5.self_attn.sampling_offsets.weight": 65536,
"transformer.encoder.layers.5.self_attn.sampling_offsets.bias": 256,
"transformer.encoder.layers.5.self_attn.attention_weights.weight": 32768,
"transformer.encoder.layers.5.self_attn.attention_weights.bias": 128,
"transformer.encoder.layers.5.self_attn.value_proj.weight": 65536,
"transformer.encoder.layers.5.self_attn.value_proj.bias": 256,
"transformer.encoder.layers.5.self_attn.output_proj.weight": 65536,
"transformer.encoder.layers.5.self_attn.output_proj.bias": 256,
"transformer.encoder.layers.5.norm1.weight": 256,
"transformer.encoder.layers.5.norm1.bias": 256,
"transformer.encoder.layers.5.linear1.weight": 524288,
"transformer.encoder.layers.5.linear1.bias": 2048,
"transformer.encoder.layers.5.linear2.weight": 524288,
"transformer.encoder.layers.5.linear2.bias": 256,
"transformer.encoder.layers.5.norm2.weight": 256,
"transformer.encoder.layers.5.norm2.bias": 256,
"transformer.decoder.layers.0.cross_attn.sampling_offsets.weight": 65536,
"transformer.decoder.layers.0.cross_attn.sampling_offsets.bias": 256,
"transformer.decoder.layers.0.cross_attn.attention_weights.weight": 32768,
"transformer.decoder.layers.0.cross_attn.attention_weights.bias": 128,
"transformer.decoder.layers.0.cross_attn.value_proj.weight": 65536,
"transformer.decoder.layers.0.cross_attn.value_proj.bias": 256,
"transformer.decoder.layers.0.cross_attn.output_proj.weight": 65536,
"transformer.decoder.layers.0.cross_attn.output_proj.bias": 256,
"transformer.decoder.layers.0.norm1.weight": 256,
"transformer.decoder.layers.0.norm1.bias": 256,
"transformer.decoder.layers.0.self_attn.in_proj_weight": 196608,
"transformer.decoder.layers.0.self_attn.in_proj_bias": 768,
"transformer.decoder.layers.0.self_attn.out_proj.weight": 65536,
"transformer.decoder.layers.0.self_attn.out_proj.bias": 256,
"transformer.decoder.layers.0.norm2.weight": 256,
"transformer.decoder.layers.0.norm2.bias": 256,
"transformer.decoder.layers.0.linear1.weight": 524288,
"transformer.decoder.layers.0.linear1.bias": 2048,
"transformer.decoder.layers.0.linear2.weight": 524288,
"transformer.decoder.layers.0.linear2.bias": 256,
"transformer.decoder.layers.0.norm3.weight": 256,
"transformer.decoder.layers.0.norm3.bias": 256,
"transformer.decoder.layers.1.cross_attn.sampling_offsets.weight": 65536,
"transformer.decoder.layers.1.cross_attn.sampling_offsets.bias": 256,
"transformer.decoder.layers.1.cross_attn.attention_weights.weight": 32768,
"transformer.decoder.layers.1.cross_attn.attention_weights.bias": 128,
"transformer.decoder.layers.1.cross_attn.value_proj.weight": 65536,
"transformer.decoder.layers.1.cross_attn.value_proj.bias": 256,
"transformer.decoder.layers.1.cross_attn.output_proj.weight": 65536,
"transformer.decoder.layers.1.cross_attn.output_proj.bias": 256,
"transformer.decoder.layers.1.norm1.weight": 256,
"transformer.decoder.layers.1.norm1.bias": 256,
"transformer.decoder.layers.1.self_attn.in_proj_weight": 196608,
"transformer.decoder.layers.1.self_attn.in_proj_bias": 768,
"transformer.decoder.layers.1.self_attn.out_proj.weight": 65536,
"transformer.decoder.layers.1.self_attn.out_proj.bias": 256,
"transformer.decoder.layers.1.norm2.weight": 256,
"transformer.decoder.layers.1.norm2.bias": 256,
"transformer.decoder.layers.1.linear1.weight": 524288,
"transformer.decoder.layers.1.linear1.bias": 2048,
"transformer.decoder.layers.1.linear2.weight": 524288,
"transformer.decoder.layers.1.linear2.bias": 256,
"transformer.decoder.layers.1.norm3.weight": 256,
"transformer.decoder.layers.1.norm3.bias": 256,
"transformer.decoder.layers.2.cross_attn.sampling_offsets.weight": 65536,
"transformer.decoder.layers.2.cross_attn.sampling_offsets.bias": 256,
"transformer.decoder.layers.2.cross_attn.attention_weights.weight": 32768,
"transformer.decoder.layers.2.cross_attn.attention_weights.bias": 128,
"transformer.decoder.layers.2.cross_attn.value_proj.weight": 65536,
"transformer.decoder.layers.2.cross_attn.value_proj.bias": 256,
"transformer.decoder.layers.2.cross_attn.output_proj.weight": 65536,
"transformer.decoder.layers.2.cross_attn.output_proj.bias": 256,
"transformer.decoder.layers.2.norm1.weight": 256,
"transformer.decoder.layers.2.norm1.bias": 256,
"transformer.decoder.layers.2.self_attn.in_proj_weight": 196608,
"transformer.decoder.layers.2.self_attn.in_proj_bias": 768,
"transformer.decoder.layers.2.self_attn.out_proj.weight": 65536,
"transformer.decoder.layers.2.self_attn.out_proj.bias": 256,
"transformer.decoder.layers.2.norm2.weight": 256,
"transformer.decoder.layers.2.norm2.bias": 256,
"transformer.decoder.layers.2.linear1.weight": 524288,
"transformer.decoder.layers.2.linear1.bias": 2048,
"transformer.decoder.layers.2.linear2.weight": 524288,
"transformer.decoder.layers.2.linear2.bias": 256,
"transformer.decoder.layers.2.norm3.weight": 256,
"transformer.decoder.layers.2.norm3.bias": 256,
"transformer.decoder.layers.3.cross_attn.sampling_offsets.weight": 65536,
"transformer.decoder.layers.3.cross_attn.sampling_offsets.bias": 256,
"transformer.decoder.layers.3.cross_attn.attention_weights.weight": 32768,
"transformer.decoder.layers.3.cross_attn.attention_weights.bias": 128,
"transformer.decoder.layers.3.cross_attn.value_proj.weight": 65536,
"transformer.decoder.layers.3.cross_attn.value_proj.bias": 256,
"transformer.decoder.layers.3.cross_attn.output_proj.weight": 65536,
"transformer.decoder.layers.3.cross_attn.output_proj.bias": 256,
"transformer.decoder.layers.3.norm1.weight": 256,
"transformer.decoder.layers.3.norm1.bias": 256,
"transformer.decoder.layers.3.self_attn.in_proj_weight": 196608,
"transformer.decoder.layers.3.self_attn.in_proj_bias": 768,
"transformer.decoder.layers.3.self_attn.out_proj.weight": 65536,
"transformer.decoder.layers.3.self_attn.out_proj.bias": 256,
"transformer.decoder.layers.3.norm2.weight": 256,
"transformer.decoder.layers.3.norm2.bias": 256,
"transformer.decoder.layers.3.linear1.weight": 524288,
"transformer.decoder.layers.3.linear1.bias": 2048,
"transformer.decoder.layers.3.linear2.weight": 524288,
"transformer.decoder.layers.3.linear2.bias": 256,
"transformer.decoder.layers.3.norm3.weight": 256,
"transformer.decoder.layers.3.norm3.bias": 256,
"transformer.decoder.layers.4.cross_attn.sampling_offsets.weight": 65536,
"transformer.decoder.layers.4.cross_attn.sampling_offsets.bias": 256,
"transformer.decoder.layers.4.cross_attn.attention_weights.weight": 32768,
"transformer.decoder.layers.4.cross_attn.attention_weights.bias": 128,
"transformer.decoder.layers.4.cross_attn.value_proj.weight": 65536,
"transformer.decoder.layers.4.cross_attn.value_proj.bias": 256,
"transformer.decoder.layers.4.cross_attn.output_proj.weight": 65536,
"transformer.decoder.layers.4.cross_attn.output_proj.bias": 256,
"transformer.decoder.layers.4.norm1.weight": 256,
"transformer.decoder.layers.4.norm1.bias": 256,
"transformer.decoder.layers.4.self_attn.in_proj_weight": 196608,
"transformer.decoder.layers.4.self_attn.in_proj_bias": 768,
"transformer.decoder.layers.4.self_attn.out_proj.weight": 65536,
"transformer.decoder.layers.4.self_attn.out_proj.bias": 256,
"transformer.decoder.layers.4.norm2.weight": 256,
"transformer.decoder.layers.4.norm2.bias": 256,
"transformer.decoder.layers.4.linear1.weight": 524288,
"transformer.decoder.layers.4.linear1.bias": 2048,
"transformer.decoder.layers.4.linear2.weight": 524288,
"transformer.decoder.layers.4.linear2.bias": 256,
"transformer.decoder.layers.4.norm3.weight": 256,
"transformer.decoder.layers.4.norm3.bias": 256,
"transformer.decoder.layers.5.cross_attn.sampling_offsets.weight": 65536,
"transformer.decoder.layers.5.cross_attn.sampling_offsets.bias": 256,
"transformer.decoder.layers.5.cross_attn.attention_weights.weight": 32768,
"transformer.decoder.layers.5.cross_attn.attention_weights.bias": 128,
"transformer.decoder.layers.5.cross_attn.value_proj.weight": 65536,
"transformer.decoder.layers.5.cross_attn.value_proj.bias": 256,
"transformer.decoder.layers.5.cross_attn.output_proj.weight": 65536,
"transformer.decoder.layers.5.cross_attn.output_proj.bias": 256,
"transformer.decoder.layers.5.norm1.weight": 256,
"transformer.decoder.layers.5.norm1.bias": 256,
"transformer.decoder.layers.5.self_attn.in_proj_weight": 196608,
"transformer.decoder.layers.5.self_attn.in_proj_bias": 768,
"transformer.decoder.layers.5.self_attn.out_proj.weight": 65536,
"transformer.decoder.layers.5.self_attn.out_proj.bias": 256,
"transformer.decoder.layers.5.norm2.weight": 256,
"transformer.decoder.layers.5.norm2.bias": 256,
"transformer.decoder.layers.5.linear1.weight": 524288,
"transformer.decoder.layers.5.linear1.bias": 2048,
"transformer.decoder.layers.5.linear2.weight": 524288,
"transformer.decoder.layers.5.linear2.bias": 256,
"transformer.decoder.layers.5.norm3.weight": 256,
"transformer.decoder.layers.5.norm3.bias": 256,
"transformer.decoder.query_scale.layers.0.weight": 65536,
"transformer.decoder.query_scale.layers.0.bias": 256,
"transformer.decoder.query_scale.layers.1.weight": 65536,
"transformer.decoder.query_scale.layers.1.bias": 256,
"transformer.decoder.ref_point_head.layers.0.weight": 131072,
"transformer.decoder.ref_point_head.layers.0.bias": 256,
"transformer.decoder.ref_point_head.layers.1.weight": 65536,
"transformer.decoder.ref_point_head.layers.1.bias": 256,
"transformer.decoder.bbox_embed.0.layers.0.weight": 65536,
"transformer.decoder.bbox_embed.0.layers.0.bias": 256,
"transformer.decoder.bbox_embed.0.layers.1.weight": 65536,
"transformer.decoder.bbox_embed.0.layers.1.bias": 256,
"transformer.decoder.bbox_embed.0.layers.2.weight": 1024,
"transformer.decoder.bbox_embed.0.layers.2.bias": 4,
"transformer.decoder.bbox_embed.1.layers.0.weight": 65536,
"transformer.decoder.bbox_embed.1.layers.0.bias": 256,
"transformer.decoder.bbox_embed.1.layers.1.weight": 65536,
"transformer.decoder.bbox_embed.1.layers.1.bias": 256,
"transformer.decoder.bbox_embed.1.layers.2.weight": 1024,
"transformer.decoder.bbox_embed.1.layers.2.bias": 4,
"transformer.decoder.bbox_embed.2.layers.0.weight": 65536,
"transformer.decoder.bbox_embed.2.layers.0.bias": 256,
"transformer.decoder.bbox_embed.2.layers.1.weight": 65536,
"transformer.decoder.bbox_embed.2.layers.1.bias": 256,
"transformer.decoder.bbox_embed.2.layers.2.weight": 1024,
"transformer.decoder.bbox_embed.2.layers.2.bias": 4,
"transformer.decoder.bbox_embed.3.layers.0.weight": 65536,
"transformer.decoder.bbox_embed.3.layers.0.bias": 256,
"transformer.decoder.bbox_embed.3.layers.1.weight": 65536,
"transformer.decoder.bbox_embed.3.layers.1.bias": 256,
"transformer.decoder.bbox_embed.3.layers.2.weight": 1024,
"transformer.decoder.bbox_embed.3.layers.2.bias": 4,
"transformer.decoder.bbox_embed.4.layers.0.weight": 65536,
"transformer.decoder.bbox_embed.4.layers.0.bias": 256,
"transformer.decoder.bbox_embed.4.layers.1.weight": 65536,
"transformer.decoder.bbox_embed.4.layers.1.bias": 256,
"transformer.decoder.bbox_embed.4.layers.2.weight": 1024,
"transformer.decoder.bbox_embed.4.layers.2.bias": 4,
"transformer.decoder.bbox_embed.5.layers.0.weight": 65536,
"transformer.decoder.bbox_embed.5.layers.0.bias": 256,
"transformer.decoder.bbox_embed.5.layers.1.weight": 65536,
"transformer.decoder.bbox_embed.5.layers.1.bias": 256,
"transformer.decoder.bbox_embed.5.layers.2.weight": 1024,
"transformer.decoder.bbox_embed.5.layers.2.bias": 4,
"class_embed.0.weight": 23296,
"class_embed.0.bias": 91,
"class_embed.1.weight": 23296,
"class_embed.1.bias": 91,
"class_embed.2.weight": 23296,
"class_embed.2.bias": 91,
"class_embed.3.weight": 23296,
"class_embed.3.bias": 91,
"class_embed.4.weight": 23296,
"class_embed.4.bias": 91,
"class_embed.5.weight": 23296,
"class_embed.5.bias": 91,
"label_enc.weight": 23460,
"tgt_embed.weight": 76500,
"refpoint_embed.weight": 1200,
"input_proj.0.0.weight": 131072,
"input_proj.0.0.bias": 256,
"input_proj.0.1.weight": 256,
"input_proj.0.1.bias": 256,
"input_proj.1.0.weight": 262144,
"input_proj.1.0.bias": 256,
"input_proj.1.1.weight": 256,
"input_proj.1.1.bias": 256,
"input_proj.2.0.weight": 524288,
"input_proj.2.0.bias": 256,
"input_proj.2.1.weight": 256,
"input_proj.2.1.bias": 256,
"input_proj.3.0.weight": 4718592,
"input_proj.3.0.bias": 256,
"input_proj.3.1.weight": 256,
"input_proj.3.1.bias": 256,
"backbone.0.body.layer2.0.conv1.weight": 32768,
"backbone.0.body.layer2.0.conv2.weight": 147456,
"backbone.0.body.layer2.0.conv3.weight": 65536,
"backbone.0.body.layer2.0.downsample.0.weight": 131072,
"backbone.0.body.layer2.1.conv1.weight": 65536,
"backbone.0.body.layer2.1.conv2.weight": 147456,
"backbone.0.body.layer2.1.conv3.weight": 65536,
"backbone.0.body.layer2.2.conv1.weight": 65536,
"backbone.0.body.layer2.2.conv2.weight": 147456,
"backbone.0.body.layer2.2.conv3.weight": 65536,
"backbone.0.body.layer2.3.conv1.weight": 65536,
"backbone.0.body.layer2.3.conv2.weight": 147456,
"backbone.0.body.layer2.3.conv3.weight": 65536,
"backbone.0.body.layer3.0.conv1.weight": 131072,
"backbone.0.body.layer3.0.conv2.weight": 589824,
"backbone.0.body.layer3.0.conv3.weight": 262144,
"backbone.0.body.layer3.0.downsample.0.weight": 524288,
"backbone.0.body.layer3.1.conv1.weight": 262144,
"backbone.0.body.layer3.1.conv2.weight": 589824,
"backbone.0.body.layer3.1.conv3.weight": 262144,
"backbone.0.body.layer3.2.conv1.weight": 262144,
"backbone.0.body.layer3.2.conv2.weight": 589824,
"backbone.0.body.layer3.2.conv3.weight": 262144,
"backbone.0.body.layer3.3.conv1.weight": 262144,
"backbone.0.body.layer3.3.conv2.weight": 589824,
"backbone.0.body.layer3.3.conv3.weight": 262144,
"backbone.0.body.layer3.4.conv1.weight": 262144,
"backbone.0.body.layer3.4.conv2.weight": 589824,
"backbone.0.body.layer3.4.conv3.weight": 262144,
"backbone.0.body.layer3.5.conv1.weight": 262144,
"backbone.0.body.layer3.5.conv2.weight": 589824,
"backbone.0.body.layer3.5.conv3.weight": 262144,
"backbone.0.body.layer4.0.conv1.weight": 524288,
"backbone.0.body.layer4.0.conv2.weight": 2359296,
"backbone.0.body.layer4.0.conv3.weight": 1048576,
"backbone.0.body.layer4.0.downsample.0.weight": 2097152,
"backbone.0.body.layer4.1.conv1.weight": 1048576,
"backbone.0.body.layer4.1.conv2.weight": 2359296,
"backbone.0.body.layer4.1.conv3.weight": 1048576,
"backbone.0.body.layer4.2.conv1.weight": 1048576,
"backbone.0.body.layer4.2.conv2.weight": 2359296,
"backbone.0.body.layer4.2.conv3.weight": 1048576
}
loading annotations into memory...
Done (t=0.03s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Start training
F:\1chen\DETR\jin\dn\DN-DETR\models\dn_dab_deformable_detr\position_encoding.py:53: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
Traceback (most recent call last):
File "F:/1chen/DETR/jin/dn/DN-DETR/main.py", line 426, in
main(args)
File "F:/1chen/DETR/jin/dn/DN-DETR/main.py", line 352, in main
train_stats = train_one_epoch(
File "F:\1chen\DETR\jin\dn\DN-DETR\engine.py", line 52, in train_one_epoch
outputs = model(samples)
File "D:\anaconda3.9\envs\zj\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "F:\1chen\DETR\jin\dn\DN-DETR\models\dn_dab_deformable_detr\dab_deformable_detr.py", line 206, in forward
prepare_for_dn(dn_args, tgt_all_embed, refanchor, src.size(0), self.training, self.num_queries, self.num_classes,
File "F:\1chen\DETR\jin\dn\DN-DETR\models\dn_dab_deformable_detr\dn_components.py", line 61, in prepare_for_dn
targets, scalar, label_noise_scale, box_noise_scale, num_patterns = dn_args
TypeError: cannot unpack non-iterable NoneType object

Process finished with exit code 1
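For context, the failing line in the traceback unpacks dn_args directly, so passing None reproduces the exact error. A minimal sketch (the unpacking line and its names are copied from the log above; the surrounding wiring is illustrative):

```python
# Minimal reproduction of the crash: Python raises TypeError when the
# right-hand side of a tuple unpacking is None.
def prepare_for_dn(dn_args):
    # Same unpacking as dn_components.py line 61 (names from the log above).
    targets, scalar, label_noise_scale, box_noise_scale, num_patterns = dn_args
    return targets, scalar

err = None
try:
    prepare_for_dn(None)  # dn_args was never populated upstream
except TypeError as exc:
    err = str(exc)

print(err)  # cannot unpack non-iterable NoneType object
```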

@FengLi-ust
Collaborator

It seems you set args.use_dn to false, so dn_args is not passed into the forward function.
You need to set args.use_dn to true by adding --use_dn to your command line.
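To illustrate the suggestion: with an argparse store_true flag, omitting --use_dn leaves it False, and the model then receives dn_args=None, which later crashes in the unpacking. A hedged sketch (only the names use_dn and scalar come from the log above; the dn_args wiring is illustrative, not the actual DN-DETR source):

```python
import argparse

# store_true flags default to False when the flag is absent.
parser = argparse.ArgumentParser()
parser.add_argument('--use_dn', action='store_true')
parser.add_argument('--scalar', type=int, default=5)

args = parser.parse_args([])                 # no --use_dn on the command line
dn_args_off = (args.scalar,) if args.use_dn else None

args = parser.parse_args(['--use_dn'])       # flag present
dn_args_on = (args.scalar,) if args.use_dn else None

print(dn_args_off, dn_args_on)
```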

@llljjj88

Hello, I ran into the same problem. I printed the args and --use_dn is already true, but the error still occurs.

@AkimotoAyako

I think you might have forgotten to remove the "# replace the args to your COCO path" comment from the run command, which prevents the command-line parser from interpreting the --use_dn flag correctly.
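If a leftover comment really is the culprit, the mechanics would be that the shell treats everything after an unquoted '#' as a comment, so any flag placed after it never reaches Python. A small sketch (the argparse wiring is illustrative; only the flag names come from the log above), using shlex to mimic shell comment stripping:

```python
import argparse
import shlex

parser = argparse.ArgumentParser()
parser.add_argument('--use_dn', action='store_true')
parser.add_argument('--coco_path', default='COCODIR')

# A flag placed after a leftover shell comment never reaches the parser.
cmd = "--coco_path COCODIR # replace the args to your COCO path --use_dn"
argv = shlex.split(cmd, comments=True)  # shell-style '#' comment stripping
args = parser.parse_args(argv)
print(args.use_dn)  # the flag was swallowed by the comment
```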
