I suspect it's the PyTorch version, and StackOverflow doesn't have a solution to this problem.

1. The following error occurs when I execute `python demo.py --gpu 0 --stage param --test_epoch 8`:
```
RuntimeError: Error(s) in loading state_dict for DataParallel:
size mismatch for module.param_regressor.fc_pose.0.weight: copying a param with shape torch.Size([144, 512]) from checkpoint, the shape in current model is torch.Size([96, 512]).
size mismatch for module.param_regressor.fc_pose.0.bias: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for module.human_model_layer.th_shapedirs: copying a param with shape torch.Size([6890, 3, 10]) from checkpoint, the shape in current model is torch.Size([778, 3, 10]).
size mismatch for module.human_model_layer.th_posedirs: copying a param with shape torch.Size([6890, 3, 207]) from checkpoint, the shape in current model is torch.Size([778, 3, 135]).
size mismatch for module.human_model_layer.th_v_template: copying a param with shape torch.Size([1, 6890, 3]) from checkpoint, the shape in current model is torch.Size([1, 778, 3]).
size mismatch for module.human_model_layer.th_J_regressor: copying a param with shape torch.Size([24, 6890]) from checkpoint, the shape in current model is torch.Size([16, 778]).
size mismatch for module.human_model_layer.th_weights: copying a param with shape torch.Size([6890, 24]) from checkpoint, the shape in current model is torch.Size([778, 16]).
size mismatch for module.human_model_layer.th_faces: copying a param with shape torch.Size([13776, 3]) from checkpoint, the shape in current model is torch.Size([1538, 3]).
```
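As an aside, the mismatched shapes above follow a pattern: 6890 and 13776 are the vertex and face counts of the SMPL body model, while 778 and 1538 are those of the MANO hand model, which suggests the checkpoint and the currently configured human model layer disagree about which model is in use. A minimal sketch (not the repository's code) of how one might list such mismatches before `load_state_dict` raises; `find_shape_mismatches` and the toy `Linear` layers are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

def find_shape_mismatches(model, checkpoint_state):
    """Return (key, checkpoint_shape, model_shape) for every shared key
    whose tensor shapes differ between the checkpoint and the model."""
    mismatches = []
    model_state = model.state_dict()
    for key, ckpt_tensor in checkpoint_state.items():
        if key in model_state and model_state[key].shape != ckpt_tensor.shape:
            mismatches.append(
                (key, tuple(ckpt_tensor.shape), tuple(model_state[key].shape))
            )
    return mismatches

# Toy demonstration with a deliberately mismatched layer, mirroring the
# fc_pose shapes from the error message (144 vs 96 output features).
checkpointed = nn.Linear(512, 144)   # stands in for the checkpointed layer
current = nn.Linear(512, 96)         # stands in for the current model's layer
bad = find_shape_mismatches(current, checkpointed.state_dict())
# bad -> [('weight', (144, 512), (96, 512)), ('bias', (144,), (96,))]
```

Running this on the real model and checkpoint would print the same key list as the error message, but lets you inspect it programmatically instead of parsing the exception text.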
2. The following error occurs when I execute `python demo.py --gpu 0 --stage param --test_epoch 12`:
```
Traceback (most recent call last):
  File "demo.py", line 99, in <module>
    out = model(inputs, targets, meta_info, 'test')
  File "E:\Software\Anaconda3\envs\I2L\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Software\Anaconda3\envs\I2L\lib\site-packages\torch\nn\parallel\data_parallel.py", line 166, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "E:\Software\Anaconda3\envs\I2L\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "..\main\model.py", line 75, in forward
    joint_img_from_mesh = torch.bmm(torch.from_numpy(self.joint_regressor).cuda()[None,:,:].repeat(mesh_coord_img.shape[0],1,1), mesh_coord_img)
RuntimeError: batch1 dim 2 must match batch2 dim 1
```
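For reference, `torch.bmm(batch1, batch2)` requires `batch1` of shape `(B, n, m)` and `batch2` of shape `(B, m, p)`: the inner dimension `m` (here, the vertex count that the joint regressor's columns must match) has to agree. A minimal sketch on CPU, using the shapes from the two errors above (24×6890 vs a 778-vertex mesh):

```python
import torch

B = 2
body_regressor = torch.zeros(24, 6890)  # SMPL-style: 24 joints x 6890 vertices
hand_mesh = torch.zeros(B, 778, 3)      # MANO-style mesh: 778 vertices

# Mismatched inner dimension (6890 != 778) reproduces the same failure mode:
# "batch1 dim 2 must match batch2 dim 1".
try:
    torch.bmm(body_regressor[None, :, :].repeat(B, 1, 1), hand_mesh)
    ok = True
except RuntimeError:
    ok = False

# With a regressor whose column count matches the mesh's vertex count
# (16x778, as in the current model), bmm succeeds and yields (B, 16, 3).
hand_regressor = torch.zeros(16, 778)
joints = torch.bmm(hand_regressor[None, :, :].repeat(B, 1, 1), hand_mesh)
```

So this second error looks like the same underlying problem as the first: `self.joint_regressor` has one vertex count while `mesh_coord_img` has another, rather than a PyTorch version incompatibility.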