RuntimeError: shape '[52, 128, 128]' is invalid for input of size 212992 RuntimeError: Caught RuntimeError in DataLoader worker process 0. #24
Comments
I assumed that you changed
Thank you very much for your reply. The command I ran was: python main.py --mode train --experiment_name dvf --model_name main_model
Please check whether the code here reads the correct neural rasterizer checkpoint (the 64x64 version you wanted). To make your own dataset at 128x128 resolution, please see
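A minimal way to double-check which resolution a downloaded dataset pkl actually contains (a sketch only, assuming the pkl is a list of per-font dicts whose 'rendered' entry holds the 52 glyph images of one font, which is what the DataLoader traceback below implies):

import math
import pickle

import numpy as np

# Sketch: inspect the first font in the training pkl and infer the glyph size.
# The path is the one printed in the training log below.
with open('data/vecfont_dataset_pkls/train/train_all.pkl', 'rb') as f:
    fonts = pickle.load(f)

rendered = np.asarray(fonts[0]['rendered'])
side = int(math.sqrt(rendered.size / 52))   # 52 glyphs per font (A-Z, a-z)
print(f'{rendered.size} pixel values per font -> {side}x{side} glyphs')
# 212992 / 52 = 4096 = 64*64, i.e. the default pkl is the 64x64 dataset and
# does not match the downloaded 128x128 checkpoints.

If this prints 64x64, either rebuild the dataset at 128x128 or switch to the 64x64 checkpoints.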
Thank you very much for your sound and timely advice; I solved the problem!
Thank you sincerely for your previous help; now I have another question. I am looking forward to hearing from you!
(base) miao@miao:~/data/file/bwt/deepvecfont-master/data_utils$ python convert_ttf_to_sfd_mp.py --split train
(base) miao@miao:~/data/file/bwt/deepvecfont-master/data_utils$ fontforge --version
As I mentioned in readme.md, you need to first exit the conda environment:
then install fontforge by:
Hope it works for you.
As you mentioned in readme.md, I exited the conda environment. However, the problems above occurred and fontforge could not be installed that way. Therefore, I installed it with the command from the installation tutorial on the official website, and that installation succeeded. But when running the conversion command, the fontforge module is still missing. What should I do? I hope to get your advice, thank you very much.
It works fine on Ubuntu 20.04.1; other versions may not work.
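A quick way to narrow the "no module" error down (a sketch, assuming fontforge was installed system-wide, e.g. via the Ubuntu python3-fontforge package): check which interpreter can actually import the bindings, because a conda python will not see packages installed for the distro's python3.

# check_fontforge.py - illustrative diagnostic, not part of the repo.
# Run it once with the system python3 (conda deactivated) and once inside
# the dvf environment; the data_utils scripts need an interpreter that
# prints "import OK" here.
import sys

print('interpreter:', sys.executable)
try:
    import fontforge  # provided by a system package such as python3-fontforge
    print('import OK')
except ImportError as err:
    print('import failed:', err)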
How did you solve this problem? I also ran into this issue.
I downloaded "The Neural Rasterizer: image size 128x128", "The Image Super-resolution model: image size 128x128 -> 256x256", and "The Main model: image size 128x128", and put them in the corresponding folders. Then, to train the main model, I ran python main.py --mode train --experiment_name dvf --model_name main_model, and the following error appears:
(dvf) miao@miao:~/data/file/bwt/deepvecfont-master$ python main.py --mode train --experiment_name dvf --model_name main_model
Training on experiment dvf_main_model...
Loading data/vecfont_dataset_pkls/train/train_all.pkl pickle file ...
Finished loading pkls
Loading data/vecfont_dataset_pkls/test/test_all.pkl pickle file ...
Finished loading pkls
Traceback (most recent call last):
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 351, in
main()
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 345, in main
train(opts)
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 320, in train
train_main_model(opts)
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 99, in train_main_model
for idx, data in enumerate(train_loader):
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/miao/data/file/bwt/deepvecfont-master/dataloader.py", line 52, in getitem
item['rendered'] = torch.FloatTensor(cur_glyph['rendered']).view(self.char_num, self.img_size, self.img_size) / 255.
RuntimeError: shape '[52, 128, 128]' is invalid for input of size 212992
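The number in the error message already pins down the mismatch: the pkl stores 52 glyphs per font, and 212992 = 52 x 64 x 64, so the data on disk was rendered at 64x64 while the image_size option (and the downloaded checkpoints) expect 128x128. A two-line check of the arithmetic:

print(212992 // 52)     # 4096 = 64 * 64 -> the pkl contains 64x64 glyphs
print(52 * 128 * 128)   # 851968 -> what a 128x128 dataset would have to contain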
The following error occurs when I change the image_size default from 128 to 64 on line 13 of options.py:
(dvf) miao@miao:~/data/file/bwt/deepvecfont-master$ python main.py --mode train --experiment_name dvf --model_name main_model
Training on experiment dvf_main_model...
Loading data/vecfont_dataset_pkls/train/train_all.pkl pickle file ...
Finished loading pkls
Loading data/vecfont_dataset_pkls/test/test_all.pkl pickle file ...
Finished loading pkls
Traceback (most recent call last):
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 351, in
main()
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 345, in main
train(opts)
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 320, in train
train_main_model(opts)
File "/home/miao/data/file/bwt/deepvecfont-master/main.py", line 64, in train_main_model
neural_rasterizer.load_state_dict(torch.load(neural_rasterizer_fpath))
File "/home/miao/data/soft/python/anaconda/install/envs/dvf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for NeuralRasterizer:
Unexpected key(s) in state_dict: "decode.21.weight", "decode.21.bias", "decode.19.weight", "decode.19.bias".
size mismatch for decode.0.weight: copying a param with shape torch.Size([1024, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 512, 3, 3]).
size mismatch for decode.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decode.1.weight: copying a param with shape torch.Size([1024, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 2, 2]).
size mismatch for decode.1.bias: copying a param with shape torch.Size([1024, 2, 2]) from checkpoint, the shape in current model is torch.Size([512, 2, 2]).
size mismatch for decode.3.weight: copying a param with shape torch.Size([1024, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]).
size mismatch for decode.3.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decode.4.weight: copying a param with shape torch.Size([512, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 4, 4]).
size mismatch for decode.4.bias: copying a param with shape torch.Size([512, 4, 4]) from checkpoint, the shape in current model is torch.Size([256, 4, 4]).
size mismatch for decode.6.weight: copying a param with shape torch.Size([512, 256, 5, 5]) from checkpoint, the shape in current model is torch.Size([256, 128, 5, 5]).
size mismatch for decode.6.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for decode.7.weight: copying a param with shape torch.Size([256, 8, 8]) from checkpoint, the shape in current model is torch.Size([128, 8, 8]).
size mismatch for decode.7.bias: copying a param with shape torch.Size([256, 8, 8]) from checkpoint, the shape in current model is torch.Size([128, 8, 8]).
size mismatch for decode.9.weight: copying a param with shape torch.Size([256, 128, 5, 5]) from checkpoint, the shape in current model is torch.Size([128, 64, 5, 5]).
size mismatch for decode.9.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for decode.10.weight: copying a param with shape torch.Size([128, 16, 16]) from checkpoint, the shape in current model is torch.Size([64, 16, 16]).
size mismatch for decode.10.bias: copying a param with shape torch.Size([128, 16, 16]) from checkpoint, the shape in current model is torch.Size([64, 16, 16]).
size mismatch for decode.12.weight: copying a param with shape torch.Size([128, 64, 5, 5]) from checkpoint, the shape in current model is torch.Size([64, 32, 5, 5]).
size mismatch for decode.12.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for decode.13.weight: copying a param with shape torch.Size([64, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 32, 32]).
size mismatch for decode.13.bias: copying a param with shape torch.Size([64, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 32, 32]).
size mismatch for decode.15.weight: copying a param with shape torch.Size([64, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 16, 5, 5]).
size mismatch for decode.15.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for decode.16.weight: copying a param with shape torch.Size([32, 64, 64]) from checkpoint, the shape in current model is torch.Size([16, 64, 64]).
size mismatch for decode.16.bias: copying a param with shape torch.Size([32, 64, 64]) from checkpoint, the shape in current model is torch.Size([16, 64, 64]).
size mismatch for decode.18.weight: copying a param with shape torch.Size([32, 16, 5, 5]) from checkpoint, the shape in current model is torch.Size([1, 16, 7, 7]).
size mismatch for decode.18.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([1]).
What is the problem and how can I fix it? Thank you!
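The second traceback shows the opposite mismatch: with image_size set to 64 the NeuralRasterizer is built in its 64x64 configuration, but the checkpoint being loaded is the 128x128 one (it still contains decode.19.* and decode.21.* layers and a (1024, 1024, 3, 3) first deconv). One way to confirm which variant a checkpoint file holds, sketched with an illustrative path (use whatever neural_rasterizer_fpath resolves to in main.py):

import torch

# Illustrative path only; substitute the checkpoint that main.py actually loads.
ckpt_path = 'path/to/neural_rasterizer_checkpoint.pth'

state = torch.load(ckpt_path, map_location='cpu')
# The file stores a plain state_dict (main.py passes it straight to
# load_state_dict), so the keys are parameter names.
print('has decode.21.* layers:', any(k.startswith('decode.21.') for k in state))
print('decode.0.weight shape:', tuple(state['decode.0.weight'].shape))
# decode.21.* present and decode.0.weight of shape (1024, 1024, 3, 3)
# -> 128x128 rasterizer; the 64x64 variant stops earlier and its first
# deconv weight is (1024, 512, 3, 3).

In short, the dataset, image_size, and the downloaded checkpoints have to agree on a single resolution: either rebuild the dataset at 128x128 (as suggested above) or use the 64x64 checkpoints with image_size=64.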