
convert .pkl error that 'utf-8' codec can't decode #270

Closed
yangli-lab opened this issue Nov 22, 2021 · 5 comments

@yangli-lab

By running the command

(torch_gpu) E:\fork\fork file\stylegan2-pytorch>python convert_weight.py --repo ~/stylegan2 stylegan2-ffhq-config-f.pkl

I got the error that the 'utf-8' codec can't decode xxxx: invalid continuation byte.

The following is the traceback:
D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py:190: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
Traceback (most recent call last):
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 1030, in build_extension_module
check=True)
File "D:\Anaconda\envs\torch_gpu\lib\subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "convert_weight.py", line 11, in
from model import Generator, Discriminator
File "E:\fork\fork file\stylegan2-pytorch\model.py", line 11, in
from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix
File "E:\fork\fork file\stylegan2-pytorch\op\__init__.py", line 1, in
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "E:\fork\fork file\stylegan2-pytorch\op\fused_act.py", line 15, in
os.path.join(module_path, "fused_bias_act_kernel.cu"),
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 661, in load
is_python_module)
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 830, in _jit_compile
with_cuda=with_cuda)
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 883, in _write_ninja_file_and_build
_build_extension_module(name, build_directory, verbose)
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 1042, in _build_extension_module
message += ": {}".format(error.output.decode())
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd3 in position 1167: invalid continuation byte
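
For reference, the failing byte `0xd3` is typical of GBK-encoded compiler output (e.g. from `cl.exe` on a Chinese-locale Windows shell), which is not valid UTF-8. A minimal stdlib reproduction, with illustrative byte values:

```python
# A GBK two-byte sequence: 0xd3 is a valid UTF-8 lead byte, but 0xc5 is
# not a valid continuation byte, hence the UnicodeDecodeError above.
raw = b"\xd3\xc5"  # illustrative bytes, standing in for the compiler output

try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print(err)  # 'utf-8' codec can't decode byte 0xd3 in position 0: ...

# Two possible workarounds: decode with the actual codec,
# or tolerate undecodable bytes.
print(raw.decode("gbk"))                      # a CJK character
print(raw.decode("utf-8", errors="replace"))  # two U+FFFD replacement chars
```

The `errors="replace"` route loses the original characters but keeps the subprocess error message readable instead of raising a second exception.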

@rosinality
Owner

I think you need to check that you can compile the custom operators. I don't know much about compiling CUDA kernels on Windows, but I think you will need to set up a CUDA build environment and install ninja.
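
As a quick sanity check before re-running the converter, you can verify that the tools the JIT compile step shells out to are on `PATH`. The tool names below are the standard ones (`cl` being the MSVC compiler on Windows), not anything specific to this repo:

```python
import shutil

def build_tools_present():
    """Report which build tools torch's JIT extension loader can find on PATH."""
    tools = ["ninja", "nvcc", "cl"]  # cl is only needed on Windows
    return {tool: shutil.which(tool) is not None for tool in tools}

print(build_tools_present())
```

If `ninja` is missing, `pip install ninja` usually suffices; `nvcc` and `cl` come from the CUDA toolkit and Visual Studio build tools respectively.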

@yangli-lab
Author

yangli-lab commented Nov 23, 2021

Thanks @rosinality. I've solved that issue by replacing the decode setting in cpp_extension.py.
However, another problem came up.
I used the same command, python convert_weight.py --repo ~/stylegan2 stylegan2-ffhq-config-f.pkl, but an error of no module named fused occurred. The details follow:
Traceback (most recent call last):
File "convert_weight.py", line 11, in
from model import Generator, Discriminator
File "E:\fork\fork file\stylegan2-pytorch\model.py", line 11, in
from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d, conv2d_gradfix
File "E:\fork\fork file\stylegan2-pytorch\op\__init__.py", line 1, in
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "E:\fork\fork file\stylegan2-pytorch\op\fused_act.py", line 15, in
os.path.join(module_path, "fused_bias_act_kernel.cu"),
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 663, in load
is_python_module)
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 843, in _jit_compile
return _import_module_from_library(name, build_directory, is_python_module)
File "D:\Anaconda\envs\torch_gpu\lib\site-packages\torch\utils\cpp_extension.py", line 1050, in _import_module_from_library
file, path, description = imp.find_module(module_name, [path])
File "D:\Anaconda\envs\torch_gpu\lib\imp.py", line 296, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named 'fused'
Do you know the reason? Many thanks.
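
One common cause of `ImportError: No module named 'fused'` is a stale, half-built extension left in torch's JIT cache after the earlier ninja failure. A sketch for locating the cache directory to delete, assuming torch's default locations (the exact path can differ by platform and torch version, so verify it on your machine):

```python
import os
import sys

def torch_extensions_cache_dir():
    """Best-effort guess at where torch JIT-builds extensions like `fused`."""
    # An explicit override takes precedence.
    explicit = os.environ.get("TORCH_EXTENSIONS_DIR")
    if explicit:
        return explicit
    if sys.platform == "win32":
        # Assumed Windows default; confirm locally.
        return os.path.join(os.path.expanduser("~"),
                            "AppData", "Local", "torch_extensions")
    cache_home = os.environ.get(
        "XDG_CACHE_HOME", os.path.join(os.path.expanduser("~"), ".cache"))
    return os.path.join(cache_home, "torch_extensions")

print(torch_extensions_cache_dir())
```

Deleting the `fused` subfolder under that directory forces a clean rebuild the next time the module is imported.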

@rosinality
Owner

Have you tested that you can build the custom operations? I think that is the problem. (It is suggested by the subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. in the error message.)

@yangli-lab
Author

Sorry for the late reply. I've fixed this problem by using the method provided here.
Once again, thanks for your patience and for the helpful work you've done.

