Cannot run on docker #18

Open
manchuwook opened this issue Jul 4, 2024 · 0 comments
Made a fork of the repo, cloned it to my Windows 11 desktop, and ran `docker compose up -d`. I got the error below after adding a picture and a wav file and running generate.

2024-07-04 01:30:47 app-1  | 
2024-07-04 01:30:47 app-1  | ==========
2024-07-04 01:30:47 app-1  | == CUDA ==
2024-07-04 01:30:47 app-1  | ==========
2024-07-04 01:30:47 app-1  | 
2024-07-04 01:30:47 app-1  | CUDA Version 12.2.0
2024-07-04 01:30:47 app-1  | 
2024-07-04 01:30:47 app-1  | Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2024-07-04 01:30:47 app-1  | 
2024-07-04 01:30:47 app-1  | This container image and its contents are governed by the NVIDIA Deep Learning Container License.
2024-07-04 01:30:47 app-1  | By pulling and using the container, you accept the terms and conditions of this license:
2024-07-04 01:30:47 app-1  | https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
2024-07-04 01:30:47 app-1  | 
2024-07-04 01:30:47 app-1  | A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
2024-07-04 01:30:47 app-1  | 
2024-07-04 01:31:31 app-1  | /usr/local/lib/python3.10/dist-packages/gradio/processing_utils.py:588: UserWarning: Trying to convert audio automatically from int32 to 16-bit int format.
2024-07-04 01:31:31 app-1  |   warnings.warn(warning.format(data.dtype))
2024-07-04 01:31:32 app-1  | The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
2024-07-04 01:31:32 app-1  | 
0it [00:00, ?it/s]
0it [00:00, ?it/s]
2024-07-04 01:31:35 app-1  | Traceback (most recent call last):
2024-07-04 01:31:35 app-1  |   File "/app/scripts/inference.py", line 424, in <module>
2024-07-04 01:31:35 app-1  |     inference_process(
2024-07-04 01:31:35 app-1  |   File "/app/scripts/inference.py", line 181, in inference_process
2024-07-04 01:31:35 app-1  |     with ImageProcessor(img_size, face_analysis_model_path) as image_processor:
2024-07-04 01:31:35 app-1  |   File "/app/hallo/datasets/image_processor.py", line 97, in __init__
2024-07-04 01:31:35 app-1  |     self.face_analysis = FaceAnalysis(
2024-07-04 01:31:35 app-1  |   File "/usr/local/lib/python3.10/dist-packages/insightface/app/face_analysis.py", line 31, in __init__
2024-07-04 01:31:35 app-1  |     model = model_zoo.get_model(onnx_file, **kwargs)
2024-07-04 01:31:35 app-1  |   File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
2024-07-04 01:31:35 app-1  |     model = router.get_model(providers=providers, provider_options=provider_options)
2024-07-04 01:31:35 app-1  |   File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
2024-07-04 01:31:35 app-1  |     session = PickableInferenceSession(self.onnx_file, **kwargs)
2024-07-04 01:31:35 app-1  |   File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
2024-07-04 01:31:35 app-1  |     super().__init__(model_path, **kwargs)
2024-07-04 01:31:35 app-1  |   File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
2024-07-04 01:31:35 app-1  |     self._create_inference_session(providers, provider_options, disabled_optimizers)
2024-07-04 01:31:35 app-1  |   File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 472, in _create_inference_session
2024-07-04 01:31:35 app-1  |     sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
2024-07-04 01:31:35 app-1  | onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from ./pretrained_models/face_analysis/models/1k3d68.onnx failed:Protobuf parsing failed.
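An `INVALID_PROTOBUF` error from ONNX Runtime usually means the bytes on disk are not an ONNX model at all — commonly a Git LFS pointer stub (a small text file left behind when `git lfs pull` was never run) or a truncated download. A minimal diagnostic sketch, not part of the repo (the function name and verdict strings are illustrative):

```python
import os

# Git LFS pointer files start with this line; a real .onnx file is binary protobuf.
LFS_MAGIC = b"version https://git-lfs.github.com/spec"

def diagnose_model_file(path):
    """Guess why ONNX Runtime might reject this file with INVALID_PROTOBUF."""
    if not os.path.exists(path):
        return "missing"
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        head = f.read(len(LFS_MAGIC))
    if head.startswith(LFS_MAGIC):
        return "lfs-pointer"   # stub only; fetch real weights with `git lfs pull`
    if size < 1024:
        return "truncated"     # far too small to be real model weights
    return "looks-binary"      # at least not a pointer stub or empty file

# Path taken from the traceback above:
# print(diagnose_model_file("./pretrained_models/face_analysis/models/1k3d68.onnx"))
```

If this reports `lfs-pointer` or `truncated`, re-downloading the pretrained models (or running `git lfs pull` in the repo before building the image) should resolve the crash.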