Issue when running inference: Protobuf parsing failed. #3
Comments
Got the same issue, but I have 24 GB of VRAM. Used Docker. How do I fix it?
@rvsh2 I created the Docker container for this repo; if you used that, it might be the source of our problem.
Same problem in a Docker container: same error, impossible to run at all.
This is because the models inside the `pretrained_models` folder are outdated compared with what this repo was originally built against. FOR NOW: go to https://huggingface.co/fudan-generative-ai/hallo/tree/main and download the latest files to replace them manually. It works.
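A minimal sketch of the manual replacement described above, assuming you have `huggingface_hub` installed (its `huggingface-cli` tool can mirror the repo linked in the comment); the `pretrained_models` target directory is taken from the comment and may need adjusting for your checkout:

```shell
# Hedged sketch: re-fetch the latest weights from the Hugging Face repo named above.
# Requires: pip install -U huggingface_hub  (provides the huggingface-cli binary)
fetch_models() {
  # HF_CLI is an override hook (e.g. set HF_CLI=echo for a dry run);
  # it defaults to the real huggingface-cli.
  "${HF_CLI:-huggingface-cli}" download fudan-generative-ai/hallo --local-dir "$1"
}

# usage: fetch_models pretrained_models
```

This simply overwrites the stale weights in place; keeping a backup of the old folder first is cheap insurance.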
@TemporalLabsLLC-SOL can you share a full example of how to use the new models? They are not a drop-in replacement; I just tested them and I am getting the same error:
For me, using the command

fixed it.
@bensonbs how much VRAM do you have? Mine is 8 GB.
Hi, whenever I run inference I get this error:

I want to know if this is related to the model size, as I only have 8 GB of VRAM.
If not, is there a way to fix it?