
Exited with code 137 #331

Open
CaitlinVa opened this issue Oct 14, 2024 · 9 comments
Labels: bug (Something isn't working)

@CaitlinVa

Which OS are you using?

  • OS: macOS Sequoia 15.0.1

The program starts with no problem, but when the model.bin is finalized I get the following error.
I use Docker Desktop.
[screenshot of the error: "Exited with code 137"]

CaitlinVa added the bug label on Oct 14, 2024
@jhj0517 (Owner) commented on Oct 15, 2024

Hi. I just tried to reproduce the error but failed.

Exited with code 137

According to the Stack Overflow discussion here, exit code 137 means the process was killed with SIGKILL (128 + 9), which usually happens when your device doesn't have enough RAM and the OOM killer steps in.
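
If you want to confirm it really was an OOM kill, docker inspect can show it (the container name below is just a placeholder, use yours):

docker ps -a                          # find the exited container's name or ID
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' <container-name>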

Docker uses a lot of RAM and your device may not be able to run the container.

Alternatively you can download Whisper-WebUI-Portable-Mac.zip and install it locally.

@CaitlinVa (Author)

I don't think it's RAM related. I have 18 GB of RAM and I just monitored it; it doesn't go over 13 GB. This is what Docker says about RAM and CPU usage:
[screenshot of Docker's RAM and CPU usage stats]

@jhj0517 (Owner) commented on Oct 16, 2024

OK, since you're using a Mac (macOS Sequoia 15.0.1), check whether the memory limit for your container might be misconfigured. (The default setup assumes an Nvidia GPU, so a Mac needs a different configuration):

If these lines exist in your docker-compose.yaml, remove them all:

# If you're not using nvidia GPU, Update device to match yours.
# See more info at : https://docs.docker.com/compose/compose-file/deploy/#driver
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [ gpu ]

Also, remove these lines from requirements.txt if they exist.

# Remove the --extra-index-url line below if you're not using Nvidia GPU.
# If you're using it, update url to your CUDA version (CUDA 12.1 is minimum requirement):
# For CUDA 12.1, use : https://download.pytorch.org/whl/cu121
# For CUDA 12.4, use : https://download.pytorch.org/whl/cu124
--extra-index-url https://download.pytorch.org/whl/cu121

If you rebuild the image, it should work fine. Please let me know if you still have the same problem.
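
For reference, I rebuild with something like this (use docker-compose instead of docker compose if you're on the older CLI):

docker compose down          # stop and remove the existing container
docker compose up --build    # rebuild the image and start it again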

@CaitlinVa (Author)

Those lines were already removed. If I don't remove them, the Compose file won't build at all.

@jhj0517 (Owner) commented on Oct 17, 2024

when the model.bin is finalized I get the following error.

I still think this is probably an OOM (out-of-memory) error.
Do you still get the same error with smaller models, like tiny or tiny.en?
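You can also watch the container's memory usage from the terminal while it transcribes, e.g.:

docker stats    # live CPU and memory usage per running container (Ctrl+C to stop)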

@CaitlinVa (Author)

I've tested it with the medium model and that works. But I also have a laptop with only 8 GB of RAM, and the large model works there.

@jhj0517 (Owner) commented on Oct 17, 2024

I don't know why, but it seems that Docker on your Mac is somehow limiting the memory usage.

Try running the image with --oom-kill-disable by adding this to your docker-compose.yml:

oom_kill_disable: true
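
For placement, it goes under the service entry, roughly like this (the service name app is just an example; use the one in your compose file):

services:
  app:                        # example service name
    oom_kill_disable: true    # stop the kernel from OOM-killing this container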

@CaitlinVa (Author)

I get this error when running it:
[screenshot of the error]

@jhj0517 (Owner) commented on Oct 17, 2024

I'm not sure if this will help, but according to the documentation, you can set the memory limit manually in docker-compose.yml, probably to something like 12g:

services:
  app:
    mem_limit: 12g
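
If it still gets killed, you can check what the container actually ends up with, for example:

docker compose config       # prints the merged compose configuration, including the memory limit
docker stats --no-stream    # the MEM USAGE / LIMIT column should reflect the new limit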
