Is there GPU support for box-prompted SAM? #58
I have found that the GPU accelerates inference effectively, at approximately 30-40 ms per image on a 3090 Ti. The problem is that the first inference run is slower, and I'm not sure why.
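The slow first run is usually one-time CUDA startup cost rather than the model itself. A minimal timing sketch of this effect, using a tiny stand-in module instead of EfficientSAM (the model and sizes here are placeholders, not the repo's API), with `torch.cuda.synchronize()` so GPU timings are meaningful:

```python
import time
import torch

# Fall back to CPU so the sketch also runs without a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device).eval()  # stand-in for EfficientSAM
x = torch.randn(1, 3, 256, 256, device=device)

with torch.no_grad():
    # First call: pays one-time CUDA context/kernel initialization costs.
    t0 = time.perf_counter()
    model(x)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the async GPU work to finish
    first = time.perf_counter() - t0

    # Steady state: average over repeated calls after warmup.
    t0 = time.perf_counter()
    for _ in range(10):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    steady = (time.perf_counter() - t0) / 10

print(f"first call: {first * 1000:.1f} ms, steady state: {steady * 1000:.3f} ms")
```

On a GPU the first call is typically much slower than the steady-state average, which matches the behavior described above.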
@silinsi, it should be very fast to run EfficientSAM on GPU. Can you share more information?
@liutongkun, for the first inference, loading the model onto the GPU and moving the data to the GPU may take time. Can you share the latency of the first inference?
Thanks for your reply. I moved the data and the model to the GPU before starting the timer; here is my code, based on EfficientSAM_example.py:
and it shows:
@liutongkun, can you move the model/data before the loop?
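The suggestion above can be sketched as follows: do the `.to(device)` transfers and a warmup call once, outside the timed loop, so the per-image numbers measure only inference. The module and tensor shapes are illustrative placeholders, not EfficientSAM's actual interface:

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device).eval()  # stand-in model
images = [torch.randn(1, 3, 256, 256) for _ in range(5)]

# Move the data to the device ONCE, outside the timed loop.
images = [img.to(device) for img in images]

with torch.no_grad():
    # Warmup so one-time CUDA costs don't pollute the first measurement.
    model(images[0])
    if device == "cuda":
        torch.cuda.synchronize()

    times = []
    for img in images:
        t0 = time.perf_counter()
        model(img)
        if device == "cuda":
            torch.cuda.synchronize()  # make the async GPU call finish before stopping the clock
        times.append(time.perf_counter() - t0)

print([f"{t * 1000:.2f} ms" for t in times])
```

With the transfers and warmup hoisted out, each loop iteration times only the forward pass.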
I modified the code to:
and it shows:
Thanks, that is useful.
It seems that the provided EfficientSAM only works on the CPU. I tried using cuda() to move the model and data to the GPU, but it doesn't help much.
I also tried the seg-everything-on-cuda method from the pull requests, which doesn't help much either.
Maybe box-prompted SAM needs a function like predictor.set_image() in SAM and MobileSAM to save the time spent re-encoding the same image.
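The set_image() idea mentioned above is to run the heavy image encoder once, cache the embedding, and answer each new box prompt with only the light decoder. A minimal sketch of that caching pattern, where the encoder, decoder, and prompt handling are placeholder modules (EfficientSAM's real components differ):

```python
import torch

class CachedPromptPredictor:
    """Cache the image embedding so repeated prompts skip the encoder."""

    def __init__(self, encoder: torch.nn.Module, decoder: torch.nn.Module):
        self.encoder = encoder      # heavy image encoder, run once per image
        self.decoder = decoder      # light prompt decoder, run once per prompt
        self._embedding = None

    def set_image(self, image: torch.Tensor) -> None:
        # Run the expensive encoder a single time and cache the result.
        with torch.no_grad():
            self._embedding = self.encoder(image)

    def predict(self, prompt: torch.Tensor) -> torch.Tensor:
        assert self._embedding is not None, "call set_image() first"
        with torch.no_grad():
            # Placeholder: a real decoder would condition on the box prompt.
            return self.decoder(self._embedding)

# Placeholder modules standing in for the ViT encoder and mask decoder.
encoder = torch.nn.Conv2d(3, 8, kernel_size=3)
decoder = torch.nn.Conv2d(8, 1, kernel_size=1)

predictor = CachedPromptPredictor(encoder, decoder)
predictor.set_image(torch.randn(1, 3, 64, 64))

# Many box prompts against the same image reuse the cached embedding.
masks = [predictor.predict(torch.tensor([0.0, 0.0, 32.0, 32.0])) for _ in range(3)]
```

With this split, the per-prompt cost is only the decoder, which is the behavior SAM's and MobileSAM's `predictor.set_image()` provides.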