I'm able to POST to the API in Docker on my local machine. I get a 200 success after the inpainting function finishes. Then on my frontend, when I get the data back, it returns this:
{"$error":{"code":"PIPELINE_ERROR","name":"RuntimeError","message":"CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 8.00 GiB total capacity; 7.16 GiB already allocated; 0 bytes free; 7.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation
I have a:
Alienware laptop
RTX 3070
64 GB RAM
Thanks! -foo
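
For reference, the `max_split_size_mb` hint that the error message suggests is passed through PyTorch's `PYTORCH_CUDA_ALLOC_CONF` environment variable, which has to be set before anything touches CUDA. A minimal sketch, with 128 MiB as an arbitrary starting value to tune:

```python
import os

# Must be set before the first CUDA allocation; 128 MiB is an arbitrary
# starting value -- tune it for your workload
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the env var so the caching allocator picks it up
```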
Hey, welcome! Unfortunately it's as it says: you're out of GPU RAM. There is a way to get it to work on 8 GB, but I hadn't prioritized that until now (you're the first person to need it 😅). It makes things a lot slower, but it's definitely great to be able to dev locally.
I'm about to get on an international flight and have a few other higher-priority issues for as soon as I'm back, but I'll try to get something out in the next week. Watch this space 😁
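
In the meantime, the usual diffusers knobs for squeezing an inpainting pipeline into 8 GB are half-precision weights and attention slicing. A minimal sketch, assuming a standard `StableDiffusionInpaintPipeline` (the model id here is illustrative):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

# Load weights in fp16 to roughly halve VRAM usage
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # illustrative model id
    torch_dtype=torch.float16,
)
# Compute attention in slices instead of one big batch: slower, far less VRAM
pipe.enable_attention_slicing()
pipe = pipe.to("cuda")
```

Attention slicing trades speed for memory, which matches the "a lot slower" caveat above.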
OK, thanks. In the meantime I forked the repo and uploaded it to Banana. I'm able to hit the API, but it's re-downloading the models on every request, not just the first time. I'll get hacking around. Have a safe flight! This stuff is exciting!
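
On the re-downloading: the usual serverless pattern is to download the weights at image build time and load the pipeline once per container, not once per request. A minimal sketch, assuming a Banana-style `init()`/`inference()` handler (`MODEL_DIR` and the model id are illustrative, not the repo's actual names):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

MODEL_DIR = "/models"  # hypothetical cache path baked into the Docker image at build

model = None

def init():
    # Runs once when the container starts, not on every request
    global model
    model = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",  # illustrative model id
        cache_dir=MODEL_DIR,        # reuse weights already downloaded at build time
        torch_dtype=torch.float16,
    ).to("cuda")

def inference(model_inputs: dict):
    # Reuses the already-loaded pipeline; nothing is re-downloaded per request
    return model(**model_inputs).images[0]
```

If the weights aren't in the image (or `cache_dir` points somewhere ephemeral), every cold container will pull them again, which looks exactly like "re-downloading on every request".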
Oh, and I like to dev locally, so what machine specs or brands do you recommend?