[Suggestion] - multi GPU parallel usage #292

Open
Spirit-Catt opened this issue Aug 29, 2023 · 19 comments
Labels
enhancement New feature or request

Comments

@Spirit-Catt

The software works fine with one GPU, but it completely ignores the others. It would be nice if it could automatically generate several images at the same time, depending on the number of GPUs the computer has.

@sfingali

This

@pieceof

pieceof commented Sep 11, 2023

where

@mashb1t
Collaborator

mashb1t commented Jan 1, 2024

Hey, parallel support has not (yet) been implemented. All issues with the label "enhancement" are ready for (re-)evaluation and prioritisation of the backlog.

@oldhand7

Hey, could you let me know how to enable multi-GPU support? Does it require codebase improvements or some system configuration? Looking forward to hearing from you.
Thanks.

@oldhand7

> This

> This

what does it mean?

@mashb1t
Collaborator

mashb1t commented Feb 18, 2024

> Hey, could you let me know how to enable multi-GPU support? Does it require codebase improvements or some system configuration? Looking forward to hearing from you. Thanks.

@oldhand7

  1. advanced params refactoring + prevent users from skipping/stopping other users tasks in queue #981 has to be merged to make each generation independent from the global variables in advanced parameters, so they are truly separated.
  2. a flag --multi-gpu has to be implemented, which then spawns an async worker process for each GPU (see the sketch after this comment)
  3. the GPU has to be persistently assigned to each worker process and its processing, leading to basically the whole ldm_patched pipeline having to be refactored, incl. model management and VRAM improvements.

I assume that this comes down to an estimated 3-5 days of development effort.
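
For illustration, here is a minimal sketch of what step 2 could look like: one worker process per GPU, pinned via CUDA_VISIBLE_DEVICES before any CUDA state is created. Everything here (worker, run_pipeline, the queue wiring) is a hypothetical stand-in, not actual Fooocus code:

```python
# Hypothetical sketch only -- not Fooocus code. One worker process per GPU,
# each pinned to its device before any CUDA initialisation in the child.
import multiprocessing as mp
import os

def worker(gpu_index, tasks):
    # Pin the device first, so model loading, sampling and VRAM management
    # in this process only ever see a single GPU.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    while (task := tasks.get()) is not None:  # None is the shutdown sentinel
        run_pipeline(task)

def run_pipeline(task):
    # Stand-in for the real generation entry point.
    print(f"GPU {os.environ['CUDA_VISIBLE_DEVICES']} processed {task}")

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # fresh children, no inherited CUDA state
    tasks = ctx.Queue()
    n_gpus = 2  # e.g. torch.cuda.device_count()
    workers = [ctx.Process(target=worker, args=(i, tasks)) for i in range(n_gpus)]
    for w in workers:
        w.start()
    for prompt in ["task-a", "task-b", "task-c"]:
        tasks.put(prompt)
    for _ in workers:
        tasks.put(None)   # one sentinel per worker
    for w in workers:
        w.join()
```

The "spawn" start method matters here: forked children would inherit the parent's CUDA state, which breaks per-process device pinning.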

@oldhand7

Hi @mashb1t, I would like to contact you on Skype. Could you share your WhatsApp number or Skype ID?

@oldhand7

> Hey, could you let me know how to enable multi-GPU support? Does it require codebase improvements or some system configuration? Looking forward to hearing from you. Thanks.

> @oldhand7
>
> 1. advanced params refactoring + prevent users from skipping/stopping other users tasks in queue #981 has to be merged to make each generation independent from the global variables in advanced parameters, so they are truly separated.
> 2. a flag --multi-gpu has to be implemented, which then spawns an async worker process for each GPU
> 3. the GPU has to be persistently assigned to each worker process and its processing, leading to basically the whole ldm_patched pipeline having to be refactored, incl. model management and VRAM improvements.
>
> I assume that this comes down to an estimated 3-5 days of development effort.

I would like to contact you on Skype. Could you share your WhatsApp number or Skype ID?

@mashb1t
Collaborator

mashb1t commented Feb 18, 2024

@oldhand7 No. All communication regarding Fooocus should happen in this repository. You can open a discussion in the category "ideas" or "q&a" to exchange ideas.

@oldhand7

Sorry, I am afraid I did something wrong. I would like to know whether it is possible and, if so, how to implement it. You mentioned 3-5 days of work would be expected. Is that work underway, or is it just an estimate? Can I have a look at the current status?

@mashb1t
Collaborator

mashb1t commented Feb 18, 2024

@oldhand7 The last update of Fooocus introduced a --multi-users flag, which currently has no effect. I assume that either ldm_patched is being worked on or this has been added as general preparation for the future.
AFAIK there currently is no progress on the feature, at least from what I can see in PRs/branches. The estimate is just a gut feeling, not planned yet.

@oldhand7

oldhand7 commented Feb 18, 2024

Actually, I've just tried to do it myself with multiprocessing, but I don't think I've got the right approach. I've changed webui.py for multi-threading, but it didn't work. May I ask which parts should be improved, or what the key part to implement for this feature is? Do I need to use other Python libs? Do I have to change the whole structure? I would like to contribute. Thanks.

@mashb1t
Collaborator

mashb1t commented Feb 18, 2024

@oldhand7 The key part is to make the model management, incl. all caches and memory optimisations, work for both one and multiple GPUs, as well as handling multiple async_worker processes + yielding correctly to Gradio (see the sketch below).
ldm_patched may also have to be changed.
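
As a rough illustration of the yielding part (a hedged sketch; the tuple flags, function names, and queue wiring are assumptions, not the actual Fooocus API): Gradio streams the output of generator functions, so a front-end function could drain a results queue fed by the GPU workers and yield each update as it arrives:

```python
# Hedged sketch: GPU workers put ("preview", data) or ("finish", data) tuples
# onto a shared results queue; this generator drains the queue so Gradio can
# stream intermediate previews and the final result to the browser.
import multiprocessing as mp

def generate(prompt, task_queue, result_queue):
    task_queue.put(prompt)  # hand the task to whichever GPU worker is free
    while True:
        kind, payload = result_queue.get()
        if kind == "preview":
            yield payload  # intermediate preview image / progress text
        elif kind == "finish":
            yield payload  # final image(s)
            return

# Gradio treats a generator function bound to an event (e.g. button.click)
# as a streaming function: every `yield` updates the outputs in the UI.
```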

@oldhand7

> 1. advanced params refactoring + prevent users from skipping/stopping other users tasks in queue #981 has to be merged to make each generation independent from the global variables in advanced parameters, so they are truly separated.
> 2. a flag --multi-gpu has to be implemented, which then spawns an async worker process for each GPU
> 3. the GPU has to be persistently assigned to each worker process and its processing, leading to basically the whole ldm_patched pipeline having to be refactored, incl. model management and VRAM improvements.
>
> I assume that this comes down to an estimated 3-5 days of development effort.

AFAIK, you mentioned here that it needs less than 5 days of work. So do you have a concrete plan or idea for implementing it the right way?
If so, could you share your idea? I would like to collaborate with you on this. Thanks for your understanding.

@oldhand7

Can you help me with this implementation?

@oldhand7

oldhand7 commented Feb 18, 2024

Is it possible? If so, can I contribute to this implementation? Of course, I may need your help.

@piramiday

hey @oldhand7, I dig your enthusiasm but I find your netiquette quite lacking -- please stop spamming the multitude of users subscribed to this issue and open a new discussion about this topic instead, as mashb1t suggested earlier.

@mashb1t
Collaborator

mashb1t commented Feb 18, 2024

last comment for me on this matter: continued in #2292 for anybody who wants to follow along

@SurvivaLlama

> This
>
> This
>
> what does it mean?

That people want it to happen.
