option to keep multiple models in memory #12227

Merged: 5 commits merged into dev on Aug 5, 2023
Conversation

@AUTOMATIC1111 (Owner) commented Jul 31, 2023

Description

  • option to specify how many loaded models you want to keep in memory (see the sketch after this list)
  • additional checkbox that lets you decide whether you want to keep just the one model in VRAM or all of them
  • if switching to an already loaded model, it's either instantaneous (if the checkbox is checked) or very fast, something like 1s for me (if the checkbox is not checked)
  • obsoletes the "Checkpoints to cache in RAM" setting.
  • tested to work with --medvram
  • Loras tested to work properly
  • additional things I put into the PR despite telling others not to put extra stuff
    • suppressed sgm/ldm print statements can be restored in settings
    • do_inpainting_hijack removed; the function is hijacked once at startup instead (see the startup-patch sketch below)
    • fixed a bug where get_empty_cond would create empty prompt cond using Lora weights
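
Functionally, the first three items amount to an LRU cache of loaded checkpoints plus a placement policy for the inactive ones. The sketch below is illustrative only and is not the PR's actual implementation; `ModelCache`, `load_fn`, `max_loaded`, and `keep_in_vram` are hypothetical names standing in for the real setting and code paths:

```python
import collections
import torch

class ModelCache:
    """LRU cache of loaded models (hypothetical sketch, not the PR's code).

    max_loaded   - how many loaded models to keep in memory at once
    keep_in_vram - if True, every cached model stays on the GPU, so
                   switching is effectively instantaneous; if False, only
                   the active model does, and switching back costs a
                   RAM -> VRAM copy (the "very fast, ~1s" case)
    """

    def __init__(self, max_loaded=2, keep_in_vram=False):
        self.max_loaded = max_loaded
        self.keep_in_vram = keep_in_vram
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.loaded = collections.OrderedDict()  # name -> model, in LRU order

    def get(self, name, load_fn):
        if name in self.loaded:
            self.loaded.move_to_end(name)            # mark as most recently used
        else:
            self.loaded[name] = load_fn(name)        # expensive: load from disk
            while len(self.loaded) > self.max_loaded:
                self.loaded.popitem(last=False)      # evict least recently used
        model = self.loaded[name].to(self.device)    # active model lives in VRAM
        if not self.keep_in_vram:
            for other_name, other in self.loaded.items():
                if other_name != name:
                    other.to("cpu")                  # park inactive models in RAM
        return model

# usage with tiny stand-in "checkpoints"
cache = ModelCache(max_loaded=2, keep_in_vram=False)
a = cache.get("model-a", lambda name: torch.nn.Linear(4, 4))
b = cache.get("model-b", lambda name: torch.nn.Linear(4, 4))
a2 = cache.get("model-a", lambda name: torch.nn.Linear(4, 4))  # cache hit, no reload
assert a is a2
```

With `keep_in_vram=True` the `.to("cpu")` pass is skipped, which corresponds to the instantaneous-switching behavior described above.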

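The do_inpainting_hijack change follows a common pattern: instead of calling a patch function before each use, the monkey-patch is applied once at program startup so every later caller sees the patched function. A minimal, self-contained sketch of that pattern under assumed names (`sampling`, `apply_hijacks`), not the PR's actual code:

```python
import types

# stand-in for a third-party module whose function we want to hijack (hypothetical)
sampling = types.SimpleNamespace()
sampling.sample = lambda x: x * 2          # "original" behavior

_original_sample = sampling.sample         # keep a reference to the original

def _patched_sample(x):
    # adjust behavior around the original implementation, then defer to it
    return _original_sample(x) + 1

def apply_hijacks():
    # call once at startup; afterwards every caller sees the patched function
    sampling.sample = _patched_sample

apply_hijacks()
assert sampling.sample(3) == 7             # (3 * 2) + 1
```
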
Screenshots/videos:

[screenshot: firefox_N63eoA4M7X]

@AUTOMATIC1111 merged commit c613416 into dev on Aug 5, 2023; 6 checks passed.
@AUTOMATIC1111 deleted the multiple_loaded_models branch on Aug 5, 2023 at 04:52.

@yoyoinneverland commented Oct 6, 2023

This introduced an issue where models get corrupted when a previously loaded model is reused as the base for loading a new one.
See #13516 for a temporary fix until it's sorted out.
