
Diffusers


SD.Next includes experimental support for additional model pipelines.
This includes support for additional models such as:

  • Stable Diffusion XL
  • Kandinsky
  • Deep Floyd IF

And soon:

  • Shap-E, UniDiffuser, Consistency Models, Diffedit Zero-Shot
  • Text2Video, Video2Video, etc...

Note that support is experimental; do not open GitHub issues for these models.
Instead, reach out on Discord using the dedicated #diffusers-sdxl channel.

This has been made possible by integration of the huggingface diffusers library, with the help of the huggingface team!

How to

Nvidia Graphics Card Note:

  • Please note that if you are using an Nvidia graphics card on Windows 10/11, ensure that your driver version is optimal.
    If needed, roll back to the last 531 version by downloading it from this link: Download Nvidia Driver 531.79.
  1. Install SD.Next:

    • Clone the repository by running git clone https://github.com/vladmandic/automatic <target dir name if desired> in the desired target directory.
    • Navigate into the cloned directory and run webui.bat (Windows) or webui.sh (Linux/Mac) to start the web interface.
    • It is recommended, at least for the time being, to include the --debug option for more detailed information during the initial setup and for later tech support if necessary.
    • When asked to download the default model, you can safely choose "N" to skip the download.

    Note: If the web interface terminates with an ImportError: cannot import name 'deprecated' from 'typing_extensions' on the first run, this is expected and unavoidable; simply run it again.

  2. Initial Setup:

    • Once the web interface is running after the second start-up, configure the following in the Settings tab:
    • If you already have models, LoRAs, embeddings, or LyCORIS files, set your System Paths now to save trouble in the future.
      Pay special attention to the Diffusers entry, as that is where diffusers models will be downloaded.
    • Go to Settings/Diffusers Settings.
    • Check the following options:
      • Select diffuser pipeline when loading from safetensors (set it to Stable Diffusion XL)
      • Move base model to CPU when using refiner and Move refiner model to CPU when not in use.
    • Optionally, add sd_backend, diffusers_pipeline, and sd_model_refiner to the Quicksettings list in Settings/User Interface.
    • Consider disabling checkpoint autoload on server start if desired.
    • Apply the settings, which will also apply some new defaults.
  3. SD-XL Download

    • Go to the Models tab and select Huggingface.
    • Enter your Huggingface token in the provided area.
    • Download the following two models by entering them, one at a time, into the Select model field and clicking Download model:
      • stabilityai/stable-diffusion-xl-base-0.9
      • stabilityai/stable-diffusion-xl-refiner-0.9

    Note: Wait until the first download finishes before starting the second. Check your console for live status.
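
    For reference, the UI download is roughly equivalent to fetching the repository with the huggingface_hub client. A minimal sketch (the token value is a placeholder):

    ```python
    from huggingface_hub import snapshot_download

    # Download the gated SD-XL base repository into the local huggingface cache;
    # a token is required because access to the 0.9 weights is gated
    snapshot_download(
        repo_id="stabilityai/stable-diffusion-xl-base-0.9",
        token="hf_your_token_here",  # placeholder: use your own access token
    )
    ```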

  4. SD-XL VAE Download
    This is completely optional, as SD-XL contains a built-in VAE.
    The only difference is that the third-party VAE is pruned to fp16 format, so it is lighter on VRAM usage.

    • Create a folder under models/VAE and give it a name; the folder name will be used as the VAE name.
      If the folder name is exactly the same as the model name, it will be auto-detected and used when VAE is set to "Automatic".
    • Go to https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/tree/main
      Download config.json
      Download diffusion_pytorch_model.safetensors or sdxl_vae.safetensors and rename it to diffusion_pytorch_model.fp16.safetensors (no other files are needed).
      Unlike the original workflow, the VAE sits in a separate folder together with its config, since each model has a different VAE configuration.
    • Click refresh VAE list in the UI and select the VAE.
    • The VAE is loaded as part of model load, so either restart the app or reload the model.
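
    If you prefer to pull the same VAE with the diffusers library directly, a minimal sketch (assuming a recent diffusers version):

    ```python
    import torch
    from diffusers import AutoencoderKL

    # Load the fp16-fix SD-XL VAE straight from the hub; this is the same
    # model as the files downloaded manually above
    vae = AutoencoderKL.from_pretrained(
        "madebyollin/sdxl-vae-fp16-fix",
        torch_dtype=torch.float16,
    )
    ```
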
  5. Disable Extensions and Restart:

    • From the Extensions tab, uncheck the boxes for ControlNet and MultiDiffusion, as they are not currently compatible with SDXL.
      Then press Apply changes & restart server to restart and apply the changes.
    • After selecting the diffusers backend and restarting, you can explore the Settings/Samplers section to enable the samplers you need.

Once you have completed these steps, you should be ready to generate images using SD.Next with the specified settings and models. Simply select stabilityai/stable-diffusion-xl-base-0.9 from the checkpoint dropdown; the Refiner is enabled in the Highres fix region.

Enjoy your new SDXL image generation!

Additional Notes:

  1. Minimum Image Size: Please ensure that the minimum image size is set to 1024x1024, as anything smaller will result in significantly reduced quality.
  2. Command line arguments: Users may execute SD.Next without specifying the --backend diffusers or --backend original options, as the application should switch between them live.
    However, these options are still available if you intend to only work with one of them.
  3. Performance: For graphics cards with 12GB VRAM or less, consider using the --medvram and/or --lowvram options.
    While not strictly necessary, these options help ensure optimal performance on lower-VRAM cards and allow the generation of larger images.

Integration

Standard workflows

  • txt2img
  • img2img
  • process

Model Access

  • For standard SD 1.5 and SD 2.1 models, you can use either
    standard safetensor models (single file) or diffusers models (folder structure)
  • For additional models, you can use diffusers models only
  • You can download diffuser models directly from Huggingface hub
    or use built-in model search & download in SD.Next: UI -> Models -> Huggingface
  • Note that access to some models is gated;
    in that case, you need to accept the model EULA and provide your huggingface token
  • When loading safetensors models, you must specify model pipeline type in:
    UI -> Settings -> Diffusers -> Pipeline
    When loading huggingface models, pipeline type is automatically detected
  • If you get a Diffuser model downloaded error such as model=stabilityai/stable-diffusion-etc [Errno 2] No such file or directory,
    you need to go to the HuggingFace page and accept the EULA for that model.
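
To illustrate the difference between the two loading paths, here is a minimal diffusers sketch (assuming a recent diffusers version; the safetensors path is a placeholder):

```python
from diffusers import DiffusionPipeline, StableDiffusionXLPipeline

# Huggingface hub model (folder structure): the pipeline type is auto-detected
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-0.9")

# Single-file safetensors model: the pipeline class must be chosen explicitly,
# which is what the UI -> Settings -> Diffusers -> Pipeline setting selects for you
pipe = StableDiffusionXLPipeline.from_single_file("/path/to/model.safetensors")
```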

Extra Networks

  • Lora networks
  • Textual inversions (embeddings)

Note that Lora and TI networks are still model-specific, so you cannot use a Lora trained on SD 1.5 with SD-XL
(just like you couldn't on an SD 2.1 model); it needs to be trained for a specific model

Support for SD-XL training is expected shortly
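
For reference, loading these extra networks through the diffusers library looks roughly like this (a sketch; the file names are placeholders, and the Lora must match the loaded base model):

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Apply a Lora from the current directory and a textual-inversion embedding
pipe.load_lora_weights(".", weight_name="my_lora.safetensors")
pipe.load_textual_inversion("my_embedding.safetensors", token="my-embedding")
```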

Diffuser Settings

  • UI -> Settings -> Diffuser Settings
    contains additional tunable parameters

Samplers

  • Samplers (schedulers) are pipeline-specific, so when running with the diffusers backend, you'll see a different list of samplers
  • UI -> Settings -> Sampler Settings shows different configurable parameters depending on the backend
  • The recommended sampler for diffusers is DEIS
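
Switching a diffusers pipeline to the DEIS sampler is a one-line scheduler swap; a minimal sketch:

```python
from diffusers import DEISMultistepScheduler, DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Schedulers are the diffusers equivalent of samplers; reuse the pipeline's
# existing scheduler config so only the sampling algorithm changes
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
```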

Other

  • Updated System Info tab with additional information
  • Support for lowvram and medvram modes - Both work extremely well
    Additional tunables are available in UI -> Settings -> Diffuser Settings
  • Support for both default SDP and xFormers cross-optimizations
    Other cross-optimization methods are not available
  • Extra Networks UI will show available diffusers models
  • CUDA model compile
    UI Settings -> Compute settings
    Requires GPU with high VRAM
    Diffusers recommends the reduce-overhead compile mode, but other methods are available as well
    Fullgraph compile is possible (with sufficient VRAM) when using diffusers
  • Note that some CUDA compile modes only work on Linux
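
For reference, the compile option corresponds roughly to the torch.compile call that diffusers recommends (a sketch; requires ample VRAM):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# reduce-overhead is the diffusers-recommended mode; fullgraph=True needs extra VRAM
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```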

SD-XL Notes

  • SD-XL Technical Report
  • The SD-XL model is designed as a two-stage model
    You can run the SD-XL pipeline using just the base model, or load both the base and refiner models
    • base: Trained on images with variety of aspect ratios and uses OpenCLIP-ViT/G and CLIP-ViT/L for text encoding
    • refiner: Trained to denoise small noise levels of high quality data and uses the OpenCLIP model
    • Having both base model and refiner model loaded can require significant VRAM
    • If you want to use refiner model, it is advised to add sd_model_refiner to quicksettings
      in UI Settings -> User Interface
  • SD-XL model was trained on 1024px images
    You can use it with smaller sizes, but you will likely get better results with SD 1.5 models
  • SD-XL model NSFW filter has been turned off
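
For reference, the two-stage flow corresponds roughly to this diffusers sketch (loading both models on the GPU requires significant VRAM, as noted above):

```python
import torch
from diffusers import DiffusionPipeline

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse"
latents = base(prompt=prompt, output_type="latent").images  # stage 1: base, keep latents
image = refiner(prompt=prompt, image=latents).images[0]     # stage 2: refiner denoises
```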

Download

  1. Go to https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9/tree/main and fill out the form.
  2. On the Huggingface Website go to your Profile -> Settings -> Access Tokens. Generate an Access Token and copy that.
  3. Go to UI -> Models -> Huggingface and enter the Access Token in the Huggingface token field
  4. Enter stabilityai/stable-diffusion-xl-base-0.9 in Select Model and press Download
  5. Enter stabilityai/stable-diffusion-xl-refiner-0.9 in Select Model and press Download

Do not attempt to use the safetensors version of SD-XL until full support is added (soon)

Limitations

  • Skip/Stop operations are not possible while running a diffusers model
  • Any extension that requires access to model internals will likely not work when using diffusers backend
    This for example includes standard extensions such as ControlNet, MultiDiffusion, LyCORIS
    Note: application will auto-disable incompatible built-in extensions when running in diffusers mode
    If you go back to original mode, you will need to re-enable extensions
  • Explicit VAE usage is not yet implemented
  • Explicit refiner as postprocessing is not yet implemented
  • Workflows such as hires fix are not yet implemented
  • Hypernetworks are not supported
  • Limited callbacks support for scripts/extensions: additional callbacks will be added as needed

Performance

Comparison of the original stable diffusion pipeline and the diffusers pipeline when using a standard SD 1.5 model
Performance is measured for batch sizes 1, 2, 4, 8, and 16

| pipeline | performance it/s (batch 1 / 2 / 4 / 8 / 16) | memory cpu/gpu |
|---|---|---|
| original | 7.99 / 7.93 / 8.83 / 9.14 / 9.2 | 6.7 / 7.2 |
| original medvram | 6.23 / 7.16 / 8.41 / 9.24 / 9.68 | 8.4 / 6.8 |
| original lowvram | 1.05 / 1.94 / 3.2 / 4.81 / 6.46 | 8.8 / 5.2 |
| diffusers | 9 / 7.4 / 8.2 / 8.4 / 7.0 | 4.3 / 9.0 |
| diffusers medvram | 7.5 / 6.7 / 7.5 / 7.8 / 7.2 | 6.6 / 8.2 |
| diffusers lowvram | 7.0 / 7.0 / 7.4 / 7.7 / 7.8 | 4.3 / 7.2 |
| diffusers with safetensors | 8.9 / 7.3 / 8.1 / 8.4 / 7.1 | 5.9 / 9.0 |

Notes:

  • Test environment: nVidia RTX 3060 GPU, Torch 2.1-nightly with CUDA 12.1, Cross-optimization: SDP
  • All else being equal, diffusers seems to:
    • Use slightly less RAM and more VRAM
    • Have highly efficient medvram/lowvram equivalents which don't lose much performance
    • Be faster on smaller batch sizes and slower on larger batch sizes
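
The medvram/lowvram equivalents map roughly onto the diffusers offloading helpers; a minimal sketch (the exact mapping used by SD.Next is an assumption here):

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# medvram-like: offload whole sub-models to the CPU between uses
pipe.enable_model_cpu_offload()

# lowvram-like: offload layer by layer (lowest VRAM use, usually slowest)
# pipe.enable_sequential_cpu_offload()
```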