Fooocus 2.5.0: Enhance! #42
Please let me know your opinion on how important the optional next steps are for you, see #40 (comment)
There seems to be an issue when using a model other than the default Juggernaut one: it errors.
(I'm using Colab, btw)
@ILikeToasters we need your video tutorial :)
I'm new here and I'm using Fooocus via Pinokio. How can I run mashb1t's fork with Pinokio?
Fooocus 2.5.0: Enhance!
This release includes a feature requested multiple times by the community (e.g. in lllyasviel#3113, lllyasviel#3089, lllyasviel#3039, and a few more, also see lllyasviel#3122).
What does this feature do?
Enhance allows you to automatically upscale and/or improve parts of the picture based on either a prompt or an input image.
It is comparable to ADetailer (repository), but offers better and more flexible object detection and replacement: it uses detection and replacement prompts instead of static detection models, each of which is ~140 MB.
How do I use it?
Disclaimer
It is highly recommended to use the performance `Speed` or `Quality` (i.e. a performance which does not load a LoRA), as every inpaint engine uses a LoRA, which may not produce the best results when combined with other performance LoRAs. Using the inpaint mode `Improve Detail (face, hand, eyes, etc.)` does not set an inpaint engine, making it compatible with all performances. The documentation of inpaint modes can be found in lllyasviel#414. All of this also applies to normal inpainting without enhancements.
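The compatibility rule above can be sketched as a small helper. This is a hypothetical illustration, not Fooocus's actual implementation; the performance names with their own LoRAs are an assumption, while the mode and `Speed`/`Quality` names come from the text above:

```python
# Hypothetical sketch of the compatibility rule described above,
# NOT Fooocus's actual implementation.

# Assumption: performances that load their own LoRA and may therefore
# clash with an inpaint engine's LoRA.
LORA_PERFORMANCES = {"Extreme Speed", "Lightning", "Hyper-SD"}

def uses_inpaint_engine(inpaint_mode: str) -> bool:
    # "Improve Detail (face, hand, eyes, etc.)" does not set an inpaint engine.
    return inpaint_mode != "Improve Detail (face, hand, eyes, etc.)"

def combination_recommended(performance: str, inpaint_mode: str) -> bool:
    # Safe if either the performance loads no LoRA (e.g. Speed / Quality)
    # or the inpaint mode does not load an inpaint-engine LoRA.
    return performance not in LORA_PERFORMANCES or not uses_inpaint_engine(inpaint_mode)
```
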
ControlNets (`ImagePrompt`, `PyraCanny`, `CPDS`, `FaceSwap`) are currently not supported for enhance steps, but can be used for the image generation that serves as the basis for enhancement.
With image generation
2.1. (optional) Enable and define the order of upscaling or variation (default: disabled)
2.2. Enable and configure any number of other improvement steps.
2.3. Input a detection prompt (what you want to detect in the image).
2.4. Input a positive / negative prompt (what you want to replace the detected masks with; defaults to your normal prompts if not set).
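Conceptually, each enhance step carries its own detection and replacement prompts, with the main prompts as fallback (step 2.4 above). A minimal sketch with a hypothetical data layout, not the actual Fooocus UI state:

```python
# Hypothetical sketch of an enhance step's prompt handling, illustrating
# the fallback described in step 2.4 (not the actual Fooocus data model).

def resolve_prompts(step: dict, main_positive: str, main_negative: str) -> tuple[str, str]:
    """Return the (positive, negative) prompts for an enhance step,
    falling back to the normal generation prompts when unset."""
    positive = step.get("positive") or main_positive
    negative = step.get("negative") or main_negative
    return positive, negative

# One enhance step: detect faces, replace them using the step's own prompt.
face_step = {"detection": "face", "positive": "detailed face, sharp eyes"}
# Another step: detect hands; no step prompts, so fall back to the main prompts.
hand_step = {"detection": "hand"}

print(resolve_prompts(face_step, "a woman in a yellow sundress", "blurry"))
print(resolve_prompts(hand_step, "a woman in a yellow sundress", "blurry"))
```
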
Based on an existing image
Simply open Image Input and upload an image to the Enhance tab. Prompt processing will be skipped and only enhancement steps are processed. Follow steps 2+ above.
You may set `--enable-auto-describe-image` to automatically generate the prompt after image upload.
Examples
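For example, from the Fooocus directory (assuming the standard `launch.py` entry point; adjust the command to your setup):

```shell
python launch.py --enable-auto-describe-image
```
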
UI
#1 Yellow Sundress
#2 Hands replacement
Upscaling or Variation
Before First Enhancement
After Last Enhancement
Models
By default it uses the SAM (website, repository) masking model, backed by GroundingDINO (paper, diffusers docs), but it also supports all additional models currently supported by RemBG (repository). GroundingDINO + SAM do not use RemBG as a handler, but have been natively implemented into Fooocus for even better results and an increased level of control.
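The two-stage flow (GroundingDINO turns the detection prompt into bounding boxes, SAM turns those boxes into pixel masks) can be illustrated with a toy sketch. Here `detect_boxes` is a hypothetical stub standing in for the text-grounded detector, and rectangular masks stand in for SAM's fine segmentation:

```python
import numpy as np

def detect_boxes(detection_prompt: str) -> list[tuple[int, int, int, int]]:
    # Stub standing in for GroundingDINO: in the real pipeline this returns
    # bounding boxes for image regions matching the detection prompt.
    demo = {"face": [(10, 10, 30, 30)], "hand": [(40, 5, 55, 20), (40, 40, 55, 55)]}
    return demo.get(detection_prompt, [])

def boxes_to_mask(boxes, height: int, width: int) -> np.ndarray:
    # Stand-in for SAM: SAM would produce tight per-object masks from the
    # boxes; here we simply union rectangular regions into one binary mask.
    mask = np.zeros((height, width), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = True
    return mask

mask = boxes_to_mask(detect_boxes("hand"), 64, 64)
print(mask.sum())  # number of masked pixels handed to inpainting
```
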
Currently supported models:
Tech Debt / Code Improvements
While implementing the enhance feature, multiple methods were introduced to make the code reusable and to allow for iteration.
The whole async_worker.py has been restructured and is now much clearer to read and easier to use.
Debugging
Please find debugging options in Developer Debug Mode > Inpaint: