
Differential Diffusion: Giving Each Pixel Its Strength #2407

Open
exx8 opened this issue Mar 2, 2024 · 3 comments
Labels
enhancement New feature or request

Comments


exx8 commented Mar 2, 2024

Hello,
I would like to suggest implementing my paper: Differential Diffusion: Giving Each Pixel Its Strength.

Is your feature request related to a problem? Please describe.
The paper allows a user to edit a picture using a change map that describes how much each region should change.
The editing process is typically guided by textual instructions, although it can also be applied without guidance.
We support both continuous and discrete editing.
Our framework is training- and fine-tuning-free, and incurs a negligible inference-time penalty.
Our implementation is diffusers-based.
We have already tested it on 4 different diffusion models (Kandinsky, DeepFloyd IF, SD, SD XL).
We are confident that the framework can also be ported to other diffusion models, such as SD Turbo, Stable Cascade, and amused.
I noticed that you usually stick to the white==change convention, which is the opposite of the convention we used in the paper.
The paper can be thought of as a generalization of some of the existing techniques:
A black map is just regular txt2img ("0"),
a map of a single non-black color can be thought of as img2img,
and a map of two colors, one of which is white, can be thought of as inpainting.
And the rest? It's completely new!
In the paper, we suggest some further applications such as soft inpainting and strength visualization.
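To make the generalization above concrete, here is a minimal NumPy sketch of the per-pixel masking idea, written under the white==change convention (strength 1 = fully regenerate, strength 0 = keep). It is an illustrative assumption of how the per-step blending works, not the paper's diffusers implementation; the function and variable names (`differential_mask_step`, `frac_remaining`, etc.) are hypothetical.

```python
import numpy as np

def differential_mask_step(x_edit, x0_noised, change_map, frac_remaining):
    """One denoising step of the per-pixel strength idea (illustrative).

    x_edit: the latent being edited by the sampler at this step.
    x0_noised: the original image, noised to the current noise level.
    change_map: per-pixel strength in [0, 1] (white==change convention).
    frac_remaining: fraction of the denoising schedule still remaining.

    A pixel keeps the edited latent only once its strength is at least
    the fraction of denoising remaining; until then it is reset to the
    noised original, so low-strength regions are edited only briefly.
    """
    keep_edit = change_map >= frac_remaining
    return np.where(keep_edit, x_edit, x0_noised)

# Toy 1x4 "latent" with strengths 0, 0.3, 0.7, 1.0, halfway through denoising.
x_edit = np.array([10.0, 10.0, 10.0, 10.0])
x0_noised = np.array([0.0, 0.0, 0.0, 0.0])
strengths = np.array([0.0, 0.3, 0.7, 1.0])
out = differential_mask_step(x_edit, x0_noised, strengths, frac_remaining=0.5)
# Pixels with strength >= 0.5 take the edited value; the rest stay original.
```

Under this sketch, an all-white map always keeps the edited latent (txt2img-like), an all-black map always keeps the original, and a binary map reduces to hard inpainting, matching the special cases listed above.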

Describe the idea you'd like
I believe that a user should supply an image and a change map, and the editor should output the result according to the algorithm.
Site:
https://differential-diffusion.github.io/
Paper:
https://differential-diffusion.github.io/paper.pdf
Repo:
https://github.com/exx8/differential-diffusion
It might also address: #1788

It has already been implemented by the amazing @vladmandic at vladmandic/automatic@0239435
and the incredible @shiimizu at comfyanonymous/ComfyUI#2876.

Thanks

@mashb1t mashb1t added the enhancement New feature or request label Mar 2, 2024

mashb1t commented Jun 16, 2024

=> included in #3084


IPv6 commented Jun 18, 2024

It would be cool to have this as a mode in inpainting: regenerate the area outside the mask with the common prompt, and the area inside the mask with an inpaint-specific prompt. With controllable denoising strength for the outside and inside parts, any kind of artistic-driven mix would be possible.

3 participants