Replies: 31 comments 55 replies
-
Nice change, however the modified version is way too washed out for my liking.
-
Wow massive quality improvement!
-
I was trying to get a better look at this myself and did a blow-up swapping comparison. Maybe not the best naming, but B is for Before Change and A is for After Change, using the OP's images and applying 20% contrast to the after image as per #8457 (reply in thread).
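The "20% contrast" step can be reproduced without an image editor. A minimal sketch in plain Python, assuming 8-bit channel values and a mid-gray pivot of 128 (both assumptions on my part, not details from the linked thread):

```python
def adjust_contrast(pixels, factor, pivot=128):
    """Scale each channel value away from `pivot` by `factor`,
    clamping to the valid 0-255 range."""
    return [max(0, min(255, round(pivot + (p - pivot) * factor)))
            for p in pixels]

# +20% contrast, as used on the "after" image in the comparison above
print(adjust_contrast([100, 128, 200], 1.2))  # [94, 128, 214]
```

Image editors differ in where they put the pivot, so results will not match Photoshop et al. exactly.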
-
I like the result, but I don't like the hard-coded 0.15 in the code; it could be a matter of preference. My idea is to produce a series of images from 0.05, 0.10, up to, say, 0.50, to get a better sense of what this is about, but I haven't had time recently, so if somebody likes this idea I would be very happy to see the results.
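Enumerating that series is mostly bookkeeping. A hypothetical sketch (the filenames and the edit-per-run workflow are my assumptions, not part of the suggestion above; integer arithmetic avoids float accumulation error when stepping by 0.05):

```python
# Candidate values for the hard-coded 0.15: 0.05 through 0.50 in 0.05 steps.
values = [i / 100 for i in range(5, 55, 5)]

for v in values:
    # Workflow sketch: edit the constant in sampling.py, render with a
    # fixed seed/prompt, and save under an illustrative name.
    print(f"set constant to {v:.2f}, render, save as sweep_{v:.2f}.png")

print(len(values))  # 10
```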
-
Might be something worth bringing up in the k-diffusion repo?
-
@hallatore When I did some quick trials, there were always more 'macro' changes between the images that made them hard to compare, rather than only the fuzziness/contrast differences in your examples. By macro I mean things like the foliage pattern in the background, slight shifts in stance or hair pattern, and other minor features. Edit: my test was with xformers disabled. I didn't see as obvious a difference in sharpness and contrast as in yours, but the macro changes I ended up with made it hard to really tell. I'm wondering whether the model/VAE I used, or some other setting, affects this. Can you give the specifics of a prompt, model, etc., so we can compare baselines? (I'll try some more when I have time this evening regardless.)
-
I made some quick tests. It seems to also have some impact on other samplers. Improved quality (wow! It feels like boosting the resolution of 1.5 models to the resolution of 2.1 :)):
Decreased quality(?):
Has someone else also tested this change with other samplers?
-
I think I might have found a/the bug. Replace this in stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py (outdated) ...
-
We need to get this fix into the official repo :)
-
The change described is a shortening of the CFG Scale Scheduler option already included in the DynThresh extension - https://github.com/mcmonkeyprojects/sd-dynamic-thresholding (EDIT: At the time this was written, this was correct - the opening post just tacked on a scheduler. However the opening post has since been edited to use an entirely different technique)
-
Did a test with low steps. The difference is clearer at lower steps. I feel my version matches the 50-step version better at lower step counts.
-
Here is a version that works with DPM++ 2M. At least I seem to get pretty good results with it, with "Always discard next-to-last sigma" turned OFF. At 10 steps: https://imgsli.com/MTYxMjc5
-
I think @mcmonkey4eva is probably correct. This new enhancement washes out the image when used with img2img, which the vanilla sampler does not do.
-
I didn't find the mentioned paths; has something changed? I started using Stable Diffusion today, so I installed the current version.
-
This should be in the webui by default, so I don't have to put it back in every time I update.
-
I've noticed that most devs, once they release the code after publishing a paper, just move on to the next thing and don't really want to update the code; they're probably working on something better. The last response on any of the k-diffusion issues was in December 2022.
-
Any way to get this into the code as an alternative version, @AUTOMATIC1111?
-
We need to either fork the k-diffusion repo and become its new maintainers, or take the main bits from the k-diffusion repo and maintain them inside this repo. But we may break macOS compatibility, as it uses a different k-diffusion. EDIT: On the other hand, if we take control, we could easily incorporate the macOS fixes into the new code, which would make it even cleaner.
-
OK, I've gone and forked the k-diffusion repo and incorporated @hallatore's change plus the macOS MPS workaround from @brkirch. To use it, set these in webui-user.bat:
set K_DIFFUSION_REPO=https://github.com/wywywywy/k-diffusion.git
set K_DIFFUSION_COMMIT_HASH=ca06f522e6d3f202c25c3565c53afbd9c40ac53d
or in webui-user.sh:
export K_DIFFUSION_REPO="https://github.com/wywywywy/k-diffusion.git"
export K_DIFFUSION_COMMIT_HASH="e3f853a8c9f70052aa1c4bb8cd0e4ec3af7ffaff"
Could someone with a Mac please give this a good test?
-
What does this change do at a conceptual level?
-
If it's ever considered for merge. Yes, I'm salty that auto is MIA. Checking out a new fork that's supposed to be pretty active; it seems to be getting a lot of attention.
https://www.reddit.com/r/StableDiffusion/comments/12grgwh/automatic1111_getting_rusty_future_of_this_repo_i/
On Thu, Mar 16, 2023, 03:28 Andre Saddler wrote: in honesty, if you can't find the paths, I wouldn't suggest trying this out until it's considered for merge
-
I've been using this improvement for over a month... it's great. We need this officially somehow!
-
@Metachs: "It looks to me like all you are seeing is faster convergence due to loss of detail": crowsonkb/k-diffusion#56 (comment)
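That critique can be illustrated with a toy example (pure Python, nothing to do with the actual sampler, and entirely my own construction): smoothing a noisy signal makes neighbouring samples agree more, so a difference-based view of "convergence" improves even though high-frequency detail is simply being discarded.

```python
import random

random.seed(0)
raw = [random.gauss(0, 1) for _ in range(256)]

def smooth(v, k=5):
    # Trailing moving average over up to k samples: discards detail.
    return [sum(v[max(0, i - k + 1):i + 1]) / (i - max(0, i - k + 1) + 1)
            for i in range(len(v))]

def mean_step(v):
    # Average change between neighbouring samples.
    return sum(abs(a - b) for a, b in zip(v, v[1:])) / (len(v) - 1)

# The smoothed signal "converges" (changes less step to step) purely
# because detail was removed, not because it became more accurate.
print(mean_step(smooth(raw)) < mean_step(raw))  # True
```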
-
just stumbled across this, and am very happy to have found it!
-
The results mostly blur out a lot of detail... please fix it? Or is it unfixable?
-
I think there are similarities between this new sampling (DPM++ 2M Karras) and kohya_ss's Network Alpha training parameter, in terms of turning the noise into blur and drastically reducing it. But using the flower photos from nemilya's comment above, I feel there is a blurred-out beauty; yes, only a blurred beauty, with no sharpening involved.
-
I see that the script has been updated; is there any update on this?
-
Where do you get the code for DPM++ 2M Karras? I'm new to this and don't understand how to find the code.
-
Did anyone submit a PR for this? It seems quite trivial to do.
-
I'm playing around with the sample_dpmpp_2m function in the k-diffusion repo.
I got a quality increase on my images with this trick/bug fix(?).
I need help testing whether this is just a false positive that happens to work on my machine, or whether it works in general.
Please test it out!
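One hedged way to test beyond eyeballing: a crude no-reference sharpness score, the mean squared second difference along a row of grayscale values (higher means more high-frequency content). The sample rows below are made up; the point is only that a genuinely sharper render should score higher than a washed-out one.

```python
def sharpness(row):
    """Mean squared second difference of a 1-D list of gray values.
    Higher = more high-frequency content."""
    n = max(1, len(row) - 2)
    return sum((row[i - 1] - 2 * row[i] + row[i + 1]) ** 2
               for i in range(1, len(row) - 1)) / n

crisp = [0, 255, 0, 255, 0, 255]         # alternating: lots of detail
washed = [120, 130, 125, 128, 124, 127]  # small variations only
print(sharpness(crisp) > sharpness(washed))  # True
```

In practice you would average this over many rows of both renders at the same seed; a single row is noisy.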
How to try it out:
Open stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py
Add the following code to the bottom: