What happened?
Updated from d11c9d7 to 3cdae09 to get v-pred on 1.5 working. After updating, when I generate an image, the previews show the "missing image" icon in the browser. This did not occur prior to updating.
The dev console shows "Failed to load resource: the server responded with a status of 403 (Forbidden)". Following the error, this JSON is returned: {"detail":"File not allowed: P:/Computing/AI/Stable Diffusion/outputs/txt2img-images/2024-02-12/00028-1799655419.png."}. Entering the direct path into the URL bar returns the correct image.
The images are being generated and show up correctly in File Explorer; they only fail to appear in the interface.
I have narrowed it down to the folder junction. Somewhere between d11c9d7 and 3cdae09, something was added (possibly for security) that prevents the web UI from accessing the folder junction. My output folder for the web UI is a folder junction to another folder (on the same drive) where I keep images from all the different interfaces.
When I change the output folder to a path under the same root as the web UI, images show up correctly. Changing back to the folder junction breaks it again.
Oddly, it can still access other folder junctions, such as checkpoint and lora.
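The symptoms are consistent with a serve-path allowlist that resolves junctions before comparing paths. A minimal, hypothetical sketch (none of these names come from Forge's code; a POSIX symlink stands in for the Windows junction):

```python
import os
import tempfile

# Hypothetical allowlist check that resolves links on the requested file but
# not on the allowed root. A junction/symlink whose target lies outside the
# root then fails the check even though the configured output path is inside.

def is_allowed(requested_path: str, allowed_root: str) -> bool:
    real = os.path.realpath(requested_path)   # follows junctions/symlinks
    root = os.path.abspath(allowed_root)      # the root itself is NOT resolved
    return real.startswith(root + os.sep)

# Demo: a symlink stands in for a Windows folder junction.
tmp = os.path.realpath(tempfile.mkdtemp())
webui = os.path.join(tmp, "webui")
elsewhere = os.path.join(tmp, "elsewhere")    # shared image store, same drive
os.makedirs(webui)
os.makedirs(elsewhere)
os.symlink(elsewhere, os.path.join(webui, "outputs"))

image = os.path.join(webui, "outputs", "00028-1799655419.png")
open(image, "wb").close()
direct = os.path.join(webui, "direct.png")
open(direct, "wb").close()

print(is_allowed(direct, webui))  # True  -> files under the root are served
print(is_allowed(image, webui))   # False -> the junctioned file gets a 403
```

This would also explain why entering the path directly works while the preview 403s: the browser fetch goes through the server's file-allowlist endpoint, which sees the resolved path outside the root.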
Steps to reproduce the problem
Pull latest.
Create a folder junction linking the output folder in web-ui to another folder on the same drive but not within the web-ui directory.
Generate an image.
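For illustration, the junction in step 2 would be created roughly like this (the Windows paths are inferred from the report, not stated in it; the runnable part uses a POSIX symlink as the closest stand-in for a junction):

```shell
# On Windows (cmd.exe), a folder junction is created with, e.g.:
#   mklink /J "P:\Computing\AI\Stable Diffusion\Web-UI\1.6.0\outputs" ^
#             "P:\Computing\AI\Stable Diffusion\outputs"
# POSIX stand-in for the same layout:
demo="$(mktemp -d)"
mkdir -p "$demo/real-outputs"
ln -s "$demo/real-outputs" "$demo/outputs"
# realpath resolves through the link, as the server's check appears to:
realpath "$demo/outputs"
```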
What should have happened?
The preview image should have shown up, as it did prior to updating.
Console logs
venv "P:\Computing\AI\Stable Diffusion\Web-UI\1.6.0\venv\Scripts\Python.exe"
Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Version: f0.0.12-latest-150-g3cdae096
Commit hash: 3cdae09639b9c6fe2a407ac8ae94d153df18aa8b
Launching Web UI with arguments: --xformers --no-gradio-queue
Total VRAM 8192 MB, total RAM 130918 MB
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 : native
VAE dtype: torch.float32
P:\Computing\AI\Stable Diffusion\Web-UI\1.6.0\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Using xformers cross attention
==============================================================================
You are running torch 2.0.1+cu118.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
=================================================================================
You are running xformers 0.0.20.
The program is tested to work with xformers 0.0.23.post1.
To reinstall the desired version, run with commandline flag --reinstall-xformers.
Use --skip-version-check commandline argument to disable this check.
=================================================================================
*** Error loading script: detect_extension.py
Traceback (most recent call last):
File "P:\Computing\AI\Stable Diffusion\Web-UI\1.6.0\modules\scripts.py", line 541, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "P:\Computing\AI\Stable Diffusion\Web-UI\1.6.0\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "P:\Computing\AI\Stable Diffusion\Web-UI\1.6.0\scripts\detect_extension.py", line 41, in <module>
    if not os.path.exists(canvasZoomPath) and gradio_version is not None:
NameError: name 'gradio_version' is not defined
---
Loading weights [c55486da6d] from P:\Computing\AI\Stable Diffusion\Web-UI\1.6.0\models\Stable-diffusion\EasyFluff\EasyFluffV11.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 16.1s (prepare environment: 2.5s, import torch: 7.7s, initialize shared: 0.1s, other imports: 0.7s, list SD models: 1.8s, load scripts: 2.4s, create ui: 0.4s, gradio launch: 0.3s).
model_type EPS
UNet ADM Dimension 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Loading VAE weights specified in settings: P:\Computing\AI\Stable Diffusion\Web-UI\1.6.0\models\VAE\furception_vae_1-0.safetensors
To load target model SD1ClipModel
Begin to load 1 model
Model loaded in 2.4s (load weights from disk: 0.6s, forge load real models: 1.1s, load VAE: 0.1s, load textual inversion embeddings: 0.3s, calculate empty prompt: 0.2s).
Token merging is under construction now and the setting will not take effect.
To load target model BaseModel
Begin to load 1 model
Moving model(s) has taken 0.49 seconds
100%|████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 9.02it/s]
To load target model AutoencoderKL
████████████████████████████████████▍ | 19/20 [00:01<00:00, 10.75it/s]
Begin to load 1 model
Total progress: 100%|████████████████████████████████████████████████████| 20/20 [00:02<00:00, 8.44it/s]
Total progress: 100%|████████████████████████████████████████████████████| 20/20 [00:02<00:00, 10.75it/s]
Additional information
No response
What browsers do you use to access the UI?
Brave
Sysinfo
sysinfo-2024-02-12-12-15.json
Are you using a junction (created with "mklink") to specify the image save path? I encountered the same problem when using a junction. After I deleted the junctions and specified the image save directory in the settings instead, the problem was solved. It seems the current Forge version conflicts with junctions on Windows.
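Beyond the reply's workaround of dropping the junction, the check itself could be made junction-tolerant by resolving the allowed root as well before comparing. This is only a sketch of that idea, not the project's actual fix:

```python
import os
import tempfile

def is_allowed_fixed(requested_path: str, allowed_root: str) -> bool:
    # Resolve BOTH sides, so an allowed root that is itself a junction/symlink
    # compares against the same resolved prefix as the files inside it.
    real = os.path.realpath(requested_path)
    root = os.path.realpath(allowed_root)
    return os.path.commonpath([real, root]) == root

# Demo with a POSIX symlink standing in for a Windows junction:
tmp = os.path.realpath(tempfile.mkdtemp())
store = os.path.join(tmp, "store")            # real image store
os.makedirs(store)
outputs = os.path.join(tmp, "outputs")        # the junctioned output dir
os.symlink(store, outputs)
image = os.path.join(outputs, "00001.png")
open(image, "wb").close()
evil = os.path.join(tmp, "secret.txt")        # a file outside the output dir
open(evil, "wb").close()

print(is_allowed_fixed(image, outputs))  # True:  junctioned outputs are served
print(is_allowed_fixed(evil, outputs))   # False: outside files stay blocked
```

The negative case shows the security property is kept: only the comparison baseline changes, not which directories are exposed.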