Returned non-zero exit status 1 #1254

Closed

sgtsixpack opened this issue Jun 3, 2023 · 6 comments
Labels: new (Just added, you should probably sort this.)

Comments

@sgtsixpack

⚠️ If you do not follow the template, your issue may be closed without a response. ⚠️

Kindly read and fill this form in its entirety.

0. Initial troubleshooting

Please check each of these before opening an issue. If you've checked them, delete this section of your bug report. Have you:

  • Updated the Stable-Diffusion-WebUI to the latest version?
  • Updated Dreambooth to the latest revision?
  • Completely restarted the stable-diffusion-webUI, not just reloaded the UI?
  • Read the Readme?

1. Please find the following lines in the console and paste them below.

#######################################################################################################
Initializing Dreambooth
If submitting an issue on github, please provide the below text for debugging purposes:

Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Dreambooth revision: bd3fecc3d27d777a4e8f3206a0b16e852877dbad
SD-WebUI revision: 

[+] torch version 2.0.0+cu118 installed.
[+] torchvision version 0.15.1+cu118 installed.
[+] xformers version 0.0.17+b6be33a.d20230315 installed.
[+] accelerate version 0.17.1 installed.
[+] bitsandbytes version 0.35.4 installed.
[+] diffusers version 0.14.0 installed.
[+] transformers version 4.27.1 installed.

#######################################################################################################

2. Describe the bug

(A clear and concise description of what the bug is)
I hit Train after entering my directories and model name, and I get a "Returned non-zero exit status 1".

Screenshots/Config
If the issue is specific to an error while training, please provide a screenshot of training parameters or the
db_config.json file from /models/dreambooth/MODELNAME/db_config.json

Training was set up under Dreambooth LoRA; all training parameters are at their defaults.

3. Provide logs

If a crash has occurred, please provide the entire stack trace from the log, including the last few log messages before the crash occurred.

16:35:59-412472 INFO     Using CPU-only Torch
16:36:00-577514 INFO     Torch 2.0.1+cpu
16:36:00-579516 WARNING  Torch reports CUDA not available
16:36:00-580517 INFO     Validating that requirements are satisfied.
16:36:02-131736 INFO     All requirements satisfied.
16:36:04-431662 INFO     headless: False
16:36:04-437667 INFO     Load CSS...
Running on local URL:  http://127.0.0.1:7960

To create a public link, set `share=True` in `launch()`.
16:36:14-085577 INFO     Start training LoRA Standard ...
16:36:14-087579 INFO     Folder 100_LoraJanaDefi1st: 110 images found
16:36:14-088580 INFO     Folder 100_LoraJanaDefi1st: 11000 steps
16:36:14-088580 INFO     Total steps: 11000
16:36:14-089581 INFO     Train batch size: 1
16:36:14-090581 INFO     Gradient accumulation steps: 1.0
16:36:14-090581 INFO     Epoch: 1
16:36:14-091583 INFO     Regulatization factor: 1
16:36:14-092583 INFO     max_train_steps (11000 / 1 / 1.0 * 1 * 1) = 11000
16:36:14-093584 INFO     stop_text_encoder_training = 0
16:36:14-094586 INFO     lr_warmup_steps = 1100
16:36:14-094586 INFO     accelerate launch --num_cpu_threads_per_process=2 "train_network.py" --enable_bucket
                         --pretrained_model_name_or_path="J:/AI training/Stable
                         diffusion/stable-diffusion-webui-directml/models/Stable-diffusion/hardblend_.safetensors"
                         --train_data_dir="J:/AI training/Stable diffusion/stable-diffusion-webui-directml/SgtSixpack
                         Training/LoraJanaDefi1st/image1st" --resolution=512,512 --output_dir="J:/AI training/Stable
                         diffusion/stable-diffusion-webui-directml/SgtSixpack Training/LoraJanaDefi1st/model1st"
                         --logging_dir="J:/AI training/Stable diffusion/stable-diffusion-webui-directml/SgtSixpack
                         Training/LoraJanaDefi1st/log1st" --network_alpha="1" --training_comment="hardblend_.safetensors
                         training" --save_model_as=safetensors --network_module=networks.lora --text_encoder_lr=5e-05
                         --unet_lr=0.0001 --network_dim=8 --output_name="JanaDefiNH_v1" --lr_scheduler_num_cycles="1"
                         --learning_rate="0.0001" --lr_scheduler="cosine" --lr_warmup_steps="1100"
                         --train_batch_size="1" --max_train_steps="11000" --save_every_n_epochs="1"
                         --mixed_precision="fp16" --save_precision="fp16" --cache_latents --optimizer_type="AdamW8bit"
                         --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale
2023-06-03 16:36:15.609170: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2023-06-03 16:36:15.609250: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-06-03 16:36:19.326460: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2023-06-03 16:36:19.326554: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.0.0+cu118 with CUDA 1108 (you have 2.0.1+cpu)
    Python  3.10.11 (you have 3.10.6)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: 'Could not find module 'J:\AI training\Stable diffusion\stable-diffusion-webui-directml\kohya_ss\venv\Lib\site-packages\torchvision\image.pyd' (or one of its dependencies). Try using the full path with constructor syntax.'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
prepare tokenizer
Using DreamBooth method.
prepare images.
found directory J:\AI training\Stable diffusion\stable-diffusion-webui-directml\SgtSixpack Training\LoraJanaDefi1st\image1st\100_LoraJanaDefi1st contains 110 image files
No caption file found for 110 images. Training will continue without captions for these images. If class token exists, it will be used.
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\SgtSixpack Training\LoraJanaDefi1st\image1st\100_LoraJanaDefi1st\JanaDefi1st (1).png
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\SgtSixpack Training\LoraJanaDefi1st\image1st\100_LoraJanaDefi1st\JanaDefi1st (10).png
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\SgtSixpack Training\LoraJanaDefi1st\image1st\100_LoraJanaDefi1st\JanaDefi1st (100).png
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\SgtSixpack Training\LoraJanaDefi1st\image1st\100_LoraJanaDefi1st\JanaDefi1st (101).png
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\SgtSixpack Training\LoraJanaDefi1st\image1st\100_LoraJanaDefi1st\JanaDefi1st (102).png
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\SgtSixpack Training\LoraJanaDefi1st\image1st\100_LoraJanaDefi1st\JanaDefi1st (103).png... and 105 more
11000 train images with repeating.
0 reg images.
no regularization images
[Dataset 0]
  batch_size: 1
  resolution: (512, 512)
  enable_bucket: True
  min_bucket_reso: 256
  max_bucket_reso: 1024
  bucket_reso_steps: 64
  bucket_no_upscale: True

  [Subset 0 of Dataset 0]
    image_dir: "J:\AI training\Stable diffusion\stable-diffusion-webui-directml\SgtSixpack Training\LoraJanaDefi1st\image1st\100_LoraJanaDefi1st"
    image_count: 110
    num_repeats: 100
    shuffle_caption: False
    keep_tokens: 0
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1,
    token_warmup_step: 0,
    is_reg: False
    class_tokens: LoraJanaDefi1st
    caption_extension: .caption


[Dataset 0]
loading image sizes.
100%|██████████████████████████████████████████████████████████████████████████████| 110/110 [00:00<00:00, 7850.23it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
number of images per bucket (including repeats)
bucket 0: resolution (320, 512), count: 100
bucket 1: resolution (512, 512), count: 10900
mean ar error (without repeats): 0.0004794034090909091
preparing accelerator
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\accelerate\accelerator.py:249: FutureWarning: `logging_dir` is deprecated and will be removed in version 0.18.0 of 🤗 Accelerate. Use `project_dir` instead.
  warnings.warn(
Using accelerator 0.15.0 or above.
loading model for process 0/1
load StableDiffusion checkpoint: J:/AI training/Stable diffusion/stable-diffusion-webui-directml/models/Stable-diffusion/hardblend_.safetensors
J:\AI training\Stable diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\safetensors\torch.py:98: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  with safe_open(filename, framework="pt", device=device) as f:
loading u-net: <All keys matched successfully>
loading vae: <All keys matched successfully>
loading text encoder: <All keys matched successfully>
CrossAttention.forward has been replaced to enable xformers.
import network module: networks.lora
[Dataset 0]
caching latents.
  0%|                                                                                          | 0/110 [00:00<?, ?it/s]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ J:\AI training\Stable diffusion\stable-diffusion-webui-directml\kohya_ss\train_network.py:814 in │
│ <module>                                                                                         │
│                                                                                                  │
│   811 │   args = parser.parse_args()                                                             │
│   812 │   args = train_util.read_config_from_file(args, parser)                                  │
│   813 │                                                                                          │
│ ❱ 814 │   train(args)                                                                            │
│   815                                                                                            │
│                                                                                                  │
│ J:\AI training\Stable diffusion\stable-diffusion-webui-directml\kohya_ss\train_network.py:180 in │
│ train                                                                                            │
│                                                                                                  │
│   177 │   │   vae.requires_grad_(False)                                                          │
│   178 │   │   vae.eval()                                                                         │
│   179 │   │   with torch.no_grad():                                                              │
│ ❱ 180 │   │   │   train_dataset_group.cache_latents(vae, args.vae_batch_size, args.cache_laten   │
│   181 │   │   vae.to("cpu")                                                                      │
│   182 │   │   if torch.cuda.is_available():                                                      │
│   183 │   │   │   torch.cuda.empty_cache()                                                       │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\library\train_util.py:1422 in cache_latents   │
│                                                                                                  │
│   1419 │   def cache_latents(self, vae, vae_batch_size=1, cache_to_disk=False, is_main_process=  │
│   1420 │   │   for i, dataset in enumerate(self.datasets):                                       │
│   1421 │   │   │   print(f"[Dataset {i}]")                                                       │
│ ❱ 1422 │   │   │   dataset.cache_latents(vae, vae_batch_size, cache_to_disk, is_main_process)    │
│   1423 │                                                                                         │
│   1424 │   def is_latent_cacheable(self) -> bool:                                                │
│   1425 │   │   return all([dataset.is_latent_cacheable() for dataset in self.datasets])          │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\library\train_util.py:814 in cache_latents    │
│                                                                                                  │
│    811 │   │   │   img_tensors = torch.stack(images, dim=0)                                      │
│    812 │   │   │   img_tensors = img_tensors.to(device=vae.device, dtype=vae.dtype)              │
│    813 │   │   │                                                                                 │
│ ❱  814 │   │   │   latents = vae.encode(img_tensors).latent_dist.sample().to("cpu")              │
│    815 │   │   │                                                                                 │
│    816 │   │   │   for info, latent in zip(batch, latents):                                      │
│    817 │   │   │   │   if cache_to_disk:                                                         │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\diffusers\models\vae.p │
│ y:566 in encode                                                                                  │
│                                                                                                  │
│   563 │   │   self.use_slicing = False                                                           │
│   564 │                                                                                          │
│   565 │   def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOut   │
│ ❱ 566 │   │   h = self.encoder(x)                                                                │
│   567 │   │   moments = self.quant_conv(h)                                                       │
│   568 │   │   posterior = DiagonalGaussianDistribution(moments)                                  │
│   569                                                                                            │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\torch\nn\modules\modul │
│ e.py:1501 in _call_impl                                                                          │
│                                                                                                  │
│   1498 │   │   if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks   │
│   1499 │   │   │   │   or _global_backward_pre_hooks or _global_backward_hooks                   │
│   1500 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                   │
│ ❱ 1501 │   │   │   return forward_call(*args, **kwargs)                                          │
│   1502 │   │   # Do not call functions when jit is used                                          │
│   1503 │   │   full_backward_hooks, non_full_backward_hooks = [], []                             │
│   1504 │   │   backward_pre_hooks = []                                                           │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\diffusers\models\vae.p │
│ y:130 in forward                                                                                 │
│                                                                                                  │
│   127 │                                                                                          │
│   128 │   def forward(self, x):                                                                  │
│   129 │   │   sample = x                                                                         │
│ ❱ 130 │   │   sample = self.conv_in(sample)                                                      │
│   131 │   │                                                                                      │
│   132 │   │   # down                                                                             │
│   133 │   │   for down_block in self.down_blocks:                                                │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\torch\nn\modules\modul │
│ e.py:1501 in _call_impl                                                                          │
│                                                                                                  │
│   1498 │   │   if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks   │
│   1499 │   │   │   │   or _global_backward_pre_hooks or _global_backward_hooks                   │
│   1500 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                   │
│ ❱ 1501 │   │   │   return forward_call(*args, **kwargs)                                          │
│   1502 │   │   # Do not call functions when jit is used                                          │
│   1503 │   │   full_backward_hooks, non_full_backward_hooks = [], []                             │
│   1504 │   │   backward_pre_hooks = []                                                           │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\torch\nn\modules\conv. │
│ py:463 in forward                                                                                │
│                                                                                                  │
│    460 │   │   │   │   │   │   self.padding, self.dilation, self.groups)                         │
│    461 │                                                                                         │
│    462 │   def forward(self, input: Tensor) -> Tensor:                                           │
│ ❱  463 │   │   return self._conv_forward(input, self.weight, self.bias)                          │
│    464                                                                                           │
│    465 class Conv3d(_ConvNd):                                                                    │
│    466 │   __doc__ = r"""Applies a 3D convolution over an input signal composed of several inpu  │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\torch\nn\modules\conv. │
│ py:459 in _conv_forward                                                                          │
│                                                                                                  │
│    456 │   │   │   return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=sel  │
│    457 │   │   │   │   │   │   │   weight, bias, self.stride,                                    │
│    458 │   │   │   │   │   │   │   _pair(0), self.dilation, self.groups)                         │
│ ❱  459 │   │   return F.conv2d(input, weight, bias, self.stride,                                 │
│    460 │   │   │   │   │   │   self.padding, self.dilation, self.groups)                         │
│    461 │                                                                                         │
│    462 │   def forward(self, input: Tensor) -> Tensor:                                           │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ J:\AI training\Python310\lib\runpy.py:196 in _run_module_as_main                                 │
│                                                                                                  │
│   193 │   main_globals = sys.modules["__main__"].__dict__                                        │
│   194 │   if alter_argv:                                                                         │
│   195 │   │   sys.argv[0] = mod_spec.origin                                                      │
│ ❱ 196 │   return _run_code(code, main_globals, None,                                             │
│   197 │   │   │   │   │    "__main__", mod_spec)                                                 │
│   198                                                                                            │
│   199 def run_module(mod_name, init_globals=None,                                                │
│                                                                                                  │
│ J:\AI training\Python310\lib\runpy.py:86 in _run_code                                            │
│                                                                                                  │
│    83 │   │   │   │   │      __loader__ = loader,                                                │
│    84 │   │   │   │   │      __package__ = pkg_name,                                             │
│    85 │   │   │   │   │      __spec__ = mod_spec)                                                │
│ ❱  86 │   exec(code, run_globals)                                                                │
│    87 │   return run_globals                                                                     │
│    88                                                                                            │
│    89 def _run_module_code(code, init_globals=None,                                              │
│                                                                                                  │
│ in <module>:7                                                                                    │
│                                                                                                  │
│   4 from accelerate.commands.accelerate_cli import main                                          │
│   5 if __name__ == '__main__':                                                                   │
│   6 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])                         │
│ ❱ 7 │   sys.exit(main())                                                                         │
│   8                                                                                              │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\accelerate\commands\ac │
│ celerate_cli.py:45 in main                                                                       │
│                                                                                                  │
│   42 │   │   exit(1)                                                                             │
│   43 │                                                                                           │
│   44 │   # Run                                                                                   │
│ ❱ 45 │   args.func(args)                                                                         │
│   46                                                                                             │
│   47                                                                                             │
│   48 if __name__ == "__main__":                                                                  │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\accelerate\commands\la │
│ unch.py:923 in launch_command                                                                    │
│                                                                                                  │
│   920 │   elif defaults is not None and defaults.compute_environment == ComputeEnvironment.AMA   │
│   921 │   │   sagemaker_launcher(defaults, args)                                                 │
│   922 │   else:                                                                                  │
│ ❱ 923 │   │   simple_launcher(args)                                                              │
│   924                                                                                            │
│   925                                                                                            │
│   926 def main():                                                                                │
│                                                                                                  │
│ J:\AI training\Stable                                                                            │
│ diffusion\stable-diffusion-webui-directml\kohya_ss\venv\lib\site-packages\accelerate\commands\la │
│ unch.py:579 in simple_launcher                                                                   │
│                                                                                                  │
│   576 │   process.wait()                                                                         │
│   577 │   if process.returncode != 0:                                                            │
│   578 │   │   if not args.quiet:                                                                 │
│ ❱ 579 │   │   │   raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)    │
│   580 │   │   else:                                                                              │
│   581 │   │   │   sys.exit(1)                                                                    │
│   582                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
CalledProcessError: Command '['J:\\AI training\\Stable
diffusion\\stable-diffusion-webui-directml\\kohya_ss\\venv\\Scripts\\python.exe', 'train_network.py', '--enable_bucket',
'--pretrained_model_name_or_path=J:/AI training/Stable
diffusion/stable-diffusion-webui-directml/models/Stable-diffusion/hardblend_.safetensors', '--train_data_dir=J:/AI
training/Stable diffusion/stable-diffusion-webui-directml/SgtSixpack Training/LoraJanaDefi1st/image1st',
'--resolution=512,512', '--output_dir=J:/AI training/Stable diffusion/stable-diffusion-webui-directml/SgtSixpack
Training/LoraJanaDefi1st/model1st', '--logging_dir=J:/AI training/Stable
diffusion/stable-diffusion-webui-directml/SgtSixpack Training/LoraJanaDefi1st/log1st', '--network_alpha=1',
'--training_comment=hardblend_.safetensors training', '--save_model_as=safetensors', '--network_module=networks.lora',
'--text_encoder_lr=5e-05', '--unet_lr=0.0001', '--network_dim=8', '--output_name=JanaDefiNH_v1',
'--lr_scheduler_num_cycles=1', '--learning_rate=0.0001', '--lr_scheduler=cosine', '--lr_warmup_steps=1100',
'--train_batch_size=1', '--max_train_steps=11000', '--save_every_n_epochs=1', '--mixed_precision=fp16',
'--save_precision=fp16', '--cache_latents', '--optimizer_type=AdamW8bit', '--max_data_loader_n_workers=0',
'--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale']' returned non-zero exit status 1.
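
For context on the root error: the log above shows Torch 2.0.1+cpu ("Using CPU-only Torch"), and PyTorch 2.0.x has no fp16 ("Half") conv2d kernel on the CPU, so a --mixed_precision=fp16 run on a CPU-only build fails in exactly this way. A minimal sketch that reproduces the same RuntimeError, independent of this setup:

import torch

# fp16 input and weights on the CPU; torch 2.0.x has no half-precision
# conv2d kernel there, so the forward pass raises the same RuntimeError.
x = torch.randn(1, 3, 8, 8, dtype=torch.float16)
conv = torch.nn.Conv2d(3, 4, kernel_size=3).to(torch.float16)

try:
    conv(x)
except RuntimeError as e:
    print(e)  # "slow_conv2d_cpu" not implemented for 'Half'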

4. Environment

What OS?
Windows 10 22H2

If Windows - WSL or native?
I don't understand the question.

What GPU are you using?
AMD RX 6800 XT

@sgtsixpack added the "new" label Jun 3, 2023
@sgtsixpack (Author)

I just updated, and it said that I had torch 2.0.1 installed, which is incompatible; it needs torch 2.0.0.

I have no idea how to downgrade torch.

[Screenshots attached: 2023-06-04, 2023-06-04 (1)]

@ArrowM (Collaborator) commented Jun 4, 2023

Your PyTorch and xformers libraries are messed up. Reinstall both.

@sgtsixpack (Author)

Can you explain how, or point me to a guide? Thanks!

I originally installed with this tutorial (and it installed automatically):
https://www.youtube.com/watch?v=toSIPXfv5PQ

@ArrowM (Collaborator) commented Jun 4, 2023

I think the following should work:

Activate your venv by opening a terminal at {your a1111 project path}/venv/Scripts and run:

activate
pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install --force-reinstall xformers
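
To verify afterwards from the same activated venv, a quick check (a minimal sketch; the exact version strings will vary):

import torch

print("torch:", torch.__version__)         # expect a +cu118 build, not +cpu
print("cuda:", torch.cuda.is_available())  # may still be False on an AMD card

import xformers

print("xformers:", xformers.__version__)   # should import without the C++/CUDA-extension warning

If torch still reports a +cpu build here, the reinstall pulled the wrong wheel and should be repeated with the --index-url flag included.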

@ArrowM (Collaborator) commented Jun 4, 2023

Also, it looks like you're using Kohya, so please open issues on that repo and not this one.

@ArrowM closed this as not planned (won't fix, can't repro, duplicate, stale) Jun 4, 2023
@sgtsixpack (Author)

Hey, can you take a look here? I'm sure you know the answer:
bmaltais/kohya_ss#910

Thanks
