fix(docs): add conversion error to user guide (#404)
ssube committed Dec 22, 2023
1 parent 1f778ab commit 108c502
Showing 2 changed files with 32 additions and 9 deletions.
20 changes: 11 additions & 9 deletions docs/converted-readme.md
@@ -4,6 +4,9 @@ This is a copy of MODEL TITLE converted to the ONNX format for use with tools th
https://github.com/ssube/onnx-web. If you have questions about using this model, please see
https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#pre-converted-models.

FP16 WARNING: This model has been converted to FP16 format and will not run correctly on the CPU platform. If you are
using the CPU platform, please use the FP32 model instead.
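
If you are not sure whether a converted model is FP16 or FP32, one way to check is to inspect the weight tensors with the `onnx` Python package. This is a rough sketch, not an onnx-web feature, and the model path is a placeholder:

```python
# Sketch: report whether an ONNX model file contains FP16 weights.
# "unet/model.onnx" is a placeholder path; point it at any model file
# from the extracted archive.
import onnx
from onnx import TensorProto

model = onnx.load("unet/model.onnx")
weight_types = {init.data_type for init in model.graph.initializer}

if TensorProto.FLOAT16 in weight_types:
    print("FP16 weights found: use a GPU platform, or get the FP32 conversion for CPU.")
else:
    print("No FP16 weights found: this model should load on the CPU platform.")
```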

As a derivative of MODEL TITLE, the model files that came with this README are licensed under the terms of TODO. A copy
of the license was included in the archive. Please make sure to read and follow the terms before you use this model or
redistribute these files.
@@ -12,6 +15,13 @@ If you are the author of this model and have questions about ONNX models or woul
distribution or moved to another site, please contact ssube on https://github.com/ssube/onnx-web/issues or
https://discord.gg/7CdQmutGuw.

## Adding models

Extract the entire ZIP archive into the models folder of your onnx-web installation, then restart the server or click the
Restart Workers button in the web UI and refresh the page.
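
For example, the extraction step might look like the following sketch using Python's standard `zipfile` module; the archive name and install path are placeholders:

```python
# Sketch: extract a downloaded model archive into the onnx-web models folder.
# "model.zip" and "/opt/onnx-web/models" are placeholder paths; adjust them
# to match your download and installation.
import zipfile

with zipfile.ZipFile("model.zip") as archive:
    archive.extractall("/opt/onnx-web/models")
```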

Please see https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#adding-your-own-models for more details.

## Folder structure

- cnet
@@ -32,12 +42,4 @@ https://discord.gg/7CdQmutGuw.
- UNet model
- vae_decoder
- VAE decoder model
- vae_encoder
- VAE encoder model

## Adding models

Extract the entire ZIP archive into the models folder of your onnx-web installation and restart the server or click the
Restart Workers button in the web UI and then refresh the page.

Please see https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#adding-your-own-models for more details.
- vae_encoder
21 changes: 21 additions & 0 deletions docs/user-guide.md
@@ -126,6 +126,7 @@ Please see [the server admin guide](server-admin.md) for details on how to confi
- [The expanded size of the tensor must match the existing size](#the-expanded-size-of-the-tensor-must-match-the-existing-size)
- [Shape mismatch attempting to re-use buffer](#shape-mismatch-attempting-to-re-use-buffer)
- [Cannot read properties of undefined (reading 'default')](#cannot-read-properties-of-undefined-reading-default)
- [Missing key(s) in state\_dict](#missing-keys-in-state_dict)
- [Output Image Sizes](#output-image-sizes)

## Outline
@@ -1719,6 +1720,26 @@ Could not fetch parameters from the onnx-web API server at http://10.2.2.34:5000
Cannot read properties of undefined (reading 'default')
```

#### Missing key(s) in state_dict

This can happen when you try to convert a newer Stable Diffusion checkpoint with Torch model extraction enabled. The
code used for model extraction does not support some keys in recent models and will throw an error.

To avoid this error, make sure you have set the `ONNX_WEB_CONVERT_EXTRACT` environment variable to `FALSE`, which
disables Torch model extraction during conversion.

Example error:

```none
Traceback (most recent call last):
File "/opt/onnx-web/api/onnx_web/convert/diffusion/checkpoint.py", line 1570, in extract_checkpoint
vae.load_state_dict(converted_vae_checkpoint)
File "/home/ssube/miniconda3/envs/onnx-web-rocm-pytorch2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoencoderKL:
Missing key(s) in state_dict: "encoder.mid_block.attentions.0.to_q.weight", "encoder.mid_block.attentions.0.to_q.bias", "encoder.mid_block.attentions.0.to_k.weight", "encoder.mid_block.attentions.0.to_k.bias", "encoder.mid_block.attentions.0.to_v.weight", "encoder.mid_block.attentions.0.to_v.bias", "encoder.mid_block.attentions.0.to_out.0.weight", "encoder.mid_block.attentions.0.to_out.0.bias", "decoder.mid_block.attentions.0.to_q.weight", "decoder.mid_block.attentions.0.to_q.bias", "decoder.mid_block.attentions.0.to_k.weight", "decoder.mid_block.attentions.0.to_k.bias", "decoder.mid_block.attentions.0.to_v.weight", "decoder.mid_block.attentions.0.to_v.bias", "decoder.mid_block.attentions.0.to_out.0.weight", "decoder.mid_block.attentions.0.to_out.0.bias".
Unexpected key(s) in state_dict: "encoder.mid_block.attentions.0.key.bias", "encoder.mid_block.attentions.0.key.weight", "encoder.mid_block.attentions.0.proj_attn.bias", "encoder.mid_block.attentions.0.proj_attn.weight", "encoder.mid_block.attentions.0.query.bias", "encoder.mid_block.attentions.0.query.weight", "encoder.mid_block.attentions.0.value.bias", "encoder.mid_block.attentions.0.value.weight", "decoder.mid_block.attentions.0.key.bias", "decoder.mid_block.attentions.0.key.weight", "decoder.mid_block.attentions.0.proj_attn.bias", "decoder.mid_block.attentions.0.proj_attn.weight", "decoder.mid_block.attentions.0.query.bias", "decoder.mid_block.attentions.0.query.weight", "decoder.mid_block.attentions.0.value.bias", "decoder.mid_block.attentions.0.value.weight".
```
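
The missing and unexpected key names in this traceback show the mismatch: the extracted VAE checkpoint still uses the
older `query`/`key`/`value`/`proj_attn` attention keys, while the `AutoencoderKL` module expects the newer
`to_q`/`to_k`/`to_v`/`to_out` names. As a rough illustration (a toy PyTorch module, not onnx-web code), the same strict
`load_state_dict` failure can be reproduced like this:

```python
# Toy reproduction of the key-name mismatch: the module expects "to_q" keys,
# but the checkpoint provides old-style "query" keys, so strict loading fails.
import torch

class TinyAttention(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.to_q = torch.nn.Linear(4, 4)

module = TinyAttention()
old_checkpoint = {"query.weight": torch.zeros(4, 4), "query.bias": torch.zeros(4)}

try:
    module.load_state_dict(old_checkpoint)  # strict=True by default
except RuntimeError as err:
    print(err)  # Missing key(s): to_q.*; Unexpected key(s): query.*
```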

## Output Image Sizes

You can use this table to figure out the final size for each image, based on the combination of parameters that you are
