[Feature request] Image-to-Image for super-resolution #1317

Closed
xenova opened this issue Aug 25, 2023 · 0 comments · Fixed by #1492

Comments

xenova (Contributor) commented Aug 25, 2023

Feature request

Running

optimum-cli export onnx -m caidas/swin2SR-realworld-sr-x4-64-bsrgan-psnr out

Results in

Framework not specified. Using pt to export to ONNX.
Traceback (most recent call last):
  File "/usr/local/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/usr/local/lib/python3.10/dist-packages/optimum/commands/export/onnx.py", line 225, in run
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 306, in main_export
    model = TasksManager.get_model_from_task(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/tasks.py", line 1481, in get_model_from_task
    if TasksManager._TASKS_TO_LIBRARY[task.replace("-with-past", "")] == "transformers":
KeyError: 'image-to-image'

This means the image-to-image task is not yet supported. I also tried selecting the masked-im task instead; however, that results in

Framework not specified. Using pt to export to ONNX.
Traceback (most recent call last):
  File "/usr/local/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/usr/local/lib/python3.10/dist-packages/optimum/commands/export/onnx.py", line 225, in run
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 306, in main_export
    model = TasksManager.get_model_from_task(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/tasks.py", line 1513, in get_model_from_task
    model = model_class.from_pretrained(model_name_or_path, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.swin2sr.configuration_swin2sr.Swin2SRConfig'> for this kind of AutoModel: AutoModelForMaskedImageModeling.
Model type should be one of DeiTConfig, FocalNetConfig, SwinConfig, Swinv2Config, ViTConfig.

See here for a list of available swin2sr models: https://huggingface.co/models?other=swin2sr
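
For context, here is a rough sketch (untested; the input/output file names are placeholders, only the checkpoint id comes from the command above) of how this checkpoint is used for super-resolution in plain transformers with PyTorch. An image-to-image ONNX export would essentially need to capture the same pixel_values -> reconstruction signature:

# Minimal Swin2SR super-resolution sketch in transformers (PyTorch).
# File names are illustrative; the checkpoint id is the one from the failing export.
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution

model_id = "caidas/swin2SR-realworld-sr-x4-64-bsrgan-psnr"
processor = AutoImageProcessor.from_pretrained(model_id)
model = Swin2SRForImageSuperResolution.from_pretrained(model_id)

image = Image.open("low_res.png").convert("RGB")        # any low-resolution input
inputs = processor(images=image, return_tensors="pt")   # -> pixel_values

with torch.no_grad():
    outputs = model(**inputs)

# outputs.reconstruction is a float tensor in [0, 1], roughly 4x the (padded) input size for this x4 model
upscaled = outputs.reconstruction.squeeze(0).clamp(0, 1).permute(1, 2, 0).numpy()
Image.fromarray((upscaled * 255.0).round().astype(np.uint8)).save("upscaled.png")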

Motivation

Requested here by @josephrocca. With WebGPU support right around the corner, I'd like to have a variety of vision models ready to build demos with (on top of, for example, SAM, which is now supported).

Your contribution

I will add support to transformers.js once this is ready.
