

scripts/vsmlrt.py: add support for esrgan janai models #42

Merged · 1 commit · Apr 25, 2023

Conversation

hooke007
Contributor

@hooke007 hooke007 commented Apr 17, 2023

Continue from #41 (comment)

edit:
models - 1.zip

@AkarinVS
Member

there will be more and more waifu2x/esrgan/cugan-like models in the future, and it's impractical to maintain up-to-date support for all such models in the python wrapper, at least not in a timely manner. so my questions are:

  1. what should be the criteria for adding one? and for including it in the official builtin models package?
  2. could we design a generic python API (the current vsmlrt.inference is one such design, but it obviously fails to attract usage) to support using those models without making users wait for vsmlrt.py to update?

@hooke007
Contributor Author

and it's impractical to maintain up-to-date support for all such models in the python wrapper, at least not in a timely manner.

Agreed. Actually this is a draft; I prefer to keep only two or three janai models (one for speed and one for quality). Compared with the previous official ESRGAN models, janai preserves anime's original art style rather than totally rebuilding (or breaking?) it. Of course, from the perspective of serious encoding, there is still a quality gap, but that is exactly what I think the current ESRGAN models lack.

@AkarinVS
Member

After extensive discussions, we propose the following policy for contributing models to vs-mlrt:

  1. We will add a new model package, contrib-models.7z (besides the existing models.7z and ext-models.7z), to host community-contributed onnx files in our releases.
  2. The python wrappers live in one of two files, depending on whether they need to reuse existing model-specific functions in vsmlrt.py (both files will be included in the main release; only the onnx files are packaged separately due to space constraints):
    • If the model has to modify existing code, as in this PR's case, you can modify vsmlrt.py, but please start the model id numbers from 5000, so that the code can tell whether the corresponding ONNX file comes from models/ext-models or contrib-models. If you modify vsmlrt.py, please make sure any code modifications are properly isolated from the rest of the code (to avoid breaking existing models).
    • If it's an entirely new model family for which you have to write your own model-specific functions (e.g. vsmlrt.Waifu2x), add that code to a new contrib.py file (specifics to be released after we've done some reorganization).
  3. For a PR to be accepted, the author must agree to maintain the ONNX model files and the corresponding python wrappers. If a contributed model is broken for some time and its original author fails to respond to issue reports, we reserve the right to remove it.
  4. We will add a rolling nightly prerelease so that users can use contributed models immediately, without waiting for the next release.
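The id-based split in point 2 could be sketched as follows. This is a minimal illustration of the proposed convention, not the actual vsmlrt.py code; the 5000 threshold and package names come from the policy above, while the function and enum names here are hypothetical.

```python
import enum
import os

# Per the proposed policy, community-contributed models get ids >= 5000.
CONTRIB_MODEL_ID_START = 5000

class ESRGANModel(enum.IntEnum):
    # existing official models keep their low ids (illustrative values)
    esrgan_official = 0
    # community-contributed models start at 5000
    janai_example = 5000  # hypothetical contrib model name

def model_package(model_id: int) -> str:
    """Pick which release package a model's ONNX file ships in."""
    if model_id >= CONTRIB_MODEL_ID_START:
        return "contrib-models"
    return "models"  # covers models.7z / ext-models.7z contents

def onnx_path(base_dir: str, model_id: int, filename: str) -> str:
    """Build the on-disk path to a model's ONNX file."""
    return os.path.join(base_dir, model_package(model_id), filename)
```

With this convention, the wrapper never needs a lookup table of which ids are contributed: the numeric range alone determines the package directory.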

What do you think about this proposed policy?

Thanks.

@hooke007
Contributor Author

I agree with this policy.

@hooke007
Contributor Author

hooke007 commented Apr 20, 2023

make the model id number starting from 5000

That would be out of the tuple's index range.

@AkarinVS AkarinVS force-pushed the master branch 4 times, most recently from b0118ae to 4bfb152 Compare April 20, 2023 08:07
scripts/vsmlrt.py — 5 outdated review threads (resolved)
@hooke007

This comment was marked as resolved.

@hooke007
Contributor Author

The only solution I could work out is to move away from the tuple. Now it works properly.
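The tuple-range problem above can be reproduced in a few lines. This is only a sketch of the general pattern, not the actual vsmlrt.py code, and the filenames are hypothetical: a tuple indexed by model id cannot hold sparse ids starting at 5000, while a dict keyed by id can.

```python
# A tuple indexed by model id only works while ids are dense and small:
MODEL_FILES_TUPLE = (
    "esrgan_x4.onnx",    # id 0 (hypothetical filename)
    "esrgan_anime.onnx", # id 1 (hypothetical filename)
)
# MODEL_FILES_TUPLE[5000] raises IndexError: tuple index out of range.

# A mapping keyed by id has no such limit, so contrib ids can start
# at 5000 without padding thousands of empty slots:
MODEL_FILES = {
    0: "esrgan_x4.onnx",
    1: "esrgan_anime.onnx",
    5000: "janai_example.onnx",  # hypothetical contrib model
}
```

The dict also keeps the id-to-file association explicit, so inserting a new contrib id never shifts existing entries.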

@WolframRhodium
Contributor

That's a better solution. LGTM. Please also add your name.

@hooke007
Contributor Author

hooke007 commented Apr 24, 2023

Please also attach your name.

Hmm, where should I add it in the code? I added it at line 470.

scripts/vsmlrt.py — 2 outdated review threads (resolved)
@WolframRhodium WolframRhodium merged commit 9b23e40 into AmusementClub:master Apr 25, 2023
@hooke007 hooke007 deleted the janai branch April 25, 2023 01:49

3 participants