Replies: 1 comment
-
Actually, so far the only thing I've changed is the optimiser, switching it to 8-bit instead of full precision. I haven't done a comparison yet. To be fair, though, there are other models that are better for my use cases, such as RVC, so I haven't really done anything here.
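For anyone wondering what "changing the optimiser to 8-bit" looks like in practice, here is a minimal sketch of the usual drop-in swap, assuming PyTorch plus the bitsandbytes library (the model and hyperparameters are illustrative, not this project's actual training code):

```python
# Sketch: swap a full-precision optimizer for bitsandbytes' 8-bit AdamW.
# The 8-bit variant stores optimizer state (momentum/variance) in int8,
# cutting optimizer memory substantially; model weights stay unchanged.
import torch

try:
    import bitsandbytes as bnb
    OptimClass = bnb.optim.AdamW8bit  # 8-bit optimizer states
except Exception:
    # Fall back to full precision if bitsandbytes is unavailable
    OptimClass = torch.optim.AdamW

model = torch.nn.Linear(16, 16)
optimizer = OptimClass(model.parameters(), lr=1e-4)

# One dummy step to show it is a drop-in replacement
loss = model(torch.randn(4, 16)).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The rest of the training loop is untouched; only the optimizer construction changes.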
-
Good day! Could you please tell me: if you have trained models on the same dataset with 8-bit quantization enabled and disabled, is there any comparison of the results? Do you notice any quality degradation in general?
Also, I wonder whether quantization works for inference as well?
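Worth noting: an 8-bit optimizer only quantizes optimizer state during training, so it has no effect at inference time. Inference-time quantization is a separate mechanism. As one generic illustration (not necessarily what this project supports), PyTorch's dynamic quantization converts linear-layer weights to int8 for CPU inference:

```python
# Sketch: dynamic 8-bit quantization for inference with stock PyTorch.
# Illustrative toy model; the project discussed here may use a different scheme.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(32, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
)
model.eval()

# Replace nn.Linear weights with int8 representations; activations are
# quantized dynamically at runtime.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(1, 32))
print(out.shape)  # torch.Size([1, 8])
```

Whether this helps depends on the model: dynamic quantization mainly speeds up linear-heavy CPU inference, and quality impact has to be measured per model.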